(CAN) GT – Senior Software Developer – Machine Learning – Conversational AI
City : Toronto
Category : Regular/Permanent Technology
Industry : Retail
Employer : Walmart Canada
Position Summary...The Cortex team builds the core A.I. platform powering the vision of delivering the world’s best intelligent personal assistants to Walmart's customers, accessible via natural voice commands, text messages, rich UI interactions, and a mix of all of the above via multi-modal experiences.
We believe /conversations/ are a natural and powerful user interface for interacting with technology, one that enables richer customer experiences -- both online and in-store. We are designing and building the next generation of Natural Language Understanding (NLU) services that other teams can easily integrate and leverage to build rich experiences: from pure voice and text shopping assistants (Siri, Google Assistant, [[https://texttoshop.walmart.com/][Text to Shop]]), to customer care channels, to mobile apps with rich, intertwined, multi-modal interaction modes ([[https://apps.apple.com/us/app/me-walmart/id1459898418][Me@Walmart]]).
What you'll do...
Interested in diving in?
We need solid engineers with the talent and expertise required to
design, build, improve and evolve our capabilities in at least some of
the following areas:
Service oriented architecture in charge of exposing our NLU capabilities at scale, and enabling increasingly sophisticated model orchestration.
Since the service takes in traffic for a large set of Walmart customers (that is 80% of American households!), you will get to solve non-trivial challenges in terms of service scalability and availability.
You will design and build the primitives to efficiently orchestrate model-serving microservices, taking into account their dependencies, and improving the /combined/ latency and robustness of such microservices (e.g. fan out in parallel to N services for a single request, and reply with whichever gives the fastest answer).
You will also bake-in functionality which can drive improved machine learning modeling and experimental design, such as A/B testing.
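The "reply with whichever gives the fastest answer" pattern above can be sketched in a few lines of asyncio. Everything here is illustrative -- the service names, latencies, and responses are made up, not the platform's actual interfaces:

```python
import asyncio
import random

async def call_model(name: str) -> str:
    # Stand-in for a model-serving microservice with variable latency.
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return f"answer from {name}"

async def fan_out(services: list[str]) -> str:
    # Fan out to all N services in parallel for a single request,
    # take the first reply to complete, and cancel the stragglers.
    tasks = [asyncio.create_task(call_model(s)) for s in services]
    done, pending = await asyncio.wait(
        tasks, return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:
        task.cancel()
    return done.pop().result()

result = asyncio.run(fan_out(["intent-v1", "intent-v2", "intent-v3"]))
print(result)
```

In a real orchestrator the same primitive extends naturally to quorum replies, per-service deadlines, and hedged retries.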
Model serving and operations
There is a constant tension between model improvements (more computation) and model serving latency. So, we are always on a quest to crunch more numbers while preserving our SLAs and controlling operational costs.
You will guide our efforts to always find the best tradeoffs in terms of architecture, tooling (TensorFlow Serving? / ONNX? / Triton?) and infrastructure (CPU? / GPU?, GCP? / Azure?) for model serving -- based on the latest model developments and product requirements.
In particular, you will drive principled and scientific load-testing efforts to clearly identify the tradeoffs at hand, and tune/optimize the model-serving stack.
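As a rough illustration of what a principled load test measures, the sketch below records per-request latency and summarizes the percentiles that typically back an SLA. `serve_request` is a hypothetical stand-in for a call to a model-serving endpoint, and the latency range is invented:

```python
import random
import statistics
import time

def serve_request() -> None:
    # Hypothetical stand-in for a call to the model-serving endpoint.
    time.sleep(random.uniform(0.001, 0.003))

def measure_latencies(n_requests: int) -> dict:
    # Time each request and summarize tail latency, since SLAs are
    # usually stated in percentiles (p50/p95/p99), not averages.
    samples = []
    for _ in range(n_requests):
        start = time.perf_counter()
        serve_request()
        samples.append((time.perf_counter() - start) * 1000)  # ms
    q = statistics.quantiles(samples, n=100)
    return {"p50": q[49], "p95": q[94], "p99": q[98]}

stats = measure_latencies(200)
print(stats)
```

Repeating this measurement at increasing concurrency levels is what exposes the latency/throughput/cost tradeoffs across serving stacks and hardware.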
Tooling, infrastructure and pipelines for reproducible workflow and models, enabling rapid innovation across the entire product lifecycle.
You will author and maintain pipelines that safely build and deploy models to production via continuous deployment.
You will achieve scalable and efficient resource management capabilities (cloud infrastructure).
You will provide robust and built-in diagnostics for quality control throughout.
You will integrate -- or build -- labeling tools which can seamlessly integrate at the heart of our conversation data store (GCP, BigQuery) and intertwine multiple labeling sources of various confidence levels.
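One simple way to intertwine labeling sources of different confidence levels is confidence-weighted voting. The schema, sources, and weights below are hypothetical, not the team's actual data model:

```python
from collections import defaultdict

def merge_labels(annotations: list[dict]) -> dict:
    # For each utterance, sum the confidence mass behind each proposed
    # label across all sources, then keep the highest-scoring label.
    scores = defaultdict(lambda: defaultdict(float))
    for ann in annotations:
        scores[ann["utterance_id"]][ann["label"]] += ann["confidence"]
    return {
        uid: max(label_scores, key=label_scores.get)
        for uid, label_scores in scores.items()
    }

# Illustrative records from three sources of varying reliability.
annotations = [
    {"utterance_id": "u1", "label": "add_to_cart", "confidence": 0.9},  # human
    {"utterance_id": "u1", "label": "search", "confidence": 0.3},       # heuristic
    {"utterance_id": "u1", "label": "add_to_cart", "confidence": 0.6},  # model
]
print(merge_labels(annotations))  # → {'u1': 'add_to_cart'}
```

In practice the same aggregation would run as a query over the conversation data store rather than in-process Python.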
Come at the right time, and you will have an enormous opportunity to
make a massive impact on the design, architecture, and implementation
of an innovative, mission-critical product -- used every day, by
people you know, and loved by customers.
As part of the emerging tech group, you will also have the additional
opportunity of building demos and proofs of concept, creating white
papers, writing blogs, etc.
Here are some of our team publications:
- Blog Posts
  - "Building a conversational assistant platform for voice-enabled
  - "Using Context to Improve Intent Classification in Walmart’s
  - "Making Walmart’s Shopping Assistant Proactive"
- Papers & Talks
  - Semantic Representation and Parsing for Voice eCommerce (Paper and
    Talk) - Vivek Kaul, Shankara B Subramanya, R2K Workshop at KR 2018
  - Knowledge Graphs for AI-Powered Shopping Assistants - Ghodrat
    Aalipour, Mohammed Samiul Saeef, GraphConnect 2020
  - Improving Intent Classification in an E-commerce Voice Assistant
    by Using Inter-Utterance Context (Paper + Talk) - Arpit Sharma,
    e-Commerce & NLP at ACL 2020
Solid data skills, sound computer-science fundamentals, and strong programming experience.
Deep hands-on technical expertise in full-stack development.
Programming experience with at least one modern language with an efficient runtime, such as Scala, Java, C++, or C#.
Experience with at least one relational database technology such as MySQL, PostgreSQL, Oracle, or MS SQL.
Some level of fluency in Python (lingua-franca of our data-scientists).
Understanding of the challenge of distributed data-processing at scale.
Deal well with ambiguous/undefined problems; ability to think abstractly.
Ability to take a project from scoping requirements through actual launch.
A continuous drive to explore, improve, enhance, automate, and optimize systems and tools.
Capacity to apply scientific analysis and mathematical modeling techniques to predict, measure and evaluate the consequences of designs and the ongoing success of our platform.
Excellent oral and written communication skills.
Bachelor’s degree or certification in Computer Science, Engineering, Mathematics, or any other related field.
Large scale distributed systems experience, including scalability and fault tolerance.
Experience taking a leading role in building complex data-driven software systems successfully delivered to customers.
Relentless focus on scalability, latency, performance robustness, and cost trade-offs -- especially those present in highly virtualized, elastic, cloud-based environments.
Exposure to cloud infrastructure, such as OpenStack, Azure, GCP, or AWS, as well as infrastructure management technologies (Docker, Kubernetes).
Experience building/operating highly available systems for data extraction, ingestion, and massively parallel processing of large data sets. In particular, experience building large-scale data pipelines using big data technologies (e.g. Spark / Kafka / Cassandra / Hadoop / Hive / BigQuery / Presto / Airflow).
Hands-on expertise in many disparate technologies, typically ranging from front-end user interfaces through to back-end systems and all points in between.
Familiarity with Machine Learning concepts & processes.
Masters or PhD in Computer Science, Physics, Engineering, Math, or equivalent.
Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications.
Age – 16 or older
Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications.
Walmart will accommodate the disability-related needs of applicants and associates as required by law.
Primary Location...1940 Argentia Rd, Mississauga, ON L5N 1P9, Canada