Seeking a Machine Learning Architect to lead the organizational transformation from traditional transactional databases and warehousing to cloud data platforms, deep learning, and advanced analytics. You'll work with transformative, cutting-edge technologies while delivering working software that produces real results. Our philosophy is think big, start small, act fast – we value open source technologies, solve challenging and unique problems, and innovate quickly. We work in a distinctive onsite-offshore model and encourage creativity from our architects and engineers every step of the way. This position reports directly to our SVP, Engineering in beautiful San Diego, CA.
By the very nature of the role, you will work with teams across the company, including product, user experience, portals, operations, core, and systems. While our teams are small enough to make fast decisions, our audience and reach are large enough that your work and your voice will have an immediate and tremendous impact.
- Define the strategies, roadmaps, and solutions in the analytics and deep learning space, and evangelize the vision across the organization.
- Own the design, execution, and delivery of exploratory concepts, rapid prototypes, and pilot solutions that test hypotheses and incubate transformative new capabilities by applying machine learning and data mining techniques, performing statistical analysis, and building high-quality prediction systems.
- You will be a hands-on ML expert. You will implement and deploy these solutions as an enterprise-grade technology stack and be responsible for every aspect of the solution – data pipelines, model generation, and training and inference engines.
- Your solutions will span the full gamut, from highly available data lakes to high-performance compute clusters, from storage and networking infrastructure to platforms and microservices.
- You will help build an internal team, recruiting new members and coaching and mentoring existing ones.
- Master's degree in Machine Learning, Data Mining, Computer Science, Statistical Inference, Mathematical Modeling, or a similar field, with 10-12 years of strong, demonstrable SDLC experience; a minimum of 5 of those years should be direct experience in machine learning and big data.
- Experience implementing at least two Machine Learning pipelines in production.
- Deep hands-on technical ability, with an excellent understanding of machine learning techniques and algorithms and a deep understanding of statistics and probability.
- Experience with Hortonworks or Cloudera distributions.
- Very strong written and oral communication skills – must be able to present complex ideas in an understandable way.
- Prior experience with the ELK stack (Elasticsearch, Logstash, Kibana).
- Proficiency in one or more of Python, Java, or Scala, preferably in a Linux environment.
- Experience with deep learning frameworks such as MXNet, Caffe/Caffe2, Gluon, TensorFlow, Theano, or Keras, and with data science libraries such as Spark ML, pandas, NumPy, and scikit-learn.
- Experience with distributed computing frameworks, containers, and microservices (YARN, Kubernetes, AWS ECS, Mesos).
- Experience with at least two NoSQL or SQL-on-Hadoop variants (e.g., Hive, MongoDB, Cassandra, Impala), plus expertise with Kafka and MLlib.
- Excellent understanding of algorithms and data structures for optimization.
- Prior experience with traditional RDBMSs (Oracle, SQL Server, etc.) and/or large-scale traditional data warehousing is a must (4-5 years preferred).
- Hands-on machine learning implementation experience is required.
- Must be a U.S. citizen or green card holder.