Seeking a Sr. Machine Learning Architect with hands-on machine learning implementation experience. Reporting to the SVP, Engineering, this role is critical in leading the organizational transformation from traditional transactional databases and warehousing to cloud data platforms, deep learning, and advanced analytics. You will not only work on transformative, cutting-edge technologies, but also deliver working software that produces real results. You will collaborate with various teams, including product, user experience, portals, operations, core, and systems. While our teams are small enough to make fast decisions, our audience and reach are large enough that your work and your voice will have an immediate and tremendous impact.
- You will help define the strategies, roadmaps, and solutions in the analytics and deep learning space and evangelize the vision across the organization.
- You will be primarily responsible for the design, execution, and delivery of exploratory concepts, rapid prototypes, and pilot solutions that test hypotheses and incubate transformative new capabilities by applying machine learning and data mining techniques, performing statistical analysis, and building high-quality prediction systems.
- You will be hands-on. You will implement and deploy these solutions as an enterprise-grade technology stack and be responsible for all aspects of the solution: data pipelines, model generation, and training and inference engines.
- Your solutions will span the gamut from highly available data lakes to high-performance compute clusters, and from storage and networking infrastructure to platforms and microservices.
- You will help build an internal team, including recruiting new members and coaching and mentoring existing ones.
- Bachelor’s/Master’s degree in Machine Learning, Data Mining, Computer Science, Statistical Inference, Mathematical Modeling, or a similar field, with 10-12 years of strong, demonstrable SDLC experience; a minimum of 5 of those years should be direct experience in the machine learning and big data space.
- Experience implementing at least two Machine Learning pipelines in production.
- Deep hands-on technical ability. Excellent understanding of machine learning techniques and algorithms. Deep understanding of statistics and probability.
- Experience with Hortonworks or Cloudera distributions.
- Very strong written and oral communication skills; must be able to present complex ideas in an understandable way.
- Prior experience working with the ELK stack, including Elasticsearch.
- Proficiency with one or more of Python, Java, or Scala, preferably in a Linux environment.
- Experience with deep learning frameworks such as MXNet, Caffe/Caffe2, Gluon, TensorFlow, Theano, and Keras, as well as data science libraries such as Spark ML, Pandas, NumPy, and scikit-learn.
- Experience with distributed computing frameworks, containers, and microservices (YARN, Kubernetes, AWS ECS, Mesos).
- Experience with at least two NoSQL variants (Hive, MongoDB, Cassandra, Impala), and expertise with Kafka and MLlib.
- Excellent understanding of algorithms and data structures for optimization.
- Prior experience with traditional RDBMSs (Oracle, SQL Server, etc.) and/or large-scale traditional data warehousing is a must (4-5 years preferred).
- U.S. citizen/green card holder required.