Role: Data Engineer
Number of positions: 2
Location: Pleasanton, CA (Remote/Hybrid Model)
Our Enterprise Architecture and Data Services team is looking for a Data Engineer who enables self-service analytics teams (Marketing, Sales, Finance, and Services) to explore outliers and take actions that differentiate us from our competition. You will work closely with other team members, such as solution architects, technical leads, and business analysts, to understand what the business is trying to achieve, move data from source to target, and design optimal data models. You will also be responsible for building and maintaining the data platform. This hands-on technical role demands excellent technical knowledge and the ability to demonstrate industry best practices.
The ideal candidate will have extensive knowledge of data warehousing and data engineering using the latest tools and open-source frameworks.
Responsibilities:
Develop and automate high-performance data processing systems to drive Workday business growth and improve the product experience.
Evangelize high-quality software engineering practices toward building data models and pipelines at scale.
Build reliable, efficient, testable, & maintainable data pipelines.
Design and develop data pipelines using metadata-driven ETL tools and open-source data processing frameworks.
Apply hands-on experience with source version control, continuous integration, and release/change management delivery tools.
Provide production support and resolve high-priority incidents and development coding issues.
Work with cross-functional teams to enable data insights across the data lifecycle.
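For illustration, the metadata-driven pipeline pattern mentioned above can be sketched in a few lines of Python. Everything here (the config keys, step names, and sample records) is a hypothetical toy, not a reference to any specific ETL tool:

```python
# Illustrative sketch only: a toy metadata-driven pipeline. All names
# (drop_nulls, rename, PIPELINE_CONFIG, the sample records) are hypothetical.

def extract(rows):
    """Stand-in for a source extract: returns raw records."""
    return list(rows)

def drop_nulls(rows, column):
    """Keep only records where the given column is populated."""
    return [r for r in rows if r.get(column) is not None]

def rename(rows, mapping):
    """Rename record keys according to a source-to-target mapping."""
    return [{mapping.get(k, k): v for k, v in r.items()} for r in rows]

# The key idea: the pipeline is described as data (metadata), not hard-coded.
PIPELINE_CONFIG = [
    {"step": "drop_nulls", "args": {"column": "amount"}},
    {"step": "rename", "args": {"mapping": {"amount": "revenue_usd"}}},
]

STEPS = {"drop_nulls": drop_nulls, "rename": rename}

def run_pipeline(source_rows, config):
    """Run each configured step in order over the extracted rows."""
    rows = extract(source_rows)
    for spec in config:
        rows = STEPS[spec["step"]](rows, **spec["args"])
    return rows

if __name__ == "__main__":
    raw = [{"id": 1, "amount": 100}, {"id": 2, "amount": None}]
    print(run_pipeline(raw, PIPELINE_CONFIG))
    # → [{'id': 1, 'revenue_usd': 100}]
```

Because the transformation logic lives in configuration, new pipelines can be added or changed without modifying engine code, which is the main appeal of the metadata-driven approach.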
Requirements:
8+ years of experience designing and building scalable and robust data pipelines to enable data-driven decisions for the business.
Experience with very large-scale data warehouses and data engineering projects.
Experience building analytical solutions for Sales, Finance, Product, and Marketing teams.
Prior experience with CRM systems such as SFDC is required.
Experience developing low-latency data processing solutions using technologies such as AWS Kinesis, Apache Kafka, Apache Spark stream processing, and other data integration tools.
Strong experience in one or more programming languages for processing large data sets, such as Python or Scala.
Working experience with SQL and NoSQL databases; proficiency in writing advanced SQL and expertise in SQL performance tuning.
Experience working with AWS cloud data services such as S3, EC2, EMR, Lambda, and Redshift.
Ability to create enterprise data models and star schemas for data consumption.
Extensive experience troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues.
Ability to mentor, guide, and lead associate engineers in the team.
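As an illustration of the star-schema and SQL skills listed above, here is a minimal sketch using Python's built-in sqlite3 module: one fact table keyed to one dimension table, with a typical analytical aggregate over the join. All table and column names are hypothetical examples, not a real enterprise model:

```python
# Illustrative sketch only: a minimal star schema in SQLite.
# Table/column names (dim_product, fact_sales, etc.) are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: descriptive attributes, one row per product.
cur.execute("""
    CREATE TABLE dim_product (
        product_key  INTEGER PRIMARY KEY,
        product_name TEXT,
        category     TEXT
    )
""")

# Fact table: measures at the grain of one sale, keyed to the dimension.
cur.execute("""
    CREATE TABLE fact_sales (
        sale_id     INTEGER PRIMARY KEY,
        product_key INTEGER REFERENCES dim_product(product_key),
        quantity    INTEGER,
        revenue     REAL
    )
""")

cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "License", "Software")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(10, 1, 2, 40.0), (11, 1, 1, 20.0), (12, 2, 5, 500.0)])

# Typical analytical query: aggregate fact measures grouped by a
# dimension attribute, via a join on the surrogate key.
cur.execute("""
    SELECT d.category, SUM(f.revenue) AS total_revenue
    FROM fact_sales AS f
    JOIN dim_product AS d ON d.product_key = f.product_key
    GROUP BY d.category
    ORDER BY d.category
""")
print(cur.fetchall())
# → [('Hardware', 60.0), ('Software', 500.0)]
```

Keeping measures in narrow fact tables and descriptive attributes in dimensions is what lets self-service teams slice the same facts by any attribute without reshaping the data.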
Education: Bachelor's degree in Computer Science or related field or equivalent combination of industry-related professional experience and education.