Data Engineer II, Professional Services Strategy & Operations, Data Science & Engineering

15 Aug 2024
Seattle, Washington 98101, USA

Description

Amazon Web Services (AWS) is seeking an experienced Data Engineer to build next-generation data products for the Professional Services - Operations Technology - Data Science and Engineering team. This is a unique opportunity to think big, insist on the highest standards, and invent and simplify data products to scale and accelerate our enterprise customers' journey to the cloud. The team builds advanced analytical products, including AI/ML and generative AI tools, for use by thousands of internal customers.

AWS provides companies of all sizes with an infrastructure web services platform in the cloud. With AWS you can requisition compute power, storage, and many other services, gaining access to a suite of elastic IT infrastructure services as your business demands them. AWS is the leading provider for designing and developing applications for the cloud and is growing rapidly, with millions of customers in over 190 countries. Many of these customers seek help from AWS Professional Services in their journey to a cloud-based IT operating model.

Do you have deep expertise in the end-to-end development of large datasets across a variety of platforms? Are you great at designing data systems and redefining best practices with a cloud-based approach to scalability and automation? In this role, you will be responsible for scaling our existing infrastructure, incorporating new data sources, and building robust data pipelines. In partnership with product and business teams, you will work backwards from our business questions to drive scalable solutions. You will be a technical leader who owns the architecture of our data platform and influences best practices across multiple teams. Above all, you should be passionate about working with data.

Key job responsibilities

In this role, you will have the opportunity to display and develop your skills in the following areas:

Develop and support ETL pipelines with robust monitoring and alarming

Develop data models that are optimized and aggregated for business needs

Develop and optimize data tables using best practices for partitioning, compression, parallelization, etc.

Build robust and scalable data integration (ETL) pipelines using SQL, Python, and AWS services such as Glue, Lambda, and Step Functions

Implement data structures using best practices in data modeling, ETL/ELT processes, and SQL/Redshift

Interface with business customers to gather requirements and deliver complete reporting solutions

Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers

Explore and learn the latest AWS technologies to provide new capabilities and increase efficiencies

Work closely with business owners, analysts, and Business Intelligence Engineers to explore new data sources and deliver new data products

About the team

The ProServe Strategy & Operations team delivers relentless innovation that accelerates smarter decisions for a better Professional Services organization through technology, automation, and advanced analytics. Our mission is to provide AWS with the right information at the right time to make analytically informed decisions about business performance and desired outcomes. The team supports AWS Professional Services' mission by ensuring that our data are trusted and secured via business systems and automation technologies, delivering actionable insights that drive business growth and efficiency.

Basic Qualifications

Bachelor's degree in Computer Science, Engineering, Mathematics, or a related technical discipline

5+ years of industry experience in a data engineering-related field, with a solid background in manipulating, processing, and extracting value from large datasets

Ability to write high-quality, maintainable, and robust code, often in SQL and Python

5+ years of data warehouse experience with Oracle, Redshift, Postgres, Snowflake, etc., with demonstrated strength in SQL, Python, PySpark, data modeling, ETL development, and data warehousing

Extensive experience working with cloud services (AWS, Azure, or GCP) with a strong understanding of cloud databases (e.g., Redshift/Aurora/DynamoDB), compute engines (e.g., EMR/EC2), data streaming (e.g., Kinesis), storage (e.g., S3), etc.

Experience with or exposure to big data technologies (Hadoop, Hive, HBase, Spark, etc.)

Fundamental understanding of version control software such as Git

Experience with CI/CD, automated testing, and DevOps best practices

Preferred Qualifications

Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions

Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Master's degree in Computer Science, Mathematics, Statistics, Economics, or another quantitative field

7+ years of experience in a data engineering-related field in a company with large, complex data sources

Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets

Experience working with AWS (Redshift, S3, EMR, Glue, Airflow, Kinesis, Step Functions)

Hands-on experience in any scripting language (Bash, C#, Java, Python, TypeScript)

Hands-on experience using ETL tools (SSIS, Alteryx, Talend)

Background in non-relational databases or OLAP is a plus

Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations

Strong analytical skills, 5+ years' experience with Python and Scala, and an interest in real-time data processing

Proven success in communicating with users, other technical teams, and senior management to collect requirements, describe data modeling decisions and data engineering strategy

Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $118,900/year in our lowest geographic market up to $205,600/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits.

This position will remain posted until filled. Applicants should apply via our internal or external career site.
