Software Engineer II - Global Data Warehouse

13 Jun 2024
San Francisco, California 94103, USA

Data underpins our products, enabling intelligent decision making and improved user experiences. Leveraging the latest ML, Big Data, data visualization, and NLP technologies, the Product Platform team works at the intersection of engineering and data science to enhance our services and deliver actionable insights.

As a Software Engineer II at Uber, you will play a meaningful role in scaling the global data warehouse to power analytics for teams across Uber. You are a self-starter with industry experience in SQL, data modeling, and ETL pipeline design. You have a solid understanding of how to implement ETL pipelines in Hive or another MPP database architecture, and you are comfortable coding in Python, Java, or Scala. Peers describe you as a trusted team member who is skilled at investigating, root-causing, and independently solving complex data problems in a timely manner. You are able to balance multiple simultaneous projects with limited supervision. You are detail-oriented and passionate about testing your code and writing excellent documentation. You regularly perform code reviews and help define code quality standards for your team. Does this describe you? If so, we would love to hear from you!

Basic Qualifications
  • BS or MS in Computer Science or a related technical field, or equivalent experience.
  • 2+ years of experience analyzing business metrics, investigating data problems, and improving data quality.
  • 2+ years of experience writing and deploying code in one of the following programming languages: Python, Scala, or Java.
  • Proven record of successful partnerships with product and engineering teams resulting in timely delivery of impactful data products.

Preferred skills:

  • Familiarity with Kimball's data warehouse lifecycle and dimensional data modeling.
  • Proven familiarity with industry-leading Big Data ETL best practices.
  • Experience with real-time data ingestion and stream processing.
  • 1+ years of hands-on experience using Hadoop, Hive, Presto, Spark, or another MPP database system such as AWS Redshift or Teradata; proficient in writing and analyzing SQL queries.
