Data Engineer- Bigdata :: Seattle, LA or NY

03 Mar 2024
Seattle-Tacoma, Washington 98101, USA

Vacancy expired!

Location:
The client is open to Seattle, LA, or NY, where they have tech centers. With Covid, they are fine with remote candidates for now, with the caveat that the role could eventually move to one of these locations post-pandemic.

Work
  • Design and develop the data platform to efficiently and cost-effectively address various data needs across the business.
  • Build software across the entire data platform, including event-driven data processing, storage, and serving through scalable, highly available APIs, using cutting-edge technologies.
  • Perform exploratory and quantitative analytics, data mining, and discovery.
  • Make the data platform more scalable, resilient, and reliable, then work across the team to put your ideas into action.
  • Implement and refine robust data processing, REST services, RPC (in and out of HTTP), and caching technologies.
  • Work closely with data architects, stream processing specialists, API developers, the DevOps team, and analysts to design systems that scale elastically.
  • Help build and maintain foundational data products such as (but not limited to) various conformed datasets, Consumer 360, and data marts.
  • Ensure data quality by implementing reusable data quality frameworks.
  • Build processes and tools to maintain machine learning pipelines in production.
  • Develop and enforce data engineering, security, and data quality standards through automation.

Essential requirements
  • Bachelor’s degree in computer science or a similar discipline.
  • 5+ years of experience in software engineering
  • 2+ years of experience in data engineering.
  • Ability to work in a fast-paced, high-pressure, agile environment.
  • Expertise in at least a few programming languages, such as Java, Scala, or Python.
  • Expertise in building and managing large-volume data processing platforms (both streaming and batch) is a must.
  • Expertise in stream processing systems such as Kafka, Kinesis, Pulsar, or similar.
  • Expertise in building microservices and managing containerized deployments, preferably using Kubernetes.
  • Expertise in distributed data processing frameworks such as Apache Spark, Flink, or similar.
  • Expertise in SQL, Spark SQL, Hive, etc.
  • Expertise in OLAP databases such as Snowflake or Redshift.
  • NoSQL (Apache Cassandra, DynamoDB, or similar) is a huge plus.
  • Experience in operationalizing and scaling machine learning models is a huge plus.
  • Experience with a variety of data tools and frameworks (e.g., Apache Airflow, Druid) is a huge plus.
  • Experience with analytics tools such as Looker or Tableau is preferred.
  • Cloud (AWS) experience is preferred.
  • Ability to learn and teach new languages and frameworks.
  • Excellent data analytical skills
  • Direct-to-consumer digital business experience is preferred.
  • Digital advertising tech experience is a huge plus.
