Senior DevOps Engineer (Big Data & Kubernetes)

25 Jan 2024
Cupertino, California 95014, USA

Vacancy expired!

We are looking for a Senior DevOps / Big Data Engineer with infrastructure coding experience who can take the lead on a critical new project: scaling and building out a Kubernetes-backed big data platform. The platform will be a key component of all our data pipelines, which run daily over petabytes of data under high performance requirements and use bleeding-edge technologies. Our customer is one of the world's largest technology companies, based in Silicon Valley with operations all over the world. The ideal candidate is a driven, enthusiastic, and technology-proficient DevOps/software engineer who is eager and able to drive, scale, and serve as the de facto Kubernetes expert for a data platform that will interconnect 15+ teams and several hundred engineers.

Responsibilities:
  • Resolve bottlenecks and scale up infrastructure on Kubernetes supporting dozens of use-cases for hundreds of engineers
  • Work independently and directly with stakeholders, architects, testers, developers, and analysts to identify issues and define solutions
  • Suggest, design, and implement scalable solutions for running new platform components on Kubernetes
  • Own end-to-end development for specific components together with key developers
  • Contribute to project discussions and present results to key stakeholders
  • Write design documentation; present and justify design decisions
  • Work within a team of industry experts on cutting-edge Big Data technologies to develop solutions for deployment at massive scale

Requirements:
  • 8+ years of experience working in DevOps, at least half of it in Big Data
  • 6+ years of experience working with Kubernetes
  • 3+ years of experience with event-messaging systems, Kafka in particular (see the sketch after this list)
  • Scala coding experience; strong Java and/or Python as an alternative
  • Deep knowledge of the challenges of scaling big data applications on Kubernetes and how to resolve them
  • Deep knowledge of the challenges of connecting and running many disparate big data technologies on Kubernetes
  • Strong communication skills
  • Experience running DevOps for big data pipelines at terabyte/petabyte scale
  • Experience working with HDFS
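
To give a concrete flavor of the Kafka and Scala work this role involves, here is a minimal consumer sketch using the standard kafka-clients library. The broker address, consumer group, and topic name are hypothetical placeholders, not details of the actual platform.

    import java.time.Duration
    import java.util.Properties
    import scala.jdk.CollectionConverters._

    import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
    import org.apache.kafka.common.serialization.StringDeserializer

    object PipelineEventConsumer {
      def main(args: Array[String]): Unit = {
        // Hypothetical broker, group, and topic; placeholders only.
        val props = new Properties()
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092")
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "pipeline-monitor")
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)

        val consumer = new KafkaConsumer[String, String](props)
        consumer.subscribe(List("pipeline-events").asJava)

        try {
          while (true) {
            // Poll for a batch of records; each record is one pipeline event.
            val records = consumer.poll(Duration.ofMillis(500))
            records.asScala.foreach { r =>
              println(s"partition=${r.partition} offset=${r.offset} value=${r.value}")
            }
          }
        } finally consumer.close()
      }
    }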

What will be a big plus:
  • Understanding of Spark, in particular on Kubernetes (see the sketch after this list)
  • Understanding of scheduling technologies such as Airflow and Azkaban
  • Understanding of distributed data stores used in streaming pipelines, such as Cassandra
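
As an illustration of the Spark-on-Kubernetes angle, below is a minimal Structured Streaming sketch in Scala that reads a Kafka topic and lands it on HDFS. The broker, topic, and paths are again placeholders, and the job assumes the spark-sql-kafka-0-10 connector is on the classpath; submitted with a k8s:// master URL, the same code runs unchanged on Kubernetes.

    import org.apache.spark.sql.SparkSession

    object KafkaToHdfs {
      def main(args: Array[String]): Unit = {
        // The same code runs on YARN or Kubernetes; only the --master URL
        // passed to spark-submit changes (e.g. k8s://https://<apiserver>).
        val spark = SparkSession.builder
          .appName("kafka-to-hdfs")
          .getOrCreate()

        // Placeholder broker, topic, and paths.
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "kafka:9092")
          .option("subscribe", "pipeline-events")
          .load()
          .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

        // Write each micro-batch to HDFS as Parquet, with checkpointing
        // so the stream can recover after a pod restart.
        events.writeStream
          .format("parquet")
          .option("path", "hdfs:///data/pipeline-events")
          .option("checkpointLocation", "hdfs:///checkpoints/pipeline-events")
          .start()
          .awaitTermination()
      }
    }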

We offer:
  • Opportunity to work on bleeding-edge projects
  • Work with a highly motivated and dedicated team
  • Competitive salary
  • Flexible schedule
  • Medical insurance
  • Benefits program
  • Corporate social events

NB:
Placement and staffing agencies need not apply. We do not work with C2C at this time, and we are currently unable to process H1B transfers.


About us:

Grid Dynamics is an engineering services company known for transformative, mission-critical cloud solutions for the retail, finance, and technology sectors. We have architected some of the busiest e-commerce services on the Internet and have never had an outage during peak season. Founded in 2006 and headquartered in San Ramon, California, with offices throughout the US and Eastern Europe, we focus on big data analytics, scalable omnichannel services, DevOps, and cloud enablement.

Job Details

  • ID: JC8553908
  • State: California
  • City: Cupertino
  • Job type: Permanent
  • Salary: $160,000 - $180,000
  • Hiring Company: Grid Dynamics International, Inc.
  • Date: 2021-01-22
  • Deadline: 2021-03-23