C2H - IT - Sr. Data Engineer (Cloud), Data Warehousing / Big Data, Cloud, Git

10 Dec 2024
Schaumburg, Illinois 60173, USA

Vacancy expired!

Description: Max Bill Rate xxxx

Title- Senior Data Engineer (Cloud)
Location- Remote work or 1600 McConnor Parkway, Schaumburg, IL

What Project/Projects will the candidate be working on while on assignment?:
Create and maintain data pipelines between an on-premises data center, Azure Data Lake Storage, and an Azure Synapse database using Databricks and Apache Spark/Scala.
This role is for a senior data engineer who will join a team responsible for managing a growing cloud-based data ecosystem consisting of a metadata-driven data lake and databases that support real-time analytics, extracts, and reporting. The right candidate will have a solid background in data engineering and several years of experience on a major cloud platform such as Azure.

Is this person a sole contributor or part of a team?:
Team

If so, please describe the team (name of team, size of team, etc.):
Team Name: Alefgard, Size: Approximately 10+

What are the top 5-10 responsibilities for this position? (Please be detailed as to what the candidate is expected to do or complete on a daily basis):
Building and maintaining a data processing framework on Azure using Databricks
Writing code in Apache Spark/Scala
Optimizing existing Databricks Delta Lake tables for change data capture (CDC) performance
Optimizing existing Databricks notebooks and addressing performance concerns
Creating new Databricks notebooks or stand-alone Apache Spark/Scala code as needed
Working with existing on-premises data management tools as required, with a willingness to learn Ab Initio

What software tools/skills are needed to perform these daily responsibilities?
Databricks
Apache Spark
Scala programming
Azure

What skills/attributes are a must have?
Data Warehousing / Big Data Best Practices
Understanding of how best to partition and organize data depending on the technology and use case: 2 years
Experience in regularly dealing with data in hundreds of Terabytes up to 1-2 Petabytes: 2 years
Data engineering experience: 5 years
Cloud platform experience: 2 years
Version Control (Git or equivalent): 2 years

We cannot provide sponsorship upon conversion

What skills/attributes are nice to have?
Data Integration Tools (Spark/Databricks or equivalent): 2 years
Scripting (Linux/Unix Shell scripting or equivalent): 2 years
Ab Initio experience
Netezza Experience

Where is the work to be performed? (Please list preferred Client facility, if other please specify i.e. remote work, rural, etc.)
Remote work or 1600 McConnor Parkway, Schaumburg, IL

What are the work hours? (ex. 9am-5pm, day/night shifts, rotating shifts, etc)
This is a software engineering position; after-hours deployments will be required on a rotational basis.

What does the interview process look like?
How many rounds?
Round 1 (video Webex call with a few of our data engineers): discussion of the resume with technical questions.
Round 2 (video Webex call with the hiring manager(s))

Job Details

  • ID
    JC6529247
  • State
    Illinois
  • City
    Schaumburg
  • Job type
    Contract
  • Salary
    N/A
  • Hiring Company
    Tanson Corp
  • Date
    2020-12-10
  • Deadline
    2021-02-08
  • Category
