Role: Azure Data Engineer
Location: Dallas, TX (remote until COVID restrictions lift)
Duration: 12+ month contract
Job Responsibilities
• Design and implement data ingestion from multiple sources into Azure data storage services.
• Implement Azure data services and tools to ingest, egress, and transform data from multiple sources.
• Create ETL pipelines within the Azure ecosystem, using services such as Azure Databricks and Azure Data Factory.
• Build simple to complex pipelines, activities, datasets, and data flows.
• Utilize Azure compute services (Databricks, Data Lake Store, PySpark, Apache Spark, Synapse, Data Factory) to implement transformation logic and stage transformed data.
• Design data ingestion into data modelling services to create cross-domain data models for end-user consumption.
• Implement ETL and related jobs to curate, transform, and aggregate data into source models for end-user analytics use cases.
• Implement scheduling, automation, and monitoring instrumentation for data movement jobs.
• Apply working experience with Azure Monitor and Azure Log Analytics.
• Implement and support Azure DBaaS infrastructure and services.
• Work effectively in Agile/Scrum/Kanban team environments.
Required Skills
• 7-8 years of experience with SQL queries, Data Lake, Databricks, PySpark, Synapse, Data Factory, etc.
• Strong ETL pipeline development experience within the Azure ecosystem (Azure Databricks, Azure Data Factory), including building simple to complex notebooks, pipelines, activities, datasets, and data flows.
• Experience with message ingestion systems such as Kafka or Azure Event Hubs.
• Experience in data modelling; proficient SQL development skills, including writing stored procedures, functions, transformations, etc.
• Experience with PySpark is a must.
• Knowledge of service provisioning, scripted provisioning, blueprint development for data service deployment, etc.
• Excellent oral and written communication skills.
• Strong work ethic, good interpersonal and communication skills, and a high energy level.