ETL Developer with AWS, Java and Spark experience

02 Jun 2024
Philadelphia, Pennsylvania 19160, USA

Vacancy expired!

Hi,
Hope you are doing well.
Greetings from Noralogic.

Please find the job description below and let me know if it matches your profile. If you have any questions, you can call me back on or email me at .


Role: ETL Developer with AWS, Java and Spark experience
Location: Philadelphia, PA
Position Type: Long Term Contract
Responsibilities:

  • Hands-on architecture and development of ETL pipelines using our internal framework, written in Apache Spark and Java
  • Hands-on development consuming Kafka, REST APIs, or other streaming sources with Spark and persisting the data in a graph or other NoSQL database (see the sketch after this list)
  • Implement data quality (DQ) metrics and controls for data in a big data environment
  • Interpret data, analyze results using statistical techniques, and provide ongoing reports
  • Develop and implement databases, data collection systems, data analytics, and other strategies that optimize statistical efficiency and quality
  • Acquire data from primary or secondary data sources and maintain databases/data systems
  • Identify, analyze, and interpret trends or patterns in complex data sets
  • Filter and clean data by reviewing reports and performance indicators to locate and correct problems
  • Work with management to prioritize business and information needs
  • Identify and define new process-improvement opportunities; provide architectural and best-practice suggestions to improve the current setup
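
To make the Spark/Kafka responsibilities above concrete, here is a minimal sketch (in Java, the role's primary language) of a Spark Structured Streaming job that consumes a Kafka topic and persists each micro-batch to Cassandra. The broker address, topic, keyspace, and table names are hypothetical, and the DataStax spark-cassandra-connector is assumed to be on the classpath; this illustrates the general technique, not the employer's internal framework.

    import org.apache.spark.api.java.function.VoidFunction2;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;

    public class KafkaToCassandraJob {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder()
                    .appName("kafka-to-cassandra-etl")
                    .config("spark.cassandra.connection.host", "cassandra1") // hypothetical host
                    .getOrCreate();

            // Consume a Kafka topic as a streaming DataFrame (broker/topic are hypothetical).
            Dataset<Row> raw = spark.readStream()
                    .format("kafka")
                    .option("kafka.bootstrap.servers", "broker1:9092")
                    .option("subscribe", "events")
                    .load();

            // Kafka delivers key/value as binary; cast them to strings for downstream parsing.
            Dataset<Row> parsed = raw.selectExpr(
                    "CAST(key AS STRING) AS key",
                    "CAST(value AS STRING) AS value",
                    "timestamp");

            // Persist each micro-batch to a NoSQL store; the write format below assumes
            // the DataStax spark-cassandra-connector, with hypothetical keyspace/table names.
            VoidFunction2<Dataset<Row>, Long> writeBatch = (batch, batchId) -> batch.write()
                    .format("org.apache.spark.sql.cassandra")
                    .option("keyspace", "etl")
                    .option("table", "events")
                    .mode("append")
                    .save();

            StreamingQuery query = parsed.writeStream()
                    .foreachBatch(writeBatch)
                    .option("checkpointLocation", "/tmp/checkpoints/kafka-to-cassandra")
                    .start();

            query.awaitTermination();
        }
    }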
Qualifications:

  • 6+ years of experience architecting and implementing complex ETL pipelines, preferably with the Spark toolset
  • 4+ years of experience with Java, particularly within the data space

Required Technical Skills:
  • 4+ years of hands-on Spark/Java development experience; Kafka and Spark Streaming are a must
  • 3+ years of hands-on development experience with Hadoop-ecosystem tools (Hive, Parquet, Sqoop, Presto, DistCp) is a must
  • 4+ years of development experience in big data on the cloud, specifically AWS (S3, Glue; see the sketch after this list)
    • AWS certification is preferable: AWS Developer/Architect/DevOps/Big Data
  • 3+ years of experience with NoSQL implementations (MongoDB, Cassandra, graph databases)
  • 3+ years of experience with SQL
  • 2+ years of experience with UNIX/Linux, including basic commands and shell scripting
  • 2+ years of experience with Agile engineering practices
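
As a small illustration of the AWS and SQL items above, the sketch below reads Parquet data from S3 with Spark and runs a SQL aggregation over it. The bucket, prefixes, and column names are hypothetical, and the hadoop-aws (s3a) connector plus AWS credentials are assumed to be configured.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class S3ParquetReport {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("s3-parquet-report")
                    .getOrCreate();

            // Read Parquet input from S3 (bucket and prefix are hypothetical).
            Dataset<Row> orders = spark.read()
                    .parquet("s3a://example-bucket/raw/orders/");

            // Register a temp view so the aggregation can be expressed in plain SQL.
            orders.createOrReplaceTempView("orders");
            Dataset<Row> daily = spark.sql(
                    "SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue "
                  + "FROM orders GROUP BY order_date");

            // Write the result back to S3 as Parquet, partitioned by date.
            daily.write()
                    .mode("overwrite")
                    .partitionBy("order_date")
                    .parquet("s3a://example-bucket/curated/daily_orders/");

            spark.stop();
        }
    }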

Additional Requirements:
  • Technical expertise in data models, database design and development, data mining, and segmentation techniques
  • Good experience writing complex SQL and ETL processes
  • Excellent coding and design skills, particularly in Java, Scala, and/or Python
  • Experience working with large data volumes, including processing, transforming, and transporting large-scale data
  • Excellent working knowledge of Apache Hadoop, Apache Spark, Kafka, Scala, Python, etc.
  • Strong analytical skills, with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
  • Good understanding and use of algorithms and data structures
  • Good experience building reusable frameworks
  • Experience working in an Agile team environment
  • Excellent communication skills, both verbal and written


Ujjwal Tiwari
Talent Acquisition Specialist



