Must Have:
Systematic problem-solving approach and knowledge of algorithms, data structures and complexity analysis.
Familiarity with different data serialization formats such as Parquet, Avro, etc.
Understanding of big data infrastructure tools and software such as Spark, YARN, Hadoop, MPP, Kafka, HDFS and Hive.
Experience with a relational database such as Oracle.
Strong hands-on programming skills in Scala, Shell scripting and SQL.
Grit, drive and a strong sense of ownership coupled with an appetite for collaboration.
Perform source-system data analysis to manage source-to-target data mapping.
Perform migration and testing of data from one core system to another.
Perform data migration audit, reconciliation and exception reporting.
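To illustrate the reconciliation and exception-reporting duty above, here is a minimal Scala sketch that compares keyed row checksums between a source and a target system and reports exceptions. It is an assumption-laden example, not the employer's actual process: a real pipeline would typically run this over Spark DataFrames against Hive/HDFS or Oracle tables, and the object and function names are hypothetical.

```scala
// Hypothetical reconciliation sketch: compare source vs. target row checksums
// keyed by primary key, and classify every discrepancy for exception reporting.
object Reconcile {
  // Returns (missing-in-target, extra-in-target, checksum-mismatch) key lists.
  def reconcile(source: Map[String, Long],
                target: Map[String, Long]): (Seq[String], Seq[String], Seq[String]) = {
    val missing  = (source.keySet -- target.keySet).toSeq.sorted  // rows lost in migration
    val extra    = (target.keySet -- source.keySet).toSeq.sorted  // unexpected rows in target
    val mismatch = source.keySet.intersect(target.keySet)
      .filter(k => source(k) != target(k)).toSeq.sorted           // rows whose checksum changed
    (missing, extra, mismatch)
  }
}
```

In practice the checksums would come from hashing each row's column values on both systems, so the three result lists feed directly into an audit and exception report.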