1. 4+ years of experience in relevant technologies such as Big Data, Hive, HiveQL, and Spark
2. Good working exposure to Spark Streaming (Spark APIs, RDDs, DataFrames, Spark SQL) with Scala and Python
3. Understanding of the Apache Spark ecosystem
4. Good understanding of distributed data processing (HDFS with Spark)
5. Hands-on experience with data ingestion, data processing, and transformation in a Hadoop environment, leveraging Spark, Scala, and Python
Nice to have:
Kafka event processing
Knowledge of SAS algorithms and experience converting them to Python scripts
Senior presence and the ability to operate at different levels (e.g., research/analysis, thought partnership with senior stakeholders, presentations, interactions with partners and clients)
Exceptional strategic skills; the ability to conceptualize and execute on the business and technology vision
Original and innovative thinking, creativity
Rigorous attention to detail and focus on execution