Designs and develops technical solutions for our analytics platform on the AWS cloud that align with architectural and quality standards and integrate effectively with adjacent and tertiary solutions and technologies.
Independently identifies, defines, directs, and/or performs analysis to resolve complex, first-time issues in their area(s) of expertise.
Ensures knowledge sharing within the team and across teams.
Brings technical knowledge into the organization from external sources and links new, emerging technologies with business needs.
Implements a full DevOps culture of build and test automation with continuous integration and deployment.
Required Job Qualifications
Linux: 5 or more years of Unix systems engineering experience with Red Hat Linux, CentOS, or Ubuntu.
Big Data: 3 years of operational experience with the Hadoop stack (Spark, Hive, Ranger, Sentry, HDFS).
AWS: Working experience with and good understanding of the AWS environment, including VPC, EMR, EC2, EBS, S3, RDS, SQS, CloudFormation, Lambda, and HBase.
Containers: Hands-on experience with container technologies such as Docker and Kubernetes.
Programming: Experience programming microservices or APIs with Java, Python, or Scala.
AWS EMR: Experience in Amazon EMR cluster configuration.
DevOps Automation: Experience with orchestration/configuration management and CI/CD tools (Jenkins, CircleCI, Atlantis, Puppet, Troposphere, Terraform, Serverless, etc.).
Networking: Working knowledge of TCP/IP networking, SMTP, HTTP/HTTPS, load balancers (ELB, HAProxy), NGINX, and high-availability architecture.
Monitoring: Hands-on experience with monitoring tools such as AWS CloudWatch, SignalFx, and Splunk.
Version Control: Working experience with one or more version control platforms (Bitbucket, GitHub).
ETL: Experience with a job scheduler such as Airflow or Data Pipeline; Airflow experience preferred.