2+ years of experience developing, implementing, and supporting cloud-based solutions (AWS preferred).
Working knowledge of and hands-on experience with core AWS services, including at minimum Compute (EC2, Lambda, containers, etc.), Databases (RDS, Redshift, DynamoDB, etc.), Storage (EBS, EFS, etc.), Security (IAM, CloudTrail, etc.), and Analytics (Athena, EMR, etc.).
Knowledge of Dataiku (or an equivalent platform) that supports ML operations.
Knowledge of infrastructure provisioning, access management, and resource management in cloud environments, and experience supporting applications deployed in the cloud.
Sound knowledge of Virtual Private Clouds (VPCs), subnets, security groups, endpoints, and other networking concepts.
Knowledge of the AWS Well-Architected Framework, cloud best practices and governance, performance/cost optimization, and automation.
Experience writing code in one or more programming languages (e.g., Python, Node.js, Go).
In-depth knowledge of and hands-on experience with the Linux operating system.
Ability to collaborate with multiple cross-functional teams to deliver quality services/products.
Excellent problem-solving skills and the ability to communicate effectively with clients.
Preferred Qualifications:
An associate-level certification in any of the cloud platforms is preferred; a professional-level certification is an added advantage.
Knowledge of CI/CD and infrastructure-as-code tools such as Jenkins, AWS CodePipeline, and CloudFormation.
Knowledge of microservice architecture, including API Gateway, Kubernetes, etc.
Hands-on experience supporting application deployments on Kubernetes.
Roles and Responsibilities:
Understand end-to-end solution design, following architectural best practices and security, compliance, and regulatory requirements.
Collaborate with various techno-functional teams (Operations, Build, Validation, etc.) to deliver high-quality solutions/services on the cloud.
Implement automation frameworks to reduce manual intervention.
Support the cloud infrastructure, comprising the AWS platform, Dataiku, EMR/EC2, EKS, databases, and file storage (S3).
Allocate and prioritize resources (storage, CPU, network) optimally. Continuously monitor and optimize costs across pipelines, job streams, ML models, etc.