Your Responsibilities:

- Contribute to the development of the Everywhere Inference platform, a Kubernetes-based solution enabling scalable and portable AI inference across a wide range of environments.
- Design and implement APIs and developer tools to simplify deployment, management, and monitoring of AI applications.
- Focus on packaging and integrating new ML models into the platform, using Python and common ML frameworks.
- Optimize serverless container workflows for AI workloads, ensuring performance, scalability, and seamless autoscaling.
- Collaborate with customers to fine-tune ML model performance and support their unique use cases.
- Work with cross-functional teams to improve the AI applications marketplace and ensure smooth model onboarding and lifecycle management.
- Stay current with trends in Kubernetes, machine learning, and MLOps, and help drive innovation within the platform.