Big Data Engineer (multiple positions open) - starts remote, moving onsite in Seattle once pandemic restrictions are lifted. Minimum Requirements:
5+ years of experience in the big data ecosystem. Example technologies include batch and stream processing (e.g. Spark, Hive, Flink, Beam), analytical engines (e.g. Presto, Druid), search platforms (e.g. Solr/Lucene), tooling (e.g. Airflow, Jupyter, Superset, Tableau), and storage formats (e.g. Iceberg)
Excellent verbal and written communication skills; able to collaborate cross-functionally with data science, machine learning, data platform, and analytics teams
Customer-focused mindset, with an emphasis on user experience and satisfaction
Superb problem-solving skills and the ability to thrive in a fast-paced, dynamic environment
Hands-on experience designing, building, scaling, and troubleshooting solutions to big data problems
Must be self-driven and able to advise and support users in properly integrating with our data platform
Programming experience in Java, Python, Scala, or similar languages
Passionate about the latest big data technologies; open source community presence is a big plus
Experience with AWS, Kubernetes, Infrastructure-as-code, and data privacy & compliance is a big plus