Job Title: Java Big-Data Engineer
Work Location: New York, NY (Hybrid)
Initially remote; onsite after 3 months.
We are seeking hands-on Data Engineer consultants to build out the organization's next-generation data warehouse/mesh, solving data availability and access issues across the organization and enabling a graph of connectivity between hundreds of data sets.
We need people who are enthusiastic about enabling internal and external clients by streamlining and facilitating easy access to critical data that is well defined and has established, transparent levels of quality.
This engineer will leverage our data platforms to achieve this, while providing critical input to extend data platform capabilities.
Familiarity with ETL and Cloud Platform data pipeline solutions is critical, as is REST API authoring for data access.
Responsibilities:
As a member of the Business Data Engineering team, deliver data ingest/enrichment pipelines and access APIs using common cloud technologies.
Work with consumers to understand their data requirements and deliver data contracts with well-defined SLIs to track SLAs.
Apply modern application best practices: code quality, API test coverage, Agile development, DevOps, observability, and support.
Maintain programming standards and ensure use of the API Proxy pattern/template.
Conduct code reviews and enforce automated test coverage.
Standardize the CI/CD setup for API management tools and automated deployment.
Apply problem-solving skills to help peers research and select the tools, products, and frameworks that support business initiatives.
Qualifications & Experience:
5+ years of proven industry experience; bachelor's degree in IT or a related field
Hands-on development expertise in Java, Python, GraphQL, SQL, JUnit, Spring Boot, OpenAPI, Spark, Flink, Kafka
Experience working with cloud data platforms such as Azure, Snowflake, Yellowbrick, SingleStore, GBQ
Understanding of databases, API frameworks, and governance frameworks, with expertise in hosting and managing platforms such as Hadoop, Spark, Flink, Kafka, and Spring Boot; BI tools such as Tableau and Alteryx; and governance tools such as Collibra, Soda, and Amazon Deequ
Strong understanding of Twelve-Factor App Methodology
Solid understanding of API and integration design principles and patterns, with experience in web technologies.
Experience with test-driven development and API testing automation.
Demonstrated track record of full project lifecycle and development, as well as post-implementation support activities
Hands-on experience designing and developing high-volume REST APIs using standard API protocols and data formats.
Financial-industry experience: public and alternative asset management
Familiarity with NoSQL/NewSQL databases
Experience working with Azure API and database platforms
Strong documentation capability and adherence to testing and release management standards
Design, develop, modify, and test databases supporting Data Warehousing and BI business teams
Familiarity with SDLC methodologies and defect tracking (e.g., JIRA, Azure DevOps, ServiceNow)
Analytical and logical thought process for developing project solutions
Strong interpersonal and communication skills; works well in a team environment
Ability to deliver under competing priorities and pressures.
Excellent organizational skills in code structuring and partitioning, commenting, and documentation for team alignment and future modifications