Role: Big Data Solution Architect
Location: Jacksonville, FL (Remote)
Duration: Long Term
Key Skill: Apache Hadoop
Mandatory Skills: Hive, Spark
The Big Data Solutions Architect is responsible for managing the full life cycle of a Hadoop solution. This includes requirements analysis, platform selection, technical architecture design, application design and development, testing, and deployment of the proposed solution.
Must have experience with major big data solutions such as Hadoop, MapReduce, Hive, HBase, MongoDB, and Cassandra. Experience with related ecosystem tools such as Impala, Oozie, Mahout, Flume, ZooKeeper, and/or Sqoop is often required as well.
Firm understanding of major programming/scripting languages and environments such as Java, Linux shell scripting, PHP, Ruby, Python, and/or R.
Experience in designing solutions for multiple large data warehouses, with a good understanding of cluster and parallel architectures as well as high-scale or distributed RDBMSs and/or knowledge of NoSQL platforms.
Experience working with ETL tools such as Informatica, Talend, and/or Pentaho is a plus.
To be able to benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them;
To be able to clearly articulate pros and cons of various technologies and platforms;
To be able to assist program and project managers in the design, planning, and governance of implementation projects;
To be able to perform detailed analysis of business problems and technical environments, and apply this in designing the solution;
To be able to work creatively and analytically in a problem-solving environment;
To be a self-starter;
To be able to work in teams, as big data environments are built by multidisciplinary teams;
To be able to work in a fast-paced agile development environment.
Interested candidates may share their profiles with pauls (at) e-deft (dot) com