Posted 2 hours ago

Job ID: JOB_ID_4553

Job Description:

We are seeking a skilled Hadoop Developer to join our team. The ideal candidate will develop and maintain big data solutions using the Hadoop ecosystem, design and implement data processing workflows, ingest data from a variety of sources, and optimize performance.

Responsibilities:

  • Develop and maintain big data solutions using Hadoop ecosystem components such as HDFS, MapReduce, Hive, and HBase.
  • Design and implement data processing workflows using Apache Spark, Pig, and Hive for large-scale data analysis.
  • Ingest and process data from multiple sources using tools like Sqoop, Flume, and Kafka.
  • Optimize data storage and query performance through partitioning, indexing, and data compression techniques.
  • Monitor cluster performance, troubleshoot issues, and ensure scalability and reliability of Hadoop environments.

Required Qualifications:

  • Proven experience with Hadoop ecosystem components (HDFS, MapReduce, Hive, HBase).
  • Experience with Apache Spark, Pig, and Hive for data processing.
  • Familiarity with data ingestion tools such as Sqoop, Flume, and Kafka.
  • Strong understanding of data storage optimization techniques.
  • Experience in monitoring and troubleshooting Hadoop clusters.
  • Excellent problem-solving and analytical skills.

Additional Information:

This is a contract position with a duration of 12 months. The role is based in Chicago, IL.


Compensation & Location

Salary: $70,000 – $120,000 per year (Estimated)

Location: Chicago, IL


Recruiter / Company – Contact Information

Recruiter / Employer: DATALAB INFOTECH

Email: etha@datalabinfotech.com


Interested in this position?
Apply via Email
