
Job ID: JOB_ID_5067

Job Title: Data Engineer with AI and Kubernetes

We are seeking a skilled Data Engineer with expertise in AI and Kubernetes to join our team for a contract-to-hire position. This role requires a strong understanding of data processing, pipeline development, and cloud technologies, particularly AWS. The ideal candidate will have hands-on experience with big data tools and a passion for solving complex data challenges.

Responsibilities:

  • Develop and maintain robust data pipelines using Python and Apache Spark (PySpark).
  • Utilize AWS services such as S3, Glue, EMR, Redshift, Athena, and Lambda for data storage, processing, and analysis.
  • Design and implement data models, adhering to data warehousing concepts and ETL frameworks.
  • Work with large-scale, distributed data systems to ensure data integrity and accessibility.
  • Implement and manage CI/CD pipelines and utilize version control tools like Git for code management.
  • Collaborate with cross-functional teams to understand data needs and deliver effective solutions.
  • Troubleshoot and resolve data-related issues, ensuring smooth operation of data infrastructure.
  • Participate in the full software development lifecycle, from design to deployment and maintenance.
  • Stay up-to-date with the latest trends and technologies in data engineering, AI, and cloud computing.

Required Skills & Qualifications:

  • Strong proficiency in Python for data processing and pipeline development.
  • Hands-on experience with Apache Spark (PySpark preferred).
  • Solid experience with AWS services such as S3, Glue, EMR, Redshift, Athena, Lambda.
  • Experience with SQL and relational/non-relational databases.
  • Knowledge of data modeling, data warehousing concepts, and ETL frameworks.
  • Experience working with large-scale, distributed data systems.
  • Familiarity with CI/CD pipelines and version control tools (Git).
  • Strong problem-solving and communication skills.

Preferred / Nice to Have:

  • Experience with Airflow or other workflow orchestration tools.
  • Knowledge of Kafka, Kinesis, or streaming data platforms.
  • Experience with Docker/Kubernetes.
  • Exposure to Delta Lake, Iceberg, or Hudi.

This is a contract-to-hire role, and we are looking for candidates who are visa-independent.


Special Requirements

Visa-independent candidates required. In-person interview preferred.


Compensation & Location

Salary: $90,000 – $130,000 per year (Estimated)

Location: Columbus, OH


Recruiter / Company – Contact Information

Email: u.k@procorpsystems.com


Interested in this position?
Apply via Email
