Posted 3 hours ago

Job ID: JOB_ID_6971

Job Title: Data Engineer (Ab Initio to Databricks Modernization)

We are seeking an experienced Data Engineer for an onsite role in Wilmington, DE. This position requires 12+ years of experience and focuses on modernizing legacy ETL systems.

Overview:

The Data Engineer will lead the modernization of legacy ETL systems by migrating Ab Initio workflows to scalable, modular PySpark pipelines on Databricks. The role involves transforming complex data ecosystems into cloud-native architectures while ensuring data integrity, performance, and reliability.

Key Responsibilities:

  • ETL Modernization & Development: Analyze and migrate legacy ETL workflows from Ab Initio to PySpark-based pipelines. Design and develop scalable data pipelines on Databricks. Refactor monolithic processes into modular, reusable components. Leverage existing enterprise datasets to avoid redundancy.
  • Data Integration & Processing: Build and maintain ETL/ELT pipelines integrating data from Snowflake and other sources. Process and publish enriched datasets for downstream applications. Support batch and near real-time data processing.
  • Data Lineage & Optimization: Create end-to-end data lineage and data flow diagrams. Identify redundancies and drive process consolidation and optimization. Ensure adherence to data governance and quality standards.
  • Testing & Validation: Develop unit, integration, and reconciliation frameworks. Perform dual-run comparisons with legacy systems. Validate outputs in UAT and pre-production environments.
  • Deployment & Operations: Support cutover and migration strategy from legacy systems. Decommission legacy workflows and optimize scheduling (e.g., Control-M). Develop runbooks, monitoring, and operational documentation.
  • Collaboration: Work with data architects, analysts, and downstream application teams. Coordinate user acceptance testing (UAT/FAT) and stakeholder sign-offs.
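To illustrate the "dual-run comparisons with legacy systems" responsibility above, here is a minimal, hedged sketch in plain Python of the kind of reconciliation check such a framework performs: comparing legacy (Ab Initio) output against migrated (PySpark) output on a business key. The function name, field names, and sample data are illustrative assumptions, not part of this posting; a production framework would operate on DataFrames at scale.

```python
# Illustrative dual-run reconciliation sketch (hypothetical helper, not
# from the posting). Compares legacy output rows against migrated output
# rows keyed on a business key and reports discrepancies.

def reconcile(legacy_rows, migrated_rows, key="id"):
    """Return a report of keys missing, extra, or mismatched between runs."""
    legacy = {r[key]: r for r in legacy_rows}
    migrated = {r[key]: r for r in migrated_rows}
    report = {
        # Keys present in the legacy run but absent from the migrated run.
        "missing_in_migrated": sorted(set(legacy) - set(migrated)),
        # Keys the migrated run produced that the legacy run did not.
        "extra_in_migrated": sorted(set(migrated) - set(legacy)),
        # Keys present in both runs whose row values differ.
        "value_mismatches": [],
    }
    for k in sorted(set(legacy) & set(migrated)):
        if legacy[k] != migrated[k]:
            report["value_mismatches"].append(k)
    return report

# Sample dual-run outputs (illustrative data only).
legacy = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}]
migrated = [{"id": 1, "amt": 10}, {"id": 3, "amt": 30}]
print(reconcile(legacy, migrated))
```

In a Databricks setting the same idea is typically expressed with DataFrame joins or anti-joins so the comparison runs distributed rather than in driver memory.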

Required Skills:

  • 12+ years of experience in Data Engineering.
  • Expertise in migrating Ab Initio ETL workflows.
  • Strong experience with PySpark and Databricks.
  • Experience building and maintaining ETL/ELT pipelines.
  • Proficiency in data integration from sources like Snowflake.
  • Experience with data lineage and data flow diagramming.
  • Knowledge of data governance and quality standards.
  • Experience in developing unit, integration, and reconciliation frameworks.
  • Familiarity with Control-M for scheduling.
  • Experience with cloud-native architectures.
  • Excellent collaboration and communication skills.

Special Requirements

Onsite role in Wilmington, DE. The role requires migrating Ab Initio workflows to PySpark on Databricks. Experience with Snowflake, Control-M, and cloud-native architectures is preferred. E-Verify participating employer.


Compensation & Location

Salary: $60 – $80 per hour (Estimated)

Location: Wilmington, DE


Recruiter / Company – Contact Information

Recruiter / Employer: Client

Email: ket.yadav@ktekresourcing.com

