Posted 6 hours ago

Job ID: JOB_ID_7883

Job Overview

We are seeking a Senior Data Engineer with expertise in Databricks, PySpark, SQL, and Azure data services to build high-performance ETL/ELT pipelines and deliver end-to-end AI/ML projects. This role involves optimizing data pipelines, data models, and Power BI datasets for scalability, performance, and governance.

Key Responsibilities

  • Develop and optimize solutions in Databricks (Spark, SQL, Delta Lake, notebooks, cluster optimization, and Unity Catalog).
  • Build high-performance ETL/ELT pipelines using PySpark and SQL.
  • Build Power BI dashboards and implement Power BI best practices.
  • Work with Azure data services (ADLS, ADF, Azure DevOps, Git-based CI/CD).
  • Optimize data pipelines, data models, and Power BI datasets for scalability, performance, and governance.
  • Deliver end-to-end AI/ML projects, including model development, training, evaluation, and deployment.

Technical Skills

  • Databricks (Spark, SQL, Delta Lake, Unity Catalog)
  • PySpark
  • SQL
  • Power BI
  • Azure Data Services (ADLS, ADF)
  • Azure DevOps, Git-based CI/CD
  • AI/ML project lifecycle

Special Requirements

Local candidates only; at minimum, candidates from nearby states will be considered.


Compensation & Location

Rate: $55 – $75 per hour (estimated)

Location: Greenville, SC


Recruiter / Company – Contact Information

Email: rla@gacsol.com


Interested in this position?
Apply via Email

Recruiter Notice:
To remove this job posting, send an email from rla@gacsol.com with the subject DELETE_JOB_ID_7883 to delete@join-this.com.