Posted 3 hours ago

Job ID: JOB_ID_5432

Job Description

We are seeking an experienced Data Engineer / BI Engineer with strong expertise in Incorta, Databricks, and modern data warehousing. This role focuses on building scalable data pipelines, designing high-performance data models, and delivering analytics solutions that support business intelligence and reporting needs.

Key Responsibilities

  • Incorta Development
    • Design and develop Incorta schemas, data models, joins, business views, and dashboards.
    • Implement Direct Data Mapping (DDM), incremental data loads, and performance optimizations for large datasets.
    • Configure security models, including object-level and row-level security.
    • Manage environment promotions and automated data-refresh processes.
  • Databricks and Data Engineering
    • Develop scalable data pipelines using Databricks, PySpark, and SQL for both batch and streaming workloads.
    • Build and maintain ETL/ELT frameworks following best practices for performance, partitioning, orchestration, and monitoring.
    • Implement Change Data Capture (CDC) pipelines, Delta Lake optimizations, and advanced data transformations.
  • BI and Data Warehousing
    • Design dimensional data models, including Star and Snowflake schemas.
    • Implement Slowly Changing Dimensions (SCD Type 1 and Type 2) and conformed dimensions.
    • Build semantic layers and certified datasets aligned with governance and metadata standards.
    • Optimize queries and warehouse design to support low-latency reporting and analytics.
  • API Development and Integration
    • Build and consume REST and GraphQL APIs for data ingestion and data services.
    • Develop secure API layers using OAuth2, JWT, and rate-limiting best practices.
    • Integrate with ERP/CRM systems, cloud platforms, and downstream analytical applications.
  • Operational Excellence
    • Implement CI/CD pipelines for data workflows, jobs, and dashboards.
    • Deploy and maintain monitoring, logging, alerting, and automated recovery mechanisms for critical data pipelines.
    • Ensure high data quality through rule-based validation, profiling, and business checks.
    • Provide L3 support, perform root cause analysis, and maintain documentation including runbooks and knowledge articles.
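To make the SCD Type 2 responsibility above concrete, here is a minimal illustrative sketch of applying change records (as a CDC feed might deliver them) to a dimension table while preserving history: the current row for each changed key is expired, and a new versioned row is inserted. It uses only the standard library's `sqlite3`; table and column names (`dim_customer`, `valid_from`, `is_current`) are placeholders, not part of the role's actual schema, and on Databricks the same pattern would typically be expressed as a Delta Lake `MERGE`.

```python
import sqlite3

# Illustrative SCD Type 2 apply step. All names here are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE dim_customer (
        customer_id INTEGER,
        city        TEXT,
        valid_from  TEXT,
        valid_to    TEXT,      -- NULL marks the current version
        is_current  INTEGER
    )
""")

def apply_scd2(changes, load_date):
    """Expire the current row for each changed key, then insert a new version."""
    for customer_id, city in changes:
        cur.execute(
            "UPDATE dim_customer SET valid_to = ?, is_current = 0 "
            "WHERE customer_id = ? AND is_current = 1",
            (load_date, customer_id),
        )
        cur.execute(
            "INSERT INTO dim_customer VALUES (?, ?, ?, NULL, 1)",
            (customer_id, city, load_date),
        )

apply_scd2([(1, "Seattle")], "2024-01-01")   # initial load
apply_scd2([(1, "Portland")], "2024-06-01")  # attribute change: history is kept

rows = cur.execute(
    "SELECT city, valid_from, valid_to, is_current FROM dim_customer "
    "WHERE customer_id = 1 ORDER BY valid_from"
).fetchall()
# rows now holds both versions: the expired Seattle row and the current Portland row.
```

The same expire-then-insert shape is what a Delta Lake `MERGE` with `whenMatchedUpdate` / `whenNotMatchedInsert` clauses performs atomically at scale.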

Required Skills and Qualifications

  • Strong experience with Incorta development and data modeling.
  • Hands-on experience with Databricks, PySpark, and SQL.
  • Expertise in ETL/ELT pipeline development and optimization.
  • Experience with data warehousing concepts, including Star/Snowflake schemas and SCD models.
  • Experience building and integrating REST/GraphQL APIs.
  • Knowledge of data security practices, including OAuth2 and JWT.
  • Experience implementing CI/CD pipelines and operational monitoring solutions.
  • Strong troubleshooting skills with experience in production support and root cause analysis.
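As background for the OAuth2/JWT requirement, the sketch below shows the token mechanics behind a bearer-token flow: HS256 signing and verification implemented with only the standard library. It is a teaching sketch, not production code; a vetted library such as PyJWT would normally be used, and the secret and claims are placeholders.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical minimal HS256 JWT sign/verify, for illustration only.

def _b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, secret: bytes) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    # Constant-time comparison guards against timing attacks.
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_jwt({"sub": "pipeline-service", "scope": "read:data"}, b"demo-secret")
claims = verify_jwt(token, b"demo-secret")
```

A data-service API layer would validate such a token on every request before serving data, typically also checking registered claims like `exp` and `aud`, which this sketch omits.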

Special Requirements

Onsite


Compensation & Location

Salary: $120,000 – $160,000 per year (Estimated)

Location: Seattle, WA


Recruiter / Company – Contact Information

Email: u.t@flexontechnologies.com


Interested in this position?
Apply via Email
