Posted 3 hours ago

Job ID: JOB_ID_5471

About the Role:

Lead the team technically to complete milestones on time: understand the complete requirements, define the architecture, keep all stakeholders updated, build proofs of concept (POCs), and manage delivery and releases to the customer. The role centers on designing, implementing, and maintaining scalable, near-real-time data pipelines that span on-premises and AWS cloud environments; the full set of duties is detailed under Responsibilities below.

Key Skills:

  • AWS
  • Glue
  • SNS/SQS
  • Python
  • PySpark
  • Data Lake
  • CloudWatch
  • CloudTrail
  • DB Design
  • SQL

Responsibilities:

  • Lead the team technically to complete milestones on time.
  • Understand the complete requirements, create the architecture, and keep all stakeholders updated.
  • Create proofs of concept (POCs).
  • Manage delivery and releases to the customer.
  • Develop services that ingest data from, and synchronize with, systems exposing the required data access mechanisms, ensuring near-real-time updates.
  • Ingest data from multiple sources using Python and other ETL tools.
  • Design and implement an event-driven architecture using AWS EventBridge, Kafka, or SNS/SQS for real-time data streaming (a sketch of this pattern follows the list).
  • Design, implement, and maintain scalable data pipelines that integrate both on-premises and AWS cloud environments.
  • Develop efficient Python scripts and applications using libraries such as pandas and NumPy to handle and process large datasets (see the chunked-processing sketch below).
  • Work with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB) to support high-performance data storage and retrieval.
  • Develop and deploy applications in a cloud-native architecture, leveraging modern cloud technologies for scalability and resilience.
  • Continuously monitor data workflows and systems, troubleshoot issues, and optimize performance for reliability and scalability.
  • Transition the existing pipeline to Microsoft SQL Server (see the loading sketch below).
  • Collaborate with the business application owner on the existing data architecture, including data ingestion, data pipelines, business logic, data consumption patterns, and analytics requirements.
  • Design and document the target architecture for data, pipelines, processing, and analytics.
  • Identify opportunities for optimization and consolidation.
  • Collaborate with the data team on decomposing business logic and data transformation patterns.
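
As a hedged illustration of the event-driven pattern named above, here is a minimal Python sketch using boto3 with SNS/SQS. The topic ARN, queue URL, and record schema are hypothetical placeholders, not details from this posting, and it assumes the queue is subscribed to the topic with raw message delivery disabled.

```python
import json

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Hypothetical resource identifiers, shown only to make the sketch runnable.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ingest-events"
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-queue"


def publish_change_event(record: dict) -> None:
    """Publish a source-system change record so downstream consumers react in near real time."""
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(record))


def drain_queue(batch_size: int = 10) -> list[dict]:
    """Poll the SQS queue subscribed to the topic and return decoded records."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=batch_size,
        WaitTimeSeconds=20,  # long polling to cut down on empty receives
    )
    records = []
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])                # SNS envelope (raw delivery off)
        records.append(json.loads(body["Message"]))   # the original published record
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
    return records
```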
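For the pandas/NumPy bullet, a minimal sketch of chunked processing that keeps memory bounded on large extracts; the file path and column names (event_id, amount) are hypothetical.

```python
import numpy as np
import pandas as pd

SOURCE_CSV = "events.csv"  # hypothetical extract from one of the source systems


def load_in_chunks(path: str, chunksize: int = 100_000) -> pd.DataFrame:
    """Stream a large CSV in fixed-size chunks, cleaning each chunk
    before concatenating, so peak memory stays bounded."""
    cleaned = []
    for chunk in pd.read_csv(path, chunksize=chunksize):
        chunk = chunk.dropna(subset=["event_id"])             # drop incomplete rows
        chunk["amount"] = chunk["amount"].astype(np.float64)  # normalize dtypes
        cleaned.append(chunk)
    return pd.concat(cleaned, ignore_index=True)


if __name__ == "__main__":
    df = load_in_chunks(SOURCE_CSV)
    print(f"loaded {len(df)} rows")
```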
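For the Microsoft SQL Server transition bullet, a minimal sketch of batch-loading a processed DataFrame via SQLAlchemy and pyodbc; the connection string, credentials, and table name are hypothetical placeholders.

```python
import pandas as pd
import sqlalchemy as sa

# Hypothetical DSN; a real deployment would pull credentials from a secret store.
ENGINE = sa.create_engine(
    "mssql+pyodbc://etl_user:password@dbhost/warehouse"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)


def load_to_mssql(df: pd.DataFrame, table: str = "events") -> None:
    """Append a processed DataFrame to a SQL Server table in batches."""
    df.to_sql(
        table,
        ENGINE,
        if_exists="append",  # keep existing rows; creates the table on first run
        index=False,
        chunksize=10_000,    # batch inserts to bound memory and round trips
    )
```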

Special Requirements

Contract


Compensation & Location

Salary: $110,000 – $150,000 per year

Location: Remote


Recruiter / Company – Contact Information

Recruiter / Employer: HCLTech

Email: ishekkumar@intellectt.com


Interested in this position?
Apply via Email
