Posted 4 hours ago

Job ID: JOB_ID_6407

Job Summary:

We are seeking an experienced AWS Cloud Data Engineer to design and implement innovative data architectures that drive our organization’s insights and decision-making. This is an exciting opportunity to work at the forefront of cloud data engineering, using cutting-edge AWS services and open-source technologies.

Key Responsibilities:

  • Design and Develop Data Architecture: Create scalable, resilient, and efficient data lakehouse solutions on AWS, leveraging Apache Iceberg, AWS native services, and Snowflake to meet complex business requirements.
  • Build and Maintain Data Pipelines: Develop, automate, and optimize ETL/ELT workflows to ingest and process data from diverse sources into our AWS and Snowflake ecosystem, ensuring high data quality and timeliness.
  • Create and Manage Data APIs: Design and maintain secure, scalable RESTful APIs and other data access endpoints, enabling seamless integration for internal teams and applications using AWS services.
  • Implement AWS Data Services: Utilize Amazon S3, Amazon EMR, AWS Lake Formation, and other AWS tools to process, store, and analyze data efficiently, with native support for Iceberg tables.
  • Manage Apache Iceberg Tables: Build and oversee Iceberg tables on Amazon S3, facilitating advanced data lakehouse features such as ACID transactions, schema evolution, and time travel (an illustrative sketch follows this list).
  • Optimize Data Performance: Apply partitioning strategies, data compaction, and performance tuning techniques to enhance query speed and reduce latency.
  • Ensure Data Quality & Security: Implement rigorous data validation, error handling, and security measures, including access control via AWS Lake Formation, IAM, and Cognito, to ensure compliance with data protection standards.
  • Collaborate Across Teams: Work closely with data scientists, analysts, software engineers, and business stakeholders to understand their data needs and deliver tailored solutions.
  • Provide Technical Support & Documentation: Troubleshoot data pipeline and API issues effectively, and maintain comprehensive technical documentation for workflows, processes, and API specifications.
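To give a flavor of the Iceberg-related responsibilities above (tables on Amazon S3, partitioning, schema evolution, time travel), here is a brief, purely illustrative PySpark sketch. It is not part of the role’s requirements; the catalog name, S3 bucket, table name, and timestamp are hypothetical placeholders, and it assumes Spark 3.3+ with the iceberg-spark-runtime package on the classpath.

  from pyspark.sql import SparkSession

  # Configure a Spark session with a Hadoop-type Iceberg catalog backed by S3.
  # "demo" and the bucket path are placeholder names, not values from this posting.
  spark = (
      SparkSession.builder
      .appName("iceberg-lakehouse-sketch")
      .config("spark.sql.extensions",
              "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
      .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.demo.type", "hadoop")
      .config("spark.sql.catalog.demo.warehouse", "s3://example-bucket/warehouse/")
      .getOrCreate()
  )

  # Create a day-partitioned Iceberg table; Iceberg provides ACID commits and
  # schema evolution on top of the S3-resident data files.
  spark.sql("""
      CREATE TABLE IF NOT EXISTS demo.analytics.events (
          event_id   BIGINT,
          event_type STRING,
          event_ts   TIMESTAMP
      )
      USING iceberg
      PARTITIONED BY (days(event_ts))
  """)

  # Time travel: read the table as it existed at an earlier point (Spark 3.3+ syntax).
  spark.sql(
      "SELECT * FROM demo.analytics.events TIMESTAMP AS OF '2024-01-01 00:00:00'"
  ).show()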

Qualifications & Skills:

  • Proven experience designing and deploying data architectures on AWS, including familiarity with Apache Iceberg, Snowflake, and related tools.
  • Strong programming skills in Python, Scala, or Java for data pipeline development.
  • Hands-on experience with AWS data services such as S3, EMR, Glue, and Lake Formation.
  • Knowledge of data modeling, schema design, and performance optimization for large datasets.
  • Understanding of security best practices for cloud data environments, including access controls and compliance standards.
  • Excellent communication skills to collaborate effectively with cross-functional teams.
  • Experience with CI/CD pipelines for data engineering workflows is a plus.
  • Familiarity with containerization technologies like Docker and Kubernetes is beneficial.

Special Requirements

Visa: Green Card (GC) and US Citizen (USC); Interview: face to face


Compensation & Location

Salary: $60 – $85 per year

Location: Richardson, TX


Recruiter / Company – Contact Information

Email: as.tek3@gmail.com


Interested in this position?
Apply via Email
