Job ID: JOB_ID_4636
Job Summary:
We are looking for an experienced AWS Data Engineer to join our team. The ideal candidate will have a strong background in developing data pipelines using Python/Java/Scala and Spark, with significant experience in AWS data services. You will be responsible for designing, building, and maintaining scalable and efficient data solutions on the AWS platform. This role requires expertise in Infrastructure as Code (IaC) using Terraform to manage cloud resources.
Key Responsibilities:
- Develop and maintain robust data pipelines using Python, Java, Scala, and Spark.
- Utilize AWS data services such as AWS Glue, AWS S3, AWS Lambda, AWS Athena, and AWS Redshift for data processing and storage.
- Design and implement data models optimized for performance and scalability.
- Develop and manage Infrastructure as Code (IaC) using Terraform.
- Collaborate with cross-functional teams to understand data requirements and deliver effective solutions.
- Troubleshoot and resolve issues related to data pipelines and AWS infrastructure.
- Ensure data quality, integrity, and security across all data solutions.
Required Skills and Experience:
- 5 years of hands-on experience developing data pipelines using Python/Java/Scala and Spark.
- 3 years of experience with AWS data engineering services, including AWS Glue, AWS S3, AWS Lambda, AWS Athena, and AWS Redshift.
- Experience developing Infrastructure as Code (IaC) using Terraform.
- Strong understanding of data warehousing concepts and best practices.
- Excellent problem-solving and analytical skills.
Location:
San Francisco, CA (100% Onsite)
Employment Type:
Contract
Compensation
Salary: $75 – $95 per hour (Estimated)
Recruiter / Company – Contact Information
Email: .y@nityainc.com