Posted 2 hours ago
Job ID: JOB_ID_8398
Role: Sr. AWS Data Engineer
Minneapolis, MN (Onsite) – Only Local to Minneapolis
12+ Months
We are looking for a Senior Data Engineer / Tech Lead to be the technical anchor for a squad of junior and mid-level data engineers working on our AWS Lakehouse. You will work within a larger initiative team, alongside senior leaders, architects, and domain experts, owning the day-to-day engineering execution for your squad’s workstream.
What You’ll Do
Hands-On Engineering:
- Be an active, high-output contributor: building AWS Glue pipelines, writing transformation logic, implementing data models, and debugging complex issues.
- Implement harmonization and modeling workstreams using open table formats (Apache Iceberg or Delta Lake), ensuring correct partitioning, schema evolution, and data quality patterns.
- Build and optimize consumption layer pipelines serving analytical workloads via Athena.
- Apply engineering best practices across your work: testing, documentation, code modularity, and performance awareness.
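To illustrate the "data quality patterns" mentioned above, here is a minimal, dependency-free sketch of a row-level validation gate of the kind a harmonization pipeline might run before writing to an Iceberg or Delta table. All names are hypothetical and not taken from this posting; a production Glue job would more likely use AWS Glue Data Quality or a framework such as Deequ.

```python
# Hypothetical sketch of a pre-write data quality gate for a lakehouse
# pipeline. Illustrative only; names and field choices are assumptions.

from datetime import date


def derive_partition_key(row):
    """Derive a day-level partition key (e.g. 'event_date=2024-01-15')
    from a row's ISO timestamp. Returns None if the timestamp is
    missing or malformed, so the row can be routed to a reject path."""
    ts = row.get("event_ts")
    if not ts:
        return None
    try:
        return f"event_date={date.fromisoformat(ts[:10]).isoformat()}"
    except ValueError:
        return None


def quality_gate(rows, required_fields=("id", "event_ts")):
    """Split rows into (valid, rejected): valid rows have all required
    fields and a derivable partition key; everything else is rejected
    with a reason, mirroring a dead-letter/quarantine pattern."""
    valid, rejected = [], []
    for row in rows:
        missing = [f for f in required_fields if not row.get(f)]
        if missing:
            rejected.append((row, f"missing: {', '.join(missing)}"))
            continue
        key = derive_partition_key(row)
        if key is None:
            rejected.append((row, "bad event_ts"))
            continue
        valid.append({**row, "partition_key": key})
    return valid, rejected
```

In a real Glue job, the valid set would be written to the table and the rejects to a quarantine location; the point is only that partition derivation and validation happen before the write, not after.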
Technical Guidance for the Squad:
- Serve as the go-to technical resource for junior engineers: pair programming, answering implementation questions, and reviewing code with a teaching mindset.
- Conduct thorough, constructive code reviews that raise the quality bar and accelerate learning.
- Flag technical risks or blockers to initiative leads and architects early, with context and proposed options.
Delivery & Collaboration:
- Translate workstream requirements (defined by initiative leads) into well-scoped engineering tasks for the squad.
- Keep delivery on track at the squad level: identify dependencies, surface blockers, and coordinate with peers across the broader initiative team.
- Participate actively in technical discussions, contributing implementation-level insight to broader initiative planning.
Required:
- 5+ years of data engineering experience, with a track record of delivering complex pipelines and data models in production.
- Strong, hands-on experience with AWS Glue: authoring, scheduling, and troubleshooting ETL jobs at scale.
- Deep working knowledge of core AWS data services: S3, Athena, and the broader AWS data ecosystem.
- Practical experience with open table formats (Apache Iceberg or Delta Lake), including partitioning, schema evolution, and compaction.
- Grounding in data modeling concepts: dimensional modeling or similar approaches applied in a lakehouse or data warehouse context.
- Experience informally leading or mentoring junior engineers: through code review, pairing, or task guidance.
- Strong problem-solving instincts: able to work through implementation complexity independently and know when to escalate.
- Clear communicator: comfortable asking good questions, giving precise technical feedback, and flagging issues early.
Preferred:
- Familiarity with AI-assisted development tools (e.g., GitHub Copilot, Amazon CodeWhisperer, or similar) and a genuine openness to integrating them into day-to-day engineering workflows.
- Experience with Apache Spark (via AWS Glue or EMR) for large-scale data processing.
- Familiarity with streaming data patterns (Flink, Kafka, Kinesis) and how they integrate with lakehouse architectures.
- Exposure to CI/CD practices for data pipelines.
- AWS certifications (e.g., Data Engineer – Associate, Solutions Architect).
Compensation & Location
Salary: $65 – $85 per hour (Estimated)
Location: Minneapolis, MN
Recruiter / Company – Contact Information
Email: dev@fusiongs.com