Job ID: JOB_ID_2137
Role Overview
As we move into 2026, demand for sophisticated data architectures continues to grow. We are seeking a Senior GCP Data Engineer to join our dynamic team in Houston, Texas. In this role, you will serve as the primary architect and builder of our enterprise data backbone. This is not just a coding position; it is a strategic leadership role in which you will design scalable, resilient systems that transform raw data into high-value strategic assets. You will operate in a high-velocity environment where Google Cloud Platform (GCP) is our primary technology stack.
Key Responsibilities
- Design, develop, and maintain complex ETL/ELT pipelines using Dataproc to facilitate large-scale data processing and analytics.
- Implement and manage robust messaging and streaming architectures utilizing Cloud Pub/Sub to ensure real-time data availability across the organization.
- Develop lightweight, event-driven microservices and data triggers using Cloud Functions and Cloud Run to optimize system responsiveness.
- Architect and optimize enterprise storage solutions on Google Cloud Storage (GCS), with a heavy focus on data security, governance, and cost-efficiency.
- Utilize Apache Pig on Dataproc for the analysis and transformation of massive, unstructured datasets.
- Write clean, production-grade Python code for data manipulation, complex API integrations, and advanced automation scripts.
- Collaborate with data scientists and business analysts to ensure data schemas meet the evolving needs of the business.
- Implement Infrastructure as Code (IaC) using Terraform to ensure reproducible and stable cloud environments.
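To make the event-driven pattern above concrete, here is a minimal Python sketch of a Pub/Sub-triggered handler that decodes a message and derives a partitioned GCS object path. The function name, message schema (source, date, record_id), and bucket layout are all invented for illustration; a real Cloud Function would receive this envelope from the Pub/Sub trigger and write to GCS with the client library.

```python
import base64
import json

def handle_pubsub_event(event: dict) -> str:
    """Decode a Pub/Sub-style message and derive a GCS object path.

    `event` mimics the envelope a Cloud Function receives: the payload
    is base64-encoded JSON under event["data"]. The payload schema
    (source, date, record_id) is invented for this sketch.
    """
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    source = payload["source"]
    date = payload["date"]
    # A date-partitioned layout keeps lifecycle rules and scans cheap.
    return f"raw/{source}/dt={date}/{payload['record_id']}.json"

# Local usage example with a fabricated message:
msg = {"source": "billing", "date": "2026-01-15", "record_id": "r-001"}
event = {"data": base64.b64encode(json.dumps(msg).encode("utf-8"))}
print(handle_pubsub_event(event))  # raw/billing/dt=2026-01-15/r-001.json
```

Keeping the path logic in a pure function like this makes it trivially unit-testable without deploying to GCP.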
Technical Requirements
- Minimum of 10 years of experience in Data Engineering, with a significant focus on the Google Cloud Platform ecosystem.
- Expert-level proficiency with Dataproc and Apache Pig for big data processing.
- Deep understanding of Google Cloud Storage (GCS), including bucket policies, object lifecycles, and storage class optimization.
- Strong hands-on experience with Pub/Sub for building decoupled, asynchronous systems.
- Proficiency in deploying containerized applications via Cloud Run and serverless functions via Cloud Functions.
- Advanced Python programming skills, including mastery of libraries such as Pandas, PySpark, or Apache Beam.
- Proven experience in data design, schema modeling, and managing complex data lifecycles.
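As a hedged illustration of the schema-modeling skills listed above, the following self-contained Python sketch validates records against a declared schema before load. The `Field` type and the example schema are invented here; a production pipeline would typically enforce the equivalent via Beam schemas or warehouse table constraints.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Field:
    name: str
    dtype: type
    nullable: bool = False

def validate_record(record: dict, schema: list) -> list:
    """Return a list of violations for `record` against `schema`.

    A minimal stand-in for the checks a pipeline would run before
    loading rows into a warehouse table.
    """
    errors = []
    for field in schema:
        value = record.get(field.name)
        if value is None:
            if not field.nullable:
                errors.append(f"{field.name}: missing required field")
        elif not isinstance(value, field.dtype):
            errors.append(f"{field.name}: expected {field.dtype.__name__}")
    return errors

# Usage with an invented schema:
schema = [
    Field("user_id", str),
    Field("amount", float),
    Field("note", str, nullable=True),
]
print(validate_record({"user_id": "u1", "amount": 9.99}, schema))    # []
print(validate_record({"user_id": "u1", "amount": "9.99"}, schema))  # ['amount: expected float']
```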
Preferred Qualifications
- Experience with BigQuery optimization and Looker integration for business intelligence.
- Familiarity with modern DevOps practices and CI/CD pipelines for data engineering.
- Strong communication skills with the ability to translate technical concepts for non-technical stakeholders.
- Relevant GCP certifications (e.g., Professional Data Engineer).
The 2026 Data Landscape
In the current technological era, data is the lifeblood of innovation. Our team in Houston is at the forefront of integrating AI-driven insights with robust data engineering practices. By joining us, you will be part of a forward-thinking culture that values technical excellence and creative problem-solving. We provide the tools and the autonomy needed to build world-class data systems that drive real-world impact.
Compensation & Location
Salary: $155,000 – $205,000 per year (Estimated)
Location: Houston, TX
Recruiter / Company – Contact Information
Recruiter / Employer: Anveta Inc
Email: steve@anveta.com