Staff Data Engineer
ww • United States - Remote
Posted: April 24, 2026
Job Description
Who we are
The Data Team at Weight Watchers is in a key position to shape the future of our business and products. Leveraging tens of billions of data points per year, we produce insights that enable Weight Watchers to better serve our 4 million worldwide members. We are equal partners with our peers in Product, Engineering, Marketing, Finance, and beyond. At Weight Watchers there’s an authentic appetite for data-informed decision-making across the company, and our role is to educate and empower stakeholders by leading with transparency and expanding their understanding, often of deeply technical concepts.
What you will do
We are seeking a seasoned Senior / Staff Data Engineer to drive scale, performance, and actionability within our data ecosystem. You will strengthen and scale our Python and Snowflake-based infrastructure, building the core capabilities required to support dynamic analytics and our transition toward more automated, self-service data workflows.
We need an engineer who can adapt to a dynamic product environment and ask the ‘why’ behind the ‘what’ to ensure we are building the right solutions. For candidates at the Staff level, we expect a proven track record of setting technical roadmaps and serving as a force multiplier for the engineering and analytics teams.
Key Responsibilities
- Data Enablement: Support the analytics layer by developing core Looker views and building out our agentic AI infrastructure to drive automated self-service capabilities and reduce reporting bottlenecks.
- Pipeline Architecture: Design, build, and scale ELT pipelines that are resilient, efficient, and modular.
- Cross-Functional Collaboration: Partner with Finance, Product, and Analytics to ensure our data models solve the right problems.
- Data Modeling: Build and maintain analytics schemas (including Star Schemas) that abstract complex logic into user-friendly datasets.
- End-to-End Ownership: Lead projects from inception to production, taking accountability for data integrity and the trustworthiness of the platform.
- Operational Excellence: Monitor production health using monitoring tools (e.g., Datadog, Monte Carlo), ensuring our data ecosystem remains robust and reliable.
- Technical Leadership: Act as a technical lead, conduct code reviews, define engineering culture, and champion best practices.
Who you are
- Experience: 5+ years in data engineering, with at least 2 years focused on distributed, large-scale cloud data warehouses.
- Snowflake Expertise: Proven experience with Snowflake performance optimization and cost governance. Familiarity with Snowflake Cortex/MCP is a plus.
- Technical Mastery:
  - Advanced Software Engineering (Python): Deep proficiency in Python, with a focus on writing modular, reusable, and testable code (unit/integration tests) for complex data processing.
  - Data Modeling & SQL Mastery: Expert-level SQL and a sophisticated understanding of data warehousing methodologies to build performant, scalable analytics layers.
  - Infrastructure & Automation: Practical experience with modern CI/CD frameworks (e.g., GitHub Actions, Argo CD) to drive engineering velocity and platform stability.
  - Workflow Orchestration: Expertise in architecting and scaling orchestration-as-code workflows (e.g., Prefect or Airflow) to manage complex dependencies and ensure pipeline resilience.
  - Observability & Reliability: Deep proficiency in deploying monitoring and alerting frameworks (e.g., Datadog) to maximize system uptime while mitigating alert fatigue.
- Operational Maturity: Experience managing business-critical production pipelines with a focus on uptime, data quality, and defining SLAs.
- Education: BS/MS in Computer Science, Information Systems, or a related technical field.