AI Data Engineer
Enroute • Mexico
Posted: February 17, 2026
Job Description
We love technology, and we enjoy what we do. We are constantly driven by curiosity, innovation, and the desire to improve every day. We take ownership of our work, value collaboration, and foster a culture of trust and accountability. Our Enrouters embrace challenges, ask questions, learn quickly, and grow together.
We pride ourselves on offering competitive compensation, excellent benefits, a great work environment, flexible schedules, and policies that promote a healthy work-life balance. We care about who you are both inside and outside the workplace, and we are committed to building a strong community of driven, responsible, respectful—and above all—happy individuals. We want you to genuinely enjoy working with us.
We are seeking a data-driven BI Engineer to join our team at a high-growth advertising technology company. This role focuses on scaling our reporting infrastructure for advertising performance and billing reconciliation, ensuring that financial and operational data is accurate, automated, and actionable.
In this role, you will be responsible for developing robust data pipelines, ensuring data quality and reliability, and enabling efficient data consumption across the organization. You will collaborate closely with cross-functional teams including Product, Engineering, Analytics, and Business stakeholders to deliver high-impact data platforms.
The ideal candidate is a proactive problem-solver with strong technical expertise, capable of working with large datasets, modern data architectures, and cloud-based environments. You thrive in fast-paced settings, navigate ambiguity with confidence, and are passionate about turning data into actionable value.
Databricks & Data Engineering
Strong experience working with Databricks Lakehouse architecture
Experience designing and maintaining scalable data pipelines
Ability to process and model large-scale analytical datasets
Advanced SQL & Data Modeling
Expert-level SQL development, including:
- Window functions
- Performance optimization
- Complex joins and aggregations
Experience using Jinja SQL templating (dbt or similar frameworks)
Ability to build modular and reusable SQL transformations
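As a minimal sketch of the Jinja SQL templating this requirement describes, the snippet below renders a parameterized, reusable aggregation model with Python's `jinja2` library (dbt uses the same Jinja syntax inside its model files; the table and column names here are hypothetical):

```python
from jinja2 import Template

# Hypothetical reusable transformation: a Jinja-templated SQL model
# parameterized by source table and metric columns (illustrative names).
SQL_TEMPLATE = Template("""
SELECT
    campaign_id,
    {% for metric in metrics %}
    SUM({{ metric }}) AS total_{{ metric }}{{ "," if not loop.last }}
    {% endfor %}
FROM {{ source_table }}
GROUP BY campaign_id
""")

def render_model(source_table, metrics):
    """Render the template into executable SQL text."""
    return SQL_TEMPLATE.render(source_table=source_table, metrics=metrics)

sql = render_model("ad_events", ["impressions", "clicks"])
print(sql)
```

The same template can be rendered against any fact table and metric list, which is what makes the transformation modular rather than copy-pasted per report.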
Advertising Metrics & Billing
Experience building reporting models for:
- Advertising performance metrics
- Billing reconciliation
- Financial data validation
Platform & Engineering Foundations
Experience working with GitHub workflows
Familiarity with CI/CD automation using Jenkins
Comfortable working with YAML/YML configuration files
Understanding of Metrics View layers used by BI tools
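To illustrate the YAML configuration and metrics-view layer mentioned above, a semantic definition file typically declares measures once so every BI tool reads the same logic. The schema and field names below are purely illustrative (loosely modeled on common semantic-layer conventions), not a specific product's format:

```yaml
# Hypothetical metrics-view definition (names and schema are illustrative)
metrics:
  - name: total_ad_spend
    description: Gross advertising spend, reconciled against billing
    type: sum
    sql: spend_usd
    dimensions: [campaign_id, advertiser_id, date_day]
  - name: ctr
    description: Click-through rate
    type: ratio
    numerator: clicks
    denominator: impressions
```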
AI & Data Workflow Automation (Preferred)
The role includes exposure to AI-powered data workflows, particularly where AI assists in data processing, enrichment, or anomaly detection. Preferred experience includes:
- Integrating LLM APIs (e.g., OpenAI, Anthropic) into data pipelines
- Using AI for automated data classification, anomaly detection, and data enrichment
- Familiarity with frameworks such as LangChain or LlamaIndex
- Exposure to Model Context Protocol (MCP) or similar approaches to connecting AI models with external tools and data sources
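As a flavor of the anomaly-detection work this section refers to, here is a deliberately simple statistical sketch that flags billing spikes against the median daily total (the field names, dates, and ratio threshold are hypothetical; production systems would use richer models):

```python
from statistics import median

def flag_billing_anomalies(daily_totals, ratio=3.0):
    """Flag days whose billed amount exceeds `ratio` times the median
    daily total -- a crude but robust spike detector."""
    med = median(daily_totals.values())
    return [day for day, total in daily_totals.items() if total > ratio * med]

# Illustrative data: four normal days and one billing spike.
totals = {"2026-02-01": 1000.0, "2026-02-02": 1020.0, "2026-02-03": 990.0,
          "2026-02-04": 1010.0, "2026-02-05": 9800.0}
anomalies = flag_billing_anomalies(totals)
print(anomalies)  # -> ['2026-02-05']
```

In practice this kind of rule-based check would run alongside (or feed into) LLM-assisted classification and enrichment steps in the pipeline.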
Key Responsibilities
- Build, maintain, and optimize scalable data pipelines and workspaces within the Databricks environment.
- Develop and audit complex reporting models focused on advertising performance metrics and financial billing.
- Write high-performance SQL queries and leverage Jinja SQL to create dynamic and reusable transformations.
- Design data models that support accurate and actionable business insights.
- Collaborate with cross-functional teams to ensure data reliability and reporting accuracy.
- Explore and implement basic AI/ML enhancements such as predictive billing models or anomaly detection.
- Support automation efforts through CI/CD practices using GitHub and Jenkins.
- Maintain clean and well-documented configuration files using YAML/YML.
- Continuously improve data workflows for efficiency, scalability, and quality.