
Hardware - General Manager

Runware Romania


No Relocation

Posted: May 12, 2026

Job Description

Company Description

Runware is the fastest AI-as-a-Service platform for media generation

Runware is an AI-as-a-Service platform that delivers real-time inference at 5–10× lower cost than competitors. Our platform is purpose-built for speed and efficiency: custom GPU design, server setup, and datacenter architecture matched with performance-optimized software and a best-in-class API. Engineering teams who work with Runware save up to 80% on inference, improve response times, and scale instantly across 400K+ AI models, all through a single flexible API. Usage-based pricing and on-demand capacity are already battle-tested by Wix, OpenArt, NightCafe, Freepik, and thousands more. Backed by Insight Partners, a16z Speedrun, Begin Capital, and Zero Prime.

Join Runware to power the AI products that are changing the world

Runware is building the infrastructure powering the next generation of AI media applications. Our platform delivers high-performance AI inference through a global GPU infrastructure designed for speed, scale, and efficiency.

As Runware continues to grow, compute infrastructure becomes a core strategic asset. We are looking for an experienced Hardware Infrastructure Lead to take ownership of our GPU and server infrastructure strategy, working closely with our CEO and engineering leadership.

This role combines technical depth, infrastructure strategy, and vendor management to ensure Runware operates the most efficient and scalable compute infrastructure possible.

About the Role

You will be responsible for defining and executing Runware’s hardware and datacenter strategy, including GPU sourcing, server infrastructure, hosting partnerships, and long-term capacity planning.

You will help the company scale its infrastructure globally while optimizing performance, reliability, and cost efficiency.

This is a highly strategic role sitting at the intersection of engineering, operations, and business, with direct impact on the company’s ability to scale its AI platform.

Responsibilities

  • Define and execute Runware’s hardware and compute infrastructure strategy.
  • Own the GPU procurement and supply strategy, including relationships with vendors and hosting providers.
  • Design and optimize server and datacenter infrastructure supporting AI inference workloads.
  • Evaluate and negotiate partnerships with datacenter providers, GPU vendors, and infrastructure partners.
  • Work closely with engineering teams to ensure infrastructure supports evolving product and model requirements.
  • Drive cost/performance optimization across hardware deployments.
  • Lead capacity planning to support rapid company growth and global infrastructure expansion.
  • Monitor industry trends across GPU hardware, AI infrastructure, and datacenter innovation.
  • Support the deployment of infrastructure across multiple regions and providers.
  • Help build the long-term roadmap for Runware’s global compute platform.
Requirements
  • Strong experience with GPU infrastructure, servers, or datacenter operations.
  • Experience managing large-scale compute infrastructure for cloud, AI, or high-performance workloads.
  • Background working with GPU vendors, hosting providers, or infrastructure partners.
  • Strong understanding of AI workloads and GPU performance optimization.
  • Experience negotiating infrastructure contracts or managing hardware procurement.
  • Ability to balance technical decisions with business and cost considerations.
  • Comfort working in a fast-moving startup environment.
  • Excellent communication skills and ability to collaborate across technical and business teams.

Nice to Have

  • Experience working in AI infrastructure, cloud providers, or GPU hosting companies.
  • Experience building or scaling global compute infrastructure.
  • Background in HPC, cloud infrastructure, or datacenter operations.
