While the broader world debates the implications of the AI boom, the gaming industry is quietly undergoing its own structural transformation.
To understand where things are heading, we need to look at the infrastructure behind it. We spoke with Antonina Batova, SVP of Infrastructure at Boosteroid — a company operating a globally distributed GPU infrastructure platform designed to support latency-sensitive workloads at scale. From her front-row seat, Batova sees how the AI-driven data center boom is fundamentally changing how computing power is delivered to end users.
Gaming itself isn’t going anywhere — demand for immersive, interactive experiences continues to grow. But as Batova points out, how we access those experiences is already beginning to change.
The Competing Demands of Modern Gaming
At the core of modern gaming lies a fundamental paradox. As players, we expect:
- Increasingly immersive, photorealistic experiences
- Affordable and accessible devices
- Full flexibility — the ability to play anywhere, anytime
Historically, these demands have been difficult to reconcile. High-performance gaming requires expensive, power-hungry hardware, which directly conflicts with portability and accessibility.
“Thermal constraints are becoming the defining factor in how far local hardware can scale. We’ve effectively reached the point where squeezing truly high-performance GPUs into portable devices is no longer practical,” explains Batova. “At the same time, GPU manufacturers are increasingly prioritizing AI and data center workloads, where demand — and margins — are significantly higher. This is gradually shifting innovation and capacity away from consumer gaming hardware. Cloud gaming demonstrated early on that performance doesn’t need to be local, and with the rise of AI infrastructure, that shift toward centralized compute is accelerating fast.”
The Shared Infrastructure of AI and Gaming
To support the rapid expansion of artificial intelligence, companies are scaling high-density GPU infrastructure at an unprecedented pace. While the applications differ, the underlying requirements are closely aligned.
“The server architecture behind generative AI and cloud gaming is built on similar principles. Both are highly GPU-intensive and require low-latency, high-performance infrastructure,” notes Batova. “As the industry rapidly scales data center capacity for AI — from power and cooling to rack density — cloud gaming is becoming an unexpected beneficiary of that expansion.”
This convergence means that investments driven by AI are indirectly accelerating the capabilities of other compute-intensive services, including real-time interactive applications like cloud gaming.
Navigating Complex Physical Constraints
Scaling this infrastructure is not just a software challenge — at a certain point, it becomes a question of physics. At higher densities, the real bottlenecks shift to power delivery and heat management.
“Modern AI-grade data centers operate at extreme power densities within a limited footprint. At that scale, air cooling becomes insufficient, making liquid cooling a necessity rather than an option. At the same time, securing sufficient grid capacity and redundancy becomes critical to keeping systems running under continuous load,” Batova explains.
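The cooling ceiling Batova describes can be illustrated with the standard sensible-heat airflow equation used in data center design. The rack power figures below are illustrative assumptions, not Boosteroid's actual numbers:

```python
# Why air cooling breaks down at AI-era rack densities.
# Sensible-heat equation: CFM = BTU/hr / (1.08 * delta_T_F)
# Rack power figures are illustrative, not from Boosteroid.

def required_airflow_cfm(rack_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow (cubic feet per minute) needed to carry away rack_kw of
    heat with a delta_t_f (Fahrenheit) temperature rise across the rack."""
    btu_per_hr = rack_kw * 1000 * 3.412  # 1 W = 3.412 BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)

# Legacy rack vs. dense air-cooled rack vs. AI-grade rack:
for kw in (10, 40, 100):
    print(f"{kw:>3} kW rack -> ~{required_airflow_cfm(kw):,.0f} CFM")
```

Airflow requirements scale linearly with power, so a 100 kW rack needs roughly ten times the airflow of a legacy 10 kW rack. Moving that much air through a single rack becomes mechanically and acoustically impractical, which is why liquid cooling stops being optional at these densities.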
Infrastructure development increasingly depends on access to power-dense locations with strong grid connectivity, as well as proximity to major internet exchange points. These factors directly impact both operational stability and end-user experience.
“AI workloads can often tolerate latency through batching and asynchronous processing, but cloud gaming cannot,” Batova adds. “For latency-sensitive use cases, even with all the processing power available, the network must deliver rendered frames to the end user in real time. This makes proximity to major internet hubs a critical requirement for any new facility.”
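The real-time constraint Batova points to can be made concrete with a back-of-envelope latency budget for a 60 fps stream. All per-stage costs here are illustrative assumptions, not measured values from any provider:

```python
# Back-of-envelope end-to-end latency budget for cloud gaming at 60 fps.
# Per-stage costs below are illustrative assumptions.

FRAME_INTERVAL_MS = 1000 / 60  # ~16.7 ms between frames at 60 fps

stages = {
    "render": 8.0,            # GPU renders the frame in the data center
    "encode": 4.0,            # hardware video encode (e.g. H.264/HEVC)
    "network_one_way": 15.0,  # data center -> player, dominated by distance
    "decode": 3.0,            # client-side hardware decode
    "display": 2.0,           # compositing and scan-out on the device
}

total_ms = sum(stages.values())
print(f"End-to-end frame latency: ~{total_ms:.1f} ms")

# Light in optical fiber travels at roughly 200,000 km/s, so every
# extra 200 km between player and facility adds about 1 ms one-way.
extra_km = 500
extra_ms = extra_km / 200_000 * 1000
print(f"+{extra_km} km of fiber adds ~{extra_ms:.1f} ms one-way")
```

An AI training job can simply batch work and absorb such delays; a game stream cannot, because every added millisecond of network distance lands directly in the player's input-to-photon latency. That is the physical reason new facilities cluster around major internet exchange points.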
Delivering a seamless experience ultimately requires balancing multiple constraints simultaneously — compute density, energy availability, cooling capacity, and network performance.
Solving the Player Paradox Through Infrastructure
This shift in infrastructure is giving cloud gaming a new level of viability. For years, streaming games from the cloud was often seen as a secondary option, limited by latency and inconsistent performance. As infrastructure evolves, those limitations are gradually being addressed.
Cloud gaming offers a way to resolve the long-standing trade-offs between performance, accessibility, and portability. Instead of relying on local hardware, users can access high-end gaming experiences through a wide range of devices — from laptops to smart TVs — while the heavy computation is handled remotely.
As infrastructure continues to mature — particularly in terms of latency and regional coverage — access to high-performance gaming will depend less on the device itself and more on the quality of the underlying network and data center footprint.