If you think text-to-video is as far as today’s artificial intelligence (AI) technology can go, think again. Google DeepMind recently unveiled Genie – a generative interactive environment trained on Internet videos. In short, it’s an early prototype for a full-blown text-to-video-games model.
Genie takes any text, image, photograph, or sketch prompt and generates a controllable virtual world from it. But don’t expect the output to have triple-A graphics just yet. The model can only create 2D platformers for now – just like the classic Super Mario Bros games we used to play.
How is this all possible? DeepMind shares that the model has 11B parameters and was trained on over 200,000 hours of video from 2D platformer games. Several components work behind the scenes. First, a tokenizer compresses each video frame into discrete tokens – units of data that serve as a basis for encoding and decoding. Next, a latent action model encodes the transition between two consecutive frames as one of eight latent actions. Finally, a dynamics model uses the tokens and latent actions to predict future frames.
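To make the three-stage pipeline concrete, here is a heavily simplified sketch in Python. Everything in it is a stand-in: the real Genie uses learned spatiotemporal transformers, while the functions below are toy placeholders that only illustrate the flow of data (frames → tokens → latent action → predicted next frame). The function names and the quantization scheme are our own inventions, not DeepMind’s API.

```python
NUM_LATENT_ACTIONS = 8  # Genie encodes each frame transition as one of 8 latent actions


def tokenize(frame):
    """Stand-in video tokenizer: compress a frame (a list of pixel
    values 0-255) into discrete tokens by coarse quantization."""
    return [pixel // 32 for pixel in frame]  # maps pixels to 8 token values


def infer_latent_action(tokens_t, tokens_t1):
    """Stand-in latent action model: map the transition between two
    consecutive frames to one of eight unlabeled action ids."""
    delta = sum(b - a for a, b in zip(tokens_t, tokens_t1))
    return delta % NUM_LATENT_ACTIONS


def predict_next(tokens_t, action):
    """Stand-in dynamics model: predict the next frame's tokens from
    the current tokens plus a latent action."""
    return [(t + action) % 8 for t in tokens_t]


# One "rollout" step: tokenize two frames, infer the action that
# connects them, then predict the frame that should follow.
frame_a = [0, 32, 64, 96]
frame_b = [32, 64, 96, 128]
tok_a, tok_b = tokenize(frame_a), tokenize(frame_b)
action = infer_latent_action(tok_a, tok_b)
next_tokens = predict_next(tok_b, action)
```

The key idea the sketch preserves is that the actions are *latent*: nobody labeled the training videos with button presses. The model discovers a small action vocabulary on its own, which is what later lets a player steer the generated world.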
This training allowed Genie to “learn diverse latent actions that control characters in a consistent manner.” Currently, the games generated by Genie run at only 1 FPS, but DeepMind’s Tim Rocktäschel clarifies that the model is not confined to 2D platformers. They trained another Genie on robotics data, and it was able to create controllable simulator games.
Genie is the latest example of a “world model” in AI, where predictions guide the model’s actions. Built on unsupervised learning, the model teaches itself from unlabeled video until it can construct a coherent virtual environment to operate in.
It’s safe to say this marks a significant milestone in AI gaming. The transition from raw data to a playable game isn’t instantaneous, but the prospect of generating full-fledged custom games from a plain text prompt is nothing short of groundbreaking.
It paves the way for an era where we can automate the design of custom games, transforming game narratives, characters, and environments in mere seconds. It’s not hard to envision a future filled with AI-driven dynamic gaming where characters and scenarios evolve in real time based on player choices and actions.
In the realm of large language models (LLMs), Genie is a trailblazer as well. The model’s innovative borrowing of LLM-style techniques demonstrates their potential to unravel intricate patterns in data, combine them, and create something new. The large language models post by MongoDB explains how LLMs work in a similar way, predicting the next word in a sentence based on the context provided. Genie takes this a notch higher and applies it to gaming – it predicts not just words but also actions and transitions in its game environment.
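The “predict the next word” idea can be shown with a toy example. The snippet below builds a minimal bigram model – just counting which word follows which in a tiny corpus – and then picks the most frequent successor. This is our own illustrative stand-in, not how an LLM is actually implemented: real models learn billions of weights over vast corpora, but the prediction objective is the same in spirit.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus; an LLM trains on trillions of words instead.
corpus = "the player jumps the player runs the player jumps".split()

# Count how often each word follows each other word (a bigram model).
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1


def predict_next_word(word):
    """Return the word most frequently observed after `word`."""
    return bigram_counts[word].most_common(1)[0][0]
```

Genie’s twist on this objective is to predict the next *frame and action* instead of the next word, which is what turns a sequence model into a playable environment.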
Genie’s release comes on the heels of OpenAI’s Sora, a text-to-video model that translates text into “realistic and imaginative scenes.” Sora pairs a diffusion model with a transformer architecture, generating video by denoising it patch by patch. While the scenes look uncannily real, the DeepMind team pointed out that such outputs lack actions a player can take – hence the birth of Genie.
The implications of a text-to-video-games model are huge. Whether this technology, or AI progress in general, is related to the massive layoffs in the game development industry in recent years remains a speculative question. It is an undoubtedly contentious issue, and one we cannot ignore as we edge closer to an era where a substantial part of game development may happen by machine inference.
As gamers, we can’t help but be intrigued by the possibility of truly intelligent, dynamic games crafted by the likes of Genie. Yet it’s crucial that we view these advances with a healthy dose of caution – not just for what they mean for game development, but for AI’s broader impact on society. Since there’s no public release date for Genie yet, we’ll just have to wait and see how this all progresses.