In a bold leap forward for artificial intelligence and creative tools, Google DeepMind has unveiled Project Genie, an experimental prototype that lets users create, explore, and remix interactive AI-generated worlds using simple text prompts or images.
Project Genie introduces a new class of "world models": AI systems capable of generating navigable environments in real time, rather than static images or videos. Built on Genie 3, DeepMind's latest world model, the prototype synthesizes immersive 3D landscapes that respond dynamically as users move through them.
Currently accessible to Google AI Ultra subscribers in the U.S. (18+), Project Genie runs in a browser and combines text and image prompts with visual generation powered by Nano Banana Pro and Google's Gemini technologies. Through a process called World Sketching, users describe an environment or upload a photo to generate a living world they can walk, fly, or drive through in first- or third-person views.
Beyond creation, the tool enables World Exploration (navigating the generated environment) and World Remixing, which lets existing worlds be modified or expanded. Users can even download videos of their journeys, turning AI-generated spaces into shareable stories.
Though still experimental, with limitations such as short session lengths and imperfect realism, Project Genie represents a new frontier where imagination and AI converge, transforming how we design, tell stories, and interact with virtual worlds.