Google has begun rolling out access to Project Genie, an experimental research prototype that allows users to create and explore interactive worlds using artificial intelligence. Currently available to Google AI Ultra subscribers in the U.S., Project Genie represents a significant step towards democratizing world-building and interactive experience creation.
The core of Project Genie is its ability to translate text and image prompts into fully realized, navigable environments. Built on Google’s Genie 3 world model, the prototype allows users to “sketch” worlds using natural language descriptions or visual references. Unlike previous attempts at generative 3D environments, Genie 3 focuses on creating interactive worlds, meaning the environments respond in real time to user actions. This is achieved by the model generating new views and interactions on the fly as a user moves through the created space.
The process is remarkably straightforward. Users access Project Genie through labs.google.com/fx/projectgenie and can begin by either describing their desired world in text fields or opting for a randomly generated “surprise world” by clicking “Roll the dice.” Users can then refine the initial sketch by adding, changing, or removing elements. Once satisfied, clicking “Next” initiates the world generation process, which takes a few moments to complete. Notably, because Project Genie is an early access research prototype, it does not currently draw from a user’s AI credit allocation.
Once generated, users navigate the world using standard WASD keyboard controls for movement, the spacebar for jumping or ascending, and arrow keys to adjust the camera orientation. Exploration is time-limited to 60 seconds, indicated by a progress bar at the top of the screen. This constraint is a direct result of the prototype’s research-focused nature, allowing Google to gather data on user interaction and model performance within a controlled timeframe.
Project Genie builds upon Google’s earlier preview of Genie 3 in August 2025, which demonstrated the model’s potential for generating dynamic environments. The current prototype integrates Genie 3 with Nano Banana Pro and Gemini, leveraging the strengths of each model to create a more robust and responsive experience. The development represents a shift from static 3D scene generation to truly interactive world simulation.
While the technology is promising, Google acknowledges that Project Genie is still in its early stages. Limitations currently exist in areas such as world realism and character control. However, the company emphasizes that the prototype is continually improving, and it aims to expand access and refine the world-building technology over time. The current focus is on the three core capabilities of world sketching, exploration, and remixing, with future iterations likely to address the existing limitations and introduce new features.
The implications of a technology like Project Genie extend beyond simple entertainment. The ability to rapidly prototype and explore virtual environments has potential applications in fields such as architecture, urban planning, game development, and education. Imagine architects quickly visualizing and iterating on building designs, or educators creating immersive learning experiences tailored to specific student needs. The ease of creation could also empower individuals with limited technical skills to bring their imaginative worlds to life.
However, the development also raises questions about the potential for misuse. The ability to generate realistic environments could be exploited for malicious purposes, such as creating convincing simulations for disinformation campaigns. As with any powerful generative AI technology, responsible development and deployment will be crucial to mitigating these risks.
Project Genie’s availability is currently restricted to Google AI Ultra subscribers aged 18 and over residing in the United States. This limited rollout allows Google to carefully monitor performance, gather user feedback, and refine the technology before broader public release. The company has not yet announced a specific timeline for expanding access, but the ongoing research and development efforts suggest a commitment to making this technology more widely available in the future. The prototype serves as a compelling demonstration of the potential of AI to transform how we create, explore, and interact with digital worlds.
