AI will never make your game. You will.
Cecilia Uhr
CPO

This week at GDC, the conversation about AI and game development is louder than it has been in years. A growing number of AI companies and investors are betting on a future where you prompt a game, or major parts of one, into existence, where the craft of building games is gradually replaced by the act of describing them. We think that fundamentally misses the point of game development.
Bezi exists for developers who already know what they are making, and who want to do more of it, faster. People who have opinions about their render pipeline, who care about code architecture, who know what their game should feel like before a single line of code gets written. AI should expand what those developers can do without trying to replace the judgment and craft that makes games worth playing.
Everything we have shipped over the past quarter, and everything we are building next, follows from that conviction. We’re here to show where we are heading.
What we believe
We believe AI's greatest use in game development is to remove the friction and tediousness between a developer's vision and their ability to realize it, not to generate games on their behalf.
The best games come from the teams with the most creative bandwidth to explore, iterate, and get the details right. Our job is to compress the non-creative and non-stimulating work into solved problems so that developers can focus on the parts of their game that only they can build.
Game developers are craftspeople, and they are right to be skeptical of anything that claims to replace their skills. We share that skepticism. Bezi exists to handle the tedious work that stands in the way, redirecting effort toward what matters most: designing mechanics, crafting narrative, building worlds, and creating the experiences that players will remember.
The foundation: understanding your project
When we were at GDC last year, Bezi was a knowledge tool (and we hadn’t even publicly launched yet). Its core capability was deep, contextual understanding of your Unity project: your codebase, your scenes, your assets, your documentation. You could ask it questions and get answers grounded in your actual game, not generic advice pulled from a forum post.
That foundation was the prerequisite for everything that came after. The reason Bezi can now act inside your editor, plan complex tasks, and verify its own work visually is because it first learned to understand the project it is operating in.
What's changed: from understanding to action
Over the past quarter, we shipped Actions, Plan Mode, and Model Selection. Together, they moved Bezi from understanding your project to acting inside it. Actions lets Bezi create GameObjects, edit prefabs, configure components, and generate materials directly in the Unity editor.
Because it builds on your existing project context, it extends your patterns rather than generating from scratch: set up one prefab example and Bezi produces fifty more following the same architecture.
Plan Mode and Model Selection expanded the complexity Bezi could handle, letting you structure your intent before execution and match the right model to each task.
What's coming: expanding Bezi's reach
Next, we’re expanding Bezi in three directions: helping it see your game visually, connect with the tools you use outside Unity, and share context across your team:
Vision lets Bezi see your game. Today, it works from code and serialized data without understanding what a scene actually looks like. Vision will allow Bezi to capture the Game view, Scene view, and asset previews to verify its work visually—whether that means diagnosing an invisible collider, checking a UI layout, iterating on a material against concept art, or confirming camera framing in a cutscene.
Connectors and MCPs let Bezi work beyond Unity. Game development context lives across many tools, but today Bezi only sees what’s inside your project. Connectors and MCPs will let it pull in outside context, like Jira tickets, Figma mocks, server-side API definitions, and art tools such as Substance or Houdini.
Team Workspaces let Bezi work across teams. Project knowledge is often fragmented across people and projects. Workspaces will bring shared pages, multi-project connectivity, and team-wide context into one place, so teams can reference related projects and keep shared documentation centralized and enforceable by Bezi.
Where this is heading
Everything described above is building toward something larger.
Game development is full of multi-step processes that are well-understood but tedious: wiring up input systems across platforms, cleaning up animator state machines, configuring render pipeline materials, etc. Developers repeat them over and over.
We are building toward a future where Bezi lets developers compress those messy, multi-step processes into purpose-built tools and applications with deterministic outputs, and then share those tools for others to discover and use. Something closer to an app than a prompt: you open it, it solves a specific problem reliably, and you move on. For example, some of these tools would specialize in:
Asset import and cleanup: takes a batch of raw FBX files and handles material remapping, scale correction, and animation reference preservation, producing clean, properly configured assets in your project instead of an hour of manual inspection per import cycle.
Networking layer setup: takes a high-level description of your multiplayer requirements and configures the networking layer with proper spawning logic, connection management, and synchronization, giving you a working foundation instead of a week of boilerplate.
Localization pipelines: scans your project for localizable content, generates string tables, and wires up UI elements to the localization system, compressing one of the most universally tedious parts of shipping a game into a single step.
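To make the localization example concrete, here is a minimal sketch of what the first step of such a pipeline could look like: scanning a Unity project's serialized prefab and scene files for UI text fields and emitting a CSV string table. This is an illustration, not Bezi's implementation; the `m_Text` pattern and key-naming scheme are simplifying assumptions (a real scan would also cover TextMeshPro assets, ScriptableObjects, and hard-coded strings in scripts).

```python
import csv
import io
import re
from pathlib import Path

# Matches serialized UI text fields in Unity's YAML, e.g.
#   m_Text: "Start Game"
# A simplification for illustration: real scans would need to handle
# multi-line strings and other serialized text-bearing components.
TEXT_FIELD = re.compile(r'm_Text:\s*"?(.*?)"?\s*$', re.MULTILINE)


def scan_localizable_strings(root: Path) -> dict[str, str]:
    """Walk prefabs/scenes under `root` and collect candidate strings,
    keyed by a stable, human-readable identifier."""
    table: dict[str, str] = {}
    for path in sorted(root.rglob("*")):
        if path.suffix not in (".prefab", ".unity"):
            continue
        for match in TEXT_FIELD.finditer(path.read_text(encoding="utf-8")):
            text = match.group(1)
            if not text:
                continue
            # Derive a key like "MainMenu/START_GAME" from the file and text.
            slug = re.sub(r"[^A-Za-z0-9]+", "_", text).strip("_").upper()
            table[f"{path.stem}/{slug}"] = text
    return table


def write_string_table(table: dict[str, str]) -> str:
    """Render the collected strings as a CSV string table
    (Key column plus a source-language column)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Key", "en"])
    for key, text in sorted(table.items()):
        writer.writerow([key, text])
    return buf.getvalue()
```

The remaining steps, generating engine-native string table assets and rewiring UI components to reference keys instead of literals, are exactly the kind of well-understood but tedious work these tools would absorb.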
Each tool or application encodes deep domain knowledge, handles common failure modes, and produces reliable results.
And the value compounds: tools a team builds encode institutional knowledge that becomes reusable, and tools shared to the broader ecosystem save the next developer from solving the same problem from scratch.
The analogy we like to use is the smartphone. When smartphones launched, their underlying capabilities were vast, but they only became transformative when compressed into specific, reliable applications.
Uber packaged GPS, payments, mapping, and real-time communication into one deterministic experience: open it, get a car. Spotify took the same phone and made it a jukebox. The enabling technologies disappeared into the solution, and the same will happen with AI.
Before game engines, building a game meant writing everything from scratch. Engines like Unity modularized game development into components, visual editors, and scripting layers. But the engine paradigm is decades old. Enormous portions of game development still require tedious, repetitive work inside the engine. The engine gave developers the environment, but it also created new categories of tedious work that live inside it. That is the next layer we are building toward.
While Bezi is built for Unity today, this vision is not Unity-specific. The problems we are working to solve are universal to game development. As we expand to Unreal Engine and beyond, the ecosystem will grow with us.
Try Bezi
We will be at GDC all week. Come find us and see everything above in person. If you are not at GDC, join the Discord.
