Introducing Vision Tool - Bezi can see your game now

Cecilia Uhr

Chief Product Officer

Bezi has always understood your Unity project - your code, your hierarchy, your assets, your components. It could read everything about how your project was configured. But configuration and appearance are two different things.

The Vision Tool gives Bezi eyes inside your Unity Editor. It captures screenshots of your Scene view, Game view, and asset previews, so it can see what's actually rendering - not just what the data says should be there. It can spot visual bugs at runtime, iterate on UI and shaders, and verify each step of its own work without you needing to describe every detail.

It closes the feedback loop: Bezi checks its work visually, decides if it's right, and keeps going.

What the Vision Tool does

When Bezi needs visual context, it captures a screenshot of the relevant viewport and interprets it alongside your project's code, components, hierarchy, and assets. That combination - of structural and visual context - is what makes Vision powerful: from a series of screenshots it can parse a composition, identify necessary changes, iterate on solutions, and check the result.

In practice:

  • Scene view capture - Bezi can focus on specific GameObjects in your active scene, choose camera angles (front, back, top, sides), and adjust zoom to inspect any part of your scene - all without changing your scene camera. Works in both Edit and Play mode.

  • Game view capture - Bezi can snap pictures through the player camera while the game is running, so it can debug gameplay issues that arise in real time.

  • Asset previews - Bezi stages prefabs, meshes, and materials to inspect and iterate on them in isolation, useful for fine-tuning shader properties, comparing material variants, or verifying a prefab looks right before it goes into a scene.

  • UI rendering - Bezi captures renders of your UI to verify layout, sizing, and styling without entering Play mode. It can capture in both portrait and landscape orientations, so you can catch responsiveness issues across screen configurations.

  • Animation and time-based capture (experimental) - Bezi can capture how materials, particles, and animations behave over time rather than in a single frame, letting it verify and iterate on time-based visual elements. This is an early capability that we're still tuning, but worth trying - no other AI tool for Unity can do this today.

You activate the tool by prompting Bezi to "Look at" or "Take a screenshot of" a given asset or issue, or simply by asking a question that requires visual information to answer. It's cued for active tasks - running only when prompted, to provide the context that Bezi, and you, need.

Getting started

Here are three workflows to try as you get started with the Vision Tool:

Visual debugging

Your character can't move forward, but there's nothing visible blocking the path. The bug is in the scene layout - an invisible collider - not in the code.

You tell Bezi: "Help! My player is stuck and can't keep moving, but there's nothing in front of me!" Bezi captures the Scene view, cross-references it with serialized scene data, identifies an invisible wall collider as the culprit, and correctly rules out other nearby colliders. Diagnosed - and, with a single action, fixed.

A lot of Unity bugs are cross-layer. They live in the gap between what the data says and what the player actually sees. Vision lets Bezi work both sides of that gap at once.

UI iteration with visual feedback

UI work has always been a weak spot for code-only AI tools. Bezi could edit your uGUI prefab or UI Toolkit stylesheet, but it had no way to check whether the result actually looked right. You'd end up in a loop: prompt, screenshot, describe what's wrong, re-prompt, screenshot again.

Now Bezi closes that loop itself. You describe the issue, then Bezi uses Actions to update the hierarchy or stylesheet and captures a screenshot to verify the fix. If something's still off - a button overflowing its container, a panel that's the wrong size - Bezi spots it and handles it in a follow-up pass. You don't need to re-explain what "not quite right" looks like.

Material creation from concept art

You have 2D concept art and want a matching material on a 3D prefab. The workflow is: create a shader, tune parameters, compare against the reference, adjust, compare again. It's tedious to do manually and impossible for a code-only AI that can't see its own output.

With Vision, you attach the concept art and tell Bezi to match it. Bezi creates materials and shaders, applies them to the prefab, captures asset preview screenshots to compare against your reference, and refines across multiple passes. It can stage individual materials on preview objects outside your scene for fine-tuning - inspecting each one in isolation before committing.

Try it now

Bezi’s Vision Tool is available today for all users.

Ready to get started? Try Bezi for free below.

The sightless AI

Eyes, now open, to your game

Vision is unlocked

Haiku by Bezi