Updated November 18, 2025.
Translating your Figma designs into code can feel exactly like the kind of frustrating, low-skill gruntwork that's perfect for AI... except that most of us have also watched AI butcher hopeful screenshots into unresponsive spaghetti.
What if the AI could see the actual structure behind your design instead of guessing from flat images?
In this guide, we’ll walk through a full workflow for design-to-code with the Figma Dev Mode MCP server. You’ll learn how to set up the server, connect your AI tooling, and generate usable code that aligns with your design system while saving time on manual handoff.
The Figma MCP server is a local service that exposes the structured contents of your Figma file through the Model Context Protocol (MCP), an open standard that allows AI tools to communicate directly with software instead of interpreting screenshots or exported assets.
Once the server is running, an AI agent can request live design data from your selected layer. This includes hierarchy, layout rules, text styles, component properties, and image references. Because this information reflects the way your design is actually constructed, it forms a more reliable foundation for generating code.
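To make that concrete, here's a rough sketch of the kind of structured payload an MCP client might work with. The field names here are illustrative, not Figma's actual API shape; the point is that the agent gets a walkable tree rather than pixels:

```typescript
// Hypothetical shape of a selected layer's data (names are illustrative,
// not the real Figma Dev Mode MCP response format).
interface DesignNode {
  name: string;
  type: "FRAME" | "TEXT" | "INSTANCE";
  layout?: { mode: "row" | "column"; gap: number; padding: number };
  text?: string;
  children: DesignNode[];
}

const selection: DesignNode = {
  name: "ChatCard",
  type: "FRAME",
  layout: { mode: "column", gap: 8, padding: 16 },
  children: [
    { name: "Title", type: "TEXT", text: "New message", children: [] },
    { name: "SendButton", type: "INSTANCE", children: [] },
  ],
};

// Because the data is a tree, the agent can map each node onto a
// real component instead of guessing boundaries from a screenshot.
function flatten(node: DesignNode): string[] {
  return [node.name, ...node.children.flatMap(flatten)];
}

console.log(flatten(selection)); // ["ChatCard", "Title", "SendButton"]
```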
The Figma MCP server gives AI assistants a clear understanding of how a design is built.
Instead of guessing from rendered pixels, the agent receives real structural information from your file. This often produces cleaner code, better alignment with your existing components, and fewer manual corrections.
The MCP server is a central part of modern Figma design-to-code workflows because it gives AI tools a consistent and accurate view of your layout structure.
The Figma server allows an MCP client (like Cursor or Claude Code) to read the selected layer and access the details that matter for implementation.
These include the node tree, variant information, layout constraints, design tokens, and asset references, which screenshot-based tools cannot capture.
Because the data is structured and consistent, it supports workflows like design-to-code, automated documentation, and AI-assisted development inside IDEs that support MCP.
Okay, down to business. Feel free to follow along. We're going to:
- Grab a design
- Enable the Figma MCP server
- Get the MCP server running in Cursor (or your client of choice)
- Set up a quick target repo
- Walk through an example design-to-code flow
If you've already got some Figma designs handy, great! It's more rewarding to see your own designs come to life. Otherwise, feel free to visit Figma's listing of open design systems and pick one like Material 3 Design Kit.
I'll be using this screen from the Material 3 Design Kit for my test:
First, be aware that Figma’s official server only works on a paid plan that includes Dev Mode. If you’d rather use a free community server, I’ve used this one a bunch and had good luck with it.
Once you have a Figma plan, head over to preferences in any design file and check “Enable Dev Mode MCP Server.”
The server should now be running at http://127.0.0.1:3845/sse. Depending on your operating system and firewall settings, the port may differ.
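If your client doesn't offer one-click install (covered below for Cursor), most MCP clients also accept a manual entry in their MCP config file (in Cursor, that's `~/.cursor/mcp.json` or a per-project `.cursor/mcp.json`). A minimal sketch, assuming the SSE endpoint above; the exact key names vary by client:

```json
{
  "mcpServers": {
    "Figma": {
      "url": "http://127.0.0.1:3845/sse"
    }
  }
}
```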
Now, hop into the MCP client of your choice.
For this tutorial, I'll be using Cursor, but Claude Code, Windsurf, Zed, or any IDE tooling with MCP support is totally fine. (Here’s a breakdown of the differences.) My goal is visibility: the MCP server itself is little more than an API layer for AI, so it helps to watch exactly what's going on.
Cursor has an online directory from which you can install Figma’s MCP server.
Clicking “Add Figma to Cursor” will open up Cursor with an install dialog.
After clicking “Install,” if the server is working properly, you should see a green dot and the enabled tools in your Cursor Settings.
If you see a red dot, try toggling the server off and on again, making sure that you have the Figma Desktop app open in the background with the MCP server enabled.
Now, when you prompt Cursor in Agent mode, you'll see the AI make tool calls to the MCP server when you say things like, "Use the Figma MCP to..."
Next, we'll need somewhere to actually put the resulting code. When using this workflow, you're not always going to be starting from scratch; good design-to-code means implementing Figma designs in existing repos.
For our purposes today, I'll just spin up a Next.js starter template with `npx create-next-app@latest`. Feel free to try this on a branch of one of your existing repos to truly stress test the workflow.
Okay, we should be all set. Select the relevant layer in Figma (note that you can only select one) and then switch over to Cursor. The server can only see what you have selected.
In this case, I’ll prompt Cursor with: “Could you replace my homepage with this Figma component? It should be a chat app.”
In total, the AI wrote 215 lines of React and 350 lines of CSS. The component mostly looks like the design, though it’s not pixel perfect, it’s missing some of the data in the original, and none of the buttons work. The generation took around 4 minutes with Claude 4.
I can use a few more prompts to add functionality, clean up the code, and then go in there by hand to finesse some of the CSS, which AI never seems to get as clean as I like (too many magic numbers). But it definitely saves me time over setting this all up by hand.
In the demo above, I kept the instructions simple on purpose. In real projects, you can get much better results from the Figma MCP server by treating it like a junior engineer who needs context, not magic.
The MCP server makes the structure of the selected Figma layer available, but it has no knowledge of your codebase, naming conventions, or component usage patterns. That is the part developers can shape. The more guidance you provide inside your project, the more consistent the AI becomes.
Here are a few reliable ways to improve results without touching the Figma file itself:
- Create small (even AI-generated) markdown files the agent can reference. Think of these as onboarding documents. Add them to your repo and point the agent there each time you convert a design.
- Add component hints inside your codebase. Include a short README in each component folder that shows how it's intended to be used. Even 2–3 examples can anchor the model.
- Annotate tricky requirements directly in your prompt. If a design includes behavior that is hard to infer from the Figma node tree, such as "this card needs a loading state," call it out.
- Give the agent the right starting point. When using MCP, the agent can only see the selected Figma layer. So, be explicit about which file or component it should modify, remind it where to place new components, and link any related code it should reference.
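As an illustration of the "component hints" idea, a short README in a component folder might look like this (the format and names are entirely up to you; this is just a sketch):

```markdown
# Button

Use the shared `Button` component for all clickable actions.

- Variants: `primary`, `secondary`, `ghost`
- Example: `<Button variant="primary" onClick={send}>Send</Button>`
- Do not add one-off margins; wrap it in a layout component instead.
```

A file like this costs a few minutes to write, but it anchors the agent to your conventions every time it touches that folder.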
Ultimately, since the Figma MCP server is just a connection layer between the Figma API and Cursor's agent, better results depend on learning how to get the most out of Cursor. For that, we have a whole bunch of best practices and setup tips, if you're interested.
Remember, the goal isn’t to replace yourself; it’s just to be able to focus on the more interesting parts of the job.
The most powerful way to guide the MCP workflow is through Figma’s Code Connect feature.
It lets you map a design component in Figma to the real code component in your repository. Once mapped, those details flow through Dev Mode and into the MCP server responses, which tells your AI client exactly which component to use.
For developers, the CLI is usually the best place to start. It allows you to define component mappings, property mappings, and example usage right in your codebase. Designers can then view and rely on those mappings inside Figma’s Dev Mode UI.
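A mapping file created with the CLI might look roughly like this, assuming the React flavor of `@figma/code-connect` (the node URL, prop names, and variant values are placeholders for your own):

```tsx
import figma from "@figma/code-connect";
import { Button } from "./Button";

// Maps the Figma "Button" component to our real Button.
// Once published via the CLI, this mapping shows up in Dev Mode
// and flows through the MCP server's responses.
figma.connect(Button, "https://www.figma.com/design/<file>?node-id=<id>", {
  props: {
    label: figma.string("Label"),
    variant: figma.enum("Type", {
      Primary: "primary",
      Secondary: "secondary",
    }),
  },
  example: ({ label, variant }) => <Button variant={variant}>{label}</Button>,
});
```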
With Code Connect plus a little written context, the MCP server gets both the design structure and your project rules.
These tools make the workflow smoother, but they do not remove the bigger constraints of the MCP architecture itself.
The Figma MCP server enables a cool workflow. As a more senior developer, I find it genuinely fun to offload the boring work to configurable AI.
But when you move from experimentation to building production-ready software for a team, I still see a lot of fundamental limitations:
- Even with Code Connect providing instructions like, “use our `Button` component,” the AI’s process is still a one-way street. It has no visibility into the final, rendered output of its own code, unable to see if a global style from another file is overriding its work or if a component renders incorrectly on the page.
- Design systems are mapped, but not dynamically enforced. Although Code Connect provides component advice, the AI can still “creatively” generate one-off styles for margins or colors when it encounters something not explicitly in the map. It lacks the live, structural understanding of your codebase required to guarantee consistency.
- The entire process—from mapping components in Code Connect to configuring the IDE and merging code—is highly technical. This not only excludes designers and PMs from the process, but it also means the developer bears the full burden of setup, maintenance, and visually QA'ing every AI output to catch the inevitable discrepancies.
These limitations create a workflow that, while definitely an improvement over screenshots, still feels like a fragile translation layer between Figma and your codebase. It requires constant developer oversight, struggles with bespoke systems, and doesn't solve for team-wide collaboration.
This is the gap that a more integrated visual development platform is designed to fill. Let’s take a look.
The solution to these limitations is to give the AI eyes to see the rendered output, and to strategically enforce your components and design system.
This is the gap that Builder.io’s Fusion is designed to fill. It moves beyond static mapping to create a live, visually-aware development environment where the AI edits your existing repository by observing your application.
Instead of working from disconnected instructions, Fusion's AI agent operates on a live, interactive preview of your application. This is made possible by instrumenting your code to create a precise, two-way map between the rendered UI and the source files.
Under the hood, every element in the visual canvas is enriched with attributes that point to its exact origin in the codebase—the specific file and line of code that generated it. For example:
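A rendered element might carry attributes along these lines (the attribute names here are illustrative, not Fusion's actual internal scheme):

```html
<button data-source-file="src/components/ChatInput.tsx" data-source-line="42">
  Send
</button>
```

With that mapping in place, clicking an element in the canvas resolves directly to the code that produced it, and edits can flow back the other way.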