My First Experiment with Figma Make

Introduction

In the summer of 2025, I started noticing a growing phenomenon: more and more designers were posting digital interfaces on LinkedIn that they had made using GenAI tools.

Some results seemed pretty nice, and, I must confess, this made me feel a hint of FOMO. I also experienced excitement and curiosity, asking myself, “What if I am missing out on something really important here?”

I decided to test this relatively new way of designing, and this essay is a record of how it went.


Tasks & Constraints

I gave myself the following tasks and constraints:

  1. Design a desktop version of a digital product that I myself would like to use;

  2. Design the first version by exclusively using a GenAI tool;

  3. Design the second one “by hand” in Figma, spending roughly the same amount of time (4 hours).

At the end, I would compare the results and even try to feed my design to the AI to see what it could do with it.


Results

Handmade Design (click to try it)

AI Design (click to try it)

How I got there

I started by brainstorming ideas. An early one was to design a Pomodoro Timer inspired by the Apple TV+ show Severance (yes, I am a huge fan), but I then opted to abandon this enticing idea… to avoid any potential copyright infringement!

The second idea that came to my mind was to design a Circadian Rhythm tracker.

I am a self-optimization aficionado, and this seemed like something cool to create.

I did a quick competitive audit, and confirmed that such tools are available online, which suggested that there is interest in them.

The next step was to diagram the user’s mental model, so that I could have a clearer idea of what to include in the design.

Finally, I was ready to research the best AI tool to use for this experiment. After evaluating several GenAI products (Replit, Lovable, etc.), I opted for Figma Make, since:

  1. It’s free to use;

  2. It can easily be connected to Figma files for further manipulation via prompting.

Excitedly, I crafted my first prompt and fed it to the AI.

I experienced something akin to magic, seeing the AI create the UI in a few minutes. It was one of the quickest ways I’ve seen an idea go From Zero to One.

While I noticed that some things didn’t work properly, and the design felt a bit barebones, I was amazed by the AI’s speed. I kept prompting with enthusiasm.

After several prompts, though, I must admit that I started feeling a bit frustrated.

Yes, I went From Zero to One blazingly fast, but it was quite challenging to go further and get to the design I had in mind.

I kept prompting Figma Make in order to change or fix something, and sometimes this would cause one of two things:

  1. The AI would fix something, but break something else;

  2. The AI would simply ignore my instructions.

Some examples of the AI not following my prompts:

  1. I asked for “bubbles” representing activities the user wants to plot on the graph. Figma Make repeatedly failed to generate them;

  2. Once it finally managed to generate the bubbles, it kept positioning them in the wrong place. I kept prompting, but the AI never managed to find a way to position them on the line chart;

  3. I asked for a “Download” button. The AI never managed to make it work;

  4. The AI kept having trouble keeping the line chart still. It would move when I hovered over the bubbles, and no prompt managed to fix that.

The funny thing was that the AI would lie with absolute confidence, and then apologize when I pointed out that a problem was still there, unresolved.

Nevertheless, I was having fun, and kept prompting the AI to improve the design. Here is the design of version 1 vs the final version:

Once I felt that I had pushed the AI for long enough, it was time to see what I would design “by hand” in Figma.

Figma Make had produced a pretty minimal visual design. For my handmade version I opted for a more visually complex one, with a halo effect on the line chart and glass design for the “activity bubbles”. I wanted to give a feeling of physicality to the UI, inviting users to interact with it.

In the AI version, users would first input data on one page and then see the graph.

In the handmade version, users can directly manipulate the graph by adding, removing, and dragging the bubbles representing the activities of the day. No need to input information on a previous page. It all lives in the graph.

By clicking the bubbles, users can read the details of an activity, see where it falls in the day, and learn the best activities to do before and after it.

The process of designing this version was more effortful, since I had to do everything by myself, yet I noticed that:

  1. I felt more focused, because I had to think and immerse myself in each moment of the design;

  2. I entered a “flow” state more often, since there were no breaks in which I had to wait for the AI to produce its output.

It was time to move on to the last part of the experiment: feeding my design to the AI. You can imagine the curiosity with which I approached this moment!

I was hoping to get the best of both worlds: a more polished design that I could manipulate via prompting.

Unfortunately, even after several prompts, the AI couldn’t reproduce my design.

Among other things:

  1. It kept having issues understanding that I needed the activity bubbles to be visible on the line chart;

  2. It kept rendering the line chart itself completely wrong.

Lessons Learned

As a result of this experiment, I have learned these lessons:

  1. GenAI is really fun. Being able to put together a working UI quickly and just through words feels like magic;

  2. This tech is great to quickly show a working prototype to stakeholders, but I believe it can’t help you build a real product (yet);

  3. It can be quite frustrating to iterate on the AI design to get to something you find worth sharing;

  4. The experience of designing “by hand” feels more focused. You don’t have to constantly interrupt your work, wait for the AI to produce its output, and then correct its occasional errors;

  5. Current GenAI tools tend to lie. They make mistakes and pretend to have addressed them without actually doing so;

  6. You can’t simply feed your designs to the AI and hope it will just “get them”;

  7. Prompting in a chat box isn’t a convenient interface for design work. Describing changes in words is cumbersome, while moving around the UI with a pointer is far more intuitive and effective.

Of course, all the above might tell much more about me than about GenAI. All my biases and habits definitely influence my experience. Your experience might be different.

Yet, I wanted to honestly share my thoughts with you.

Otherwise, what’s the point of writing an essay about my experience?

Next Steps

I want to:

  1. Experiment with other GenAI tools and ideas for UIs and digital products, to see whether the results of this first experiment repeat;

  2. Talk with other professionals about their experience with AI tools, so that I can see whether there are good use cases for GenAI in Product Design (and in business in general).
