
Imagine with Meta AI


In the run-up to Meta Connect in September 2023, the GenAI team saw an opportunity to productize its image generation model by including "imagine" as a capability of the AI assistant we'd be announcing. I jumped on board as the sole content designer to name the product, sketch out its information architecture, and ensure content consistency across Messenger, WhatsApp, and Instagram, all in a matter of weeks.

At launch, the only way to trigger imagine was in a 1:1 chat with Meta AI, but we quickly followed with a dedicated affordance accessible from any chat with friends. Tapping it brings up example prompts above a freeform input field; I worked with our backend content team, engineering, product, and design to ensure the original examples best showcased the possibilities.


As we've learned from research and from how people use the product, we've intentionally added features that give people more creative license, like regeneration, editing, and animation. There's also a strong feedback loop that lets people share why a given image isn't to their liking. The tone here is intentionally neutral, giving people a sense of control and letting them know how their feedback is used to improve the AI.

A particularly engaging evolution of the product has been the ability to imagine yourself as any character or in any scene you can come up with. All you have to do is take a selfie and ask Meta AI using the trigger phrase "imagine me" followed by your specific prompt.


As with AI Studio on web, the desktop version of imagine allows for more expansive editing and controls. Each prompt returns four versions of the requested image, and selecting one opens a fullscreen editing mode.

