Adobe’s new Firefly AI Assistant could forever change the way you use its apps
Adobe is rolling out the public beta for its Firefly AI Assistant later this month, turning complex creative workflows into a simple chat interface across applications like Photoshop, Premiere, Illustrator, and Lightroom. You type what you want, and the AI connects the dots behind the scenes to make it happen. Because the interface is multi-modal, it can also go beyond the text prompt, surfacing context-aware control panels for precise fine-tuning when needed. It’s a first step toward what creative apps may become, stripping away the complexity of user interfaces while keeping powerful control.
If the final product works like the demo, the new Firefly AI Assistant will fundamentally change how people interact with design software, handing the keys to the walled castle of professional creative tools to anyone willing to pay, write in plain English, and move the sliders that appear contextually to fine-tune their creations when needed. Instead of forcing newcomers to memorize a labyrinth of menus, nested palettes, and pop-up windows, the assistant lets them achieve complex results just by asking.
At the same time, the new assistant is the first stepping stone toward a new type of automation for professionals. It gives veterans a fast track to bypass the tedious grunt work they already know how to do. “We have the full spectrum covered from people coming new to our franchise and they don’t know the full power of Photoshop and they want to achieve some amazing edits they can also tap into it and just talk to the assistant,” Adobe vice president of AI and innovation Alexandru Costin told me in an interview. “On the other side of the spectrum, the creative professionals that fully understand our tools can actually take those assets and continue editing them in our tools.”
This tool evolved from Project Moonlight, which Adobe teased at last year’s MAX conference and tested in a private beta. The core idea came directly from working professionals who were looking for a modern upgrade to Photoshop Actions, a decades-old feature that lets users record and replay sequences of edits. Actions only works for fixed, repeatable chores, like adjusting a thousand images’ hue and saturation with the same recorded values. But users wanted a smarter automation agent, one that could adapt to what it sees in each image, video, or illustration. Adobe decided to build something that goes beyond basic editing, adjusting media according to the context and content of each image or video, and even generating new images, mockups, and candidates for final art.
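The distinction between the old Actions model and the new adaptive approach can be illustrated with a minimal sketch. This is not Adobe’s API or implementation — all function names and image fields here are hypothetical, just to contrast replaying fixed recorded values with computing an edit from each image’s actual content.

```python
# Illustrative sketch -- NOT Adobe's API. Contrasts a fixed
# Actions-style macro with a context-aware adaptive edit.

def fixed_action(image):
    """Replays the same recorded values on every image, blindly."""
    image["hue"] = (image["hue"] + 10) % 360  # value recorded once
    image["saturation"] = 0.8                 # applied identically everywhere
    return image

def adaptive_action(image):
    """Inspects each image and adapts the edit to its content."""
    # Brighten only underexposed images, and only as much as needed.
    if image["brightness"] < 0.4:
        image["brightness"] = 0.5
    # Boost saturation proportionally, capped at the valid maximum.
    image["saturation"] = min(1.0, image["saturation"] * 1.1)
    return image

batch = [
    {"hue": 200, "saturation": 0.6, "brightness": 0.3},  # dark photo
    {"hue": 40,  "saturation": 0.9, "brightness": 0.7},  # well-exposed photo
]
edited = [adaptive_action(dict(img)) for img in batch]
```

The fixed macro would treat both photos identically; the adaptive version brightens only the underexposed one, which is the kind of per-asset judgment the article describes.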
The new, smart ‘Photoshop Actions’
“At Adobe MAX actually I was meeting a large group of professionals to ask them about agentic,” Costin tells me, using the industry term for an AI that actively executes multi-step tasks across different software on your behalf. “They said ‘look I would love you guys to give me a button like Photoshop actions where I can record in an agent what I’m doing and then have the agent be able to replay that for me so I can basically decide… that this automation is doing the things my way.'”
While the new Firefly AI Assistant still has two limitations that keep it from being a direct Actions replacement—more on this later—it certainly has the potential to become a huge time saver for any professional willing to work with a crew of AI bots.
To make that kind of automation happen, Adobe built what it calls Creative Skills. The AI learns how a specific creator likes to work over time, picking up on their favorite tools and visual style, and can apply that knowledge when handling their files. “You can actually describe your particular taste or approach as a creative professional and then be able to ‘replay’ that using the Firefly AI system, so you save time and you can automate some portion of your work so you have more time for creativity,” Costin says.
The current beta provides a standard set of default skills to start, though making those skills fully editable and shareable is coming in a subsequent version (one of the limitations that set it apart from Photoshop Actions, which are fully shareable).
Instead of relying on rigid templates, the system acts like an autonomous digital art director. It actively evaluates the raw materials on your canvas to figure out the right context before making a move, rather than just executing blind commands based on file metadata. The software doesn’t just hijack your cursor either; it checks in with you constantly to clarify what you actually want to achieve, ensuring you remain the driver of the final artwork.
This integration goes beyond Adobe’s own engines, extending the platform to leading third-party AI models, including Anthropic’s Claude. It also hooks into review platforms like Frame.io. If a client leaves notes on a project, the agent can digest that feedback and execute the revisions on its own, selecting whatever software handles the job best.
The path toward multi-modal, adaptive interfaces
The assistant also introduces Precision Flow, replacing the tedious chore of painting pixel-perfect masks with semantic editing. Instead of dealing with raw pixels, the AI recognizes the actual physical objects inside the frame—knowing a coffee cup is a cup and the beans next to it are beans. “Precision flow gives you the opportunity to use generative AI to edit an asset not on the pixel level but on the semantic side,” Costin says. “Like in this case it knows that this is a photo of some coffee with coffee beans so it creates these dynamic semantic sliders that enable you then to change your image semantically without having to re-prompt.”
This semantic ability automatically generates new control panels for granular adjustments. In the case of the coffee cup with beans splashing over the liquid, users see a panel with sliders like “Coffee beans” and “Splash” that they can move to precisely increase or decrease the amount of beans or splash. This is the right step for future apps: natural language is a great starting point, but language is open to interpretation by nature, which leads to imprecision and misunderstandings. There is no ambiguity in a slider that increases the amount of coffee beans in the image, interactively, in real time.
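The idea of a dynamically generated semantic panel can be sketched in a few lines. This is a hypothetical illustration, not Precision Flow itself — the detector is a stand-in for a real vision model, and every class and concept name here is invented for the example.

```python
# Illustrative sketch -- NOT Adobe's Precision Flow. Shows the concept of
# exposing detected objects as sliders instead of forcing a re-prompt.

def detect_semantics(description):
    """Stand-in for a vision model: returns adjustable concepts it found."""
    concepts = []
    if "coffee beans" in description:
        concepts.append("coffee_beans")
    if "splash" in description:
        concepts.append("splash")
    return concepts

class SemanticPanel:
    """A dynamically built panel: one slider per detected concept."""

    def __init__(self, concepts):
        # Each slider defaults to 1.0, meaning "as originally generated".
        self.sliders = {name: 1.0 for name in concepts}

    def set(self, name, value):
        if name not in self.sliders:
            raise KeyError(f"no semantic slider named {name!r}")
        self.sliders[name] = value

    def render_params(self):
        """Parameters a generator would consume to re-render the image."""
        return dict(self.sliders)

panel = SemanticPanel(detect_semantics("coffee with coffee beans and a splash"))
panel.set("coffee_beans", 1.5)  # more beans, without writing a new prompt
panel.set("splash", 0.5)        # smaller splash
```

The key design point mirrors the article: the adjustable controls are derived from the image’s content, so each picture gets its own panel, and moving a slider changes one semantic quantity without any ambiguity.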
Because the assistant is hooked directly into the guts of the Creative Cloud, standard adjustments like hue and saturation get passed directly into the host application. The result isn’t a dead image file; the assistant generates a native PSD document, Adobe’s standard file format, which stacks individual edits on separate, adjustable layers.
“This is actually a Photoshop control where once you do these edits behind the scenes we can actually create the PSD and that PSD is loaded in Photoshop,” Costin says. “Using an actual Photoshop feature you will have that full adjustment layer applied in Photoshop.”
However, the current beta still hits a hard wall when it comes to those advanced semantic edits. While any regular control panel—like Hue & Saturation—will generate an editable layer in Photoshop, the changes you make with Precision Flow in the assistant do not yet cross over into Photoshop as live, manipulable sliders. “We haven’t yet integrated precision flow into Photoshop. You can imagine this is coming to Photoshop too, but I don’t think we’re announcing it yet,” Costin says. “Right now if you do these edits the image that will be passed to Photoshop will be a raster image.”
That means the semantic tweaks arrive as a flattened, uneditable layer of pixels. For absolute beginners, this technical hurdle won’t matter much as long as the final image looks good. For the high-end professionals Adobe is trying to court, I suspect it will feel more like a temporary inconvenience: nice to have, but easy enough to work without.
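The layered-versus-flattened distinction at the heart of this limitation can be modeled in a toy sketch. This is not the PSD format or Photoshop’s engine — the classes and the hue-shift math are invented purely to show why a flattened raster loses the editability that adjustment layers preserve.

```python
# Illustrative sketch -- NOT the PSD format. Models the difference between
# a stack of editable adjustment layers and a flattened raster result.

class AdjustmentLayer:
    """A named, non-destructive edit whose parameters stay editable."""

    def __init__(self, name, params):
        self.name = name
        self.params = params

class LayeredDocument:
    """Non-destructive: base pixels plus a stack of adjustment layers."""

    def __init__(self, base_pixels):
        self.base = base_pixels
        self.layers = []

    def add(self, layer):
        self.layers.append(layer)

    def flatten(self):
        """Bakes all layers into raw pixels; the edits are no longer recoverable."""
        pixels = list(self.base)  # the base is left untouched
        for layer in self.layers:
            shift = layer.params.get("hue_shift", 0)
            pixels = [(p + shift) % 360 for p in pixels]
        return pixels  # a plain raster: all layer information is gone

doc = LayeredDocument([100, 200, 300])
doc.add(AdjustmentLayer("Hue/Saturation", {"hue_shift": 30}))
doc.layers[0].params["hue_shift"] = 15  # tweaked later: still editable
raster = doc.flatten()
```

A regular assistant edit is like `LayeredDocument` — you can reopen it and change the parameters — while a Precision Flow edit currently hands Photoshop only the equivalent of `raster`.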
The Firefly AI Assistant may well be a massive leap forward in making software work for the user rather than the other way around: you direct an AI to do what you want, it learns your work style, and it surfaces precise controls only when needed.