Adobe Firefly Expands Image and Video Generation With Custom AI Models
Adobe is moving past the random luck phase of AI image generation by letting you teach its tools exactly how you want your work to look.
For a long time, using AI felt like a roll of the dice; you’d type a prompt and hope the computer guessed your vibe correctly. Adobe is changing that with the launch of Firefly Custom Models in public beta. Instead of relying on a general style, you can now train the AI using 10 to 30 of your own images.
Whether it’s a specific brush stroke you use in illustrations or the exact way you light your photography, the AI learns your fingerprint to make sure every new image it generates looks like it actually came from you.
“Generative AI began with an idea as simple as it was revolutionary: type a prompt, get an image,” the company wrote. “The creative process is now evolving into more fluid AI-powered workflows where you generate, refine and shape ideas into work that is uniquely yours.”
In other words, the company sees the workflow becoming more conversational, with the AI acting more like a partner than a slot machine.
Solving the identity crisis for brands
The biggest problem with AI for professional creators and big companies has been consistency.
It’s hard to build a brand if every image looks slightly different. These new custom models are designed to fix that by focusing on three areas: illustration styles, specific characters that need to look the same in every scene, and photographic looks.
“No matter who you are, it takes years of investment building a visual identity. Maintaining that across media, campaigns, formats and platforms takes intention. To grow a brand, you need a steady stream of assets that consistently express who you are. Those assets should be yours and yours alone,” Adobe wrote.
Privacy and the stolen art question
One of the biggest worries in the art world is whether AI is stealing work to learn. Adobe is trying to get ahead of this by making all custom models private by default. This means if you train a model on your art, Adobe won’t use that data to train its general Firefly system.
Deepa Subramaniam, Adobe’s Vice President of Product Marketing for Creative Professionals, told Digital Camera World: “Your data is yours.”
However, the responsibility still falls on the user to play fair. Adobe’s help page notes that users will be asked to confirm they have the rights to the images they upload, ensuring they don’t “infringe on the copyright, IP, likeness, or privacy rights of others.”
More models, quick cuts, and a chatty AI assistant
Custom Models wasn’t the only announcement.
Adobe also confirmed that Firefly now provides access to more than 30 third-party AI models in one place, including Google’s Veo 3.1, Runway’s Gen-4.5, and Kling’s 2.5 Turbo, alongside Adobe’s own Firefly Image Model 5, which is now generally available. Adobe is currently offering unlimited video and image generations across this model library.
On the video side, a new feature called Quick Cut promises to take raw footage and organize it into a structured first edit within minutes, a useful shortcut for content creators who spend hours on rough-cut assembly.
The company is also expanding access to Project Moonlight, a private beta for a conversational AI interface. Instead of typing a static prompt, users chat with an AI assistant that can take real actions across Adobe apps, helping to move from concept to completion in a more natural, back-and-forth conversation.
The post Adobe Firefly Expands Image and Video Generation With Custom AI Models appeared first on eWEEK.