{"id":27190,"date":"2026-03-22T01:15:54","date_gmt":"2026-03-22T01:15:54","guid":{"rendered":"https:\/\/1cliqueconsultancy.com\/?p=27190"},"modified":"2026-03-23T21:41:33","modified_gmt":"2026-03-23T21:41:33","slug":"adobe-generative-ai-7","status":"publish","type":"post","link":"https:\/\/1cliqueconsultancy.com\/index.php\/2026\/03\/22\/adobe-generative-ai-7\/","title":{"rendered":"adobe generative ai 7"},"content":{"rendered":"
Adobe Unveils AI-Powered Tools at MAX 2024 Conference <\/p>\n
\u201cContributors are compensated when their Stock image is used as a reference point, once the edited image resulting from the generated output is downloaded,\u201d Adobe tells PetaPixel. Artificial intelligence is playing an increasing role across many genres of photography, but few feel the impact of AI more acutely than stock photographers. Whether generating images outright or editing them with AI, Adobe has, unsurprisingly, fully embraced its Firefly AI technology within Adobe Stock. With designers clamoring for a faster creative process, Adobe seems to have heard the collective sighs and is rolling out new generative AI features for Illustrator and Photoshop. “We\u2019re standing on the threshold of a transformative moment in generative AI,” he tells me, revealing we're about to see a “shift from the prompt-based era to a controls era”.<\/p>\n<\/p>\n
The concern for creatives is seeing their work potentially lumped in with those tasks. But you have to trust that the company isn’t “taking stuff from other people and reappropriating it,” said Acevedo. “I think that people will see AI as a good starting point, but then as things look all the same over and over again, I think that people would be very fatigued with how it looks,” said Natalie Andrewson, an illustrator and printmaker.<\/p>\n<\/p>\n
But the newest AI tool for Adobe Photoshop allows editors to remove distractions in one click. Called automatic image distraction removal, the tool uses AI not just to remove distractions but to find them in the first place. At Sundance 2025 in Utah, the creative tech giant announced a new AI-powered Media Intelligence tool that automatically analyses visuals across thousands of clips in seconds. Available in Premiere Pro in beta, it can identify the contents of each clip to make them searchable by text, potentially saving video editors many hours of scrubbing through footage. We truly believe that [generative AI] can revolutionize our marketing content supply chain. To do so, we\u2019ll need to focus not only on the technology platform but also on the people and process components.<\/p>\n<\/p>\n
Since 2001, he has been editor-in-chief of TV Tech, the leading source of news and information on broadcast and related media technology, and is a frequent contributor and moderator at the brand\u2019s Tech Leadership events. For those who have followed the evolution of Adobe Firefly tools like Generative Fill in Photoshop, this really shouldn\u2019t come as a huge surprise. However, seeing it in person is still quite impressive\u2014much in the way the very first generative AI images from Generative Fill were for image editors. Kicking off its annual Adobe MAX conference in Miami, Florida this year, Adobe announced that its Firefly video model is finally ready for public release and is available to try out today. Before designers can edit a section of an image, they have to select it in the Photoshop interface.<\/p>\n<\/p>\n
“Some [AI] things are game changers, but I understand that with generative AI, it’s controversial. There are other companies that are being a little suspicious as to how they’re pulling stuff.” But professional creators now face a difficult choice about what role — if any — AI should play in their work. Adobe Firefly is the technology powering the new generative AI innovations in both Photoshop and Illustrator.<\/p>\n<\/p>\n
For photographers, the new pixels are nearly always meant to jibe with the background, making it look like the distraction was never there in the first place. If the pixels are too smooth, too noisy, or the wrong color, one distraction has simply been replaced with another. Adobe is aware of the issue and explains that, unlike non-AI tools, features powered by technology like Firefly, which is constantly fine-tuned behind the scenes, do not improve continuously in every possible situation. While a one-step-back, two-steps-forward situation is foreign to most photo editing applications, reality has changed in the age of AI.<\/p>\n<\/p>\n
But many artists still have serious concerns about how generative AI is trained and used, and how its enormous impact on the creative industry is shaping it now and for years to come. Generative AI is one of the most controversial topics in the industry, and for years now professional creators have been pointing out all the reasons AI cannot meaningfully replace them. Even with Adobe’s carefully crafted caveat that AI isn’t here to replace creators, the company is diving into the deep end with a plan to integrate AI across all its products. In the future Adobe is imagining, AI won’t be a dirty word; it’ll be the newest tool in professionals’ arsenals. It’s an idealistic future, to be sure, but it’s one Adobe is committed to bringing to life, even if it’s a steep uphill climb. During my time at Adobe MAX, the company’s annual creative conference, last month, the message came up in every interview, on the show floor, during demos, and literally within the first 10 minutes of the two keynotes.<\/p>\n<\/p>\n
The update with the latest Firefly Vector model is now available in public beta, and as Adobe continues to push the boundaries of what’s possible with AI in design, we can expect even more innovative features and updates. The update also brings a new Dimension tool to Illustrator that automatically adds sizing information to your projects, and a Mockup feature that helps you visualize your designs on real-life objects. Retype is another nifty tool that converts static text in images into editable text.<\/p>\n<\/p>\n
However, the \u201cGenerative Extend\u201d beta is not full-on generative AI but rather a feature that allows creators to extend clips to cover gaps in footage, smooth out transitions, or hold onto shots longer for perfectly timed edits. As one disgruntled photo editor adds, there is no simple way to roll back to an older version of the Firefly tools. Images are processed server-side, so there is not much available by way of user control.<\/p>\n<\/p>\n
The Firefly Video Model also incorporates the ability to eliminate unwanted elements from footage, akin to Photoshop’s content-aware fill. Adobe says its generative AI technology edits each frame and maintains consistency throughout the timeline, turning a typically slow, manual process into a faster, automated one. In September, Adobe previewed its text-to-video (similar to OpenAI’s Sora and Meta’s Movie Gen) and image-to-video features.<\/p>\n<\/p>\n
\u201cAfter the plan-specific number of generative credits is reached, you can keep taking generative AI actions to create vector graphics or standard-resolution images, but your use of those generative AI features may be slower,\u201d Adobe says. The company recently previewed the upcoming offering, which will include such features as text-to-video, being able to remove objects from scenes, and smoothing jump-cut transitions. Stager\u2019s Generative Background feature helps designers explore backgrounds for staging 3D models, using text descriptions to generate images.<\/p>\n<\/p>\n
\u201cOur goal is to empower all creative professionals to realize their creative visions,\u201d said Deepa Subramaniam, Adobe Creative Cloud\u2019s vice president of product marketing. The company remains committed to using generative AI to support and enhance creative expression rather than replace it. Adobe continues to expand its AI capabilities, with recent hires for generative AI research roles in India. Despite some backlash from creative professionals concerned about job automation, Adobe emphasizes that its AI tools aim to amplify human creativity. The company has also responded to ethical concerns, such as removing AI imitations following a complaint from the Ansel Adams estate.<\/p>\n<\/p>\n
Further, like every other Adobe Stock asset, anything created or changed using AI is designed to be commercially safe and backed by IP indemnification (for eligible customers). With Generate Variations, Stock customers can customize existing content to fit stylistic and compositional preferences with Firefly. For example, if someone likes the content of an image but it doesn\u2019t fit the style of the rest of a brand\u2019s identity or marketing campaign, they can use AI to apply a new style or aesthetic to the image. Generative Edits rely heavily on existing assets, even if they include AI-generated pixels. Generate Variations takes the AI further, creating an all-new asset based on an existing reference image. Sometimes an image on Stock is nearly perfect, but it\u2019s not the right size or aspect ratio for a particular application.<\/p>\n<\/p>\n
The beta was released today alongside Photoshop 25.7, the new stable version of the software. In discussing the feature, Shantanu Narayen, Chair & CEO of Adobe, described the Adobe Experience Platform as \u201ccritical\u201d to supporting the \u201cheterogeneous environment\u201d in which their customers reside. They can be edited to your liking, but it uses intelligence to apply animations to specific element types. No matter the path forward, Fong emphasizes the importance of remembering where AI-generated content comes from.<\/p>\n<\/p>\n
Additional improvements include expanded tethering support for select Sony Alpha mirrorless cameras, like the Sony a7 IV and a7R V, providing access to and control of a connected camera. When users invoke Generative Remove, Lightroom offers three potential variants, each with a slightly different spin on AI-powered object removal. In a pre-launch demo, PetaPixel asked Adobe to go off-script and remove different objects in various photos, and Generative Remove didn\u2019t skip a beat. It lets users remove unwanted objects from any photo entirely non-destructively with, essentially, a single click. Well, the tool requires one click to activate, but users must then paint a general shape over the object(s) they want to remove.<\/p>\n<\/p>\n
It projected digital media segment revenue of between $4.09 billion and $4.12 billion and digital experience segment revenue of between $1.36 billion and $1.38 billion. Lens Blur uses artificial intelligence to create a three-dimensional depth map of a two-dimensional image. If an image file already has depth map data attached, like a Portrait Mode shot from an iPhone, Lens Blur can use that instead. However, the amount of manual control photographers have over the depth map and visualization depends on the platform.<\/p>\n<\/p>\n
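The depth-map idea is easy to see in miniature: once every pixel has a depth value, a lens-blur effect is essentially a per-pixel blend between the sharp image and a blurred copy, weighted by each pixel's distance from the focal plane. Below is a minimal sketch of that principle, not Adobe's implementation; the function names and the simple box blur are invented for illustration.<\/p>\n

```python
import numpy as np

def box_blur(img, radius):
    """Naive box blur of a single-channel float image via shifted sums."""
    if radius == 0:
        return img.copy()
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def lens_blur(img, depth, focal_depth, max_radius=4):
    """Blend sharp and blurred pixels by distance from the focal plane.

    depth and focal_depth are assumed normalized to [0, 1]; weight 0 keeps
    the pixel sharp, weight 1 uses the fully blurred copy.
    """
    blurred = box_blur(img, max_radius)
    w = np.clip(np.abs(depth - focal_depth), 0.0, 1.0)
    return (1.0 - w) * img + w * blurred
```

A production-quality effect would vary the blur kernel size continuously with depth (to simulate bokeh) rather than blending against a single blurred copy, but the per-pixel weighting by focal distance is the same idea.<\/p>\n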
The model boasts several notable features, including the capacity to generate B-roll footage from text prompts, with Adobe asserting that high-quality clips can be produced in under two minutes. This capability mirrors the pure video generation offered by platforms like Sora, Kling, or Dream Machine. Adobe says that, like with other Firefly generative models, both the Firefly Video Model and the features it powers are designed to be safe for commercial use.<\/p>\n<\/p>\n
Deepa Subramaniam, vice president of Creative Cloud product marketing, said in an interview that this high usage proved Adobe was on the right track. “[It] really shows us that we’re addressing something that our customers are really struggling with.” For some creators, Adobe’s focus on convenience and problem-solving — along with its safety protocols — is great news.<\/p>\n<\/p>\n
I\u2019d also recommend organizations come into this process knowing it is going to be iterative. I might not know what Adobe is going to invent in five or 10 years but I do know that we will evolve our assessment to meet those innovations and the feedback we receive. Five years ago, we formalized our AI Ethics process by establishing our AI Ethics principles of accountability, responsibility, and transparency, which serve as the foundation for our AI Ethics governance process. We assembled a diverse, cross-functional team of Adobe employees from around the world to develop actionable principles that can stand the test of time. Think of a bounding box around your Generative Fill selection, and try to keep it inside that block.<\/p>\n<\/p>\n
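That bounding-box tip can be made concrete: the box to stay inside is simply the extent of the selection mask's nonzero pixels. The small sketch below is a generic mask utility for illustration, not part of any Adobe API.<\/p>\n

```python
import numpy as np

def selection_bounds(mask):
    """Return (top, left, bottom, right) of a mask's nonzero region.

    Bottom and right are exclusive, so the tuple can be used directly
    for slicing. Returns None for an empty selection.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1
```

Keeping generated content inside this rectangle is what makes a fill blend cleanly: everything outside the box is untouched original pixels.<\/p>\n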
New Innovations in Photoshop and Illustrator Transform Creative Workflows and Deliver More Speed, Precision and Power Than Ever Before.<\/p>\n