Adobe is bringing generative AI video to Premiere Pro with Firefly. The company’s Firefly Video Model, teased earlier this year, officially debuts today. The new beta tools let users generate videos from text prompts or images, or extend existing clips, for example. Adobe’s foray into AI-enhanced video editing begins today and aims to simplify workflows for creators of all kinds.
The star of the show is Generative Extend, a beta tool now available in Premiere Pro. It addresses a common problem: clips that are just a bit too short. The AI can extend footage by up to two seconds at the start or end of a shot, making it ideal for small adjustments. It can also smooth over mid-shot issues such as shifting eye lines or unwanted movements, small fixes that would normally require a retake.
Adobe’s newest trick inside Premiere Pro
The Adobe Firefly-powered Generative Extend tool opens up fresh possibilities for creatives, whether you need a few extra seconds to round out a clip or want to blend in a little audio. But the tool has its limitations. Extended footage is capped at two seconds and tops out at 1080p at 24 FPS. Audio is constrained too: the tool can extend ambient sound by up to ten seconds, but leaves spoken dialogue and music untouched.
While these limitations make Generative Extend better suited to minor adjustments than major edits, it can spare users a reshoot over small issues. In a nutshell, it’s a quick-fix option for getting a cut working without having to start over from the beginning.
Adobe has also launched two tools outside of Premiere Pro: Text-to-Video and Image-to-Video, now in beta in the Adobe Firefly web app. Announced in September, these tools let users create short video clips from text prompts or still images. Users can even customize the results with camera-style controls for angle, motion, and shooting distance.
Text-to-video: A new era in video creation
Like similar AI video generators, the Text-to-Video tool follows a familiar pattern: users type in a description of the clip they want, and Adobe’s AI handles the rest. There are plenty of style options, too, whether you want something resembling 3D animation, stop motion, or a more traditional film look. With the added “camera controls,” users can refine these clips even further by adjusting visual elements such as camera angles and motion.
Adobe also offers an Image-to-Video feature, which goes one step further by letting users supply a reference image alongside their text prompt. That extra control is useful whether you’re crafting quick b-roll coverage or visualizing reshoots. But this tool is not perfect either: early tests turned up artifacts such as a wobbly cable and a shifting background, signs that the AI still has some growing to do.
Speed is the selling point here. Clips are capped at 720p, 24 FPS, and five seconds in length, but they generate in about 90 seconds, and Adobe is already working on a “turbo mode” to speed things up even more.
Safe and sound: Adobe Firefly’s commercial advantage
The biggest differentiator is that Adobe’s Firefly tools are viable for commercial use. Setting creators at ease, Adobe promises its AI model is trained only on content it can legally use. Unlike some competitors, which have faced criticism for training their models on unauthorized content (including YouTube videos), Adobe bills its tools as “commercially safe.”
Videos made with Adobe’s AI video tools also carry embedded Content Credentials, which disclose any AI involvement in creating or editing the content, making it transparent how a video was made and who holds the rights.
The Firefly Video Model is one part of a broader push toward AI-powered tools across all of Adobe’s apps. The new features were announced at the Adobe MAX conference and are publicly available now, giving Adobe a head start over similarly minded companies such as Meta, Google, and OpenAI, whose video generation tools have yet to see public release.
Image credit: Adobe