On Wednesday, Adobe unveiled Firefly AI video generation tools that will arrive in beta later this year. Like many things related to AI, the examples are equal parts mesmerizing and terrifying as the company slowly integrates tools built to automate much of the creative work its prized user base is paid for today. Echoing AI salesmanship found elsewhere in the tech industry, Adobe frames it all as supplementary tech that “helps take the tedium out of post-production.”

Adobe describes its new Firefly-powered text-to-video, Generative Extend (which will be available in Premiere Pro), and image-to-video AI tools as helping editors with tasks like “navigating gaps in footage, removing unwanted objects from a scene, smoothing jump cut transitions, and searching for the perfect b-roll.” The company says the tools will give video editors “more time to explore new creative ideas, the part of the job they love.” (To take Adobe at face value, you’d have to believe employers won’t simply increase their output demands from editors once the industry has fully adopted these AI tools. Or pay less. Or employ fewer people. But I digress.)

Firefly Text-to-Video lets you — you guessed it — create AI-generated videos from text prompts. But it also includes tools to control camera angle, motion and zoom. It can take a shot with gaps in its timeline and fill in the blanks. It can even use a still reference image and turn it into a convincing AI video. Adobe says its video models excel with “videos of the natural world,” helping to create establishing shots or b-roll on the fly without much of a budget.

For an example of how convincing the tech appears to be, check out Adobe’s examples in the promo video.

Although these are samples curated by a company trying to sell you on its products, their quality is undeniable. Detailed text prompts for an establishing shot of a fiery volcano, a dog chilling in a field of wildflowers, or (demonstrating it can handle the fantastical as well) miniature wool monsters having a dance party produce just that. If these results are emblematic of the tools’ typical output (hardly a guarantee), then TV, film and commercial production will soon have some powerful shortcuts at its disposal — for better or worse.

Meanwhile, Adobe’s example of image-to-video begins with an uploaded galaxy image. A text prompt then transforms it into a video that zooms out from the star system to reveal the inside of a human eye. The company’s demo of Generative Extend shows a pair of people walking across a forest stream; an AI-generated segment fills a gap in the footage. (It was convincing enough that I couldn’t tell which part of the output was AI-generated.)

Still from an Adobe video showing a text prompt creating a moody shot of a man on a rainy street. (Image: Adobe)

Reuters reports that the tool will only generate five-second clips, at least at first. To Adobe’s credit, it says its Firefly Video Model is designed to be commercially safe and only trains on content the company has permission to use. “We only train them on the Adobe Stock database of content that contains 400 million images, illustrations, and videos that are curated to not contain intellectual property, trademarks or recognizable characters,” Adobe’s VP of Generative AI, Alexandru Costin, told Reuters. The company also stressed that it never trains on users’ work. However, whether or not it puts its users out of work is another matter altogether.

Adobe says its new video models will be available in beta later this year. You can sign up for a waitlist to try them.


