Runway has unveiled its latest artificial intelligence model, Gen-2, which marks a significant technological leap over its predecessor, Gen-1. While the previous version generated new videos by drawing on existing footage, Gen-2 can create complete videos solely from text descriptions.
The company has been working on this cutting-edge model since September of last year and says Gen-2 is now the first publicly available text-to-video model on the market capable of realistically and consistently synthesizing new videos.
Generate videos with nothing but words. If you can say it, now you can see it.
Introducing, Text to Video. With Gen-2.
Learn more at https://t.co/PsJh664G0Q
— Runway (@runwayml) March 20, 2023
Runway's Gen-2 can create videos from text prompts
Gen-2 builds on the impressive capabilities of Gen-1, which could apply the composition and style of an image or text prompt to the structure of a source video to create a new one. Gen-2 goes a significant step further: it can produce entirely new video content from text descriptions alone. The web-based platform generates relatively high-resolution videos that, while not photorealistic, clearly demonstrate the power of this technology. Compared to what is currently available on the market, the videos Gen-2 produces are quite impressive.
“Deep neural networks for image and video synthesis are becoming increasingly precise, realistic, and controllable. In a couple of years, we have gone from blurry low-resolution images to both highly realistic and aesthetic imagery allowing for the rise of synthetic media,” the company states.
“Runway Research is at the forefront of these developments and we ensure that the future of content creation is both accessible, controllable and empowering for users. We believe that deep learning techniques applied to audiovisual content will forever change art, creativity, and design tools.”
Although the videos generated by Gen-2 cannot yet seamlessly replace real footage, the technology has come a long way from its early days. With further advancements, this may become possible in the near future, particularly if the technology follows a trajectory similar to that of text-to-image generators such as Midjourney.
For instance, just last year, Midjourney was unable to create images that could reliably pass as actual photos. With the launch of version 5 last week, that has changed, demonstrating the rapid progress being made in AI-generated visuals. If Gen-2 continues to develop at a similar pace, it may soon produce videos that are virtually indistinguishable from real footage.
It is important to acknowledge that while Runway is the first company to make this technology available to the public, it is not the only one working on text-to-video generation. Google, for instance, has been experimenting with this technology for a while. Similarly, just as there are many players in the text-to-image sector, it is likely that the text-to-video field will see numerous competitors emerge rapidly as the technology continues to advance. As a result, we can expect to see a flurry of new developments in this area over the coming months and years.
Gen-2 is a significant step forward for text-to-video technology. While the videos it generates are not yet photorealistic, they showcase the potential of this groundbreaking approach.
Moreover, the fact that Runway has made this technology publicly available underscores the importance of democratizing AI and making it accessible to a wider range of people. As the technology advances, we can expect more players to enter the market and further developments in this space.