I tried Runway, a buzzy generative AI platform that lets you transform images and videos with text. The results were not what I expected.
- Runway is a generative AI platform with tools for transforming images and video clips.
- The platform was used in the Oscar-winning movie "Everything Everywhere All at Once."
- Runway has now made some of its tools publicly available. Here's my experience trying them out.
Back in high school, I took a graphic design class during which I learned to use Adobe InDesign and Photoshop. My image editing experience ends there — and I haven't even touched video.
Theoretically, that makes me an ideal candidate to test Runway, a generative AI platform that aims to make photo editing and filmmaking more accessible to the general public.
The platform has already made a splash in the echelon of Oscar winners: Its tools were critical to producing best picture winner "Everything Everywhere All at Once." And brands like New Balance are using the tool for brainstorming, storyboarding, and prototyping, Insider previously reported.
Runway relies on AI models to let users revamp images and videos in a variety of creative ways, and create images from text prompts.
But its most distinctive feature is that it can generate full videos from text prompts. Right now, that feature is only available to a limited set of users on the chat app Discord, Axios reported. I wasn't able to access it for the purpose of this story, and Runway did not immediately respond to Insider's request for comment.
In a previous interview with Insider, Runway's cofounder Cristóbal Valenzuela said the platform's ultimate goal is to make itself available to artists with and without resources.
"It's the responsibility of every generation of artist to use the maximum amount of tech out there to make art," Valenzuela said. "Art is a point of view, it doesn't need to be technical or sophisticated, it just needs to communicate something that's meaningful."
Still, I was a little skeptical that I would actually produce anything of artistic value in my trial of Runway.
Here's a closer look at my experience:
As you can see, a variety of generative AI features are available, ranging from portrait generators to text-to-image generators.
The roses and oranges were missing, but the desert itself had an interesting pinkish-orange glow.
This time, the desert itself was gone. The oranges and roses, though, were spot on. Clearly you can't have it all.
Overall, the experience of using Runway's text-to-image generator felt pretty similar to using OpenAI's DALL-E 2 or Stable Diffusion. In fact, Runway originally collaborated with researchers from the University of Munich on the first version of Stable Diffusion, before Stability AI took over.
I was asked to feed the generator at least 15 images of myself. I went above and beyond and provided it with 22 images.
If you're wondering what I actually look like, here is my author page. I was particularly surprised by the creative liberties that Runway's tool took with my portrait.
I've documented my thoughts on the problems with using AI on faces of color. In some ways, my experience with Runway's tool wasn't unlike my experience using Lensa, another AI portrait generator that was wildly popular a few months ago.
Runway's images of me certainly defied any reasonable boundaries of what I think I look like, but I appreciated how creative they were and how they drew upon a range of artistic styles.
This was not what I had in mind, of course.
According to Runway, the extract depth tool automatically generates a depth map of any video. Here's a clip of the men's semifinals from the 2021 US Open.
The platform certainly democratizes access to advanced video-editing tools that novices like me couldn't otherwise use.
For those with even a latent interest in graphic design or filmmaking, I think this is a great platform. I, however, am probably far from using it to win an Oscar — or even a Razzie.