Sora needs to up its game to keep pace with Runway's new AI video model

I always enjoy the chance to play with AI video generators. Even when they're terrible, they can be fun, and when they hit, they can be genuinely surprising. So I was eager to try out Runway's new Gen-4 model.
The company pitched Gen-4 (and its smaller, faster sibling, Gen-4 Turbo) as a major step up in quality and consistency from its previous Gen-3 model. Gen-4 is supposed to nail character consistency, so characters look like themselves from scene to scene, along with smoother motion and more sophisticated environmental physics.
Prompt adherence was also supposed to be quite good: you give it a visual reference and some descriptive text, and it produces a video resembling what you imagined. That's very similar to how OpenAI pitched Sora, its own AI video generator.
Although Sora's videos are often spectacular, their quality can be unreliable. One scene can be perfect, while another features characters floating like ghosts or doors that lead nowhere.
Movie magic
Runway pitched Gen-4 as video magic, so I decided to test it with that in mind and see if I could make videos telling the story of a wizard. I came up with a few ideas for a short fantasy trilogy starring a traveling magician.
I had the wizard meet an elf princess and then chase her through magic portals. When he meets her again, she is disguised as a magical animal, and he transforms her back into a princess.
The goal wasn't to break any box office records. I just wanted to see how far Gen-4 could stretch with minimal input. Since there are no photos of real wizards to work from, I used the newly upgraded ChatGPT image generator to create convincing still images.
Sora may not be blowing Hollywood away, but I can't deny the quality of some of the images ChatGPT produces. I made the first video, then used Runway's option to "fix" a seed to keep the characters looking consistent across the videos. I stitched the three videos into the single film below, with a short pause between each.
AI Cinema
As you can see, it's not perfect. There are some odd object movements, and the character consistency isn't flawless. Some background elements shimmer strangely, and I wouldn't put these clips on a theater screen just yet. Still, the characters' movements, expressions, and emotions felt surprisingly real.
I also liked that the iteration options didn't drown me in manual settings while still offering enough control that I felt actively involved in the creation, rather than just pressing a button and praying for consistency.
Now, will professional filmmakers drop Sora and OpenAI as partners? No, definitely not right now. But if I were an amateur filmmaker looking for a relatively cheap way to see what some of my ideas might look like, I would at least give it a try, before spending a ton of money making them look and feel as polished as a cinematic vision demands.
And if I grew comfortable with it, and it proved good enough to produce what I want every time, I might not even think about using Sora. You don't have to be a magician to see that Runway hopes this kind of magic will win over a potential user base.