Experiments with Gen 1

I've been very excited about what RunwayML has been working on; both Gen 1 and Gen 2 are really promising. Here are my first tests with Gen 1, a tool that takes input footage and transforms it into new footage in the style of reference images or text prompts. Below you can see the original footage, the reference 'style frame', and the Gen 1 result. Hand-drawn cel animation on demand! (kinda).

Replacing faces with Stable Diffusion

I had seen a bunch of experiments like the one below and wanted to try one myself. It was created by extracting still frames from the footage, isolating the face in each frame, and inpainting a new face with Stable Diffusion image2image.
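For anyone curious about the mechanics, here is a minimal per-frame sketch of that kind of pipeline. It is not the exact setup I used: it assumes the Hugging Face diffusers library with an inpainting checkpoint standing in for image2image, an OpenCV Haar cascade for the face isolation step, and the file names and prompt are placeholders.

```python
# A rough per-frame sketch, not the exact pipeline used above.
# Assumes: pip install diffusers torch opencv-python pillow
import os

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load an inpainting checkpoint (placeholder model id).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Simple face isolation using OpenCV's bundled Haar cascade.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

os.makedirs("out_frames", exist_ok=True)
cap = cv2.VideoCapture("input_footage.mp4")  # hypothetical input clip
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Build a white-on-black mask over every detected face.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(mask, (x, y), (x + w, y + h), 255, -1)

    image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).resize((512, 512))
    mask_image = Image.fromarray(mask).resize((512, 512))

    # Inpaint a new face into the masked region, then save the frame.
    result = pipe(
        prompt="portrait of a porcelain android, detailed face",  # placeholder prompt
        image=image,
        mask_image=mask_image,
    ).images[0]
    result.save(f"out_frames/frame_{idx:04d}.png")
    idx += 1
cap.release()
```

The saved stills can then be reassembled into footage with something like ffmpeg; real work would also want proper face tracking or segmentation for cleaner masks, plus some temporal smoothing to reduce flicker between frames.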

Gen 1 footage style painting

Another quick experiment using Gen 1 to repaint footage in different styles. These techniques would work well for a music video project.

Full color animated storyboards for film & television

A film and television director approached me after seeing my 'SamuChix' project on LinkedIn. We had a great conversation about the possibilities of using generative AI to create full color motion storyboards. Here is a quick test I put together: the frames were generated with Midjourney and then animated subtly with a tool called LeiaPix. This test was done a while ago; if I were to do it again, I would use Stable Diffusion with ControlNet instead of Midjourney, which would give me much more control over the body positions of the people and the overall composition of each frame (see the sketch below).
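As a rough illustration of that ControlNet idea, here is a minimal sketch assuming the Hugging Face diffusers library and an OpenPose ControlNet checkpoint. The pose-image file name and the prompt are placeholders; the pose image would be a rendered OpenPose skeleton that pins down where the characters' bodies sit in the frame.

```python
# A minimal sketch of pose-conditioned generation, assuming diffusers and torch.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Pose-conditioned ControlNet: the skeleton image fixes the body positions.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

pose = load_image("storyboard_pose_01.png")  # hypothetical OpenPose skeleton render
frame = pipe(
    prompt="cinematic storyboard frame, two samurai facing off at dusk, "
           "full color, dramatic lighting",  # placeholder prompt
    image=pose,
    num_inference_steps=30,
).images[0]
frame.save("storyboard_frame_01.png")
```

Because the composition comes from the pose image rather than the prompt, the same skeleton can be reused across style variations, which is exactly the kind of repeatability a storyboard workflow needs.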

Footage stylization with VToonify

Here are some quick, goofy tests I put together using VToonify, an AI tool that runs in the browser with a simple user interface. There are a bunch of pre-canned styles to choose from ('cartoon', '3D character', 'superhero', etc.), each with sliders to adjust. It only works on faces, but I could see a use case such as a selfie filter for Instagram or TikTok.