Generative AI has taken the world of video production by storm in recent years. With tools like Midjourney and ChatGPT, it’s evident that AI integration is shaping the future of content creation. Notably, generative AI isn’t limited to images and text; it’s making strides in video as well with OpenAI’s Sora and Runway ML’s Gen 2.
Element 7 has always been at the forefront of technology advancements in video, and we weren’t afraid to push the possibilities generative AI can bring to creative advertising. These tools let creators streamline pre-production tasks such as storyboarding and pre-vis, and traditionally time-consuming work like compositing and visual effects can now be accomplished far more efficiently. Shots and concepts that once required hours or months of work can be generated in minutes, even with a smaller team.
The Commercial
Our team faced an exciting challenge: creating an ad campaign for a local car dealership. The dealership had been using the same style of ads for years and wanted something fresh and unexpected. Our goal was to catch viewers off guard while adhering to a tight budget. We decided on an abstract, dreamlike approach. Imagine a car buyer envisioning their new vehicle in surreal scenarios:
Safari Hunt: A prospective car buyer deep in the jungle, on the hunt for his dream car. The lush greenery contrasts with the sleek vehicle he seeks.
Harmony of Horsepower: In another dream, a woman conducts an orchestra of cars. Each note resonates with the anticipation of finding her perfect car.
Despite our budget constraints, we wanted to create visually captivating dream sequences. Here’s how we approached it:
Live Action: We shot our actors against a green screen, capturing their reactions and movements, along with a few final shots on location in a bedroom and at the dealership.
AI-Generated Dreams: The dreams themselves needed to be otherworldly, so we turned to generative AI to create these sequences, transforming mundane scenes into fantastical realms.
Balancing Creativity and Constraints: Yes, it was low budget, and yes, the concept was quirky. But we believe in pushing creative boundaries. Generative AI allowed us to achieve the impossible within our limitations.
Assessing the Shot Requirements
Before embarking on our creative journey, we needed to determine the number of shots and the assets required. Our dream sequences consisted of two distinct scenarios:
Jungle Quest: Five unique jungle backdrops where we could seamlessly insert photoshopped cars from the dealership. These backgrounds would set the stage for the dreamers’ wild adventures.
Orchestral Pursuit: Three backgrounds. However, we decided to zoom in on certain aspects, which required some upscaling. We also needed individual pictures of a row of vehicles, photoshopped to match the lighting of our generated backgrounds.
The Workflow
Here’s the step-by-step process we followed to bring our dreams to life:
High-Resolution Backgrounds with Midjourney & Photoshop Generative Fill:
We began by creating high-resolution jungle and orchestral backgrounds using Midjourney, tailored to our compositional needs. Next, we fine-tuned and adjusted these backgrounds in Photoshop using generative fill. This allowed us to enhance details and fix any issues with the original image.
Adding the Car and Shading:
Once the backgrounds were finalized in Photoshop, we introduced the car into the scene. Careful attention was paid to shading and perspective to ensure it blended naturally with the environment.
*Image-to-video straight from Runway ML Gen 2
RunwayML’s Gen 2 for Image-to-Video Conversion:
Our dream sequences needed movement, so we took our background images into RunwayML’s Gen 2, an AI-powered image-to-video tool. While RunwayML can generate images from text, we found Midjourney more intuitive for controlling details like aspect ratio and reserving space for our car. With RunwayML’s generative video capabilities, we animated the foliage, jungle trees, and light rays within our dream sequences. These dynamic elements added depth and realism to our dreams.
*Before and After Topaz Video AI
Upscaling with Topaz Video AI:
RunwayML’s video output is limited to around 1080p for short durations. To ensure our backgrounds looked pristine even when post-zoomed, we turned to Topaz Video AI. Topaz allowed us to upscale our low-res video to glorious 4K resolution, maintaining visual fidelity.
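Topaz Video AI does its upscaling with trained models inside its own application, so there is no code from our pipeline to show here. Purely as an illustration of what "upscaling" means at the pixel level, the sketch below implements naive nearest-neighbor resampling on a tiny stand-in frame; the function name and toy data are hypothetical, and a real AI upscaler synthesizes new detail rather than duplicating pixels like this:

```python
def upscale_nearest(frame, factor):
    """Nearest-neighbor upscale of a 2D grid of pixel values.

    Topaz Video AI uses trained models to invent plausible detail;
    this naive resampler only illustrates the resolution change
    (e.g. 1920x1080 -> 3840x2160 at factor=2).
    """
    return [
        [frame[y // factor][x // factor]
         for x in range(len(frame[0]) * factor)]
        for y in range(len(frame) * factor)
    ]

# A tiny 2x2 "frame" stands in for a real 1080p one.
small = [[1, 2],
         [3, 4]]
big = upscale_nearest(small, 2)
# big -> [[1, 1, 2, 2],
#         [1, 1, 2, 2],
#         [3, 3, 4, 4],
#         [3, 3, 4, 4]]
```

The difference between this and Topaz is the whole point: duplicated pixels look soft when zoomed, whereas a model-based upscaler hallucinates texture, which is why the post-zoomed backgrounds held up at 4K.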
Compositing the Final Shot:
Finally, we assembled our dream sequence. In our timeline:
The upscaled video background was placed first.
The ground layer from Photoshop (which didn’t need animation) was added.
The car and other foreground elements were placed.
A subtle light ray and dust particles completed the composition.
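The layer order above can be sketched as a simple back-to-front paint loop. This is only a conceptual model of what the editing timeline does, not code from our pipeline; the layer names and tiny one-row "frames" are placeholder stand-ins for the real assets:

```python
def composite(canvas, layers):
    """Paint layers back to front onto a copy of the canvas.

    Later layers cover earlier ones wherever they have content;
    None marks a transparent pixel, like alpha in a real compositor.
    """
    out = [row[:] for row in canvas]
    for layer in layers:
        for y, row in enumerate(layer):
            for x, px in enumerate(row):
                if px is not None:
                    out[y][x] = px
    return out

# Tiny 1x4 stand-ins for the real layers, back to front:
background = [["bg", "bg", "bg", "bg"]]          # upscaled AI video
ground     = [[None, "ground", "ground", None]]  # static Photoshop plate
car        = [[None, None, "car", None]]         # keyed foreground car
light      = [["ray", None, None, None]]         # light ray / dust pass

frame = composite(background, [ground, car, light])
# frame[0] -> ['ray', 'ground', 'car', 'bg']
```

The ordering matters: because the car layer is painted after the ground plate, it reads as sitting on the ground, and the light pass on top ties all the elements into one scene.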
We repeated the same process for the second spot, minus the image-to-video generation, and we were done. The end result was two head-turning spots. Without this workflow, traditional methods would have either cost too much or looked much worse. Everyone involved was happy with how they turned out, and it opened new possibilities for us in terms of our capabilities.
The Future of AI in Video Production
Generative AI is just getting started. While Sora leads the pack, other companies are catching up, and the market will soon be crowded with innovative tools that transform the way we create content. As we embrace this technology, let’s remember that it should enhance, not replace, the creativity of human artists. It still took many hours of manual work to fix our generated backgrounds and cars to match the lighting of their environments, but hiring a full VFX team for this would have been out of the question. The possibilities these tools open up for fun, creative, and different commercials are going to change the industry. You can watch the two spots below:
Safari Dream
Harmony of Horsepower