Emergence — A Forever Collaboration
Emergence, by Tendril x FutureDeluxe
— A mixed-media, generative-AI, real-time collage study for virtual production in Unreal Engine.
The era of exponential change is here, and the world continues to mutate every hour, every minute, and every second. In this time of aggressive change, the landscape of artistic expression is undergoing a seismic shift, propelled by groundbreaking technological advancements and an unprecedented proliferation of digital tools. AI is exciting to some and scary to others (both are right), but it is undoubtedly here to supercharge everything we touch.
The boundaries of creativity are being redefined, unlocking a spectrum of previously inconceivable avenues for artistic expression. It is also challenging our preconceived notions of humanity, continually blurring the line where our bodies end and our technological tools begin.
For us, central to this transformation is the emergence of innovative proprietary tools. We wanted something that could embody the fusion of artificial intelligence with creative, pioneering methodologies: free of conventional copyright constraints, and able to be trained on our own source material. We wanted a tool that brought precision, and even greater attention to detail and care, to AI output. Read on to see how we created this prototype.
A Proprietary Mixed-Media AI Generator
Our exploration ventures into the integration of AI-based Gaussian splatting, a novel approach to depicting 3D scenes. This technique represents a scene with thousands of anisotropic Gaussian "splats" (rendered as ellipsoids), each imbued with multifaceted light and color information, diverging from traditional mesh-based representations. The splats, meticulously trained on a bespoke dataset (in our case study, a video), offer a unique, multidimensional perspective. The prototype enables the amalgamation of diverse digital techniques, marrying AI-driven Gaussian splatting with our own travel photography and footage, all remixed using Unreal Engine.
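To make the representation concrete, here is a minimal sketch of what a single splat looks like as data. This is an illustration of Gaussian splatting in general, not Tendril's proprietary tool: the field names and values are assumptions, and the covariance formula (Sigma = R S S^T R^T) is the standard way an ellipsoidal Gaussian is built from per-axis scales and a rotation.

```python
import numpy as np

def splat_covariance(scale, quat):
    """Build a 3D Gaussian's covariance from per-axis scales and a rotation
    quaternion, using the standard factorization Sigma = R S S^T R^T.
    The eigenvectors of Sigma are the ellipsoid's axes; the eigenvalues
    are the squared radii."""
    w, x, y, z = quat / np.linalg.norm(quat)
    # Rotation matrix from the normalized quaternion (w, x, y, z)
    R = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])
    S = np.diag(scale)
    return R @ S @ S.T @ R.T

# One hypothetical splat: position, anisotropic scale, orientation,
# color, and opacity (real splats also store view-dependent color terms).
splat = {
    "position": np.array([0.0, 1.2, -3.5]),
    "scale":    np.array([0.30, 0.05, 0.12]),    # ellipsoid radii
    "rotation": np.array([1.0, 0.0, 0.0, 0.0]),  # identity quaternion
    "color":    np.array([0.8, 0.6, 0.4]),       # RGB, simplified here
    "opacity":  0.9,
}
cov = splat_covariance(splat["scale"], splat["rotation"])
```

Training optimizes these parameters, per splat, so that the rendered cloud of ellipsoids reproduces the source photographs or video frames.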
A Virtual Stage We Can Control
At the base is a generative-AI Stable Diffusion network, combining multiple models, which we have been using thus far to generate still images with precision and control. We took this one step further, producing precise, dynamic motion output and a stage set composed in Unreal. This Unreal set not only pulled from the generative output but also incorporated precise placement of Quixel assets such as grass, rocks, and more. This collage of mixed-media elements allowed us to compose a scene where we can animate and direct our cameras to deliver the narrative required for each shot.
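The "collage" idea above can be sketched as simple layer compositing: a generative plate sits at the back, and rendered asset passes are layered over it. This is an illustrative simplification, not the actual Unreal pipeline; the layer names and values are invented, and the math is the standard Porter-Duff "over" operator.

```python
import numpy as np

def over(fg_rgb, fg_a, bg_rgb, bg_a):
    """Porter-Duff 'over': composite a foreground layer on top of a
    background layer (straight, non-premultiplied alpha)."""
    out_a = fg_a + bg_a * (1.0 - fg_a)
    out_rgb = (fg_rgb * fg_a + bg_rgb * bg_a * (1.0 - fg_a)) / np.maximum(out_a, 1e-8)
    return out_rgb, out_a

H, W = 4, 4
# Hypothetical layers: an opaque generative backdrop and a
# semi-transparent foreground pass of placed assets.
plate_rgb, plate_a = np.full((H, W, 3), 0.2), np.ones((H, W, 1))
asset_rgb, asset_a = np.full((H, W, 3), 0.7), np.full((H, W, 1), 0.5)

rgb, alpha = over(asset_rgb, asset_a, plate_rgb, plate_a)
```

In production the same layering happens per frame inside the engine, which is what lets the camera move freely while every element stays in its place.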
We’re not new to virtual production. In our recent work with Mercedes-Benz, many interior car shots were handled on the VP stage, allowing us to spend the whole day capturing without fear of the sun going down. The environment was modeled in Unreal Engine and used as a backplate on the stage. Now we can add generative AI to these environments and push the boundaries of what's possible. This artistic exploration is not merely a derivative blend of digital elements; it signals the dawn of groundbreaking artistic techniques and expressions, new art that can be surgically precise.