FREEDOM AND SAFETY
Twenty years ago, entertainment was dominated by a handful of producers and monolithic broadcasters, a near-impossible market to break into.
Today, the industry is almost entirely dematerialized, while storytellers and storytelling mediums explode in number. And this is just the beginning.
Netflix turned entertainment on its head practically overnight, shooting from a market cap of US$8 billion in 2010 (the same year Blockbuster filed for bankruptcy) to a record US$185.6 billion only eight years later. This year, it is expected to spend a whopping US$15 billion on content alone.
Meanwhile, VR platforms like Google’s Daydream and Oculus have only begun bringing the action to you, while mixed reality players like Dreamscape will forever change the way we experience stories, exotic environments and even classrooms of the future.
In the words of Barry Diller, a former Fox and Paramount executive and the chairman of IAC, “Hollywood is now irrelevant.”
In this two-part series, I’ll be diving into three future trends in the entertainment industry: AI-based content curation, participatory story-building, and immersive VR/AR/MR worlds.
Today, I’ll be exploring the creative future of AI’s role in generating on-demand, customized content and collaborating with creatives, from music to film, in refining their craft.
Let’s dive in!
For many of us, film brought to life our conceptions of AI, from Marvel’s JARVIS to HAL in 2001: A Space Odyssey.
And now, over 50 years later, AI is bringing stories to life like we’ve never seen before.
Converging with the rise of virtual reality and colossal virtual worlds, AI has begun to create vastly detailed renderings of deceased stars, generate complex supporting characters with intricate story arcs, and even bring your favorite performers - whether Marlon Brando or Amy Winehouse - back to the big screen and into a built environment.
While still in its nascent stages, AI has already been used to embody virtual avatars that you can converse with in VR, soon to be customized to your individual preferences.
But AI will have far more than one role in the future of entertainment as industries converge atop this fast-moving arena.
You’ve likely already seen the results of complex algorithms that predict the precise percentage likelihood you’ll enjoy a given movie or TV series on Netflix, or recommendation algorithms that queue up your next video on YouTube. Or think Spotify playlists that build out an algorithmically refined, personalized roster of your soon-to-be favorite songs.
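Under the hood, many such recommenders boil down to measuring taste similarity between users and letting similar users "vote" on what you haven't seen yet. Here's a minimal sketch of that idea in pure Python - the users, titles, and ratings are invented for illustration, and real services layer far more signals on top:

```python
from math import sqrt

# Toy user -> {title: rating} matrix (illustrative data, not any real service's).
ratings = {
    "alice": {"Stranger Things": 5, "Black Mirror": 4, "The Crown": 1},
    "bob":   {"Stranger Things": 4, "Black Mirror": 5, "Narcos": 4},
    "carol": {"The Crown": 5, "Narcos": 2, "Black Mirror": 1},
}

def cosine(u, v):
    """Cosine similarity over the titles two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[t] * v[t] for t in shared)
    nu = sqrt(sum(u[t] ** 2 for t in shared))
    nv = sqrt(sum(v[t] ** 2 for t in shared))
    return dot / (nu * nv)

def recommend(user, k=1):
    """Score unseen titles by similarity-weighted ratings from other users."""
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for title, r in their.items():
            if title not in ratings[user]:
                scores[title] = scores.get(title, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Run `recommend("alice")` and the lone title Alice hasn't rated, weighted by how much her taste overlaps with Bob's and Carol's, comes back as her suggestion.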
And AI entertainment assistants have barely gotten started.
Voice AIs like Google’s Assistant or Huawei’s Xiaoyi (an assistant that lives inside Huawei’s smartphones and AI Cube smart speaker) only hint at what’s coming. Advances will soon enable your assistant to search for and select songs based on your current and desired mood, pick out movies that bridge your and your friends’ viewing preferences on a group film night, or even queue up games whose characters are personalized to interact with you as you jump from level to level.
Or imagine your own home leveraging facial recognition technology to assess your disposition, cross-reference historical data on your entertainment choices at a given time or frame of mind, and automatically queue up a context-suiting song or situation-specific video for comic relief.
Beyond personalized predictions, however, AIs are now taking on content generation, multiplying your music repertoire, developing entirely new plotlines, and even bringing your favorite actors back to the screen or - better yet - directly into your living room.
Take AI motion transfer, for instance.
Employing generative adversarial networks (GANs), a machine learning technique, a team of researchers at UC Berkeley has now developed an AI motion transfer technique that superimposes the dance moves of professionals onto any amateur (‘target’) individual in seamless video.
By first mapping the target's movements onto a stick figure, Caroline Chan and her team create a database of frames, each frame associated with a stick-figure pose. They then use this database to train a GAN and thereby generate an image of the target person based on a given stick-figure pose.
Map a series of poses from the source video to the target, frame-by-frame, and soon anyone might moonwalk like Michael Jackson, glide like Ginger Rogers or join legendary dancers on a virtual stage.
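The pipeline's skeleton - build a database pairing stick-figure poses with frames of the target, then synthesize a target frame for each source pose - can be caricatured in a few lines. In this toy stand-in, poses are tiny joint-coordinate tuples, "frames" are just labels, and a nearest-pose lookup stands in for the trained GAN generator; none of the names or data come from the Berkeley team's actual code:

```python
# Toy stand-in for the motion-transfer pipeline described above.

def pose_distance(a, b):
    """Squared distance between two simplified joint-coordinate poses."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Step 1: database of (stick-figure pose -> frame of the target person).
target_db = [
    ((0.0, 0.0, 1.0), "target_frame_arms_down"),
    ((0.0, 1.0, 1.0), "target_frame_arm_raised"),
    ((1.0, 1.0, 0.0), "target_frame_mid_spin"),
]

def generate_target_frame(source_pose):
    """Stand-in for the GAN generator: return the target frame whose
    training pose is closest to the source dancer's pose."""
    return min(target_db, key=lambda entry: pose_distance(entry[0], source_pose))[1]

# Step 2: map each pose from the source video to a synthesized target frame.
source_poses = [(0.1, 0.9, 1.0), (0.9, 1.1, 0.1)]
transferred = [generate_target_frame(p) for p in source_poses]
```

The real system replaces the lookup with a GAN that hallucinates photorealistic frames for poses it never saw in training - that generalization is what makes the video seamless.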
Somewhat reminiscent of AI-generated "deepfakes," the use of generative adversarial networks in film could massively disrupt entertainment, bringing legendary performers back to the screen and granting anyone virtual stardom.
Just as digital artists increasingly enhance computer-generated imagery (CGI) techniques with high-fidelity 3D scanning for unprecedentedly accurate rendition of everything from pores to lifelike hair textures, AI is about to give CGI a major upgrade.
Fed countless hours of footage, AI systems can be trained to refine facial movements and expressions, replicating them on any CGI model of a character, whether a newly generated face or iterations of your favorite actors.
Want Marilyn Monroe to star in a newly created Fast and Furious film? No problem! Keen to cast your brother in one of the original Star Wars movies? It might soon be as easy as contracting an AI to edit him in, ready for his next Jedi-themed birthday.
Companies like Digital Domain, co-founded by James Cameron, are hard at work paving the way for such a future. Already, Digital Domain’s visual effects artists employ proprietary AI systems to integrate humans into CGI character design with unparalleled efficiency.
As explained by Digital Domain’s Digital Human Group director Darren Handler, “We can actually take actors’ performances - and especially facial performances - and transfer them [exactly] to digital characters.”
And this weekend, AI-CGI cooperation took center stage in Avengers: Endgame, seamlessly recreating facial expressions on its villain Thanos.
Even in the realm of video games, upscaling algorithms have been used to revive childhood classic video games, upgrading low-resolution features with striking new graphics.
One company that has begun commercializing AI upscaling techniques is Topaz Labs. While some manual craftsmanship is required, the use of GANs has dramatically sped up the process, promising extraordinary implications for gaming visuals.
But how do these GANs work? After training on millions of pairs of low-res and high-res images, one network (the generator) attempts to build a high-resolution frame from its low-resolution counterpart, while a second network (the discriminator) evaluates that output. And as the feedback loop of generation and evaluation drives the GAN’s improvement, the upscaling process only gets more efficient over time.
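That generate-and-evaluate loop can be caricatured in a few lines. In this deliberately tiny sketch, the "generator" upscales a 1-D signal with one learnable gain, the "critic" scores outputs by their distance from a real high-resolution example, and the gain is nudged in whichever direction the critic objects to less - an illustrative toy, not Topaz Labs' actual method:

```python
# A tiny caricature of the GAN upscaling feedback loop.

high_res = [2.0, 2.0, 4.0, 4.0]          # a "real" high-resolution signal
low_res = [(high_res[0] + high_res[1]) / 2, (high_res[2] + high_res[3]) / 2]

def generator(lr, gain):
    """Upscale by duplicating each sample, scaled by a learnable gain."""
    return [gain * x for x in lr for _ in (0, 1)]

def critic(candidate):
    """Score: how far the candidate is from the real high-res signal."""
    return sum(abs(c - h) for c, h in zip(candidate, high_res))

gain, step = 0.5, 0.01
for _ in range(200):                      # the generation/evaluation loop
    up = critic(generator(low_res, gain + step))
    down = critic(generator(low_res, gain - step))
    gain += step if up < down else -step  # move where the critic objects less
```

The gain converges near 1.0 because duplicated samples already match this particular signal; in a real GAN, millions of image pairs play the role of `high_res`, and both networks are deep models trained jointly.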
“After it’s seen these millions of photos many, many times it starts to learn what a high resolution image looks like when it sees a low resolution image,” explained Topaz Labs CTO Albert Yang.
Imagine a future in which we might transform any low-resolution film or image with remarkable detail at the click of a button.
But it isn’t just film and gaming that are getting an AI upgrade. AI songwriters are now making a major dent in the music industry, from personalized repertoires to melody creation.
While not seeking to replace your favorite song artists, AI startups are leaping onto the music scene, raising millions in VC investments to assist musicians with the creation of novel melodies and underlying beats... and perhaps one day with lyrics themselves.
Take Flow Machines, a songwriting algorithm already in commercial use. Now used by numerous musical artists as a creative assistant, Flow Machines has even made appearances on Spotify playlists and top music charts.
And startups are fast following suit, including Amper, Popgun, Jukedeck and Amadeus Code.
But how do these algorithms work? By processing thousands of genre-specific songs or an artist’s genre-mixed playlist, songwriting algorithms are now capable of optimizing and outputting custom melodies and chord progressions that interpret a given style. These in turn help human artists refine tunes, derive new beats, and ramp up creative ability at scales previously unimaginable.
As explained by Amadeus Code’s founder Taishi Fukuyama, “History teaches us that emerging technology in music leads to an explosion of art. For AI songwriting, I believe [it’s just] a matter of time before the right creators congregate around it to make the next cultural explosion.”
Envisioning a future wherein machines form part of the creation process, Will.i.am has even described a scenario in which he might tell his AI songwriting assistant, “Give me a shuffle pattern, and pull up a bass line, and give me a Bootsy Collins feel...”
Over the next decade, entertainment will undergo its greatest revolution yet. As AI converges with VR and crashes into democratized digital platforms, we will soon witness the rise of everything from edu-tainment, to interactive game-based storytelling, to immersive worlds, to AI characters and plot lines created on-demand, anywhere, for anyone, at almost zero cost.
We’ve already seen the dramatic dematerialization of entertainment. Streaming has taken the world by storm, as democratized platforms and new broadcasting tools birth new convergence between entertainment and countless other industries.
Posing the next major disruption, AI is skyrocketing to new heights of creative and artistic capacity, multiplying content output and allowing any artist to refine their craft, regardless of funding, agencies or record deals.
And as AI advancements pick up content generation and facilitate creative processes on the back end, virtual worlds and AR/VR hardware will transform our experience of content on the front end.
In our next blog of the series, we’ll dive into mixed reality experiences, VR for collaborative storytelling, and AR interfaces that bring location-based entertainment to your immediate environment.