
Predictions on the impact of generative AI on the media industry


You can probably guess that generative AI and its impact on the media industry was one of the main topics of conversation at the recent Connecting the Dots Summit organised by Andreas Waltenspiel. Since then, I’ve been thinking more on the subject and, building on the conversations in the Austrian Alps, have three initial predictions that I would like to share with you.

As well as sharing the predictions, I'd like to give some historical background that leads to them. After all, change doesn’t happen in one big moment but through small incremental changes. Though often there is a key moment that will be remembered as, to use one of my favourite phrases, a tipping point.

The rise of canny influencers

My first prediction is that to be in big-budget TV series and films you won’t need to be able to act; you will just need to look good and win followers. In an age of AI-generated content, why do we need real actors at all? Why not just create perfect people to be in our films? Well, films need stars: people who can appear on the red carpet, be written about in gossip columns and have active Instagram accounts.

I don’t think we are far off having films made with actors who are great at acting but don’t have star quality, filmed with motion capture and then replaced with the likenesses of influencers, people who can really look and ‘act’ the part off screen.

Actors have had stunt doubles since the early years of film. According to Wikipedia, the first possible appearance of a stunt double was Frank Hanaway in The Great Train Robbery, shot in 1903. So why not acting doubles, someone to step in and do the difficult emotional scenes or tricky dialogue?

Fab Morvan and Rob Pilatus from Milli Vanilli

The historical parallel that comes to my mind is from the music industry: Milli Vanilli. Milli Vanilli were a German R&B act of the late 80s and early 90s. It turned out that they consisted of two lip-syncing performers, Fab Morvan and Rob Pilatus, who looked the part, while seasoned professional session singers recorded the songs. It was a great combination, and they were really successful, selling 9 million albums. It was, however, considered a fraud when this arrangement was eventually unmasked, causing a lot of consternation within the music industry but not really among the fans. The band did then disappear, but I suspect the issue was not really that they were fake but that the shelf-life of the band had expired. At the time lip-syncing was common even for real bands; it is less common now that we have autotune, which can take out-of-key singers and put them right on key. It could be argued that autotune is a form of AI that is enhancing performances.

If you want to learn more, last October a new documentary on Milli Vanilli was launched on Paramount+.

Brandon Lee in The Crow

The film industry has been digitally replacing actors for years. One of the earliest and most famous examples is in the 1994 film The Crow, which starred Brandon Lee, son of the famous martial artist and actor Bruce Lee. Brandon was tragically killed on set in a firearms incident towards the end of filming. The film was rescued with some rewriting and by digitally replacing his stunt double's face with Brandon’s in a few additional scenes. In 1994, CGI was only just at the point where it could support this.

Uncanny children from The Polar Express

Quick sidebar. As humans, we are very perceptive of, and disturbed by, things that are not quite right, or uncanny. Almost certainly this comes from spotting tigers in jungles. When someone tries to make something look real but it is not 100% right, it disturbs us more than something that is not made to look real. The Polar Express from 2004 is a good example of trying to be too realistic and ending up with something I find quite scary. The other horrendous example is Cats from 2019. With CGI we see film makers pushing beyond its capabilities, producing results that look uncanny and get rejected by our brains and by audiences. But CGI is improving all the time, and AI is accelerating this.

The uncanny valley

Scientifically, the point at which something becomes almost, but not quite, realistic, creating a lack of affinity, is known as the uncanny valley. It was first described by the Japanese robotics professor Masahiro Mori in 1970. He was thinking of robots, but it applies equally to CGI.

A new version of The Crow will be released this coming June with Bill Skarsgård playing Lee’s lead role.

The real breakthrough CGI film for me was Jurassic Park, released in 1993. It was one of the first times I saw CGI that I knew wasn’t real, but it didn’t look unreal, if you know what I mean. But then I’ve never seen a live dinosaur, so it is harder for a CGI dinosaur to strike me as uncanny than a CGI human, of which I’ve seen lots.

Oliver Reed in Gladiator

Back to human replacement. The other famous early face replacement was of Oliver Reed in Gladiator from 2000. Reed died of a heart attack during filming, attributed to his taking part in a drinking game with some British sailors on shore leave. His yet-to-be-filmed scenes were completed with a body double and CGI.

CGI has been tinkering with people's appearances in numerous ways since 1993, be that Voldemort’s nose in the Harry Potter films from 2005 onwards or Brad Pitt’s whole body in The Curious Case of Benjamin Button in 2008. I would say that last year's Indiana Jones and the Dial of Destiny showed a big step forward in actor replacement. If you haven’t seen it, for 25 minutes of the film Harrison Ford is shown as he was back in the 80s. There were some complaints from industry critics, but I don’t think it was uncanny.

The real and the CGI Harrison Ford in Indiana Jones and the Dial of Destiny

French films in Welsh and hyper personalised advertising

My second prediction is that films will increasingly have different versions created for different audiences. The first obvious step is that dubbing will be based on the original actor's voice (or at least the voice used in the original language), with lip movements changed to achieve perfect lip-sync. I can’t watch dubbed films, preferring subtitles, as I find the lack of lip-sync uncanny.

Creating convincing CGI lip-sync is possible today, but I suspect it has been too expensive for wider-scale use.

As AI massively reduces the cost, not only will it become more common, it will be possible to dub content into many more languages, including Welsh, the 22 scheduled Indian languages and maybe the 122 major Indian languages. You can see the SVOD giants really pushing this one hard.

The Upside and The Intouchables

Will this further kill local production? The French film industry has always been very strong and is arguably the oldest but, since its collapse during the First World War, has not dominated globally as Hollywood has done. You may be less aware that there is a steady stream of French films remade each year by Hollywood as English-language films, a testament to their quality. A good example is the 2017 film The Upside, a remake of the 2011 French film The Intouchables. If you take a look at stills of the two films (as shown above), many of the sequences seem identical, especially the hang-gliding sequences. Might The Intouchables have been an international hit with lip-sync dubbing? Could all great French films and TV series have a strong international audience?

Large-scale dubbing of content feels like it is almost here, so that is not much of a prediction. But what other regionalisation is going to be possible? With whole-body replacement, the star of a film could be completely replaced with a local star, someone a local audience better identifies with. I don’t think we will see this with the main characters, as these are often played by stars with international cachet, but what about the trustworthy best friend or favourite grandfather figure?

Going back to The Intouchables, it wasn’t just remade in English; it was also remade in Spanish and various Indian languages. One Indian remake was filmed simultaneously in Telugu and Tamil. Could the original film have had not only its language and cast replaced, but also its location?

Thozha is the Telugu/Tamil adaptation of the French film The Intouchables.

One area where this kind of regionalisation could go further and faster is advertising. Every ad could have a person that you identify with, or aspire to be, promoting the product. They could wear clothes you are comfortable with, speak in a dialect you connect with, and appear in an environment you recognise. I’m currently studying what makes advertising effective as part of the ThinkBox TV Masters course, and I believe this hyper-targeting of brand messages is going to happen and will be a future driver of addressable advertising, but that is worthy of a future blog post.

Lawsuits and counter lawsuits

One of the elephants in the room around generative AI is the abuse of intellectual property in training the models. If a piece of content is created from a model using some original content, is the new piece of content essentially a derivative work of the original?

Men at Work

You just have to look at the music industry and the lawsuits based on fragments of tunes appearing in other songs. One famous case involves the global hit Down Under by Men at Work, first released in 1980. In 2009, 28 years after the release of the recording, Men at Work were sued for copyright infringement. It was alleged that part of the flute riff of Down Under was copied from a children’s song, Kookaburra. Kookaburra, published in 1932 and widely sung by children (I’m pretty certain I sang it at school), was generally thought to be in the public domain. The court case went on until 2011, with the band held liable to pay the publisher 5% of the royalties earned from 2002. It is claimed that the stress of the court case contributed to the early death of band member Greg Ham.

I am not a legal expert, but I can easily predict that using generative AI for big-budget media will lead to lots of lawsuits, with lawyers making more money from it than the AI scientists (until generative AI makes the lawyers redundant).

What this will require is the ability to forensically identify what was used to train a model, along with the rights and licences around that material. This is likely to become an industry in its own right.

If you know anything about actors' residual contracts, you will know this is going to be a complicated area. Residual payments for actors date back to radio in the 1930s. Originally, radio programmes would be acted out multiple times to cover the multiple US time zones, with the actors being paid for each performance. Once audio recording achieved good enough quality to support time delays, the actors would be paid a residual payment for the recorded performance. Residual payments to actors, writers and directors now cover all sorts of different uses of original material. It has been residual payments that have been at the heart of many actors' and writers' strikes over the years, as new technology has emerged that was not fully covered by the existing residual payment contracts.

2023 Writers Guild of America strike

Last year’s writers' strike, the effects of which we are still feeling, was in part a dispute over residuals from streaming services. It was also about the impact of AI on writers. Many releases were delayed by the strike, including films that had completed production but were not released because the actors would not promote the affected films during the strike, another reminder of the importance of having real people to promote films.

Ironically, one production indefinitely suspended due to the strike was an Amazon production called Unstoppable.

Which year will we see an actors' strike over AI-generated body replacement?

Cameron Diaz in The Mask

Sadly, one thing I believe we will see is young struggling models and actors selling their images for use in generative AI movies, only to regret it when they are famous. This will mirror the common story of Hollywood actors who appeared in porn films on their way up. Cameron Diaz appeared in a softcore adult film when she was 19, three years before she got her big break in the 1994 film The Mask.


The conversations on the slopes in Austria and my predictions all predate the announcement of Sora. Sora is the solution from OpenAI (producers of ChatGPT) for creating video from text. So far it produces a short video clip (up to 1 minute) that expresses the text given. There are currently 48 Sora videos on the OpenAI website; I don’t know how many they were picked from, so it is not clear whether we are just seeing the very best. Some of the example videos have been chosen to highlight where the model has weaknesses. While, in the examples, animals growing extra limbs can be a bit disconcerting, the video of humans is not uncanny and illustrates how close the future is.

Still from a Sora video.

One last thought and a story. Do we want or need more and cheaper video entertainment generated by AI? The historical story that comes to mind is that of the cartoon studio Hanna-Barbera, founded in 1957 by William Hanna and Joseph Barbera. They had both worked at MGM’s cartoon studio on such classics as Tom and Jerry. They left MGM when MGM closed the studio down, not because there was not a strong enough market for cartoons, but because MGM felt it had a strong enough back catalogue for re-release and so did not need to produce any more. Can you imagine any content producer today feeling they had enough content to stop all future production?

While at MGM, Hanna and Barbera had worked on high-quality theatrical shorts; at their own studio they focused on creating cartoons for the growing TV market, which required more content at a much lower cost. They developed techniques to reduce the cost of production across the board, including reusing both scripts and animation sequences across their titles. They were highly successful, matching the needs of audiences and the medium. My childhood was filled with Hanna-Barbera titles such as The Flintstones and Top Cat. MGM certainly missed a key market opportunity by believing that it needed to maintain quality and that there was no demand for quantity.

Hanna-Barbera characters

While I have fond memories of watching Hanna-Barbera cartoons on a Saturday morning, I’m not so sure the content has lasted the test of time. Whereas (apart from some very dodgy racial stereotypes) the original MGM Tom and Jerry has stood the test of time, looking all the better on UHD televisions.

Time will tell how generative AI will be used across the entertainment industry.

I have one last extra prediction: the rise of artisan films, films produced without AI. Much as we have cheap sliced supermarket bread, people still pay extra for artisan sourdough loaves. So maybe having real actors will be a key selling point of some films.


