Shock over the news of the world’s first AI anchors has rocked the world, as many fear fake news will reach an all-time high when digital broadcasters appear on television across the globe next year.
The rise of ChatGPT has seen artificial intelligence creep into newsrooms across the country, causing journalistic blunders that have cost executives their jobs and eroded public trust.
But now, Los Angeles-based Channel 1 is using the tech to create digital humans to provide updates about what is happening worldwide.
Channel 1 plans to launch on free ad-supported streaming TV (FAST) services – apps like Crackle, Tubi, or Pluto – as early as February.
Journalists said the AI anchors could ‘have huge ramifications for an already depleted news industry.’
Some of Channel 1’s anchors are ‘digital doubles,’ avatars created from a scan of an actual person. These anchors will read the news with a digitally generated voice
Ruby Media Group CEO Kristen Ruby shared on X: ‘If you believe in the concept of ‘fake news,’ you have seen nothing. At least your news is presented by humans. When AI news anchors replace human news anchors – the concept of fake news will have a totally different meaning.’
Alec Lazenby, a reporter for Canada’s BC Today, also shared his concerns on X: ‘This is utterly utterly terrifying. While the development of an entirely AI-powered broadcast is beyond impressive, it could have huge ramifications for an already depleted news industry and accelerate the loss of high-quality reporters and anchors.’
DailyMail.com spoke with Channel 1 founder Adam Mosam, who assured his company will not exploit the controversial technology.
Mosam admitted that misuse of AI-generated news is inevitable, but Channel 1 aims to ‘get out in front of this and create a responsible use of the technology.’
But the public foresees a dystopian future where ‘fake news’ runs rampant because computer algorithms generate it.
An AI-generated anchor reads the news, including auto-generated polls that aggregate opinions from social media
Channel 1’s ‘digital doubles’ mimic real people’s body language while their animated face reads the news in a digital voice
Many online publications experimented with AI-generated content in the last year, and nearly all regretted the decision.
Mass media company Gannett, owner of several US publications including the Louisville Courier Journal, AZ Central, Florida Today and the Milwaukee Journal Sentinel, used an AI service called LedeAI to pump out local high school sports stories.
Gannett pulled the plug on its AI experiment in August after numerous article errors, and while the public may not be so fussed over local high school news, they fear such mistakes could happen on a global scale.
Sports Illustrated is another outlet that tried and failed.
Last month, the legacy magazine came under fire for publishing AI-generated writing while using headshots of fake authors and creating bogus profiles.
And on Tuesday, Arena Group Holdings, which owns Sports Illustrated, fired CEO Ross Levinsohn over the allegations.
A digital double news anchor named ‘Oliver’ repetitively tents his fingers as he reads entertainment news
Mosam told DailyMail.com that Channel 1 plans to be transparent with viewers about what footage is original and what is AI-generated.
But the founder’s words do not seem enough to ease the public’s mind.
American tech and social media commentator Lance Ulanoff echoed similar criticism in a post: ‘AI news anchors are exactly what you don’t need in your fact-based, news-starved life.’
The outrage is not just from the media industry – ordinary people are also unnerved by digital humans.
Director Lee Kirton said: ‘But the news anchors completely generated in AI does concern me. We are living in a world of misinformation, digital media literacy is important, the social media channels are already flooded with fake videos, misinformation and power will use it… don’t look up.’
Ireland-based Olga Klofac shared: ‘It’s already getting hard to trust anything online, and it’s only going to get harder.’
Channel 1’s use of digital doubles helps their news anchors appear more natural, avoiding some of the problems of fully digitally created avatars – like uncanny fingers or teeth
Channel 1’s anchors can read the news in a variety of languages. The company’s sampler video shows an anchor reading in Greek and Tamil
Mosam told DailyMail.com that Channel 1 intends to use human anchors on the scene for more critical stories, but that scenario was not displayed in the 21-minute demonstration video.
Instead, the clip shows several AI-generated humans reporting the news using similar hand gestures that look very natural.
However, on closer inspection of the hands, viewers will see some have elongated fingers and more than five on each hand.
And while their eyes blink, they appear dead, with no emotion behind them.
Jane Rosenzweig, director of the Harvard College Writing Center, commented on this issue: ‘Much to say about this but for now just noting that these AI-generated non-human news anchors are not really going to be delivering ‘heartfelt’ news stories.’
The actual information contained in Channel 1’s reports will come from three sources: partnerships with yet-to-be-named legacy news outlets, commissioned freelance journalists, and AI-generated news reports drawn from trusted official sources like public records and government documents.
Mosam would not say which legacy news outlets have partnered with Channel 1.
He also told DailyMail.com that humans would be involved in the station’s production.
One of Channel 1’s main goals is to produce personalized news streams with an app that functions like TikTok and learns what each viewer wants to see.
But a news station comparing itself to TikTok has also raised red flags, as the Chinese-owned app is known to be rampant with fake news stories and deepfakes of politicians spewing misinformation.
‘We believe that we can create a better news product to really better inform people,’ Mosam said.
Rather than giving viewers a standardized broadcast that plays the same hour or two of content for everyone in the world, Channel 1 will allow consumers to select which news stories they watch.
‘The average person watches 25 minutes of news a night on cable, so that might be 9 or 10 stories,’ Mosam said.
‘If we can generate 500 stories and choose the right 9 or 10 for you, then we’re going to do a better job of informing you, showing you what you’re looking for in your allotted time.’
And over time, Mosam said, the app will learn a viewer’s preferences and habits.
‘If it was financial news, maybe we’re reporting on the stocks you own or the areas you’re interested in. If it was sports, maybe it’s your favorite teams.’
Another AI-based feature that Channel 1 will deploy is translation for international audiences.
The company’s sample reel included a local news story featuring a French man, with his voice and mouth movements digitally replaced with an English translation.
Using digital double anchors raises concerns about people’s rights to their own likeness, which actors raised as a major concern in the recent Screen Actors Guild (SAG) negotiations and strikes.
‘We wouldn’t want our likeness to be used to something we don’t believe in, to say something insane, to say something untrue to fool people,’ Mosam said. ‘That’s a terrifying thought. And we plan to follow all the best practices and standards that are being laid out whether it’s our industry, the entertainment industry, or just, you know, as humanity at large deals with this.’