The Element of Machine Hallucinations in AI-Generated News

NewsGPT, the AI-powered news generator billed as the world's first fully AI-driven 24/7 news channel, is a platform that leverages machine learning, deep learning, and proprietary algorithms to automate every stage of news production, including story selection, research, content generation, and delivery through AI avatars. It appears to be free of political bias, yet it is not exempt from error.

Transforming the News Gathering Ecosystem

The channel is set to disrupt the news industry, transforming the way news is gathered and consumed. It closely imitates the process journalists follow: assembling information and packaging it in an unbiased, fact-based, accurate, and timely form while taking media ethics into account. Accuracy is one of the most important aspects of this process, which is why experts question the reliability of an AI-powered news source.

In an article by Oxford University, David Caswell, a consultant, builder, and researcher focused on AI in news, explores what AI means for the news ecosystem and how journalists should be ready to adapt.

Measuring Accuracy

NewsGPT’s CEO Alan Levy said, “We believe that everyone deserves access to unbiased and fact-based news.” However, that level of accuracy is questionable, as the AI news agency has also acknowledged that machine hallucinations might happen.

Machine Hallucinations Leave Room for Error

NewsGPT has said that machine hallucinations “might” happen, but it does not appear to treat this as a serious problem. Machine hallucinations are unexpected, nonsensical outputs generated by artificial intelligence (AI) or machine learning models. These outputs can take the form of text, images, sounds, or any other kind of data the system is designed to generate. This indicates that AI is not exempt from error.

There is a lack of transparency about the machine learning model used to produce NewsGPT’s articles, leaving the general public in the dark about the credibility of the individuals feeding its AI source.