Why You Shouldn’t Rely On ChatGPT Search For Accurate Information

OpenAI's new AI search engine has partnered with major news publications, but don't rely on it for accurate results.

Just a month after OpenAI released its inaugural AI search engine – SearchGPT – the tool is already attracting scrutiny for failing to answer queries and identify sources correctly.

This comes after the AI powerhouse made a fuss about partnering with major publications like Vogue, The New Yorker, and The Atlantic to ensure information was reliable, up to date, and obtained in a way that supported independent journalism.

Unfortunately, this isn’t the first time ChatGPT has been accused of being liberal with the truth. We explore what might be at the root of the AI search engine’s bluffing problem, and discuss whether publications are better off making a deal with the devil or continuing to block ChatGPT’s controversial web crawlers.

SearchGPT Is Inaccurate, Even When It’s Working With Partnered Publications

Since OpenAI first teased the release of SearchGPT, the AI-powered search tool has been heralded as the next Google or Bing competitor.

By using AI to summarize web sources instead of returning a traditional list of links, the new search tool set out to challenge the status quo by providing users with more conversational, targeted answers. The AI powerhouse claimed it was going to do so ethically too, by “collaborating extensively with the news industry” and “carefully listening to feedback” from publishers.

Now that SearchGPT has been out for a month, has the new feature lived up to its promises? Researchers from the Columbia Journalism Review don’t seem to think so. After extensively testing the AI search tool, they found that it frequently produced inaccurate and unsubstantiated results – even when citing publishers OpenAI has a deal with, such as The Atlantic, Axel Springer, and The Wall Street Journal.

In addition to frequently churning out incorrect answers, SearchGPT also failed to attribute quotes to sources correctly. When the researchers tasked the chatbot with identifying the sources of 200 quotes from 20 publications, it was unable to do so correctly on 153 occasions. ChatGPT Search was occasionally slightly better at identifying sources from partnered publications than from those OpenAI has a neutral relationship with, or those that have actively blocked its search crawler, but not every time.

SearchGPT Can’t Admit When It’s Wrong

Even the most novice AI user knows to take the responses generated by these tools with a pinch of salt. But even accepting that no AI chatbot is a barometer of truth, why is OpenAI’s search tool guilty of so many inaccuracies?

For one, researchers at the Columbia Journalism Review noticed that SearchGPT rarely admitted when it wasn’t able to give a correct answer. Instead, the chatbot would fill in the gaps by churning out partially or fully incorrect responses. For instance, sometimes the tool would cite the wrong publisher and the wrong date, and sometimes the entire response was erroneous.

Problematically, SearchGPT rarely acknowledged when it was doing this, either. The chatbot only used qualifying phrases like “might”, “it’s possible”, or “I couldn’t locate the exact article” on seven of the 153 occasions it gave a wrong answer, making it difficult for users to know when the information shouldn’t be trusted.

ChatGPT’s Underlying AI Model Is Responsible for Errors

According to the analysis by the Columbia Journalism Review, the web crawlers ChatGPT uses to access data seem to be performing as intended. Instead, it’s the chatbot’s underlying AI model that may be at fault. ChatGPT’s large language model (LLM) has consistently been known to hallucinate and provide inaccurate responses – from incorrect medical diagnoses to fabricated political statements – especially when it’s trying to resolve queries that are unusual or based on recent, developing events.

Yet the results from ChatGPT’s search engine highlight that the chatbot’s underlying AI is still capable of making mistakes, even with licensed access to content. But what does this mean for journalism? As the intellectual property battle between publishers and AI companies rages on, media agencies are still at risk of getting their information scraped whether they like it or not. However, this analysis suggests that partnering with companies like OpenAI may not be the answer either, since tools like SearchGPT aren’t even capable of accurately crediting material to the right source.
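For publishers weighing the blocking route, the mechanism is usually a plain robots.txt file. As a rough sketch (assuming the user-agent tokens OpenAI currently documents, GPTBot for training data and OAI-SearchBot for search results; publishers should verify these against OpenAI’s own bot documentation), a site that wants out would add something like:

    User-agent: GPTBot
    Disallow: /

    User-agent: OAI-SearchBot
    Disallow: /

The trade-off, of course, is that blocking the search crawler also removes a site from SearchGPT’s results entirely, which is precisely the visibility-versus-control choice this analysis complicates.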

For the casual AI user, if you’re looking for an AI-powered Google alternative, we’d recommend using Perplexity AI over SearchGPT. Not only does the tool include useful links to sources (accurately, we should add), but it also accompanies the text with relevant images, in a similar way to Google Search.

Source: tech.co
