X users treating Grok like a fact-checker spark concerns over misinformation


Some users on Elon Musk's X are turning to Musk's xAI bot Grok for fact-checking, raising concerns among human fact-checkers that it could amplify misinformation.

Earlier this month, X enabled users to call on xAI's Grok and ask it questions on various matters. The move was similar to Perplexity, which runs an automated account on X to offer a comparable experience.

Soon after xAI created Grok's automated account on X, users started experimenting with asking it questions. Some people in markets including India began asking Grok to fact-check comments and questions that target specific political beliefs.

Fact-checkers are concerned about Grok, or any other AI assistant of this sort, being used in this way because the bots can frame their answers to sound convincing even when they are not factually correct. Instances of Grok spreading fake news and misinformation have been seen in the past.

In August last year, five secretaries of state urged Musk to implement critical changes to Grok after misleading information generated by the assistant surfaced on social networks ahead of the U.S. elections.

Other chatbots, including OpenAI's ChatGPT and Google's Gemini, were also seen generating inaccurate information about last year's elections. Separately, disinformation researchers found in 2023 that AI chatbots including ChatGPT could easily be used to produce convincing text carrying misleading narratives.

"AI assistants, like Grok, are really good at using natural language and give an answer that sounds like a human being said it. In that way, AI products have this claim on naturalness and authentic-sounding responses, even when they're potentially very wrong. That would be the danger here," said Angie Holan, director of the International Fact-Checking Network (IFCN) at Poynter.

Grok was asked by a user on X to fact-check claims made by another user

Unlike AI assistants, human fact-checkers use multiple, credible sources to verify information. They also take full accountability for their findings, attaching their names and organizations to ensure credibility.

Pratik Sinha, co-founder of India's non-profit fact-checking website Alt News, says that although Grok currently appears to have convincing answers, it is only as good as the data it is supplied with.

"Who is going to decide what data it gets supplied with, and that is where government interference, etc., will come into the picture," he noted.

"There is no transparency. Anything which lacks transparency will cause harm, because anything that lacks transparency can be molded in any which way."

"Could be misused to spread misinformation"

In one of the responses posted earlier this week, Grok's account on X acknowledged that it "could be misused to spread misinformation and violate privacy."

However, the automated account does not show users any disclaimers when they get its answers, leaving them open to being misinformed if, for instance, it has hallucinated the answer, a known shortcoming of AI.

Grok's response on whether it can spread misinformation (translated from Hinglish)

"It may make up information to provide a response," Anushka Jain, a research associate at the Goa-based multidisciplinary research collective Digital Futures Lab, told TechCrunch.

There are also questions about how much Grok uses posts on X as training data, and what quality-control measures it applies when fact-checking such posts. Last summer, it pushed out a change that appeared to allow Grok to consume X users' data by default.

The other concerning aspect of AI assistants like Grok being accessible through social media platforms is that they deliver information publicly, unlike ChatGPT or other chatbots, which are used privately.

Even if a user is well aware that the information obtained from the assistant could be misleading or not completely correct, others on the platform may still believe it.

This could cause serious social harm. Instances of this were seen earlier in India, when misinformation circulated over WhatsApp led to mob lynchings. However, those severe incidents occurred before the arrival of generative AI, which has made synthetic content even easier to produce and more realistic-looking.

"If you see a lot of these Grok answers, you're going to say, hey, well, most of them are right, and that may be so, but some of them are going to be wrong."

AI vs. real fact-checkers

While AI companies, including xAI, are refining their AI models to make them communicate more like humans, they still are not, and cannot, replace humans.

For the past few months, tech companies have been exploring ways to reduce their reliance on human fact-checkers. Platforms including X and Meta have started to embrace the new concept of crowdsourced fact-checking through so-called Community Notes.

Naturally, such changes also cause concern among fact-checkers.

Sinha of Alt News is optimistic that people will learn to differentiate between machines and human fact-checkers and will come to value human accuracy more.

"We're going to see the pendulum swing back eventually toward more fact-checking," IFCN's Holan said.

However, she noted that in the meantime, fact-checkers will likely have more work to do as AI-generated information spreads swiftly.

"A lot of this issue depends on: do you really care about what is actually true or not? Are you just looking for the veneer of something that sounds and feels true without actually being true? Because that's what AI assistance will get you," she said.

X and xAI did not respond to our request for comment.
