X is piloting a program that lets AI chatbots generate Community Notes


Social platform X is piloting a feature that allows AI chatbots to generate Community Notes.

Community Notes is a Twitter-era feature that Elon Musk has expanded under his ownership of the service, now known as X. Users enrolled in this fact-checking program can contribute comments that add context to specific posts; those notes are then vetted by other users before they appear attached to a post. A community note might, for example, appear on an AI-generated video post that is not clear about its synthetic origins, or as an addendum to a misleading post from a politician.

Notes become public when they reach consensus among groups that have historically disagreed on past ratings.
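X's production scorer is based on matrix factorization over the rating history, but the core "bridging" idea above can be illustrated with a deliberately simplified sketch. Everything here (function names, the binary left/right leaning, the 0.5 threshold) is a hypothetical toy model for illustration, not X's actual algorithm:

```python
# Illustrative sketch of "bridging" consensus: a note goes public only
# when raters who historically disagree BOTH find it helpful.
# This simplified model is hypothetical; X's real scorer uses matrix
# factorization over rating history, not a hard-coded threshold.

def note_goes_public(ratings, rater_leaning):
    """ratings: {rater_id: True if rated helpful}
    rater_leaning: {rater_id: -1 or +1}, inferred from each rater's
    history of past ratings (the two historically opposed groups)."""
    helpful_by_side = {-1: 0, +1: 0}
    total_by_side = {-1: 0, +1: 0}
    for rater, helpful in ratings.items():
        side = rater_leaning[rater]
        total_by_side[side] += 1
        helpful_by_side[side] += helpful
    # Require majority support from BOTH historically opposed groups,
    # not just an overall majority.
    return all(
        total_by_side[s] > 0 and helpful_by_side[s] / total_by_side[s] > 0.5
        for s in (-1, +1)
    )

ratings = {"a": True, "b": True, "c": True, "d": False}
leaning = {"a": -1, "b": -1, "c": +1, "d": +1}
print(note_goes_public(ratings, leaning))  # one side is split 1/1 -> False
```

The point of the design is that a note with strong support from only one side never ships, which is what distinguishes this from simple majority voting.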

Community Notes has been successful enough to inspire Meta, TikTok, and YouTube to pursue similar initiatives; Meta eliminated its third-party fact-checking programs entirely in exchange for this low-cost, community-sourced labor.

It remains to be seen, however, whether the use of AI chatbots as fact-checkers will prove helpful or harmful.

These AI notes can be generated using X's Grok or other AI tools connected to X via an API. Any note submitted by an AI will be treated the same as a note submitted by a person, which means it will go through the same vetting process intended to encourage accuracy.
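To make the flow concrete, here is a hedged sketch of how a third-party AI tool might package a note for submission. The endpoint URL, payload fields, and `author_type` flag are all invented for illustration; X's actual AI Note Writer API is not documented here:

```python
# Hypothetical sketch only: the endpoint, fields, and auth scheme below
# are invented for illustration and are NOT X's documented API.
import json

def build_ai_note_request(post_id: str, note_text: str, token: str) -> dict:
    """Assemble a hypothetical POST request for an AI-drafted note.
    The note is flagged as AI-authored so that, per the article, it
    enters the same vetting queue as human-written notes."""
    return {
        "url": "https://api.example.com/2/community_notes",  # placeholder URL
        "method": "POST",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "post_id": post_id,
            "note": note_text,
            "author_type": "ai",  # same vetting path as a human submission
        }),
    }

req = build_ai_note_request("12345", "This video appears AI-generated.", "TOKEN")
print(req["method"])  # POST
```

The key detail the article emphasizes is not the transport but the equal treatment: nothing in the scoring pipeline is skipped just because the author is a bot.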

The use of AI seems dubious given how common it is for AI models to hallucinate, inventing context that is not grounded in reality.

Image Credits: X Community Notes research

According to a paper published this week by researchers working on X Community Notes, the recommendation is for humans and LLMs to work in tandem. Human feedback can improve note generation through reinforcement learning, with human note raters remaining as a final check before notes are published.
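The pipeline the paper describes can be sketched as a loop: an LLM drafts a note, human raters act as the final gate, and their ratings double as a reward signal for future drafting. This is a minimal structural sketch under my own assumptions; the class and method names are hypothetical placeholders, not the researchers' code, and the LLM call and community ratings are stubbed out:

```python
# Minimal sketch of a human-in-the-loop note pipeline: LLM drafts,
# humans rate as the final check, ratings feed back as an RL reward.
# All names are hypothetical placeholders; the model call and the
# community-rating aggregation are stubbed for illustration.
import random
from dataclasses import dataclass, field

@dataclass
class NotePipeline:
    reward_log: list = field(default_factory=list)

    def draft_note(self, post: str) -> str:
        # Placeholder for an LLM call (e.g. Grok or a third-party model).
        return f"Context for: {post}"

    def human_rate(self, note: str) -> float:
        # Placeholder for aggregated community ratings in [0, 1].
        return random.random()

    def step(self, post: str, publish_threshold: float = 0.7):
        note = self.draft_note(post)
        score = self.human_rate(note)
        # Human ratings double as the reinforcement-learning reward,
        # so the drafting model can be tuned on community feedback.
        self.reward_log.append((note, score))
        published = score >= publish_threshold
        return note, published
```

The design choice worth noting is that humans sit at both ends of the loop: they gate publication and they generate the training signal, which is what the paper's "virtuous loop" refers to.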

“The goal is not to create an AI assistant that tells users what to think, but to build an ecosystem that empowers humans to think more critically and understand the world better,” the paper says. “LLMs and humans can work together in a virtuous loop.”

Even with human checks, there is still a risk of relying too heavily on AI, especially since users will be able to plug in LLMs from third parties. OpenAI’s ChatGPT, for example, recently struggled with a model that was overly sycophantic. If an LLM prioritizes being “helpful” over accurately completing a fact-check, AI-generated comments could end up flatly inaccurate.

There is also concern that human raters will be overloaded by the volume of AI-generated comments, sapping their motivation to adequately complete this volunteer work.

Users shouldn’t expect to see AI-generated Community Notes just yet. X plans to test these AI contributions for a few weeks before rolling them out more broadly if they prove successful.
