Meta exempts top advertisers from its standard content moderation process


Meta has exempted some of its top advertisers from its standard content moderation process, shielding its multibillion-dollar business from the risk that the company's systems could mistakenly penalize top brands.

According to internal documents from 2023 reported by the Financial Times, the Facebook and Instagram owner introduced a series of "safeguards" that "protect high spenders".

The previously unreported memos said that, depending on how much an advertiser spends on the platform, Meta will suppress some automated enforcement, and that some top advertisers will be reviewed by humans instead.

One document said a group of advertisers spending more than $1,500 a day "are exempt from advertising restrictions" but are still "eventually sent to manual human review".

The memos have come to light in the same week that chief executive Mark Zuckerberg announced Meta is ending its third-party fact-checking program and scaling back automated content moderation, as Donald Trump prepares to return to the presidency.

The 2023 documents suggest Meta's automated systems had incorrectly flagged some of its highest-spending accounts for violating company rules.

The company told the FT it had seen false flags of violations that could disproportionately affect high-spending accounts. It did not respond to questions about whether the measures described in the documents were temporary or ongoing.

Ryan Daniels, a Meta spokesperson, called the FT's reporting "simply inaccurate", saying it was based on a cherry-picked reading of documents relating to an effort the company has publicly identified: preventing mistakes in enforcement.

Advertising accounts for the vast majority of Meta's annual revenue, which was nearly $135 billion in 2023.

The tech giant uses a combination of artificial intelligence and human moderators to review ads for violations of its standards, in an effort to block content such as scams or harmful material.

In a document titled "high spender mistake prevention", Meta said it had seven safeguards in place protecting business accounts that generate more than $1,200 in revenue over a 56-day period, as well as individual users who spend more than $960 on advertising over the same period.

It said the safeguards help the company "decide whether a detection can proceed to enforcement" and are "designed to suppress detections . . . based on characteristics such as the level of ad spend".

As an example, it cited businesses in the "top 5 percent of revenue".

Meta told the FT it uses "higher spend" as a safeguard because it often means a company's ads have greater reach, and the consequences of mistakenly removing a company or its ads can be more severe.

The company acknowledged it had prevented some high-spending accounts from being automatically disabled, instead sending them for human review, when it had concerns about the accuracy of its systems.

However, it said all businesses were still subject to the same advertising standards and no advertiser was exempt from its rules.

In the "high spender mistake prevention" memo, the company defined its different categories of safeguards as offering "low", "medium" or "high" protection.

Meta staff labeled the spend-based safeguards as offering "low" protection.

Other safeguards, such as using knowledge of a business's integrity history to help decide whether to take automated action on detected policy violations, were labeled "high" protection.

Meta said that although the word "protection" could be misinterpreted, it referred to how defensible each safeguard would be when explained to stakeholders.

The 2023 documents do not name the top spenders covered by the company's safeguards, but they suggest the spending thresholds could exempt thousands of advertisers from the normal moderation process.

According to estimates from market intelligence firm Sensor Tower, the top 10 US ad spenders on Facebook and Instagram include Amazon, Procter & Gamble, Temu, Shein, Walmart, NBCUniversal and Google.

Meta has posted record revenues in recent quarters, and its stock is trading near all-time highs as it recovers from a post-pandemic slump in the global ad market.

But Zuckerberg has warned of threats to its business, including the rise of AI and the growing popularity of ByteDance-owned rival TikTok among younger users.

A person familiar with the documents argued that the company was putting "revenue and profits ahead of user integrity and health", adding that there were concerns about the circumventing of the normal moderation process.

Zuckerberg said on Tuesday that the complexity of Meta's content moderation system had introduced "too many mistakes and too much censorship".

His comments came after Trump last year accused Meta of censoring conservative speech and threatened that Zuckerberg would "spend the rest of his life in prison" if the company meddled in the 2024 election.

The internal documents also show Meta explored further exemptions for certain high-spending advertisers.

In a separate memo, Meta staff proposed giving "stronger protection" against over-enforcement to so-called "platinum and gold spenders", who together account for more than half of advertising revenue.

"False positive enforcement against high value advertisers costs Meta revenue (and) damages our credibility," the memo read.

It suggested the option of exempting these advertisers from certain enforcement except in “very rare” cases.

According to the memo, however, staff concluded that platinum and gold spenders were "not an appropriate cohort" for broad exemptions because, based on the company's tests, approximately 73 percent of enforcement actions against them were valid.

The internal documents also show that Meta found several likely AI-generated accounts within its top-spender categories.

Meta has previously come under scrutiny for carving out exemptions for high-profile users. In 2021, documents leaked by Facebook whistleblower Frances Haugen showed the company had an internal system called "cross-check", designed to review content from politicians, celebrities and journalists to make sure posts were not mistakenly removed.

According to Haugen's documents, this was used to shield some users from enforcement even when they violated Facebook's rules, a practice known as "whitelisting".

Meta's Oversight Board, a company-funded but independent "supreme court"-style body that reviews its most difficult moderation decisions, found that the cross-check system had left dangerous content online, and called for an overhaul of the system, which Meta began.
