
Meta CEO Mark Zuckerberg has pledged to make artificial general intelligence (AGI), roughly defined as AI that can accomplish any task a human can, openly available one day. But in a new policy document, Meta suggests there are certain scenarios in which it may not release a highly capable AI system it has developed internally.
The document, which Meta calls its Frontier AI Framework, identifies two types of AI systems the company considers too risky to release: "high-risk" and "critical-risk" systems.
As Meta defines them, both high-risk and critical-risk systems are capable of aiding in cyberattacks and chemical and biological attacks. The difference is that critical-risk systems could result in a "catastrophic outcome [that] cannot be mitigated in [the] proposed deployment context." High-risk systems, by contrast, might make an attack easier to carry out, but not as reliably or dependably as a critical-risk system.
What kinds of attacks are we talking about here? Meta gives a few examples, such as the automated end-to-end compromise of a best-practice-protected corporate-scale environment and the proliferation of high-impact biological weapons. The list of possible catastrophes in the document is far from exhaustive, the company acknowledges, but it covers those that Meta believes are the most urgent and most plausible to arise directly from releasing a powerful AI system.
Somewhat surprisingly, according to the document, Meta classifies a system's risk not on the basis of any single empirical test but as informed by the input of internal and external researchers, subject to review by "senior-level decision-makers." Why? Meta says it does not believe the science of evaluation is robust enough to provide definitive quantitative metrics for deciding how risky a system is.
If Meta determines that a system is high-risk, the company says it will limit access to the system internally and will not release it until mitigations are in place to "reduce risk to moderate levels." If, on the other hand, a system is deemed critical-risk, Meta says it will implement unspecified security protections to prevent the system from being exfiltrated and will pause development until the system can be made less dangerous.
Meta's Frontier AI Framework, which the company says will evolve with the changing AI landscape and which Meta had committed to publishing ahead of this month's France AI Action Summit, appears to be a response to criticism of the company's "open" approach to system development. Meta has embraced a strategy of making its AI technology openly available (although it is not open source by the commonly accepted definition), in contrast to companies like OpenAI that choose to gate their systems behind an API.
For Meta, the open-release approach has proven to be both a blessing and a curse. The company's family of AI models, called Llama, has racked up millions of downloads. But Llama has also reportedly been used by at least one U.S. adversary to develop a defense chatbot.
In publishing its Frontier AI Framework, Meta may also be aiming to contrast its open AI strategy with that of the Chinese AI firm DeepSeek. DeepSeek also makes its systems publicly available, but its AI has few safeguards and can easily be steered to produce toxic and harmful outputs.
"[W]e believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI," Meta writes in the document, "it is possible to deliver that technology to society in a way that preserves the benefits of that technology to society while also maintaining an appropriate level of risk."
TechCrunch has an AI-focused newsletter! Sign up here to get it in your inbox every Wednesday.