Eric Schmidt argues against a ‘Manhattan Project for AGI’


In a policy paper published Wednesday, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety director Dan Hendrycks said that the United States should not pursue a Manhattan Project-style push to develop AI systems with "superhuman" intelligence, also known as AGI.

The paper, titled "Superintelligence Strategy," argues that an aggressive bid by the United States to exclusively control superintelligent AI systems could prompt fierce retaliation from China, potentially in the form of a cyberattack, which could destabilize international relations.

"[A] Manhattan Project [for AGI] assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the co-authors wrote. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."

Co-authored by three highly influential figures in America's AI industry, the paper comes just months after a U.S. congressional commission proposed a "Manhattan Project-style" effort to fund AGI development, modeled on America's atomic bomb program of the 1940s. U.S. Energy Secretary Chris Wright recently said the United States is at "the start of a new Manhattan Project" on AI while standing in front of a supercomputer site alongside OpenAI co-founder Greg Brockman.

The Superintelligence Strategy paper challenges the idea, championed in recent months by a number of American policy and industry leaders, that a government-backed program pursuing AGI is the best way to compete with China.

In the view of Schmidt, Wang, and Hendrycks, the United States finds itself in an AGI standoff not unlike mutually assured destruction. Just as global powers do not seek monopolies over nuclear weapons, since doing so could trigger a preemptive strike from an adversary, Schmidt and his co-authors argue that the United States should be cautious about racing toward dominance over extremely powerful AI systems.

While likening AI systems to nuclear weapons may sound extreme, world leaders already consider AI a top military advantage. The Pentagon has already said that AI is helping speed up the military's kill chain.

Schmidt et al. introduce a concept they call Mutual Assured AI Malfunction (MAIM), in which governments could proactively disable threatening AI projects rather than wait for adversaries to weaponize AGI.

Schmidt, Wang, and Hendrycks propose that the United States shift its focus from winning the race to superintelligence toward developing methods that deter other countries from creating superintelligent AI. The co-authors argue that the government should "expand [its] arsenal of cyberattacks to disable threatening AI projects" controlled by other nations, and should also restrict adversaries' access to advanced AI chips and open-source models.

The co-authors identify a dichotomy that has played out in the AI policy world. On one side are the "doomers," who believe catastrophic outcomes from AI development are a foregone conclusion and advocate slowing AI progress. On the other are the "ostriches," who believe nations should accelerate AI development and essentially just hope it all works out.

The paper proposes a third way: a measured approach to developing AGI that prioritizes defensive strategies.

The strategy is particularly notable coming from Schmidt, who has previously been vocal about the need for the United States to compete aggressively with China in developing advanced AI systems. Just a few months ago, Schmidt published an op-ed saying DeepSeek marked a turning point in America's AI race with China.

The Trump administration appears dead set on pushing ahead with America's AI development. However, as the co-authors note, America's decisions around AGI do not exist in a vacuum.

As the world watches the United States push the limits of AI, Schmidt and his co-authors suggest it may be wiser to take a defensive approach.
