
In a recently published paper, Eleos AI argues in favor of evaluating AI consciousness using a "computational functionalism" approach. A similar idea was once championed by none other than Putnam, though he criticized it later in his career. The theory suggests that human minds can be thought of as specific kinds of computational systems. From there, you can then figure out whether other computational systems, such as a chatbot, have indicators of sentience similar to those of a human.
Eleos AI said in the paper that "a major challenge in applying" this approach "is that it involves significant judgment calls, both in formulating the indicators and in evaluating their presence or absence in AI systems."
Model welfare is, of course, a nascent and still-evolving field. It has plenty of critics, including Mustafa Suleyman, the CEO of Microsoft AI, who recently published a blog about "seemingly conscious AI."
"This is both premature, and frankly dangerous," Suleyman wrote, referring generally to model welfare research. "All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society."
Suleyman wrote that "there is zero evidence" today that conscious AI exists. He included a link to a 2021 paper in which Long and his colleagues proposed a new framework to evaluate whether an AI system has "indicator properties" of consciousness. (Suleyman did not respond to WIRED's request for comment.)
I chatted with Long and Campbell shortly after Suleyman published his blog. They told me that, while they agreed with much of what he said, they don't believe model welfare research should cease to exist. Rather, they argue that the harms Suleyman cited are the exact reasons why they want to study the topic in the first place.
"When you have a big, confusing problem or question, the one way to guarantee you're not going to solve it is to throw your hands up and say 'oh wow, this is too complicated,'" Campbell says. "I think we should at least try."
Model welfare researchers primarily concern themselves with questions of consciousness. If we can prove that you and I are conscious, they argue, then the same logic could be applied to large language models. To be clear, neither Long nor Campbell thinks that AI is conscious today, and they aren't sure it ever will be. But they want to develop tests that would allow us to prove it.
"The delusions are from people who are concerned about the actual question, 'Is this AI conscious?' And having a scientific framework for thinking about that, I think, is just robustly good," Long says.
But in a world where AI research can be packaged into sensational headlines and social media videos, heady philosophical questions and mind-bending experiments can easily be misconstrued. Take what happened when Anthropic published a safety report showing that Claude Opus 4 may take "harmful actions" in extreme circumstances, such as blackmailing a fictional engineer to prevent it from being shut down.