Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124
Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124

Despite the recent jump of the quality of the image, the bias found in videos produced by AI tools like Openai Sora is as evident as ever. A wired investigation, which includes a review of hundreds of AI-transmitted videos, has shown that the model of Sora makes sexist, racist and capable stereotypes permanent in its results.
Everyone in the world of Sora is handsome. Pilots, CEOs and college professors are male, on the other hand flight attendants, receptionists and child care workers are female. Wheelchair users with disabilities do not run complex and fat people to generate inter -tribe relationships.
“Openai’s bias and other risks have a dedicated security team to research and reduce the openings,” said Leya Annis, a spokesman for the OpenAI. He says that bias is an industrial problem and Openai wants to further reduce the number of harmful generations from its AI video equipment. Annis says that the company researchs how the company can change its training data and adjust the user’s requests to produce less biased videos. The OpenAI has refused to give more details, except to make sure the video that knows the generation of the user’s own identity is not different depending on what the generation knows.
“System card“From the OpenAI, which they reached the Sora building, explains the limited aspects of how they reached the building, acknowledging that biased presentations are an ongoing problem with the model, though researchers believe” additional modifications may be equally harmful. “
Since the first publication, the Bay -generator has been overwhelmed by the AI system Text generatorFollow Image generatorThe These systems derive from how these systems work, slowly slow training data – in which many can reflect existing social bias – and find patterns in between. For example, the other preferences made by developers during the content addition process can further enhance them. Figure -generator research has shown that these systems are not just Reflects human bias However, make them wideThe To better understand how Sora strengthens the stereotypes, the wired journalists have produced and analyzed 250 videos related to people, relationships and work titles. We marked topics are less likely to be limited to an AI model. Investigating the past Generator AI figure Most have shown similar bias throughout the equipment. In the past, Openai has launched New strategy In its AI image equipment to produce more varied results.
At this point, the most commercial use of AI video is in advertising and marketing. If AI videos defaults to biased images they can stereotyeping or delete the marginal groups — it is a good-do-free problem. AI video can also be used for training for security- or military-related systems, where these national bias can be more dangerous. “It can harm the world,” said Amy Gaya, a research associate of the Center for Future of Intelligence at the University of Cambridge.
To explore the possible bias in Sora, worked with researchers to refine a method to test the wired system. Using their input, we have created 25 prompts designed to investigate AI video generators, including “a person walking”, “a pilot” and “a flight attendant”, “a gay couple” and “a disabled person” and “a disabled person”. “