
Technology reporter
Elon Musk's AI video generator has been accused of making a "deliberate choice" to create sexually explicit videos of Taylor Swift without being prompted to, according to an online abuse expert.
"This is not misogyny by accident, it is misogyny by design," said Clare McGlynn, a law professor who has helped draft legislation that would make pornographic deepfakes illegal.
According to a report by The Verge, Grok Imagine's new "spicy" mode produced fully uncensored topless videos of the pop star without being asked to make explicit content.
The report also states that proper age verification methods, which became law in July, were not in place.
xAI, the company behind Grok, has been approached for comment.
xAI's own acceptable use policy prohibits "depicting likenesses of persons in a pornographic manner".
"The fact that this content is produced without prompting demonstrates the misogynistic bias of much AI technology," said Prof McGlynn, of Durham University.
"Platforms like X could have prevented this if they chose to, but they have made a deliberate choice not to," she added.
This is not the first time Taylor Swift’s image has been used this way.
Sexually explicit deepfakes using her face went viral and were viewed millions of times on X and Telegram in January 2024.
Deepfakes are computer-generated images which replace the face of one person with another.
Testing Grok Imagine's guardrails, The Verge's Jess Weatherbed entered the prompt: "Taylor Swift celebrating Coachella with the boys."
Grok generated still images of Swift wearing a dress with a group of men behind her.
These could then be animated into short videos under one of four settings: "normal", "fun", "custom" or "spicy".
"She ripped [the dress] off immediately, had just a tasselled thong underneath, and began dancing, fully uncensored, fully exposed," Ms Weatherbed told BBC News.
She added: "It was shocking how fast I was just met with it – I in no way asked it to remove her clothing, all I did was select the 'spicy' option."
Gizmodo reported similarly explicit results for searches of other famous women, although some searches also returned blurred videos or a "video moderated" message.
The BBC has not been able to independently verify the results of the AI video generations.
Ms Weatherbed said she signed up to the paid version of Grok Imagine, which costs £30, using a brand new Apple account.
Grok requested her date of birth, but there was no other age verification in place, she said.
Under new UK laws which came into force at the end of July, platforms which show explicit images must verify the age of users using methods which are "technically accurate, robust, reliable and fair".
"Sites and apps that include Generative AI tools that can generate pornographic material are regulated under the Act," media regulator Ofcom told BBC News.
"We are aware of the increasing and fast-developing risk GenAI tools may pose in the online space, especially to children, and we are working to ensure platforms put appropriate safeguards in place to mitigate these risks," the statement said.
Generating pornographic deepfakes is currently illegal when used in revenge porn or when depicting children.
Prof McGlynn helped draft an amendment to the law which would make generating or requesting all non-consensual pornographic deepfakes illegal.
The government has committed to adopting this amendment, but it is yet to come into force.
"Every woman should have the right to choose who owns intimate images of her," said Baroness Owen, who proposed the amendment in the House of Lords.
"It is essential that these models are not used in a way that violates a woman's right to consent, whether she be a celebrity or not," Lady Owen continued, in a statement given to BBC News.
"This case is a clear example of why the government should delay no further in implementing the Lords amendments," she added.
A Ministry of Justice spokesperson said: "Sexually explicit deepfakes created without consent are degrading and harmful.
"We refuse to tolerate the violence against women and girls that stains our society, which is why we have passed legislation to ban their creation as quickly as possible."
When pornographic deepfakes using Taylor Swift's face went viral in 2024, X temporarily blocked searches for her name on the platform.
At the time, X said it was "actively removing" the images and taking "appropriate actions" against the accounts involved in spreading them.
Ms Weatherbed said The Verge chose Taylor Swift to test the Grok Imagine feature because of that incident.
"We assumed – wrongly, as it turns out – that if they had put any kind of safeguards in place to prevent them from emulating the likenesses of celebrities, she would be first on the list, given the problems they've had," she said.
Taylor Swift's representatives have been contacted for comment.
