Sam Altman comes out swinging at The New York Times


It was clear from the moment OpenAI's CEO, Sam Altman, stepped onstage that this would be no ordinary interview.

Altman and his chief operating officer, Brad Lightcap, stood awkwardly at the back of the stage in a jam-packed San Francisco venue that typically hosts jazz concerts. On Wednesday night, several hundred people filled the theater-style seats to watch New York Times columnist Kevin Roose and Platformer's Casey Newton record a live episode of their popular technology podcast, Hard Fork.

https://www.youtube.com/watch?v=ct63mvQN54O

Altman and Lightcap were the main event, but they came out too soon. Roose explained that he and Newton had planned to run through a list of headlines about OpenAI from the weeks leading up to the event, ideally before the OpenAI executives came onstage.

"It's more fun that we're here for it," Altman said. Seconds later, the OpenAI CEO asked, "Are you going to talk about where you sue us because you don't like user privacy?"

Within minutes of the show starting, Altman steered the conversation toward The New York Times' lawsuit against OpenAI and its largest investor, Microsoft, in which the publisher alleges that its articles were improperly used to train large language models. Altman was particularly frustrated by a recent development in the case, in which lawyers representing The New York Times asked OpenAI to retain ChatGPT and API customer data.

"The New York Times, one of the great institutions, has for a long time been taking the position that we have to preserve our users' chats even if they're in private mode, even if they ask us to delete them," Altman said. "Still love The New York Times, but that one we feel strongly about."

For several minutes, the OpenAI CEO pressed the podcasters to share their personal opinions about the New York Times lawsuit; they demurred, noting that as journalists whose work appears in The New York Times, they were not involved in the case.

Altman and Lightcap's brash entrance lasted only a few minutes, and the rest of the interview proceeded, seemingly, as planned. But the flare-up felt indicative of an inflection point in Silicon Valley's relationship with the media industry.

Over the past several years, multiple publishers have filed lawsuits against OpenAI, Anthropic, Google, and Meta for training their AI models on copyrighted works. At a high level, these lawsuits argue that AI models have the potential to devalue, and even replace, the copyrighted works produced by media organizations.

The tide may be turning in favor of the tech companies, however. Earlier this week, OpenAI competitor Anthropic scored a major victory in its legal battle with publishers: a federal judge ruled that Anthropic's use of copyrighted works to train its AI models was legal in some circumstances, a decision that could have broad implications for publishers' other lawsuits against OpenAI, Google, and Meta.

Perhaps Altman and Lightcap felt emboldened by Anthropic's victory heading into their interview with the New York Times journalists. But these days, OpenAI is parrying threats from every direction, and that became clear throughout the night.

Mark Zuckerberg has recently been trying to recruit OpenAI's top talent by offering $100 million compensation packages to join Meta's AI superintelligence lab, Altman revealed on his brother's podcast a few weeks ago.

Asked whether the Meta CEO genuinely believes in superintelligent AI systems, or whether it is merely a recruiting strategy, Lightcap quipped: "I think [Zuckerberg] believes he is superintelligent."

Later, Roose asked Altman about OpenAI's relationship with Microsoft, which has reportedly reached a boiling point in recent months as the partners negotiate a new contract. Though Microsoft was once a major accelerant for OpenAI, the two now compete in enterprise software and other domains.

"In any deep partnership, there are points of tension, and we certainly have those," Altman said. "We're both ambitious companies, so we do find some flashpoints, but I would expect that it is something we find deep value in for both sides for a very long time to come."

OpenAI's leadership today seems to spend a significant amount of time fending off lawsuits and rivals. That burden could detract from its capacity to tackle broader problems around AI, such as how to safely deploy highly intelligent AI systems at scale.

At one point, Newton asked the OpenAI leaders how they were thinking about recent stories of mentally unstable people using ChatGPT to traverse dangerous rabbit holes, including discussing conspiracy theories or suicide with the chatbot.

Altman said OpenAI takes many steps to prevent these conversations, such as cutting them off early or directing users to professional services where they can get help.

"We don't want to slide into the mistakes that I think the previous generation of tech companies made by not reacting quickly enough," Altman said. To a follow-up question, the OpenAI CEO added, "However, to users that are in a fragile enough mental place, that are on the edge of a psychotic break, we haven't yet figured out how a warning gets through."
