
Search results for: "Eliezer Yudkowsky"


9 mentions found


The New York Times' "who's who" in AI, published Sunday, has been slammed for featuring zero women. "Godmother of AI" Fei-Fei Li criticized the list, writing, "It's not about me, but all of us in AI." "You literally erased all the heavy hitting women of AI and but included people who are more 'influencers,'" wrote Daneshjou. The New York Times did not immediately respond to a request for comment from Business Insider, sent outside regular business hours.
Persons: Fei-Fei Li, Kara Swisher, Daneshjou, Elon Musk Organizations: New York Times
Reid Hoffman dismissed efforts to pause AI development in an interview at CogX Festival. Hoffman compared the development of AI to cars, which also posed many risks and dangers at first. Hoffman pointed to other powerful technologies built in the past as an example of why the letter didn't have a logical basis. "When we built the car, we didn't know about safety belts, we didn't know about window washers, we didn't know about the crumple zone," Hoffman told the audience. Experts are divided in their opinions about the rapid development of AI.
Persons: Reid Hoffman, Elon, Hoffman, Elon Musk, Steve Wozniak, Pinterest, Evan Sharp, Emad Mostaque, we'd, Eliezer Yudkowsky Organizations: CogX, Service, Apple, Greylock Partners Locations: Wall, Silicon, London, OpenAI
A former OpenAI researcher is concerned AI poses a risk to humanity. Paul Christiano said during a podcast that there was "maybe a 10-20% chance of AI takeover." "I think maybe there's something like a 10-20% chance of AI takeover, [with] many [or] most humans dead," Paul Christiano said during an appearance on the Bankless podcast. "Overall, maybe you're getting more up to a 50/50 chance of doom shortly after you have AI systems that are human level," he said. Recently a group of AI experts signed an open letter that called for a 6-month pause on advanced AI development.
But "you do at some point need to start having contact with reality," he told Insider. The plan was still only a rough sketch, Blania told Insider, but that didn't seem to matter to his host. "He always wanted to understand everything at a very deep level," Thrun told Insider in an email. (When asked about guns, Altman told Insider he'd been "happy to have one both times my home was broken into while I was there.") When asked about this, Altman told Insider in an email: "i can guess what that's about; these stories grow crazily inflated over the years of getting re-told!"
Altman told Insider, "We debate our approach frequently and carefully." "I don't think anyone can lose your dad young and wish he didn't have more time with him," Altman told Insider. Altman told Insider that his thinking had evolved since those posts.
Seemingly overnight, episodes of Fridman's podcast began racking up millions of views. In his podcast, Fridman asks world-renowned scientists, historians, artists, and engineers a series of wide-eyed questions ("Who is God?"). But recently, "The Lex Fridman Podcast" has become a haven for a growing — and powerful — sector looking to dismantle years of "wokeness" and cancel culture. "The Lex Fridman Podcast" offered a rare opportunity to listen to four-hour conversations with luminaries of tech and science. Bhaskar Sunkara, the founder and publisher of the socialist magazine Jacobin who appeared on Fridman's podcast in December, praised Fridman's interviewing style.
One AI researcher who has been warning about the tech for over 20 years said to "shut it all down." Eliezer Yudkowsky said the open letter calling for a pause on AI development doesn't go far enough. Yudkowsky, who has been described as an "AI doomer," suggested an "indefinite and worldwide" ban. The letter, signed by 1,125 people including Elon Musk and Apple's co-founder Steve Wozniak, requested a pause on training AI tech more powerful than OpenAI's recently launched GPT-4. Yudkowsky instead suggested a ban that is "indefinite and worldwide" with no exceptions for governments or militaries.
An economics professor was stunned by the progress ChatGPT made in an exam in just three months. Bryan Caplan of George Mason University said the chatbot got a D in his economics test in January. Bryan Caplan, an economics professor at George Mason University, told Insider the latest version of ChatGPT could now be responsible for the first big bet he's ever lost. Caplan told Insider the bot failed to understand basic concepts, such as the principle of comparative and absolute advantage. "I'm probably going to lose this AI bet but I am totally on board to do a bunch more end-of-the-world AI bets because I think these people are out of their minds."
If the super-powerful AI is aligned with humans, it could be the end of hunger or work. Or, as a sign at the Misalignment Museum says: "Sorry for killing most of humanity." Most of the works are around the theme of "alignment" with increasingly powerful artificial intelligence or celebrate the "heroes who tried to mitigate the problem by warning early." As AI technology becomes the hottest part of the tech industry, with companies eyeing trillion-dollar markets, the Misalignment Museum underscores that AI's development is being affected by cultural discussions. Even as companies and people in San Francisco are shaping the future of artificial intelligence technology, San Francisco's unique culture is shaping the debate around the technology.
Total: 9