Search results for: "Roman Yampolskiy"


4 mentions found


For example, a recent study conducted with 2,700 AI researchers indicated there's only a 5% chance that AI will lead to human extinction. The AI researcher teaches computer science at the University of Louisville and just came out with a book called "AI: Unexplainable, Unpredictable, Uncontrollable." Yampolskiy said he finds that unlikely since no AI model has been completely safe from people attempting to get the AI to do something it wasn't designed to do. Google AI Overviews, based on Google's Gemini AI model, is the latest product rollout that didn't stick the landing. The CEO of ChatGPT developer OpenAI, Sam Altman, has suggested a "regulatory sandbox" where people experiment with AI and regulate it based on what "went really wrong" and what went "really right."
Persons: Lex Fridman, Roman Yampolskiy, Biden, Sam Altman, Elon Musk, Eric Schmidt Organizations: University of Louisville, Google Locations: Africa
He estimates there's a 10-20% chance AI could destroy humanity but says we should build it anyway. An AI safety expert told BI that Musk is underestimating the risk of potential catastrophe. Elon Musk is pretty sure AI is worth the risk, even if there's a 1-in-5 chance the technology turns against humans. "One of the things I think that's incredibly important for AI safety is to have a maximum sort of truth-seeking and curious AI." Musk said his "ultimate conclusion" regarding the best way to achieve AI safety is to grow the AI in a manner that forces it to be truthful.
Persons: Elon Musk, Geoff Hinton, Roman Yampolskiy, Sam Altman Organizations: Cyber Security, University of Louisville, New York Times, Summit, Independent, CNN
Rumman Chowdhury, ethicist and researcher: And then build the technology that will create the world that we want to have for ourselves? Yudhanjaya Wijeratne, writer and data scientist: We certainly are coming towards this idea of the human plus A.I. Sebastian Thrun, entrepreneur and educator: What if everything you’ve done in your life, everything you’ve learned, you can do in a day? Sebastian Thrun, entrepreneur and educator: And then everything other people have learned, you can be master of in a day. And you can solve really, really hard problems because now you have the world’s experience.
Persons: Rumman Chowdhury, Stephanie Dinkins, Yudhanjaya Wijeratne, Sebastian Thrun, Siri Locations: China, Japan
Total: 4