Search results for: "Cullen O'Keefe"


2 mentions found


It's a rare admission from Altman, who has worked hard to cultivate an image of being relatively calm amid OpenAI's ongoing chaos.

Safety team implosion

OpenAI has been in full damage control mode following the exit of key employees working on AI safety. Leike said the safety team was left "struggling for compute, and it was getting harder and harder to get this crucial research done."

Silenced employees

The implosion of the safety team is a blow for Altman, who has been keen to show he's safety-conscious when it comes to developing super-intelligent AI. The usually reserved Altman even appeared to shade Google, which demoed new AI products the following day.
Persons: Jan Leike, Ilya Sutskever, Sam Altman, Altman, Leike, Leopold Aschenbrenner, Pavel Izmailov, Daniel Kokotajlo, William Saunders, Cullen O'Keefe, Kokotajlo, Vox, OpenAI, Joe Rogan, Neel Nanda, Scarlett Johansson
Organizations: Service, Business, AGI
Interviews with a U.S. senator, congressional staffers, AI companies and interest groups show there are a number of options under discussion. Some proposals focus on AI that may put people's lives or livelihoods at risk, like in medicine and finance. Other possibilities include rules to ensure AI isn't used to discriminate or violate someone's civil rights. Another debate is whether to regulate the developer of AI or the company that uses it to interact with consumers.

Government micromanagement

The risk-based approach means AI used to diagnose cancer, for example, would be scrutinized by the Food and Drug Administration, while AI for entertainment would not be regulated.
Total: 2