Search results for: "Daniel Kokotajlo"


14 mentions found


Gavin Newsom vetoed an artificial intelligence safety bill on Sunday, a win for AI heavyweights like OpenAI and Big Tech companies that lobbied against it. The debate in California reflects the challenge governments face in walking the fine line between allowing tech companies to innovate and protecting against new potential risks. Elon Musk, who founded AI company xAI last year, said last month that although it was "a tough call and will make some people upset," he thought "California should probably pass the SB 1047 AI safety bill." Several former OpenAI employees also supported the safety bill and said that OpenAI's opposition to the bill was disappointing.
Persons: Gavin Newsom, Scott Wiener, Charles Schwab, Rob Sherman, Marc Andreessen, Jason Kwon, Elon Musk, Dario Amodei, William Saunders, Daniel Kokotajlo Organizations: Big Tech, Chevron, Tesla, Oracle, Andreessen Horowitz, Meta, xAI Locations: California, Silicon Valley
Sutskever announced his departure from OpenAI in May and launched his own AI startup focused on safety. Of the 11 cofounders of OpenAI from 2015, only Altman, Brockman (who, again, is on leave), and Wojciech Zaremba remain. Daniel Kokotajlo and William Saunders, who previously worked on OpenAI's governance and safety teams, respectively, left OpenAI in the first half of 2024. Additionally, OpenAI cofounder John Schulman left in August to join OpenAI rival Anthropic — where Leike also landed. On Wednesday, Meta announced that Meta AI is on track to be "the most used AI assistant in the world."
Persons: Mira Murati, Sam Altman, Matt Turck, Ilya Sutskever, Greg Brockman, Bob McGrew, Barret Zoph, Sarah Friar, Jakub Pachocki, Brad Lightcap, Kevin Weil, Wojciech Zaremba, Elon Musk, Jan Leike, Daniel Kokotajlo, William Saunders, John Schulman, Andrej Karpathy Organizations: Bloomberg, Business Insider, Wired, OpenAI, Microsoft, Meta, Apple, Nvidia, Reuters, Airbnb, Anthropic, Fortune
OpenAI just announced the members of its revamped Safety and Security Committee, and CEO Sam Altman is not on the list. When the group was announced, Altman, Taylor, and five OpenAI technical and policy experts were named to the committee, alongside the independent board members. The safety committee will "exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed," the blog post said. OpenAI's troubles: Last month, the company battled to stop an AI safety bill in California, saying it would stifle progress and drive companies out of the state. Weeks before that, nine current and former OpenAI employees signed an open letter pointing out the risks of generative AI.
Persons: Sam Altman, Bret Taylor, Zico Kolter, Adam D'Angelo, Paul Nakasone, Nicole Seligman, William Saunders, Daniel Kokotajlo Organizations: Safety and Security Committee, Business Insider, Carnegie Mellon University, US Army, Sony Corporation, OpenAI, Securities and Exchange Commission Locations: California
"We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing," the researchers, William Saunders and Daniel Kokotajlo, wrote in the letter. "But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems." AdvertisementThey continued: "Developing frontier AI models without adequate safety precautions poses foreseeable risks of catastrophic harm to the public." SB1047 "has inspired thoughtful debate," and OpenAI supports some of its safety provisions, Kwon's letter, dated a day before the researchers' letter was sent, read. "We cannot wait for Congress to act — they've explicitly said that they aren't willing to pass meaningful AI regulation," Saunders and Kokotajlo wrote.
Persons: Gavin Newsom, William Saunders, Daniel Kokotajlo, Sam Altman, Jason Kwon, Scott Wiener Organizations: Politico, Business Insider, California Legislature, OpenAI Locations: California
"OpenAI opposes even the extremely light-touch requirements in SB 1047, most of which OpenAl claims to voluntarily commit to, raising questions about the strength of those commitments." They said the existing federal legislation OpenAI is using to support its case is "woefully inadequate." They said they hoped their letter would help push the California legislature to pass SB 1047. In an email to Business Insider, a spokesperson for OpenAI said the letter misrepresented the company's position on the bill. "We strongly disagree with the mischaracterization of our position on SB 1047.
Persons: Scott Wiener, Sam Altman, William Saunders, Daniel Kokotajlo Organizations: Business Insider, OpenAI Locations: California, Sacramento
OpenAI just lost 3 key leaders, report says
  2024-08-06 | by Shubhangi Goel | www.businessinsider.com | time to read: +3 min
Three leaders at OpenAI have just left the company, according to a report by The Information, which cited a person familiar with the matter. Another cofounder, John Schulman, has left OpenAI to join rival AI firm Anthropic. Schulman wrote, "I've made the difficult decision to leave OpenAI." On X, OpenAI CEO and serial entrepreneur Sam Altman responded to Schulman's departure. Business Insider reported in May that two employees who worked on safety and governance had resigned in recent months.
Persons: Greg Brockman, Peter Deng, John Schulman, Sam Altman, Jan Leike, Ilya Sutskever, Daniel Kokotajlo, William Saunders Organizations: Business Insider, TechCrunch, Anthropic, OpenAI
There's a battle in Silicon Valley over AI risks and safety — and it's escalating fast. Right to Warn: While the concerns around AI safety are nothing new, they're increasingly being amplified by those within AI companies. OpenAI did not immediately respond to a request for comment from Business Insider, made outside normal working hours. A spokesperson previously reiterated the company's commitment to safety, highlighting an "anonymous integrity hotline" for employees to voice their concerns and the company's safety and security committee.
Persons: Yoshua Bengio, Geoffrey Hinton, Stuart Russell, Jacob Hilton, Sam Altman, Helen Toner, Daniel Kokotajlo Organizations: Google, Business Insider, OpenAI Locations: Silicon Valley
A group of current and former OpenAI employees published an open letter Tuesday describing concerns about the artificial intelligence industry's rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up. "AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this," the employees wrote. The letter also details the current and former employees' concerns about insufficient whistleblower protections for the AI industry, saying that without effective government oversight, employees are in a relatively unique position to hold companies accountable. "Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated." Four anonymous OpenAI employees and seven former ones, including Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler, signed the letter.
Persons: Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright, Daniel Ziegler, Ramana Kumar, Neel Nanda, Geoffrey Hinton, Yoshua Bengio, Stuart Russell Organizations: OpenAI, Google, Microsoft, Meta, CNBC, Anthropic
It's all unraveling at OpenAI (again)
  2024-06-04 | by Madeline Berg | www.businessinsider.com | time to read: +10 min
In a statement to Business Insider, an OpenAI spokesperson reiterated the company's commitment to safety, highlighting an "anonymous integrity hotline" for employees to voice their concerns and the company's safety and security committee. Safety second (or third): A common theme of the complaints is that, at OpenAI, safety isn't first — growth and profits are. (In a responding op-ed, current OpenAI board members Bret Taylor and Larry Summers defended Altman and the company's safety standards.) Jan Leike wrote: "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point." (Altman and OpenAI said he recused himself from these deals.)
Persons: Sam Altman, Daniel Kokotajlo, Helen Toner, Tasha McCauley, Bret Taylor, Larry Summers, Jan Leike, Ilya Sutskever, Stuart Russell, Scarlett Johansson Organizations: The New York Times, Business Insider, Twitter, Microsoft, OpenAI, Apple, Reddit
OpenAI faces more turmoil as another employee announces she quit over safety concerns. It comes after the resignations of high-profile executives Ilya Sutskever and Jan Leike, who ran its now-dissolved safety research team, Superalignment. Krueger wrote, "I resigned a few hours before hearing the news about @ilyasut and @janleike, and I made my decision independently." Kokotajlo said he left after "losing confidence that it [OpenAI] would behave responsibly around the time of AGI."
Persons: Ilya Sutskever, Jan Leike, Gretchen Krueger, Daniel Kokotajlo, William Saunders Organizations: Business Insider, OpenAI
A Safety Check for OpenAI
  2024-05-20 | by Andrew Ross Sorkin, Ravi Mattu, and Bernhard Warner | www.nytimes.com | time to read: +1 min
OpenAI’s fear factor: The tech world’s collective eyebrows rose last week when Ilya Sutskever, the OpenAI co-founder who briefly led a rebellion against Sam Altman, resigned as chief scientist. “Safety culture and processes have taken a backseat to shiny products,” Jan Leike, who resigned from OpenAI last week, wrote on the social network X. Along with Sutskever, Leike oversaw the company’s so-called superalignment team, which was tasked with making sure products didn’t become a threat to humanity. Sutskever said in his departing note that he was confident OpenAI would build artificial general intelligence. Leike spoke for many safety-first OpenAI employees, according to Vox.
Persons: Ilya Sutskever, Sam Altman, Jan Leike, Daniel Kokotajlo Organizations: OpenAI, Vox
It's a rare admission from Altman, who has worked hard to cultivate an image of being relatively calm amid OpenAI's ongoing chaos. Safety team implosion: OpenAI has been in full damage control mode following the exit of key employees working on AI safety. Leike said the safety team was left "struggling for compute, and it was getting harder and harder to get this crucial research done." Silenced employees: The implosion of the safety team is a blow for Altman, who has been keen to show he's safety-conscious when it comes to developing super-intelligent AI. The usually reserved Altman even appeared to shade Google, which demoed new AI products the following day.
Persons: Jan Leike, Ilya Sutskever, Sam Altman, Leopold Aschenbrenner, Pavel Izmailov, Daniel Kokotajlo, William Saunders, Cullen O'Keefe, Joe Rogan, Neel Nanda, Scarlett Johansson Organizations: Business Insider, OpenAI, Vox
Jan Leike, the co-lead of OpenAI's superalignment group, a team that focuses on making its artificial intelligence systems align with human interests, announced his resignation on Tuesday. Leike's exit came hours after the departure of Ilya Sutskever, OpenAI cofounder, chief scientist, and the other superalignment leader. In a post on X, OpenAI cofounder Sam Altman said, "Ilya and OpenAI are going to part ways."
Persons: Jan Leike, Ilya Sutskever, Sam Altman, Diane Yoon, Chris Clark, Leopold Aschenbrenner, Pavel Izmailov, Daniel Kokotajlo, William Saunders Organizations: Business Insider, OpenAI
Two OpenAI employees who worked on safety and governance recently resigned from the company behind ChatGPT. Daniel Kokotajlo left last month and William Saunders departed OpenAI in February. Kokotajlo, who worked on the governance team, is listed as an adversarial tester of GPT-4, which was launched in March last year. OpenAI also parted ways with researchers Leopold Aschenbrenner and Pavel Izmailov, according to another report by The Information last month. OpenAI, Kokotajlo, and Saunders did not respond to requests for comment from Business Insider.
Persons: Daniel Kokotajlo, William Saunders, Ilya Sutskever, Jan Leike, Sam Altman, Diane Yoon, Chris Clark, Leopold Aschenbrenner, Pavel Izmailov Organizations: Business Insider, OpenAI
Total: 14