Search results for: "superalignment"


25 mentions found


New York CNN — The OpenAI co-founder who left the high-flying artificial intelligence startup last month has announced his next venture: a company dedicated to building safe, powerful artificial intelligence that could become a rival to his old employer. Ilya Sutskever announced plans for the new company, aptly named Safe Superintelligence Inc., in a post on X Wednesday. Sutskever previously worked on Google’s AI research team before helping to found what would become the maker of ChatGPT. It’s also not clear exactly what the new company thinks of as “safety” in the context of highly powerful artificial intelligence technology. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever told Bloomberg in an interview published Tuesday.
Ilya Sutskever, Russian Israeli-Canadian computer scientist and co-founder and chief scientist of OpenAI, speaks at Tel Aviv University in Tel Aviv, June 5, 2023. OpenAI co-founder Ilya Sutskever, who left the artificial intelligence startup last month, introduced his new AI company, which he's calling Safe Superintelligence, or SSI. "I am starting a new company," Sutskever wrote on X on Wednesday. Altman and Sutskever, along with other directors, clashed over the guardrails OpenAI had put in place in the pursuit of advanced AI. "I deeply regret my participation in the board's actions," Sutskever wrote in a post on X on Nov. 20.
A former OpenAI employee who quit in February spoke out about what led him to leave and later sign a letter calling for change at AI companies. William Saunders told Business Insider that concerns he raised while working at OpenAI were "not adequately addressed." 'Egregiously insufficient': According to Aschenbrenner, OpenAI told employees that he was fired over sharing a document containing safety ideas with external researchers. OpenAI didn't respond to a request for comment from Business Insider.
A former OpenAI researcher opened up about how he "ruffled some feathers" by writing and sharing some documents related to safety at the company, and was eventually fired. Leopold Aschenbrenner, who graduated from Columbia University at 19, according to his LinkedIn, worked on OpenAI's superalignment team before he was reportedly "fired for leaking" in April. The AI researcher previously shared the memo with others at OpenAI, "who mostly said it was helpful," he added. HR later gave him a warning about the memo, Aschenbrenner said, telling him that it was "racist" and "unconstructive" to worry about Chinese Communist Party espionage. He said he wrote the document a couple of months after the superalignment team was announced; the announcement referenced a four-year planning horizon.
It's all unraveling at OpenAI (again)
2024-06-04 | by Madeline Berg | www.businessinsider.com | time to read: 10+ min
In a statement to Business Insider, an OpenAI spokesperson reiterated the company's commitment to safety, highlighting an "anonymous integrity hotline" for employees to voice their concerns and the company's safety and security committee. Safety second (or third): A common theme of the complaints is that, at OpenAI, safety isn't first — growth and profits are. (In a responding op-ed, current OpenAI board members Bret Taylor and Larry Summers defended Altman and the company's safety standards.) "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," Jan Leike wrote. (Altman and OpenAI said he recused himself from these deals.)
This has been the week of dueling op-eds from former and current OpenAI board members. Current OpenAI board members Bret Taylor and Larry Summers issued a response to AI safety concerns on Thursday, stating that "the board is taking commensurate steps to ensure safety and security." In the last six months, the two current board members said, they had found Altman "highly forthcoming on all relevant issues and consistently collegial with his management team." Former board member Helen Toner also said that the old OpenAI board found out about ChatGPT's release on Twitter. OpenAI dissolved the superalignment safety team before announcing the formation of a new safety committee.
Here's a list of the people, companies, and terms you need to know to talk about AI, in alphabetical order. GPU: A computer chip, short for graphics processing unit, that companies use to train and deploy their AI models. Nvidia's GPUs are used by Microsoft and Meta to run their AI models. Multimodal: The ability of AI models to process text, images, and audio to generate an output. As a profession, prompt engineers are experts in fine-tuning AI models on the backend to improve outputs.
Jan Leike, one of the lead safety researchers at OpenAI who resigned from the artificial intelligence company earlier this month, said on Tuesday that he's joined rival AI startup Anthropic. Leike announced his resignation from OpenAI on May 15, days before the company dissolved the superalignment group that he co-led. "I'm excited to join @AnthropicAI to continue the superalignment mission," Leike wrote on X. AI safety has rapidly gained importance across the tech sector since OpenAI introduced ChatGPT in late 2022, ushering in a boom in generative AI products and investments. The committee will recommend "safety and security decisions for OpenAI projects and operations" to the company's board.
Ex-OpenAI exec Jan Leike joined rival AI company Anthropic days after he quit over safety concerns. Leike, who co-led OpenAI's Superalignment team, left less than two weeks ago. OpenAI's former executive Jan Leike announced he's joining its competitor Anthropic. Leike co-led OpenAI's Superalignment team alongside cofounder Ilya Sutskever, who also resigned. The team was tasked with ensuring superintelligence doesn't go rogue and has since been dissolved, with remaining staffers joining the core research team.
OpenAI announces new safety board after employee revolt
2024-05-28 | by Brian Fung | edition.cnn.com | time to read: 2+ min
Washington CNN — OpenAI said Tuesday it has established a new committee to make recommendations to the company’s board about safety and security, weeks after dissolving a team focused on AI safety. In a blog post, OpenAI said the new committee would be led by CEO Sam Altman as well as Bret Taylor, the company’s board chair, and board member Nicole Seligman. The announcement follows the high-profile exit this month of an OpenAI executive focused on safety, Jan Leike. “At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”
A Vox story on Saturday said OpenAI could take back vested equity if departing employees did not sign a non-disparagement agreement. "For a company to threaten to claw back already-vested equity is egregious and unusual," California employment law attorney Chambord Benton-Hayes told Vox. On Saturday, OpenAI CEO Sam Altman said on X, "Vested equity is vested equity, full stop." He added that the company has not and never will take away vested equity, even when people didn't sign the departure documents. Not signing "could impact your equity," OpenAI told one departing employee, per Vox.
OpenAI faces more turmoil as another employee announces she quit over safety concerns. It comes after the resignations of high-profile executives Ilya Sutskever and Jan Leike, who ran its now-dissolved safety research team, Superalignment. Gretchen Krueger wrote, "I resigned a few hours before hearing the news about @ilyasut and @janleike, and I made my decision independently." Kokotajlo said he left after "losing confidence that it [OpenAI] would behave responsibly around the time of AGI."
The age of AGI is coming and could be just a few years away, according to OpenAI cofounder John Schulman. Speaking on a podcast with Dwarkesh Patel, Schulman predicted that artificial general intelligence could be achieved in "two or three years." A spokesperson for OpenAI told The Information that the remaining staffers were now part of its core research team. Schulman's comments come amid protest movements calling for a pause on training AI models. Groups such as Pause AI fear that if firms like OpenAI create superintelligent AI models, they could pose existential risks to humanity.
It's a rare admission from Altman, who has worked hard to cultivate an image of being relatively calm amid OpenAI's ongoing chaos. Safety team implosion: OpenAI has been in full damage control mode following the exit of key employees working on AI safety. Leike said the safety team was left "struggling for compute, and it was getting harder and harder to get this crucial research done." Silenced employees: The implosion of the safety team is a blow for Altman, who has been keen to show he's safety-conscious when it comes to developing super-intelligent AI. The usually reserved Altman even appeared to shade Google, which demoed new AI products the following day.
A Safety Check for OpenAI
2024-05-20 | by Andrew Ross Sorkin, Ravi Mattu, Bernhard Warner | www.nytimes.com | time to read: 1+ min
OpenAI’s fear factor: The tech world’s collective eyebrows rose last week when Ilya Sutskever, the OpenAI co-founder who briefly led a rebellion against Sam Altman, resigned as chief scientist. “Safety culture and processes have taken a backseat to shiny products,” Jan Leike, who resigned from OpenAI last week, wrote on the social network X. Along with Sutskever, Leike oversaw the company’s so-called superalignment team, which was tasked with making sure products didn’t become a threat to humanity. Sutskever said in his departing note that he was confident OpenAI would build artificial general intelligence. Leike spoke for many safety-first OpenAI employees, according to Vox.
OpenAI's exit agreements had nondisparagement clauses threatening vested equity, Vox reported. Sam Altman said on X that the company never enforced them, and that he was unaware of the provision. OpenAI employees who left the company without signing a non-disparagement agreement could have lost vested equity if they did not comply — but the policy was never used, CEO Sam Altman said on Saturday.
The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. OpenAI's Superalignment team, announced last year, has focused on "scientific and technical breakthroughs to steer and control AI systems much smarter than us." "I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote on X. Leike wrote that he believes much more of the company's bandwidth should be focused on security, monitoring, preparedness, safety and societal impact. The update brings the GPT-4 model to everyone, including OpenAI's free users, technology chief Mira Murati said Monday in a livestreamed event.
A top OpenAI executive researching safety quit on Tuesday, saying that Sam Altman's company was prioritizing "shiny products" over safety. A former top safety executive at OpenAI is laying it all out. "Over the past years, safety culture and processes have taken a backseat to shiny products," Leike wrote in a lengthy thread on X on Friday.
OpenAI's Superalignment team was formed in July 2023 to mitigate AI risks, like "rogue" behavior. OpenAI has reportedly disbanded its Superalignment team after its co-leaders resigned. In the same week that OpenAI launched GPT-4o, its most human-like AI yet, the company dissolved its Superalignment team, Wired first reported. OpenAI created its Superalignment team in July 2023, co-led by Ilya Sutskever and Jan Leike. The team was dedicated to mitigating AI risks, such as the possibility of it "going rogue."
New York CNN — A departing OpenAI executive focused on safety is raising concerns about the company on his way out the door. His resignation followed an announcement on Tuesday by OpenAI Co-Founder and Chief Scientist Ilya Sutskever, who also helped lead the superalignment team, that he would leave the company. The technology will make ChatGPT more like a digital personal assistant, capable of real-time spoken conversations. “i’m super appreciative of @janleike’s contributions to openai’s alignment research and safety culture, and very sad to see him leave,” Altman said. “i’ll have a longer post in the next couple of days.” CNN’s Samantha Delouya contributed to this report.
Jan Leike, the co-lead of OpenAI's superalignment group, announced his resignation on Tuesday. Leike's exit follows the departure of Ilya Sutskever, OpenAI cofounder and chief scientist. Leike co-led OpenAI's superalignment group, a team that focuses on making its artificial intelligence systems align with human interests. Leike announced his departure hours after Ilya Sutskever, the other superalignment leader, said he was exiting. In a post on X, OpenAI cofounder Sam Altman said, "Ilya and OpenAI are going to part ways."
And the fact that there aren't such controls in place yet is a problem OpenAI recognized, per its July 2023 post. "Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans' ability to supervise AI," read OpenAI's post. "But humans won't be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence." Leike — who worked at Google's DeepMind before his gig at OpenAI — had big aspirations for keeping humans safe from the superintelligence we've created. "Maybe a once-and-for-all solution to the alignment problem is located in the space of problems humans can solve."
OpenAI cofounder and chief scientist Ilya Sutskever is stepping away from the company after almost a decade, he said Tuesday in a post on X, formerly known as Twitter. Sutskever said he is "confident" that the company will continue to build technology that is "both safe and beneficial." In his own post on X, Altman said, "Ilya and OpenAI are going to part ways." Two people familiar with the situation told Business Insider in December that Sutskever had essentially been shut out of OpenAI after the attempt to remove Altman as CEO.
Two OpenAI employees who worked on safety and governance recently resigned from the company behind ChatGPT. Daniel Kokotajlo left last month and William Saunders departed OpenAI in February. Kokotajlo, who worked on the governance team, is listed as an adversarial tester of GPT-4, which was launched in March last year. OpenAI also parted ways with researchers Leopold Aschenbrenner and Pavel Izmailov, according to another report by The Information last month. OpenAI, Kokotajlo, and Saunders did not respond to requests for comment from Business Insider.
At least two-thirds of OpenAI staff have threatened to quit and join Sam Altman at Microsoft. It follows days of chaos at OpenAI after CEO Sam Altman was fired in a shock move. Nearly 500 OpenAI staff have threatened to quit unless all current board members resign and ex-CEO Sam Altman is reappointed. Late on Sunday, Microsoft CEO Satya Nadella announced that Altman and former OpenAI president Greg Brockman would be joining a new AI team at Microsoft, after efforts by investors and current employees to bring him back as OpenAI CEO fell apart. OpenAI and Microsoft did not immediately respond to a request for comment from Business Insider, made outside normal working hours.
Total: 25