
Search results for: "superalignment"


16 mentions found


A Safety Check for OpenAI
  2024-05-20 | by Andrew Ross Sorkin, Ravi Mattu, Bernhard Warner | www.nytimes.com | time to read: +1 min
OpenAI’s fear factor: The tech world’s collective eyebrows rose last week when Ilya Sutskever, the OpenAI co-founder who briefly led a rebellion against Sam Altman, resigned as chief scientist. “Safety culture and processes have taken a backseat to shiny products,” Jan Leike, who resigned from OpenAI last week, wrote on the social network X. Along with Sutskever, Leike oversaw the company’s so-called superalignment team, which was tasked with making sure products didn’t become a threat to humanity. Sutskever said in his departing note that he was confident OpenAI would build artificial general intelligence that is “both safe and beneficial.” Leike spoke for many safety-first OpenAI employees, according to Vox.
OpenAI's exit agreements had nondisparagement clauses threatening vested equity, Vox reported. Sam Altman said on X that the company never enforced the clause and that he was unaware of the provision. OpenAI employees who left the company without signing a non-disparagement agreement could have lost vested equity, but the policy was never used, CEO Sam Altman said on Saturday.
OpenAI's Superalignment team was formed in July 2023 to mitigate AI risks, like "rogue" behavior. OpenAI has reportedly disbanded its Superalignment team after its co-leaders resigned. In the same week that OpenAI launched GPT-4o, its most human-like AI yet, the company dissolved its Superalignment team, Wired first reported. OpenAI created its Superalignment team in July 2023, co-led by Ilya Sutskever and Jan Leike. The team was dedicated to mitigating AI risks, such as the possibility of it "going rogue."
New York CNN — A departing OpenAI executive focused on safety, Jan Leike, is raising concerns about the company on his way out the door. His resignation followed Tuesday’s announcement by OpenAI co-founder and chief scientist Ilya Sutskever, who also helped lead the superalignment team, that he too would leave the company. The technology will make ChatGPT more like a digital personal assistant, capable of real-time spoken conversations. “i’m super appreciative of @janleike’s contributions to openai’s alignment research and safety culture, and very sad to see him leave,” Sam Altman said. “i’ll have a longer post in the next couple of days.” CNN’s Samantha Delouya contributed to this report.
The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. OpenAI's Superalignment team, announced last year, has focused on "scientific and technical breakthroughs to steer and control AI systems much smarter than us." "I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote on X. Leike wrote that he believes much more of the company's bandwidth should be focused on security, monitoring, preparedness, safety and societal impact. The update brings the GPT-4 model to everyone, including OpenAI's free users, technology chief Mira Murati said Monday in a livestreamed event.
A top OpenAI executive researching safety quit on Tuesday, saying Sam Altman's company was prioritizing "shiny products" over safety. A former top safety executive at OpenAI is laying it all out. "Over the past years, safety culture and processes have taken a backseat to shiny products," Jan Leike wrote in a lengthy thread on X on Friday.
Jan Leike, the co-lead of OpenAI's superalignment group, announced his resignation on Tuesday, hours after Ilya Sutskever, OpenAI cofounder, chief scientist, and the other superalignment leader, said he was exiting. The superalignment group is a team that focuses on making OpenAI's artificial intelligence systems align with human interests. In a post on X, OpenAI cofounder Sam Altman said, "Ilya and OpenAI are going to part ways."
And the fact that there aren't such controls in place yet is a problem OpenAI recognized, per its July 2023 post. "Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans' ability to supervise AI," read OpenAI's post. "But humans won't be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence." Leike — who worked at Google's DeepMind before his gig at OpenAI — had big aspirations for keeping humans safe from the superintelligence we've created. "Maybe a once-and-for-all solution to the alignment problem is located in the space of problems humans can solve."
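The "reinforcement learning from human feedback" technique named in the post centers on fitting a reward model to human preference comparisons. As a rough illustration of why the whole approach leans on human judgment, here is a minimal, hypothetical sketch of that preference-fitting step in PyTorch; the toy embeddings, model, and data are illustrative assumptions, not OpenAI's implementation.

```python
# Minimal sketch of the preference-modeling step at the heart of RLHF:
# fit a reward model so that human-preferred responses score higher.
# Toy data and model are illustrative assumptions, not OpenAI's code.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Pretend each response is already embedded as a feature vector.
dim = 16
reward_model = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

# Synthetic preference pairs: humans preferred `chosen` over `rejected`.
chosen = torch.randn(256, dim) + 0.5   # shifted so a learnable signal exists
rejected = torch.randn(256, dim)

for step in range(200):
    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)
    # Bradley-Terry loss: maximize P(chosen > rejected) = sigmoid(r_c - r_r)
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final preference loss: {loss.item():.3f}")
```

The scaling concern quoted above lands exactly here: the loss is defined entirely by human preference labels, so if humans cannot reliably rank the outputs of a much smarter system, the reward signal, and everything trained on it, degrades with them.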
OpenAI cofounder and chief scientist Ilya Sutskever is stepping away from the company after almost a decade, he said Tuesday in a post on X, formerly known as Twitter. Sutskever said he is "confident" that the company will continue to build technology that is "both safe and beneficial." In his own post on X, Sam Altman said, "Ilya and OpenAI are going to part ways." Two people familiar with the situation told Business Insider in December that Sutskever had essentially been shut out of OpenAI after the attempt to remove Altman as CEO.
Two OpenAI employees who worked on safety and governance recently resigned from the company behind ChatGPT. Daniel Kokotajlo left last month and William Saunders departed OpenAI in February. Kokotajlo, who worked on the governance team, is listed as an adversarial tester of GPT-4, which was launched in March last year. OpenAI also parted ways with researchers Leopold Aschenbrenner and Pavel Izmailov, according to another report by The Information last month. OpenAI, Kokotajlo, and Saunders did not respond to requests for comment from Business Insider.
At least two-thirds of OpenAI staff have threatened to quit and join Sam Altman at Microsoft. It follows days of chaos at OpenAI after CEO Sam Altman was fired in a shock move. Nearly 500 OpenAI staff have threatened to quit unless all current board members resign and ex-CEO Sam Altman is reappointed. Late on Sunday, Microsoft CEO Satya Nadella announced that Altman and former OpenAI president Greg Brockman would be joining a new AI team at Microsoft, after efforts by investors and current employees to bring him back as OpenAI CEO fell apart. OpenAI and Microsoft did not immediately respond to a request for comment from Business Insider, made outside normal working hours.
OpenAI wants to lure Google researchers with $10 million pay packets, The Information reported. OpenAI is exploring options for another employee share sale this year that could value the company at $86 billion, Bloomberg reported last month. If its recruiters are successful in enticing top Google AI researchers, they could benefit from compensation packages of between $5 million and $10 million after the latest share sale, according to The Information. Five former Google researchers were listed in the acknowledgments section of OpenAI's blog post announcing the launch of ChatGPT last November.
Elon Musk has said Neuralink will help people merge with AI, but it is unclear whether that is possible. OpenAI's chief scientist, Ilya Sutskever, has said that people may choose to become "part AI" in the future to compete with superintelligent machines. Sutskever is currently working on OpenAI's "superalignment" project, which aims to build fail-safes that will prevent superintelligent AI from going rogue. Despite this, Sutskever told MIT Technology Review that he was unsure whether he would ever choose to merge with AI, should it become possible.
Jan Leike is looking for research engineers, research scientists, and research managers for the superalignment team. Working closely with research engineers, research scientists are responsible for advancing OpenAI's alignment research agenda. The research manager position oversees the research engineers and research scientists. An ideal candidate for the leadership role, Leike said, would have a combination of management experience and machine learning skills. OpenAI isn't just hiring for its superalignment team.
OpenAI fears that superintelligent AI could lead to human extinction. It is putting together a team to ensure that superintelligent AI aligns with human interests. The new team — called Superalignment — plans to develop AI with human-level intelligence that can supervise superintelligent AI within the next four years. OpenAI CEO Sam Altman has long been calling for regulators to address AI risk as a global priority. To be sure, not everyone shares OpenAI's concerns about future problems posed by superintelligent AI.
"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue." Superintelligent AI - systems more intelligent than humans - could arrive this decade, the blog post's authors predicted. The team's goal is to create a "human-level" AI alignment researcher, and then scale it through vast amounts of compute power. OpenAI says that means they will train AI systems using human feedback, train AI systems to assistant human evaluation, and then finally train AI systems to actually do the alignment research. AI safety advocate Connor Leahy said the plan was fundamentally flawed because the initial human-level AI could run amok and wreak havoc before it could be compelled to solve AI safety problems.
Total: 16