
Search results for: "Leike"

15 mentions found

OpenAI's Superalignment team was formed in July 2023 to mitigate AI risks, like "rogue" behavior. OpenAI has reportedly disbanded its Superalignment team after its co-leaders resigned. In the same week that OpenAI launched GPT-4o, its most human-like AI yet, the company dissolved its Superalignment team, Wired first reported. OpenAI created its Superalignment team in July 2023, co-led by Ilya Sutskever and Jan Leike. The team was dedicated to mitigating AI risks, such as the possibility of it "going rogue."
Persons: Ilya Sutskever, Jan Leike Organizations: OpenAI, Wired, Business Insider
Jan Leike, the co-lead of OpenAI's superalignment group, announced his resignation on Tuesday. Leike's exit follows the departure of Ilya Sutskever, OpenAI cofounder and chief scientist. Leike co-led OpenAI's superalignment group, a team that focuses on making its artificial intelligence systems align with human interests. Leike announced his departure hours after Ilya Sutskever, the other superalignment leader, said he was exiting. In a post on X, OpenAI cofounder Sam Altman said, "Ilya and OpenAI are going to part ways."
Persons: Jan Leike, Ilya Sutskever, Sam Altman, Diane Yoon, Chris Clark, Leopold Aschenbrenner, Pavel Izmailov, Daniel Kokotajlo, William Saunders Organizations: OpenAI, Business Insider
And the fact that there aren't such controls in place yet is a problem OpenAI recognized, per its July 2023 post. "Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans' ability to supervise AI," read OpenAI's post. "But humans won't be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence." Leike, who worked at Google's DeepMind before his gig at OpenAI, had big aspirations for keeping humans safe from the superintelligence we've created. "Maybe a once-and-for-all solution to the alignment problem is located in the space of problems humans can solve."
Persons: Sam Altman, Ilya Sutskever, Jan Leike Organizations: OpenAI, Google DeepMind, Business Insider
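For background on the technique named in the post: reinforcement learning from human feedback (RLHF) typically begins by fitting a reward model to human preference comparisons. Below is a minimal sketch of that preference loss in PyTorch; the names and structure are illustrative assumptions, not OpenAI's code.

```python
import torch.nn.functional as F

def reward_model_loss(reward_model, preferred_ids, rejected_ids):
    # Bradley-Terry style pairwise loss used when fitting a reward
    # model from human comparisons: the human-preferred response
    # should receive a higher scalar reward than the rejected one.
    r_preferred = reward_model(preferred_ids)  # shape: (batch,)
    r_rejected = reward_model(rejected_ids)    # shape: (batch,)
    return -F.logsigmoid(r_preferred - r_rejected).mean()
```

The fitted reward model then stands in for a human rater during reinforcement learning, which is exactly the dependence on human supervision the post argues will not scale to systems smarter than their supervisors.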
Two OpenAI employees who worked on safety and governance recently resigned from the company behind ChatGPT. Daniel Kokotajlo left last month and William Saunders departed OpenAI in February. Kokotajlo, who worked on the governance team, is listed as an adversarial tester of GPT-4, which was launched in March last year. OpenAI also parted ways with researchers Leopold Aschenbrenner and Pavel Izmailov, according to another report by The Information last month. OpenAI, Kokotajlo, and Saunders did not respond to requests for comment from Business Insider.
Persons: Daniel Kokotajlo, William Saunders, Ilya Sutskever, Jan Leike, Sam Altman, Diane Yoon, Chris Clark, Leopold Aschenbrenner, Pavel Izmailov Organizations: OpenAI, Business Insider
As OpenAI employees celebrated the return of CEO Sam Altman with a five-alarm office party, OpenAI software engineer Steven Heidel was busy publicly rebuffing overtures from Salesforce CEO Marc Benioff. Heidel was one of more than 700 OpenAI employees whose threatened exodus halted a would-be mutiny at one of Silicon Valley's most important AI companies. He was previously a scientist at Facebook AI Research and worked as a member of Google Brain under the supervision of Prof. Geoffrey Hinton and Ilya Sutskever. Alec Radford: Radford was hired in 2016 from a small AI company he founded in his dorm room.
Tao Xu: technical staff, worked on GPT-4 and Whisper
Christine McLeavey: technical staff, with contributions to music-related products
Christina Kim: technical staff
Christopher Hesse: technical staff
Heewoo Jun: technical staff, research
Alex Nichol: technical staff, research
William Fedus: technical staff, research
Ilge Akkaya: technical staff, research
Vineet Kosaraju: technical staff, research
Henrique Ponde de Oliveira Pinto: technical staff
Aditya Ramesh: technical staff, developed DALL-E and DALL-E 2
Prafulla Dhariwal: research scientist
Hunter Lightman: technical staff
Harrison Edwards: research scientist
Yura Burda: machine learning researcher
Tyna Eloundou: technical staff, research
Pamela Mishkin: researcher
Casey Chu: researcher
David Dohan: technical staff, research
Aidan Clark: researcher
Raul Puri: research scientist
Leo Gao: technical staff, research
Yang Song: technical staff, research
Giambattista Parascandolo
Todor Markov: machine learning researcher
Nick Ryder: technical staff
Persons: Sam Altman, Steven Heidel, Marc Benioff, Mira Murati, Brad Lightcap, Jason Kwon, Wojciech Zaremba, Geoffrey Hinton, Ilya Sutskever, Alec Radford, Peter Welinder, Anna Makanju, Andrej Karpathy, Michael Petrov, Greg Brockman, Miles Brundage, John Schulman, Srinivas Narayanan, Scott Grey, Bob McGrew, Che Chang, Lillian Weng, Mark Chen, Barret Zoph, Peter Deng, Jan Leike, Evan Morikawa, Jong Wook Kim, Tao Xu, Christine McLeavey, Christina Kim, Christopher Hesse, Heewoo Jun, Alex Nichol, William Fedus, Henrique Ponde de Oliveira Pinto, Aditya Ramesh, Hunter Lightman, Harrison Edwards, Yura Burda, Tyna Eloundou, Pamela Mishkin, Casey Chu, David Dohan, Aidan Clark, Raul Puri, Leo Gao, Yang Song, Giambattista Parascandolo, Todor Markov, Nick Ryder Organizations: Business Insider, OpenAI, Khosla Ventures, Facebook AI Research, Google, Tesla, U.S. Department of Energy, Oxford University Locations: Albania, Canada
At least two-thirds of OpenAI staff have threatened to quit and join Sam Altman at Microsoft. It follows days of chaos at OpenAI after CEO Sam Altman was fired in a shock move. Nearly 500 OpenAI staff have threatened to quit unless all current board members resign and ex-CEO Sam Altman is reappointed. Late on Sunday, Microsoft CEO Satya Nadella announced that Altman and former OpenAI president Greg Brockman would be joining a new AI team at Microsoft, after efforts by investors and current employees to bring him back as OpenAI CEO fell apart. OpenAI and Microsoft did not immediately respond to a request for comment from Business Insider, made outside normal working hours.
Persons: Sam Altman, Mira Murati, Brad Lightcap, Kara Swisher, Ilya Sutskever, Jan Leike, Satya Nadella, Greg Brockman, Emmett Shear Organizations: OpenAI, Microsoft, Twitch, Wired, Business Insider
OpenAI employees are having a tough time after Sam Altman was suddenly ousted from the company. Here's what OpenAI employees are saying about the chaotic transition. Since Friday the company has cycled through three CEOs: cofounder Sam Altman, former CTO Mira Murati, and current CEO Emmett Shear, who cofounded Twitch. More OpenAI staffers have threatened to join them unless Altman is reinstated and the board resigns. Throughout the night on Sunday, more OpenAI staffers spoke out, sharing the repeated message: "OpenAI is nothing without its people."
Persons: Sam Altman, Mira Murati, Emmett Shear, Greg Brockman, Aleksander Madry, Will Depue, Ilya Sutskever, Shengjia Zhao, Jan Leike, Andrej Karpathy Organizations: OpenAI, Microsoft, Business Insider
OpenAI wants to lure Google researchers with $10 million pay packets, The Information reported. OpenAI is in talks for another employee share sale this year that could value it at $86 billion. OpenAI is exploring options for an employee share sale that values the company at $86 billion, Bloomberg reported last month. If its recruiters are successful in enticing top Google AI researchers, they could benefit from compensation packages of between $5 million and $10 million after the latest share sale, according to The Information. Five former Google researchers were listed in the acknowledgments section of OpenAI's blog post announcing the launch of ChatGPT last November.
Persons: Jan Leike Organizations: OpenAI, Google, Bloomberg, Meta
Leike is looking for research engineers, scientists, and managers. Working closely with research engineers, research scientists are responsible for advancing OpenAI's alignment research agenda. The research manager position oversees the research engineers and research scientists. An ideal candidate for the leadership role, Leike said, would have a combination of management experience and machine learning skills. OpenAI isn't just hiring for its superalignment team.
Persons: Jan Leike Organizations: OpenAI
Elon Musk and Sam Altman are racing to create superintelligent AI. Musk said xAI plans to use Twitter data to train a "maximally curious" and "truth-seeking" superintelligence. Elon Musk is throwing out challenge after challenge to tech CEOs — while he wants to physically fight Meta's Mark Zuckerberg, he's now racing with OpenAI to create AI smarter than humans. On Saturday, Musk said on Twitter Spaces that his new company, xAI, is "definitely in competition" with OpenAI. Over a 100-minute discussion that drew over 1.6 million listeners, Musk explained his plan for xAI to use Twitter data to train superintelligent AI that is "maximally curious" and "truth-seeking."
Persons: Elon Musk, Sam Altman, Mark Zuckerberg, Ilya Sutskever, Jan Leike Organizations: OpenAI, Twitter, Semafor
OpenAI fears that superintelligent AI could lead to human extinction. It is putting together a team to ensure that superintelligent AI aligns with human interests. The new team — called Superalignment — plans to develop AI with human-level intelligence that can supervise superintelligent AI within the next four years. OpenAI CEO Sam Altman has long been calling for regulators to address AI risk as a global priority. To be sure, not everyone shares OpenAI's concerns about future problems posed by superintelligent AI.
Persons: Ilya Sutskever, Jan Leike, Sam Altman, Elon Musk Organizations: OpenAI
"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue." Superintelligent AI - systems more intelligent than humans - could arrive this decade, the blog post's authors predicted. The team's goal is to create a "human-level" AI alignment researcher, and then scale it through vast amounts of compute power. OpenAI says that means they will train AI systems using human feedback, train AI systems to assistant human evaluation, and then finally train AI systems to actually do the alignment research. AI safety advocate Connor Leahy said the plan was fundamentally flawed because the initial human-level AI could run amok and wreak havoc before it could be compelled to solve AI safety problems.
Persons: Ilya Sutskever, Jan Leike, Connor Leahy, Anna Tong, Kenneth Li, Rosalba O'Brien Organizations: OpenAI, Microsoft, Reuters, Thomson Reuters Locations: San Francisco
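To make the three-stage recipe above concrete, here is a minimal structural sketch. The function bodies are hypothetical stubs standing in for full training runs, not OpenAI's actual training code.

```python
# Illustrative outline of the three stages described in the post.

def train_with_human_feedback(model, human_labels):
    """Stage 1: tune the model on direct human feedback (as in RLHF)."""
    return model  # placeholder for a real fine-tuning step

def train_evaluation_assistant(model, humans):
    """Stage 2: train models that help humans evaluate outputs
    too complex for people to judge unaided."""
    return model  # placeholder

def train_alignment_researcher(model, evaluator, compute_budget):
    """Stage 3: use the overseen systems to do alignment research
    themselves, scaled up with large amounts of compute."""
    return model  # placeholder
```

Leahy's objection, in these terms, is that the output of stage 3 is itself a human-level AI whose safety the pipeline was supposed to establish in the first place.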
How to Spot Robots in a World of A.I.-Generated Text
2023-02-17 | By Keith Collins | www.nytimes.com | 9 min read
A detection tool that knew which words were on the special list, the set of words a watermarked model is nudged to favor, would be able to tell the difference between generated text and text written by a person. That would be especially helpful for this generated text, as it includes several factual inaccuracies. By contrast, the detection tool OpenAI released requires a minimum of 1,000 characters. A person could repeatedly edit generated text and check it against a detection tool until the text is identified as human-written, and that process could potentially be automated. By that time, educators and researchers had already been calling for tools to help them identify generated text.
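As an illustration of how such a list-based detector could work, here is a minimal sketch. It assumes the detector knows the special word list and that the list covers roughly half the vocabulary; this is a generic watermark-detection pattern, not the specific tool described in the article.

```python
import math

def watermark_z_score(text: str, special_list: set[str]) -> float:
    """z-test on the share of special-list words: unwatermarked text
    should hit the list about half the time, while watermarked text
    hits it far more often."""
    words = text.lower().split()
    n = len(words)
    if n == 0:
        return 0.0
    hits = sum(w in special_list for w in words)
    # Under the null hypothesis, hits ~ Binomial(n, 0.5).
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

# A score far above ~4 is strong evidence of watermarked (generated)
# text; a score near 0 is consistent with human writing.
```

This also shows why the editing loop described above works: each human edit replaces special-list words with ordinary ones, steadily pushing the score back toward zero.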
Sam Altman, CEO of OpenAI, walks from lunch during the Allen & Company Sun Valley Conference on July 6, 2022, in Sun Valley, Idaho. Artificial intelligence research startup OpenAI on Tuesday introduced a tool that's designed to figure out if text is human-generated or written by a computer. The release comes two months after OpenAI captured the public's attention when it introduced ChatGPT, a chatbot that generates text that might seem to have been written by a person in response to a person's prompt. "In our evaluations on a 'challenge set' of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as 'likely AI-written,' while incorrectly labeling human-written text as AI-written 9% of the time (false positives)," the OpenAI employees wrote. The new version is more prepared to handle text from recent AI systems, the employees wrote.
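For readers parsing the quoted numbers, here is a minimal sketch of how a true-positive and false-positive rate are computed from labeled evaluation data; the variable names are hypothetical, not OpenAI's evaluation code.

```python
def classifier_rates(y_true, y_pred):
    """y_true: 1 if the text is AI-written, 0 if human-written.
    y_pred: 1 if the classifier flags it as 'likely AI-written'."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    positives = sum(y_true)
    negatives = len(y_true) - positives
    tpr = tp / max(1, positives)  # 26% in OpenAI's reported evaluation
    fpr = fp / max(1, negatives)  # 9% in OpenAI's reported evaluation
    return tpr, fpr
```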
CNN — Two months after OpenAI unnerved some educators with the public release of ChatGPT, an AI chatbot that can help students and professionals generate shockingly convincing essays, the company is unveiling a new tool to help teachers adapt. OpenAI on Tuesday announced a new feature, called an “AI text classifier,” that allows users to check if an essay was written by a human or AI. Public schools in New York City and Seattle have already banned students and teachers from using ChatGPT on their districts’ networks and devices. OpenAI now joins a small but growing list of efforts to help educators detect when a written work is generated by ChatGPT. Some companies such as Turnitin are actively working on ChatGPT plagiarism detection tools that could help teachers identify when assignments are written by the tool.
Total: 15