
Search results for: "Ilya Sutskever"


25 mentions found


New York CNN — The OpenAI co-founder who left the high-flying artificial intelligence startup last month has announced his next venture: a company dedicated to building safe, powerful artificial intelligence that could become a rival to his old employer. Ilya Sutskever announced plans for the new company, aptly named Safe Superintelligence Inc., in a post on X Wednesday. Sutskever previously worked on Google’s AI research team before helping to found what would become the maker of ChatGPT. It’s also not clear exactly what the new company thinks of as “safety” in the context of highly powerful artificial intelligence technology. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever told Bloomberg in an interview published Tuesday.
Persons: Ilya Sutskever, Geoffrey Hinton, Sam Altman, Kara Swisher, Jan Leike, Daniel Levy, Daniel Gross Organizations: OpenAI, Safe Superintelligence Inc (SSI), CNN, Google, Bloomberg, Apple Locations: New York, Palo Alto, California, Tel Aviv, Israel
Ilya Sutskever, Russian Israeli-Canadian computer scientist and co-founder and chief scientist of OpenAI, speaks at Tel Aviv University in Tel Aviv, June 5, 2023. OpenAI co-founder Ilya Sutskever, who left the artificial intelligence startup last month, introduced his new AI company, which he's calling Safe Superintelligence, or SSI. "I am starting a new company," Sutskever wrote on X on Wednesday. Altman and Sutskever, along with other directors, clashed over the guardrails OpenAI had put in place in the pursuit of advanced AI. "I deeply regret my participation in the board's actions," Sutskever wrote in a post on X on Nov. 20.
Persons: Ilya Sutskever, Jan Leike, Daniel Gross, Daniel Levy, Sam Altman Organizations: OpenAI, SSI, Tel Aviv University, Microsoft, Apple Locations: Tel Aviv, Palo Alto, California
Ilya Sutskever, the OpenAI co-founder and chief scientist who in November joined other board members to force out Sam Altman, the company’s high-profile chief executive, has helped found a new artificial intelligence company. The new start-up is called Safe Superintelligence. It aims to produce superintelligence — a machine that is more intelligent than humans — in a safe way, according to the company spokeswoman Lulu Cheng Meservey. Dr. Sutskever, who has said he regretted moving against Mr. Altman, declined to comment. She said that as it builds safe superintelligence, the company will not release other products.
Persons: Ilya Sutskever, Sam Altman, Lulu Cheng Meservey Organizations: OpenAI, Safe Superintelligence, Bloomberg
However, current and former OpenAI employees have been increasingly concerned about access to liquidity, according to interviews and documents shared internally. "We're incredibly sorry that we're only changing this language now," an OpenAI spokesperson told CNBC after the company changed course. In at least two tender offers, the sales limit for former employees was $2 million, compared to $10 million for current employees. In addition to current and former employees, OpenAI has a third tier for share sales that consists of ex-employees who now work at competitors. OpenAI said it's never canceled a current or former employee's vested equity or required a repurchase at $0.
Persons: Sam Altman, Jason Redmond, Ilya Sutskever, Jan Leike, Sarah Friar, Larry Albukerk, Doug Brayley Organizations: OpenAI, Microsoft, AFP, Getty, CNBC, Apple, Federal Trade Commission, Justice Department, Nvidia, EB Exchange, Ropes & Gray Locations: Redmond, Washington, California
This disastrous mindset has hollowed out Silicon Valley's ability to innovate and caused regular people to grow increasingly frustrated with everyday tech. The large platforms have generally ignored this feedback for one big reason: The tech industry has been taken over by career managers. Now Google Search is more profitable and worse, elevating spammy content and outright scams, a problem exacerbated by artificial intelligence. But today's tech products feel built to sell a dream of the future rather than solve a customer's existing pains. As long as the tech industry is controlled by people who don't build things, it will continue to build products that help raise growth metrics rather than help consumers with tangible problems.
Persons: Kevin Systrom, Mike Krieger, Adam Mosseri, Mark Zuckerberg, Kylie Jenner, Kim Kardashian, Sundar Pichai, Prabhakar Raghavan, Ben Gomes, Sam Altman, Helen Toner, Ilya Sutskever, Larry Summers, Fidji Simo, Steve Jobs, Steve Wozniak Organizations: Facebook, Instagram, Google, Microsoft, Amazon, Oracle, Adobe, Meta, Apple, Xerox, Hewlett Packard (HP), Reuters Institute, Oxford University Locations: Silicon Valley
It's all unraveling at OpenAI (again)
  + stars: | 2024-06-04 | by Madeline Berg | www.businessinsider.com | time to read: +10 min
In a statement to Business Insider, an OpenAI spokesperson reiterated the company's commitment to safety, highlighting an "anonymous integrity hotline" for employees to voice their concerns and the company's safety and security committee. Safety second (or third): A common theme of the complaints is that, at OpenAI, safety isn't first — growth and profits are. (In a responding op-ed, current OpenAI board members Bret Taylor and Larry Summers defended Altman and the company's safety standards.) "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point." (Altman and OpenAI said he recused himself from these deals.)
Persons: Sam Altman, Daniel Kokotajlo, Helen Toner, Tasha McCauley, Bret Taylor, Larry Summers, Jan Leike, Ilya Sutskever, Stuart Russell, Scarlett Johansson Organizations: OpenAI, The New York Times, Business Insider, Twitter, Microsoft, Reddit, Apple
This has been the week of dueling op-eds from former and current OpenAI board members. Current OpenAI board members Bret Taylor and Larry Summers issued a response to AI safety concerns on Thursday, stating that "the board is taking commensurate steps to ensure safety and security." In the last six months, the two current board members said they had found Altman "highly forthcoming on all relevant issues and consistently collegial with his management team." Former board member Helen Toner also said that the old OpenAI board found out about ChatGPT's release on Twitter. OpenAI dissolved the superalignment safety team before later announcing the formation of a new safety committee.
Persons: Bret Taylor, Larry Summers, Helen Toner, Tasha McCauley, Sam Altman, Jan Leike, Ilya Sutskever, Gretchen Krueger Organizations: OpenAI, WilmerHale, Business Insider, Twitter
Former OpenAI board member Helen Toner, who helped oust CEO Sam Altman in November, broke her silence this week when she spoke on a podcast about events inside the company leading up to Altman's firing. Toner also said Altman did not tell the board he owned the OpenAI startup fund. Within a week, Altman was back and board members Toner and Tasha McCauley, who had voted to oust Altman, were out. In March, OpenAI announced its new board, which includes Altman, and the conclusion of an internal investigation by law firm WilmerHale into the events leading up to Altman's ouster. "The review concluded there was a significant breakdown of trust between the prior board and Sam and Greg," OpenAI board chair Bret Taylor said at the time, referring to president and co-founder Greg Brockman.
Persons: Helen Toner, Sam Altman, Ilya Sutskever, Jan Leike, Tasha McCauley, Adam D'Angelo, Bret Taylor, Greg Brockman Organizations: CSET, Vox, OpenAI, Anthropic, WilmerHale, Microsoft, Twitter, The Ritz-Carlton Locations: Laguna Niguel, Dana Point, California
Ex-OpenAI exec Jan Leike joined rival AI company Anthropic days after he quit over safety concerns. Leike, who co-led OpenAI's Superalignment team, left less than two weeks ago. OpenAI's former executive Jan Leike announced he's joining its competitor Anthropic. Leike co-led OpenAI's Superalignment team alongside cofounder Ilya Sutskever, who also resigned. The team was tasked with ensuring superintelligence doesn't go rogue and has since been dissolved, with remaining staffers joining the core research team.
Persons: Jan Leike, Ilya Sutskever Organizations: OpenAI, Anthropic (@AnthropicAI), Amazon, Business Insider
Jan Leike, one of the lead safety researchers at OpenAI who resigned from the artificial intelligence company earlier this month, said on Tuesday that he's joined rival AI startup Anthropic. Leike announced his resignation from OpenAI on May 15, days before the company dissolved the superalignment group that he co-led. "I'm excited to join @AnthropicAI to continue the superalignment mission," Leike wrote on X. AI safety has gained rapid importance across the tech sector since OpenAI introduced ChatGPT in late 2022, ushering in a boom in generative AI products and investments. The committee will recommend "safety and security decisions for OpenAI projects and operations" to the company's board.
Persons: Jan Leike, Ilya Sutskever, Sam Altman, Dario Amodei, Daniela Amodei Organizations: OpenAI, Anthropic (@AnthropicAI), Amazon, Microsoft, Google
OpenAI on Tuesday said it created a Safety and Security Committee led by senior executives, after disbanding its previous oversight board in mid-May. The formation of a new oversight team comes after OpenAI dissolved a previous team that was focused on the long-term risks of AI. AI safety has been at the forefront of a larger debate, as the huge models that underpin applications like ChatGPT get more advanced. Bret Taylor, Adam D'Angelo, Nicole Seligman, who are all on OpenAI's board of directors, now sit on the new safety committee alongside Altman. Leike this month wrote that OpenAI's "safety culture and processes have taken a backseat to shiny products."
Persons: Sam Altman, Ilya Sutskever, Jan Leike, Bret Taylor, Adam D'Angelo, Nicole Seligman, Hayden Field Organizations: OpenAI, Microsoft, CNBC Locations: Redmond, Washington
OpenAI announces new safety board after employee revolt
  + stars: | 2024-05-28 | by Brian Fung | edition.cnn.com | time to read: +2 min
Washington CNN — OpenAI said Tuesday it has established a new committee to make recommendations to the company’s board about safety and security, weeks after dissolving a team focused on AI safety. In a blog post, OpenAI said the new committee would be led by CEO Sam Altman as well as Bret Taylor, the company’s board chair, and board member Nicole Seligman. The announcement follows the high-profile exit this month of an OpenAI executive focused on safety, Jan Leike. “At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”
Persons: Sam Altman, Bret Taylor, Nicole Seligman, Jan Leike, Ilya Sutskever Organizations: OpenAI, CNN, Safety and Security Committee Locations: Washington
AI's golden boy, Sam Altman, may be starting to lose his luster. The company has also been dealing with comments from former executives that its commitment to AI safety leaves much to be desired. ScarJo scandal: The criticism around AI safety is the latest blow for Altman, who is fighting battles on multiple fronts.
Persons: Sam Altman, Gretchen Krueger, Jan Leike, Ilya Sutskever, Stuart Russell, Scarlett Johansson, Paul Morigi Organizations: OpenAI, Business Insider, UC Berkeley, Microsoft
Back in September, Scarlett Johansson, who played the hauntingly complex AI assistant in the 2013 Spike Jonze film “Her,” got a request from OpenAI’s CEO, Sam Altman. He wanted to hire Johansson to voice his company’s newest ChatGPT model, “Sky.” She said no. Johansson quickly lawyered up, saying Monday she was “shocked, angered and in disbelief” that Altman would use a voice “so eerily similar” to her own. OpenAI was forced to confront some of those concerns late last week, after two prominent employees left the company. “Being friends with AI will be so much easier than forging bonds with human beings,” wrote Wired editor Brian Barrett in a recent essay about the movie.
Persons: Scarlett Johansson, Spike Jonze, Sam Altman, Jan Leike, Ilya Sutskever, Joaquin Phoenix, Brian Barrett, Clare Duffy, Brian Fung Organizations: OpenAI, CNN Business, CNN, Google, Wired Locations: New York, Silicon Valley
OpenAI faces more turmoil as another employee announces she quit over safety concerns. It comes after the resignations of high-profile executives Ilya Sutskever and Jan Leike, who ran its now-dissolved safety research team Superalignment. Krueger wrote, "I resigned a few hours before hearing the news about @ilyasut and @janleike, and I made my decision independently." Kokotajlo said he left after "losing confidence that it [OpenAI] would behave responsibly around the time of AGI."
Persons: Ilya Sutskever, Jan Leike, Gretchen Krueger, Daniel Kokotajlo, William Saunders Organizations: OpenAI, Business Insider
The age of AGI is coming and could be just a few years away, according to OpenAI cofounder John Schulman. Speaking on a podcast with Dwarkesh Patel, Schulman predicted that artificial general intelligence could be achieved in "two or three years." A spokesperson for OpenAI told The Information that the remaining staffers were now part of its core research team. Schulman's comments come amid protest movements calling for a pause on training AI models. Groups such as Pause AI fear that if firms like OpenAI create superintelligent AI models, they could pose existential risks to humanity.
Persons: John Schulman, Dwarkesh Patel, Elon Musk, Kayla Wood, Jan Leike, Ilya Sutskever Organizations: OpenAI, Business Insider, The Washington Post
OpenAI's fight with Scarlett Johansson isn't just a PR disaster (and a big one at that). Most of all, it shows there's just really, really bad judgment going on at the highest levels of Sam Altman's company. But everyone immediately noticed that the "Sky" voice that ended up on ChatGPT reminded them of ScarJo. And then Sam Altman, in one of the greatest self-own moves of the generative AI era, tweeted out "her" during the product demo last week. The OpenAI team used a Scarlett Johansson sound-alike voice — also completely avoidable.
Persons: Scarlett Johansson, Sam Altman, Ilya Sutskever Organizations: OpenAI, Business Insider, SAG, CNN, Reuters, Marvel, Facebook, Microsoft Locations: Hollywood, Turkey
A Safety Check for OpenAI
  + stars: | 2024-05-20 | by Andrew Ross Sorkin, Ravi Mattu, Bernhard Warner | www.nytimes.com | time to read: +1 min
OpenAI’s fear factor: The tech world’s collective eyebrows rose last week when Ilya Sutskever, the OpenAI co-founder who briefly led a rebellion against Sam Altman, resigned as chief scientist. “Safety culture and processes have taken a backseat to shiny products,” Jan Leike, who resigned from OpenAI last week, wrote on the social network X. Along with Sutskever, Leike oversaw the company’s so-called superalignment team, which was tasked with making sure products didn’t become a threat to humanity. Sutskever said in his departing note that he was confident OpenAI would build artificial general intelligence — A.I. Leike spoke for many safety-first OpenAI employees, according to Vox.
Persons: Ilya Sutskever, Sam Altman, Jan Leike, Daniel Kokotajlo Organizations: OpenAI, Vox
It's a rare admission from Altman, who has worked hard to cultivate an image of being relatively calm amid OpenAI's ongoing chaos. Safety team implosion: OpenAI has been in full damage control mode following the exit of key employees working on AI safety. Leike said the safety team was left "struggling for compute, and it was getting harder and harder to get this crucial research done." Silenced employees: The implosion of the safety team is a blow for Altman, who has been keen to show he's safety-conscious when it comes to developing super-intelligent AI. The usually reserved Altman even appeared to shade Google, which demoed new AI products the following day.
Persons: Jan Leike, Ilya Sutskever, Sam Altman, Leopold Aschenbrenner, Pavel Izmailov, Daniel Kokotajlo, William Saunders, Cullen O'Keefe, Joe Rogan, Neel Nanda, Scarlett Johansson Organizations: OpenAI, Vox, Business Insider
New York CNN — OpenAI says it’s hitting the pause button on a synthetic voice released with an update to ChatGPT that prompted comparisons with a fictional voice assistant portrayed in the quasi-dystopian film “Her” by actor Scarlett Johansson. “We’ve heard questions about how we chose the voices in ChatGPT, especially Sky,” OpenAI said in a post on X Monday. A spokesperson for the company said that structure would help OpenAI better achieve its safety objectives. OpenAI President Greg Brockman responded in a longer post on Saturday, which was signed with both his name and Altman’s, laying out the company’s approach to long-term AI safety. “We have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it,” Brockman said.
Persons: Scarlett Johansson, Desi Lydic, Joaquin Phoenix, Sam Altman, Jan Leike, Ilya Sutskever, Greg Brockman Organizations: OpenAI, CNN, Warner Bros. Locations: New York
OpenAI's exit agreements had nondisparagement clauses threatening vested equity, Vox reported. Sam Altman said on X that the company never enforced it, and that he was unaware of the provision. OpenAI employees who left the company without signing a non-disparagement agreement could have lost vested equity if they did not comply — but the policy was never used, CEO Sam Altman said on Saturday.
Persons: Sam Altman, Jan Leike, Ilya Sutskever Organizations: OpenAI, Vox, Business Insider
OpenAI's Ilya Sutskever and Jan Leike, who led a team focused on AI safety, resigned. Founders Sam Altman and Greg Brockman are now scrambling to reassure everyone. Two of OpenAI's founders, CEO Sam Altman and President Greg Brockman, are on the defensive after a shake-up in the company's safety department this week. Sutskever and Leike led OpenAI's Superalignment team, which was focused on developing AI systems compatible with human interests.
Persons: Ilya Sutskever, Jan Leike, Sam Altman, Greg Brockman Organizations: OpenAI, Business Insider
New York CNN — A departing OpenAI executive focused on safety is raising concerns about the company on his way out the door. His resignation followed an announcement by OpenAI Co-Founder and Chief Scientist Ilya Sutskever, who also helped lead the superalignment team, on Tuesday that he would leave the company. The technology will make ChatGPT more like a digital personal assistant, capable of real-time spoken conversations. “i’m super appreciative of @janleike’s contributions to openai’s alignment research and safety culture, and very sad to see him leave,” Altman said. “i’ll have a longer post in the next couple of days.” – CNN’s Samantha Delouya contributed to this report.
Persons: Jan Leike, Ilya Sutskever, Sam Altman, Kara Swisher, Samantha Delouya Organizations: OpenAI, CNN Locations: New York
The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. OpenAI's Superalignment team, announced last year, has focused on "scientific and technical breakthroughs to steer and control AI systems much smarter than us." "I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote on X. Leike wrote that he believes much more of the company's bandwidth should be focused on security, monitoring, preparedness, safety and societal impact. The update brings the GPT-4 model to everyone, including OpenAI's free users, technology chief Mira Murati said Monday in a livestreamed event.
Persons: Sam Altman, Ilya Sutskever, Jan Leike, Helen Toner, Tasha McCauley, Adam D'Angelo, Jakub Pachocki, Mira Murati Organizations: OpenAI, CNBC, Microsoft, Wired, Wall Street Locations: Atlanta
OpenAI's Superalignment team was formed in July 2023 to mitigate AI risks, like "rogue" behavior. OpenAI has reportedly disbanded its Superalignment team after its co-leaders resigned. AdvertisementIn the same week that OpenAI launched GPT-4o, its most human-like AI yet, the company dissolved its Superalignment team, Wired first reported. OpenAI created its Superalignment team in July 2023, co-led by Ilya Sutskever and Jan Leike. The team was dedicated to mitigating AI risks, such as the possibility of it "going rogue."
Persons: Ilya Sutskever, Jan Leike Organizations: OpenAI, Wired, Business Insider
Total: 25