
Search results for: "Ilya"


25 mentions found


New York CNN —Nvidia’s eye-popping gains this year have helped propel the stock market to repeated record highs. Can Nvidia’s blockbuster gains continue, and what does its outsized market cap mean for the stock rally? Now everyone says, “the market is dependent on Nvidia’s earnings.” You’re going to see a kind of a shift in market cap over the years. “These lower mortgage rates coupled with the gradually improving housing supply bodes well for the housing market.” Still, mortgage rates remain higher than anything seen in the decade before 2022, the year the Federal Reserve began to raise interest rates to combat inflation. The Fed doesn’t directly set mortgage rates but its actions do influence them through the benchmark 10-year US Treasury yield, which moves in anticipation of the Fed’s policy moves.
Persons: Bell, Christopher Barto, Bryan Mena, Sam Khater, Ilya Sutskever, Clare Duffy, Sutskever, Geoffrey Hinton Organizations: CNN Business, Bell, New York CNN, Nvidia, Microsoft, Fort Pitt Capital Group, Apple, Meta, Google, Amazon, Federal Reserve, Treasury, Freddie Mac, Superintelligence Inc, SSI Locations: New York
New York CNN —The OpenAI co-founder who left the high-flying artificial intelligence startup last month has announced his next venture: a company dedicated to building safe, powerful artificial intelligence that could become a rival to his old employer. Ilya Sutskever announced plans for the new company, aptly named Safe Superintelligence Inc., in a post on X Wednesday. Sutskever then worked on Google’s AI research team, before helping to found what would become the maker of ChatGPT. It’s also not clear exactly what the new company thinks of as “safety” in the context of highly powerful artificial intelligence technology. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever told Bloomberg in an interview published Tuesday.
Persons: Ilya Sutskever, Sutskever, Geoffrey Hinton, Sam Altman, Altman, Kara Swisher, Jan Leike, Daniel Levy, Daniel Gross Organizations: New York CNN, Superintelligence Inc, SSI, OpenAI, Google, CNN, Bloomberg, Apple Locations: New York, Palo Alto, California, Tel Aviv, Israel
OpenAI cofounder and former chief scientist Ilya Sutskever announced his new venture on Wednesday — a research lab committed to developing "safe superintelligence." "I am starting a new company," Sutskever said of his new project, Safe Superintelligence Inc. (SSI) in an X post. I am starting a new company: https://t.co/BG3K3SI3A1 — Ilya Sutskever (@ilyasut) June 19, 2024. According to SSI's website, the lab has "one goal and one product: a safe superintelligence." Representatives for Safe Superintelligence and OpenAI didn't immediately respond to requests for comment from BI sent outside regular business hours. Following Altman's return, Sutskever appeared to have been shut out of OpenAI, BI reported in December, citing people familiar with the situation.
Persons: Ilya Sutskever, Sutskever, Daniel Gross, Daniel Levy, Ashlee Vance, Elon Musk, Sam Altman, Altman, Dario, Daniela Amodei Organizations: Superintelligence Inc, Business, SSI, Apple, BI, OpenAI, Amazon, Google
Ilya Sutskever, the OpenAI co-founder and chief scientist who in November joined other board members to force out Sam Altman, the company’s high-profile chief executive, has helped found a new artificial intelligence company. The new start-up is called Safe Superintelligence. It aims to produce superintelligence — a machine that is more intelligent than humans — in a safe way, according to the company spokeswoman Lulu Cheng Meservey. Dr. Sutskever, who has said he regretted moving against Mr. Altman, declined to comment. She said that as it builds safe superintelligence, the company will not release other products.
Persons: Ilya Sutskever, Sam Altman, Lulu Cheng Meservey, Sutskever, Altman, Meservey Organizations: OpenAI, Bloomberg
Though Justice Clarence Thomas’ decision in a major trademark case last week was unanimous, it prompted a sharp debate led by Justice Amy Coney Barrett over the use of history to decide the case. “There definitely is the potential formation here of an alternative or several alternative approaches to history that ultimately draw a majority,” Wolf said. “What we could be seeing is a more nuanced approach to using that history,” said Elizabeth Wydra, president of the progressive Constitutional Accountability Center. But in a striking concurrence that captured support from both liberal and conservative justices, Justice Elena Kagan asserted that the court’s historic analysis need not end with the late-18th century. Barrett’s concurrence said the dispute could have been dealt with based on the court’s past precedent with trademark law and stressed that just leaning on the nation’s trademark history wasn’t good enough.
Persons: Clarence Thomas, Amy Coney Barrett, Barrett, Thomas, Tom Wolf, Wolf, Trump, Antonin Scalia, Elizabeth Wydra, Wydra, Ilya Somin, Bruen, Sonia Sotomayor, Elena Kagan, Kagan, Brett Kavanaugh, Sotomayor, Roe, Wade, Vidal, Elster, Kavanaugh, John Roberts, Samuel Alito, Neil Gorsuch, Ketanji Brown Jackson Organizations: Washington CNN, Brennan Center for Justice, George Mason University, CNN, Consumer Financial Protection Bureau Locations: New York, United States
Ilya Sutskever, Russian Israeli-Canadian computer scientist and co-founder and chief scientist of OpenAI, speaks at Tel Aviv University in Tel Aviv, June 5, 2023. OpenAI co-founder Ilya Sutskever, who left the artificial intelligence startup last month, introduced his new AI company, which he's calling Safe Superintelligence, or SSI. "I am starting a new company," Sutskever wrote on X on Wednesday. Altman and Sutskever, along with other directors, clashed over the guardrails OpenAI had put in place in the pursuit of advanced AI. "I deeply regret my participation in the board's actions," Sutskever wrote in a post on X on Nov. 20.
Persons: Ilya Sutskever, Sutskever, Jan Leike, Leike, Daniel Gross, Daniel Levy, Sam Altman, Altman Organizations: OpenAI, Tel Aviv University, SSI, Microsoft, Apple Locations: Tel Aviv, Palo Alto, California
The ‘experiential’ complexes, called Netflix Houses, will include elaborate events, themed gift shops and restaurants. In a repurposing of empty retail space, the houses will occupy former department store locations at Dallas Galleria and King of Prussia Mall (near Philadelphia). Shonda Rhimes, Golda Rosheuvel and cast members visit The Queen’s Ball: A Bridgerton Experience in New York on April 30, 2023. For years (the show launched in 2020) the “Queen’s Ball” has been staged in various cities two or three times daily. “Netflix House represents the next generation of our distinctive offerings,” said Netflix Chief Marketing Officer Marian Lee, in a statement.
Persons: Neil Saunders, Saunders, Shonda Rhimes, Golda Rosheuvel, Ilya S, Carrie Berk, Marian Lee, Berk Organizations: CNN, Netflix, TikTok, Dallas Galleria, GlobalData, Disney, Netflix House Locations: King of Prussia, Philadelphia, São Paulo, Kuala Lumpur, Malaysia, Toronto, Canada, Brazil, New York
However, current and former OpenAI employees have been increasingly concerned about access to liquidity, according to interviews and documents shared internally. "We're incredibly sorry that we're only changing this language now," an OpenAI spokesperson told CNBC after the company changed course. In at least two tender offers, the sales limit for former employees was $2 million, compared to $10 million for current employees. In addition to current and former employees, OpenAI has a third tier for share sales that consists of ex-employees who now work at competitors. OpenAI said it's never canceled a current or former employee's vested equity or required a repurchase at $0.
Persons: Sam Altman, Jason Redmond, Ilya Sutskever, Jan Leike, Altman, Sarah Friar, Larry Albukerk, Albukerk, Doug Brayley Organizations: OpenAI, Slack, Microsoft, AFP, Getty, CNBC, Apple, Federal Trade Commission, Justice Department, Nvidia, EB Exchange, Ropes & Gray Locations: Redmond, Washington, California
This disastrous mindset has hollowed out Silicon Valley's ability to innovate and caused regular people to grow increasingly frustrated with everyday tech. The large platforms have generally ignored this feedback for one big reason: The tech industry has been taken over by career managers. Now Google Search is more profitable and worse, elevating spammy content and outright scams, a problem exacerbated by artificial intelligence. But today's tech products feel built to sell a dream of the future rather than solve a customer's existing pains. As long as the tech industry is controlled by people who don't build things, it will continue to build products that help raise growth metrics rather than help consumers with tangible problems.
Persons: Kevin Systrom, Mike Krieger, Adam Mosseri, Systrom, Krieger, Mosseri, Mark Zuckerberg, Kylie Jenner, Kim Kardashian, Sundar Pichai, Prabhakar Raghavan, Raghavan, Ben Gomes, Gomes, Sam Altman, Helen Toner, Ilya Sutskever, Larry Summers, Fidji Simo, Steve Jobs, Steve Wozniak Organizations: Facebook, Instagram, Hewlett Packard, Google, Microsoft, Amazon, Oracle, Adobe, Meta, Builders, Apple, Xerox, HP, Reuters Institute, Oxford University Locations: Silicon Valley
It's all unraveling at OpenAI (again)
  2024-06-04 | by Madeline Berg | www.businessinsider.com | time to read: +10 min
In a statement to Business Insider, an OpenAI spokesperson reiterated the company's commitment to safety, highlighting an "anonymous integrity hotline" for employees to voice their concerns and the company's safety and security committee. Safety second (or third): A common theme of the complaints is that, at OpenAI, safety isn't first — growth and profits are. (In a responding op-ed, current OpenAI board members Bret Taylor and Larry Summers defended Altman and the company's safety standards.) "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point." (Altman and OpenAI said he recused himself from these deals.)
Persons: Sam Altman, Daniel Kokotajlo, Altman, Helen Toner, Tasha McCauley, Toner, McCauley, Bret Taylor, Larry Summers, Kokotajlo, Jan Leike, Ilya Sutskever, Leike, Stuart Russell, Scarlett Johansson, Johansson Organizations: The New York Times, Times, Business, Twitter, Microsoft, BI, OpenAI, Apple, Reddit
This has been the week of dueling op-eds from former and current OpenAI board members. Current OpenAI board members Bret Taylor and Larry Summers issued a response to AI safety concerns on Thursday, stating that "the board is taking commensurate steps to ensure safety and security." In the last six months, the two current board members said they had found Altman "highly forthcoming on all relevant issues and consistently collegial with his management team." She also said that the old OpenAI board found out about ChatGPT's release on Twitter. OpenAI dissolved the superalignment safety team before later announcing the formation of a new safety committee.
Persons: Bret Taylor, Larry Summers, Helen Toner, Tasha McCauley, Sam Altman, Taylor, Summers, Altman, Toner, Jan Leike, Ilya Sutskever, Gretchen Krueger, Leike, Krueger Organizations: OpenAI, WilmerHale, Business, Twitter, World Summit
Former OpenAI board member Helen Toner, who helped oust CEO Sam Altman in November, broke her silence this week when she spoke on a podcast about events inside the company leading up to Altman's firing. Toner also said Altman did not tell the board he owned the OpenAI startup fund. Within a week, Altman was back and board members Toner and Tasha McCauley, who had voted to oust Altman, were out. In March, OpenAI announced its new board, which includes Altman, and the conclusion of an internal investigation by law firm WilmerHale into the events leading up to Altman's ouster. "The review concluded there was a significant breakdown of trust between the prior board and Sam and Greg," OpenAI board chair Bret Taylor said at the time, referring to president and co-founder Greg Brockman.
Persons: Helen Toner, Sam Altman, Toner, Altman, Sam, Ilya Sutskever, Jan Leike, Sutskever, Tasha McCauley, Adam D'Angelo, Greg, Bret Taylor, Greg Brockman, Taylor Organizations: CSET, Vox, OpenAI, Anthropic, WilmerHale, The Ritz-Carlton, Twitter, Microsoft Locations: Laguna Niguel, Dana Point, California
OpenAI announces new safety board after employee revolt
  2024-05-28 | by Brian Fung | edition.cnn.com | time to read: +2 min
Washington CNN —OpenAI said Tuesday it has established a new committee to make recommendations to the company’s board about safety and security, weeks after dissolving a team focused on AI safety. In a blog post, OpenAI said the new committee would be led by CEO Sam Altman as well as Bret Taylor, the company’s board chair, and board member Nicole Seligman. The announcement follows the high-profile exit this month of an OpenAI executive focused on safety, Jan Leike. “At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”
Persons: Sam Altman, Bret Taylor, Nicole Seligman, Jan Leike, Leike, Ilya Sutskever, Sutskever, Altman Organizations: OpenAI, Washington CNN, CNN, Safety and Security Committee
Ex-OpenAI exec Jan Leike joined rival AI company Anthropic days after he quit over safety concerns. Leike, who co-led OpenAI's Superalignment team, left less than two weeks ago. OpenAI's former executive Jan Leike announced he's joining its competitor Anthropic. Leike co-led OpenAI's Superalignment team alongside cofounder Ilya Sutskever, who also resigned. The team was tasked with ensuring superintelligence doesn't go rogue and has since been dissolved, with remaining staffers joining the core research team.
Persons: Jan Leike, Leike, Ilya Sutskever Organizations: OpenAI, Anthropic, Amazon, Business
Jan Leike, one of the lead safety researchers at OpenAI who resigned from the artificial intelligence company earlier this month, said on Tuesday that he's joined rival AI startup Anthropic. Leike announced his resignation from OpenAI on May 15, days before the company dissolved the superalignment group that he co-led. "I'm excited to join @AnthropicAI to continue the superalignment mission," Leike wrote on X. AI safety has gained rapid importance across the tech sector since OpenAI introduced ChatGPT in late 2022, ushering in a boom in generative AI products and investments. The committee will recommend "safety and security decisions for OpenAI projects and operations" to the company's board.
Persons: Jan Leike, Leike, Ilya Sutskever, Sam Altman, Dario Amodei, Daniela Amodei, Claude Organizations: OpenAI, Anthropic, Amazon, Microsoft, Google
OpenAI on Tuesday said it created a Safety and Security Committee led by senior executives, after disbanding its previous oversight board in mid-May. The formation of a new oversight team comes after OpenAI dissolved a previous team that was focused on the long-term risks of AI. AI safety has been at the forefront of a larger debate, as the huge models that underpin applications like ChatGPT get more advanced. Bret Taylor, Adam D'Angelo, Nicole Seligman, who are all on OpenAI's board of directors, now sit on the new safety committee alongside Altman. Leike this month wrote that OpenAI's "safety culture and processes have taken a backseat to shiny products."
Persons: Sam Altman, Ilya Sutskever, Jan Leike, Bret Taylor, Adam D'Angelo, Nicole Seligman, Altman, Hayden Field Organizations: OpenAI, Microsoft, CNBC, Security Locations: Redmond, Washington
AI's golden boy, Sam Altman, may be starting to lose his luster. The company has also been dealing with comments from former executives that its commitment to AI safety leaves much to be desired. ScarJo scandal: The criticism around AI safety is the latest blow for Altman, who is fighting battles on multiple fronts.
Persons: Sam Altman, Gretchen Krueger, Jan Leike, Ilya Sutskever, Altman, Stuart Russell, Russell, Scarlett Johansson, Paul Morigi Organizations: OpenAI, Business, UC Berkeley, Microsoft
OpenAI faces more turmoil as another employee announces she quit over safety concerns. It comes after the resignations of high-profile executives Ilya Sutskever and Jan Leike, who ran its now-dissolved safety research team Superalignment. Krueger wrote, "I resigned a few hours before hearing the news about @ilyasut and @janleike, and I made my decision independently." Kokotajlo said he left after "losing confidence that it [OpenAI] would behave responsibly around the time of AGI."
Persons: Ilya Sutskever, Jan Leike, Gretchen Krueger, Krueger, Leike, Daniel Kokotajlo, William Saunders, Kokotajlo Organizations: OpenAI, Business
Back in September, Scarlett Johansson, who played the hauntingly complex AI assistant in the 2013 Spike Jonze film “Her,” got a request from OpenAI’s CEO, Sam Altman. He wanted to hire Johansson to voice his company’s newest ChatGPT model, “Sky.” She said no. Johansson quickly lawyered up, saying Monday she was “shocked, angered and in disbelief” that Altman would use a voice “so eerily similar” to her own. OpenAI was forced to confront some of those concerns late last week, after two prominent employees left the company. “Being friends with AI will be so much easier than forging bonds with human beings,” wrote Wired editor Brian Barrett in a recent essay about the movie.
Persons: Scarlett Johansson, Spike Jonze, Sam Altman, Johansson, Altman, Jan Leike, Ilya Sutskever, Joaquin, Brian Barrett, Clare Duffy, Brian Fung Organizations: OpenAI, CNN Business, New York CNN, Google Locations: New York, Silicon
OpenAI's fight with Scarlett Johansson isn't just a PR disaster (and a big one at that). Most of all, it shows there's just really, really bad judgment going on at the highest levels of Sam Altman's company. But everyone immediately noticed that the "Sky" voice that ended up on ChatGPT reminded them of ScarJo. And then Sam Altman, in one of the greatest self-own moves of the generative AI era, tweeted out "her" during the product demo last week. The OpenAI team used a Scarlett Johansson sound-alike voice — also completely avoidable.
Persons: ScarJo, Johansson, Scarlett Johansson, Sam Altman, Altman, Ilya Sutskever Organizations: OpenAI, Business, Hollywood, SAG, CNN, Reuters, Marvel, Facebook, Microsoft Locations: Turkey
The age of AGI is coming and could be just a few years away, according to OpenAI cofounder John Schulman. Speaking on a podcast with Dwarkesh Patel, Schulman predicted that artificial general intelligence could be achieved in "two or three years." A spokesperson for OpenAI told The Information that the remaining staffers were now part of its core research team. Schulman's comments come amid protest movements calling for a pause on training AI models. Groups such as Pause AI fear that if firms like OpenAI create superintelligent AI models, they could pose existential risks to humanity.
Persons: John Schulman, Dwarkesh Patel, Schulman, Elon Musk, Kayla Wood, Jan Leike, Ilya Sutskever Organizations: OpenAI, Business, Washington Post
New York CNN —OpenAI says it’s hitting the pause button on a synthetic voice released with an update to ChatGPT that prompted comparisons with a fictional voice assistant portrayed in the quasi-dystopian film “Her” by actor Scarlett Johansson. “We’ve heard questions about how we chose the voices in ChatGPT, especially Sky,” OpenAI said in a post on X Monday. A spokesperson for the company said that structure would help OpenAI better achieve its safety objectives. OpenAI President Greg Brockman responded in a longer post on Saturday, which was signed with both his name and Altman’s, laying out the company’s approach to long-term AI safety. “We have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it,” Brockman said.
Persons: Scarlett Johansson, Desi Lydic, Lydic, Joaquin Phoenix, Everett, Sam Altman, Johansson, Jan Leike, Ilya Sutskever, Altman, Leike, Greg Brockman, Brockman Organizations: OpenAI, New York CNN, Daily, Warner Bros., White, CNN Locations: New York
It's a rare admission from Altman, who has worked hard to cultivate an image of being relatively calm amid OpenAI's ongoing chaos. Safety team implosion: OpenAI has been in full damage control mode following the exit of key employees working on AI safety. He said the safety team was left "struggling for compute, and it was getting harder and harder to get this crucial research done." Silenced employees: The implosion of the safety team is a blow for Altman, who has been keen to show he's safety-conscious when it comes to developing super-intelligent AI. The usually reserved Altman even appeared to shade Google, which demoed new AI products the following day.
Persons: Jan Leike, Ilya Sutskever, Sam Altman, Altman, Leike, Leopold Aschenbrenner, Pavel Izmailov, Daniel Kokotajlo, William Saunders, Cullen O'Keefe, Kokotajlo, Joe Rogan, Neel Nanda, Scarlett Johansson Organizations: OpenAI, Vox, Business
A Safety Check for OpenAI
  2024-05-20 | by Andrew Ross Sorkin, Ravi Mattu and Bernhard Warner | www.nytimes.com | time to read: +1 min
OpenAI’s fear factor: The tech world’s collective eyebrows rose last week when Ilya Sutskever, the OpenAI co-founder who briefly led a rebellion against Sam Altman, resigned as chief scientist. “Safety culture and processes have taken a backseat to shiny products,” Jan Leike, who resigned from OpenAI last week, wrote on the social network X. Along with Sutskever, Leike oversaw the company’s so-called superalignment team, which was tasked with making sure products didn’t become a threat to humanity. Sutskever said in his departing note that he was confident OpenAI would build artificial general intelligence — A.I. Leike spoke for many safety-first OpenAI employees, according to Vox.
Persons: Ilya Sutskever, Sam Altman, Jan Leike, Sutskever, Leike, Daniel Kokotajlo, Altman Organizations: OpenAI, Vox
OpenAI's exit agreements had nondisparagement clauses threatening vested equity, Vox reported. Sam Altman said on X that the company never enforced it, and that he was unaware of the provision. OpenAI employees who left the company without signing a non-disparagement agreement could have lost vested equity if they did not comply — but the policy was never used, CEO Sam Altman said on Saturday.
Persons: Sam Altman, Jan Leike, Ilya Sutskever Organizations: Vox News, Superalignment, Business
Total: 25