Search results for: "superintelligence"


25 mentions found


Ilya Sutskever, Russian-born Israeli-Canadian computer scientist and co-founder and chief scientist of OpenAI, speaks at Tel Aviv University in Tel Aviv, June 5, 2023. OpenAI co-founder Ilya Sutskever, who left the artificial intelligence startup last month, introduced his new AI company, which he's calling Safe Superintelligence, or SSI. "I am starting a new company," Sutskever wrote on X on Wednesday. Altman and Sutskever, along with other directors, clashed over the guardrails OpenAI had put in place in the pursuit of advanced AI. "I deeply regret my participation in the board's actions," Sutskever wrote in a post on X on Nov. 20.
Persons: Ilya Sutskever, Jan Leike, Daniel Gross, Daniel Levy, Sam Altman Organizations: OpenAI, Tel Aviv University, SSI, Microsoft, Apple Locations: Tel Aviv, Palo Alto, California
A former OpenAI researcher opened up about how he "ruffled some feathers" by writing and sharing some documents related to safety at the company, and was eventually fired. Leopold Aschenbrenner, who graduated from Columbia University at 19, according to his LinkedIn, worked on OpenAI's superalignment team before he was reportedly "fired for leaking" in April. The AI researcher previously shared the memo with others at OpenAI, "who mostly said it was helpful," he added. HR later gave him a warning about the memo, Aschenbrenner said, telling him that it was "racist" and "unconstructive" to worry about Chinese Communist Party espionage. He said he wrote the document a couple of months after the superalignment team was announced, which referenced a four-year planning horizon.
Persons: Leopold Aschenbrenner, Dwarkesh Patel, Sam Altman Organizations: OpenAI, Columbia University, Business Insider, Chinese Communist Party
It's all unraveling at OpenAI (again)
  2024-06-04 | by Madeline Berg | www.businessinsider.com | time to read: 10 min
In a statement to Business Insider, an OpenAI spokesperson reiterated the company's commitment to safety, highlighting an "anonymous integrity hotline" for employees to voice their concerns and the company's safety and security committee. Safety second (or third): A common theme of the complaints is that, at OpenAI, safety isn't first — growth and profits are. (In a responding op-ed, current OpenAI board members Bret Taylor and Larry Summers defended Altman and the company's safety standards.) "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point." (Altman and OpenAI said he recused himself from these deals.)
Persons: Sam Altman, Daniel Kokotajlo, Helen Toner, Tasha McCauley, Bret Taylor, Larry Summers, Jan Leike, Ilya Sutskever, Stuart Russell, Scarlett Johansson Organizations: OpenAI, The New York Times, Business Insider, Twitter, Microsoft, Reddit, Apple
Ex-OpenAI exec Jan Leike joined rival AI company Anthropic days after he quit over safety concerns. Leike, who co-led OpenAI's Superalignment team, left less than two weeks ago. OpenAI's former executive Jan Leike announced he's joining its competitor Anthropic. Leike co-led OpenAI's Superalignment team alongside cofounder Ilya Sutskever, who also resigned. The team was tasked with ensuring superintelligence doesn't go rogue and has since been dissolved, with remaining staffers joining the core research team.
Persons: Jan Leike, Ilya Sutskever Organizations: OpenAI, Anthropic, Amazon, Business Insider
And the fact that there aren't such controls in place yet is a problem OpenAI recognized, per its July 2023 post. "Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans' ability to supervise AI," read OpenAI's post. "But humans won't be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence." Leike — who worked at Google's DeepMind before his gig at OpenAI — had big aspirations for keeping humans safe from the superintelligence we've created. "Maybe a once-and-for-all solution to the alignment problem is located in the space of problems humans can solve." (A sketch of how human feedback enters RLHF follows this result.)
Persons: Sam Altman, Ilya Sutskever, Jan Leike Organizations: OpenAI, Google DeepMind, Business Insider
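The "human feedback" in RLHF typically enters as pairwise preference labels used to fit a reward model. A minimal sketch of the standard Bradley-Terry preference loss, assuming scalar rewards for two candidate responses (the function and names below are illustrative, not from OpenAI's post):

import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # Bradley-Terry loss for one human preference pair: it is minimized by
    # pushing the preferred response's reward above the rejected one's.
    # Human supervision enters here as the label deciding which response
    # counts as "chosen" - the ability the quoted post says won't scale.
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A pair the reward model already ranks correctly incurs low loss.
print(preference_loss(2.0, 0.5))  # ~0.20
print(preference_loss(0.5, 2.0))  # ~1.70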
What would life be like if artificial intelligence solved all your problems? Death would become almost optional because you could take on digital form and keep going for a billion years. We human beings dislike our problems, naturally, but if we had no problems to solve, what meaning would life have? Much has changed in the 10 years since Bostrom’s last book on the topic, “Superintelligence: Paths, Dangers, Strategies.” The idea that A.I. will change the world has gone from a nerdy obsession to conventional wisdom.
Persons: Nick Bostrom Organizations: Oxford
But increasingly, the algorithms that undergird our digital lives are making questionable decisions that enrich the powerful and wreck the lives of average people. There's no reason to be scared of AI making decisions for you in the future — computers have already been doing so for quite some time. As human control has diminished, the real-world consequences of these algorithms have piled up: Instagram's algorithm has been linked to a mental-health crisis in teenage girls. Across the public and private sectors, we've handed the keys to a spiderweb of algorithms built with little public insight into how they make their decisions. While generative AI is just the newest extension of the algorithm, it poses a unique threat.
Persons: Matthew Gray, Sergey Brin, Larry Page, Elon Musk Organizations: Knight Capital, Yahoo, Stanford, Google, Spotify, Netflix, Facebook, Twitter, Cambridge Analytica, ProPublica, Quora, OpenAI, European Union, Associated Press, Microsoft, Eating Disorders Association Locations: Cambridge
He estimates there's a 10-20% chance AI could destroy humanity but that we should build it anyway. An AI safety expert told BI that Musk is underestimating the risk of potential catastrophe. Elon Musk is pretty sure AI is worth the risk, even if there's a 1-in-5 chance the technology turns against humans. "One of the things I think that's incredibly important for AI safety is to have a maximum sort of truth-seeking and curious AI." Musk said his "ultimate conclusion" regarding the best way to achieve AI safety is to grow the AI in a manner that forces it to be truthful.
Persons: Elon Musk, Geoff Hinton, Yampolskiy, Sam Altman Organizations: University of Louisville, New York Times, The Independent, CNN, Business Insider
Daimon Labs, by contrast, had raised $1.5 million from a handful of VCs. It was around then that he started Daimon Labs alongside Dhruv Malik and Xiang Zhang to pursue the dream of what he calls "machines of loving grace". He ignored every metric of success for an AI model, except one: perplexity. It's a measure of how well the model predicts text; the lower the perplexity, the less the model is "surprised" by what it sees (a short sketch of the computation follows this result). But even with ruthlessly optimized hardware and that single-minded focus, Daimon Labs still couldn't afford to build the model Benmalek was envisioning.
Persons: Ryan Benmalek, Isaac Asimov, Dhruv Malik, Xiang Zhang, Michael Lewis Organizations: Business Insider, Daimon Labs, The University of Washington, Cornell, Apple, Google, Nvidia, Lambda Locations: Silicon Valley, Seattle, Montreal, Brooklyn, North Carolina, Canada
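Perplexity, for reference, is the exponential of a language model's average negative log-likelihood on the tokens it is asked to predict. A minimal sketch of the computation, assuming the per-token probabilities the model assigned are already available (illustrative code, not Daimon Labs'):

import math

def perplexity(token_probs):
    # token_probs: probability the model assigned to each observed token.
    # Perplexity = exp(average negative log-likelihood); lower values mean
    # the model is less "surprised" by the text it is predicting.
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that puts probability 0.5 on every observed token scores 2.0.
print(perplexity([0.5, 0.5, 0.5]))  # -> 2.0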
An employee at rival Anthropic sent OpenAI thousands of paper clips in the shape of OpenAI's logo. The prank was a subtle jibe suggesting OpenAI's approach to AI could lead to humanity's extinction. Anthropic was formed by ex-OpenAI employees who split from the company over AI safety concerns. One of OpenAI's biggest rivals played an elaborate prank on the AI startup by sending thousands of paper clips to its offices. Anthropic was founded by former OpenAI employees who left the company in 2021 over disagreements on developing AI safely.
Persons: Nick Bostrom, Sam Altman, Ilya Sutskever Organizations: OpenAI, Anthropic, Wall Street, Microsoft, Business Insider Locations: San Francisco
The previously unreported letter and AI algorithm were a catalyst that caused the board to oust Altman, the poster child of generative AI, the two sources said. The sources cited the letter as one factor among a longer list of grievances by the board that led to Altman's firing. The researchers who wrote the letter did not immediately respond to requests for comment. According to one of the sources, long-time executive Mira Murati told employees on Wednesday that a letter about the AI breakthrough called Q* (pronounced Q-Star) precipitated the board's actions. OpenAI defines AGI as AI systems that are smarter than humans.
Persons: Sam Altman, Mira Murati Organizations: OpenAI, Reuters, Microsoft
Sam Altman, CEO of ChatGPT maker OpenAI, arrives for a bipartisan Artificial Intelligence (AI) Insight Forum for all U.S. senators hosted by Senate Majority Leader Chuck Schumer (D-NY) at the U.S. Capitol in Washington, U.S., September 13, 2023. The previously unreported letter and AI algorithm were a catalyst that caused the board to oust Altman, the poster child of generative AI, the two sources said. The sources cited the letter as one factor among a longer list of grievances by the board that led to Altman’s firing. According to one of the sources, long-time executive Mira Murati told employees on Wednesday that a letter about the AI breakthrough called Q* (pronounced Q-Star) precipitated the board's actions. OpenAI defines AGI as AI systems that are smarter than humans.
Persons: Sam Altman, Chuck Schumer, Julia Nikhinson, Mira Murati, Anna Tong, Jeffrey Dastin, Krystal Hu, Kenneth Li, Lisa Shumaker Organizations: OpenAI, Senate, U.S. Capitol, Reuters, Microsoft, Thomson Reuters Locations: Washington, U.S., San Francisco, New York
Many are shrugging off the supposed existential risks of AI, labeling them a distraction. They argue big tech companies are using the fears to protect their own interests. The timing of the pushback, ahead of the UK's AI safety summit and following Biden's recent executive order on AI, is also significant. More experts are warning that governments' preoccupation with the existential risks of AI is taking priority over the more immediate threats. Merve Hickok, the president of the Center for AI and Digital Policy, raised similar concerns about the UK AI safety summit's emphasis on existential risk.
Persons: Yann LeCun, Sam Altman, Hassabis, Dario Amodei, Andrew Ng, Aidan Gomez, Merve Hickok, Rishi Sunak, Michelle Donelan Organizations: OpenAI, Anthropic, Google, CNBC, Stanford University, Australian Financial Review, Guardian, Center for AI and Digital Policy
Marc Andreessen said on the podcast Huberman Lab that fears around AI are overblown. The billionaire venture capitalist said that AI won't "decide to kill us all" or replace jobs. The real concern, he said, is the possibility that AI may end up in the hands of malicious actors. "A lot of the science-fiction scenarios are just not real," Andreessen said on an episode of Huberman Lab, a podcast led by Andrew Huberman, a neuroscientist. "AI can be an incredibly powerful tool for solving problems, and we should embrace it as such," Andreessen wrote.
Persons: Marc Andreessen, Andrew Huberman, Alexis Ohanian, Elon Musk, Sam Altman Organizations: Andreessen Horowitz (a16z), OpenAI Locations: Wall Street, Silicon Valley
There's a new ideological interest in Silicon Valley: effective accelerationism. The more formalized e/acc idea has taken shape on Twitter and through Substack newsletters since 2022. In an e/acc world, no idea that offers hypothetical value should be considered too absurd, too dangerous, too out there to make a reality. But one thing does seem certain: as long as AI remains front and center, so too will effective accelerationism.
Persons: Marc Andreessen, Garry Tan, Sam Bankman-Fried, Michael M, Nick Land, Freeman Organizations: Twitter, Getty, University of Warwick, Y Combinator Locations: Silicon Valley, San Francisco
AI is shaping up to be a new Cold War with China, according to Marc Andreessen. The Silicon Valley veteran discussed US policymakers' plans on the Joe Rogan Experience podcast. Washington's leaders are determined that the US will beat China in a global race to dominate AI as a new Cold War takes shape, according to Marc Andreessen. Andreessen added that Washington's policymakers said not only do they need "American AI to succeed," but that they need to "beat the Chinese." Andreessen said Beijing's leaders "view AI as a way to achieve population control" because "they're authoritarians."
Persons: Marc Andreessen, Joe Rogan, Elon Musk Organizations: a16z, Google, Microsoft, Meta, White House, House of Representatives Locations: China, Washington
China is signaling to the rest of the world that it's open for business again. Both Elon Musk and Janet Yellen have made trips to Beijing recently. But less money is flowing into the country – with foreign investors likely alienated by Xi Jinping's authoritarianism. Spooked investors responded by dumping Chinese stocks in a $6 trillion blowout, while the onshore Chinese yuan dropped against the US dollar.
Persons: Elon Musk, Janet Yellen, Xi Jinping, Li Qiang, John Kerry, Mark Mobius Organizations: Tesla, Communist Party, Bain & Co, Big Tech Locations: China, Beijing, Tianjin, Shanghai, the West
Elon Musk and Sam Altman are racing to create superintelligent AI. Musk said xAI plans to use Twitter data to train a "maximally curious" and "truth-seeking" superintelligence. Elon Musk is throwing out challenge after challenge to tech CEOs — while he wants to physically fight Meta's Mark Zuckerberg, he's now racing with OpenAI to create AI smarter than humans. On Saturday, Musk said on Twitter Spaces that his new company, xAI, is "definitely in competition" with OpenAI. Over a 100-minute discussion that drew over 1.6 million listeners, Musk explained his plan for xAI to use Twitter data to train superintelligent AI that is "maximally curious" and "truth-seeking."
Persons: Elon Musk, Sam Altman, Mark Zuckerberg, Ilya Sutskever, Jan Leike Organizations: OpenAI, Twitter, Semafor
Elon Musk launches AI firm xAI as he looks to take on OpenAI
  2023-07-13 | www.reuters.com | time to read: 3 min
In a Twitter Spaces event Wednesday evening, Musk explained his plan for building a safer AI. Rather than explicitly programming morality into its AI, xAI will seek to create a "maximally curious" AI, he said. Musk in March registered a firm named X.AI Corp, incorporated in Nevada, according to a state filing. The firm lists Musk as the sole director and Jared Birchall, the managing director of Musk's family office, as a secretary. Dan Hendrycks, who will advise the xAI team, is currently director of the Center for AI Safety and his work revolves around the risks of AI.
Persons: Elon Musk, Igor Babuschkin, Tony Wu, Szegedy, Greg Yang, Jared Birchall, Dan Hendrycks, Akash Sriram, Chavi Mehta, Yuvraj Malik, Aditya Soni, Anna Tong, Shailesh Kuber, Leslie Adler Organizations: xAI, SpaceX, Twitter, Microsoft, Google, DeepMind, Tesla, X.AI Corp, Center for AI Safety, X Corp, Thomson Reuters Locations: Nevada, San Francisco Bay, Bengaluru, San Francisco
Elon Musk's warning for China: superintelligent AI could take control of the country. The billionaire said he told senior leaders about the potential threat during a recent China trip. Elon Musk says he told senior leaders in China that the creation of an AI-led "digital superintelligence" could usurp the Chinese Communist Party and take control of the country. Musk has raised several alarms about the potential dangers of AI becoming a kind of "superintelligence" with capabilities that surpass those of humans. During his trip to China, Musk was treated like royalty, meeting senior government officials and business leaders to discuss popular topics such as AI.
Persons: Elon Musk, Ro Khanna, Mike Gallagher Organizations: xAI, OpenAI, Communist Party Locations: China
The event was billed as a conversation about the future of AI and came on a day that Musk launched his own new AI company, xAI. "I think a maximally curious AI, one that is just trying to sort of understand the universe is, I think, going to be pro-humanity," Musk said. He said the idea that a "digital superintelligence" could supplant the Chinese Communist Party itself seemed to resonate. Musk even said he believes the Chinese government would be open to collaborating on an international framework around AI regulation. Musk acknowledged he has "some vested interest in China" but ultimately believes "China is underrated" and that "the people of China are really awesome."
Persons: Elon Musk, Ro Khanna, Mike Gallagher, Xi Jinping, Janet Yellen, John Kerry Organizations: SpaceX, Tesla, Twitter, Viva Technology, U.S. House Armed Services Committee, Chinese Communist Party, xAI, Netflix, CNBC, YouTube Locations: Paris, France, China, Taiwan, United States, the West
OpenAI fears that superintelligent AI could lead to human extinction. It is putting together a team to ensure that superintelligent AI aligns with human interests. The new team — called Superalignment — plans to develop AI with human-level intelligence that can supervise superintelligent AI within the next four years. OpenAI CEO Sam Altman has long been calling for regulators to address AI risk as a global priority. To be sure, not everyone shares OpenAI's concerns about future problems posed by superintelligent AI.
Persons: Ilya Sutskever, Jan Leike, Sam Altman, Elon Musk Organizations: OpenAI
"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue." Superintelligent AI - systems more intelligent than humans - could arrive this decade, the blog post's authors predicted. The team's goal is to create a "human-level" AI alignment researcher, and then scale it through vast amounts of compute power. OpenAI says that means they will train AI systems using human feedback, train AI systems to assistant human evaluation, and then finally train AI systems to actually do the alignment research. AI safety advocate Connor Leahy said the plan was fundamentally flawed because the initial human-level AI could run amok and wreak havoc before it could be compelled to solve AI safety problems.
Persons: Ilya Sutskever, Jan Leike, Connor Leahy, Anna Tong, Kenneth Li, Rosalba O'Brien Organizations: OpenAI, Microsoft, Reuters, Thomson Reuters Locations: San Francisco
Elon Musk repeats call for artificial intelligence regulation
  2023-06-16 | www.reuters.com | time to read: 1 min
PARIS, June 16 (Reuters) - Billionaire Elon Musk, CEO of Tesla (TSLA.O) and owner of Twitter, reaffirmed on Friday his view that there should be a 'pause' on the development of artificial intelligence (AI) and that the AI sector needed regulation. "There is a real danger for digital superintelligence having negative consequences," said Musk at the Paris VivaTech event. "I am in favour of AI regulation," he added. Reporting by Silvia Aloisi, Gilles Guillaume, Mathieu Rosemain, Sudip Kar-Gupta, Michel Rose; Editing by Louise Heavens
Persons: Elon Musk, Silvia Aloisi, Gilles Guillaume, Mathieu Rosemain, Sudip Kar-Gupta, Michel Rose, Louise Heavens Organizations: Tesla, Twitter, Thomson Reuters Locations: Paris
ChatGPT parent OpenAI doesn't want to go public, according to CEO Sam Altman. At an event in Abu Dhabi, he said he wants to retain full control of the AI startup's technology. Altman said, "I think the chance that we have to make a very strange decision someday is non-trivial." "When we develop superintelligence, we're likely to make some decisions that public market investors would view very strangely," Altman said. The latest investment is estimated to have valued OpenAI at $29 billion, up from about $14 billion in 2021, Semafor reported in January.
Persons: Sam Altman, Altman, , Semafor, OpenAI Organizations: Service, Bloomberg, Microsoft, ChatGPT Locations: Abu Dhabi
Total: 25