Search results for: "STUART RUSSELL"


17 mentions found


There's a battle in Silicon Valley over AI risks and safety — and it's escalating fast.
Right to Warn
While the concerns around AI safety are nothing new, they're increasingly being amplified by those within AI companies. OpenAI did not immediately respond to a request for comment from Business Insider, made outside normal working hours. A spokesperson previously reiterated the company's commitment to safety, highlighting an "anonymous integrity hotline" for employees to voice their concerns and the company's safety and security committee.
Persons: OpenAI, Bengio, Geoffrey Hinton, Stuart Russell, Jacob Hilton, Hilton, Sam Altman, Helen Toner, Altman, Russell, Daniel Kokotajlo, Kokotajlo Organizations: Service, Google, Business Locations: Silicon Valley, OpenAI
A group of current and former OpenAI employees published an open letter Tuesday describing concerns about the artificial intelligence industry's rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up. "AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this," the employees wrote. The letter also details the current and former employees' concerns about insufficient whistleblower protections for the AI industry, saying that without effective government oversight, employees are in a relatively unique position to hold companies accountable. "Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated." Four anonymous OpenAI employees and seven former ones, including Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler, signed the letter.
Persons: OpenAI, they've, Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright, Daniel Ziegler, Ramana Kumar, Neel Nanda, Geoffrey Hinton, Yoshua Bengio, Stuart Russell Organizations: Google, Microsoft, Meta, CNBC, Security Locations: Anthropic
It's all unraveling at OpenAI (again)
2024-06-04 | by Madeline Berg | www.businessinsider.com | time to read: +10 min
In a statement to Business Insider, an OpenAI spokesperson reiterated the company's commitment to safety, highlighting an "anonymous integrity hotline" for employees to voice their concerns and the company's safety and security committee.
Safety second (or third)
A common theme of the complaints is that, at OpenAI, safety isn't first — growth and profits are. (In a responding op-ed, current OpenAI board members Bret Taylor and Larry Summers defended Altman and the company's safety standards.) "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point." (Altman and OpenAI said he recused himself from these deals.)
Persons: Sam Altman, Daniel Kokotajlo, OpenAI, Altman, Helen Toner, Tasha McCauley, Toner, McCauley, Bret Taylor, Larry Summers, Kokotajlo, Jan Leike, Ilya Sutskever, Leike, Stuart Russell, NDAs, Scarlett Johansson, lawyered, Johansson, I've, Sam Altman Organizations: Service, New York Times, Business, Times, Twitter, Microsoft, The New York Times, BI, Street, OpenAI, OpenAI's, Apple Locations: OpenAI, Russian, Reddit
AI's golden boy, Sam Altman, may be starting to lose his luster. The company has also been dealing with comments from former executives that its commitment to AI safety leaves much to be desired.
ScaJo scandal
The criticism around AI safety is the latest blow for Altman, who is fighting battles on multiple fronts.
Persons: Sam Altman, Gretchen Krueger, Jan Leike, Ilya Sutskever, Altman, Stuart Russell, Russell, Scarlett Johansson, Paul Morigi, OpenAI Organizations: Service, Business, Wednesday, UC Berkeley, Microsoft Locations: OpenAI, Russian
LONDON, Oct 31 (Reuters) - Britain will host the world's first global artificial intelligence (AI) safety summit this week to examine the risks of the fast-growing technology and kickstart an international dialogue on its regulation. The aim of the summit is to start a global conversation on the future regulation of AI. Currently there are no broad-based global regulations focusing on AI safety, although some governments have started drawing up their own rules. A recent Financial Times report said Sunak plans to launch a global advisory board for AI regulation, modeled on the Intergovernmental Panel on Climate Change (IPCC). When Sunak announced the summit in June, some questioned how well-equipped Britain was to lead a global initiative on AI regulation.
Persons: Olaf Scholz, Justin Trudeau, Kamala Harris, Ursula von der Leyen, Wu Zhaohui, Antonio Guterres, James, Demis Hassabis, Sam Altman, OpenAI, Elon Musk, Stuart Russell, Geoffrey Hinton, Alan Turing, Rishi Sunak, Sunak, Joe Biden, Martin Coulter, Josephine Mason, Christina Fincher Organizations: Bletchley, WHO, Canadian, European, United Nations, Google, Microsoft, HK, Billionaire, Alan, Alan Turing Institute, Life, European Union, British, EU, UN, Thomson Locations: Britain, England, Beijing, British, Alibaba, United States, China, U.S
The letter, issued a week before the international AI Safety Summit in London, lists measures that governments and companies should take to address AI risks. Currently there are no broad-based regulations focusing on AI safety, and the European Union's first set of AI legislation has yet to become law, as lawmakers have yet to agree on several issues. "It (investments in AI safety) needs to happen fast, because AI is progressing much faster than the precautions taken," he said. Since the launch of OpenAI's generative AI models, top academics and prominent CEOs such as Elon Musk have warned about the risks of AI, including calling for a six-month pause in developing powerful AI systems. "There are more regulations on sandwich shops than there are on AI companies."
Persons: Dado Ruvic, Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song, Yuval Noah Harari, Elon Musk, Stuart Russell, Supantha Mukherjee, Miral Organizations: REUTERS, Rights, Safety, European, Elon, Thomson Locations: STOCKHOLM, London, European Union, British, Stockholm
In U.S.-China AI contest, the race is on to deploy killer robots
2023-09-08 | www.reuters.com | time to read: +26 min
In this high-tech contest, seizing the upper hand across fields including AI and autonomous weapons, like Ghost Shark, could determine who comes out on top. This could become critical if the United States intervened against an assault by Beijing on Taiwan.
Cheap and expendable
The AI military sector is dominated by software, an industry where change comes fast. Still, the available disclosures of spending on AI military research do show that outlays on AI and machine learning grew sharply in the decade from 2010. The Costa Mesa, California-based company now employs more than 1,800 staff in the United States, the United Kingdom and Australia.
Persons: America’s, Shane Arnott, Anduril, ” Arnott, Arnott, Mick Ryan, Eric Schmidt, hasn’t, Lloyd Austin, Stuart Russell, Russell, Kathleen Hicks, “ We’ll, Palmer Luckey, Luckey, ” Arnott didn’t, Biden, Tsai Ing, Frank Kendall, Datenna, Martijn Rasser, Feng Yanghe, Feng, Palmer, ” Anduril, Arnott wouldn’t, David Lague, Edgar Su, Catherine Tai, Peter Hirschberg Organizations: Australian Navy, Ghost Sharks, Sharks, Reuters, Defense, Australian, Chinese Communist Party, Beijing, People’s Liberation Army, PLA, Department of Defense, Pentagon, Australia’s Department of Defence, Australian Defence Force, Technologists, University of California, U.S ., U.S, Teledyne FLIR, Facebook, VR, Military, . Air Force, FH, U.S . Central Intelligence Agency, Department, Statistics, Harvard University, Biden Administration, Special, Command, Ministry of Defense, Veteran Locations: China, Australia, United States, Sydney, Britain, Japan, Singapore, South Korea, Europe, Asia, Ukraine, America, U.S, Taiwan, East Asia, Beijing, Russian, Berkeley, Fort Campbell, Tennessee, Kenya, Russia, Colorado, Zhuhai, Netherlands, Costa Mesa, California, United Kingdom, Virginia, Canberra, Washington
WASHINGTON, July 18 (Reuters) - Artificial intelligence startup Anthropic's CEO Dario Amodei will testify on July 25 at a U.S. Senate hearing on artificial intelligence as lawmakers consider potential regulations for the fast-growing technology, the Senate panel scheduling the hearing said on Tuesday. "It’s our obligation to address AI’s potential threats and risks before they become real," said Democratic Senator Richard Blumenthal, the subcommittee chair. "We are on the verge of a new era, with major consequences for workers, consumer privacy, and our society." President Joe Biden met with the CEOs of top artificial intelligence companies in May, including Amodei, and made clear they must ensure their products are safe before they are deployed. The report would help push federal financial regulators to adopt and adapt to AI changes disrupting the industry, Schumer's office said.
Persons: Dario Amodei, Amodei, Yoshua Bengio, Stuart Russell, Richard Blumenthal, Josh Hawley, Joe Biden, Chuck Schumer, David Shepardson, Leslie Adler, Chris Reese Organizations: U.S, Senate, Privacy, Technology, Google, Democratic, Republican, Thomson
Speaking at a UN summit, a Berkeley professor said AI developers are "running out of text" to train chatbots. But Russell's insights point toward another potential vulnerability: the shortage of text available to train these models. A study conducted last November by Epoch, a group of AI researchers, estimated that machine-learning training will likely exhaust all "high-quality language data" before 2026. Language data in "high-quality" sets comes from sources such as "books, news articles, scientific papers, Wikipedia, and filtered web content," according to the study. Russell added that while there are possible explanations for such a purchase, "the natural inference is that there isn't enough high-quality public data left."
Persons: Stuart Russell, Russell, OpenAI, Elon Musk, he's, Sarah Silverman, Mona Awad, Paul Tremblay, Sam Altman, Altman Organizations: UN, University of California, International Telecommunication Union, OpenAI Locations: Berkeley, UN, Abu Dhabi
Tools like ChatGPT could stoke fears of schools employing fewer teachers, according to an AI expert. Stuart Russell spoke to The Guardian about how traditional teaching roles could change. The education sector has been having a difficult time adapting to new AI tech. Speaking to The Guardian, Russell said the rising use of the technology could spark "reasonable" fears among those working in the education sector that fewer teachers, or possibly none at all, could be employed by schools. He added that he thought humans would still play a role, but it could differ from traditional teaching duties.
Persons: Stuart Russell, Russell Organizations: Guardian, University of California, Good Global Summit, Oxford, Russell Group Locations: Berkeley, Geneva, Cambridge
Oppenheimer’s list of books included works by Plato, mathematician Bernhard Riemann and scientist Michael Faraday, and the “Bhagavad-Gita,” with which he has famously long been associated. What happens when the inner workings and potential reach of scientific inventions are unknown, even to the human beings who create them? Still, Pride is also a time to revel in culture’s power to transform, sustain and bring joy to LGBTQ communities. But Medvedev knows that above all else he needs Putin to think of him as unequivocally loyal and useful. What it will do is help 40 million borrowers who, like me, were drowning in debt and need immediate relief.
Persons: Robert Oppenheimer, ” Oppenheimer, Plato, Bernhard Riemann, Michael Faraday, William Shakespeare’s “, ” Charles Baudelaire’s “, Fleurs, Mal ”, Eliot’s, Oppenheimer, ” Matthew Zapruder, William Carlos Williams, Nick Anderson, ChatGPT, Stuart Russell, Jessica Chia, Bethany Cianciolo, Russell, isn’t, ” Russell, Clay Jones, Joe Biden, John Avlon, Kevin McCarthy, McCarthy, Joel Pett, Poppy Harlow, James Comey, Donald Trump, Republicans ’, MAGA, Julian Zelizer, Zelizer, Trump, Kayleigh McEnany, Rob Finnerty, Matt Wolking, Cupp, McEnany, that’s, Kayleigh, Pride Luciano, Sereno, Luciano Vecchio, It’s, ” Vecchio, “ Sereno, Dmitry Medvedev, Vladimir Putin, Russia’s, Frida Ghitis, Medvedev, Putin, ” Medvedev “, Michael Bociurkiw, Biden, Sophia A, Nelson ., Nelson, it’s, Brandon Bell, Jill Filipovic —, Filipovic, we’ve, ” Don’t, Keith Magee, Kara Alaimo, James Moore, Texas GOP Tess Taylor, Lala Tanmoy Das, Alex Soros, Scottie Pippen can’t, Jordan, Scottie Pippen, Nathaniel S, Butler, NBAE, Michael Jordan, ” Pippen, Charles Barkley, Phil Jackson —, Will Leitch, ” Leitch, Pippen, Leitch, There’s Organizations: CNN, Manhattan, American, Committee, Tribune, Agency, Biden, Republicans, Trump, GOP, Luciano Vecchio Pride, United, AFP, Russia’s Security, Republican, Texas GOP, Philadelphia 76ers, Getty, NBA Locations: Berkeley, Iowa, revel, it’s, Argentina, United Russia, United Kingdom, Russia, Houston City, America, European, Texas
Will general purpose AI — AI that is as capable as humans — eventually take over the world? “…even though we may understand how to build perfectly safe general purpose AI, what’s to stop Dr. We don’t know if they reason; we don’t know if they have their own internal goals that they’ve learned or what they might be. It is not general purpose AI, but it’s giving people a taste of what it would be like. And so it turns out that you can actually build AI systems that have those properties, but they’re very different from the kinds of AI systems that we know how to build.
Persons: CNN —, ChatGPT, Bill Gates, Stuart Russell, Russell, ” Russell, they’ve, Peg Skorpinski “, ” Stuart Russell Russell, STUART RUSSELL, ” Stuart Russell, we’ll, it’s, they’re, That’s, Arthur Samuel, Samuel, Travis Teo, I’ve, Garry Kasparov, Kasparov, Stan Honda, There’s, they’re misaligned, you’ve, It’s, that’s, we’ve Organizations: CNN, University of California, IBM Watson Media, Hyundai, Boston Dynamics, Reuters, Microsoft, Artificial, Intelligence, US National Academies, GPT, IBM's, Getty, Federal Aviation Administration, Nuclear Regulatory, PIXAR Locations: Berkeley, Singapore, New York, AFP, ChatGPT, Luxembourg, Cayman Islands, United States, California
LONDON, May 5 (Reuters) - Artificial intelligence could pose a "more urgent" threat to humanity than climate change, AI pioneer Geoffrey Hinton told Reuters in an interview on Friday. "I wouldn't like to devalue climate change. I wouldn't like to say, 'You shouldn't worry about climate change.' He added: "With climate change, it's very easy to recommend what you should do: you just stop burning carbon. Signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and fellow AI pioneers Yoshua Bengio and Stuart Russell.
March 29 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Rather than pause research, she said, AI researchers should be subjected to greater transparency requirements. "If you do AI research, you should be very transparent about how you do it."
March 28 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned (GOOGL.O) DeepMind, as well as AI heavyweights Yoshua Bengio and Stuart Russell. Musk, whose carmaker Tesla (TSLA.O) is using AI for an autopilot system, has been vocal about his concerns about AI.
March 28 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in training systems more powerful than OpenAI's newly launched model GPT-4, they said in an open letter, citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter also detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Musk, whose carmaker Tesla (TSLA.O) is using AI for an autopilot system, has been vocal about his concerns about AI. Sam Altman, chief executive at OpenAI, hasn't signed the letter, a spokesperson at Future of Life told Reuters.
AI experts and company leaders have signed an open letter calling for a pause on AI development. The letter warns that AI systems such as OpenAI's GPT-4 are becoming "human-competitive at general tasks" and pose a potential risk to humanity and society. Here are the key points:
Out-of-control AI
The non-profit floats the possibility of developers losing control of powerful new AI systems and their intended effect on civilization.
A "dangerous race"
The letter warned that AI companies are locked in an "out-of-control race to develop and deploy" new advanced systems.
Six-month pause
The open letter asks for a six-month break from developing any AI systems more powerful than those already on the market.
Total: 17