
Search results for: "Hassabis"


25 mentions found


Nvidia CEO Jensen Huang said AGI will be reached in five years during the 2023 NYT DealBook Summit. Huang defined AGI as tech that exhibits basic intelligence "fairly competitive" to a normal human. Still, he admitted that AI technology is not quite there yet despite its rapid progress. Jensen Huang, the CEO of Nvidia — one of the companies that is fueling the AI revolution — predicts that we may be able to see artificial general intelligence, or AGI, within the next five years. "Software can't be written without AI, chips can't be designed without AI, nothing's possible," he concluded on the point of AI's potential.
Persons: Jensen Huang, Huang, Andrew Ross Sorkin, Ross Sorkin, Sorkin, Ilya Sutskever, Ian Hogarth, John Carmack, Demis Hassabis Organizations: Nvidia, Service, New York Times DealBook, AIs, OpenAI
Elon Musk says he wants to rebuild his friendship with Google cofounder Larry Page. Page reportedly once called Musk a speciesist in a discussion about humanity and AI safeguards. We were friends for a very long time," Musk said of Page on Lex Fridman's podcast. Elon Musk wants to be on good terms with Larry Page again after the two fought over AI safeguards. "The future of AI should not be controlled by Larry," Musk told Hassabis, according to the biography.
Persons: Elon Musk, Larry Page, Page, Musk, Lex Fridman's, , Larry, Walter Isaacson's, DeepMind, Demis Hassabis, Tucker Carlson, OpenAI, Sam Altman Organizations: Service, Google Locations: DeepMind
An AI godfather says we should all be worried about the concentration of power in the AI sector. Bengio said the control of powerful AI systems was a central question for democracy. The concentration of power in the AI arena is one of the main risks facing the industry, an AI godfather says. Regulation, at least in its current form, will not be the boost for big tech companies that some industry experts have suggested it could be, he added.
Persons: Yoshua Bengio, Bengio, , Yoshua, I've, Yann LeCun, OpenAI's Sam Altman, LeCun, Anthropic's Dario Amodei, Benigo Organizations: Service Locations: Canadian, ChatGPT
Here's who's going: Major names in the technology and political world will be there. They range from Tesla CEO Elon Musk, whose private jet landed in the U.K. late Tuesday, to U.S. Vice President Kamala Harris. What the summit seeks to address: The main objective of the U.K. AI summit is to find some level of international coordination when it comes to agreeing on some principles for the ethical and responsible development of AI models. The summit is squarely focused on so-called "frontier AI" models — in other words, the advanced large language models, or LLMs, like those developed by companies such as OpenAI, Anthropic, and Cohere. Loss-of-control risks refer to a situation in which the AI that humans create could be turned against them.
Persons: Elon Musk, Mandel Ngan, Rishi Sunak's, ChatGPT, Here's who's, Kamala Harris, Musk, Elon, Brad Smith, Demis, Yann LeCun, Global Affairs Nick Clegg, Adam Selipsky, Sam Altman, Dario, Jensen Huang, Rene Haas, Dario Gil Darktrace, Poppy Gustaffson Databricks, Ali Ghodsi, Marc Benioff, Cheun Kyung, Alex Karp, Emmanuel Macron, Joe Biden, Justin Trudeau, Olaf Scholz, Sunak, Will Organizations: Senate, Intelligence, U.S, Capitol, Washington , D.C, Afp, Getty, Bletchley, Microsoft, Tesla, CNBC, Global Affairs, Web, Rene Haas IBM, Marc Benioff Samsung, Technology, South, Sony, Joe Biden Canadian Locations: U.S, Washington ,, China, U.K, South Korean, Chesnot
Andrew Ng, formerly of Google Brain, said Big Tech is exaggerating the risk of AI wiping out humans. Some of the biggest figures in artificial intelligence are publicly arguing over whether AI is really an extinction risk, after AI scientist Andrew Ng said such claims were a cynical play by Big Tech. Ng, a cofounder of Google Brain, suggested to The Australian Financial Review that Big Tech was seeking to inflate fears around AI for its own benefit. — Geoffrey Hinton (@geoffreyhinton) October 31, 2023. Meta's chief AI scientist Yann LeCun, also known as an AI godfather for his work with Hinton, sided with Ng.
Persons: Andrew Ng, OpenAI's Sam Altman, , Andew Ng, Ng, It's, Elon Musk, Sam Altman, DeepMind, Demis Hassabis, Googler Geoffrey Hinton, Yoshua, godfathers, — Geoffrey Hinton, Yann LeCun, Hinton, LeCun, Meredith Whittaker, Whittaker Organizations: Google, Big Tech, AI's, Service, Australian Financial Locations: Hinton, British, Canadian, @geoffreyhinton
Many are shrugging off the supposed existential risks of AI, labeling them a distraction. They argue big tech companies are using the fears to protect their own interests. The timing of the pushback, ahead of the UK's AI safety summit and following Biden's recent executive order on AI, is also significant. More experts are warning that governments' preoccupation with the existential risks of AI is taking priority over the more immediate threats. Merve Hickok, the president of the Center for AI and Digital Policy, raised similar concerns about the UK AI safety summit's emphasis on existential risk.
Persons: , You've, there's, Yann LeCun, Altman, Hassabis, LeCun, LeCun's, OpenAI's Sam Altman, Anthropic's Dario Amodei, Andrew Ng, hasn't, Anthropic, Aidan Gomez, Merve Hickok, Hickok, Rishi Sunak, Michelle Donelan Organizations: Service, Google, CNBC, Stanford University, Australian Financial, Guardian, Center, AI
LONDON, Oct 31 (Reuters) - Britain will host the world's first global artificial intelligence (AI) safety summit this week to examine the risks of the fast-growing technology and kickstart an international dialogue on regulation of it. The aim of the summit is to start a global conversation on the future regulation of AI. Currently there are no broad-based global regulations focusing on AI safety, although some governments have started drawing up their own rules. A recent Financial Times report said Sunak plans to launch a global advisory board for AI regulation, modeled on the Intergovernmental Panel on Climate Change (IPCC). When Sunak announced the summit in June, some questioned how well-equipped Britain was to lead a global initiative on AI regulation.
Persons: Olaf Scholz, Justin Trudeau –, Kamala Harris, Ursula von der Leyen, Wu Zhaohui, Antonio Guterres, James, Demis Hassabis, Sam Altman, OpenAI, Elon Musk, , Stuart Russell, Geoffrey Hinton, Alan Turing, Rishi Sunak, Sunak, Joe Biden, , Martin Coulter, Josephine Mason, Christina Fincher Organizations: Bletchley, WHO, Canadian, European, United Nations, Google, Microsoft, HK, Billionaire, Alan, Alan Turing Institute, Life, European Union, British, EU, UN, Thomson Locations: Britain, England, Beijing, British, Alibaba, United States, China, U.S
The boss of Google DeepMind pushed back on a claim from Meta's artificial intelligence chief alleging the company is pushing worries about AI's existential threats to humanity to control the narrative on how best to regulate the technology. "If your fearmongering campaigns succeed, they will *inevitably* result in what you and I would identify as a catastrophe: a small number of companies will control AI," LeCun said on X, the platform formerly known as Twitter, on Sunday. I also know that producing AI systems that are safe and under our control is possible. "Then there's sort of the misuse of AI by bad actors repurposing technology, general-purpose technology for bad ends that they were not intended for. "And then finally, I think about the more longer-term risk, which is technical AGI [artificial general intelligence] risk," Hassabis said.
Persons: Google DeepMind, CNBC's Arjun Kharpal, Hassabis, DeepMind, Yan LeCun, Sam Altman, Dario Amodei, LeCun, Yan, That's, Meta Organizations: Google, CNBC, Cooperation, China
We have to talk to everyone, including China, to understand the potential of AI technology, Google DeepMind CEO says. DeepMind CEO Demis Hassabis and Google's SVP of Research, Technology & Society, James Manyika, discuss international cooperation on AI regulation and the UK as a hub for innovation for the technology.
Persons: Demis Hassabis, James Manyika Organizations: Google, Research, Technology & Society Locations: China
Meta's Yann LeCun thinks tech bosses' bleak comments on AI risks could do more harm than good. Thanks to @RishiSunak & @vonderleyen for realizing that AI xrisk arguments from Turing, Hinton, Bengio, Russell, Altman, Hassabis & Amodei can't be refuted with snark and corporate lobbying alone. https://t.co/Zv1rvOA3Zz — Max Tegmark (@tegmark) October 29, 2023. LeCun says founder fretting is just lobbying: Since the launch of ChatGPT, AI's power players have become major public figures. The focus on hypothetical dangers also diverts attention away from the boring-but-important question of how AI development actually takes shape. For LeCun, keeping AI development closed is a real reason for alarm.
Persons: Meta's Yann LeCun, , Yann LeCun, Sam Altman, Anthropic's Dario Amodei, Altman, Hassabis, LeCun, Amodei, LeCun's, Max Tegmark, Turing, Hinton, Russell, Tegmark, I'd, fretting, Elon Musk, OpenAI's, OpenAI Organizations: Service, Google, Hassabis, Research, Meta Locations: Bengio, West Coast, China
Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said. Some large tech companies didn't want to compete with open source, he added. A leading AI expert and Google Brain cofounder said Big Tech companies were stoking fears about the technology's risks to shut down competition. In May, AI experts and CEOs signed a statement from the Center for AI Safety that compared the risks posed by AI with nuclear war and pandemics. Any necessary AI regulation should be created thoughtfully, he added.
Persons: Andrew Ng, , Sam Altman, It's, Demis Hassabis, Dario Amodei, Ng Organizations: Big Tech, Australian Financial, Service, Google, Stanford University, Center, AI Safety, European
British Prime Minister Rishi Sunak leaves 10 Downing Street to attend Prime Minister's Questions at the Houses of Parliament in London, Britain, October 18, 2023. Sunak wants Britain to be a global leader in AI safety, carving out a role after Brexit between the competing economic blocs of the United States, China and the European Union in the rapidly growing technology. The UK government will also publish a report on "frontier" AI, the cutting-edge general-purpose models that the summit will focus on. The report will inform discussions about risks such as societal harms, misuse and loss of control, the government said. China is expected to attend, according to a Financial Times report, while European Commission Vice President Vera Jourova has received an invitation.
Persons: Rishi Sunak, Clodagh, Sunak, Kamala Harris, Demis Hassabis, Vera Jourova, Paul Sandle, Mike Harrison Organizations: British, REUTERS, Safety, European Union, Google, Financial Times, European, Thomson Locations: London, Britain, Bletchley, United States, China, Canada, France, Germany, Italy, Japan, Hiroshima
Meta's chief AI scientist Yann LeCun said that superintelligent AI is unlikely to wipe out humanity. He told the Financial Times that current AI models are less intelligent than a cat. AI CEOs signed a letter in May warning that superintelligent AI could pose an "extinction risk." Fears that AI could wipe out the human race are "preposterous" and based more on science fiction than reality, Meta's chief AI scientist has said. However, LeCun told the Financial Times that many AI companies had been "consistently over-optimistic" about how close current generative models were to AGI, and that fears over AI extinction were overblown as a result.
Persons: Yann LeCun, , Albert Einstein, Sam Altman, Demis Hassabis, Dario Amodei, OpenAI's, LeCun, They're, Meta Organizations: Financial Times, Service, Intelligence, Microsoft
LONDON, Oct 18 (Reuters) - Britain will host the world's first global artificial intelligence (AI) safety summit next month, aiming to carve out a role following Brexit as an arbiter between the United States, China, and the European Union in a key tech sector. The Nov. 1-2 summit will focus heavily on the existential threat some lawmakers, including Britain's Prime Minister Rishi Sunak, fear AI poses. Sunak, who wants the UK to become a hub for AI safety, has warned the technology could be used by criminals and terrorists to create weapons of mass destruction. Critics question why Britain has appointed itself the centre of AI safety. "We are now reflecting on potential EU participation," a spokesperson told Reuters.
Persons: Dado Ruvic, Rishi Sunak, Sunak, Alan Turing, Kamala Harris, Demis, Matt Clifford, Clifford, we're, Stephanie Hare, Elon Musk, Geoffrey Hinton, Britain, OpenAI, Marc Warner, it's, Vera Jourova, Brando Benifei, Dragos Tudorache, Benifei, Jeremy Hunt, Martin Coulter, Matt Scuffham, Mark Potter Organizations: REUTERS, European Union, Britain's, EU, Bletchley, Google, San, Reuters, China . Finance, Politico, Thomson Locations: Britain, United States, China, England, British, France, Germany, London, U.S, San Francisco, Beijing, Europe
She also discussed how the company is rethinking the future of Google Assistant. There's a lot of pressure on Google right now. A key person in the middle of all this is Sissie Hsiao, Google's VP and general manager of Bard and Google Assistant. If it disappoints, it will embolden critics who say Google has fallen behind. Google Assistant was the answer, and in 2021 Google reshuffled its search team to put Hsiao in charge of its voice assistant.
Persons: Bard, , OpenAI's, Sundar Pichai, Demis Hassabis, Sissie Hsiao, Google's, She's, Hsiao, Gemini, I've, OpenAI, Josh Edelson, Getty Hsiao, It's, it's Organizations: Google, Service, Gemini, Microsoft Locations: Bard
What the Nobel Prizes get wrong about science
2023-09-29 | by Katie Hunt | edition.cnn.com | time to read: +9 min
Peter Brzezinski, the secretary of the committee for the Nobel chemistry prize, said there were no plans to change the rule. He said the Nobel Prize committees, at least for science prizes, are “innately conservative.” Diversity: Other criticism leveled at the Nobel Prizes includes the lack of diversity among winners. Of course, these flaws and gaps only matter because the Nobels are far better known than other science prizes, Rees added. The Nobel Prize in physiology or medicine will be announced on Monday, followed by the physics prize on Tuesday and the Nobel Prize in chemistry on Wednesday. The Nobel Prize for literature and the Nobel Peace Prize will be announced on Thursday and Friday, respectively.
Persons: Alfred Nobel, Martin Rees, Rees, , Jonathan Nackstrand, Rainer Weiss, Barry Barish, Kip Thorne, David Pendlebury, “ Nobel, ” Pendlebury, Nobel’s, Peter Brzezinski, , ” Brzezinski, John Jumper, AlphaFold, Lasker, Pendlebury, Emmanuelle Charpentier, Jennifer Doudna, it’s, Carolyn Bertozzi, Andrea Ghez, Naomi Oreskes, Henry Charles Lea, ” Rees Organizations: CNN, Royal Society, Getty, Clarivate’s Institute for Scientific, Nobel Foundation, Academy, Google, Harvard University Locations: Swedish, AFP, Stockholm
Google is preparing to launch its answer to rival OpenAI's GPT-4: Gemini. Gemini is a next-gen, multimodal AI model due for release later this year. The tech is a next-gen, multimodal AI model being worked on by a team of researchers pulled from Google's now-merged AI divisions DeepMind and Google Brain. Gemini is multimodal: Google's Gemini is a multimodal AI, meaning it can process more than one type of data. Researchers behind the SemiAnalysis blog have also predicted that Google's Gemini would likely outperform GPT-4 because of Google's access to top-flight chips.
Persons: OpenAI's GPT, OpenAI's, Sam Altman, AlphaGo Gemini, Google's DeepMind, AlphaGo, Lee Sedol, ChatGPT, Demis Hassabis, DeepMind, Bard Organizations: Google, Service, OpenAI, AlphaGo, Wired Locations: Wall, Silicon, Google's
DeepMind's Mustafa Suleyman recently talked about setting boundaries on AI with the MIT Technology Review. "You wouldn't want to let your little AI go off and update its own code without you having oversight," he told the MIT Technology Review. Last year, Suleyman cofounded AI startup Inflection AI, whose chatbot Pi is designed to be a neutral listener and provide emotional support. Suleyman told the MIT Technology Review that though Pi is not "as spicy" as other chatbots, it is "unbelievably controllable." And while Suleyman told the MIT Technology Review he's "optimistic" that AI can be effectively regulated, he doesn't seem to be worried about a singular doomsday event.
Persons: DeepMind's Mustafa Suleyman, Mustafa Suleyman, Suleyman, there's, Sam Altman, Elon Musk, Mark Zuckerberg, — Suleyman, Pi, Hassabis, Satya Nadella, Geoffrey Hinton, Yoshua Organizations: MIT Tech, Service, MIT Technology, AIs, Life Institute Locations: Wall, Silicon, Washington
At Musk's 2013 birthday bash, he and Larry Page discussed AI's role in humanity's future, per biographer Walter Isaacson's retelling. Page labeled Musk a "specist," while Musk defended his views as "pro-human." Musk even attempted to stop Google's acquisition of AI company DeepMind, saying Page shouldn't control the future of AI, Isaacson wrote in a Time Magazine report. Per Isaacson, the reason behind the thwarting attempt was Musk's distrust of then-CEO of Google, Larry Page, and his views towards AI. Musk, Page, and Google did not immediately respond to a request for comment from Insider, sent outside regular business hours.
Persons: Larry Page, Walter Isaacson's, Musk, Isaacson, Elon Musk, Walter Isaacson —, Per Isaacson, DeepMind, Demis Hassabis, Larry, Luke Nosek, Musk's, , Tucker Carlson, Page, Sam Altman, Eric Schmidt, Geoffrey Hinton Organizations: Magazine, Service, Time Magazine, Google, Media, PayPal Locations: Wall, Silicon, DeepMind, Napa Valley , California
Jeff Kowalsky | Bloomberg | Getty Images. A string of Google executives have changed their roles in the span of several months, in a shift that has sidelined many of the company's remaining old guard. The changes encompass high-profile executives such as CFO Ruth Porat, YouTube CEO Susan Wojcicki, and employee No. Some say they have left their roles for a new challenge and others have left to seek opportunities in AI. While she'll still be in an advisory role at Google, she said, she wanted to "start a new chapter." Google's AI head, Jeff Dean, who's been at Google since 1999, became a chief scientist as part of the change.
Persons: Ruth Porat, Jeff Kowalsky, Susan Wojcicki, Urs Hölzle, Susan Wojcicki —, Sergey Brin, Larry Page, she'll, Robert Kyncl, David Lawee, Hölzle, Morgan Stanley, Porat, Courtenay Mencini, who've, it's, OpenAI, Sundar Pichai, Google execs, Prabhakar Raghavan, HJ Kim, Geoffrey Hinton, Demis, James Manyika, Jeff Dean, who's, It's Organizations: Inc, Michigan Central Station, Bloomberg, Getty, Google, YouTube, Warner Music Group, CapitalG, CNBC, New York Times, McKinsey, Google Research Locations: Detroit , Michigan, Silicon Valley
There is an influx of cash and interest into artificial intelligence startups right now. Insider spoke to ex-Google DeepMind staffers who have founded AI startups in stealth. He isn't the only DeepMind alum working on practical applications of artificial intelligence. Last month, Mistral, an AI startup founded by a DeepMind alum, secured $113 million in seed funding from Lightspeed just four weeks after it launched. He has since been working on his second AI startup in stealth mode since June 2023, adding that working under the radar was inspired by DeepMind's "own model of working in stealth."
Persons: Mustafa Suleyman, Suleyman, Devang Agrawal, Jonathan Godwin, DeepMind, Godwin, Simon Kohl, Ang Li, isn't, Li, Simon Menashy, Adam Liska, Demis Hassabis, GlyphicAI's Agrawal, Mehdi Ghissassi, OpenAI's, Agrawal, Karl Moritz Hermann, DeepMind's Organizations: Google, Microsoft, Nvidia, Labs, MMC Ventures, DeepMind, Lightspeed Locations: DeepMind, London, California
Brin is frequently showing up at Google HQ to help its AI efforts, The Wall Street Journal reported. He is reportedly deeply involved in the development of Gemini, an AI model that aims to rival GPT-4. Google cofounder Sergey Brin is reportedly showing up often at the search giant's headquarters to help develop ChatGPT rival Gemini and boost its AI ambitions. Google is pouring efforts into Gemini, an AI model designed to rival the GPT-4 model underlying OpenAI's technology. This week, Google's AI ambitions faced another threat as Meta unveiled Llama 2.
Persons: Sergey Brin, Brin, GPT, Larry Page, Sundar Pichai, Demis Hassabis Organizations: Google, Street Journal, Gemini, Morning, The New York Times, Wall Street, Meta, Microsoft Locations: The
While investors have poured billions into AI startups, concern about AI's capabilities has grown. People don't all have the same value systems, so AI alignment can look different depending on where the AI is operated and deployed. Investors poured $29 billion into AI startups in the first six months of 2023. Aligned AI. The drive to fund AI safety: AI researchers are also vigilant about where the funds for AI safety and alignment come from.
Persons: ChatGPT, Stuart Armstrong, Rebecca Gorman, , Gorman, OpenAI's Sam Altman, Demis Hassabis, Bill Gates, Connor Leahy, Leahy, it's, Sam Bankman, Ian Hogarth, Hogarth Organizations: Oxford University, Investors, Alameda Research, FTX Locations: London
In April, Google made a dramatic move to face down the threat from Microsoft and OpenAI: it combined its AI research team, Brain, with DeepMind. By fusing it with its central AI unit, Google sent an unambiguous message that it was pulling out all the stops to supercharge its work in AI. Insider obtained internal org charts that reveal the new power structure of Google DeepMind. Hassabis oversees around 2,450 full-time employees, most of whom are Google DeepMind employees. Dean, who previously oversaw all of Google's AI unit, has moved from his management position into the new role of Chief Scientist, reporting directly to CEO Sundar Pichai.
Persons: DeepMind, Demis Hassabis, There's, Hassabis, Eli Collins, Bard chatbot, Zoubin Ghahramani, Jeff Dean, Koray, Google's, Dean, Sundar Pichai, Emanuel Taropa Organizations: Google, Microsoft, Brain, Research, Technology, Gemini, General Intelligence
Left to right: Microsoft's CTO Kevin Scott, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis. Joy Malone/David Ryder/Bloomberg/Joel Saget/AFP/Getty Images. Some AI industry experts say that focusing attention on far-off scenarios may distract from the more immediate harms that a new generation of powerful AI tools can cause to people and communities, including spreading misinformation, perpetuating biases and enabling discrimination in various services. “Motives seemed to be mixed,” Gary Marcus, an AI researcher and New York University professor emeritus who testified before lawmakers alongside Altman last month, told CNN. In his testimony before Congress, Altman also said the potential for AI to be used to manipulate voters and target disinformation were among “my areas of greatest concern.” Even in more ordinary use cases, however, there are concerns. Influencing regulators: Regulators may be the real intended audience for the tech industry’s doomsday messaging.
Persons: Sam Altman, Altman, Demis Hassabis, Kevin Scott, Elon Musk, Joy Malone, David Ryder, Joel Saget, ” Gary Marcus, , Marcus, Gary Marcus, Eric Lee, Emily Bender, Bender, ” Bender, , we’re Organizations: CNN, Google, Microsoft, Bloomberg, Getty, New York University, OpenAI, University of Washington, Laboratory, Washington Locations: Valley, AFP, Washington , DC, Congress
Total: 25