
Search results for: "DeepMind"


25 mentions found


The chosen location for the two-day conference has a special association with the man considered by many to be the father of modern computer science, Alan Turing. Before 1938, Bletchley Park was a mansion in the Buckinghamshire countryside built for a politician during the Victorian era. "What Alan Turing predicted many decades ago is now coming to fruition," she said, referring to his research into machine learning. "What happened at Bletchley Park eighty years ago opened the door to the new information age," Donelan said. Since then, men and women cautioned or convicted under historical homosexuality legislation were pardoned under what is known as the "Alan Turing law."
Here's who's going: Major names in the technology and political world will be there. They range from Tesla CEO Elon Musk, whose private jet landed in the U.K. late Tuesday, to U.S. Vice President Kamala Harris. What the summit seeks to address: The main objective of the U.K. AI summit is to find some level of international coordination on principles for the ethical and responsible development of AI models. The summit is squarely focused on so-called "frontier AI" models — in other words, the advanced large language models, or LLMs, like those developed by companies such as OpenAI, Anthropic, and Cohere. Loss-of-control risks refer to a situation in which the AI that humans create could be turned against them.
The UK's AI summit is underway. Some AI experts and startups say they've been frozen out in favor of bigger tech companies. They warn that the "closed door" event risks ensuring that AI is dominated by select companies. The UK's AI summit aims to bring together AI experts, tech bosses, and world leaders to discuss the risks of AI and find ways to regulate the new technology. "It is far from certain whether the AI summit will have any lasting impact," Ekaterina Almasque, a general partner at European venture capital firm OpenOcean, which invests in AI, told Insider.
Mistral, a tiny AI startup that aims to be Europe's answer to OpenAI, is in discussions to raise a major round of funding that could push its valuation above $2 billion. Its cofounders are in talks with venture capital firm Andreessen Horowitz to raise further funds, seven sources familiar with proceedings told Insider. Mistral is set to raise around $400 million at a valuation of at least $2 billion, which could rise to as high as $2.5 billion, three sources said. The deal is not yet finalized and the round size, valuation figures, and participants could still change. Andreessen Horowitz, General Catalyst, Mistral, Abstract Ventures, and Bezos Expeditions did not respond to Insider's request for comment.
Andrew Ng, formerly of Google Brain, said Big Tech is exaggerating the risk of AI wiping out humans. Some of the biggest figures in artificial intelligence are publicly arguing over whether AI is really an extinction risk, after AI scientist Andrew Ng said such claims were a cynical play by Big Tech. Ng, a cofounder of Google Brain, suggested to The Australian Financial Review that Big Tech was seeking to inflate fears around AI for its own benefit. Meta's chief AI scientist Yann LeCun, also known as an AI godfather for his work alongside Geoffrey Hinton, sided with Ng.
Many are shrugging off the supposed existential risks of AI, labeling them a distraction. They argue big tech companies are using the fears to protect their own interests. The timing of the pushback, ahead of the UK's AI safety summit and following Biden's recent executive order on AI, is also significant. More experts are warning that governments' preoccupation with the existential risks of AI is taking priority over the more immediate threats. Merve Hickok, the president of the Center for AI and Digital Policy, raised similar concerns about the UK AI safety summit's emphasis on existential risk.
Now, frontier AI has become the latest buzzword as concerns grow that the emerging technology has capabilities that could endanger humanity. The debate comes to a head Wednesday, when British Prime Minister Rishi Sunak hosts a two-day summit focused on frontier AI. In a speech last week, Sunak said only governments — not AI companies — can keep people safe from the technology’s risks. Frontier AI is shorthand for the latest and most powerful systems that go right up to the edge of AI’s capabilities. That makes frontier AI systems “dangerous because they’re not perfectly knowledgeable,” Clune said.
We have to talk to everyone, including China, to understand the potential of AI technology, the Google DeepMind CEO says. DeepMind CEO Demis Hassabis and Google's SVP of Research, Technology & Society, James Manyika, discuss international cooperation on AI regulation and the UK as a hub for innovation for the technology.
LONDON, Oct 31 (Reuters) - Britain will host the world's first global artificial intelligence (AI) safety summit this week to examine the risks of the fast-growing technology and kickstart an international dialogue on regulation of it. The aim of the summit is to start a global conversation on the future regulation of AI. Currently there are no broad-based global regulations focusing on AI safety, although some governments have started drawing up their own rules. A recent Financial Times report said Sunak plans to launch a global advisory board for AI regulation, modeled on the Intergovernmental Panel on Climate Change (IPCC). When Sunak announced the summit in June, some questioned how well-equipped Britain was to lead a global initiative on AI regulation.
The boss of Google DeepMind pushed back on a claim from Meta's artificial intelligence chief alleging the company is pushing worries about AI's existential threats to humanity to control the narrative on how best to regulate the technology. "If your fearmongering campaigns succeed, they will *inevitably* result in what you and I would identify as a catastrophe: a small number of companies will control AI," LeCun said on X, the platform formerly known as Twitter, on Sunday. I also know that producing AI systems that are safe and under our control is possible. "Then there's sort of the misuse of AI by bad actors repurposing technology, general-purpose technology for bad ends that they were not intended for. "And then finally, I think about the more longer-term risk, which is technical AGI [artificial general intelligence] risk," Hassabis said.
Meta's Yann LeCun thinks tech bosses' bleak comments on AI risks could do more harm than good. Thanks to @RishiSunak & @vonderleyen for realizing that AI xrisk arguments from Turing, Hinton, Bengio, Russell, Altman, Hassabis & Amodei can't be refuted with snark and corporate lobbying alone. https://t.co/Zv1rvOA3Zz — Max Tegmark (@tegmark) October 29, 2023. LeCun says founder fretting is just lobbying: Since the launch of ChatGPT, AI's power players have become major public figures. The focus on hypothetical dangers also diverts attention away from the boring-but-important question of how AI development actually takes shape. For LeCun, keeping AI development closed is a real reason for alarm.
Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said. Some large tech companies didn't want to compete with open source, he added. A leading AI expert and Google Brain cofounder said Big Tech companies were stoking fears about the technology's risks to shut down competition. In May, AI experts and CEOs signed a statement from the Center for AI Safety that compared the risks posed by AI with nuclear war and pandemics. Any necessary AI regulation should be created thoughtfully, he added.
Sunak's speech came as the British government gears up to host the AI Safety Summit next week. Sunak announced that the U.K. will set up the world's first AI safety institute to evaluate and test new types of AI in order to understand the risks. At the AI Safety Summit next week, Sunak said he will propose to set up a "truly global expert panel nominated by the countries and organizations attending to publish a state of AI science report." The U.K. has some notable AI firms, such as Alphabet-owned DeepMind, as well as strong tech-geared universities. But there can be no serious strategy for AI without at least trying to engage all of the world's leading AI powers," Sunak said.
British Prime Minister Rishi Sunak leaves 10 Downing Street to attend Prime Minister's Questions at the Houses of Parliament in London, Britain, October 18, 2023. Sunak wants Britain to be a global leader in AI safety, carving out a role after Brexit between the competing economic blocs of the United States, China and the European Union in the rapidly growing technology. The UK government will also publish a report on "frontier" AI, the cutting-edge general-purpose models that the summit will focus on. The report will inform discussions about risks such as societal harms, misuse and loss of control, the government said. China is expected to attend, according to a Financial Times report, while European Commission Vice President Vera Jourova has received an invitation.
The same inevitable supply-and-demand dynamic is about to wash over us again with large language models and generative AI. AI models are trained on masses of data from the past. Humans are good at learning quickly from a small amount of data, while AI models need mountains of information to train on. Soon, human content creators will be vying for attention with content generated by AI models. 'Utility, value and signaling': Hartz, a venture capitalist who now chairs Eventbrite's board, says successful technologists will continue to spend heavily on human experiences.
Microsoft just shut down Project Airsim, its AI-based drone simulation software that was part of its vision for an "industrial metaverse," Insider has learned. Both projects were considered part of Microsoft's "industrial metaverse." Project Airsim was originally launched as an open-source project in 2017, though it later shifted focus to become a product for industrial customers. Microsoft kept Project Airsim around because it believed there were large prospective customers for the product, the person said. Gurdeep Pall, previously head of product incubations and business AI who at one point ran Project Bonsai and most recently ran Project Airsim, left last month after 33 years with the company.
Deepmind co-founder Mustafa Suleyman makes push for global panel to regulate AI safety. Mustafa Suleyman, Deepmind co-founder, joins 'Squawk Box' to discuss how it's possible to form an international committee to regulate artificial intelligence, how much of this endeavor will require the sharing of data, and much more.
However, one of tech's buzziest bros, Sam Altman, says he has "structures" — but not a bunker. Doomsday predictions like nuclear war, climate change, or a zombie apocalypse have ratcheted up the ultra-rich's obsession with luxury bunkers. However, one Silicon Valley tech bro joked that these bunkers may not be useful in the case of an AI apocalypse. "I have like structures, but I wouldn't say a bunker," Altman added without clarifying what these structures were.
Meta's chief AI scientist Yann LeCun said that superintelligent AI is unlikely to wipe out humanity. He told the Financial Times that current AI models are less intelligent than a cat. AI CEOs signed a letter in May warning that superintelligent AI could pose an "extinction risk." Fears that AI could wipe out the human race are "preposterous" and based more on science fiction than reality, Meta's chief AI scientist has said. However, LeCun told the Financial Times that many AI companies had been "consistently over-optimistic" about how close current generative models were to AGI, and that fears over AI extinction were overblown as a result.
LONDON, Oct 18 (Reuters) - Britain will host the world's first global artificial intelligence (AI) safety summit next month, aiming to carve out a role following Brexit as an arbiter between the United States, China, and the European Union in a key tech sector. The Nov. 1-2 summit will focus heavily on the existential threat some lawmakers, including Britain's Prime Minister Rishi Sunak, fear AI poses. Sunak, who wants the UK to become a hub for AI safety, has warned the technology could be used by criminals and terrorists to create weapons of mass destruction. Critics question why Britain has appointed itself the centre of AI safety. "We are now reflecting on potential EU participation," a spokesperson told Reuters.
A recent research paper revealed a new way to help AI models ingest way more data. Soon, you'll be able to put millions of words into context windows of AI models, researchers say. Bigger AI models can handle more, but only up to about 75,000 words. Massive context windows: This Ring Attention method means that we should be able to put millions of words into the context windows of AI models, not just tens of thousands. A chart in the "Ring Attention" research paper shows some of the results of these tests.
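The constraint the excerpt describes can be made concrete with a minimal sketch: a document longer than a model's context window has to be split into window-sized chunks before the model can ingest it. This is an illustration only, not code from the article; the whitespace tokenizer and the tiny `max_tokens` limit are simplified stand-ins, since real models use subword tokenizers and far larger windows.

```python
# Sketch: splitting a document into chunks that fit a context window.
# Tokenization here is naive whitespace splitting (real models use
# subword tokenizers); max_tokens is an illustrative made-up limit.

def chunk_for_context(text: str, max_tokens: int) -> list[list[str]]:
    """Split text into consecutive chunks of at most max_tokens tokens."""
    tokens = text.split()
    return [tokens[i:i + max_tokens] for i in range(0, len(tokens), max_tokens)]

doc = "one two three four five six seven"
chunks = chunk_for_context(doc, max_tokens=3)
print(len(chunks))  # 3 chunks: 3 + 3 + 1 tokens
```

A larger context window simply raises `max_tokens`, which is why methods like Ring Attention, which spread the attention computation over many devices, translate directly into fewer forced splits of the input.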
She also discussed how the company is rethinking the future of Google Assistant. There's a lot of pressure on Google right now. A key person in the middle of all this is Sissie Hsiao, Google's VP and general manager of Bard and Google Assistant. If it disappoints, it will embolden critics who say Google has fallen behind. Google Assistant was the answer, and in 2021 Google reshuffled its search team to put Hsiao in charge of its voice assistant.
Alphabet's AI lab, DeepMind, cut employee costs by 39% last year, according to a recent filing with a U.K. government agency. For the 2022 financial year, staff costs and other related expenses were 594.5 million pounds (nearly $731 million), down from 969.4 million pounds (nearly $1.2 billion) in 2021 — translating to an almost 39% reduction in employee costs, per the filing. Following DeepMind's employee cost cuts in 2022, Alphabet executives discussed plans to allocate resources to key revenue drivers, such as AI, on its first-quarter earnings call of 2023. "Beginning in the second quarter of 2023, the costs associated with teams and activities transferred from Google Research will move from Google Services to Google DeepMind within Alphabet's unallocated corporate costs," Pichai said during a spring earnings call. DeepMind's 2022 profit was about 60.9 million pounds (nearly $74.9 million), down from 102.4 million pounds (nearly $126 million) in 2021 — a decrease of more than 40%.
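As a quick check on the percentages reported above, the year-over-year changes can be recomputed directly from the figures quoted in the filing (a sketch using only the numbers in the excerpt):

```python
# Reported DeepMind figures from the U.K. filing, in millions of pounds.
staff_costs = {2021: 969.4, 2022: 594.5}
profit = {2021: 102.4, 2022: 60.9}

def pct_drop(old: float, new: float) -> float:
    """Percentage decrease from old to new."""
    return (old - new) / old * 100

cost_drop = pct_drop(staff_costs[2021], staff_costs[2022])
profit_drop = pct_drop(profit[2021], profit[2022])
print(f"Staff costs fell {cost_drop:.1f}%")   # ~38.7%, i.e. almost 39%
print(f"Profit fell {profit_drop:.1f}%")      # ~40.5%, i.e. more than 40%
```

Both results match the article's characterizations ("almost 39%" and "more than 40%").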
Ali Alkhatib, an AI-ethics researcher, says large AI systems should not be used for everything. Companies make grand claims about what their models can do, but these claims can cause significant harm. Researchers are spending more time critiquing artificial-intelligence systems for their grandiose claims and unacknowledged harms. Because inherently, what OpenAI is doing is sort of unreasonable, which is a challenging thing for them to acknowledge or face."
What the Nobel Prizes get wrong about science
By Katie Hunt, CNN | 2023-09-29 | 9 min read
Peter Brzezinski, the secretary of the committee for the Nobel chemistry prize, said there were no plans to change the rule. He said the Nobel Prize committees, at least for science prizes, are “innately conservative.”DiversityOther criticism leveled at the Nobel Prizes includes the lack of diversity among winners. Of course, these flaws and gaps only matter because the Nobels are far better known than other science prizes, Rees added. The Nobel Prize in physiology or medicine will be announced on Monday, followed by the physics prize on Tuesday and the Nobel Prize in chemistry on Wednesday. The Nobel Prize for literature and the Nobel Peace Prize will be announced on Thursday and Friday, respectively.