
Search results for: "Yoshua"


25 mentions found


Professor Yoshua Bengio, head of the Montreal Institute for Learning Algorithms, at the One Young World Summit in Montreal, Canada, on Friday, Sept. 20, 2024.

Famed computer scientist Yoshua Bengio, an artificial intelligence pioneer, has warned of the nascent technology's potential negative effects on society and called for more research to mitigate its risks. Machines could soon have most of the cognitive abilities of humans, he said; artificial general intelligence (AGI) is a type of AI technology that aims to equal or better human intellect. Such outcomes are possible within decades, he said. There are arguments to suggest that the way AI machines are currently being trained "would lead to systems that turn against humans," Bengio said. Companies developing AI must also be liable for their actions, according to the computer scientist.
Persons: Yoshua Bengio, Tania Bryer Organizations: One Young World Summit, University of Montreal, Montreal Institute for Learning Algorithms, CNBC, OpenAI Locations: Montreal, Canada, U.S., Rwanda, Switzerland
A bipartisan US congressional commission urges a "Manhattan Project" for AI to outpace China. Trump has previously called China the "primary threat" in the AI race. The Manhattan Project was a secret program led by the US government during World War II to develop the world's first atomic bombs. "We have to take the lead over China, China is the primary threat," he added. OpenAI also cited the Manhattan Project in its blueprint as one of the US's "iconic infrastructure projects that moved the country forward."
Persons: Donald Trump, Joe Biden, Yoshua Bengio, Max Tegmark, Elon Musk Organizations: OpenAI, US Treasury Department, Manhattan Project, Future of Life Institute, MIT, The Guardian, Business Insider Locations: China, US, Washington
Yoshua Bengio, a leading AI expert, told BI that a deceptive AI could be dangerous. Bengio said that stronger safety tests and regulatory oversight are needed for advanced AI models. OpenAI's new o1 model is better at scheming, and that makes the "godfather" of AI nervous. "In general, the ability to deceive is very dangerous, and we should have much stronger safety tests to evaluate that risk and its consequences in o1's case," Bengio wrote in the statement.
Persons: Yoshua Bengio Organizations: University of Montreal, OpenAI, Business Insider Locations: Canada
There's a battle in Silicon Valley over AI risks and safety, and it's escalating fast. Right to Warn: While the concerns around AI safety are nothing new, they're increasingly being amplified by those within AI companies. OpenAI did not immediately respond to a request for comment from Business Insider, made outside normal working hours. A spokesperson previously reiterated the company's commitment to safety, highlighting an "anonymous integrity hotline" for employees to voice their concerns and the company's safety and security committee.
Persons: Bengio, Geoffrey Hinton, Stuart Russell, Jacob Hilton, Sam Altman, Helen Toner, Daniel Kokotajlo Organizations: OpenAI, Google, Business Insider Locations: Silicon Valley
A group of current and former OpenAI employees published an open letter Tuesday describing concerns about the artificial intelligence industry's rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up. "AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this," the employees wrote. The letter also details the current and former employees' concerns about insufficient whistleblower protections for the AI industry, saying that without effective government oversight, employees are in a relatively unique position to hold companies accountable. "Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated." Four anonymous OpenAI employees and seven former ones, including Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler, signed the letter.
Persons: Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright, Daniel Ziegler, Ramana Kumar, Neel Nanda, Geoffrey Hinton, Yoshua Bengio, Stuart Russell Organizations: OpenAI, Google, Microsoft, Meta, Anthropic, CNBC
On Wednesday, the Association for Computing Machinery, the world’s largest society of computing professionals, announced that this year’s Turing Award will go to Avi Wigderson, an Israeli-born mathematician and theoretical computer scientist who specializes in randomness. Often called the Nobel Prize of computing, the Turing Award comes with a $1 million prize. The award is named for Alan Turing, the British mathematician who helped create the foundations for modern computing in the mid-20th century. Other recent winners include Ed Catmull and Pat Hanrahan, who helped create the computer-generated imagery, or C.G.I., that drives modern movies and television, and the A.I. researchers Geoffrey Hinton, Yann LeCun and Yoshua Bengio, who nurtured the techniques that gave rise to chatbots like ChatGPT.
Persons: Avi Wigderson, Alan Turing, Ed Catmull, Pat Hanrahan, Geoffrey Hinton, Yann LeCun, Yoshua Bengio Organizations: Association for Computing Machinery Locations: Israel, Britain
OpenAI and Meta are close to unveiling AI models that can reason and plan, the FT reported. The two companies are reportedly preparing to release more advanced AI models that would be able to help problem-solve and take on more complex tasks. Representatives for Meta and OpenAI did not immediately respond to a request for comment from Business Insider, made outside normal working hours. Getting AI models to reason and plan is an important step toward achieving artificial general intelligence (AGI), which both Meta and OpenAI have claimed to be aiming for. Elon Musk, a longtime AI skeptic, recently estimated that AI would outsmart humans within two years.
Persons: Brad Lightcap, Joelle Pineau, John Carmack, Bengio, Geoffrey Hinton, Elon Musk Organizations: OpenAI, Meta, Financial Times, Business Insider
“The museum gives an opportunity to works of art that, for whatever reason, at some point had been banned, attacked, censored, or canceled, because there are so many,” Rodrigo told The Associated Press. Five years later, Benet's idea became the Museum of Forbidden Art, which opened its doors in October. As more works come under attack, people like art critic and curator Gabriel Luciani say the exhibit is essential. “(But) it is true that most of the works on display are from the years 2010 to 2020.” Rodrigo said her museum hopes it won't see any attacks, because visitors should come prepared to be shocked.
Persons: Donald Trump, Robert Mapplethorpe, Pablo Picasso, Rosa Rodrigo, Tatxo Benet, Gabriel Luciani, Andres Serrano, Zoulikha Bouabdellah, Prophet Muhammad, Zoya Falkova, Goya, Klimt, Illma Gore, Chuck Close, Charo Corrales, Hernán Muñoz Organizations: Museum of Forbidden Art, Associated Press, Facebook, Charlie Hebdo, McDonald’s Locations: Barcelona, Spain, Europe, Hong Kong, Florida, Clichy, France, Paris, Kazakhstan, Kyrgyzstan, Los Angeles, London
However, overemphasizing the dangers of AI risks paralyzing debate at a pivotal moment. "I'm not scared of A.I.," LeCun told the magazine. While Hinton and Meta's chief AI scientist LeCun have butted heads, fellow collaborator and third AI godfather Yoshua Bengio has stressed that this unknown is the real issue.
Persons: Geoffrey Hinton, Yann LeCun, Yoshua Bengio, Joshua Rothman Organizations: Big Tech, Google, The New Yorker Locations: Canada
An AI godfather says we should all be worried about the concentration of power in the AI sector. Bengio said the control of powerful AI systems was a central question for democracy. The concentration of power in the AI arena is one of the main risks facing the industry, he says. Regulation, at least in its current form, will not be the boost for big tech companies that some industry experts have suggested it could be, he added.
Persons: Yoshua Bengio, Yann LeCun, Sam Altman, Dario Amodei Organizations: OpenAI, Anthropic Locations: Canada
British Prime Minister Rishi Sunak attends an in-conversation event with Tesla and SpaceX CEO Elon Musk in London, Britain, Thursday, Nov. 2, 2023. Risks around rapidly developing AI have been an increasingly high priority for policymakers since Microsoft (MSFT.O)-backed OpenAI released ChatGPT to the public last year. "It was fascinating that just as we announced our AI safety institute, the Americans announced theirs," said attendee Nigel Toon, CEO of British AI firm Graphcore. China’s vice minister of science and technology said the country was willing to work with all sides on AI governance. Yoshua Bengio, an AI pioneer appointed to lead a "state of the science" report commissioned as part of the Bletchley Declaration, told Reuters the risks of open-source AI were a high priority.
Persons: Rishi Sunak, Elon Musk, Kirsty Wigglesworth, Sam Altman, Kamala Harris, Ursula von der Leyen, Bruno Le Maire, Vera Jourova, Nigel Toon, Wu Zhaohui, Yoshua Bengio, Martin Coulter, Paul Sandle, Matt Scuffham, Louise Heavens Organizations: Tesla, SpaceX, Microsoft, OpenAI, Graphcore, European Commission, EU, Reuters, Thomson Reuters Locations: London, Britain, China, Bletchley, U.S., South Korea, France, United States
China's delegate to the meeting, Vice Minister of Science and Technology Wu Zhaohui, was present on Thursday, his ministry said on Friday. The Chinese technology ministry declined to say why China did not agree to the proposal, which was about AI model testing. British Prime Minister Rishi Sunak chaired Thursday's meeting that comprised "a small group of like-minded senior representatives from governments around the world", Britain said, including the U.S. vice president and the EC president. Some British lawmakers had criticised China's participation in the inaugural AI summit. Sunak told reporters: "Some said we shouldn't even invite China, others said we would never get an agreement with them.
Persons: Ursula von der Leyen, Kamala Harris, Rishi Sunak, Giorgia Meloni, Antonio Guterres, Yoshua Bengio, Brad Smith, Wu Zhaohui, Oliver Dowden, Paul Sandle, Brenda Goh, Alistair Smout, Cynthia Osterman Organizations: UN, Mila – Quebec AI Institute, Microsoft, Bloomberg, European Union, Thomson Reuters Locations: Shanghai, London, China, Britain, Beijing, Bletchley Park, England, United States
AI godfather Yoshua Bengio says the risks of AI should not be underplayed. His remarks come after Meta's Yann LeCun accused Bengio and AI founders of "fear-mongering." Claims by Meta's chief AI scientist, Yann LeCun, that AI won't wipe out humanity are dangerous and wrong, according to one of his fellow AI godfathers. "If your fear-mongering campaigns succeed, they will inevitably result in what you and I would identify as a catastrophe: a small number of companies will control AI," LeCun wrote. "Existential risk is one problem but the concentration of power, in my opinion, is the number two problem," he said.
Persons: Yoshua Bengio, Yann LeCun, Andrew Ng, Geoffrey Hinton Organizations: Meta, Bell Labs, Google
So-called frontier AI refers to the latest and most powerful systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. Tesla CEO Elon Musk is also scheduled to discuss AI with Sunak in a livestreamed conversation on Thursday night. One of Sunak’s major goals is to get delegates to agree on a first-ever communique about the nature of AI risks. However, in the same speech, he also urged against rushing to regulate AI technology, saying it needs to be fully understood first. A White House official gave details of Harris’s speech, speaking on condition of anonymity to discuss her remarks in advance.
Persons: Rishi Sunak, Kamala Harris, Elon Musk, Ursula von der Leyen, Yoshua Bengio, Joe Biden, Jill Lawless Organizations: Google, White House, Associated Press Locations: Bletchley, England, London, China
Andrew Ng, formerly of Google Brain, said Big Tech is exaggerating the risk of AI wiping out humans. Some of the biggest figures in artificial intelligence are publicly arguing whether AI is really an extinction risk, after AI scientist Andrew Ng said such claims were a cynical play by Big Tech. Ng, a cofounder of Google Brain, suggested to The Australian Financial Review that Big Tech was seeking to inflate fears around AI for its own benefit. Meta's chief AI scientist Yann LeCun, also known as an AI godfather for his work with Hinton, sided with Ng.
Persons: Andrew Ng, Sam Altman, Elon Musk, Demis Hassabis, Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Meredith Whittaker Organizations: OpenAI, Google, DeepMind, Big Tech, Australian Financial Review
Now, frontier AI has become the latest buzzword as concerns grow that the emerging technology has capabilities that could endanger humanity. The debate comes to a head Wednesday, when British Prime Minister Rishi Sunak hosts a two-day summit focused on frontier AI. In a speech last week, Sunak said only governments — not AI companies — can keep people safe from the technology’s risks. Frontier AI is shorthand for the latest and most powerful systems that go right up to the edge of AI’s capabilities. That makes frontier AI systems “dangerous because they’re not perfectly knowledgeable,” Clune said.
Persons: Rishi Sunak, Kamala Harris, Ursula von der Leyen, Alan Turing, Jeff Clune, Elon Musk, Sam Altman, Joe Biden, Geoffrey Hinton, Yoshua Bengio, Francine Bennett, Deb Raji, Dario Amodei, Jack Clark, Carsten Jung, Jill Lawless Organizations: University of British Columbia, European Union, Ada Lovelace Institute, University of California, Berkeley, Microsoft, DeepMind, Anthropic, Institute for Public Policy Research, Associated Press Locations: Bletchley, EU, Brussels, China, U.S., Beijing, London
The letter, issued a week before the international AI Safety Summit in London, lists measures that governments and companies should take to address AI risks. Currently there are no broad-based regulations focusing on AI safety, and the first piece of European Union legislation has yet to become law, as lawmakers have yet to agree on several issues. "It (investments in AI safety) needs to happen fast, because AI is progressing much faster than the precautions taken," he said. Since the launch of OpenAI's generative AI models, top academics and prominent CEOs such as Elon Musk have warned about the risks of AI, including calling for a six-month pause in developing powerful AI systems. "There are more regulations on sandwich shops than there are on AI companies."
Persons: Dado Ruvic, Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song, Yuval Noah Harari, Elon Musk, Stuart Russell, Supantha Mukherjee, Miral Organizations: Reuters, European Union, Thomson Reuters Locations: Stockholm, London
DeepMind's Mustafa Suleyman recently talked about setting boundaries on AI with the MIT Technology Review. "You wouldn't want to let your little AI go off and update its own code without you having oversight," he told the MIT Technology Review. Last year, Suleyman cofounded the AI startup Inflection AI, whose chatbot Pi is designed to be a neutral listener and provide emotional support. Suleyman told the MIT Technology Review that though Pi is not "as spicy" as other chatbots, it is "unbelievably controllable." And while Suleyman told the MIT Technology Review he's "optimistic" that AI can be effectively regulated, he doesn't seem to be worried about a singular doomsday event.
Persons: Mustafa Suleyman, Sam Altman, Elon Musk, Mark Zuckerberg, Demis Hassabis, Satya Nadella, Geoffrey Hinton, Yoshua Bengio Organizations: DeepMind, Inflection AI, MIT Technology Review, Future of Life Institute Locations: Silicon Valley, Washington
Over the course of three conversations this summer, Acemoglu told me he's worried we're currently hurtling down a road that will end in catastrophe. "There's a fair likelihood that if we don't do a course correction, we're going to have a truly two-tier system," Acemoglu told me. "I was following the canon of economic models, and in all of these models, technological change is the main mover of GDP per capita and wages," Acemoglu told me. In later empirical work, Acemoglu and Restrepo showed that that was exactly what had happened. "I realize this is a very, very tall order," Acemoglu told me.
Persons: Katya Klinova, Daron Acemoglu, Simon Johnson, James Robinson, David Autor, Pascual Restrepo, John Maynard Keynes, Simon Simard, Lord Byron, Eric Van Den Brulle, Gita Gopinath, Paul Romer, Asu Ozdaglar, Mark Madeo, Erik Brynjolfsson, Yoshua Bengio, Yuval Noah Harari, Andrew Yang, Elon Musk, Aki Ito Organizations: Getty, MIT, London School of Economics, International Monetary Fund, Microsoft, Greenpeace, Big Tech Locations: Silicon Valley, America, Boston, Istanbul, Turkey, England, United States, Britain, Australia
Geoffrey Hinton, a professor emeritus at the University of Toronto, is known as a "godfather of AI." A trailblazer in the AI field, Hinton recently quit his job at Google and said he regrets the role he played in developing the technology. Hinton worked at Google for over a decade before quitting this past spring so that he could speak more freely about the rapid development of AI technology, he said. "It is hard to see how you can prevent the bad actors from using it for bad things," Hinton said previously.
Persons: Geoffrey Hinton, Noah Berger, Yann LeCun, Bengio Organizations: University of Toronto, Google, Associated Press
The new organization, known as the Frontier Model Forum, was announced Wednesday by Google, Microsoft, OpenAI and Anthropic. The companies said the forum’s mission would be to develop best practices for AI safety, promote research into AI risks, and publicly share information with governments and civil society. Wednesday’s announcement reflects how AI developers are coalescing around voluntary guardrails for the technology ahead of an expected push this fall by US and European Union lawmakers to craft binding legislation for the industry. “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Microsoft president Brad Smith. “In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology,” Amodei said in his written testimony.
Persons: Biden, Brad Smith, Dario Amodei, Bengio, Chuck Schumer Organizations: CNN, Frontier Model Forum, Google, Microsoft, OpenAI, Anthropic, European Union, Amazon, Meta
WASHINGTON, July 18 (Reuters) - Artificial intelligence startup Anthropic's CEO Dario Amodei will testify on July 25 at a U.S. Senate hearing on artificial intelligence as lawmakers consider potential regulations for the fast-growing technology, the Senate panel scheduling the hearing said on Tuesday. "It’s our obligation to address AI’s potential threats and risks before they become real," said Democratic Senator Richard Blumenthal, the subcommittee chair. "We are on the verge of a new era, with major consequences for workers, consumer privacy, and our society." President Joe Biden met with the CEOs of top artificial intelligence companies in May, including Amodei, and made clear they must ensure their products are safe before they are deployed. The report would help push federal financial regulators to adopt and adapt to AI changes disrupting the industry, Schumer's office said.
Persons: Dario Amodei, Yoshua Bengio, Stuart Russell, Richard Blumenthal, Josh Hawley, Joe Biden, Chuck Schumer, David Shepardson, Leslie Adler, Chris Reese Organizations: U.S. Senate, Anthropic, Google, Thomson Reuters
"High level, we want this to become something like your personal AI friend," said developer Div Garg, whose company MultiOn is beta-testing an AI agent. The race towards increasingly autonomous AI agents has been supercharged by the March release of GPT-4 by developer OpenAI, a powerful upgrade of the model behind ChatGPT - the chatbot that became a sensation when released last November. GPT-4 facilitates the type of strategic and adaptable thinking required to navigate the unpredictable real world, said Vivian Cheng, an investor at venture capital firm CRV who has a focus on AI agents. OpenAI itself is very interested in AI agent technology, according to four people briefed on its plans. There are at least 100 serious projects working to commercialize agents, said Matt Schlicht, who writes a newsletter on AI.
Persons: Kanjun Qiu, Reid Hoffman, Mustafa Suleyman, Vivian Cheng, Aravind Srinivas, Yoshua Bengio, Satya Nadella, Edward Grefenstette, Jason Franklin, Hesam Motlagh, Matt Schlicht, Anna Tong, Jeffrey Dastin, Kenneth Li Organizations: Microsoft, Google, U.S. Federal Trade Commission, Reuters, OpenAI, CRV, Financial Times, Amazon, Apple, WVV Capital, Google Ventures, Thomson Reuters Locations: Silicon Valley, San Francisco, Palo Alto
July 12 (Reuters) - Chip designer Nvidia (NVDA.O) will invest $50 million to speed up training of Recursion's (RXRX.O) artificial intelligence models for drug discovery, the companies said on Wednesday, sending the biotech firm's shares surging about 62%. Recursion, whose advisers include AI pioneer Yoshua Bengio, will use its biological and chemical datasets exceeding 23,000 terabytes to train AI models on Nvidia's cloud platform. Nvidia, seen as a big winner of the boom in artificial intelligence, could then license those models to biotech firms through BioNeMo, a generative AI cloud service for drug discovery that it rolled out earlier this year. The investment comes as Recursion strengthened its AI focus in May by snapping up two companies in the AI-driven drug discovery space for $87.5 million. The Salt Lake City, Utah-based company's current partners include Bayer (BAYGn.DE) and Roche (ROG.S).
Persons: Yoshua Bengio, Chavi Mehta, Stephen Nellis, Mariam Sunny, Shilpi Majumdar, Sriraj Organizations: Nvidia, Recursion, Bayer, Roche, Mubadala, Baillie Gifford & Co, Thomson Reuters Locations: Salt Lake City, Utah, Abu Dhabi, Bengaluru, San Francisco
STOCKHOLM, June 30 (Reuters) - The proposed EU Artificial Intelligence legislation would jeopardise Europe's competitiveness and technological sovereignty, according to an open letter signed by more than 160 executives at companies ranging from Renault (RENA.PA) to Meta (META.O). EU lawmakers agreed to a set of draft rules this month where systems like ChatGPT would have to disclose AI-generated content, help distinguish so-called deep-fake images from real ones and ensure safeguards against illegal content. Since ChatGPT became popular, several open letters have been issued calling for regulation of AI and raising the "risk of extinction from AI." The third, Yann LeCun, who works at Meta, signed Friday's letter challenging the EU regulations. The letter warned that under the proposed EU rules technologies like generative AI would become heavily regulated and companies developing such systems would face high compliance costs and disproportionate liability risks.
Persons: Elon Musk, Sam Altman, Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Supantha Mukherjee, Jamie Freed Organizations: Renault, EU, Meta, OpenAI, Thomson Reuters Locations: Stockholm, France, Europe
Total: 25