Search results for: "Hinton"


25 mentions found


Google has opposed a shareholder's call for more transparency around its algorithms. CEO Sundar Pichai has emphasized the potential of new generative AI and added that safety is essential. Google's parent company Alphabet opposed a shareholder proposal that sought increased transparency surrounding its algorithms. The proposal argued that accountability and transparency in artificial intelligence are needed if the technology is to remain safe for society. In its opposition, Google said it already provides meaningful disclosures surrounding its algorithms, including through websites that provide overviews of how YouTube's algorithms sort content, for instance.
Persons: Sundar Pichai, Geoffrey Hinton, Timnit Gebru Organizations: Google, Trillium Asset Management, New Zealand Royal Commission, Mozilla Foundation, New York University, SEC, ProPublica Locations: Christchurch, Saudi Arabia
Yoshua Bengio is one of three AI "godfathers" who shared the 2018 Turing Award for breakthroughs in deep learning. He told the BBC that he would've prioritized safety if he'd known how quickly AI would progress. A professor known as one of three AI "godfathers" told the BBC that he felt "lost" over his life's work. "We also need the people who are close to these systems to have a kind of certification," Bengio told the broadcaster. On Tuesday, he signed a statement issued by the Center for AI Safety, which warns the technology poses an "extinction" risk comparable to nuclear war.
Persons: Yoshua Bengio, Geoffrey Hinton, Yann LeCun, Sam Altman Organizations: BBC, Center for AI Safety, Google, New York Times
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," more than 350 signatories wrote in a letter published by the nonprofit Center for AI Safety (CAIS). As well as Altman, they included the CEOs of AI firms DeepMind and Anthropic, and executives from Microsoft (MSFT.O) and Google (GOOGL.O). Elon Musk and a group of AI experts and industry executives were the first ones to cite potential risks to society in April. AI pioneer Hinton earlier told Reuters that AI could pose a "more urgent" threat to humanity than climate change. Last week OpenAI CEO Sam Altman referred to EU AI - the first efforts to create a regulation for AI - as over-regulation and threatened to leave Europe.
Washington CNN —Dozens of AI industry leaders, academics and even some celebrities on Tuesday called for reducing the risk of global annihilation due to artificial intelligence, arguing in a brief statement that the threat of an AI extinction event should be a top global priority. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement published by the Center for AI Safety. The statement highlights wide-ranging concerns about the ultimate danger of unchecked artificial intelligence. Still, the flood of hype and investment into the AI industry has led to calls for regulation at the outset of the AI age, before any major mishaps occur. The statement follows the viral success of OpenAI’s ChatGPT, which has helped heighten an arms race in the tech industry over artificial intelligence.
The Center for AI Safety's statement compares the risks posed by AI with nuclear war and pandemics. AI experts including Geoffrey Hinton and Yoshua Bengio have also supported the statement. The CEOs of three leading AI companies have signed a statement issued by the Center for AI Safety (CAIS) warning of the "extinction" risk posed by artificial intelligence. Per CAIS, OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei have all signed the public statement, which compared the risks posed by AI with nuclear war and pandemics. AI experts including Geoffrey Hinton and Yoshua Bengio are among the statement's signatories, along with executives at Microsoft and Google.
As the world begins to experiment with the power of artificial intelligence, a debate has begun about how to contain its risks. One of the sharpest and most urgent warnings has come from a man who helped invent the technology. Cade Metz, a technology correspondent for The New York Times, speaks to Geoffrey Hinton, whom many consider to be the godfather of A.I.
A group of industry leaders is planning to warn on Tuesday that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars. “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads. The open letter has been signed by more than 350 executives, researchers and engineers working in A.I., including the chief executives of three leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic. Geoffrey Hinton and Yoshua Bengio, two of the three Turing Award winners often considered godfathers of the modern A.I. movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta’s A.I. research, had not signed.)
61% of American adults say AI poses an existential threat to humanity, per a Reuters and Ipsos poll. The poll's findings come as tech leaders sound the alarm over the potential risks of AI. Americans are worried about the risks artificial intelligence could pose to society — and Trump supporters and Christians are the most concerned. 70% of respondents who voted for Donald Trump in the 2020 presidential election believe AI poses risks to civilization. Geoffrey Hinton, the "Godfather of AI" who recently quit Google to raise awareness around AI's risks, believes AI poses a "more urgent" threat to humanity than climate change.
NEW YORK, May 15 (Reuters) - At least seven top Barclays Plc (BARC.L) bankers have resigned to join UBS Group AG (UBSG.S) in the United States in the last few days, people familiar with the matter said. The moves add to a trio of Barclays investment bankers that UBS announced internally it hired last month. Many Credit Suisse bankers are based in the United States. UBS has hired Laurence Braham, Richard Hardegree, Richard Casavechia, Ozzie Ramos, Jason Williams, Neil Meyer and Ken Tittle from Barclays, the sources said. These bankers follow Barclays ex-colleagues Marco Valla, Jeff Hinton and Kurt Anthony, whose moves to UBS were announced in April.
The moves add to a trio of Barclays U.S. investment bankers that UBS announced it hired last month. Many Credit Suisse bankers are based in the United States. Sources told Reuters last month that UBS plans to retain only a small number of Credit Suisse senior bankers with strong client relationships. At Barclays, Braham was global chair of investment banking for technology, while Hardegree served as vice chair and head of technology M&A. Williams, Meyer and Tittle were managing directors in the investment banking group.
The European parliament has voted in favor of adopting an AI Act by a large majority. The proposed law is the first law on AI by a major regulator, according to the Act's website. The law aims to regulate the advanced technology and protect Europeans from potential risks. The European parliament has voted by a large majority in favor of adopting a wide-ranging proposed law on AI. The proposed law is the first relating to AI by a major regulator, according to the AI Act website.
It added that about 50,000 boepd of production has been temporarily curtailed since the evening of May 5. Crescent Point Energy Corp (CPG.TO): Crescent Point said about 45,000 boepd of production in the Kaybob Duvernay region has been temporarily shut in, with a plan to restart production once safe and permitted to do so. NuVista Energy Ltd (NVA.TO): The company said it has temporarily shut in and depressured all operations proximal to the ongoing fires in the Grande Prairie region. Vermilion Energy Inc (VET.TO): Vermilion Energy said it had temporarily shut in about 30,000 boepd of production and that it was assessing the risk to its operations. Cenovus Energy Inc (CVE.TO): Cenovus said it has shut in production and brought plants down in some areas of its conventional business.
Warren Buffett compared AI to the creation of the atom bomb at Berkshire Hathaway's annual meeting. Buffett has long spoken about his fears around nuclear war keeping him up at night. Warren Buffett compared artificial intelligence to the creation of the atom bomb, becoming the latest high-profile business figure to express alarm about the rapid advancement of the technology. "We did invent for very, very good reason, the atom bomb. And, World War Two, it was enormously important that we did so.
On Monday, researcher Geoffrey Hinton, known as "The Godfather of AI," said he'd left his post at Google, citing concerns over potential threats from AI development. Google CEO Sundar Pichai talked last month about AI's "black box" problem, where even its developers don't always understand how the technology actually works. Among the other concerns: AI systems, left unchecked, can spread disinformation, allow companies to hoard users' personal data without their knowledge, exhibit discriminatory bias or cede countless human jobs to machines. In the "Blueprint for an AI Bill of Rights," Venkatasubramanian helped lay out proposals for "ethical guardrails" that could safely govern and regulate the AI industry. With them in place, most people would barely notice the difference while using AI systems, he says.
Artificial intelligence could present a more urgent danger to the world than climate change, Geoffrey Hinton said. The "Godfather of AI" recently quit Google so he could speak openly about the threat posed by the tech. The "Godfather of AI" who recently quit Google to raise awareness about the dangers of artificial intelligence has said the threat the tech poses to the world could be more urgent than climate change. "I wouldn't like to say, 'You shouldn't worry about climate change,'" Hinton said. "With climate change, it's very easy to recommend what you should do: you just stop burning carbon."
Elon Musk has purchased 10,000 GPUs to build an AI model at Twitter, Insider reported. A VC founder said he suspects Musk just wants to catch up with the competition, per Bloomberg. Elon Musk's calls to slow down AI development could just be a ploy to help him catch up, the tech entrepreneur and venture capitalist Vinod Khosla told Bloomberg. "I 80% suspect his call to slow down AI development was so he could catch up," Khosla said. In 2015, Musk cofounded OpenAI, the company behind ChatGPT, which is largely considered to be leading the new boom in AI technology.
LONDON, May 5 (Reuters) - Artificial intelligence could pose a "more urgent" threat to humanity than climate change, AI pioneer Geoffrey Hinton told Reuters in an interview on Friday. "I wouldn't like to devalue climate change. I wouldn't like to say, 'You shouldn't worry about climate change.'" He added: "With climate change, it's very easy to recommend what you should do: you just stop burning carbon." Signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and fellow AI pioneers Yoshua Bengio and Stuart Russell.
Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future. On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative A.I., the technology that powers popular chatbots like ChatGPT. A part of him, he said, now regrets his life’s work.
Google told staff it will be more selective about the research it publishes. Recently, information like code and data has become accessible on much more of a "need-to-know" basis, according to a Google AI staffer. LaMDA, a chatbot technology that forms the basis of Bard, was originally built as a 20% project within Google Brain. (The company has historically allowed employees to spend 20% of their working days exploring side projects that might turn into full-fledged Google products.) Google's AI division has faced other setbacks.
Shemia Fagan’s salary as Oregon’s secretary of state was $77,000 in 2022, according to state records. Photo: Matthew Hinton/Associated Press. Oregon’s secretary of state is resigning after it was revealed that she worked on the side as a consultant for a cannabis company, earning $10,000 a month, while her office was overseeing an audit of the state’s marijuana regulator. Democrat Shemia Fagan announced her resignation Tuesday, as fallout and political pressure over the consulting job grew. Gov. Tina Kotek last week asked the Oregon Government Ethics Commission to investigate Ms. Fagan’s dealings. Kotek also asked the state’s Justice Department to look into a recent audit of the cannabis industry overseen by the secretary of state’s office.
Artificial intelligence pioneer Geoffrey Hinton speaks at the Thomson Reuters Financial and Risk Summit in Toronto, December 4, 2017. REUTERS/Mark Blinch. May 2 (Reuters) - A pioneer of artificial intelligence said he quit Google (GOOGL.O) to speak freely about the technology's dangers, after realising that computers could become smarter than people far sooner than he and other experts had expected. "I left so that I could talk about the dangers of AI without considering how this impacts Google," Geoffrey Hinton wrote on Twitter. “The idea that this stuff could actually get smarter than people — a few people believed that,” he told the New York Times. In his tweet, Hinton said Google itself had "acted very responsibly" and denied that he had quit so that he could criticise his former employer.
Vice President Kamala Harris will meet with the chief executives of Google, Microsoft, OpenAI and Anthropic Thursday to discuss the responsible development of artificial intelligence, the White House confirmed to CNBC Tuesday. Harris will address the need for safeguards that can mitigate AI's potential risks and emphasize the importance of ethical and trustworthy innovation, the White House said. Generative AI has exploded into public consciousness after OpenAI released its viral new chatbot called ChatGPT late last year. In the months since, Microsoft has been integrating OpenAI's generative technology across many of its products as part of its multi-year, multi-billion-dollar investment in the company. While many experts are optimistic about the potential of generative AI, the technology has also inspired questions and concerns from regulators and tech industry giants.
Elon Musk says Geoffrey Hinton "knows what he's talking about" after he voiced concerns about AI. Musk recently signed an open letter calling for a six-month pause on advanced AI development. Elon Musk has weighed in on comments about the dangers of advanced AI by Geoffrey Hinton, who is nicknamed the "Godfather of AI." Musk recently put his name to an open letter that called for a six-month pause on advanced AI development. Representatives for Elon Musk did not immediately respond to Insider's request for comment, made outside normal working hours.
Tuesday's selloff in Chegg shares exposed some investors to the dark side of artificial intelligence, igniting concerns about how the latest technology craze may be putting some companies' revenue sources in danger. While Chegg may be the first shoe to drop, it's certainly not the last company set to showcase some of the risks posed by AI. Elsewhere, Deepwater Asset Management's Gene Munster sees potential risks ahead to some consulting companies known to outsource work for other businesses. Companies operating off of seat-based models, such as human resources companies, may face headwinds from declining headcount, but could benefit long term from optimizing AI, he added. To be sure, even the largest companies dominating the space and poised to prosper from AI face risks ahead.
Microsoft's chief scientific officer says he disagrees with people calling for a pause on AI development, including Elon Musk. Eric Horvitz told Fortune it's "reasonable" for people to be concerned but that we need to "jump in, as opposed to pause." Microsoft's chief scientific officer has addressed an open letter signed by Elon Musk and thousands of others calling for a pause on AI development. "We need to really just invest more in understanding and guiding and even regulating this technology—jump in, as opposed to pause," Horvitz said. Musk cofounded OpenAI in 2015 alongside Sam Altman and others but left the board in 2018.
Total: 25