
Search results for: "Geoffrey"


25 mentions found


Google's parent company Alphabet has opposed a shareholder proposal seeking increased transparency around its algorithms, even as CEO Sundar Pichai emphasized the potential of new generative AI and added that safety is essential. The proposal argued that accountability and transparency in artificial intelligence are needed if the technology is to remain safe for society. In its opposition, Google said it already provides meaningful disclosures about its algorithms, including through websites that explain, for instance, how YouTube's algorithms sort content.
Persons: Sundar Pichai, Geoffrey Hinton, Timnit Gebru Organizations: Google, Alphabet, Trillium Asset Management, New Zealand Royal Commission, Mozilla Foundation, New York University, SEC, ProPublica Locations: Christchurch, Saudi Arabia
Yoshua Bengio is one of three AI "godfathers" who won the 2018 Turing Award for breakthroughs in deep learning. He told the BBC that he would've prioritized safety if he'd known how quickly AI would progress. A professor known as one of three AI "godfathers" told the BBC that he felt "lost" over his life's work. "We also need the people who are close to these systems to have a kind of certification," Bengio told the broadcaster. On Tuesday, he signed a statement issued by the Center for AI Safety, which warns the technology poses an "extinction" risk comparable to nuclear war.
Persons: Yoshua Bengio, Geoffrey Hinton, Yann LeCun, Sam Altman Organizations: BBC, Center for AI Safety, Google, New York Times
After a couple of years of reduced air travel in the wake of the pandemic, travelers returned to the air in 2022 to significant airline chaos: canceled flights, lost luggage and overstretched staff. And interestingly, while Air New Zealand came out on top for 2023, Thomas said the results were close among the top five. Abu Dhabi's Etihad Airways is number 3 on AirlineRatings.com's 2023 list. Singapore Airlines, named top in the Best First Class award and the Excellence in Long Haul Travel - Southeast Asia award, took fifth place overall.
Persons: Geoffrey Thomas, Marcos del Mazo, Johannes P. Christo Organizations: CNN, CNN Travel, AirlineRatings.com, Air New Zealand, Qatar Airways, Singapore Airlines, Anadolu Agency, Etihad Airways, Civil Aviation Authority, Auckland International Airport, Qantas, Virgin Australia, Cathay Pacific Airways, Emirates, Lufthansa, SAS, TAP Portugal, All Nippon Airways, Delta Air Lines, Air Canada, British Airways, JAL, Vietnam Airlines, Turkish Airlines, KLM, Alaska Airlines, United Airlines, Korean Air Locations: Australia, North Asia, Asia, New Zealand, Auckland
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," more than 350 signatories wrote in a letter published by the nonprofit Center for AI Safety (CAIS). As well as Altman, they included the CEOs of AI firms DeepMind and Anthropic, and executives from Microsoft (MSFT.O) and Google (GOOGL.O). Elon Musk and a group of AI experts and industry executives were among the first to cite potential risks to society, in an open letter in April. AI pioneer Hinton earlier told Reuters that AI could pose a "more urgent" threat to humanity than climate change. Last week OpenAI CEO Sam Altman called the draft EU AI Act - the first effort to create a regulation for AI - over-regulation and threatened to leave Europe.
The Center for AI Safety's statement compares the risks posed by AI with nuclear war and pandemics. AI experts including Geoffrey Hinton and Yoshua Bengio have also supported the statement. The CEOs of three leading AI companies have signed a statement issued by the Center for AI Safety (CAIS) warning of the "extinction" risk posed by artificial intelligence. Per CAIS, OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei have all signed the public statement, which compared the risks posed by AI with nuclear war and pandemics. AI experts including Geoffrey Hinton and Yoshua Bengio are among the statement's signatories, along with executives at Microsoft and Google.
As the world begins to experiment with the power of artificial intelligence, a debate has begun about how to contain its risks. One of the sharpest and most urgent warnings has come from a man who helped invent the technology. Cade Metz, a technology correspondent for The New York Times, speaks to Geoffrey Hinton, whom many consider to be the godfather of A.I.
Washington CNN —Dozens of AI industry leaders, academics and even some celebrities on Tuesday called for reducing the risk of global annihilation due to artificial intelligence, arguing in a brief statement that the threat of an AI extinction event should be a top global priority. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement published by the Center for AI Safety. The statement highlights wide-ranging concerns about the ultimate danger of unchecked artificial intelligence. Still, the flood of hype and investment into the AI industry has led to calls for regulation at the outset of the AI age, before any major mishaps occur. The statement follows the viral success of OpenAI’s ChatGPT, which has helped heighten an arms race in the tech industry over artificial intelligence.
A group of industry leaders is planning to warn on Tuesday that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars. "Mitigating the risk of extinction from A.I. should be a global priority," the statement reads. The open letter has been signed by more than 350 executives, researchers and engineers working in A.I., including the chief executives of three leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic. Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered "godfathers" of the modern A.I. movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta's A.I. research efforts, had not signed as of Tuesday.)
South Korean carrier Asiana Airlines said it will no longer sell certain exit row seats on all of its Airbus A321-200 jets after a passenger opened an emergency door during landing on Friday, Al Jazeera reported. Window exit row seats 26A and 31A, on the left-hand side of the cabin, will no longer be bookable, according to Asiana. The carrier said the move was a precaution.
President Joe Biden nominated telecom attorney Anna Gomez to the Federal Communications Commission, his second attempt to fill an empty seat on the typically five-member panel, which has been in a 2-2 deadlock for his entire presidency thus far. Gomez previously worked at the FCC in several positions over 12 years, the White House said. Jonathan Spalter, president and CEO of USTelecom, a trade group that represents broadband providers like AT&T and Verizon, congratulated Gomez in a statement. Free Press, a nonprofit advocacy group that supports net neutrality, said Gomez's nomination was long overdue. Free Press co-CEO Jessica González called Gomez "eminently qualified" for the role and praised the nomination of a Latinx candidate to the position.
61% of American adults say AI poses an existential threat to humanity, per a Reuters/Ipsos poll. The poll's findings come as tech leaders sound the alarm over the potential risks of AI. Americans are worried about the risks artificial intelligence could pose to society, and Trump supporters and Christians are the most concerned. 70% of respondents who voted for Donald Trump in the 2020 presidential election believe AI poses risks to civilization. Geoffrey Hinton, the "Godfather of AI" who recently quit Google to raise awareness around AI's risks, believes AI poses a "more urgent" threat to humanity than climate change.
LONDON, May 16 (Reuters) - Britons face the biggest tax-raising drive since the start of former prime minister Margaret Thatcher's term of office in the coming years as more people are pushed into paying the top rate of income tax, a leading think tank said on Tuesday. Britons pay income tax at a rate of 20% on income over 12,570 pounds ($15,865) a year and 40% on income over 50,270 pounds with a higher rate beyond that. Isaac Delestre, an IFS research economist, said inflation's recent surge was pushing up nominal earnings of many workers and dragging them into the higher tax rate bracket. The Conservative Party pledged not to increase income tax rates in its 2019 election manifesto. "A third of the expected record fall in household incomes this year is likely to be a result of this tax rise," the IFS said.
The European parliament has voted by a large majority in favor of adopting a wide-ranging proposed law on AI, the AI Act. The proposed law is the first on AI by a major regulator, according to the AI Act website. It aims to regulate the advanced technology and protect Europeans from potential risks.
Warren Buffett compared AI to the creation of the atom bomb at Berkshire Hathaway's annual meeting. Buffett has long spoken about his fears around nuclear war keeping him up at night. Warren Buffett compared artificial intelligence to the creation of the atom bomb, becoming the latest high-profile business figure to express alarm about the rapid advancement of the technology. "We did invent for very, very good reason, the atom bomb. And, World War Two, it was enormously important that we did so.
Berkshire Hathaway Chairman Warren Buffett walks through the exhibit hall as shareholders gather to hear from the billionaire investor at Berkshire Hathaway Inc's annual shareholder meeting in Omaha, Nebraska, U.S., May 4, 2019. Tens of thousands of people are flocking to Omaha, Nebraska this weekend for the extravaganza that Buffett, 92, calls "Woodstock for Capitalists." "Charlie is 99 and Warren turns 93 on Aug. 30," Lountzis added, "and you just don't know how many more you're going to have." Buffett and Munger are due to answer five hours of shareholder questions at the meeting. "We believe in constructive engagement and dialogue, whether it's Warren Buffett or another company," Frerichs said in an interview.
Artificial intelligence could present a more urgent danger to the world than climate change, Geoffrey Hinton said. The "Godfather of AI" recently quit Google so he could speak openly about the threat posed by the tech. "I wouldn't like to say, 'You shouldn't worry about climate change,'" he said. "With climate change, it's very easy to recommend what you should do: you just stop burning carbon."
On Monday, researcher Geoffrey Hinton, known as "The Godfather of AI," said he'd left his post at Google, citing concerns over potential threats from AI development. Google CEO Sundar Pichai talked last month about AI's "black box" problem, where even its developers don't always understand how the technology actually works. Among the other concerns: AI systems, left unchecked, can spread disinformation, allow companies to hoard users' personal data without their knowledge, exhibit discriminatory bias or cede countless human jobs to machines. In the "Blueprint for an AI Bill of Rights," computer scientist Suresh Venkatasubramanian helped lay out proposals for "ethical guardrails" that could safely govern and regulate the AI industry. With them in place, most people would barely notice the difference while using AI systems, he says.
LONDON, May 5 (Reuters) - Artificial intelligence could pose a "more urgent" threat to humanity than climate change, AI pioneer Geoffrey Hinton told Reuters in an interview on Friday. "I wouldn't like to devalue climate change. I wouldn't like to say, 'You shouldn't worry about climate change.' He added: "With climate change, it's very easy to recommend what you should do: you just stop burning carbon. Signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and fellow AI pioneers Yoshua Bengio and Stuart Russell.
Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future. On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative A.I., the technology that powers popular chatbots like ChatGPT. A part of him, he said, now regrets his life’s work.
Elon Musk has purchased 10,000 GPUs to build an AI model at Twitter, Insider reported. A VC founder said he suspects Musk just wants to catch up with the competition, per Bloomberg. Elon Musk's calls to slow down AI development could just be a ploy to help him catch up, the venture capitalist Vinod Khosla told Bloomberg. "I 80% suspect his call to slow down AI development was so he could catch up." In 2015, Musk cofounded OpenAI, the company behind ChatGPT, which is widely considered to be leading the new boom in AI technology.
Google told staff it will be more selective about the research it publishes. Recently, information like code and data has become accessible on a much more "need-to-know" basis, according to a Google AI staffer. LaMDA, a chatbot technology that forms the basis of Bard, was originally built as a 20% project within Google Brain. (The company has historically allowed employees to spend 20% of their working days exploring side projects that might turn into full-fledged Google products.) Google's AI division has faced other setbacks.
Apple and Google released a proposal with software fixes for unwanted tracking by Bluetooth devices. The system would alert a user's phone if it detects a nearby tracker that has been separated from its owner's device. Apple and Google are combining forces to stop the use of Bluetooth tracking devices like AirTags for stalking people without their consent. The two tech giants released a proposal Tuesday outlining standards to ensure products like the Apple AirTag and similar tech gadgets aren't misused for stalking and unwanted tracking. "These draft standards to allow detection of unwanted trackers is a significant step forward in the work to increase safety and privacy."
Artificial intelligence pioneer Geoffrey Hinton speaks at the Thomson Reuters Financial and Risk Summit in Toronto, December 4, 2017. REUTERS/Mark Blinch
May 2 (Reuters) - A pioneer of artificial intelligence said he quit Google (GOOGL.O) to speak freely about the technology's dangers, after realising that computers could become smarter than people far sooner than he and other experts had expected. "I left so that I could talk about the dangers of AI without considering how this impacts Google," Geoffrey Hinton wrote on Twitter. "The idea that this stuff could actually get smarter than people — a few people believed that," he told the New York Times. In his tweet, Hinton said Google itself had "acted very responsibly" and denied that he had quit so that he could criticise his former employer.
“I’m just a scientist who suddenly realized that these things are getting smarter than us,” Hinton told CNN’s Jake Tapper in an interview on Tuesday. Apple co-founder Steve Wozniak, who was one of the signatories on the letter, appeared on “CNN This Morning” on Tuesday, echoing concerns about its potential to spread misinformation. “Tricking is going to be a lot easier for those who want to trick you,” Wozniak told CNN. Hinton, for his part, told CNN he did not sign the petition. “It’s not clear to me that we can solve this problem,” Hinton told Tapper.
Total: 25