
Search results for: "Turing"


25 mentions found


Some AI experts say we're barreling headfirst toward the destruction of humanity. Current AI systems are not sentient but they are created to be humanlike. "We need to look at the lack of purpose that people would feel at the loss of jobs en masse," he told Insider. AI bias: If AI systems are used to help make wider societal decisions, systematic bias can become a serious risk, experts told Insider. There have already been several examples of bias in generative AI systems, including early versions of ChatGPT.
Persons: Sam Altman, OpenAI, David Krueger, Alan Turing, Janis Wong, Aaron Mok, Krueger, Abhishek Gupta, Arvind Krishna, Gupta, Wong Organizations: Center for AI Safety, Cambridge University, Montreal AI, IBM Locations: Montreal
CNN — Stanley Tucci weighed in on the debate about straight actors portraying gay characters in a new interview with BBC Radio 4’s Desert Island Discs on Saturday. Tucci, who is married to literary agent Felicity Blunt, said he believes that as an actor, “you’re supposed to play different people.” “You just are. Tucci has portrayed gay characters in 2006’s “The Devil Wears Prada” and in the 2020 film “Supernova” alongside Oscar-winner Colin Firth. “Because often, it’s not done the right way.” For decades, Hollywood has cast actors in heterosexual relationships for gay roles. Conversations around inclusivity in casting transgender actors in transgender roles have also become pertinent, and casting cisgender actors for those roles has recently fallen out of popular practice.
Persons: Stanley Tucci, Tucci, Felicity Blunt, Colin Firth, Heath, Jake Gyllenhaal, Cate Blanchett, Benedict Cumberbatch, Alan Turing, Gyllenhaal, Blanchett, Cumberbatch, James Corden, Guy Lodge, Firth, Felicity Organizations: CNN, BBC Radio, Hollywood, Awards, GLAAD, Guardian Locations: Hollywood, Felicity Blunt's
DeepMind's co-founder believes the Turing test is an outdated method to test AI intelligence. In his book, he suggests a new idea in which AI chatbots have to turn $100,000 into $1 million. A co-founder of Google's AI research lab DeepMind thinks AI chatbots like ChatGPT should be tested on their ability to turn $100,000 into $1 million in a "modern Turing test" that measures human-like intelligence. The Turing test was introduced by Alan Turing in the 1950s to examine whether a machine has human-level intelligence. During the test, human evaluators determine whether they're speaking to a human or a machine.
Persons: DeepMind's, Mustafa Suleyman, Suleyman, Turing, Alan Turing, OpenAI's ChatGPT, ChatGPT Organizations: Power, Bloomberg, ACI, McKinsey
An AI takeover: One of the most commonly cited risks is that AI will get out of its creator's control. Current AI systems are not sentient but they are created to be humanlike. "We need to look at the lack of purpose that people would feel at the loss of jobs en masse," he told Insider. AI bias: If AI systems are used to help make wider societal decisions, systematic bias can become a serious risk, experts told Insider. There have already been several examples of bias in generative AI systems, including early versions of ChatGPT.
Persons: Sam Altman, OpenAI, David Krueger, Janis Wong, Alan Turing, Aaron Mok, Krueger, Abhishek Gupta, Arvind Krishna, Gupta, Wong Organizations: Center for AI Safety, Cambridge University, Alan Turing Institute, Montreal AI, IBM Locations: Montreal
Meta's chief AI scientist said AI trained on large language models is still not very smart. Yann LeCun said AI can't learn how to load a dishwasher or reason like a child could, CNBC reported. AI like ChatGPT that's been trained on large language models isn't even as smart as dogs or cats, Meta's chief AI scientist said. He said that AI tools trained on large language models are limited because they're only coached on text. "What it tells you we are missing something really big … to reach not just human level intelligence, but even dog intelligence," LeCun added.
Persons: Yann LeCun, LeCun Organizations: CNBC, Viva Tech, BBC News Locations: Paris
Yann LeCun says concerns that AI could pose a threat to humanity are "preposterously ridiculous." He was part of a team that won the Turing Award in 2018 for breakthroughs in machine learning. An AI expert has said concerns that the technology could pose a threat to humanity are "preposterously ridiculous." Marc Andreessen warned against "full-blown moral panic about AI" and said that people have a "moral obligation" to encourage its development. He added that concerns about AI were overstated and that if people realized the technology wasn't safe, they shouldn't build it, per BBC News.
Persons: Yann LeCun, Yoshua Bengio, Geoffrey Hinton, LeCun, Bing, DALL, Bengio, Elon Musk, Steve Wozniak, Bill Gates, Marc Andreessen Organizations: BBC News, BBC, Apple, Center, AI Safety, Yale's, Leadership Institute, CNN Locations: Paris
Factbox: Governments race to regulate AI tools
  + stars: | 2023-06-13 | by ( ) www.reuters.com   time to read: +6 min
CHINA * Planning regulations: The Chinese government will seek to initiate AI regulations in its country, billionaire Elon Musk said on June 5 after meeting with officials during his recent trip to China. ITALY * Investigating possible breaches: Italy's data protection authority plans to review other artificial intelligence platforms and hire AI experts, a top official said in May. ChatGPT became available again to users in Italy in April after being temporarily banned over concerns by the national data protection authority in March. SPAIN * Investigating possible breaches: Spain's data protection agency said in April it was launching a preliminary investigation into potential data breaches by ChatGPT. The Biden administration earlier in April said it was seeking public comments on potential accountability measures for AI systems.
Persons: Alan Turing, Elon Musk, Margrethe Vestager, Vestager, CNIL, Dado Ruvic, Ziv Katzir, Israel, ChatGPT, OpenAI, Antonio Guterres, Guterres, Michael Bennet, Biden, Alessandro Parodi, Amir Orusov, Jason Neely, Kirsten Donovan, Milla Nissi Organizations: Microsoft, Authority, Reuters, EU, Key, European Consumer Organisation, Seven, REUTERS, Israel Innovation Authority, UNITED, International Atomic Energy Agency, United Nations, U.S . Federal Trade Commission's, Thomson Locations: AUSTRALIA, BRITAIN, Britain, CHINA, China, Beijing, U.S, FRANCE, Italy, Hiroshima, Japan, IRELAND, ISRAEL, Israel, ITALY, JAPAN, SPAIN, Gdansk
There's a chance that AI development could get "catastrophic," Yoshua Bengio told The New York Times. "Today's systems are not anywhere close to posing an existential risk," but they could in the future, he said. "Today's systems are not anywhere close to posing an existential risk," Yoshua Bengio, a professor at the Université de Montréal, told the publication. Marc Andreessen spoke even more strongly in a blog post last week in which he warned against "full-blown moral panic about AI" and described "AI risk doomers" as a "cult." "AI doesn't want, it doesn't have goals, it doesn't want to kill you, because it's not alive," he wrote.
Persons: Yoshua Bengio, Montréal, Bengio, Anthony Aguirre, Microsoft Bing, Aguirre, Elon Musk, Steve Wozniak, Anthropic, Eric Schmidt, Bill Gates, Marc Andreessen, Andreessen Organizations: New York Times, Morning, University of California, Times, Microsoft, Future of Life Institute, Bengio, Apple, Center for AI Safety Locations: Santa Cruz
OpenAI CEO Sam Altman tweeted that he finally watched the movie "Ex Machina." Sam Altman, the CEO of OpenAI, spent Wednesday night watching the 2015 movie "Ex Machina" for the first time. The movie details the story of a tech billionaire, Nathan, who creates an AI-powered humanoid robot named Ava. Ava the humanoid from "Ex Machina" ultimately merges into human society. But in a tweet Thursday morning, Altman said that while he thought "Ex Machina" was a "pretty good movie," he still wasn't sure why "everyone" told him to watch it.
Persons: Sam Altman, Altman, Nathan, Ava, Caleb, Alan Turing, OpenAI's ChatGPT, ChatGPT, OpenAI Organizations: Stanford, Philosophy
Queer people in history: Figures to know
  + stars: | 2023-06-01 | by ( Leah Asmelash | ) edition.cnn.com   time to read: +7 min
To commemorate the month, CNN is highlighting five major LGBTQ elders – some who have passed on, and some who haven’t – and their achievements. From a drag king who fought discrimination on the streets of New York to a famous mathematician who stood up to adversity despite legal limitations, here are five LGBTQ figures to know. Miss Major Griffin-Gracy: Miss Major in the film "Major," a documentary about her life and campaigns. But a year after Stonewall, Miss Major was arrested for robbery, landing her with a five-year prison sentence. Decades after her release, Miss Major spent time as the executive director of the Transgender Gender Variant Intersex Justice Project.
Persons: Bayard Rustin, Martin Luther King Jr, Patrick A. Burns, Rustin, King, Sen. Strom Thurmond, Gavin Newsom, Larry Kramer, Catherine McGann, Kramer, Anthony Fauci, Miss Major Griffin-Gracy, Major, Marsha P. Johnson, Miss Major, Mama, Michelle V, Stormé DeLarverie, DeLarverie, White, Alan Turing, Alan Turing’s, Turing Organizations: CNN, New York Times Co, Getty, Southern Christian Leadership Conference, California Gov, Village Voice, AIDS, Centers for Disease Control, ACT UP, AIDS Coalition, National Institute of Allergy, Miss, Stonewall, New York Times, Physical Laboratory Locations: New York, India, Montgomery, Washington, Chicago, Greenwich, New Orleans, England
Yoshua Bengio is one of three AI "godfathers" who won the Turing Award for breakthroughs in 2018. He told the BBC that he would've prioritized safety if he'd known how quickly AI would progress. A professor known as one of three AI "godfathers" told the BBC that he felt "lost" over his life's work. "We also need the people who are close to these systems to have a kind of certification," Bengio told the broadcaster. On Tuesday, he signed a statement issued by the Center for AI Safety, which warns the technology poses an "extinction" risk comparable to nuclear war.
Persons: Yoshua Bengio, Geoffrey Hinton, Yann LeCun, ChatGPT, Sam Altman, Bengio, Altman, Hinton, LeCun Organizations: BBC, Morning, Center for AI Safety, Google, New York Times Locations: Hinton
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," more than 350 signatories wrote in a letter published by the nonprofit Center for AI Safety (CAIS). As well as Altman, they included the CEOs of AI firms DeepMind and Anthropic, and executives from Microsoft (MSFT.O) and Google (GOOGL.O). Elon Musk and a group of AI experts and industry executives were the first ones to cite potential risks to society in April. AI pioneer Hinton earlier told Reuters that AI could pose a "more urgent" threat to humanity than climate change. Last week OpenAI CEO Sam Altman referred to the EU AI Act - the first effort to create a regulation for AI - as over-regulation and threatened to leave Europe.
A group of industry leaders is planning to warn on Tuesday that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars. “Mitigating the risk of extinction from A.I. The open letter has been signed by more than 350 executives, researchers and engineers working in A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic. movement, signed the statement, as did other prominent researchers in the field (The third Turing Award winner, Yann LeCun, who leads Meta’s A.I.
Martin Shkreli is out of jail and earning $2,500 a month working as a consultant at a law firm. Shkreli is also living in Queens with his sister, per a report by the US Probation Office. A year after getting out of jail, Martin Shkreli — also known as "Pharma Bro" — is earning $2,500 a month as a consultant for a law firm, and living with his sister in Queens, New York. However, Shkreli was released from jail early in May 2022, after which he was transferred to a halfway house, where he lived until September. Upon getting out of jail, he posted a selfie of himself on Facebook, saying: "Getting out of real prison is easier than getting out of Twitter prison."
The European Union is at the forefront of drafting new AI rules that could set the global benchmark to address privacy and safety concerns that have arisen with the rapid advances in the generative AI technology behind OpenAI's ChatGPT. "If it's about protecting personal data, they apply data protection laws, if it's a threat to safety of people, there are regulations that have not been specifically defined for AI, but they are still applicable." Data protection authorities in France and Spain also launched in April probes into OpenAI's compliance with privacy laws. 'THINKING CREATIVELY': French data regulator CNIL has started "thinking creatively" about how existing laws might apply to AI, according to Bertrand Pailhes, its technology lead. "We are looking at the full range of effects, although our focus remains on data protection and privacy," he told Reuters.
Vyera said its bankruptcy was the result of declining profits, increased competition for generic drugs, and litigation alleging that Vyera suppressed competition for its most valuable drug, Daraprim. Daraprim is a life-saving anti-parasitic medicine whose price Shkreli infamously raised by more than 4,000%, and for which he worked to choke off generic competition after the company acquired the drug in 2015. Vyera filed a Chapter 11 plan in court on Wednesday, laying out its intent to repay creditors through asset sales. Vyera said that recently-sold vouchers have fetched prices between $95 million and $120 million in sales that have occurred since 2020. Vyera listed Duane Morris as its largest unsecured creditor in its bankruptcy filing, with a $2.1 million asserted debt.
LONDON, May 5 (Reuters) - Artificial intelligence could pose a "more urgent" threat to humanity than climate change, AI pioneer Geoffrey Hinton told Reuters in an interview on Friday. "I wouldn't like to devalue climate change. I wouldn't like to say, 'You shouldn't worry about climate change.' He added: "With climate change, it's very easy to recommend what you should do: you just stop burning carbon. Signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and fellow AI pioneers Yoshua Bengio and Stuart Russell.
For the past decade Hinton worked part-time at Google, splitting his time between the company's Silicon Valley headquarters and Toronto. "I thought it was 30 to 50 years or even longer away," Hinton told the Times, in a story published Monday. "Obviously, I no longer think that." "Biological agents cannot do this. So collections of identical digital agents can acquire hugely more knowledge than any individual biological agent.
https://www.wsj.com/articles/proximity-and-the-life-and-death-s-of-alan-turing-reviews-a-weekend-of-world-premieres-3fe73739
March 21 (Reuters) - Computer networking pioneer Bob Metcalfe on Wednesday won the industry's most prestigious prize for the invention of the Ethernet, a technology that half a century after its creation remains the foundation of the internet. The Association for Computing Machinery credited Metcalfe, 76, with the Ethernet's "invention, standardization, and commercialization" in conferring its 2022 Turing Award, known as the Nobel prize of computing. The Ethernet got its start when Metcalfe, who later went on to co-found computing network equipment maker 3Com, was asked to hook up the office printer. Metcalfe said previous generations of AI "died on the vine because of a lack of data." And the brain teaches us that connecting them is where it's at," Metcalfe said.
Nowadays, the promise of social media as a unifying force for good has all but collapsed, and Zuckerberg is slashing thousands of jobs after his company's rocky pivot to the metaverse. Much like social media in 2012, the AI industry is standing on the precipice of immense change. And as Altman and his cohort charge ahead, AI could fundamentally reshape our economy and lives even more than social media. If social media helped expose the worst impulses of humanity on a mass scale, generative AI could be a turbocharger that accelerates the spread of our faults. Social media amplified society's issues, as Wooldridge puts it.
The AI industry is vast, encompassing not only buzzy chatbots and conversational search engines but also things like self-driving vehicles. Big technology companies have laid off tens of thousands of workers in recent months, but workers with AI skills are still in demand. Highly educated data scientists and core AI specialists with technical know-how are still in high demand despite recent layoffs, Forshaw said. Natural language processing is really hot right now, but data science and data analytics skills are still in high demand." Still, Kimmel, who recently launched a bootcamp for AI startups, suggests that it's best to jump in and learn alongside early builders.
It's a profit-making move designed to leverage our very human tendency to see human traits in nonhuman things. Look, I don't think we need to treat chatbots with respect just because they ask us to. Making chatbots seem as if they're human isn't just incidental. So the real issue involving the current incarnation of chatbots isn't whether we treat them as people — it's how we decide to treat them as property. The robots don't care."
"Tesla vision AI could really crush these Google 'not a bot' tests lol," Elon Musk tweeted Wednesday. He was referring to reCAPTCHA tests, which ask you to identify objects to prove you're a human. "Tesla vision AI could really crush these Google 'not a bot' tests lol," Musk tweeted on Wednesday evening. Musk tweeted in response: "This will greatly increase public awareness that a Tesla can drive itself (supervised for now)." Last December, CNN reported that Tesla's self-driving mode caused an eight-car pileup in California, injuring nine people.
Even the skeptics of the latest hype cycle recounted during the Town Hall numerous examples of how AI is already embedded in more efficient business processes. The arrival of ChatGPT and generative AI only a few years after the hype cycle over the metaverse has attracted both the AI bulls and bears as tech pursues its next big thing. Another executive who works with lawyers and accountants said the sentiment right now is that AI is not going to replace lawyers, but "lawyers using AI are gonna replace lawyers." And what generative AI does, is really help you crunch, take your machine learning to, you know, the 'nth' level of the finite level. That may also place AI in the crosshairs of ESG investors, who want to make sure that the ethics part is part of the mission of companies using it.
Total: 25