
Search results for: "Turing"


25 mentions found


Geoffrey Hinton voiced some alarming concerns about AI in a "60 Minutes" interview. The AI "godfather" says the tech is learning better than humans and has the potential to do harm. All that AI is missing now, Hinton said, is the self-awareness to know how to use its intelligence to manipulate humans: "They'll know how to do it." There is also the concern of AI being used to replace people in jobs, generate fake news, and introduce unintended bias that goes undetected. He recently expressed regret for his role in advancing AI, but said on "60 Minutes" he has no regrets about the good it can do.
Persons: Geoffrey Hinton, Turing, Machiavelli Organizations: Service, Google
Men are getting rich from AI. Women, not so much.
  + stars: | 2023-10-07 | by ( Tom Carter | ) www.businessinsider.com   time to read: +5 min
New research finds that female-led AI companies are missing out on the global rush to invest in AI. AI startups founded by women in the UK raised six times less funding than those founded by men over the past decade. This is happening even as the number of female-founded AI companies being launched rises. "One of our biggest priorities is ensuring AI models are fair and unbiased," she said. "While AI is heavily male-dominated and we still see a big gender disparity at AI events in San Francisco, we're hopeful this will change as more women become involved in AI companies."
Persons: Erin Young, Rebecca Gorman, Angela Hoover Organizations: Service, Alan Turing Institute, Data Science, Andi, Amazon Locations: California, San Francisco
The "neural network planner" that Shroff and others were working on took a different approach. Faced with a situation, the neural network chooses a path based on what humans have done in thousands of similar situations. By early 2023, the neural network planner project had analyzed 10 million clips of video collected from the cars of Tesla customers. By mid-April 2023, it was time for Musk to try the new neural network planner. "Oh, wow," he said, "even my human neural network failed here, but the car did the right thing."
Persons: Elon Musk, Mozart, Mark Zuckerberg, Dhaval Shroff, OpenAI, Musk, Shroff, Alan Turing, Uber, James Bond, Ashok Elluswamy Organizations: Tesla, Computing Machinery, Intelligence, Palo Locations: Palo Alto, Buffalo , New York
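The snippet above gives no implementation details, but the idea of a planner choosing a path "based on what humans have done in thousands of similar situations" can be caricatured as a nearest-neighbour lookup over logged human decisions. A minimal sketch, with entirely invented feature vectors and action names:

```python
import math

# Hypothetical logged "clips": each maps a toy situation feature vector
# to the action a human driver took in that situation.
human_clips = [
    ((0.9, 0.1), "steer_left"),   # obstacle on the right
    ((0.1, 0.9), "steer_right"),  # obstacle on the left
    ((0.5, 0.5), "go_straight"),  # clear road
]

def choose_action(situation):
    """Return the action humans took in the most similar logged situation."""
    _, action = min(human_clips,
                    key=lambda clip: math.dist(clip[0], situation))
    return action
```

A real neural-network planner generalises from millions of such examples rather than looking them up, but the selection principle, acting as humans did in similar situations, is the same.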
The U.K. government will host the world's first artificial intelligence safety summit at Bletchley Park, home of the World War Two codebreakers who in 1941 helped break the secret Enigma code used by the German government to direct ground-to-air operations on the Eastern front. The U.K. tech sector has been flagging of late, following drops in venture capital investment. The U.S. is by far the world leader when it comes to AI, with massive firms ploughing resources into the technology.
Persons: Rishi Sunak, Alan Turing, Sunak Organizations: Microsoft, Google, Baidu, OpenAI Locations: Bletchley, Bletchley Park, Britain, China, The U.S., EU, Beijing
A piece of paper sits on the Colossus machine at Bletchley Park in Milton Keynes, Britain, September 15, 2016. REUTERS/Darren Staples/File Photo LONDON, Aug 24 (Reuters) - Britain will host a global summit on artificial intelligence at the old home of Britain's World War Two codebreakers in November, as Prime Minister Rishi Sunak pitches Britain as a global leader in guarding the safety of the fast-developing technology. The summit will take place on Nov. 1 and 2 at Bletchley Park, the site in Milton Keynes where mathematician Alan Turing cracked Nazi Germany's Enigma code, the government said on Thursday. "The UK has long been home to the transformative technologies of the future, so there is no better place to host the first ever global AI safety summit than at Bletchley Park," Sunak said. Governments around the world are wrestling with how to control the potential negative consequences of AI without stifling innovation.
Persons: Darren Staples, Rishi Sunak, Alan Turing, Sunak, Joe Biden, Matt Clifford, Jonathan Black, Andrew MacAskill, Tomasz Janowski Organizations: REUTERS, Bletchley, Tech, European Union, Thomson Locations: Milton Keynes, Britain, Washington, Canada, France, Germany, Italy, Japan, United States, Hiroshima
Geoffrey Hinton, a professor emeritus at the University of Toronto, is known as a "godfather of AI." A trailblazer in the field, Hinton worked at Google for over a decade but quit this past spring so he could speak more freely about the rapid development of AI technology, he said. After quitting, he said that a part of him regrets the role he played in advancing the technology. "It is hard to see how you can prevent the bad actors from using it for bad things," Hinton said previously.
Persons: Geoffrey Hinton, Noah Berger, Yann LeCun, Bengio, Hinton, He's Organizations: University of Toronto, Google, Associated Press
Luckily, board games have come a long, long way from social-deduction games like Werewolf, the Settlers of Catan fad and the cringey-in-retrospect obsession with Cards Against Humanity. With the help of a half-dozen experts in the field of tabletop gaming, we’ve pulled together a list of some of the best board games currently available, many of which you probably haven’t run into. “I love the whimsical theme and design,” one expert said; the experts told us it’s best played with a full suite of four players. Escape room at home (players: 1 to 6; time to play: variable): The Exit games are escape rooms you can play at home. Dwayne Shearill of BlackBoardGaming cites Forbidden Island as one of the first board games he got his wife to play; the pair now host a YouTube channel about board games together.
Persons: Tim Barribeau, Eric Yerko, Mandi Hutchinson, Suzanne Sheldon, Ada Weyland, Richard Garfield, Dwayne Shearill, Mike Mignola, Steve Gianaca, Beneeta Kaur Organizations: Sass Games, Catan, Love Games, YouTube, Games, Amazon, Monopoly Locations: Pacific Northwest
ChatGPT-creator OpenAI released its new Code Interpreter tool to paying Plus subscribers on July 7. A Wharton professor said: 'Things that took me weeks to master in my Ph.D. were completed in seconds' by the tool. Even without Code Interpreter, ChatGPT already had some code-writing abilities.
Persons: Wharton, ChatGPT, Ethan Mollick, Mollick, OpenAI, Insider's Aki Ito, Sarah Silverman —, Sam Altman, Peter Tennant Organizations: University of Leeds, Turing
Some AI experts say we're barreling headfirst toward the destruction of humanity. Current AI systems are not sentient but they are created to be humanlike. "We need to look at the lack of purpose that people would feel at the loss of jobs en masse," he told Insider. AI biasIf AI systems are used to help make wider societal decisions, systematic bias can become a serious risk, experts told Insider. There have already been several examples of bias in generative AI systems, including early versions of ChatGPT.
Persons: Sam Altman, OpenAI, we're, David Krueger, it's, I'm, Alan Turing, Janis Wong, Aaron Mok, Krueger, Abhishek Gupta, Arvind Krishna, Gupta, Wong Organizations: Center, AI Safety, Cambridge University, Montreal AI, IBM Locations: Montreal
CNN — Stanley Tucci weighed in on the debate about straight actors portraying gay characters in a new interview with BBC Radio 4’s Desert Island Discs on Saturday. Tucci, who is married to literary agent Felicity Blunt, said he believes that as an actor, “you’re supposed to play different people. You just are.” Tucci has portrayed gay characters in 2006’s “The Devil Wears Prada” and in the 2020 film “Supernova” alongside Oscar-winner Colin Firth. “Because often, it’s not done the right way.” For decades, Hollywood has cast actors in heterosexual relationships for gay roles. Conversations around inclusivity in casting transgender actors in transgender roles have also become pertinent, and casting cisgender actors for those roles has recently fallen out of popular practice.
Persons: Stanley Tucci, Felicity Blunt, Colin Firth, Heath, Jake Gyllenhaal, Cate Blanchett, Benedict Cumberbatch, Alan Turing, James Corden, Guy Lodge Organizations: CNN, BBC Radio, Hollywood, Awards, GLAAD, Guardian Locations: Hollywood
DeepMind's co-founder believes the Turing test is an outdated method to test AI intelligence. In his book, he suggests a new idea in which AI chatbots have to turn $100,000 into $1 million. A co-founder of Google's AI research lab DeepMind thinks AI chatbots like ChatGPT should be tested on their ability to turn $100,000 into $1 million in a "modern Turing test" that measures human-like intelligence. The Turing test was introduced by Alan Turing in the 1950s to examine whether a machine has human-level intelligence. During the test, human evaluators determine whether they're speaking to a human or a machine.
Persons: DeepMind's, Mustafa Suleyman, Suleyman, Turing, Alan Turing, OpenAI's ChatGPT, ChatGPT Organizations: Power, Bloomberg, ACI, McKinsey
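The classic test described above is an interrogation protocol: a human judge converses with an unseen respondent and guesses whether it is a human or a machine. Purely as illustration, the loop can be sketched as follows, with an invented keyword-based stand-in for the human judge (a real test has no such automatic rule):

```python
import random

def interrogator_judgement(transcript):
    """Toy stand-in for the human judge: flags tell-tale phrasing.
    In the real test this decision is made by a human evaluator."""
    return "machine" if "as an ai" in transcript.lower() else "human"

def run_imitation_game(respondent_reply, rounds=3):
    """Ask a few questions, collect answers, record a verdict per round."""
    questions = ["What did you have for breakfast?",
                 "Tell me a childhood memory.",
                 "What does rain smell like?"]
    verdicts = []
    for q in random.sample(questions, k=rounds):
        verdicts.append(interrogator_judgement(respondent_reply(q)))
    return verdicts
```

Suleyman's proposed "modern" variant replaces this conversational judgement with an economic task (turning $100,000 into $1 million), which the sketch above does not attempt to model.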
An AI takeoverOne of the most commonly cited risks is that AI will get out of its creator's control. Current AI systems are not sentient but they are created to be humanlike. "We need to look at the lack of purpose that people would feel at the loss of jobs en masse," he told Insider. AI biasIf AI systems are used to help make wider societal decisions, systematic bias can become a serious risk, experts told Insider. There have already been several examples of bias in generative AI systems, including early versions of ChatGPT.
Persons: Sam Altman, OpenAI, we're, David Krueger, it's, I'm, Janis Wong, Alan Turing, Aaron Mok, Krueger, Abhishek Gupta, Arvind Krishna, Gupta, Wong Organizations: Center, AI Safety, Cambridge University, Alan Turing Institute, Montreal AI, IBM Locations: Montreal
Meta's chief AI scientist said AI trained on large language models is still not very smart. Yann LeCun said AI can't learn how to load a dishwasher or reason like a child could, CNBC reported. AI like ChatGPT that's been trained on large language models isn't even as smart as dogs or cats, Meta's chief AI scientist said. He said that AI tools trained on large language models are limited because they're only coached on text. "What it tells you we are missing something really big … to reach not just human level intelligence, but even dog intelligence," LeCun added.
Persons: Yann LeCun Organizations: CNBC, Viva Tech, BBC News Locations: Paris
Yann LeCun says concerns that AI could pose a threat to humanity are "preposterously ridiculous." He was part of a team that won the Turing Award in 2018 for breakthroughs in machine learning. An AI expert has said concerns that the technology could pose a threat to humanity are "preposterously ridiculous." Marc Andreessen warned against "full-blown moral panic about AI" and said that people have a "moral obligation" to encourage its development. He added that concerns about AI were overstated and if people realized the technology wasn't safe they shouldn't build it, per BBC News.
Persons: Yann LeCun, Yoshua Bengio, Geoffrey Hinton, LeCun, Bing, DALL, Bengio, Elon Musk, Steve Wozniak, Bill Gates, Marc Andreessen Organizations: BBC News, BBC, Apple, Center, AI Safety, Yale's, Leadership Institute, CNN Locations: Paris
Factbox: Governments race to regulate AI tools
  + stars: | 2023-06-13 | by ( ) www.reuters.com   time to read: +6 min
CHINA * Planning regulations: The Chinese government will seek to initiate AI regulations in its country, billionaire Elon Musk said on June 5 after meeting with officials during his recent trip to China.
ITALY * Investigating possible breaches: Italy's data protection authority plans to review other artificial intelligence platforms and hire AI experts, a top official said in May. ChatGPT became available again to users in Italy in April after being temporarily banned in March over concerns raised by the national data protection authority.
SPAIN * Investigating possible breaches: Spain's data protection agency said in April it was launching a preliminary investigation into potential data breaches by ChatGPT. The Biden administration said earlier in April that it was seeking public comments on potential accountability measures for AI systems.
Persons: Alan Turing, Elon Musk, Margrethe Vestager, Vestager, CNIL, Dado Ruvic, Ziv Katzir, Israel, ChatGPT, OpenAI, Antonio Guterres, Guterres, Michael Bennet, Biden, Alessandro Parodi, Amir Orusov, Jason Neely, Kirsten Donovan, Milla Nissi Organizations: Microsoft, Authority, Reuters, EU, Key, European Consumer Organisation, Seven, REUTERS, Israel Innovation Authority, UNITED, International Atomic Energy Agency, United Nations, U.S . Federal Trade Commission's, Thomson Locations: AUSTRALIA, BRITAIN, Britain, CHINA, China, Beijing, U.S, FRANCE, Italy, Hiroshima, Japan, IRELAND, ISRAEL, Israel, ITALY, JAPAN, SPAIN, Gdansk
There's a chance that AI development could get "catastrophic," Yoshua Bengio told The New York Times. "Today's systems are not anywhere close to posing an existential risk," but they could in the future, he said. "Today's systems are not anywhere close to posing an existential risk," Yoshua Bengio, a professor at the Université de Montréal, told the publication. Marc Andreessen spoke even more strongly in a blog post last week in which he warned against "full-blown moral panic about AI" and described "AI risk doomers" as a "cult." "AI doesn't want, it doesn't have goals, it doesn't want to kill you, because it's not alive," he wrote.
Persons: There's, Yoshua Bengio, there's, Montréal, Bengio, Anthony Aguirre, Microsoft Bing, It's, Aguirre, Elon Musk, Steve Wozniak, Anthropic, Eric Schmidt, Bill Gates, Marc Andreessen, it's, Andreessen Organizations: New York Times, Morning, University of California, Times, Microsoft, Life Institute, Bengio, Apple, Center, AI Safety Locations: Santa Cruz
Queer people in history: Figures to know
  + stars: | 2023-06-01 | by ( Leah Asmelash | ) edition.cnn.com   time to read: +7 min
To commemorate the month, CNN is highlighting five major LGBTQ elders – some who have passed on, and some who haven’t – and their achievements. From a drag king who fought discrimination on the streets of New York to a famous mathematician who stood up to adversity despite legal limitations, here are five LGBTQ figures to know. Miss Major Griffin-Gracy: Miss Major appears in the film "Major," a documentary about her life and campaigns. But a year after Stonewall, Miss Major was arrested for robbery, landing her with a five-year prison sentence. Decades after her release, Miss Major spent time as the executive director of the Transgender Gender Variant Intersex Justice Project.
Persons: Bayard Rustin, Martin Luther King Jr, Patrick A, Burns, Rustin wasn’t, Rustin, King, Sen, Strom Thurmond, Gavin Newsom, Larry Kramer Larry Kramer, Catherine McGann, Larry Kramer, , , Kramer, Anthony Fauci, Miss Major Griffin, Major, Marsha P, Johnson, Miss Major, Mama, Michelle V, Stormé DeLarverie, DeLarverie, White, “ That’s, Alan Turing, Alan Turing’s, Turing, it’s Organizations: CNN, New York Times Co, Getty, Southern Christian Leadership Conference, California Gov, Village Voice, AIDS, Centers for Disease Control, ACT UP, AIDS Coalition, National Institute of Allergy, Miss, Stonewall, New York Times, Physical Laboratory Locations: New York, India, Montgomery, Washington, Chicago, Greenwich, New Orleans, England
Sam Altman, the CEO of OpenAI, spent Wednesday night watching the 2015 movie "Ex Machina" for the first time. The movie tells the story of a tech billionaire, Nathan, who creates an AI-powered humanoid robot named Ava. Ava ultimately merges into human society. But in a tweet Thursday morning, Altman said that while he thought "Ex Machina" was a "pretty good movie," he still wasn't sure why "everyone" told him to watch it.
Persons: Sam Altman, Altman, Nathan, Ava, Caleb, Alan Turing, OpenAI's ChatGPT, ChatGPT, hasn't, OpenAI Organizations: Stanford, Philosophy
Yoshua Bengio is one of three AI "godfathers" who won the Turing Award for breakthroughs in 2018. He told the BBC that he would've prioritized safety if he'd known how quickly AI would progress. A professor known as one of three AI "godfathers" told the BBC that he felt "lost" over his life's work. "We also need the people who are close to these systems to have a kind of certification," Bengio told the broadcaster. On Tuesday, he signed a statement issued by the Center for AI Safety, which warns the technology poses an "extinction" risk comparable to nuclear war.
Persons: Yoshua Bengio, Geoffrey Hinton, Yann LeCun, ChatGPT, Sam Altman, Bengio, That's, Altman, Hinton, he's, LeCun, Organizations: BBC, Morning, Center, AI Safety, Google, New York Times Locations: Hinton
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," more than 350 signatories wrote in a letter published by the nonprofit Center for AI Safety (CAIS). As well as Altman, they included the CEOs of AI firms DeepMind and Anthropic, and executives from Microsoft (MSFT.O) and Google (GOOGL.O). Elon Musk and a group of AI experts and industry executives were the first ones to cite potential risks to society in April. AI pioneer Hinton earlier told Reuters that AI could pose a "more urgent" threat to humanity than climate change. Last week OpenAI CEO Sam Altman referred to EU AI - the first efforts to create a regulation for AI - as over-regulation and threatened to leave Europe.
A group of industry leaders is planning to warn on Tuesday that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars. “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” the open letter reads. It has been signed by more than 350 executives, researchers and engineers working in A.I., including the chief executives of three leading A.I. companies: Sam Altman of OpenAI, Demis Hassabis of Google DeepMind and Dario Amodei of Anthropic. Geoffrey Hinton and Yoshua Bengio, two of the three Turing Award winners known as “godfathers” of the A.I. movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta’s A.I. research, did not sign.)
Martin Shkreli is out of jail and earning $2,500 a month working as a consultant at a law firm. Shkreli is also living in Queens with his sister, per a report by the US Probation Office. A year after getting out of jail, Martin Shkreli — also known as "Pharma Bro" — is earning $2,500 a month as a consultant for a law firm and living with his sister in Queens, New York. Shkreli was released from jail early in May 2022, after which he was transferred to a halfway house, where he lived until September. Upon getting out of jail, he posted a selfie on Facebook, saying: "Getting out of real prison is easier than getting out of Twitter prison."
The European Union is at the forefront of drafting new AI rules that could set the global benchmark to address privacy and safety concerns that have arisen with the rapid advances in the generative AI technology behind OpenAI's ChatGPT. "If it's about protecting personal data, they apply data protection laws, if it's a threat to safety of people, there are regulations that have not been specifically defined for AI, but they are still applicable." Data protection authorities in France and Spain also launched in April probes into OpenAI's compliance with privacy laws. 'THINKING CREATIVELY'French data regulator CNIL has started "thinking creatively" about how existing laws might apply to AI, according to Bertrand Pailhes, its technology lead. "We are looking at the full range of effects, although our focus remains on data protection and privacy," he told Reuters.
Vyera said its bankruptcy was the result of declining profits, increased competition for generic drugs, and litigation alleging that Vyera suppressed competition for its most valuable drug, Daraprim. Daraprim is a life-saving anti-parasitic medicine whose price Shkreli infamously raised by more than 4,000%, and for which he worked to choke off generic competition after the company acquired the drug in 2015. Vyera filed a Chapter 11 plan in court on Wednesday, laying out its intent to repay creditors through asset sales. Vyera said that recently sold vouchers have fetched prices between $95 million and $120 million in sales that have occurred since 2020. Vyera listed Duane Morris as its largest unsecured creditor in its bankruptcy filing, with a $2.1 million asserted debt.
LONDON, May 5 (Reuters) - Artificial intelligence could pose a "more urgent" threat to humanity than climate change, AI pioneer Geoffrey Hinton told Reuters in an interview on Friday. "I wouldn't like to devalue climate change. I wouldn't like to say, 'You shouldn't worry about climate change.' He added: "With climate change, it's very easy to recommend what you should do: you just stop burning carbon. Signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and fellow AI pioneers Yoshua Bengio and Stuart Russell.
Total: 25