Search results for: "Deepmind"

25 mentions found
could transform computer programming from a rarefied, highly-compensated occupation into a widely accessible skill that people can easily pick up and use as part of their jobs across a wide variety of fields. In situations where one needs a “simple” program … those programs will, themselves, be generated by an A.I. Welsh’s argument, which ran earlier this year in the house organ of the Association for Computing Machinery, carried the headline, “The End of Programming,” but there’s also a way in which A.I. could mark the beginning of a new kind of programming — one that doesn’t require us to learn code but instead transforms human-language instructions into software. Everyone is a programmer now — you just have to say something to the computer.”
Persons: DeepMind, Matt Welsh, Jensen Huang Organizations: Google, Apple, Association for Computing Machinery, Nvidia Locations: Google’s, Taiwan
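The snippet above describes programming by plain-language instruction: in practice that usually means prompting a large language model and running the code it returns. Below is a minimal sketch of that loop, assuming the OpenAI Python client (openai >= 1.0) and an API key in the environment; the model name and example prompt are illustrative assumptions, not details from the article.

```python
# Minimal sketch: turn a plain-English instruction into a program via an LLM.
# Assumes the OpenAI Python client (openai >= 1.0) and an API key in the
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def instruction_to_code(instruction: str) -> str:
    """Ask the model to emit a self-contained Python program for the task."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any code-capable model works
        messages=[
            {"role": "system",
             "content": "Return only a runnable Python program, no prose."},
            {"role": "user", "content": instruction},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(instruction_to_code(
        "Read numbers from numbers.txt and print their average."))
```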
So I spent a week conversing with Pi, a new personal AI chatbot, to see if it could fill the void. Pi is indeed useful and friendly, but it didn't fulfill my desire for connection and conversation. My human colleagues are sarcastic, snarky, wry, and, well, human. Thanks, Pi! When I relayed an amusing anecdote about my 15-year-old daughter, Pi told me: "There's no doubt that teenagers can be challenging, and it can be hard to communicate with them at times."
Persons: Pi, Slack, Reid Hoffman, Mustafa Suleyman, Karén, OpenAI's ChatGPT, Google's Bard, Siri, Sally Rooney Organizations: Service, LinkedIn, hometown Celtics, Miami Heat, Celtics
The Microsoft Bing App is seen running on an iPhone in this photo illustration on 30 May 2023 in Warsaw, Poland. (Photo by Jaap Arriens/NurPhoto via Getty Images) Artificial intelligence may lead to human extinction and reducing the risks associated with the technology should be a global priority, industry experts and tech leaders stated in an open letter. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement on Tuesday read. The technology has gathered pace in recent months after chatbot ChatGPT was released for public use in November and subsequently went viral. The statement Tuesday said that there has been increasing discussion about a "broad spectrum of important and urgent risks from AI."
Persons: Microsoft Bing, Jaap Arriens, Sam Altman, chatbot ChatGPT, ChatGPT Organizations: Microsoft, Getty, Google, Center, AI Safety Locations: Warsaw, Poland
Yoshua Bengio is one of three AI "godfathers" who won the Turing Prize for breakthroughs in 2018. He told the BBC that he would've prioritized safety if he'd known how quickly AI would progress. A professor known as one of three AI "godfathers" told the BBC that he felt "lost" over his life's work. "We also need the people who are close to these systems to have a kind of certification," Bengio told the broadcaster. On Tuesday, he signed a statement issued by the Center for AI Safety, which warns the technology poses an "extinction" risk comparable to nuclear war.
Persons: Yoshua Bengio, Geoffrey Hinton, Yann LeCun, ChatGPT, Sam Altman, Bengio, Altman, Hinton, LeCun Organizations: BBC, Morning, Center, AI Safety, Google, New York Times Locations: Hinton
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," more than 350 signatories wrote in a letter published by the nonprofit Center for AI Safety (CAIS). As well as Altman, they included the CEOs of AI firms DeepMind and Anthropic, and executives from Microsoft (MSFT.O) and Google (GOOGL.O). Elon Musk and a group of AI experts and industry executives were the first ones to cite potential risks to society in April. AI pioneer Hinton earlier told Reuters that AI could pose a "more urgent" threat to humanity than climate change. Last week OpenAI CEO Sam Altman referred to EU AI - the first efforts to create a regulation for AI - as over-regulation and threatened to leave Europe.
should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said the letter, signed by many of the industry’s most respected figures. These industry leaders are quite literally warning that the impending A.I. revolution should be taken as seriously as the threat of nuclear war. It is, however, precisely what the world’s leading experts are warning could happen. researcher at Duke University, told CNN on Tuesday: “Do we really need more evidence that A.I.’s negative impact could be as big as nuclear war?”
Persons: Sam Altman, Demis Hassabis, Dan Hendrycks, Robert Oppenheimer, Hendrycks, Newsrooms, Cynthia Rudin Organizations: CNN, Google, Center, A.I, Duke University
The Center for AI Safety's statement compares the risks posed by AI with nuclear war and pandemics. AI experts including Geoffrey Hinton and Yoshua Bengio have also supported the statement. The CEOs of three leading AI companies have signed a statement issued by the Center for AI Safety (CAIS) warning of the "extinction" risk posed by artificial intelligence. Per CAIS, OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei have all signed the public statement, which compared the risks posed by AI with nuclear war and pandemics. AI experts including Geoffrey Hinton and Yoshua Bengio are among the statement's signatories, along with executives at Microsoft and Google.
Washington CNN —Dozens of AI industry leaders, academics and even some celebrities on Tuesday called for reducing the risk of global annihilation due to artificial intelligence, arguing in a brief statement that the threat of an AI extinction event should be a top global priority. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement published by the Center for AI Safety. The statement highlights wide-ranging concerns about the ultimate danger of unchecked artificial intelligence. Still, the flood of hype and investment into the AI industry has led to calls for regulation at the outset of the AI age, before any major mishaps occur. The statement follows the viral success of OpenAI’s ChatGPT, which has helped heighten an arms race in the tech industry over artificial intelligence.
A group of industry leaders is planning to warn on Tuesday that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars. “Mitigating the risk of extinction from A.I. The open letter has been signed by more than 350 executives, researchers and engineers working in A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic. movement, signed the statement, as did other prominent researchers in the field (The third Turing Award winner, Yann LeCun, who leads Meta’s A.I.
Google invented AI technology like the Transformer, which led to breakthroughs in the chatbot race. Essentially, outside researchers gain access to a piece of AI technology from Google and continue iterating on it, which in turn benefits Google's own products. Several experts in patent law spoke to Insider about Google's AI patents, namely why it hasn't used them against competitors and whether it even could. Technical and legal questions aside, it would be somewhat hypocritical for Google to sue anyone for infringing its AI patents. Patent wars: Legal experts pointed to the idea of "mutually assured destruction" to explain why tech companies would file patents without enforcing them offensively.
Persons: Matthew D'Amore, Idong Ebong, Nixon Peabody Organizations: Google, Cornell University, Big Tech, Microsoft, Samsung, Apple, HTC, IBM Locations: OpenAI, ChatGPT
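The Transformer mentioned in the patents item above is the attention-based neural network architecture Google researchers introduced in 2017, and its core operation is scaled dot-product attention. Here is a minimal NumPy sketch of that operation with toy shapes and random values; it illustrates the published formula, not code from any Google product.

```python
# Scaled dot-product attention, the core Transformer operation:
#   Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                # weighted mix of values

# Toy example: 3 query positions, 4 key/value positions, dimension 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)    # -> (3, 8)
```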
The strategy is revealed in a detailed internal sales guideline, titled "Generative AI Sales Playbook," obtained by Insider. The guidelines may help Amazon make a stronger push in the generative AI space, where companies including Microsoft, OpenAI, Anthropic, and Google have taken an early lead. 'ChatGPT is a brand new, experimental offering': The guidelines focus on SageMaker's appeal to companies looking to build their own generative AI services. For example, for C-suite executives, Amazon salespeople are told to focus on how generative AI can "improve efficiency by automating operations," the document said. For those with a bit more experience in AI, Amazon salespeople are advised to recommend new generative AI capabilities and AWS offerings to accelerate their development process.
Persons: Bard, SageMaker, JumpStart, Sam Altman, Drew Angerer, Sparrow, Canva's, Eugene Kim Organizations: Microsoft, Google, Amazon, Stability, AI21 Labs, Amazon Alexa, AWS, Burnham
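The playbook's premise is that customers deploy an existing foundation model on SageMaker and build their own generative AI services on top of it. A rough sketch of what that can look like with the SageMaker Python SDK's JumpStart interface follows; the model ID, instance type, and request payload are illustrative assumptions that differ by model, and running it requires AWS credentials and sufficient service quota.

```python
# Rough sketch: deploy a JumpStart foundation model on Amazon SageMaker and
# query it. Requires AWS credentials and quota; the model ID, instance type,
# and payload format are illustrative assumptions that vary by model.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
)

response = predictor.predict({"inputs": "Summarize our Q2 sales results:"})
print(response)

predictor.delete_endpoint()  # clean up so the endpoint doesn't keep billing
```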
Bill Gates said the winner in AI will be the company that creates a personal digital agent. Gates added that it's 50-50 as to whether the AI winner behind the digital agent will come from Big Tech or the startup world. The startup was founded by LinkedIn cofounder Reid Hoffman, DeepMind cofounder Mustafa Suleyman, and Karén Simonyan, and describes itself as an "AI studio creating a personal AI for everyone." And Pi is still a ways away from what Bill Gates is imagining, a personal AI that can do your shopping and help read your emails. But Pi is the best conversational AI I've used so far, and everyone I've spoken to who has used it has been impressed.
Persons: Bill Gates, Pi, Gates, Reid Hoffman, Mustafa Suleyman, Karén, Matt Turner, Spriha Srivistava, Brad Davis, Brad, ChatGPT Organizations: Microsoft, Google, Big Tech, CNBC, LinkedIn, Pi Locations: San Francisco, Instagram
Microsoft co-founder Bill Gates reacts during a visit with Britain's Prime Minister Rishi Sunak of the Imperial College University, in London, Britain, February 15, 2023. Microsoft co-founder Bill Gates believes the future top company in artificial intelligence will likely have created a personal digital agent that can perform certain tasks for people. Gates said there is a fifty-fifty chance that this future AI winner will be either a startup or a tech giant. Until then, companies will continue embedding so-called generative AI technologies akin to OpenAI's popular ChatGPT into their own products. Watch: Bill Gates says OpenAI's GPT is the most important tech advance since the 1980's
After the hearing, he summed up his stance on AI regulation, using terms that are not widely known among the general public. "AGI safety is really important, and frontier models should be regulated," Altman tweeted. Large language models, like OpenAI's GPT-4, are frontier models, as compared to smaller AI models that perform specific tasks like identifying cats in photos. Some are more concerned about what they call "AI safety." "There must be clear guidance on AI end uses or categories of AI-supported activity that are inherently high-risk," Montgomery told Congress.
AI boom could expose investors’ natural stupidity
2023-05-19 | by Felix Martin | www.reuters.com | time to read: +7 min
Indeed, enthusiasm about AI has become the one ray of light piercing the stock market gloom created by the record-breaking rise in U.S. interest rates. It’s a good moment for investors to be especially alert to the tendency of natural stupidity to drive stock market valuations to unrealistic – and therefore ultimately unprofitable – extremes. However, the most important lessons of behavioural economics relate to a more fundamental question: Will the new generation of AI do what it promises? Behavioural economics offers some cautionary tales for such attempts to apply AI in the wild. For example, stock market returns can be affected by a small number of rare but extreme movements in share prices.
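The closing point, that a handful of rare but extreme price moves can dominate returns, is easy to see with a toy calculation: dropping just the ten best simulated trading days changes the cumulative result dramatically. The simulation below is purely illustrative; the fat-tailed return distribution is synthetic, not market data.

```python
# Toy illustration: a few extreme days dominate cumulative returns.
# The simulated return distribution is synthetic, not real market data.
import numpy as np

rng = np.random.default_rng(42)
# Fat-tailed daily returns: mostly small moves plus occasional large jumps.
daily = rng.standard_t(df=3, size=2520) * 0.01   # roughly 10 years of days

def cumulative_return(returns):
    return np.prod(1 + returns) - 1

best10 = np.argsort(daily)[-10:]                 # indices of the 10 best days
without_best10 = np.delete(daily, best10)

print(f"All days:             {cumulative_return(daily):+.1%}")
print(f"Without 10 best days: {cumulative_return(without_best10):+.1%}")
```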
LONDON, May 19 (Reuters) - Google on Friday defeated a lawsuit brought on behalf of 1.6 million people over medical records provided to the U.S. tech giant by a British hospital trust. The Royal Free London NHS Trust transferred patient data to Google's artificial intelligence firm DeepMind Technologies in 2015 in relation to the development of a mobile app designed to analyse medical records and detect acute kidney injuries. Alphabet Inc (GOOGL.O) unit Google and DeepMind were sued last year by Royal Free patient Andrew Prismall on behalf of 1.6 million people for alleged misuse of private information. Judge Heather Williams ruled on Friday that the case should not proceed, agreeing the case is "bound to fail". "I conclude that each member of the claimant class does not have a realistic prospect of establishing a reasonable expectation of privacy in respect of their relevant medical records," she said in a written ruling.
Sam Altman, CEO of OpenAI, is one among a number of business and political leaders set to join the annual Bilderberg Meeting in Lisbon, Portugal. OpenAI CEO Sam Altman will join forces with key leadership from firms like Microsoft and Google this week as a secretive meeting of the business and political elite kicks off in Lisbon, Portugal. Artificial intelligence will top the agenda as the ChatGPT chief meets with Microsoft CEO Satya Nadella, DeepMind head Demis Hassabis, and former Google CEO Eric Schmidt at the annual Bilderberg meeting. All in, around 130 participants from 23 countries are set to attend the private meeting — a similar number to previous years. However, the event's organizers say that the discreet nature of the event is to allow for greater freedom of discussion.
Google's AI demonstration showed how Big Tech keeps on winning. Lots of startups are throwing their hat into the generative AI race. But it looks like Big Tech has a leg up on the competition. And it makes competitors even more dependent on Big Tech companies. My colleague Hugh Langley highlights how size and reach, not just quality, will win the AI race and breaks down how Big Tech is set to reap most of the spoils.
DeepMind co-founder Mustafa Suleyman has a chilling warning for Google, his former employer: The internet as we know it will fundamentally change and "old school" Search will be gone in a decade. During his final period at Google, Suleyman worked on LaMDA, a large language model. With or without Google, the search experience will evolve to be conversational and interactive, Suleyman said on the No Priors podcast. There will be business AIs, government AIs, nonprofit AIs, political AIs, influencer AIs, brand AIs. Now, that's going to become much more dynamic, and interactive.
A co-founder of DeepMind, the AI company bought by Google in 2014, warned about AI-related job losses. Mustafa Suleyman said at a San Francisco conference that there will be "a serious number of losers." A co-founder of the AI company DeepMind has warned that governments will need to figure out a plan to compensate people who will lose their jobs to the new technology, the Financial Times reported. DeepMind was bought by Google in 2014, and has helped it develop large language models similar to ChatGPT called LaMDA and PaLM. Suleyman left DeepMind last January, before setting up his own chatbot business called Inflection AI.
ChatGPT is powered by these contractors making $15 an hour
2023-05-08 | by David Ingram | www.cnbc.com | time to read: +7 min
Out of the limelight, Savreux and other contractors have spent countless hours in the past few years teaching OpenAI's systems to give better responses in ChatGPT. So far, AI contract work hasn't inspired a similar movement in the U.S. among the Americans quietly building AI systems word-by-word. Job postings for AI contractors refer to both the allure of working in a cutting-edge industry and the sometimes-grinding nature of the work. There's no definitive tally of how many contractors work for AI companies, but it's an increasingly common form of work around the world. A spokesperson for OpenAI said no one was available to answer questions about its use of AI contractors.
LONDON, May 5 (Reuters) - Artificial intelligence could pose a "more urgent" threat to humanity than climate change, AI pioneer Geoffrey Hinton told Reuters in an interview on Friday. "I wouldn't like to devalue climate change. I wouldn't like to say, 'You shouldn't worry about climate change.' He added: "With climate change, it's very easy to recommend what you should do: you just stop burning carbon. Signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and fellow AI pioneers Yoshua Bengio and Stuart Russell.
A French startup that wants to be Europe's answer to OpenAI is in talks to raise an initial funding round. The secretive new project, named Mistral, was founded by two AI research scientists. Mistral, a secretive new startup pitched as Europe's answer to OpenAI, is in discussions to raise a substantial funding round, sources say. London-based generative AI startup Synthesia is in talks to raise a major round while ElevenLabs raised at a $100 million valuation last month. It's also widened the race for AI supremacy with Google launching its own AI assistant, Bard, to compete with OpenAI.
A new chatbot called Pi, launched by Inflection AI, offers personal advice and support. There's a new AI chatbot on the scene — and this one wants to get personal. At one point, I asked Pi to share museum recommendations for a friend visiting New York City. Insider asked Pi how to restart the conversation, and Pi said to "start talking about whatever is on your mind." Inflection AI also said it's "creating a new form of 'boundary training' that will redefine how AIs learn and are trained."
Total: 25