Search results for: "DeepMind"


25 mentions found


DeepMind CEO Demis Hassabis said there is a possibility that AI could become self-aware one day. DeepMind is an AI research lab, co-founded in 2010 by Hassabis and now owned by Alphabet. Hassabis spoke about the potential of artificial intelligence in an interview with CBS' "60 Minutes," which aired on Sunday, telling CBS that he thinks AI might one day become self-aware. He also told CBS that he believes AI is "the most important invention that humanity will ever make."
Tesla CEO Elon Musk is planning to launch an artificial intelligence start-up that would go head-to-head with OpenAI, the Financial Times reported Friday. "It's real and they are excited about it," a source familiar with the matter told the Financial Times. Musk was once a major financial backer of OpenAI, committing $1 billion over multiple years, according to an earlier report from Semafor. Beyond Microsoft and Google, Amazon announced on Thursday that it's entering the generative AI space. Read more at the Financial Times.
Ian Hogarth — who has invested in over 50 AI companies — wrote an FT essay warning about the tech. "God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race," he wrote. "At some point, someone will figure out how to cut us out of the loop, creating a God-like AI capable of infinite self-improvement," Hogarth added. He also warned that the heated competition between those at the forefront of the technology, like OpenAI and Alphabet-owned DeepMind, risks an unstable "God-like AI" because of a lack of oversight. In a 2019 interview with the New York Times, OpenAI CEO Sam Altman compared his ambitions to the Manhattan Project, which created the first nuclear weapons.
Driven by the recent AI boom, companies are raiding top college campuses for rare technical talent. She's currently on leave from her Stanford AI Ph.D. program to focus on Moonhub. In 2011, new AI Ph.D. graduates took jobs in the tech industry and academia in about equal measure. But since then, the majority of new grads have headed to the AI industry, with nearly double the percentage of AI Ph.D. grads taking industry jobs versus academic roles in 2021, according to Stanford's Institute for Human-Centered AI's 2023 AI Index Report. "All AI companies have roles for people with Ph.D.s and without," said Attaluri, the soon-to-be researcher at DeepMind.
Elon Musk-owned Twitter purchased 10,000 GPUs, apparently to get into the generative AI boom. This move goes against Musk's open-letter plea for companies to slow down AI development. It also backs up Reid Hoffman's claim that some, like Musk, wanted the pause so they could catch up. In December, a few months after taking full ownership of Twitter, Musk went so far as to tweet about how he cut OpenAI's access to Twitter, which was used to train OpenAI's language models. For someone who says he wants to pause AI development, Musk seems to be doing the very opposite.
Legal generative AI startup Harvey has raised a funding round from Sequoia, Insider has learned. This funding round landed the startup a $150 million post-money valuation. Industry-specific generative AI startups have emerged in nearly every industry, from healthcare to gaming, to offer specialized services beyond the capabilities of general models like OpenAI's GPT-4. Legal generative AI startup Harvey has raised a Series A round of funding at a $150 million post-money valuation from Sequoia Capital, according to three people with knowledge of the financing who were not authorized to speak publicly. Harvey's relatively small customer base makes its valuation seem rich in comparison, the source said, pointing to a broader trend of elevated valuations and round sizes in the hyped-up generative AI space.
Tech companies typically use GPUs to work on large AI models, given the computational workload the newer technology requires. Musk, however, has criticized the recent development of generative AI, saying the technology is powerful and needs regulation to make sure it's operating within "the public interest." It's unclear exactly what Twitter will use generative AI for, the people familiar said. Generative AI, if trained for such uses, is capable of creating new advertising images and text to target specific audiences. Nvidia, which is estimated to have 95% of the market, manufactures a GPU for large AI models that costs $10,000.
Civil society groups in the U.S. and EU have since pressed lawmakers to rein in OpenAI's research. Among the research cited was "On the Dangers of Stochastic Parrots", a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google. Her research argued the present-day use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats. Asked to comment on the criticism, FLI's Tegmark said both short-term and long-term risks of AI should be taken seriously. Twitter will soon launch a new fee structure for access to its research data, potentially hindering research on the subject.
Tech leaders are urging caution on AI
  2023-03-30 | by Paayal Zaveri | www.businessinsider.com | time to read: +4 min
Insider asked ChatGPT, the viral AI chatbot sweeping the internet, to whip up a layoff memo for a pretend tech company, Gomezon. Elon Musk, Steve Wozniak, researchers at Alphabet's DeepMind, and other AI leaders are calling for a pause on training AI models more powerful than OpenAI's GPT-4. My colleague Emilia David looked at why Elon Musk and other tech leaders are right: AI needs to slow down. An Apple Watch is an essential for many of us these days, but the right band can make all the difference. Check out Insider's review of the 18 best Apple Watch bands in 2023.
AI-powered technology companies: Microsoft arguably pushed AI into the mainstream with the unexpected release of its new AI-powered search engine, Bing. During its annual conference for developers earlier this month, Nvidia demonstrated the fundamental importance of its role in facilitating the use of AI applications. Also unveiled were four new chips designed specifically for inferencing, which are optimized for various new generative AI applications. To facilitate these operations, AMD provides machine learning and deep learning systems that offer higher-performance computing capabilities to accelerate AI applications. Qualcomm (QCOM) is focused on making on-device AI processing more efficient across different industries and products.
A top Google researcher resigned after warning execs that Bard was trained off ChatGPT, per The Information. A Google spokesperson told The Verge that Bard is not trained on data from ChatGPT. The researcher is one of many Google employees to leave the company and join OpenAI. One person told The Information that Google stopped using the data to train Bard after Devlin warned executives about the issue. The Information reported that Alphabet's two AI teams, DeepMind and Google Brain, have joined forces to better compete with OpenAI.
March 29 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Rather than pause research, she said, AI researchers should be subjected to greater transparency requirements. "If you do AI research, you should be very transparent about how you do it."
March 28 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned (GOOGL.O) DeepMind, as well as AI heavyweights Yoshua Bengio and Stuart Russell. Musk, whose carmaker Tesla (TSLA.O) is using AI for an autopilot system, has been vocal about his concerns about AI.
March 28 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in training systems more powerful than OpenAI's newly launched model GPT-4, they said in an open letter, citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter also detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Musk, whose carmaker Tesla (TSLA.O) is using AI for an autopilot system, has been vocal about his concerns about AI. Sam Altman, chief executive at OpenAI, hasn't signed the letter, a spokesperson at Future of Life told Reuters.
In the white paper, the Department for Science, Innovation and Technology (DSIT) outlined five principles it wanted companies to follow. Rather than establishing new regulations, the government is calling on regulators to apply existing regulations and inform companies about their obligations under the white paper. "When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently." On Monday, Secretary of State Michelle Donelan visited the offices of AI startup DeepMind in London, a government spokesperson said. Not everyone is convinced by the U.K. government's approach to regulating AI.
Elon Musk and dozens of other technology leaders have called on AI labs to pause the development of systems that can compete with human-level intelligence. "Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?" The Future of Life Institute is a nonprofit organization based in Cambridge, Massachusetts, that campaigns for the responsible and ethical development of artificial intelligence. The institute has previously gotten the likes of Musk and Google-owned AI lab DeepMind to promise never to develop lethal autonomous weapons systems. The institute said it was calling on all AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
Its signatories called for a 6-month pause on the training of AI systems more powerful than GPT-4. The letter, issued by the non-profit Future of Life Institute, called for AI labs to pause training any tech more powerful than OpenAI's GPT-4, which launched earlier this month. The non-profit said powerful AI systems should only be developed "once we are confident that their effects will be positive and their risks will be manageable." Stability AI CEO Emad Mostaque, researchers at Alphabet's AI lab DeepMind, and notable AI professors have also signed the letter. The letter accused AI labs of being "locked in an out-of-control race to develop and deploy" powerful tech.
Top AI researchers have been leaving for startups where their work can have more impact. That frustration over Google's slow movement has been corroborated by other former Google researchers who spoke to Insider. Niki Parmar left Google Brain after five years to serve as a co-founder and CTO of Adept, though like Vaswani she recently left for a stealth startup. Lukasz Kaiser left Google Brain after over seven years to join OpenAI in 2021. Sharan Narang, another contributor to the T5 paper, left Google Brain in 2022 after four years.
Members of the AlphaFold team in front of the European Molecular Biology Laboratory in Heidelberg, Germany. AlphaFold was trained on public data resources, including those managed by the EMBL’s European Bioinformatics Institute. Meta Platforms Inc.’s new tool predicting the structure of hundreds of millions of proteins is the latest example of a breakthrough in computational biology that began several years ago at an Alphabet Inc. subsidiary. Some scientists expect the new class of artificial-intelligence systems to accelerate work in the life sciences, particularly drug development.
The draft needs to be thrashed out between EU countries and EU lawmakers, called a trilogue, before the rules can become law. This led to different AI tools being classified according to their perceived risk level: from minimal through to limited, high, and unacceptable. Almost all of the big tech players have stakes in the sector, including Microsoft (MSFT.O), Alphabet (GOOGL.O) and Meta (META.O). BIG TECH, BIG PROBLEMS: The EU discussions have raised concerns for companies — from small startups to Big Tech — on how regulations might affect their business and whether they would be at a competitive disadvantage against rivals from other continents. A recent survey by industry body appliedAI showed that 51% of the respondents expect a slowdown of AI development activities as a result of the AI Act.
LONDON, March 21 (Reuters) - Google (GOOGL.O) asked London's High Court on Tuesday to throw out a lawsuit brought on behalf of 1.6 million people over medical records provided to the tech giant by a British hospital trust. The Royal Free London NHS Trust transferred patient data to Google's artificial intelligence firm DeepMind Technologies in 2015 in relation to the development of a mobile app designed to analyse medical records and detect acute kidney injuries. Google and DeepMind were sued last year by Royal Free patient Andrew Prismall on behalf of 1.6 million people for alleged misuse of private information. However, Prismall's lawyer Timothy Pitt-Payne said in court filings that every claimant "had their patient-identifiable medical records transferred ... and therefore suffered the same loss of control". "Every wrongful transfer of medical records merits an award of damages," he added.
Ex-Google engineers developed a conversational AI chatbot years ago, per The Wall Street Journal. Google is now racing to catch up with Microsoft's AI and plans to release its AI chatbot this year. "It caused a bit of a stir inside of Google," Shazeer said in an interview with investors Aarthi Ramamurthy and Sriram Krishnan last month. But Google's AI plans may now finally see the light of day, even as discussions around whether its chatbot can be responsibly launched continue. Alphabet chairman John Hennessy agreed that Google's chatbot wasn't "really ready for a product yet."
About half of the 20 people who reported to Elon Musk after his takeover have left Twitter. Musk has hired some new people, including engineers who may be working on an AI project. Elon Musk's Twitter is ruled by chaos. None have been directly replaced, the people familiar said, although Musk has also hired some new people from outside his companies. Below is a complete list of who Musk set as his direct reports, including those who have already left the company.
AI search engine startup Perplexity AI is raising a funding round led by NEA, Insider has learned. The deal aims to raise between $20 million and $25 million at a $150 million post-money valuation, according to sources. The fundraise continues the trend of large rounds and valuations in the buzzy generative AI space. AI search engine startup Perplexity AI is in talks to raise a funding round led by NEA, according to three people with knowledge of the financing who were not authorized to speak publicly. Perplexity AI cofounder and CEO Aravind Srinivas declined to comment.
He chatted with a woman who was locked out of her Apple account minutes after her iPhone was stolen. CEO Mark Zuckerberg is structurally changing Facebook to mimic Instagram. The restructuring — which will likely include layoffs, as Insider reported — is part of Zuck's planned "year of efficiency." iPhone users could soon send iMessages through PCs. These are the best MagSafe battery packs for iPhone users.
Total: 25