
Search results for: "Dan Hendrycks"


10 mentions found


Elon Musk announced his new company, xAI, which he says has the goal of understanding "the true nature of the universe." Musk said Friday that the company will debut its technology on Saturday. "Tomorrow, xAI will release its first AI to a select group," Musk posted on X, formerly known as Twitter. In August, xAI appeared to be hiring, according to social media posts by multiple members of the company. Musk incorporated xAI in Nevada in March, according to filings.
Persons: Elon Musk, Greg Yang, Toby Pohlen, Dan Hendrycks Organizations: Google, Nvidia, DeepMind, Google Research, Microsoft Research, Twitter, Tesla, CNBC, Fox News Channel, xAI, X Corp, Center for AI Safety Locations: Nevada, San Francisco
Unstable Diffusion is an AI image generator with minimal content restrictions. That's what Chaudhry did when starting Unstable Diffusion in August 2022, the same month Stable Diffusion was released to the public. The community launched a Kickstarter campaign in December to raise money to build its image generator, but it was removed from the platform 12 days later. At the time, experts told Insider that being featured in non-consensual deepfake porn can be traumatic and is considered abuse. TechCrunch was able to use the generator to create look-alike images of Donald Trump and Chris Hemsworth, for example.
Persons: Arman Chaudhry, Donald Trump, Chris Hemsworth, Dan Hendrycks, Gavin Newsom Organizations: TechCrunch, Center for AI Safety, Google Locations: California
International coordination in A.I. is vital to curbing existential risks, says Dan Hendrycks, director of the Center for AI Safety: "it's probably better to start that as soon as possible, rather than when the technologies are more relevant for weaponization, when they become much more relevant for national security."
Persons: Dan Hendrycks Organizations: Center for AI Safety
Elon Musk launches AI firm xAI as he looks to take on OpenAI
2023-07-13 | www.reuters.com | time to read: +3 min
In a Twitter Spaces event Wednesday evening, Musk explained his plan for building a safer AI. Rather than explicitly programming morality into its AI, xAI will seek to create a "maximally curious" AI, he said. Musk in March registered a firm named X.AI Corp, incorporated in Nevada, according to a state filing. The firm lists Musk as the sole director and Jared Birchall, the managing director of Musk's family office, as a secretary. Dan Hendrycks, who will advise the xAI team, is currently director of the Center for AI Safety and his work revolves around the risks of AI.
Persons: Elon Musk, Igor Babuschkin, Tony Wu, Szegedy, Greg Yang, Jared Birchall, Dan Hendrycks, Akash Sriram, Chavi Mehta, Yuvraj Malik, Aditya Soni, Anna Tong, Shailesh Kuber, Leslie Adler Organizations: SpaceX, Twitter, Microsoft, Google, DeepMind, OpenAI, Tesla, X.AI Corp, Center for AI Safety, X Corp, Thomson Reuters Locations: Nevada, San Francisco Bay, Bengaluru, San Francisco
Elon Musk launches his new company, xAI
2023-07-12 | by Hayden Field | www.cnbc.com | time to read: +2 min
Elon Musk, the CEO of Tesla and SpaceX, and owner of Twitter, on Wednesday announced the debut of a new AI company, xAI, with the goal to "understand the true nature of the universe." According to the company's website, Musk and his team will share more information in a live Twitter Spaces chat on Friday. Team members behind xAI are alumni of DeepMind, OpenAI, Google Research, Microsoft Research, Twitter and Tesla, and have worked on projects including DeepMind's AlphaCode and OpenAI's GPT-3.5 and GPT-4 chatbots. Musk seems to be positioning xAI to compete with companies like OpenAI, Google and Anthropic, which are behind leading chatbots like ChatGPT, Bard and Claude. Musk reportedly incorporated xAI in Nevada in March.
Persons: Elon Musk, Dan Hendrycks, Greg Yang Organizations: SpaceX, Twitter, Tesla, DeepMind, Google Research, Microsoft Research, Google, The Financial Times, Nvidia, Fox News Channel, Center for AI Safety, xAI, X Corp Locations: San Francisco, Nevada
Elon Musk officially unveiled his new company xAI on Wednesday. Musk publicly introduced the company, whose stated aim is to glean "the true nature of the universe." An apparently all-male, 12-person team including Musk has been assigned the heady task, according to the company's website, which doesn't list any female employees. Hendrycks has published work on the potential dangers of AI, including the technology's ability to spread disinformation. With the launch of xAI, Musk now owns or runs half a dozen companies, including Tesla, SpaceX, Neuralink, The Boring Company, and Twitter.
Persons: Elon Musk, Dan Hendrycks, Kali Hays, Manuel Kroiss, Igor Babuschkin, Brian Philip Organizations: X.AI, Tesla, Twitter, X Corp, Microsoft, SpaceX, Center for AI Safety, Palo Alto Police Department, The Boring Company Locations: Nevada, Palo Alto
Washington CNN —Dozens of AI industry leaders, academics and even some celebrities on Tuesday called for reducing the risk of global annihilation due to artificial intelligence, arguing in a brief statement that the threat of an AI extinction event should be a top global priority. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement published by the Center for AI Safety. The statement highlights wide-ranging concerns about the ultimate danger of unchecked artificial intelligence. Still, the flood of hype and investment into the AI industry has led to calls for regulation at the outset of the AI age, before any major mishaps occur. The statement follows the viral success of OpenAI’s ChatGPT, which has helped heighten an arms race in the tech industry over artificial intelligence.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," said the letter, signed by many of the industry's most respected figures. These industry leaders are quite literally warning that the impending A.I. revolution should be taken as seriously as the threat of nuclear war. It is, however, precisely what the world's leading experts are warning could happen. Cynthia Rudin, a researcher at Duke University, told CNN on Tuesday: "Do we really need more evidence that A.I.'s negative impact could be as big as nuclear war?"
Persons: Sam Altman, Demis Hassabis, Dan Hendrycks, Robert Oppenheimer, Cynthia Rudin Organizations: CNN, Google, Center for AI Safety, Duke University
A research paper by an AI safety expert speculates on future nightmarish scenarios involving the technology. The recent paper, authored by Dan Hendrycks, an AI safety expert and director of the Center for AI Safety, highlights a number of speculative risks posed by unchecked development of increasingly intelligent AI. Emergent goals: it's possible that, as AI systems become more complex, they develop the capability to form their own objectives. The paper outlines a range of speculative doomsday scenarios, from weaponization to power-seeking behavior. A similar sentiment was recently expressed in an open letter signed by Elon Musk and a number of other AI safety experts.
Civil society groups in the U.S. and EU have since pressed lawmakers to rein in OpenAI's research. Among the research cited was "On the Dangers of Stochastic Parrots", a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google. Her research argued the present-day use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats. Asked to comment on the criticism, FLI's Tegmark said both short-term and long-term risks of AI should be taken seriously. Twitter will soon launch a new fee structure for access to its research data, potentially hindering research on the subject.
Total: 10