
Search results for: "Superintelligent"


25 mentions found


The narrative from Silicon Valley is that the AI train has left the station and any smart investor had better hop on before these products become “superintelligent” and start solving all the world’s problems. Now, some of the leading language models appear to be hitting a wall, according to at least three reports last week. But if we have indeed hit a scaling wall, “it may mean that the mega-cap technology companies have over-invested,” and it’s possible that they could scale back in the near future. That’s the AI optimist/pragmatist view. For a less rosy outlook, I turned to Gary Marcus, NYU professor emeritus and outspoken critic of AI hype.
The age of AGI is coming and could be just a few years away, according to OpenAI cofounder John Schulman. Speaking on a podcast with Dwarkesh Patel, Schulman predicted that artificial general intelligence could be achieved in "two or three years." A spokesperson for OpenAI told The Information that the remaining staffers were now part of its core research team. Schulman's comments come amid protest movements calling for a pause on training AI models. Groups such as Pause AI fear that if firms like OpenAI create superintelligent AI models, they could pose existential risks to humanity.
And the fact that there aren't such controls in place yet is a problem OpenAI recognized, per its July 2023 post. "Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans' ability to supervise AI," read OpenAI's post. "But humans won't be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence." Leike — who worked at Google's DeepMind before his gig at OpenAI — had big aspirations for keeping humans safe from the superintelligence we've created. "Maybe a once-and-for-all solution to the alignment problem is located in the space of problems humans can solve."
The past few years have been tough for edtech companies. In recent months, multiple edtech startups have raised fresh funding rounds while specifically touting AI as a core part of their business model. These deals could signal that AI is ushering in a new era for edtech companies, and VCs who invest in the space are excited about the renaissance. And Ednition, also one of Donnelly's portfolio companies, provides an infrastructure-as-a-service platform for other edtech companies to improve the data that goes into their AI models. That's why it's so important to invest in edtech AI startups that help people rethink how they interact with technology and learn new skills necessary to successfully enter the workforce, he said.
The startup, Read AI, closed a $21 million Series A funding round in April. Goodwater Capital led the round, with participation from existing investor Madrona Venture Group, which led the startup's $10 million seed round in 2021. David Shin, who co-founded Read AI alongside Robert Williams and Elliott Waldron, told Business Insider that the generative AI boom over the last year has supercharged what the startup can offer clients. Read AI offers multiple pricing plans, including a basic, free version for individual users as well as enterprise and enterprise plus accounts that cost $22.50 and $29.75 a month per user, respectively. Check out the 23-slide presentation Read AI used to raise $21 million in Series A funding.
Newly launched startup Superintelligent is betting it can solve this problem and help more people master using AI in their work and personal lives. The company just exited stealth with $2 million in pre-seed funding from Learn Capital, an edtech-focused VC fund. Based in New York, Superintelligent is a learning platform designed to help people understand how to use AI tools. "People who never would have cared about taking an online course before will 100% find themselves looking for online tools for learning AI." More broadly, AI is providing a boon to the edtech space, with startups such as Lirvana Labs, which provides AI learning for kids; Curipod, which lets teachers create AI lesson plans; and AI-powered study assistants Digest.ai and FoondMate all raising funding recently.
As the teacher started to count down, the students uncrossed their arms and bowed their heads, completing the exercise in a flash. “One,” the teacher said. Pens across the room went down and all eyes shot back to the teacher. Under a policy called “Slant” (Sit up, Lean forward, Ask and answer questions, Nod your head and Track the speaker) the students, aged 11 and 12, were barred from looking away. When a digital bell beeped (traditional clocks are “not precise enough,” the principal said) the students walked quickly and silently to the cafeteria in a single line.
An employee at rival Anthropic sent OpenAI thousands of paper clips in the shape of their logo. The prank was a subtle jibe suggesting OpenAI's approach to AI could lead to humanity's extinction. Anthropic was formed by ex-OpenAI employees who split from the company over AI safety concerns. One of OpenAI's biggest rivals played an elaborate prank on the AI startup by sending thousands of paper clips to its offices. Anthropic was founded by former OpenAI employees who left the company in 2021 over disagreements on developing AI safely.
Nearly all of OpenAI’s 800 employees have threatened to follow Mr. Altman to Microsoft, which asked him to lead an A.I. lab with Greg Brockman, who quit his roles as OpenAI’s president and board chairman in solidarity with Mr. Altman. The board has not said what it thought Mr. Altman was not being honest about. There were indications that the board was still open to his return, as it and Mr. Altman held discussions that extended into Tuesday, two people familiar with the talks said. But there was a sticking point: Mr. Altman rejected some of the guardrails that had been proposed to improve his communication with the board.
To many, he was considered the human face of generative AI. Those worries over generative AI came to a head with the surprise ousting of Altman, who was also OpenAI's cofounder. “Does the future then belong to the machines?” Sutskever reportedly felt Altman was pushing OpenAI’s software too quickly into users’ hands, potentially compromising safety. The fate of OpenAI is viewed by many technologists as critical to the development of AI. He advocated on social media in September for a "slowing down" of AI development.
The startup's newly appointed interim head moved quickly to dismiss speculation that OpenAI's board ousted Altman due to a spat over the safety of powerful AI models. It was not clear why Murati had stepped down as interim CEO. Some of those joining Altman at Microsoft include senior researchers Szymon Sidor and Jakub Pachocki, according to Brockman. Microsoft had supported a return by Altman to the startup, according to sources, a move that seemed likely only hours prior to Monday's announcements.
In a statement on the social media platform X, Shear dismissed speculation that OpenAI's board ousted Altman because of a spat over the safety of powerful AI models. OpenAI dismissed Altman on Friday following a "breakdown of communications," according to an internal memo seen by Reuters. In a separate post on X, Altman shared Nadella's message with the words, "the mission continues". The decision not to reinstate Altman as OpenAI's chief confounded efforts by investors and employees to steady the startup's path.
But if social networking was a wolf in sheep’s clothing, artificial intelligence is more like a wolf clothed as a horseman of the apocalypse. While A.I. certainly poses problems and challenges that call for government action, the apocalyptic concerns — be they mass unemployment from automation or a superintelligent A.I. — remain largely speculative. If doing too little, too late with social media was a mistake, we now need to be wary of taking premature government action that fails to address concrete harms. The White House is not wrong to want standardized testing of A.I. systems to keep the government apprised of safety tests, and also to have the secretary of labor study the risks of and remedies for A.I.
The UK's AI summit is underway. Some AI experts and startups say they've been frozen out in favor of bigger tech companies. They warn that the "closed door" event risks ensuring that AI is dominated by select companies. The UK's AI summit aims to bring together AI experts, tech bosses, and world leaders to discuss the risks of AI and find ways to regulate the new technology. "It is far from certain whether the AI summit will have any lasting impact," Ekaterina Almasque, a general partner at European venture capital firm OpenOcean, which invests in AI, told Insider.
The boss of Google DeepMind pushed back on a claim from Meta's artificial intelligence chief alleging the company is pushing worries about AI's existential threats to humanity to control the narrative on how best to regulate the technology. "If your fearmongering campaigns succeed, they will *inevitably* result in what you and I would identify as a catastrophe: a small number of companies will control AI," LeCun said on X, the platform formerly known as Twitter, on Sunday. I also know that producing AI systems that are safe and under our control is possible. "Then there's sort of the misuse of AI by bad actors repurposing technology, general-purpose technology for bad ends that they were not intended for. "And then finally, I think about the more longer-term risk, which is technical AGI [artificial general intelligence] risk," Hassabis said.
But he told MIT Technology Review that he wasn't sure whether he would choose to become "part AI." Elon Musk has said Neuralink will help people merge with AI — but it is unclear if it's possible. OpenAI's chief scientist has said that people may choose to become "part AI" in the future to compete with superintelligent machines. Sutskever is currently working on OpenAI's "superalignment" project, which aims to build fail-safes that will prevent superintelligent AI from going rogue. Despite this, Sutskever told MIT Tech Review that he was unsure whether he would ever choose to merge with AI, should it become possible.
OpenAI is building a new "Preparedness" team to further AI safety. The ChatGPT-maker's newest team aims to address potential risks linked to advanced AI, including nuclear threats. The Preparedness team is hiring for a national security threat researcher and a research engineer. The Preparedness team will help "track, evaluate, forecast, and protect against catastrophic risks," including chemical, biological, nuclear, and cybersecurity threats. As part of the team, OpenAI is hiring for a national security threat researcher and a research engineer.
Meta's chief AI scientist Yann LeCun said that superintelligent AI is unlikely to wipe out humanity. He told the Financial Times that current AI models are less intelligent than a cat. AI CEOs signed a letter in May warning that superintelligent AI could pose an "extinction risk." Fears that AI could wipe out the human race are "preposterous" and based more on science fiction than reality, Meta's chief AI scientist has said. However, LeCun told the Financial Times that many AI companies had been "consistently over-optimistic" about how close current generative models were to AGI, and that fears over AI extinction were overblown as a result.
A newly released biography on Musk details how he justified poaching a Google scientist to then-CEO Larry Page. "And I was like, 'Larry, if you just hadn't been so cavalier about AI safety then it wouldn't really be necessary to have some countervailing force,'" Musk said. Sutskever joined Google's AI unit, Google Brain, in 2013 along with Geoffrey Hinton — also known as the "godfather of AI." When Musk started his own AI startup — xAI — in July, he again poached AI experts from Google and OpenAI.
Max Tegmark has long believed in the promise of artificial intelligence. As a physicist and AI researcher at the Massachusetts Institute of Technology and a co-founder of the Future of Life Institute, which studies technology, he has envisioned a near future in which superintelligent computers could fight climate change, find cures for cancer and generally solve our thorniest problems. As long as proper safety standards are in place, he argued, “the sky’s the limit.”
Elon Musk and Sam Altman are racing to create superintelligent AI. Musk said xAI plans to use Twitter data to train a "maximally curious" and "truth-seeking" superintelligence. Elon Musk is throwing out challenge after challenge to tech CEOs — while he wants to physically fight Meta's Mark Zuckerberg, he's now racing with OpenAI to create AI smarter than humans. On Saturday, Musk said on Twitter Spaces that his new company, xAI, is "definitely in competition" with OpenAI. Over a 100-minute discussion that drew over 1.6 million listeners, Musk explained his plan for xAI to use Twitter data to train superintelligent AI that is "maximally curious" and "truth-seeking."
Elon Musk's warning for China: superintelligent AI could take control of the country. The billionaire said he told senior leaders about the potential threat during a recent China trip. Elon Musk says he told senior leaders in China that the creation of an AI-led "digital superintelligence" could usurp the Chinese Communist Party and take control of the country. Musk has raised several alarms about the potential dangers of AI becoming a kind of "superintelligence" with capabilities beyond what humans have. During his trip to China, Musk was treated like royalty, meeting senior government officials and business leaders to discuss popular topics such as AI.
OpenAI fears that superintelligent AI could lead to human extinction. It is putting together a team to ensure that superintelligent AI aligns with human interests. The new team — called Superalignment — plans to develop AI with human-level intelligence that can supervise superintelligent AI within the next four years. OpenAI CEO Sam Altman has long been calling for regulators to address AI risk as a global priority. To be sure, not everyone shares OpenAI's concerns about future problems posed by superintelligent AI.
"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue." Superintelligent AI - systems more intelligent than humans - could arrive this decade, the blog post's authors predicted. The team's goal is to create a "human-level" AI alignment researcher, and then scale it through vast amounts of compute power. OpenAI says that means they will train AI systems using human feedback, train AI systems to assist human evaluation, and then finally train AI systems to actually do the alignment research. AI safety advocate Connor Leahy said the plan was fundamentally flawed because the initial human-level AI could run amok and wreak havoc before it could be compelled to solve AI safety problems.
It records patient visits on his smartphone and summarizes them for treatment plans and billing. Dr. Hitchcock used to spend up to two hours typing up these medical notes after his four children went to bed. “It’s quite awesome.” ChatGPT-style artificial intelligence is coming to health care, and the grand vision of what it could bring is inspiring. But first will come more mundane applications of artificial intelligence. A prime target will be to ease the crushing burden of digital paperwork that physicians must produce, typing lengthy notes into electronic medical records required for treatment, billing and administrative purposes.
Total: 25