
Search results for: "Gary Marcus"

25 mentions found

A mysterious new OpenAI model known as Q* has got the tech world talking. AI experts say the model could be a big step forward but is unlikely to end the world anytime soon. As the dust settles on the chaos at OpenAI, we still don't know why CEO Sam Altman was fired — but reports have suggested it could be linked to a mysterious AI model. Dr Andrew Rogoyski, a director at the Surrey Institute for People-Centered AI, told BI that solving unseen problems was a key step towards creating AGI.
Sam Altman, CEO of OpenAI, attends the Asia-Pacific Economic Cooperation (APEC) CEO Summit in San Francisco, California, U.S. November 16, 2023. In recent weeks, talks have hit stumbling blocks over the extent to which companies should be allowed to self-regulate. Alexandra van Huffelen, Dutch minister for digitalisation, told Reuters the OpenAI saga underscored the need for strict rules. "Please don't gut the EU AI Act; we need it now more than ever." Reporting by Martin Coulter and Supantha Mukherjee; Editing by Susan Fenton.
The wildest coup in Silicon Valley's history just took place over the last 48 hours. OpenAI booted CEO Sam Altman, nearly hired him back, then went with 2 other CEOs. Sam Altman has now been scooped up as an employee by Microsoft, OpenAI's biggest investor. Sam Altman had met with world leaders, including British Prime Minister Rishi Sunak, in London earlier in the month. But Microsoft CEO Satya Nadella has not let a good crisis go to waste.
Generative AI still mostly experimental, say executives
  2023-11-09 | by Katie Paul | 4 min read
NEW YORK, Nov 9 (Reuters) - One year after the debut of ChatGPT created a global sensation, leaders of business, government and civil society said at the Reuters NEXT conference in New York that generative AI technology is still mostly in an experimental stage, with limited exceptions. Aguirre cited self-driving cars as an example of a technology struggling to make the transition to full deployment. “I’ve observed many generative AI applications that are in production while other customers are just beginning their journey.” One way generative AI was already being deployed widely, highlighted by speakers across industries, was to write computer code. Gary Marcus, a professor at New York University, said generative AI was error-prone in coding just like in other areas, but that the problem was less of a hindrance in the tech sector because programmers knew how to troubleshoot it. Companies should move slowly and deliberately when integrating the technology into uses where accuracy matters, executives emphasized.
Artificial Intelligence words are seen in this illustration taken March 31, 2023. Companies are increasingly using AI to make decisions including about pricing, which could lead to discriminatory outcomes, experts warned at the conference. “We should not underestimate how powerful these models are now and how rapidly they are going to get more powerful,” he said. Developing ever-more powerful AI will also risk eliminating jobs to a point where it may be impossible for humans to simply learn new skills and enter other industries. “Once that happens, I fear that it's not going to be so easy to go back to AI being a tool and AI as something that empowers people.”
VC Marc Andreessen wrote a lengthy missive this week, titled "The Techno-Optimist Manifesto." It wouldn't be a Marc Andreessen essay if the internet didn't lose its mind over it. Rather, technology, he wrote, can solve for "any material problem" under the sun and herald a new era of "abundance for everyone." The backlash to Andreessen's essay was swift. He runs one of the biggest venture capital firms by assets and perceived importance.
GPT-4 users have complained that the OpenAI model is getting 'dumber.' Their findings, published on Tuesday, challenge the assumption that AI models automatically improve. One of the bedrock assumptions of the current artificial intelligence boom is that AI models "learn" and improve over time. This is what users of OpenAI's GPT-4, the world's most-powerful AI model, have been experiencing lately. This recent GPT-4 research paper provides a healthy dose of skepticism to the assumptions that are driving these wild swings in value.
Left to right: Microsoft's CTO Kevin Scott, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis. Some AI industry experts say that focusing attention on far-off scenarios may distract from the more immediate harms that a new generation of powerful AI tools can cause to people and communities, including spreading misinformation, perpetuating biases and enabling discrimination in various services. “Motives seemed to be mixed,” Gary Marcus, an AI researcher and New York University professor emeritus who testified before lawmakers alongside Altman last month, told CNN. In his testimony before Congress, Altman also said the potential for AI to be used to manipulate voters and target disinformation were among “my areas of greatest concern.” Even in more ordinary use cases, however, there are concerns. Regulators may be the real intended audience for the tech industry’s doomsday messaging.
Washington CNN — Days after OpenAI CEO Sam Altman testified in front of Congress and proposed creating a new federal agency to regulate artificial intelligence, a US senator has introduced a bill to do just that. On Thursday, Colorado Democratic Sen. Michael Bennet unveiled an updated version of legislation he introduced last year that would establish a Federal Digital Platform Commission. And for the most significant platforms — companies the bill calls “systemically important” — the bill would create requirements for algorithmic audits and public risk assessments of the harms their tools could cause. The debate over whether the US government should establish a separate federal agency to police AI tools may become a significant focus of those efforts following Altman’s testimony this week. Altman suggested in a Senate hearing on Tuesday that such an agency could restrict how AI is developed through licenses or credentialing for AI companies.
Even the man who runs ChatGPT-maker OpenAI worries about the influence of AI on 2024's election. The devastation caused by social media in America's recent political history could look like child's play by comparison to AI. Even Altman thinks AI will make humans stupid. For now, Altman said, humans understand that AI is in its infancy and are aware that bots like ChatGPT routinely make mistakes. Altman correctly (and self-interestedly) called during the session for AI to be regulated, including a suggestion that AI-generated content be clearly labeled. The same slowness just won't cut it in a world running to embrace ChatGPT.
How the CEO behind ChatGPT won over Congress
  2023-05-17 | by Brian Fung | 9 min read
It was a pivotal moment for the AI industry. He agreed that large-scale manipulation and deception using AI tools are among the technology’s biggest potential flaws. On Tuesday, they seemed ready — or even relieved — to be dealing with another area of the technology industry. The AI industry’s biggest players and aspirants include some of the same tech giants Congress has sharply criticized, including Google and Meta. Here too, Altman deftly seized an opportunity to curry favor with lawmakers by emphasizing distinctions between his industry and the social media industry.
AI Is Now Cooking, but It Shouldn’t Be Overdone
  2023-05-16 | by Mustafa Suleyman | 1 min read
We needn’t brace for the bubble to pop or fear an uptick in popular disillusionment with the technology. AI has only recently exploded into public view. When OpenAI launched ChatGPT in November 2022, few people outside the industry had heard of large language models, let alone used one. “In 20 years following the internet space,” wrote a UBS analyst, “we cannot recall a faster ramp in a consumer internet app.” Venture capitalists poured more than $40 billion into AI enterprises. Chipmaker Nvidia saw a huge spike in its share price, briefly taking it to a trillion-dollar market capitalization, and Meta announced its plan to spend $33 billion on its “build-out of AI capacity.”
OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology & the Law Subcommittee hearing titled 'Oversight of A.I.: Rules for Artificial Intelligence' on Capitol Hill in Washington, May 16, 2023. The hearing came after Altman met with a receptive group of House lawmakers at a private dinner Monday, where the CEO walked through risks and opportunities in the technology. After the hearing, Blumenthal told reporters that comparing Altman's testimony to those of other CEOs was like "night and day." "Some of the Big Tech companies are under consent decrees, which they have violated.
OpenAI CEO Sam Altman spoke to an engaged crowd of about 60 lawmakers at a dinner Monday about the advanced artificial intelligence technology his company produces and the challenges of regulating it. The wide-ranging discussion that lasted about two hours came ahead of Altman's first time testifying before Congress at a Senate Judiciary subcommittee on privacy and technology hearing on Tuesday. The dinner discussion comes at a peak moment for AI, which has thoroughly captured Congress' fascination. "There isn't any question where he pulls back on anything," she said, adding that lawmakers had very thoughtful things to ask. Khanna said the question of openness of the model is something he's discussed with Altman before, though not at Monday's dinner.
AI Has Finally Become Transformative
  2023-05-16 | by Martin Casado | 1 min read
Speaking to a Senate subcommittee on May 16, 2023, OpenAI CEO Sam Altman, IBM chief privacy officer Christina Montgomery and NYU professor emeritus Gary Marcus gave suggestions for regulating the AI industry—and highlighted the associated perils. Artificial intelligence has generated tremendous value across many applications over the last decade, including search, ad targeting and recommendations. But nearly all these gains have gone to tech giants such as Google and Facebook. Despite the hoopla—and a lot of related startup activity—AI hasn’t brought a market transformation similar to the internet or mobile, in which an entire new class of companies emerge and become household names. That may soon change.
As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as a face for a new crop of AI products that can generate images and text in response to user prompts. In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even “a little bit scared” of the technology. Others want Altman and OpenAI to move more cautiously. “I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at an event last month. “The letter I don’t think was the optimal way to address it.”
OpenAI CEO Sam Altman to testify before Congress
  2023-05-10 | by Brian Fung | 1 min read
Washington CNN — OpenAI CEO Sam Altman will testify before Congress next Tuesday as lawmakers increasingly scrutinize the risks and benefits of artificial intelligence, according to a Senate Judiciary subcommittee. During Tuesday’s hearing, lawmakers will question Altman for the first time since OpenAI’s chatbot, ChatGPT, took the world by storm late last year. The groundbreaking generative AI tool has led to a wave of new investment in AI, prompting a scramble among US policymakers who have called for guardrails and regulation amid fears of AI’s misuse. Also testifying Tuesday will be Christina Montgomery, IBM’s vice president and chief privacy and trust officer, as well as Gary Marcus, a former New York University professor and a self-described critic of AI “hype.” “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said Connecticut Democratic Sen. Richard Blumenthal, who chairs the Senate panel on privacy and technology. “This hearing begins our Subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology.” He added: “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”
OpenAI CEO Sam Altman will testify before Congress for the first time next week as lawmakers are urgently seeking to figure out how to regulate rapidly advancing artificial intelligence tools. The hearing, entitled "Oversight of AI: Rules for Artificial Intelligence," will also feature IBM Vice President and Chief Privacy and Trust Officer Christina Montgomery and New York University Professor Emeritus Gary Marcus. Rep. Ted Lieu, D-Calif., who is co-hosting the dinner, told NBC News it's meant to "educate members" and that more than 50 lawmakers had already RSVP'd. Last week, Altman joined other tech CEOs for a meeting at the White House with Vice President Kamala Harris to discuss risks associated with AI.
Some of us would like to slow this down because we are seeing more costs every day, but I don’t think that means that there are no benefits. We may someday have a technology that revolutionizes science and technology, but I don’t think GPT-5 is the ticket for that. Combine that human overattribution with the reality that these systems don’t know what they’re talking about and are error-prone, and you have a problem. I don’t think we should go after an individual who posts a silly story on Facebook that wasn’t true. I don’t think, however, that the technology we have right now is very good for that — systems that can’t even reliably do math problems.
Here's why A.I. needs a six-month pause: NYU Professor Gary Marcus. Gary Marcus, New York University professor and Geometric Intelligence founder, joins 'Squawk on the Street' to discuss his thoughts on A.I. and why there are risks in this space.
March 29 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Rather than pause research, she said, AI researchers should be subjected to greater transparency requirements. "If you do AI research, you should be very transparent about how you do it."
March 28 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned (GOOGL.O) DeepMind, as well as AI heavyweights Yoshua Bengio and Stuart Russell. Musk, whose carmaker Tesla (TSLA.O) is using AI for an autopilot system, has been vocal about his concerns about AI.
March 28 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in training systems more powerful than OpenAI's newly launched model GPT-4, they said in an open letter, citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter also detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Musk, whose carmaker Tesla (TSLA.O) is using AI for an autopilot system, has been vocal about his concerns about AI. Sam Altman, chief executive at OpenAI, hasn't signed the letter, a spokesperson at Future of Life told Reuters.
On Tuesday, Google announced it was bringing AI-powered chat technology to Gmail and Google Docs, letting it help compose emails or documents. On Thursday, Microsoft said that its popular business apps like Word and Excel would soon come bundled with ChatGPT-like technology dubbed Copilot. But this time, Microsoft is pitching the technology as being "usefully wrong." Microsoft chief scientist and technical fellow Jaime Teevan said that when Copilot "gets things wrong or has biases or is misused," Microsoft has "mitigations in place." "I studied AI for decades and I feel this huge sense of responsibility with this powerful new tool," Teevan said.
Dmitri Brereton said Bing's new AI chatbot "got some answers completely wrong" during its demo. As part of Microsoft's unveiling of the new tech, Bing's AI was asked to list the pros and cons of the three best-selling pet vacuums. "I hope Bing AI enjoys being sued for libel," he wrote. The AI arms race may lead to the spread of misinformation. Brereton's observations come as Big Tech companies like Google and Microsoft enter an AI arms race. While Brereton told Insider that generative AI search engines like the new Bing can be "quite transformative," he noted that releasing it prematurely "could lead to big problems."
Total: 25