
Search results for: "Emily Bender"


6 mentions found


Hanna is a former Google AI ethicist who worked alongside Timnit Gebru, who was fired from the tech giant after voicing concerns about its natural language processing tools. Hanna now oversees research at Gebru's Distributed AI Research Institute. Her work centers on communities most affected by AI. "So it increases that gigification and casualization of work."
New York CNN — Sam Altman thinks the technology underpinning his company's most famous product could bring about the end of human civilization. As many as 300 million full-time jobs around the world could eventually be automated in some way by generative AI, according to Goldman Sachs estimates. Challenges ahead: When starting OpenAI, Altman told CNN in 2015, he wanted to steer the path of AI rather than worry about the potential harms and do nothing. OpenAI CEO Sam Altman delivers a speech during a meeting at Station F in Paris on May 26. "Sam embodies that for AI right now." The world is counting on Altman to act in the best interest of humanity with a technology that, by his own admission, could be a weapon of mass destruction.
Left to right: Microsoft's CTO Kevin Scott, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis. (Joy Malone/David Ryder/Bloomberg/Joel Saget/AFP/Getty Images) Some AI industry experts say that focusing attention on far-off scenarios may distract from the more immediate harms that a new generation of powerful AI tools can cause to people and communities, including spreading misinformation, perpetuating biases and enabling discrimination in various services. "Motives seemed to be mixed," Gary Marcus, an AI researcher and New York University professor emeritus who testified before lawmakers alongside Altman last month, told CNN. In his testimony before Congress, Altman also said the potential for AI to be used to manipulate voters and target disinformation were among "my areas of greatest concern." Even in more ordinary use cases, however, there are concerns. Influencing regulators: Regulators may be the real intended audience for the tech industry's doomsday messaging.
How the CEO behind ChatGPT won over Congress
2023-05-17 | by Brian Fung | edition.cnn.com | time to read: +9 min
It was a pivotal moment for the AI industry. He agreed that large-scale manipulation and deception using AI tools are among the technology’s biggest potential flaws. On Tuesday, they seemed ready — or even relieved — to be dealing with another area of the technology industry. The AI industry’s biggest players and aspirants include some of the same tech giants Congress has sharply criticized, including Google and Meta. Here too, Altman deftly seized an opportunity to curry favor with lawmakers by emphasizing distinctions between his industry and the social media industry.
Microsoft's CTO office told staff they can use ChatGPT at work as long as they don't share "sensitive data." In response, a senior engineer from Microsoft's CTO office wrote that employees were allowed to use it, as long as they don't share confidential information with the AI tool. "Human beings sign NDAs and consequently have incentives to be careful in how they share information." While employees are on the hook for protecting confidential data, it's not clear what exactly Microsoft or OpenAI is doing to address the issue. "Is the responsibility on employees to not share sensitive information, or is the responsibility on OpenAI to use information carefully, or some combination?"
An Amazon lawyer warned employees about sharing confidential company information with ChatGPT. Others wondered if they were even allowed to use the AI tool for work. The lawyer warned employees not to provide ChatGPT with "any Amazon confidential information (including Amazon code you are working on)," according to a screenshot of the message seen by Insider. Overall, Amazon employees in the Slack channel were excited about the potential of ChatGPT and wondered if Amazon was working on a competing product. For Amazon employees, data privacy seems to be the least of their concerns.
Total: 6