
Search results for: "Timnit"


20 mentions found


Hanna is a former Google AI ethicist who worked alongside Timnit Gebru, who was fired from the tech giant after voicing concerns about its natural language processing tools. Hanna now oversees research at Gebru's Distributed AI Research Institute. Her work centers on the communities most affected by AI. "So it increases that gigification and casualization of work," she said. See Business Insider's full AI Power List.
Persons: Hanna, Timnit Gebru, Emily Bender Organizations: Research, University of Washington
OpenAI CEO Sam Altman has argued that AI models should eventually produce synthetic data good enough to train themselves effectively. As the well of usable human-generated data dries up, more companies are looking into using synthetic data. Rather than being pulled from the real world, synthetic data is generated by AI systems that have been trained on real-world data. Synthetic data may also help offer some effective "countertuning" of the biases produced by real-world data. 'Habsburg AI': While the AI industry has found some advantages in synthetic data, it faces serious issues it can't afford to ignore, including fears that synthetic data can wreck AI models.
Persons: Sam Altman, Gary Marcus, Nathan Lambert, Timnit Gebru, Margaret Mitchell, Sadowski, Alexandr Wang, Marcus Organizations: Service, Google, Business, Oxford, Gartner, New York University, Allen Institute, AI, Nvidia, Meta's, Gretel, SynthLabs, Anadolu, Getty, Rush, Microsoft, Monash University Locations: Cambridge, Habsburg
The police had used a facial-recognition AI program that identified her as the suspect based on an old mugshot. The Detroit Police Department said that it restricts the use of the facial-recognition AI program to violent crimes and that the matches it makes are just investigative leads. The study also found that in a hypothetical murder trial, the AI models were more likely to propose the death penalty for an AAE speaker. A novel proposal: One reason for these failings is that the people and companies building AI aren't representative of the world that AI models are supposed to encapsulate. Bardlavens leads a team that aims to ensure equity is considered and baked into Adobe AI tools.
Persons: Woodruff, Ivan Land, Joy Buolamwini, Timnit Gebru, Valentin Hofmann, Geoffrey Hinton, Christopher Lafayette, Udezue, John Pasmore, Buolamwini, Timothy Bardlavens, Bardlavens, Esther Dyson, Dyson, Arturo Villanueva, Villanueva, Andrew Mahon Organizations: Service, Detroit, Business, Court of Michigan, Detroit Police Department, Microsoft, IBM, Allen Institute, AI, Dartmouth College, Center for Education Statistics, Big Tech, OpenAI, Latimer, Alza, Meta, Google, Tech, Companies, Adobe Locations: American, Africa, Southeast Asia, North America, Europe
Claudine Gay is out as the president of Harvard. Gay announced Tuesday in a letter that she was stepping down, and reactions have poured in on social media from both her supporters and critics. Elon Musk voiced his agreement with a social media user's post that said Gay had been "caught plagiarizing." I admire Claudine Gay for putting Harvard's interests first at what I know must be an agonizingly difficult moment. Not everyone celebrated Gay's resignation.
Persons: Claudine Gay, Gay, Larry Summers, Gay's, Elon Musk, Emil Michael, Alan Garber, Lawrence H. Summers, Bill Ackman, Ackman, Sally Kornbluth, Elizabeth Magill, Emil Michael (@emilmichael), Christopher Rufo, Rufo, Elon, Jason Calacanis, Timnit Gebru, Gebru, Nikole Hannah-Jones, Janai Nelson, Liz Magill's, Gary Marcus, Marcus Organizations: Harvard, Service, Treasury, Twitter, Billionaire, Gay, Former University of Pennsylvania, Harvard Corporation, Conservative, Google, Uber, NAACP Legal Defense, Educational Fund
OpenAI's new board has drawn criticism for consisting only of white men. One name that's been floated to join it is ex-Google researcher and AI ethics expert Timnit Gebru. The new OpenAI board has drawn criticism for its lack of diversity, as it is currently composed of three white men. One name that's been floated to join the board is prominent AI researcher Timnit Gebru. The new board will consider candidates to fill an expanded board of nine members, according to The Verge.
Persons: Timnit Gebru, Sam Altman, Larry Summers, Bret Taylor, Adam D'Angelo Organizations: Wired, Service, Google, Intelligence Research
Sam Altman's high-profile firing has drawn comparisons to Timnit Gebru's exit from Google. Some tech observers and "Black Twitter" asked: "What if Sam Altman were a Black woman?" Sam Altman's shocking ouster from OpenAI — and reinstatement — drew comparisons to Steve Jobs's exit from Apple and eventual return. But a less obvious comparison has been drawn that asks the question: "What if Sam Altman was a Black Woman?"
Persons: Sam Altman, Steve Jobs, Timnit Gebru, Altman, Gebru's, Jeff Dean, Daniel (@growing_daniel), Eric Schmidt, Kimberly Bryant, Bryant, Dominic-Madori Davis, Greg Brockman, Taylor Poindexter (@engineering_bae), Émile Torres, Gebru, Torres Organizations: Google, Service, Apple, Microsoft, Tech, OpenAI, TechCrunch Locations: America
On Tuesday, Lilian Weng, head of safety systems at OpenAI, wrote on X that she "just had a quite emotional, personal conversation" with ChatGPT in voice mode. "Try it especially if you usually just use it as a productivity tool," Weng wrote. Just had a quite emotional, personal conversation w/ ChatGPT in voice mode, talking about stress, work-life balance. OpenAI and Weng did not immediately respond to Insider's request for comment, made outside of normal working hours. Mehtab Khan, a fellow at Harvard University's Berkman Klein Center for Internet & Society, described Weng's post as "a dangerous characterization of an AI tool, and also yet another example of anthropomorphizing AI without thinking about how existing rules apply/don't apply."
Persons: Lilian Weng, Weng, Greg Brockman, Gebru, Joseph Weizenbaum, ELIZA, Tom Insel, Margaret Mitchell, Mehtab Khan Organizations: Service, MIT, Google, Harvard, Harvard University's Berkman Klein Center for Internet & Society, OpenAI
Researchers found programmers often prefer ChatGPT's (wrong) answers to coding questions. A pre-print paper released this month suggests ChatGPT has a neat little trick to convince people it's smart: a style-over-substance approach. Researchers from Purdue University analyzed ChatGPT's replies to 517 questions posted to Stack Overflow, an essential Q&A site for software developers and engineers. The Purdue findings follow research from Stanford and UC Berkeley academics indicating that the large language model is getting dumber. In response to the Purdue research, computer scientist and AI expert Timnit Gebru tweeted: "Great that Stack Overflow is being destroyed by OpenAI +friends."
Persons: Alistair Barr, Adam Rogers, Elon Musk, Timnit Gebru Organizations: OpenAI, Morning, Purdue University, Purdue, Stanford, UC Berkeley
"Titanic" and "Avatar" director James Cameron doesn't think AI will replace human writers. Cameron said he "certainly" wasn't interested in AI writing his scripts, but he could be forced to reassess his stance in the future. The Writers Guild and the Screen Actors Guild are currently on strike simultaneously for the first time since 1960. A significant factor in their protest is the fear of being replaced by AI, Insider previously reported. Richard Walter, the former chairman of UCLA's screenwriting program, told Insider that writers should not fear AI.
Persons: James Cameron, Cameron, Timnit Gebru, Emily M. Bender, Richard Walter Organizations: CTV News, Canada's CTV, Guild, Screen, Writers Guild of America, TechCrunch, Microsoft
Google has opposed a shareholder's call for more transparency around its algorithms. CEO Sundar Pichai emphasized the potential of new generative AI and added that safety is essential. Google's parent company Alphabet opposed a shareholder proposal that sought increased transparency surrounding its algorithms. The proposal argued that accountability and transparency in artificial intelligence are needed if the technology is to remain safe for society. In its opposition to the proposal, Google said that it already provides meaningful disclosures surrounding its algorithms, including through websites that provide overviews of how YouTube's algorithms sort content, for instance.
Persons: Sundar Pichai, Pichai, Geoffrey Hinton, Timnit Gebru Organizations: Google, Trillium Asset Management, Trillium, ProPublica, New Zealand Royal Commission, Mozilla Foundation, New York University, SEC, Google's Locations: Christchurch, Saudi Arabia
An ex-Google AI researcher said the current AI "gold rush" means firms won't "self-regulate." Gebru told The Guardian: "We need regulation and we need something better than just a profit motive." The former Google researcher, who said she was fired after pointing out biases in AI, has warned that the current AI "gold rush" means companies won't "self-regulate" unless they are pressured. The 40-year-old, who also had stints at Apple and Microsoft, explained that the hype around AI means safeguards are being cast aside. "It feels like a gold rush," Gebru added.
Google told staff it will be more selective about the research it publishes. Recently, information like code and data has become accessible on a much more "need-to-know" basis, according to a Google AI staffer. LaMDA, a chatbot technology that forms the basis of Bard, was originally built as a 20% project within Google Brain. (The company has historically allowed employees to spend 20% of their working days exploring side projects that might turn into full-fledged Google products.) Google's AI division has faced other setbacks.
Women still have less access to the internet, with men being 21% more likely to be online than women globally. One reason for this is that being a girl, teenager, woman, trans or non-binary person makes us victims of digital violence. An internet women want is one where there is no fear to comment, to express an opinion, or to publish photos of our bodies -- and where there are no limits simply because you are a woman on the internet. Women stood together internationally when Iranian women cut their hair, showing how the internet politicises women, sparks debates and builds international solidarity. Therefore, an internet that women want -- and that works for women -- needs to start by being affordable for women.
But, "you do at some point need to start having contact with reality," he told Insider. The plan was still only a rough sketch, Blania told Insider, but that didn't seem to matter to his host. "He always wanted to understand everything at a very deep level," Thrun told Insider in an email. (When asked about guns, Altman told Insider he'd been "happy to have one both times my home was broken into while I was there.") When asked about this, Altman told Insider in an email: "i can guess what that's about; these stories grow crazily inflated over the years of getting re-told!"
Altman told Insider, "We debate our approach frequently and carefully." "I don't think anyone can lose your dad young and wish he didn't have more time with him," Altman told Insider. Altman told Insider that his thinking had evolved since those posts.
Character.ai CEO Noam Shazeer, a former Googler who worked in AI, spoke to the "No Priors" podcast. He says Google was afraid to launch a chatbot, fearing the consequences if it said something wrong. Like the chatbot ChatGPT, Character.ai's technology leans on a vast amount of text-based information scraped from the web for its knowledge. Shazeer was a lead author on Google's Transformer paper, which has been widely cited as key to today's chatbots. Google had also received pushback internally from AI researchers like Timnit Gebru, who cautioned against releasing anything that might cause harm.
A group of prominent artificial intelligence experts called on European officials to pursue even broader regulation of the technology in the European Union's AI Act. In a policy brief released on Thursday, more than 50 individual expert and institutional signatories advocate for Europe to include general purpose AI, or GPAI, in its forthcoming regulations, rather than limiting the regulations to a more narrow definition. The group points to generative AI tools that have risen in popularity over the past few months, like ChatGPT. "That sort of wave of attention toward generative AI I think gave this clause greater visibility," Myers West said. "The EU AI Act is poised to become, as far as we're aware, the first omnibus regulation for artificial intelligence," Myers West said.
Civil society groups in the U.S. and EU have since pressed lawmakers to rein in OpenAI's research. Among the research cited was "On the Dangers of Stochastic Parrots," a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google. Her research argued that the present-day use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats. Asked to comment on the criticism, FLI's Tegmark said both the short-term and long-term risks of AI should be taken seriously. Twitter will soon launch a new fee structure for access to its research data, potentially hindering research on the subject.
Timnit Gebru Is Calling Attention to the Pitfalls of AI
2023-02-24 | by Emily Bobrow | www.wsj.com | time to read: +1 min
As a leading researcher on the ethics of artificial intelligence, Timnit Gebru has long believed that machine-learning algorithms could one day power much of our lives. What she didn’t predict was just how quickly this would happen. “I didn’t imagine people would be like, ‘Let’s replace lawyers with a chatbot,’ or ‘Let’s sell AI generated art that looks exactly like someone else’s,’” she says over video from her home in California’s Bay Area. Much of her work involves highlighting the ways AI programs can reinforce existing prejudices. “We talk about algorithms, but we don’t talk about who’s constructing the data set or who’s in the data set,” she says.
Sam Altman, CEO of OpenAI, walks from lunch during the Allen & Company Sun Valley Conference on July 6, 2022, in Sun Valley, Idaho. Sam Altman may be tech's next household name, but many Americans probably haven't heard of him. To anyone outside San Francisco, Altman would probably seem like just another young tech CEO. That worldview flared up into controversy in 2017 when Altman wrote a blog post criticizing political correctness, saying tech entrepreneurs were leaving San Francisco over it. "I realized I felt more comfortable discussing controversial ideas in Beijing than in San Francisco," he wrote.
Total: 20