Search results for: "Yoshua Bengio"


7 mentions found


LONDON, May 5 (Reuters) - Artificial intelligence could pose a "more urgent" threat to humanity than climate change, AI pioneer Geoffrey Hinton told Reuters in an interview on Friday. "I wouldn't like to devalue climate change. I wouldn't like to say, 'You shouldn't worry about climate change.'" He added: "With climate change, it's very easy to recommend what you should do: you just stop burning carbon." Signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and fellow AI pioneers Yoshua Bengio and Stuart Russell.
Several tech executives and top artificial-intelligence researchers, including Tesla Inc. Chief Executive Officer Elon Musk and AI pioneer Yoshua Bengio, are calling for a pause in the breakneck development of powerful new AI tools. A moratorium of six months or more would give the industry time to set safety standards for AI design and head off potential harms of the riskiest AI technologies, the proponents of a pause said.
March 29 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Rather than pause research, she said, AI researchers should be subjected to greater transparency requirements. "If you do AI research, you should be very transparent about how you do it."
March 28 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned (GOOGL.O) DeepMind, as well as AI heavyweights Yoshua Bengio and Stuart Russell. Musk, whose carmaker Tesla (TSLA.O) is using AI for an autopilot system, has been vocal about his concerns about AI.
March 28 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in training systems more powerful than OpenAI's newly launched model GPT-4, they said in an open letter, citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter also detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Musk, whose carmaker Tesla (TSLA.O) is using AI for an autopilot system, has been vocal about his concerns about AI. Sam Altman, chief executive at OpenAI, hasn't signed the letter, a spokesperson at Future of Life told Reuters.
AI experts and company leaders have signed an open letter calling for a pause on AI development. The letter warns that AI systems such as OpenAI's GPT-4 are becoming "human-competitive at general tasks" and pose a potential risk to humanity and society. Here are the key points: Out-of-control AI: the non-profit floats the possibility of developers losing control of powerful new AI systems and their intended effect on civilization. A "dangerous race": the letter warned that AI companies are locked in an "out-of-control race to develop and deploy" new advanced systems. Six-month pause: the open letter asks for a six-month break from developing any AI systems more powerful than those already on the market.
Total: 7