
Search results for: "Life Institute"


24 mentions found


Twitter CEO Elon Musk speaks at the "Twitter 2.0: From Conversations to Partnerships" marketing conference in Miami, Florida, on April 18, 2023. After being spotted on Capitol Hill on Wednesday, Elon Musk tweeted that he'd met with Senate Majority Leader Chuck Schumer, D-N.Y., and other lawmakers about artificial intelligence regulation. Schumer's high-level plans focus on transparency for AI systems, requiring independent experts to test the technologies ahead of public release and requiring disclosure of the people, places and ways involved in the technology's development. Musk has been among the loudest critics of the current fast pace of AI development.
WASHINGTON, April 26 (Reuters) - Elon Musk, the billionaire CEO of electric vehicle maker Tesla (TSLA.O) and social media platform Twitter, discussed artificial intelligence issues with U.S. Senate Majority Leader Chuck Schumer on Wednesday. "We talked about the future," Musk told reporters after exiting the meeting that lasted about an hour. Earlier this month, Schumer said he had launched an effort to establish rules on artificial intelligence to address national security and education concerns, as use of programs like ChatGPT becomes widespread. Senate Intelligence Committee chair Mark Warner sent major AI CEOs a letter Wednesday asking them to take steps to address concerns. Commerce Secretary Gina Raimondo told reporters Wednesday the Biden administration is working "as aggressively as possible to figure out our approach" to AI.
April 17 (Reuters) - EU lawmakers urged world leaders on Monday to hold a summit to find ways to control the development of advanced artificial intelligence (AI) systems such as ChatGPT, saying they were developing faster than expected. The 12 MEPs, all working on EU legislation on the technology, called on U.S. President Joe Biden and European Commission President Ursula von der Leyen to convene the meeting, and said AI firms should be more responsible. "We are nevertheless in agreement with the letter's core message: with the rapid evolution of powerful AI, we see the need for significant political action," they added. The letter urged democratic and "non-democratic" countries to reflect on potential systems of governance, and to exercise restraint in their pursuit of very powerful AI. The Biden administration has also been seeking public comments on potential accountability measures for AI systems as questions loom about their impact on national security and education.
A group of a dozen lawmakers for the European Union called for a new set of rules to regulate a larger swath of artificial intelligence tools, beyond those identified as explicitly high risk under the region's proposed AI Act. The letter comes after a group of prominent AI experts called for Europe to make its AI rules more expansive, arguing that excluding general purpose AI, or GPAI, would miss the mark. "We are nevertheless in agreement with the letter's core message: with the rapid evolution of powerful AI, we see the need for significant political attention." They pledged to provide a set of rules within the AI Act framework to steer AI development in a "human-centric, safe, and trustworthy" way. The lawmakers said both democratic and non-democratic countries should be called on "to exercise restraint and responsibility in their pursuit of very powerful artificial intelligence."
Altman made the remarks during a Thursday video appearance at an MIT event that discussed business and AI. OpenAI makes ChatGPT, an AI bot that can create human-like responses to questions asked by a user. GPT technology underpins Microsoft's Bing AI chatbot, and prompted a flurry of AI investment. Earlier this year, Altman acknowledged that AI technology made him a "little bit scared." Questions about safe and ethical AI use have come up at the White House, on Capitol Hill, and in boardrooms across America.
Elon Musk is reportedly planning an AI startup amid the chatbot craze kicked off by OpenAI's ChatGPT. He is talking to Tesla and SpaceX investors about backing the startup, the FT reported. Insider's Kali Hays has previously reported that Musk has brought on artificial intelligence experts and obtained roughly 10,000 graphics processing units. Musk did not respond to emailed requests for comment sent to his Tesla and SpaceX email addresses.
Adobe cloud business insights: Despite the drag of technical debt that the data suggests, some industry executives say it gets a bad reputation. In this sense, technical debt is a signal of iteration. Adobe's head of strategic development for creative cloud partnerships, Chris Duffey, is looking to reshape technical debt. "I would offer to reframe technical debt as the value of insight gathering throughout the innovation creation process," Duffey said. Even though retiring them could reduce operational costs, legacy systems in the technical debt bucket are core operational functions that an organization can't just turn off.
AI developers, prominent AI ethicists and even Microsoft co-founder Bill Gates have spent the past week defending their work. "I don't think asking one particular group to pause solves the challenges," Gates told Reuters on Monday. A pause would be difficult to enforce across a global industry, Gates added — though he agreed that the industry needs more research to "identify the tricky areas." That's what makes the debate interesting, experts say: The open letter may cite some legitimate concerns, but its proposed solution seems impossible to achieve. It noted in its blog post that future AI systems could become "much more powerful" over the next decade, and building guardrails now could "help reduce risks" down the road.
More than 1,000 people, including Elon Musk, recently signed a letter calling for a pause on AI development. "I think it's very important to invest in responsible development," Meta CTO Andrew Bosworth told the outlet, "and we do that kind of investment all the time. And so I think, not only is it unrealistic, I don't think it would be effective." "Shutting down AI development for six months gives the bad guys six more months to catch up," he said.
Bill Gates said he didn't think a six-month halt on advanced AI development was practical. His comments came a week after an open letter called for a six-month pause on advanced AI development. Gates, however, isn't the only high-profile voice cautioning against pausing AI development. Last week, billionaire investor Bill Ackman warned that shutting down AI development for six months would allow bad actors six more months to catch up to current technology. Gates did not immediately respond to Insider's request for comment sent via the Bill & Melinda Gates Foundation outside regular business hours.
Civil society groups in the U.S. and EU have since pressed lawmakers to rein in OpenAI's research. Among the research cited was "On the Dangers of Stochastic Parrots", a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google. Her research argued the present-day use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats. Asked to comment on the criticism, FLI's Tegmark said both short-term and long-term risks of AI should be taken seriously. Twitter will soon launch a new fee structure for access to its research data, potentially hindering research on the subject.
March 29 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Rather than pause research, she said, AI researchers should be subjected to greater transparency requirements. "If you do AI research, you should be very transparent about how you do it."
March 28 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned (GOOGL.O) DeepMind, as well as AI heavyweights Yoshua Bengio and Stuart Russell. Musk, whose carmaker Tesla (TSLA.O) is using AI for an autopilot system, has been vocal about his concerns about AI.
March 28 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in training systems more powerful than OpenAI's newly launched model GPT-4, they said in an open letter, citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter also detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Musk, whose carmaker Tesla (TSLA.O) is using AI for an autopilot system, has been vocal about his concerns about AI. Sam Altman, chief executive at OpenAI, hasn't signed the letter, a spokesperson at Future of Life told Reuters.
March 28 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in the training of systems more powerful than GPT-4, they said in an open letter, citing potential risks to society and humanity. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. The letter also detailed potential risks to society and civilization by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities. Since its release last year, Microsoft-backed OpenAI's ChatGPT has prompted rivals to launch similar products, and companies to integrate it or similar technologies into their apps and products.
Elon Musk and dozens of other technology leaders have called on AI labs to pause the development of systems that can compete with human-level intelligence. "Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?" The Future of Life Institute is a nonprofit organization based in Cambridge, Massachusetts, that campaigns for the responsible and ethical development of artificial intelligence. The institute has previously gotten the likes of Musk and Google-owned AI lab DeepMind to promise never to develop lethal autonomous weapons systems. The institute said it was calling on all AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
Its signatories called for a six-month pause on the training of AI systems more powerful than GPT-4. The letter, issued by the non-profit Future of Life Institute, called for AI labs to pause training any tech more powerful than OpenAI's GPT-4, which launched earlier this month. The non-profit said powerful AI systems should only be developed "once we are confident that their effects will be positive and their risks will be manageable." Stability AI CEO Emad Mostaque, researchers at Alphabet's AI lab DeepMind, and notable AI professors have also signed the letter. The letter accused AI labs of being "locked in an out-of-control race to develop and deploy" powerful tech.
AI experts and company leaders have signed an open letter calling for a pause on AI development. The letter warns that AI systems such as OpenAI's GPT-4 are becoming "human-competitive at general tasks" and pose a potential risk to humanity and society. Here are the key points. Out-of-control AI: The non-profit floats the possibility of developers losing control of powerful new AI systems and their intended effect on civilization. A "dangerous race": The letter warned that AI companies are locked in an "out-of-control race to develop and deploy" new advanced systems. Six-month pause: The open letter asks for a six-month break from developing any AI systems more powerful than those already on the market.
The letter comes just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that underpins the viral AI chatbot tool, ChatGPT. The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. Artificial intelligence experts have become increasingly concerned about AI tools’ potential for biased responses, the ability to spread misinformation and the impact on consumer privacy. The letter hints at the broader discomfort inside and outside the industry with the rapid pace of advancement in AI. Correction: An earlier version of this story said Microsoft co-founder Bill Gates and OpenAI CEO Sam Altman had signed the letter; neither had signed it.
The letter comes just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that underpins the viral AI chatbot tool, ChatGPT. The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. Artificial intelligence experts have become increasingly concerned about AI tools’ potential for biased responses, the ability to spread misinformation and the impact on consumer privacy. Lian Jye Su, an analyst at ABI Research, said the letter shows legitimate concerns among tech leaders over the unregulated usage of AI technologies. But he called parts of the petition “ridiculous,” including the premise of asking for a hiatus in AI development beyond GPT-4.
UBS shuffles retailers: Ross Stores (ROST) to sell; Burlington (BURL) to sell; Club name Foot Locker (FL) to sell. Apple Pay Later allows four payments over six weeks. Users can apply for Apple Pay Later loans of between $50 and $1,000. As a subscriber to the CNBC Investing Club with Jim Cramer, you will receive a trade alert before Jim makes a trade. If Jim has talked about a stock on CNBC TV, he waits 72 hours after issuing the trade alert before executing the trade.
Cheetahs are being reintroduced to India after being declared extinct in the country in 1952. The reintroduction effort is aimed at creating a viable population of wild cheetahs. Some experts are critical of the plan, saying it's more like a large "zoo" than a wild population. The reintroduction plan, which is estimated to cost about $11 million, aims to establish a viable, free-ranging population of cheetahs. "The cheetah is a magnificent animal, it's a big magnet for ecotourism," Jhala told National Geographic.
In the case of Elon Musk v. Charismatic Megafauna, the agency intends to publish its final report in late April. Musk went on: "Either explicitly or implicitly some people seem to think that humans are a blight on the Earth's surface." Musk is talking about existential risk, the idea that something — an asteroid, a rogue artificial intelligence — might kill every human on Earth. And if you assume that future human minds will "mainly be implemented in computational hardware instead of biological neuronal wetware," as Bostrom does, you end up with a mind-boggling 10^54 human lives. Musk has made the defense of "future life" his mission.
It's the last weekend in August and we've got the perfect way to spend it: reading another edition of Insider Life. See what it's like to live a day as a rising real-estate star. The community has exploded during the pandemic, as we've spent more time at home, becoming increasingly aware of how our spaces look and feel. A new tiny-home community outside Atlanta, developed by the MicroLife Institute, is aiming to change the way people live. The 500-square-foot cottages cost up to $200,000, and are built around communal areas that are designed to encourage socialization among neighbors.
Total: 24