
Search results for: "Wozniak"


25 mentions found


Apple is developing an AI-powered health coach under the codename Quartz, Bloomberg reported. The service would use data from Apple Watch users to monitor sleep patterns, exercise routines, diet, and emotions, then suggest personalized healthy changes based on the AI's analysis. The latest iterations of the Apple Watch already have a heart rate monitor, fertility tracker, and temperature sensor. Bloomberg reported that additional features will be rolled out later to read users' moods by analyzing their speech, typed words, and other device data.
Elon Musk said his longtime friendship with Google cofounder Larry Page ended over a disagreement about AI safety and that the two men haven't spoken in years. Musk said Page wanted to create a "digital god" and accused him of being a "speciesist." The Tesla CEO said Page "got very upset with me about OpenAI," the company Musk helped found as a competitor to Google's AI efforts, and that he hasn't been able to talk with Page "because he doesn't want to talk to me anymore."
Elon Musk said the government needs "some sort of contingency plan" to deal with powerful AI, telling Tucker Carlson there must be a way to shut the technology down if it gets out of hand. Alphabet CEO Sundar Pichai and OpenAI CEO Sam Altman have also called for government regulation and expressed concern about the potential dangers of the technology. Now, Musk says he plans to create a "maximum truth-seeking AI that tries to understand the nature of the universe."
"I'm going to start something which I call TruthGPT," Musk told Fox News' "Tucker Carlson Tonight" on Monday, adding that he'd want his AI chatbot to be a "maximum truth-seeking AI that tries to understand the nature of the universe." He didn't provide evidence for those claims or detail exactly what a "truth-seeking AI" might entail. "A path to AI dystopia is to train AI to be deceptive," Musk said. "It could cause harm," Pichai told CBS News' "60 Minutes" on Sunday. Twitter, which Musk told the BBC last week intends to pursue generative AI, is a for-profit company.
Here's what tech executives are saying about the potential dangers of advanced AI tech. In a recent interview with Tucker Carlson, Musk said AI had the potential to destroy civilization. OpenAI CEO Sam Altman has said he's a "little bit afraid" of AI: "And I think it'd be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid." In an earlier interview with ABC News, Altman said that "people should be happy" that his company was "a little bit scared" of the potential of artificial intelligence.
AI development needs input from social scientists, ethicists, and philosophers, Google CEO Sundar Pichai told CBS' "60 Minutes," saying AI systems need to be "aligned to human values, including morality." "This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers, and so on." Though much of the buzz so far has been based on OpenAI's ChatGPT, Google is also developing Bard, its own AI chatbot.
A group of a dozen European Union lawmakers called for a new set of rules to regulate a larger swath of artificial intelligence tools, beyond those identified as explicitly high-risk under the region's proposed AI Act. Their letter comes after a group of prominent AI experts called for Europe to make its AI rules more expansive, arguing that excluding general-purpose AI, or GPAI, would miss the mark. "We are nevertheless in agreement with the letter's core message: with the rapid evolution of powerful AI, we see the need for significant political attention," the lawmakers wrote. They pledged to provide a set of rules within the AI Act framework to steer AI development in a "human-centric, safe, and trustworthy" way, and said both democratic and non-democratic countries should be called on "to exercise restraint and responsibility in their pursuit of very powerful artificial intelligence."
Leaders like Elon Musk have called for a "pause" on AI development to better consider its effects on society. But industry insiders say we already know the best way to make sure AI acts responsibly: just add humans. The secret to responsible AI, in other words, is no secret at all. Much of responsible AI comes down to understanding the impact of AI on people's day-to-day lives. All of which is to say, a way to create responsible AI already exists.
Altman made the remarks during a Thursday video appearance at an MIT event that discussed business and AI. OpenAI makes ChatGPT, an AI bot that can create human-like responses to questions asked by a user. GPT technology underpins Microsoft's Bing AI chatbot, and prompted a flurry of AI investment. Earlier this year, Altman acknowledged that AI technology made him a "little bit scared." Questions about safe and ethical AI use have come up at the White House, on Capitol Hill, and in boardrooms across America.
Adobe cloud business insights: Despite the drag the data suggests it creates, some industry executives say technical debt gets a bad reputation; in this sense, technical debt is a signal of iteration. Chris Duffey, Adobe's head of strategic development for creative cloud partnerships, is looking to reshape the concept. "I would offer to reframe technical debt as the value of insight gathering throughout the innovation creation process," Duffey said. Even though retiring them would reduce operational costs, legacy systems in the technical debt bucket are often core operational functions that an organization can't just turn off.
Elon Musk-owned Twitter purchased 10,000 GPUs, apparently to get into the generative AI boom. This move goes against Musk's open-letter plea for companies to slow down AI development. It also backs up Reid Hoffman's claim that some, like Musk, wanted the pause so they could catch up. In December, a few months after taking full ownership of Twitter, Musk went so far as to tweet about how he cut OpenAI's access to Twitter data, which had been used to train OpenAI's language models. For someone who says he wants to pause AI development, Musk seems to be doing the very opposite of that.
Job van der Voort, CEO of HR-tech company Remote, says AI will give workers "superpowers." Van der Voort said he thinks AI won't replace workers but will instead transform their jobs. AI "gives you superpowers," van der Voort said. AI "is going to transform every single business going forward, I think without any exception," van der Voort said. But rather than replacing people's jobs completely, van der Voort said that AI would instead cause a redeployment of the workforce.
Eric Schmidt said a six-month pause on AI development would "simply benefit China." "The question is what is the right answer," Schmidt told the Financial Review. "I'm not in favour of a six-month pause because it will simply benefit China." Instead of a pause, leaders should instead collectively discuss appropriate guardrails "ASAP," Schmidt said. "I think today the government's response would be clumsy because there are very few people in government who understand this stuff," Schmidt told the Australian newspaper.
AI developers, prominent AI ethicists and even Microsoft co-founder Bill Gates have spent the past week defending their work. "I don't think asking one particular group to pause solves the challenges," Gates told Reuters on Monday. A pause would be difficult to enforce across a global industry, Gates added — though he agreed that the industry needs more research to "identify the tricky areas." That's what makes the debate interesting, experts say: The open letter may cite some legitimate concerns, but its proposed solution seems impossible to achieve. It noted in its blog post that future AI systems could become "much more powerful" over the next decade, and building guardrails now could "help reduce risks" down the road.
LinkedIn cofounder Reid Hoffman told CNBC the call for an AI slow down is a "mistaken effort." He said Elon Musk has an "it's only great if I do it" mentality when it comes to ChatGPT. Both Musk and Hoffman helped found OpenAI in 2015, but Musk left the company's board of directors in 2018 and has been critical of the company ever since. Despite his longtime friendship with Musk, Hoffman said he's "much more on the path that OpenAI has gone" when it comes to developing the technology. Spokespeople for Musk, Hoffman, and OpenAI did not respond to a request for comment ahead of publication.
LONDON, April 4 (Reuters) - Calls to pause the development of artificial intelligence will not “solve the challenges” ahead, Microsoft co-founder Bill Gates told Reuters, his first public comments since an open letter sparked a debate about the future of the technology. The technologist-turned-philanthropist said it would be better to focus on how best to use the developments in AI, as it was hard to understand how a pause could work globally. “I don’t think asking one particular group to pause solves the challenges,” Gates said on Monday. While currently focused full-time on the philanthropic Bill and Melinda Gates Foundation, Gates has been a bullish supporter of AI and described it as revolutionary as the Internet or mobile phones. He also said in the interview the details of any pause would be complicated to enforce.
Sam Altman compared OpenAI's ambitions with the scale of the Manhattan Project in 2019, per the NYT. According to Metz, Altman also paraphrased a 1945 speech by the Manhattan Project's leader, Robert Oppenheimer, in which Oppenheimer justified creating the bombs that devastated Hiroshima and Nagasaki as a necessary expansion of human knowledge. "Technology happens because it is possible," Altman reportedly said, adding that he and Oppenheimer shared the same April 22 birthday, per The Times. Altman cautioned that AGI would come with a "serious risk of misuse, drastic accidents, and societal disruption" in the February blog post. Last Friday, Italy's national data protection agency announced that it was blocking access to ChatGPT and investigating OpenAI.
Welcome to the era of viral AI generated 'news' images
2023-04-02 | by Clare Duffy | edition.cnn.com
None of these things actually happened, but AI-generated images depicting them did go viral online over the past week. The images ranged from obviously fake to, in some cases, compellingly real, and they fooled some social media users. There are also concerns that AI-generated images could be used for harassment, or to further drive divided internet users apart. Eliot Higgins, founder and creative director of the investigative group Bellingcat, posted fake images of former President Donald Trump to Twitter last week. Many of the recent viral AI-generated images were created by a tool called Midjourney, a less than year-old platform that allows users to create images based on short text prompts.
A letter from tech heavyweights and researchers urging caution about AI should serve as a warning. To help address the fears, companies must set rules and be open how they use AI, execs told Insider. If you ask a group of high-profile tech leaders and researchers, they'll answer a firm "yes." That could involve companies coming up with standards and declaring how they are using or plan to use AI, business leaders told Insider. Bricker said business leaders need to work on improving the rules around AI systems and processes.
One AI researcher who has been warning about the tech for over 20 years said to "shut it all down." Eliezer Yudkowsky said the open letter calling for a pause on AI development doesn't go far enough. Yudkowsky, who has been described as an "AI doomer," suggested an "indefinite and worldwide" ban. The letter, signed by 1,125 people including Elon Musk and Apple's co-founder Steve Wozniak, requested a pause on training AI tech more powerful than OpenAI's recently launched GPT-4. Yudkowsky instead suggested a ban that is "indefinite and worldwide" with no exceptions for governments or militaries.
Dozens of AI enthusiasts gathered in SF's Cerebral Valley on Thursday for Eric Newcomer's AI summit. The handful of streets between San Francisco's Fillmore and Mission neighborhoods have been called a variety of names in recent times (Cerebral Valley, Bayes Valley, Hayes Valley), but on a Thursday morning in March, they were home to dozens of AI enthusiasts, founders, and VCs looking to learn more about the space at independent journalist Eric Newcomer's Cerebral Valley AI Summit. The model to rule them all: With representation from several OpenAI competitors, including Anthropic, Adept, and Stability AI, a common question during panels was how the landscape of AI model providers would shake out. Others, like Stability AI founder and CEO Emad Mostaque, claimed that the question of AI models went beyond performance or cost to issues around transparency and accessibility. The future of coding: With the recent AI boom, a flock of startups has emerged to help developers build AI and non-AI applications.
Italy's data protection regulator announced a ban on ChatGPT and an investigation into OpenAI. It cited a March 20 data breach and no "legal basis" for using people's data to train the chatbot. Italy's national data protection agency (DPA) said it would block access to ChatGPT immediately, and is starting an investigation into its creator, OpenAI. It added that the restriction was temporary, until the company can abide by the European Union's data protection laws, known as the General Data Protection Regulation (GDPR). The Italian authority also cited a data breach on March 20, where a bug allowed some ChatGPT users to see the titles of other users' conversations.
The group wants the FTC to require OpenAI to establish a way to independently assess GPT products before they're deployed in the future. It also wants the FTC to create a public incident reporting system for GPT-4 similar to its systems for reporting consumer fraud, and to take on a rulemaking initiative to create standards for generative AI products. Tesla CEO Elon Musk, who co-founded OpenAI, and Apple co-founder Steve Wozniak were among the other signatories.
Tech leaders are urging caution on AI
2023-03-30 | by Paayal Zaveri | www.businessinsider.com
Insider asked ChatGPT, the viral AI chatbot sweeping the internet, to whip up a layoff memo for a pretend tech company, Gomezon. Elon Musk, Steve Wozniak, researchers at Alphabet's DeepMind, and other AI leaders are calling for a pause on training AI models more powerful than OpenAI's GPT-4. My colleague Emilia David looked at why Elon Musk and other tech leaders are right: AI needs to slow down.
Total: 25