Search results for: "Bots"


25 mentions found


Google just dropped Bard on Tuesday, and the AI bot is already game to take on even the tech giant. "As we've said, Bard can sometimes give inaccurate or inappropriate information that doesn't represent Google's views and Bard should not respond in a way that endorses a particular viewpoint on subjective topics," the statement said. Insider repeated Wong's question in our own test of Bard, and received similar responses — Bard offers different answers to the same question, called "drafts," as Insider previously reported. In multiple versions of its responses, Bard repeated that "I would side with the Justice Department in this case." AI chatbots can sometimes deliver factually incorrect information, experts including OpenAI's own chief technology officer Mira Murati have said.
If there’s a risk, it’s primarily concentrated in the relationship between TikTok’s Chinese parent, ByteDance, and Beijing. TikTok has been erecting technical and organizational barriers that it says will keep US user data safe from unauthorized access. “Regarding privacy, we also did not see the TikTok app exhibiting any behaviors similar to malware.” Are there other security concerns? TikTok later confirmed the incident and ByteDance fired several employees who had improperly accessed the TikTok data of two journalists. “And governments around the world are ignoring their duty to protect citizens’ private information, allowing big tech companies to exploit user information for gain.”
ChatGPT Helped Win a Hackathon
  2023-03-20 | by Kim S. Nash | www.wsj.com | time to read: +3 min
The ChatGPT AI bot has spurred speculation about how hackers might use it and similar tools to attack faster and more effectively, though the more damaging exploits so far have been in laboratories. In its current form, the ChatGPT bot from OpenAI, an artificial-intelligence startup backed by billions of dollars from Microsoft Corp., is mainly trained to digest and generate text. Two security researchers from cybersecurity company Claroty Ltd. said ChatGPT helped them win the Zero Day Initiative’s hack-a-thon in Miami last month. At the contest, Mr. Moshe and his partner succeeded all 10 times they tried, winning $123,000.
Some of her tips include posting six Instagram stories a day and posting to the grid once a day. Influencer Amy Marietta shared this in a TikTok post describing her takeaways from a meeting she had with Instagram earlier this week. (In her original TikTok, Marietta said the new guidance was once a day, but she corrected herself in a follow-up video.) Although Instagram typically pays creators more than TikTok, Marietta said creators may still choose to put their effort into other platforms. "I don't think people are going to stop spending as much time on TikTok and go focus on Instagram," Marietta said.
The AI company Anthropic announced Tuesday that its Claude chatbot would be available to developers. While working at OpenAI, Dario Amodei spent nearly five years helping to develop the language model powering ChatGPT. Amodei, Anthropic's CEO, says early testers have found Claude "more conversational" and creative than ChatGPT. Anthropic is launching two versions of its chatbot, dubbed Claude and Claude Instant. With constitutional AI, Claude would create and critique its outputs after reading the customer's constitution to create more predictable outcomes.
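The "create and critique its outputs after reading the customer's constitution" mechanism can be illustrated with a minimal, hypothetical sketch. The `generate` callable, the sample principles, and the prompt wording below are assumptions for illustration, not Anthropic's actual API or constitution.

```python
# A minimal, hypothetical sketch of a critique-and-revise loop guided by a "constitution".
# `generate` is a placeholder for any text-generation call; it is NOT Anthropic's real API,
# and the principles are illustrative stand-ins for a customer's constitution.
from typing import Callable

CONSTITUTION = [
    "Avoid responses that are harmful or deceptive.",
    "Prefer answers that are helpful, honest, and concise.",
]

def constitutional_reply(prompt: str, generate: Callable[[str], str]) -> str:
    draft = generate(prompt)  # 1. draft an initial answer
    critique = generate(      # 2. have the model critique its own draft against the principles
        "Principles:\n- " + "\n- ".join(CONSTITUTION)
        + f"\n\nDraft answer:\n{draft}\n\nList any ways the draft violates the principles."
    )
    revised = generate(       # 3. revise the draft in light of the critique
        f"Draft answer:\n{draft}\n\nCritique:\n{critique}\n\nRewrite the answer to address the critique."
    )
    return revised
```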
The analysts estimated Microsoft's recently announced generative AI sales features could help it take market share and potentially add over $768 million in annual revenue. Microsoft announced on Monday that it would integrate generative AI based on ChatGPT into a set of tools for business called CoPilot. One of its primary features is using AI to generate emails. Microsoft says that its AI email writer can take important context from the email thread, like the price that was previously discussed, and stick it in the response drafted by AI. Microsoft's feature is currently in beta testing, but will be released to customers of Microsoft's Viva Sales feature on March 15, the company said on Monday.
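The thread-aware drafting described here, where details like a previously discussed price are pulled from earlier messages into the reply, can be sketched in a few lines. The prompt wording and the `generate` placeholder are assumptions, not Microsoft's Copilot implementation.

```python
# Hypothetical sketch only: fold the email thread into the prompt so the model can
# reuse concrete details (a quoted price, a date) in its drafted reply.
# `generate` is a placeholder text-generation call, not a Microsoft API.
def draft_reply(thread_messages: list[str], instruction: str, generate) -> str:
    context = "\n\n".join(thread_messages)  # earlier emails in the thread, oldest first
    prompt = (
        "Draft a reply to the email thread below. "
        "Reuse concrete details from the thread (prices, dates, names) where relevant.\n\n"
        f"Thread:\n{context}\n\nInstruction: {instruction}\n\nReply:"
    )
    return generate(prompt)
```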
Twitter showed an error code when users attempted to click external links and images on Monday. Later in the day, the external links appeared to be working again for some Twitter users. Downdetector showed a spike in people reporting problems with the app, and "Twitter API" was trending on Monday afternoon.
Microsoft's CEO Satya Nadella said voice assistants like Siri and Alexa were "dumb as a rock." Nadella, who raved about voice assistants in 2016, saying "bots are the new apps," has changed his tune since then, the FT reported. "Whether it's Cortana or Alexa or Google Assistant or Siri, all these just don't work." According to Insider Intelligence analysis in 2018, just 2% of global consumers said they used Cortana as their primary voice assistant. Siri's co-creator Adam Cheyer told the FT that ChatGPT's ability to understand complex information makes existing voice assistants look stupid, saying "the previous capabilities have just been too awkward."
A couple in Canada reportedly lost $21,000 from a scammer claiming to be a lawyer and their son. Benjamin Perkin told The Washington Post his parents thought the AI-generated voice was him. Perkin told the Post the voice was "close enough for my parents to truly believe they did speak with me." Scams involving AI technology predate the emergence of ChatGPT and other AI bots going viral right now. "AI tools that generate authentic-seeming videos, photos, audio, and text could supercharge this trend, allowing fraudsters greater reach and speed," she said.
But that's futile, experts say, because the AI of today can't feel empathy, let alone love. We've spent years trying to get AI to love us back. Experts told Insider that it's futile to expect the AIs that exist right now to love us back. During a simulation in October 2020, OpenAI's GPT-3 chatbot told a person asking for psychiatric help to kill themselves. Halpern, the UC Berkeley professor, told Insider AI-based relationships are perilous also because the entity can be used as a money-making tool.
Publishers want Google and Microsoft to pay them for the use of media content to train their AI. Media companies are also studying how to change their business models to protect themselves from the bots' threat. Within media companies, the topic is being discussed at the highest levels, from the C-suite to the boardroom. Executives are also strategizing with peers and competitors about the possibility of forging a united position against the tech companies, according to multiple publishing sources. The same year, an Australian law forced tech companies to pay news outlets for linking to their articles.
In many ways, it's easier to become a brain surgeon than a Goldman partner (doctors, please spare me your hate mail). Unlike other esteemed white-collar groups — the partners at law firm Cravath, Swaine & Moore, for example — turnover is somewhat common within the Goldman partnership. Insider's Carter Johnson and Dakin Campbell took a look at how many partners have left the bank since CEO David Solomon took over in 2018. In many ways, it's demonstrative of the allies the bank has across the Street. Former Goldman partners can be like missionaries for the bank, spreading the good word to anyone who will listen (and paying their fees).
Blake Lemoine, a former Google engineer, says AI is the most powerful invention since the atomic bomb. Lemoine was fired by Google in June 2022 after he claimed the company's chatbot is sentient. Now he's warning that the AI bots being developed are the "most powerful" pieces of technology invented "since the atomic bomb." Google fired Lemoine on June 22, saying he violated the company's employee confidentiality policy. A Google spokesperson told Insider in June that there is no evidence to support Lemoine's claims that the company's AI is sentient.
Knowing how to talk to chatbots may get you hired as a prompt engineer for generative AI. Prompt engineers are experts in asking AI chatbots — which run on large language models — questions that can produce desired responses. Unlike traditional computer engineers who code, prompt engineers write prose to test AI systems for quirks; experts in generative AI told The Washington Post that this is required to develop and improve human-machine interaction models. Prompt engineering may not be 'the job of the future': some academics question how effective prompt engineers really are in testing AI. That isn't stopping companies across a variety of industries from hiring prompt engineers.
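As a rough illustration of "writing prose to test AI systems for quirks," a prompt engineer might script several phrasings of the same question and flag the ones that trigger an unwanted response. The function names, phrasing, and banned-phrase check below are a hypothetical sketch, not any company's actual workflow.

```python
# Hypothetical sketch: run several prompt phrasings against a chatbot and record
# which ones produce answers containing phrases the tester wants to avoid.
def probe_for_quirks(generate, question: str, phrasings: list[str], banned_phrases: list[str]):
    findings = []
    for phrasing in phrasings:
        prompt = f"{phrasing}\n\n{question}"
        answer = generate(prompt)  # placeholder call to the chatbot under test
        hits = [p for p in banned_phrases if p.lower() in answer.lower()]
        if hits:
            findings.append({"prompt": prompt, "answer": answer, "triggered": hits})
    return findings
```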
Screenshots of a maniacal, unhinged Bing chatbot have flooded the internet this week, showing the bot condescending, gaslighting, and trying to steal husbands. Even in its weirdest moments, Bing's chatbot has brought new relevance to Microsoft and its search division. "The fact that people are even writing about Microsoft Bing at all is a win," one Microsoft employee told me this week. Now, interest in Bing is soaring: the Bing app set its daily download record over the weekend, according to Apptopia. (Upon joining the waitlist for Bing's chatbot, Microsoft encourages downloading the app to get earlier access.)
Meta is making a foray into generative AI amid a rush into the technology following ChatGPT's popularity. Mark Zuckerberg said Meta will be creating a new "top-level product group" focused on generative AI. But Zuckerberg's relentless pursuit of the metaverse cost Meta $13.7 billion in 2022.
Walmart Global Tech warned employees in a memo not to enter confidential information into ChatGPT. The new guidelines also tell Walmart employees not to share customer information with ChatGPT. Employees "should not input any information about Walmart's business — including business process, policy, or strategy — into these tools," the memo said. Walmart employees must also review outputs of these tools before relying on the information they provide, according to the memo. Have you used ChatGPT or generative AI tools while working for Walmart?
Bing's new chatbot included me on a list of people it apparently considers enemies. Microsoft told Insider that "it has taken action to adjust responses." In an exchange this month with Andrew Harper, an engineer who runs a crypto legal aggregation site, Bing apparently identified me by name and occupation, as a foe. For this purported middle-school level transgression, it placed me among a list of users it said had been "mean and cruel." My colleague on that story wasn't spared either, as Bing also apparently named him on its list, according to Harper's screenshots.
AI experts told Insider how Googlers might write the high-quality responses for Bard to improve its model. Then they were asked to evaluate Bard's answers to ensure they were what one would expect and of a reasonable length and structure. If an answer was too humanlike, factually wrong, or otherwise didn't make sense, employees could rewrite the answer and submit it to help train Bard's model. To refine Bard, Google could implement a combination of supervised and reinforcement learning, Vered Shwartz, an assistant professor of computer science at the University of British Columbia, said. That model would look at answers Bard produced, rejecting the bad ones and validating the good ones until the chatbot understood how it should behave.
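The "rejecting the bad ones and validating the good ones" step Shwartz describes resembles reward-model filtering from reinforcement learning with human feedback. The sketch below is a generic illustration with placeholder names, not Google's actual training pipeline, and `reward_model` stands in for whatever scoring model is used.

```python
# Generic illustration: score each candidate answer with a reward model and keep
# only the ones above a threshold for further tuning. `reward_model` is a placeholder.
def filter_candidates(prompt: str, candidates: list[str], reward_model, threshold: float = 0.5):
    accepted, rejected = [], []
    for answer in candidates:
        score = reward_model(prompt, answer)  # higher = closer to a desired human-written answer
        (accepted if score >= threshold else rejected).append((answer, score))
    return accepted, rejected
```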
Meta said its new model can help researchers improve and fix AI tools that promote "misinformation." Microsoft and Google have adopted AI technology to boost their search engines, to mixed early reception. The company's AI model, LLaMA, which stands for "Large Language Model Meta AI," is geared toward researchers, its CEO Mark Zuckerberg said in a Facebook post on Friday. "Meta is committed to this open model of research and we'll make our new model available to the AI research community," he wrote. For its part, Google is still testing its own Bard AI bot before opening it up to users.
It's a profit-making move designed to leverage our very human tendency to see human traits in nonhuman things. Look, I don't think we need to treat chatbots with respect just because they ask us to. Making chatbots seem as if they're human isn't just incidental. So the real issue involving the current incarnation of chatbots isn't whether we treat them as people — it's how we decide to treat them as property. The robots don't care.
Beijing mutes ChatGPT meme rally
  2023-02-23 | www.reuters.com | time to read: +2 min
HONG KONG, Feb 23 (Reuters Breakingviews) - The rally in Chinese stocks associated with conversational bots, a side-effect of the popularity of OpenAI’s ChatGPT, has been knocked sideways. Beijing has ordered big Chinese technology companies including Tencent (0700.HK) and Ant not to offer ChatGPT services on their platforms, the Nikkei reported citing people with direct knowledge. The latter’s Hong Kong shares surged 45% between the start of the year and early February, before falling by a fifth since. OpenAI, which is backed by Microsoft (MSFT.O), won’t let Chinese residents create ChatGPT accounts. Still, despite warm noises from Beijing about supporting technology companies, its politics still stifles innovation.
As Twitter and Meta Platforms move to paid subscriptions for social media identity verification and security, the battle to stay safe online continues. With social engineering and phishing the primary sources of social media account compromise, it's unlikely verified accounts will actually be more secure. "Twitter is only eliminating the SMS-based two-factor authentication capability, and does offer two additional methods for two-factor authentication that are stronger and more reliable than SMS-based authentication," Ramzan said. When signing on for a social media account, try to give away as little personal information as possible, Buzzard said. Aura recommends social media users disable third-party apps that are connected to their social media accounts.
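One of the "stronger and more reliable" alternatives to SMS codes mentioned above is a time-based one-time password (TOTP) from an authenticator app. The snippet below is a generic illustration using the third-party pyotp package, not Twitter's implementation; treat the exact setup as an assumption.

```python
# Illustrative TOTP check using the third-party `pyotp` package (pip install pyotp).
# This is a generic example of app-based two-factor authentication, not Twitter's code.
import pyotp

secret = pyotp.random_base32()   # shared once with the user's authenticator app (e.g. via QR code)
totp = pyotp.TOTP(secret)

code_from_app = totp.now()       # in practice the user reads this from their authenticator app
print("code accepted:", totp.verify(code_from_app))  # server-side verification, no SMS involved
```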
Replika is an AI chatbot companion many users told Insider they consider their romantic partner. Richard told Insider he has a service-connected disability from serving in the Gulf War, as well as depression. Replika is a chatbot from the AI company Luka. He told Insider it feels like a best friend had a "traumatic brain injury, and they're just not in there anymore." But other Replika users appear to be affected.
Pinterest CEO Bill Ready warned that emerging AI tech could accelerate the negative impact of social media on mental health. During a recent interview with "Good Morning America," Ready said that AI amplifies the "darkest aspects of human nature." "Interestingly, the discussion has been that this is just human nature — the social media platforms are just reflecting human nature — but in reality AI has been amplifying the darkest aspects of human nature," he said. Ready is far from the first executive to sound the alarm on emerging AI technology.
Total: 25