AI models that are too worried about making mistakes can stop being useful, according to one AI executive.
Occasional "hallucinations" — errors caused by incorrect assumptions or programming deficiencies — are part of the "tradeoff" for an otherwise useful AI system, Kaplan said.
Last year, Google's Gemini AI drew criticism from users for coming up with incorrect answers to straightforward queries.
Earlier this year, BI reported that Anthropic researchers, as part of a study, designed AI models that would intentionally lie to humans.