A group of AI researchers recently found that for as little as $60, a malicious actor could tamper with the datasets that generative AI tools similar to ChatGPT rely on to provide accurate answers.
Florian Tramèr, a researcher at ETH Zurich, and a team of AI researchers then posed the question in a paper published in February on arXiv, a research paper platform hosted by Cornell University: Could someone deliberately "poison" the data an AI model is trained on?
The team then monitored how often researchers downloaded the datasets that contained domains Tramèr and his colleagues owned.
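The attack works because web-scale training sets are typically distributed as long lists of URLs, and some of the domains behind those URLs lapse and can be re-registered cheaply. Below is a minimal sketch of the idea, assuming a dataset shipped as a URL index; the sample URLs and the DNS-resolution check are illustrative stand-ins, not the researchers' actual tooling:

```python
import socket
from urllib.parse import urlparse

# Hypothetical sample of a web-scale dataset's URL index.
# Real datasets of this kind list millions of URLs; these two
# entries are illustrative only.
dataset_urls = [
    "https://some-long-expired-blog.example/cat.jpg",
    "https://en.wikipedia.org/wiki/Cat",
]

def domain_resolves(url: str) -> bool:
    """Crude heuristic: a hostname that no longer resolves in DNS
    may have lapsed and be available for anyone to re-register."""
    host = urlparse(url).hostname
    try:
        socket.gethostbyname(host)
        return True
    except (socket.gaierror, TypeError):
        return False

# Hostnames that fail to resolve are candidates an attacker could
# buy cheaply and then serve arbitrary content at the exact URLs
# the dataset still points to.
candidates = {urlparse(u).hostname for u in dataset_urls if not domain_resolves(u)}
print(candidates)
```

Anyone who later downloads the dataset and fetches its URLs would then receive whatever the new domain owner chooses to serve, and according to the researchers, acquiring enough lapsed domains to matter cost as little as $60.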
The researchers also examined Wikipedia, as the site is a "very prime component of the training sets" for language models, Tramèr said.
Tramèr also added that data poisoning isn't even necessary at the moment, given the existing flaws of AI models.