AI can be a real tattletale 

You can read plenty of concerns about AI’s use in law enforcement, from facial recognition tools misidentifying people, to privacy amid increased surveillance, to the ethics of predictive policing (what’s up, Minority Report fans?).


But many companies are using AI to snitch on more minor infractions. 

That was there before, I swear

Car rental companies have been using and testing AI tools to scan vehicles for damage. Hertz uses UVeye’s AI-powered scanning system to compare a car’s condition between pickup and return.

  • It flags any differences it spots, occasionally finding dents and dings the average human would never detect.
  • For example, a New York couple was charged $195 in damages and fees for a tiny dent that neither they nor the employee who oversaw the return noticed. 

A Hertz spokesperson told The New York Times that the scanners also ensure customers aren’t charged for damages that did not occur during their rental, so there’s that. 

What else?

We previously discussed companies using AI tools to monitor employees’ Slack conversations for bullying, leaking confidential info, and other issues.

The US Treasury Department used AI to analyze data and root out $4B in fraud in FY2024, up six-fold from the previous fiscal year, including $1B in check fraud.

Barcelona-based AI startup Murphy recently emerged from stealth mode. Its platform employs AI agents to collect debts. They can negotiate payment plans and discounts while supposedly maintaining “respectful” communication, per Tech Funding News.

But perhaps the funniest snitchbots belong to the several AI startups that have launched to catch people using other AI tools to cheat.

It’s not AI… 

… tattling, necessarily, but the humans asking it to do so.

In safety testing, Anthropic’s Claude Opus 4 was given access to fictional emails suggesting an engineer was having an affair. When the model was told it might be shut down, it threatened to blackmail that engineer.

During that research, Anthropic also encountered a model that would attempt to contact authorities and the press if it learned it was being used for nefarious purposes.

Yet Anthropic researcher Sam Bowman told Wired that such scenarios are unlikely outside of research, since the AI would have to be specifically prompted to behave this way.
