You can feed info to Google’s new Audio Overview feature and generate a podcast with voices so realistic that listeners could be fooled into thinking they’re hearing real humans, and could come away believing any inaccuracies or hallucinations.
It’s hard to imagine a time of more misinformation than right now, when everything is potentially AI — and we’re already used to it.
Case in point: An image of a scared little girl and her puppy has been floating around in relation to Hurricane Helene. It’s not real, but many people seem indifferent because, as one person wrote, “it is emblematic of the trauma and pain” people are experiencing.
404 Media co-founder Jason Koebler is calling it the “fuck it” era, in which it no longer matters whether something is AI-generated, so long as it matches a vibe.
… by movies and other media that use fictional narratives to portray something about real life. So why does this feel so off? Maybe because of how pervasive the issue has become: Wikipedia, the internet’s favorite source of information, now has a dedicated team that removes poorly written AI-generated content.
Wikipedia does allow AI-assisted writing in articles, but it has to be accurate and cite real sources. In the past, AI has been known to cite sources it completely made up.
That’s a problem when accuracy matters, of course, like when people need reliable info about natural disaster evacuations and shelters, their health, or upcoming elections.
But it’s also gotten so silly that you can’t even Google what a baby peacock looks like without wading through AI-generated images.
We are zooming toward a place where people can’t discern what’s real, and if they stop caring — oh, boy, is it gonna get weird.