When ChatGPT launched in November 2022, its ability to mimic humans felt astonishing — and, frankly, a little scary.
Nearly two years later, both the technology and our relationship with it have progressed rapidly.
Two recent studies show just how much humans already trust chatbots:
One in six adults (17%) turn to chatbots for medical advice at least once a month, according to a survey from KFF, a health policy nonprofit. For adults under 30, that rose to 25%.
Another study found that patients actually preferred answers written by chatbots over those from physicians.
Health care professionals aren’t out of jobs yet, though — most adults (56%) said they aren’t confident they can differentiate between true and false claims from AI chatbots.
DebunkBot — a chatbot designed by researchers — was actually able to talk users out of conspiracy theories they believed in.
The bot reduced participants’ belief in conspiracy theories by 20%, on average.
Other studies have also shown that AI biases can shift users’ opinions.
And people have increasingly been turning to ChatGPT for mental health support and therapy.
This makes sense…
… given that the average cost of psychotherapy in the US is $100-$200 per session. And in 2022, the average wait time for a new-patient doctor’s appointment was 26 days.
If these use cases seem risky, you’d better hold on — AI is gearing up to take over some high-stakes matters:
It’s cracking into aviation, with several companies planning to use AI for air traffic control systems.
AI is helping to diagnose and predict disease, and could one day assist surgeons in the operating room.
Already, we’re letting it drive our cars, do our work, and play with our kids.
But where do Americans seem to draw the line? Politics. Two-thirds say they are not confident in AI-powered election information.