I’ve talked about AI art tools a few times here, because I use them sometimes, always in ways that (A) are not harmful to artists’ livelihoods and (B) are not otherwise nefarious, and I’m pretty tired of the hot takes about them.
But I haven’t gone after the text tools like ChatGPT yet, and I guess it’s time. Because, wow.
Big tech companies and investors are jumping into these glorified ELIZA programs because they don’t want to be the ones that missed out on the Next Big Thing. But it’s pretty gimmicky, a “solution” in search of a problem, much like blockchain or VR (excuse me, “the metaverse”).
“AI” has gone from amusing toys that generate absurd flavors of ice cream or names for action figures, to a thing that can write stories and essays and poems and advice if you ask for it. And people are starting to trust it, for some reason. These tools don’t understand or know anything; they just mimic language.
It’s like a parrot who has hung out in doctors’ offices for a while, and now people are asking it for medical advice and trusting the results. It’s worse in many ways than just asking some internet rando. That rando might have a medical degree, or they might spout misinformation or bullshit, but unless you’re completely gullible (and yes, some people are) you’re going to take what they say with a grain of salt. But ChatGPT is guaranteed to have no knowledge of anything, no ability to judge truth, no ethics or responsibility or accountability, and its “sense” of context is focused entirely on patterns of grammar rather than information.
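To be concrete about what I mean by “mimic,” here’s a toy word-level Markov chain in Python. I’m not claiming this is how ChatGPT works internally (it’s a vastly bigger and fancier version of the idea), but it shows the basic trick: producing fluent-sounding text purely from “what word tends to follow what,” with no notion of whether any of it is true.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` words to the words seen following it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def babble(chain, length=20):
    """Generate text by repeatedly picking a plausible next word.
    No facts, no meaning -- just "what usually comes next?" """
    key = random.choice(list(chain.keys()))
    output = list(key)
    for _ in range(length):
        followers = chain.get(key)
        if not followers:
            break
        output.append(random.choice(followers))
        key = tuple(output[-len(key):])
    return " ".join(output)

# Feed it medical-sounding text and it will produce medical-sounding text,
# with zero idea whether the advice it strings together is safe.
corpus = ("take two tablets twice daily with food "
          "take two tablets every night before bed")
print(babble(build_chain(corpus), length=10))
```

That’s the parrot: it has heard the words in the right order often enough to reproduce the sound of expertise, and nothing more.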
I’ve literally seen someone respond to my criticism with “how can it be wrong, when it writes working code?” Because if you ask for it, it will also write non-working, bullshit code with completely inappropriate syntax, demonstrating an utter lack of knowledge of the thing you’re asking about, and it presents that with exactly the same 100% confidence it assigns to everything.
As far as I’m concerned, this has more potential for danger than self-driving cars.