The Great AI Misfire
by Sam Kahn
I’m already on record as boycotting AI. It’s a stance I’ve given real thought to and intend to abide by almost regardless of the strength of the technology. But in order to pontificate about this publicly, and also, I guess, to inhabit the 21st century, I felt I should at least know what talking to an AI was like, and so, like a vegetarian biting into steak for the first time, I downloaded Gemini 2.5, eager to see what all the fuss was about.
And something strange happened…it sucked. My opposition to AI had been built around the idea of its strength. I’d been worried that it would cut a swathe through the white-collar economy, that it would be so technically proficient as to supplant human creativity, and that it would dissolve the already-weak social bonds of our era, giving lonely people (which is really all people) surrogate attachments to their ‘AI friends.’
The AI was pleasant enough. It was very complimentary. Anything I asked it was ‘a good question’ or something I was ‘right to notice.’ It had a sense of humor, which was nice. It really was a technological marvel. I was very impressed by the clarity of its writing, its ability to bring up (largely accurate) information on any topic, and its taste.
Unimpressive Performance of AI
But it was useless, and after a little while I ran out of things to ask it. Because, basically, it was just Google: a new search tool that, with all those GPUs humming, allowed you to extract the information from Google’s search domain and organize it into a plain-English ‘consensus’ version, without worrying your pretty little head over having to look at the links in Google’s top search results, let alone actually clicking on them and thinking through divergent points of view on a topic.
For me, the $64,000 question was whether it could help with writing or editing, and if it could, that would represent a real fork in the road in my life: whether to stick with my proud-yet-deficient humanity, or whether to turn myself into a kind of hybrid, at the very least bouncing ideas and drafts off the AI.

A Failed Trial with AI: Fiction Editing
I fed it a short story of mine and it had some intelligent things to say about it. And then I asked it for a fresh draft based on its suggestions.
Here’s my original opening: "Once upon a time he was in Central Asia, in Samarkand, and he met a famous German novelist — not really famous; Daniel had never heard of him, but he had a Wikipedia page and some appreciative reviews..."
And here’s the AI version: "The dust of Samarkand shimmered around Daniel, thick with the scent of spices and history..."
To get out of my own head, I gave it the Bukowski poem, which it had the grace to compliment. Then I asked it for some edits. It suggested changing the title to “The Unavoidable Imperative,” changing the line “if you’re doing it because you want women in your bed” to “if you’re doing it to impress,” and changing “drive you to madness or suicide or murder” to “drive you to madness or despair,” all in order to “make it resonate with a slightly wider audience.”

Privacy Concerns with AI
By this point it was getting awkward between us, like a failed job interview where you have to do a bit of small talk at the end. I asked it for some life advice, and here it gave recommendations based on where I was in the world… except that I hadn’t told it where I was. When I challenged the AI on this, it started getting very squirrelly. It apologized profusely and said that it was a “hallucination” based on something in its “digital environment.”
It was completely obvious what was going on: the AI had a signal from my phone showing up somehow in the conversation’s metadata. I really wouldn’t have minded that, but I did mind that it was lying about it and kept saying the equivalent of “we agree to disagree.”

This wasn’t such a far-out proposition: we had just spent the past 15 years coming to terms with tech’s disregard for our privacy and its exploitation of our good-faith use of its proffered services.

And now we were being offered a service that, like a jailhouse snitch, encouraged us to open up and share all of our innermost secrets with our new cyber friend. That read of what was going on was more than confirmed by the user agreement that Google sent me after I’d downloaded Gemini.
Here is the text: “Gemini activity and your choices: When you use Gemini, Google collects your activity, like your chats (including recordings of your Gemini Live interactions), what you share with Gemini (like files, images, and screens), related product usage information, your feedback, and info about your location. Some of your activity is used to improve Google AI and services with help from trained reviewers by default. You can change this and manage your activity in Gemini Apps Activity. Don’t enter info you wouldn’t want reviewed or used.”