Whose privacy?

A few days ago, I got a payment reminder from my bank for my ChatGPT subscription. These renewal notifications have become routine. For nearly three years, since May 2023, the subscription has quietly sat in the background, and I’d stopped thinking about it altogether. And that feeling of “whatever” was precisely the nudge I needed to start thinking about it more carefully.
Your data is a treasure trove, and the conversations we have in those LLM chat interfaces linger somewhere on their servers. It’s a good principle to apply a degree of “zero trust” to anything we type online.
This isn’t about wearing a tinfoil hat, but about being more mindful. Which raised a question: am I using this tool to explore private thoughts, or to refine something meant for public view, like this blog post?
A lot’s changed since 2023. The first open-source model I used was Meta’s Llama, but the field is now crowded with capable alternatives. The basic tasks I rely on an LLM for (checking my grammar, searching the web, rephrasing a sentence) have become commoditised.
Which sparks a question: what’s the premium you’re really paying for?
We all know the old saying: when you use a tool for free, you’re most likely the product. There’s no such thing as truly “free”, after all. So perhaps the real premium isn’t just about better features. It’s about privacy.
But the real push to reconsider came from something else entirely.
Recently, OpenAI signed a contract with the US government. This came after Anthropic was blacklisted for refusing Pentagon demands to remove its safeguards. Part of it was later walked back after a backlash, which is something. But the episode was a stark reminder, and it highlighted a question I can’t avoid asking: whose privacy is truly being protected?
Philosophically, I’ve realised that convenience shouldn’t come at the cost of principle. Privacy isn’t a conditional right. It shouldn’t hinge on your nationality or jurisdiction. If a company pledges to “protect the rights of US citizens”, that’s a red flag for everyone else. Where does that leave me? Or any user outside that jurisdiction?
This is the core of it: I want my private explorations to remain just that—private. Using an LLM as a production tool to polish public content is one thing. Trusting a centralised, commercially-driven company with my unformed thoughts and confidential chats is another.
So yes, I’m voting with my wallet. It’s a small decision, really. Just one subscription. But these choices add up. They quietly shape the digital world we’re all building.
This isn’t about being anti-American or anti-government. It’s about supporting technologies whose alignment is transparently with the individual user, not the state. It’s about wanting my AI assistant to feel like a thoughtful collaborator, not a potential informant.
So, with a quiet but firm conviction, I’m letting it go. My daily needs can be met elsewhere, and my principles feel better served.