Back in May, I wrote an article about AI journaling. The idea (which I had stolen from some YouTuber) was that you write your journal entries as a brain dump—just lists of stuff—into an LLM, and then ask the LLM to do its thing.

. . . ask the LLM to organize those lists: Give me a list of things to do today. Give me a list of blind spots I haven’t been thinking of. Suggest a plan of action for addressing my issues. Tell me if there’s any easy way to solve multiple problems with a single action.

Now, I think it’s very unlikely that an LLM is going to come up with anything genuinely insightful in response to these prompts. But here’s the thing: Your journal isn’t going to either. The value of journaling is that you’re regularly thinking about this stuff, and you’re giving yourself a chance to deal with your stresses in a compartmentalized way that makes them less likely to spill over into areas of your life where they’d be more harmful.

I still think that’s all true, and I still think an LLM might be a useful journaling tool. My main concern had to do with privacy. I didn’t want to provide some corporation’s LLM with all my hopes, dreams, fears, and best ideas, and hope that none of that data would be misused. I mean, bad enough if it were just subsumed into the LLM’s innards and used as a tiny bit of new training data. Much worse if it were used to profile me, so that the AI firm could use my ramblings about my cares as an entryway into selling me crap. (And you know that selling you crap is going to be phase two of LLM deployment. Phase three is going to be convincing you to advocate and vote for the AI firm’s preferred political positions.)

Anyway, I figured it wouldn’t be long before local LLMs (where I’d actually be in control of where the data went) would be good enough to do this stuff, and I was willing to wait.

But I didn’t even have to wait that long! A couple of days ago, I saw an article in Ars Technica describing how Moxie Marlinspike of Signal fame had jumped out ahead with a really practical tool: confer.to. It’s a privacy-first AI tool built so that your conversation with the LLM is end-to-end encrypted, keeping it genuinely private.

I’ve started using it for journaling exactly as I described. Because privacy is inherent to Confer’s design, I can’t actually keep my journal within Confer—all the content is lost when I end the session. So, I’m keeping the journal entries in Obsidian, and then copying each entry into Confer when I’m ready to get its take on what I’ve written.

[Updated 2026-01-20: This turns out not to be true. Conversations in Confer do last through browser restarts. Until I delete the key for that session, I can go back and see everything that was in that session.]

I wanted some sort of graphic for the post, and asked Confer to suggest something. It came up with 5 ideas, including this one, which (bonus) actually illustrates my process:

Anyway, I’ve already written three journal entries that I otherwise wouldn’t have, and gotten some mildly entertaining commentary on them—some of which may rise to the level of useful. We’ll see.

(Asked to comment on a previous draft of this post, Confer.to mentioned the “Give me a list of blind spots I haven’t been thinking of” prompt above, and said, “But LLMs can’t actually know your blind spots — they can only reflect patterns in what you’ve said.” Which I know. And so, of course, once I started using an actual AI tool instead of just an imagined one, that ended up not being something I asked for.)

If I keep doing this (and I think I will), I’ll follow up with more stories from the AI-enhanced journaling trenches.

I am (just barely) old enough to remember the Black Panthers in the 1960s, when a group of black people tried to carry legal firearms to protect themselves, before they were mostly murdered by the police, the FBI, and one another.

I also remember the 1980s, when the NRA was trying to convince all marginalized groups (blacks, women, lesbians, gays, socialists, etc.) that arming themselves was a great idea. The NRA was sincere, I think—they just wanted more people to have guns.

Most people, especially black people, were well aware of the fact that walking around armed would make it much more likely that they’d be killed by the police. (They remembered what happened to the Black Panthers, presumably better than I did.)

Over the last couple of years, and especially over the last few days, I think perspectives are changing. First, a lot of white people are walking around armed, and even killing people, with minimal consequences. Second, the increasingly fascist police have been killing unarmed people at increasing rates, and it looks like they’ll not only get away with it, but that they’re glorying in it.

There are definitely some black people thinking once again that being armed is a good idea. I hope they’re not horribly wrong about that.

This article, which had a really annoying headline, turns out to contain some great thinking.

In particular, the political perspective it describes has more than a little overlap with the stuff I was writing about in my articles at Wise Bread.

An economic vision that … encompasses antimonopoly policies, right to repair and regulatory changes to smooth the path for people to start businesses, buy and work land, even build their own houses and invent things.

Source: NYT

Steven suggested that I should revisit my Wise Bread posts. There’s a lot of useful stuff there, though it had seemed a bit less relevant over the last few years. (I started writing in June of 2007, right at the start of the Great Financial Crisis, and carried on for 10 years.) But with the government having gone all-in on fascism, racism, and gangsterism this year, a lot of those themes are feeling much more on point than they had for a while.

So I think I’ll do that. A lot of my Wise Bread posts still feel just right. On a few, my perspective has changed a bit. I’ll write some new posts to talk about what’s changed.

Stay tuned.

A bunch of people in the AI industry have blithely suggested that it will be fine if huge numbers of people lose their jobs to AI, because we’ll create some sort of universal basic income (UBI) to support them.

I think that’s a great idea, and think we should put their money where their mouths are. Starting immediately, any firm with any significant business producing AI should be taxed 50% of its gross revenue, and all that money should be divided up equally among all Americans as a UBI. (Firms with a “significant” AI business that are also in other businesses might want to spin off the AI part of the business, so that they don’t have to pay this special tax on the non-AI parts of their gross income.)

This wouldn’t immediately produce a big enough UBI to make it unnecessary for someone to work, but I figure that by the time it became impossible for an ordinary person to find a job because they’d all been displaced by AI, the AI industry would be making enough money for half its revenues to fund an adequate UBI.
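For a sense of scale, here’s a back-of-envelope sketch in Python. The industry revenue figure is a number I made up purely for illustration, and the population figure is only roughly right:

```python
# Back-of-envelope math for the 50%-of-gross-revenue UBI scheme.
# Both input figures are illustrative assumptions, not real data.

US_POPULATION = 335_000_000   # roughly the current US population
TAX_RATE = 0.50               # the proposed 50% of gross revenue

ai_gross_revenue = 60e9       # hypothetical: $60 billion/year industry-wide

annual_ubi = TAX_RATE * ai_gross_revenue / US_POPULATION
print(f"UBI now: about ${annual_ubi:,.0f} per person per year")  # ~$90

# How big would the industry need to get to fund even a modest
# $12,000/year UBI? Divide the total payout by the 50% tax rate.
target_ubi = 12_000
required_revenue = target_ubi * US_POPULATION / TAX_RATE
print(f"Required gross revenue: ${required_revenue / 1e12:.1f} trillion/year")
```

Ninety bucks a year is obviously no UBI, but an eight-trillion-dollar-a-year AI industry is exactly the scenario where everyone would need one.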

In a similar vein, every firm paying for AI rather than for workers should have to pay a tax on all that AI spending equal to what it would pay in withholding taxes if it were paying that money to an employee.

I don’t think this would produce nearly enough to fund a UBI, but I think it might be enough to go a long way toward shoring up Social Security and Medicare.
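A minimal sketch of how that might be computed, assuming a flat effective income-tax withholding rate and ignoring real-world details like the Social Security wage-base cap (both rates below are simplifying assumptions):

```python
# Tax on AI spending as if it were wages, per the proposal above.
# Rates are simplified assumptions; actual withholding varies per employee.

FICA_RATE = 0.0765            # employee-side Social Security + Medicare
INCOME_WITHHOLDING = 0.15     # assumed flat effective income-tax withholding

def ai_spending_tax(ai_spending: float) -> float:
    """Tax owed on AI spending, treated as if it were an employee's wages."""
    return ai_spending * (FICA_RATE + INCOME_WITHHOLDING)

# Example: a firm that spends $10 million/year on AI instead of payroll.
print(f"${ai_spending_tax(10_000_000):,.0f} per year")  # $2,265,000
```

The FICA portion alone is the piece that would flow toward Social Security and Medicare, which is why this one shores up those programs rather than funding a UBI.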