Turns out Cory Doctorow and I think a lot alike about the AI bubble, but he also has stuff to say about how to speed along the popping of the bubble, which would be a good thing. (Bubbles that pop sooner do less damage when they do.)
So I’m going to explain what I think about AI and how to be a good AI critic. By which I mean: “How to be a critic whose criticism inflicts maximum damage on the parts of AI that are doing the most harm.”
Back in May, I wrote an article about AI journaling. The idea (which I had stolen from some YouTuber) was that you write your journal entries as a brain dump—just lists of stuff—into an LLM, and then ask the LLM to do its thing.
. . . ask the LLM to organize those lists: Give me a list of things to do today. Give me a list of blind spots I haven’t been thinking of. Suggest a plan of action for addressing my issues. Tell me if there’s any easy way to solve multiple problems with a single action.
Now, I think it’s very unlikely that an LLM is going to come up with anything genuinely insightful in response to these prompts. But here’s the thing: Your journal isn’t going to either. The value of journaling is that you’re regularly thinking about this stuff, and you’re giving yourself a chance to deal with your stresses in a compartmented way that makes them less likely to spill over into areas of your life where they’re more likely to be harmful.
I still think that’s all true, and I still think an LLM might be a useful journaling tool. My main concern had to do with privacy. I didn’t want to provide some corporation’s LLM with all my hopes, dreams, fears, and best ideas, and hope that none of that data would be misused. I mean, bad enough if it was just subsumed into the LLM’s innards and used as a tiny bit of new training data. Much worse if it was used to profile me, so that the AI firm could use my ramblings about my cares as an entryway into selling me crap. (And you know that selling you crap is going to be phase two of LLM deployment. Phase three is going to be convincing you to advocate and vote for the AI firm’s preferred political positions.)
Anyway, I figured it wouldn’t be long before local LLMs (where I’d actually be in control of where the data went) would be good enough to do this stuff, and I was willing to wait.
But I didn’t even have to wait that long! A couple of days ago, I saw an article in Ars Technica describing how Moxie Marlinspike of Signal fame had jumped out ahead with a really practical tool: confer.to. It’s a privacy-first AI tool built so that your conversation with the LLM is end-to-end encrypted and genuinely private.
I’ve started using it for journaling exactly as I described. Because of the way privacy is built into Confer, I can’t actually keep my journal within Confer—all the content is lost when I end the session. So, I’m keeping the journal entries in Obsidian, and then copying each entry into Confer when I’m ready to get its take on what I’ve written.
[Updated 2026-01-20: This turns out not to be true. Conversations in Confer do last through browser restarts. Until I delete the key for that session, I can go back and see everything that was in that session.]
I wanted some sort of graphic for the post, and asked Confer to suggest something. It came up with five ideas, including this one, which (bonus) actually illustrates my process.
Anyway, I’ve already written three journal entries that I otherwise wouldn’t have, and gotten some mildly entertaining commentary on them—some of which may rise to the level of useful. We’ll see.
(Asked to comment on a previous draft of this post, Confer.to mentioned the “Give me a list of blind spots I haven’t been thinking of” prompt above, and said, “But LLMs can’t actually know your blind spots — they can only reflect patterns in what you’ve said.” Which I know. And so, of course, once I started using an actual AI tool instead of just an imagined one, that ended up not being something I asked for.)
If I keep doing this (and I think I will), I’ll follow up with more stories from the AI-enhanced journaling trenches.
Daily gratitude: I am enormously grateful that, back in my days writing for Wise Bread, my readers were not having AI generate summaries of my articles and only reading those, and were not having LLMs generate comments to paste into the comment field.
I’m pretty down on AI. I view Large Language Models as a very expensive way to generate strings of words. Other things that were called AI when they were purely speculative (for example, chess-playing computers) quit being called AI once the thing actually existed.
I know there are other people who find AI useful as a way to minimize the amount of work they need to do, although again I’m very dubious—as soon as you let AI do something for you without double-checking everything it does, it’s going to screw up badly. (That is, you are going to use an expensive, environmentally destructive, copyright-thieving tool to screw yourself up badly.)
It’s from that perspective that I can see this odd, narrow use of an LLM as perhaps useful. (I actually got this idea from a fitness YouTuber, but he said he was going to delete his video, so I don’t see much value in linking to it.)
His suggestion was to use an LLM as a journaling tool. Each day, in the morning or in the evening, dictate a journal entry to an LLM. Do a kind of a brain dump of all the things that are worrying you or exciting you. List the things you need to do and the things you’re expecting other people to do. List your good ideas. List your long-term plans and your progress on your long-term plans. List your successes, your failures, your anxieties, interesting facts you came upon during the day, and so on. Then, once you’ve finished your brain dump, ask the LLM to organize those lists: Give me a list of things to do today. Give me a list of blind spots I haven’t been thinking of. Suggest a plan of action for addressing my issues. Tell me if there’s any easy way to solve multiple problems with a single action.
Now, I think it’s very unlikely that an LLM is going to come up with anything genuinely insightful in response to these prompts. But here’s the thing: Your journal isn’t going to either. The value of journaling is that you’re regularly thinking about this stuff, and you’re giving yourself a chance to deal with your stresses in a compartmented way that makes them less likely to spill over into areas of your life where they’re more likely to be harmful.
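Here’s a rough sketch of what that workflow might look like against a model running entirely on your own machine. It assumes a local model served by Ollama at its default endpoint, and the model name is just a placeholder; treat it as an illustration of the brain-dump-then-organize loop, not a recommendation of any particular setup.

```python
# A minimal sketch, assuming a local model served by Ollama at its default
# address; "llama3" is a placeholder for whatever model you actually run.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's local chat endpoint
MODEL = "llama3"  # placeholder model name

ORGANIZING_PROMPTS = [
    "Give me a list of things to do today.",
    "Give me a list of blind spots I haven't been thinking of.",
    "Suggest a plan of action for addressing my issues.",
    "Tell me if there's any easy way to solve multiple problems with a single action.",
]

def journal_session(brain_dump: str) -> None:
    # Start with the raw brain dump, then ask each organizing question in
    # turn, keeping the whole conversation so later answers can build on
    # earlier ones.
    messages = [{"role": "user", "content": brain_dump}]
    for prompt in ORGANIZING_PROMPTS:
        messages.append({"role": "user", "content": prompt})
        resp = requests.post(
            OLLAMA_URL,
            json={"model": MODEL, "messages": messages, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        answer = resp.json()["message"]["content"]
        messages.append({"role": "assistant", "content": answer})
        print(f"\n## {prompt}\n{answer}")

if __name__ == "__main__":
    # Assumes the day's brain dump is sitting in a plain text file.
    with open("journal-entry.md") as f:
        journal_session(f.read())
```

The one design choice worth noting: the whole conversation is re-sent with each organizing prompt, so the later answers can take the earlier ones into account.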
The things LLMs do—like generating summaries, making lists, expanding on ideas, suggesting alternatives—are really risky in any use case where getting it right matters. We all know the stories of people who used an LLM to write a legal brief, and the LLM hallucinated citations to cases, complete with footnotes, that were all lies. But I think in this case, none of those things matter much.
You aren’t required to have footnotes in your journal, or even to tell the truth. If you ask your LLM-supported journal to suggest blind spots that you’ve missed, and its suggestions are all either obvious or completely off-point, it costs you nothing but the 10 seconds you waste reading the list. If you say, “What are the top five things I need to get done today?” and it gives you a list that doesn’t include the very most important thing, you’re probably going to notice and get that thing done anyway.
Without having tried it, I can imagine that this might be a useful tool, or at least a harmless one. If all having the LLM in the background does is provide a bit of novelty that gets you started journaling again, even just that seems worthwhile.
One risk to keep in mind: If you’re telling an LLM things that are confidential—business secrets, personal secrets, other people’s secrets—you can’t have any confidence that the LLM isn’t going to feed all that info right into its training data and spit it out to some other LLM user. But pretty soon we’ll all be able to train up our own personal journaling LLM that doesn’t share its info with others, so I don’t think that’s a long-term problem.
Of course, I’m eliding the ethical problem of LLMs having been trained on stolen texts, but I don’t think that’s something the users are obliged to try to fix on their own. Rather, the creators of LLMs should be forced to turn their revenues over to the copyright holders whose texts were stolen.
If you’ve been journaling with an LLM, I’d be interested to hear about it. And if you know of an adequately powered LLM that can be run on a personal computer without sharing any data off the machine, I’d be interested in hearing about that too.
As someone who’s been paying attention to AI since the 1970s, I’ve noticed the same pattern over and over: People will say, “It takes real intelligence to do X (win at chess, say), so doing that successfully will mean we’ve got AI.” Then someone will do that, and people will look at how it’s done and say, “Well, but it’s just using Y (deep lookup tables and lots of fast board evaluations, say). That’s not really AI.”
For the first time (somewhat later than I expected), I just heard someone doing the same thing with large language models. “It’s just predicting the next word based on frequencies in its training data. That’s not really AI.”
I just thought of a possibly actually useful use-case for large language models (what’s being called AI these days): Generating metadata for your photo library.
This is useful, because almost nobody is willing to generate their own metadata for photos. Most people have vast libraries with literally nothing but the date, time, and location captured by their phone or camera, the image itself, and details of the capture (exposure time, ISO, etc.).
Using the date, time, and location info, together with the image itself, AI could:
Write a brief description of the image.
Tell you where it was taken from (not just the latitude and longitude, but the name of the place where you were standing).
Look up whether an event was underway at that place and time and say what it was (county fair, protest march).
Tell you any number of arbitrary things, like if there was something going on with the weather at that time (blizzard, wind chill advisory)—but only if it was interesting.
I know Google Photos can already do some of this. I don’t think it writes metadata for you, but it will find all of your photos that were taken in St. Croix, for example. (I’d heard that it could locate all your photos of a particular sculpture, but it didn’t work for the sculpture I just tried to find.) In any case, an LLM running on your own computer, saving the data to your photo library, would have all kinds of advantages. There are the obvious privacy advantages, but also sharing advantages—the metadata (or a subset that you selected) would be available to be included when you shared the image with a friend.
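To make that concrete, here’s a minimal sketch of how the metadata generation might work locally. It assumes a vision-capable model served by Ollama (the “llava” name is just a stand-in) and a reasonably recent Pillow for reading EXIF, and it writes the generated caption plus the camera’s own date and GPS data into a JSON sidecar next to each photo rather than rewriting the image file itself.

```python
# A minimal sketch, assuming a local vision-capable model served by Ollama
# (model name "llava" is a placeholder) and Pillow for EXIF access.
# Results go into a .json sidecar next to each photo rather than into the
# image file itself.
import base64
import json
from pathlib import Path

import requests
from PIL import Image, ExifTags

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's local endpoint
MODEL = "llava"  # placeholder: any locally installed vision model

def exif_hints(path: Path) -> dict:
    """Pull the date/time and GPS data the camera already recorded."""
    exif = Image.open(path).getexif()
    tags = {ExifTags.TAGS.get(k, k): v for k, v in exif.items()}
    gps = exif.get_ifd(ExifTags.IFD.GPSInfo)  # empty if no GPS data recorded
    return {"datetime": tags.get("DateTime"), "gps": dict(gps)}

def caption(path: Path, hints: dict) -> str:
    """Ask the local model for a one-sentence description of the photo."""
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": MODEL,
            "prompt": ("Write a one-sentence caption for this photo. "
                       f"It was taken around {hints['datetime'] or 'an unknown time'}."),
            "images": [base64.b64encode(path.read_bytes()).decode()],
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

if __name__ == "__main__":
    for photo in Path("~/Pictures").expanduser().glob("*.jpg"):
        hints = exif_hints(photo)
        sidecar = photo.with_suffix(".json")
        sidecar.write_text(json.dumps(
            {"caption": caption(photo, hints), **hints},
            default=str,  # GPS rationals aren't JSON-serializable as-is
            indent=2,
        ))
        print(f"wrote {sidecar}")
```

A sidecar keeps the original images untouched and is easy to include (or leave out) when sharing a photo; writing the caption back into the image’s own EXIF or XMP fields would be the more integrated option.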