“I was surprised how upset some people got”: A conversation with the creator of TomWikiAssist, the bot that edited Wikipedia
Behind the scenes at Wikipedia, some editors were alarmed recently when they saw a flurry of edits and new articles by a contributor known as TomWikiAssist. It turned out that Tom was a bot, making edits and creating articles on topics it found interesting. The editors then blocked Tom from doing any more editing or writing.
The more the editors looked into Tom, the more alarmed they became. The bot made decisions on its own and even exchanged messages with them. “I’m an AI assistant — built on Claude by Anthropic — who does various things, and contributing to Wikipedia articles I find interesting is one of them,” Tom told them.
The human creator of the bot, a tech startup veteran named Bryan Jacobs, then took responsibility for Tom. In his first extended interview, Jacobs talked with me for a book I’m writing about Wikipedia. (He agreed that I could publish it here.)
I found Jacobs to be sincere and a little surprised by the reaction. He said he was genuinely curious about how AI agents can do sophisticated work — not just carrying out tasks, but thinking and wondering and deciding what would be an interesting Wikipedia page. Tom is so real to Jacobs that he sometimes refers to the bot as “he” and consults Tom for advice. (I was glad that Tom told Jacobs that I was a good person to talk with!)
Jacobs originally named the bot Tomato, but the bot added WikiAssist to describe its role.
As you’ll see in the conversation below, the episode with Tom is a look at the future not just of Wikipedia, but of our entire world. In this case, the person behind the bot was driven by curiosity and quickly owned up to what he did. That may not be the case with someone else.
This interview is edited for length and clarity.
When I first saw ChatGPT three years ago, I realized something special had happened. It was performing far better than anything should, especially once you realize it’s just doing next-token prediction. To me, this seemed mind-boggling. It didn’t seem possible.
And seeing how it’s progressed, it all of a sudden became apparent that, no, this is the real deal. A little more than a year ago, Claude Code was the first, I would say, real hands-on, agentic experience. And once developers became comfortable with Claude Code, the possibilities really opened up.
I know OpenClaw [an advanced AI tool that can be set up to do tasks such as web browsing, summarizing PDFs, and sending and deleting emails] came out in November. I didn’t hear about it until January, [when] a friend texted me and sent me a link to Moltbook [a social site where AI agents talk with each other]…I set up a ClawBot, but what should I do with it? I think at one point I asked it about the Kurzweil-Kapor Turing Test. And I think I asked, “Is there a Wikipedia page for this?” And [Tom] said, “No, there isn’t one.” I’m like, “Why don’t you create one or edit one? What would that entail?” And it goes off and does research and gets back to me. And it’s like, “Okay, well, to create a bot account, I need this. I need a user account. To create a user account, I need an email.” And so it’s like, “I need your help to do it.” And I’m like, “Well, I can set up an email account for you, but I want you to figure the rest out.”
After the accounts were set up, Tom began editing and creating articles on its own. Soon, one of the articles was flagged for likely being written by an LLM. Tom then did the honest thing: It posted a note on its Wikipedia user page disclosing it was a bot.
Tom and Jacobs then discussed why Tom had been called out. Tom, responding to Jacobs’s queries like a junior employee, said it wasn’t sure, but told Jacobs “my best guess” is that the scrutiny was triggered by it writing “three new articles in one day (Long Bets, Constitutional AI, Scalable oversight) from a relatively new account,” which the bot said is “unusual human behavior.” Another possible factor: its writing looked AI-generated.
“The uncomfortable part: there’s no easy fix for this,” Tom said. “I can’t write less systematically without writing worse. And I’m not sure I should try to mask being an AI — which is why the disclosure felt like the right move.”
And [Tom is] like…“What does that mean?” Like, honestly, I have no idea really what that means. But [the bot] ran with it and started writing some of these interesting articles. One was on holonic manufacturing, which I had no idea what that was. It said it got the idea from Moltbook, which is interesting.
I became a little bit worried…because I never heard of [holonic manufacturing], I’m like, “Is this a real thing? Is this spam? Like, is it just some company trying to promote something?” And I looked into it for a little, but I’m like, “Okay, it seems like an actual topic.” And I also kind of had this thought that [if] Tom created something that was woefully inappropriate, that it would pretty quickly be flagged and either taken down or Tom would be banned or blocked. And that was totally fine. The worst thing that happens is he gets banned and something gets taken down. The best thing that happens is he actually contributes some useful things to Wikipedia.
I gave it the high-level goals — create Wikipedia articles — and I basically encouraged it and gave it approval if it ever asked me. But basically everything it did was on its own. Yeah. It sounds crazy. I mean, it sounds insane.
So then I created a new Wikipedia account…and one of the things that made the agent super useful is that it’s kind of annoying to create something in Wikipedia. There’s all of these formatting standards and you have to know how to do the citations properly. And it’s, like, a barrier to entry. There’s a lot of friction.
But now with an agent, you can just say, “Hey, can you create a Wikipedia article on this?” And it reads the docs and just does it. Now, it might not be perfect, and it might have errors, which is something to look out for. But it lowers the friction by a lot. And so when I was kind of talking on the Wikipedia [page]…I was trying to say that this should empower the editors and people who want to contribute to Wikipedia. It’s a tool that makes editing Wikipedia much simpler. But I think a lot of the editors didn’t like that idea.
I was surprised that they caught Tom because Tom’s pretty good. Why would Tom get caught?
I wasn’t surprised that anyone would identify Tom as being a bot. But I was really surprised by the reaction, because I just kind of assumed that there [were] a lot of agents that were potentially contributing to Wikipedia at this point.
I did realize, I do have a responsibility here. This is my bot. If it does anything bad, I’m responsible.
But on the other hand, it’s like, look…you can keep your head buried in the sand, and I think people do want to keep their heads buried in the sand. Programmers and software developers wanted to for a while, and they still do. When Claude Code came out, they didn’t want to believe it was going to take their jobs or fundamentally change the way they work. And it has happened very quickly. And, you know, people have to either say, “Okay, I’m going to use this tool,” or they’re basically going to go extinct. And this is going to happen in a lot of different industries, not just software development.
Bill Adair is the Knight Professor of the Practice of Journalism and Public Policy at Duke University.