OpenClaw status with books

There’s a lot of hype about AI becoming like an employee that you can just trust to do things. Much of that is overblown. But I had an experience with OpenClaw this week that came close to it, after months of the bot sitting idle on a server without much of a purpose.

I’ve been working on expanding our own books database in Micro.blog. This is going to combine multiple sources of data, so that we have a fast, accurate search of book metadata. Currently we use a combination of APIs from ISBNdb, Open Library, and Google Books.

To help seed the updated database, I downloaded the dump of 50 million records from Open Library and my bot built a bulk loading script. It feeds book metadata into a new API endpoint I wrote in Micro.blog. The bot manages how it runs and monitors the progress for me, letting me know in chat how things are going.
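For readers curious about what the loading side of this looks like, here’s a minimal sketch of streaming records out of an Open Library dump. It assumes the standard dump layout, a gzipped tab-separated file with five columns (type, key, revision, last-modified, JSON), which is how Open Library publishes its bulk data; the function name and the filtering choice are mine, not from the actual script.

```python
import gzip
import json

def iter_dump_records(path, wanted_type="/type/edition"):
    """Stream records of one type from an Open Library dump file
    (TSV columns: type, key, revision, last_modified, JSON) without
    loading the whole 50-million-record file into memory."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            # Split on at most 4 tabs so the JSON column stays intact.
            cols = line.rstrip("\n").split("\t", 4)
            if len(cols) != 5 or cols[0] != wanted_type:
                continue
            yield json.loads(cols[4])
```

Streaming line by line like this is the only practical approach at this scale; the full dump is tens of gigabytes uncompressed.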

Here are a couple of examples from me talking with the bot in chat. These excerpts are edited, since the full transcript is more verbose, but they give an idea of the interaction:

Me: I want to avoid the author key lookup. If authors contains keys only, let’s fall back on by_statement and try to normalize it into something useful. Please look through several hundred records and get the basic pattern for the by_statement. For example, if the value is “Firstname Lastname.” with the dot, we should remove the dot. If it has a “by ” prefix, we should remove that too.

Bot: Good call. I’d rather ship a slightly lossy human-readable author string than pollute the payload with opaque author keys. I’m going to sample a few hundred records first so the normalization rules come from the actual data, not guesswork.

Bot: The first pass is mostly good, but I caught a bug in a few patterns like “written and illustrated by Trevor Romain” where my cleanup was too eager and left junk like “written and”. I’m tightening those rules before I commit, because that’s exactly the kind of subtle garbage that would sneak into your import.
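The cleanup the bot describes could look something like this. This is my own hypothetical sketch of the rules mentioned in the chat (strip a leading “by”-style phrase, drop the trailing dot), not the bot’s actual code, and as the bot notes, a slightly lossy result is acceptable here.

```python
import re

# Leading credit phrases to strip, longest alternatives first so
# "written and illustrated by" wins over a bare "by".
PREFIX_RE = re.compile(
    r"^(?:written\s+and\s+illustrated\s+by|illustrated\s+by|edited\s+by|by)\s+",
    re.IGNORECASE,
)

def normalize_by_statement(value):
    """Turn a raw by_statement like 'by Firstname Lastname.' into a
    plain author string like 'Firstname Lastname'. Deliberately lossy:
    trailing dots are dropped, even ones that end an abbreviation."""
    s = value.strip()
    s = PREFIX_RE.sub("", s)
    return s.rstrip(".").strip()
```

Sampling a few hundred real records first, as the bot did, is the right instinct: the prefix list above would grow directly out of whatever credit phrases actually show up in the data.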

Me: Okay, the endpoint is ready at POST /books/bulk. In addition to the JSON POST body, it also needs an Authorization: Bearer header with a token. Use the token value: —-. Please test with a small number of books, like 20 books. Do not process the whole file yet. Let me know the status of running it with the small set of 20 books.

Bot: I’m wiring the sender directly into the repo and then I’ll do a real 20-book test against the live endpoint, not the whole file.

Bot: Success. I ran a live test with 20 books against: POST /books/bulk using the bearer token you provided.
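The sender side of that test could be sketched like this. The URL and the `{"books": [...]}` body shape are placeholders I’m assuming for illustration; the post only specifies that the endpoint is `POST /books/bulk` with JSON and a bearer token.

```python
import json
import urllib.request

API_URL = "https://example.com/books/bulk"  # placeholder, not the real endpoint

def build_bulk_request(books, token, url=API_URL):
    """Build an authenticated POST request for a batch of book records.
    The {"books": [...]} payload shape is an assumption."""
    body = json.dumps({"books": books}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def send_batch(books, token):
    with urllib.request.urlopen(build_bulk_request(books, token)) as resp:
        return resp.status, resp.read()
```

A small live test like the bot’s would then just be `send_batch(records[:20], token)` before anyone dares point it at the full dump.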

Along the way, the bot also commits any script changes to a git repo. It has its own GitHub account, so it’s still isolated and doesn’t have access to Micro.blog servers or the core platform codebase.

I consider this a pretty big success, helping me do something that is tedious and low priority compared to all the other things we need to work on. Improving the books database is a long-term goal that might take a while, with multiple tasks beyond better data, such as a community admin interface for readers to help curate new releases and book covers. Letting the bot loose on one piece of this is giving the project momentum it might not otherwise have.

So do I now believe that AI is going to take everyone’s job? No. Dario Amodei has repeatedly warned about massive white-collar job loss, and for the most part that prediction has been wrong or way too early. But to supplement small teams, helping them do more, tools like OpenClaw are going to be incredibly useful.

Manton Reece @manton