Bias in centralized AI

In the Grok fallout, Mike Masnick makes the case for moving away from large, centralized AI. He asks a range of great questions about bias in AI:

After a similar incident two months or so ago where Grok became obsessed with linking everything to white genocide, the company started publishing its system prompts to GitHub. So, at the very least, we can see the progression on the system prompt side. This transparency, while laudable, reveals something deeply troubling about how centralized AI systems operate—and how easily they can be manipulated.

Nick Heer has a good response. I’m inclined to agree with Nick that we may not want to get rid of centralized AI, but that there should be oversight:

What that probably means is some kind of oversight, akin to what we have for other areas where we have little control. This is how we have some trust in the water we drink, the air we breathe, the medicine we take, and the planes we fly in. Consumer protection laws give us something to stand on when we are taken advantage of.

One of the unique challenges with AI (compared to, say, social networks) is that it’s more difficult to run AI at the quality and scale of the big players. We can run local models, but they will be worse because of hardware constraints.

We are now firmly in a world where trust is everything. One answer to abundant content, mostly slop, is to seek out sources by real people who have built a reputation over years. People like Mike and Nick. The same will be true for AI. I mostly trust OpenAI and Anthropic. I think they have good intentions and good teams. I don’t feel the same way about any random model I might take off the shelf at Hugging Face.
