From MacStories:
Apple announced today that it is expanding its manufacturing operations in Houston, Texas, where it will make Mac minis. The company also said it will expand its AI server production and training in Houston later this year.
Sounds good to me. It's also perfect timing, with many OpenClaw enthusiasts buying Mac minis to run personal agents.
There’s a great discussion on SharpTech about thin vs. thick clients in the age of AI. Just a year ago, it seemed reasonable to expect that AI would move from the cloud to devices: our phones would get more RAM, making good, private AI feasible. That now seems quite far away, as cloud-based AI has gotten significantly better and demands loads of RAM and the best GPUs.
I mention this in the context of Mac mini production because Apple’s plan (before partnering with Google) was to have private cloud compute powered by Apple’s own chips. Back to Apple’s press release today:
For more than two decades, users around the world have relied on the incredibly popular Mac mini for the tremendous power it packs into its ultra-compact design. With its next-level AI capabilities, it has become an essential tool for everyone from students and aspiring creatives to small business owners.
I think “next-level AI” is objectively false. The Mac mini currently tops out at 64 GB of RAM. That is not enough to comfortably run OpenAI’s gpt-oss-120b, a relatively old model. But the Mac Studio can have a whopping 512 GB of RAM! I wonder what specs Apple will choose for their AI servers. Maybe somewhere in the middle.
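A quick back-of-envelope calculation shows why 64 GB is tight. The figures below are approximate (gpt-oss-120b has roughly 117B total parameters, and OpenAI ships it quantized to MXFP4, around 4.25 bits per parameter for the expert weights), but they make the point: the weights alone nearly fill a 64 GB machine, before counting the KV cache, activations, and the OS itself.

```python
# Back-of-envelope memory estimate for gpt-oss-120b.
# Figures are approximate: ~117B total parameters, MXFP4 quantization
# (~4.25 bits per parameter) for the shipped checkpoint.

def weight_memory_gib(params_billions: float, bits_per_param: float) -> float:
    """Memory for the model weights alone, in GiB.

    Ignores KV cache, activations, and OS overhead, all of which
    add on top of this figure.
    """
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / 2**30

mxfp4 = weight_memory_gib(117, 4.25)  # quantized release
bf16 = weight_memory_gib(117, 16)     # unquantized, for comparison

print(f"MXFP4 weights: ~{mxfp4:.0f} GiB")
print(f"BF16 weights:  ~{bf16:.0f} GiB")
```

The quantized weights come out to roughly 58 GiB, which is why OpenAI targets a single 80 GB GPU for this model, and why a 64 GB Mac mini (which also has to fit macOS and the KV cache in that same unified memory) isn't a practical host for it.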