Why I built a personal AI that never touches the cloud

Andy Massey · 18 April 2026 · Hong Kong

I have 3,800 LinkedIn connections. I could not tell you when I last messaged most of them, what we talked about, or who introduced us. That information exists – scattered across LinkedIn, WhatsApp, email, and my calendar – but it is not connected. Not searchable. Not useful.

Last August, I started building something to fix that. Eight months later, it runs on a Mac Mini on my desk and knows more about my relationships than any tech company.

This is the story of how it got here, and why it matters that it runs locally.

The cloud problem

Every "personal AI" product I evaluated in 2025 had the same pitch: give us your data and we will give you intelligence. Upload your contacts. Connect your email. Let us read your messages.

I could not do it. Not because I am paranoid, but because I thought about what "your data" actually means in this context. It means:

Every conversation I have had with my partner. Every medical appointment. Every message to a friend going through a rough time. Every late-night search. Every half-formed thought I typed into a note and deleted. Every relationship, mapped and scored and quantified.

You cannot un-share your soul. Once personal data is on someone else's servers, the only thing between you and a breach, a subpoena, or a terms-of-service change is a privacy policy written by their lawyer.

So I asked: what if the AI ran on my hardware? What if nothing ever left my house?

Apple Silicon changed the equation

The reason this did not work five years ago is that local AI inference was impractical. Running a large language model on a consumer computer was either impossibly slow or required a gaming PC with a powerful GPU.

Apple Silicon changed that. A $699 Mac Mini M4 with 24GB RAM runs a 9-billion-parameter language model at 30 tokens per second. That is fast enough for real-time conversation, document analysis, and knowledge extraction. All local. No cloud API calls. No usage bills.

Combine that with GDPR's right to data portability – the rule that obliges companies serving European users to hand over your data in a machine-readable format – and you have everything you need. The hardware to process locally. The legal right to your own data. The open-source models to make it intelligent.
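To make the GDPR-export step concrete, here is a minimal sketch of turning one such export into records for a local knowledge graph. The column names mirror the layout of a LinkedIn connections CSV, but the exact file format varies by platform and over time, so treat the schema here as an assumption, not Ostler's actual importer:

```python
import csv
import io

# Hypothetical sample mirroring a LinkedIn-style Connections.csv
# from a GDPR export (column names are an assumption).
SAMPLE = """First Name,Last Name,Company,Position,Connected On
James,Wong,Acme Ltd,CTO,12 Mar 2019
Priya,Shah,Globex,Designer,04 Jun 2022
"""

def load_connections(text):
    """Parse a connections CSV into plain dicts for a local knowledge graph."""
    reader = csv.DictReader(io.StringIO(text))
    return [
        {
            "name": f'{row["First Name"]} {row["Last Name"]}',
            "company": row["Company"],
            "connected_on": row["Connected On"],
        }
        for row in reader
    ]

connections = load_connections(SAMPLE)
print(connections[0]["name"])  # James Wong
```

Everything stays in plain local data structures – no upload step anywhere in the pipeline.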

What Ostler does

The moment you install Ostler, it reads your Safari browsing history, iMessage conversations, Apple Notes, Calendar events, Photos face labels, Reminders, and Mail – directly from your Mac. You see value in minutes. Then, for deeper data, it imports from 20 platforms via GDPR exports: LinkedIn connections, Facebook friends, Instagram follows, WhatsApp contacts, and more – all connected into a searchable, intelligent whole.

An AI assistant called Marvin answers questions about your life. "When did I last see James?" "What did David recommend at our meeting?" "Who do I know at that company?" Marvin runs locally and answers from your actual data, not the internet.
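A question like "When did I last see James?" ultimately reduces to a lookup over local, timestamped records. A minimal sketch of that retrieval step – the record shape and function names are illustrative, not Ostler's real schema:

```python
from datetime import date

# Illustrative local event records; not Ostler's actual data model.
events = [
    {"person": "James", "what": "coffee at IFC", "when": date(2026, 2, 11)},
    {"person": "James", "what": "dinner", "when": date(2025, 9, 3)},
    {"person": "David", "what": "board meeting", "when": date(2026, 3, 20)},
]

def last_seen(person, events):
    """Return the most recent event involving a person, or None."""
    matches = [e for e in events if e["person"] == person]
    return max(matches, key=lambda e: e["when"]) if matches else None

hit = last_seen("James", events)
print(hit["when"].isoformat(), "-", hit["what"])  # 2026-02-11 - coffee at IFC
```

The language model's job is to translate the question into a query like this and phrase the answer; the facts themselves never leave the machine.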

A personal wiki auto-generates pages for every person, organisation, and topic in your graph. A conversation processing pipeline extracts facts, relationship signals, and coaching observations from recorded conversations. Everything is cross-linked, timestamped, and yours.
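The wiki-generation step can be pictured as a render pass over the graph: for each entity, collect its facts and emit a page with sources and timestamps. A hedged sketch (the page format and field names are invented for illustration):

```python
def wiki_page(person, facts):
    """Render a minimal timestamped, sourced wiki page (illustrative format)."""
    lines = [person, "=" * len(person), ""]
    for fact in facts:
        # Each fact keeps its provenance, so every line is traceable.
        lines.append(f"- {fact['text']} (source: {fact['source']}, {fact['date']})")
    return "\n".join(lines)

page = wiki_page("James Wong", [
    {"text": "CTO at Acme Ltd", "source": "LinkedIn export", "date": "2019-03-12"},
    {"text": "Recommended the dim sum place", "source": "iMessage", "date": "2026-02-11"},
])
print(page)
```

Cross-linking then becomes a matter of scanning each fact's text for other entity names and pointing them at their own pages.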

The numbers

After importing my own data across 20 platforms:

7,000+ people in the knowledge graph. 2 million+ data points connected. 14,874 wiki pages auto-generated. 459 automated tests. 3 messaging channels (iMessage, WhatsApp, email) for the AI assistant.

All running on a Mac Mini. All local. All mine.

Why now

This week, Perplexity announced their "Personal Computer" – a Mac Mini hub that does something superficially similar. There is one critical difference: Perplexity's version sends your data to their cloud for processing. They charge $50 a month for the privilege.

That is exactly the model I built Ostler to avoid.

The market is validating the concept. Karpathy wrote about personal knowledge bases. Omi raised money for conversation capture. Poke raised $25M for an iMessage assistant. Everyone is circling the same insight: people want personal AI that actually knows them.

But nobody is doing it locally. Nobody is saying "your data should stay on your hardware, full stop." That is what Ostler does.

What is next

Ostler is in friends beta. A small group of people are installing it on their own Mac Minis, importing their GDPR exports, and telling me what breaks. The install script takes about 30 minutes. A diagnostic tool called Ostler Doctor is in the works for when something goes wrong.

If you have a Mac Mini and want to try it, the getting started guide is here. Your data stays on your machine. $24.99 per month for unlimited AI – no token caps, no cloud bills.

Try Ostler

Private. Unlimited. Runs on your Mac Mini.

Request early access

If you have thoughts, questions, or want to tell me I am wrong about something – hello@ostler.ai.