Ostler vs Kin AI

Last verified on 26 April 2026. Pricing, features, and policies change – check the source if anything looks off.

Price check

Ostler: $49.99 + $24.99/mo – closed Mac-native system, AI runs on your hardware, data stays on your hardware.
Kin AI: free + $20–44/mo paid tiers – iOS / Android, AI requests proxied through Cloudflare to Anthropic / OpenAI / Cerebras.

Ostler is the only AI assistant that actually knows your life – every face, every message, every meeting, every email loaded in, all kept on your Mac. Kin AI is the closest thing to Ostler currently shipping: 50,000+ users, personal AI advisors covering work, relationships, values, body, social. The marketing says “local-first by default,” “your data never leaves your phone,” “never trains AI models.” Read their actual privacy policy and two of those three claims have a meaningful asterisk.

What Kin’s own privacy notice says

Kin’s storage claim is genuine: your text content sits on your phone, inside their app. The architecture diverges at the AI layer. From their own Privacy Notice, section 5.2:

“Requests are typically routed through our backend... The proxy forwards the request.”

– Kin AI Privacy Notice, mykin.ai/privacy-notice

The backend is Cloudflare Workers. The proxy forwards to Anthropic, OpenAI, and Cerebras – named in the same document as sub-processors alongside Clerk (auth), PostHog (analytics), Customer.io (email), Intercom (support), and Langfuse (tracing). Nine named sub-processors in total. They configure those providers “under terms... intended to prevent the use of your data for training.” “Intended to prevent” is a softer commitment than “contractually prohibits.”

None of that is dishonest. Kin discloses it in its policy. But it does mean that “your data never leaves your phone” is true only of the storage layer; the moment you ask Kin a question, the question and its context flow through five companies’ infrastructure.

How Ostler is structurally different

Ostler runs the AI model on your Mac. Ollama loads Qwen 3.5 9B (or a model you choose) into your Mac’s RAM. Your question never leaves the machine. We do not have a Cloudflare Workers proxy because we do not need one. We do not have Anthropic / OpenAI / Cerebras as sub-processors because we do not call them. There is no equivalent of Section 5.2 in our privacy policy.

This is not a marketing claim. It is what 16 GB of unified memory and a 9-billion-parameter open-weights model on Apple Silicon make possible. The trade-off is real: our model is smaller than GPT-4. The benefit is also real: nobody between you and the answer.
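For the technically curious, a local inference call looks roughly like this. This is a sketch, not Ostler’s actual code: it assumes Ollama’s default `/api/generate` endpoint on its default port, and the model tag `qwen3.5:9b` is illustrative.

```python
import json
from urllib.request import Request

# Sketch of a local-only inference request (assumes Ollama's default
# endpoint on localhost:11434; the model tag is illustrative).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "qwen3.5:9b") -> Request:
    # Everything in this payload stays on the machine: the only network
    # hop is the loopback interface. No cloud proxy, no sub-processors.
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return Request(
        OLLAMA_URL,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Who did I meet last Tuesday?")
print(req.full_url)  # → http://localhost:11434/api/generate
```

The point is the URL: `localhost`. There is no other address for the question, or its context, to travel to.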

| | Ostler | Kin AI |
|---|---|---|
| Storage | Your Mac (encrypted, your passphrase) | Your phone |
| AI inference | Local (Ollama, on your Mac) | Cloud proxy through Cloudflare to Anthropic / OpenAI / Cerebras |
| Sub-processors | Stripe, Apple billing, support email host, hosting (4) | Clerk, Cloudflare, PostHog, Customer.io, Intercom, Langfuse, Anthropic, OpenAI, Cerebras (9) |
| Training stance | We literally cannot train – no data flow exists | “Intended to prevent” (contractual, not architectural) |
| Platform | macOS Mini + iOS / Watch companion | iOS + Android (no Mac native) |
| Price | $49.99 once + $24.99/mo | Free + $20–44/mo paid |
| Bulk import (GDPR exports) | 20 platforms | No (chat-only onboarding) |
| FDA-derived sources (Mac native) | Safari, iMessage, Notes, Calendar, Reminders, Photos, Mail | No (no Mac app) |
| Personal wiki | Auto-generated, 21 page types | No |
| Knowledge graph | Vectors + RDF triples + Redis | Memory system, not a graph |
| Third-party data subjects | Addressed (in-app show / correct / suppress / delete per person) | Not addressed |
| Article 22 automated decision-making | Explicit (we do not make Art. 22 decisions) | Not addressed |
| Article 27 EU representative | Deferred until material EU user base (commitment + trigger stated) | N/A – Danish company (Kin AI ApS, Copenhagen) |
| Data retention specifics | 7 years billing (HK IRO), 2 years support email, 24 months policy archive | 30-day Langfuse trace; “reasonable period” for account data |
| Works offline | Yes (storage + AI both) | Storage only; AI requires internet |
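The “knowledge graph” row deserves a word. RDF triples are subject–predicate–object facts, and a triple store can be queried by pattern in a way a chat transcript cannot. A minimal sketch in pure Python (the real system’s Redis and vector storage are out of scope; all names and facts here are illustrative):

```python
# Illustrative subject-predicate-object triples, the structure the
# comparison table refers to. Real storage (Redis, vectors) is out of
# scope; this only shows why a graph is queryable, exportable data.
triples = {
    ("Alice", "works_at", "Acme Corp"),
    ("Alice", "met_with", "Bob"),
    ("Bob", "works_at", "Acme Corp"),
}

def query(s=None, p=None, o=None):
    # Wildcard match: None means "any value" in that position.
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Who works at Acme Corp?
print(sorted(t[0] for t in query(p="works_at", o="Acme Corp")))
# → ['Alice', 'Bob']
```

A memory system backed by a context window plus retrieval answers questions; a triple store also lets you enumerate, correct, and export the facts themselves.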

What Kin does well

Kin’s product surface is good. The multi-advisor framing is clever – you choose which “advisor” (work coach, relationship guide, body coach) to talk to, and each holds its own context. Their App Store presence and ranking (#20 in the personal-development AI category) reflect a polished mobile experience. Their privacy notice itself is well-structured and honest about its sub-processors, even where the marketing copy oversells.

If you want a chat-style mobile AI advisor and you are not concerned about cloud inference, Kin is a reasonable choice. It costs less, downloads from the App Store, and runs on your phone.

What Kin does not do

Kin is fundamentally a chat product. You start with a blank slate and tell it about yourself over time. There is no bulk import – you cannot hand it your LinkedIn export or your Google Takeout and have it know who you are by morning. There is no native Mac app, no Mac data sources (iMessage / Notes / Mail / Calendar / Photos / Safari), no personal wiki, and no knowledge graph in any structural sense. The “memory” is the LLM’s context window plus their own retrieval layer – not a graph you can query, export, or visualise.

For someone with 20 years of digital exhaust they want to actually search and use, Kin is not the right tool. For someone who wants to start a relationship with an AI advisor from scratch, it might be.

The honest bit

If your priority is a mobile-first, polished, low-friction AI assistant and you are comfortable with cloud inference, Kin is well-built. Ostler is not for you.

If your priority is owning the entire stack – including the model that reads your data – on hardware you control, with no provider in the loop, then that difference is what Ostler is built for. We are slower to set up, we cost more, and you need a Mac with at least 16 GB of RAM. In return, the answer to “where does my data live and who else can see it” is one address: your desk at home.

The choice

Both products say “your data is yours”. Both products mean it. The question is what “data” means. If “data” is just storage, both deliver. If “data” includes the conversations you have with the assistant – the questions you ask, the context the AI sees – only one of the two answers them on the user’s own machine.

Request early access · See all comparisons