A deep dive into how we built an AI system that never trains on your data, never sells it, and lets you delete everything in one click.
Why I don't trust privacy policies
I'll be honest about something: I don't trust privacy policies. Not even ours. A privacy policy is a promise, and promises are only as durable as the incentives behind them. Companies get acquired. Boards change priorities. What's sacred today becomes negotiable tomorrow.

I've watched it happen at companies I've worked at — the privacy page says one thing, the growth team wants another, and the growth team wins because the privacy page isn't a technical constraint. It's just words on a website.

So when we started building Wingmnn — a system that would see people's emails, calendars, financial transactions, and contacts — I didn't want our users to trust our privacy policy. I wanted them to trust our architecture. There's a difference. A policy says 'we won't.' Architecture says 'we can't.' That second one is much harder to build and impossible to reverse in a board meeting.
What 'architecturally private' actually looks like
Let me walk through what this means concretely, because 'privacy by design' has become a phrase people throw around without explaining what they designed.

First: user data and model training infrastructure are physically separated. Not logically separated — not behind a feature flag that someone could flip. Physically separate systems with no network path between them. Your emails flow into inference pipelines that generate insights and return them to your account. They never touch a training pipeline because there is no pipe to touch. You'd have to redesign the entire system to change this.

Second: encryption isn't optional and it isn't configurable. Every piece of user data is encrypted with AES-256 at rest. Every connection uses TLS 1.3. Database volumes use full-disk encryption. Keys are managed through a dedicated KMS and rotated on a schedule that isn't controlled by the application layer. Even if someone compromised the application, they'd get encrypted blobs without the keys to decrypt them.

Third: financial connections through Plaid are read-only at the API level. This isn't a policy we enforce in our code — it's a constraint enforced by Plaid itself. We literally do not have the capability to move money, even if our systems were fully compromised. We store transaction details but never account numbers, routing numbers, or credentials.
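To make the third point concrete, here is a minimal sketch of the idea of stripping sensitive identifiers before anything is persisted. The field names below are illustrative, not Plaid's actual response schema, and the function name is hypothetical — the point is that sanitization happens before storage, so the sensitive fields never exist in our databases.

```python
# Sketch: drop account-identifying fields before a transaction is stored.
# Field names are illustrative, not Plaid's actual response schema.

SENSITIVE_FIELDS = {"account_number", "routing_number", "credentials"}

def sanitize_transaction(raw: dict) -> dict:
    """Keep transaction details; drop anything that identifies the account."""
    return {k: v for k, v in raw.items() if k not in SENSITIVE_FIELDS}

raw = {
    "merchant": "Blue Bottle Coffee",
    "amount": 4.75,
    "date": "2026-01-15",
    "account_number": "000123456789",  # never reaches storage
    "routing_number": "110000000",     # never reaches storage
}

stored = sanitize_transaction(raw)
print(sorted(stored))  # → ['amount', 'date', 'merchant']
```

Because the drop happens at ingestion rather than at read time, there is no code path that could later be changed to expose the identifiers — they were never written.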
The hardest decision we made
The most valuable thing you can do with user data in 2026 is train models on it. Every AI company knows this. The more data you train on, the better your models get, the better your product becomes, the more users you attract, the more data you get. It's a flywheel, and it's the reason most AI companies structure their terms of service to allow training on user content.

We chose not to do this, and I want to be transparent: it was not an easy decision. It means our models improve more slowly. It means we can't offer certain features that competitors might. It means we're leaving real value on the table.

But we made the decision for a reason that goes beyond ethics — we made it because we think it's better business. Wingmnn asks people to connect their email, their calendar, their bank accounts. That's an extraordinary amount of trust. If we ever used that data for training — even once, even anonymized, even with consent — we'd break something that can't be repaired. Trust, once lost at that level, doesn't come back. So we decided: the data is yours, it's processed for you, and it never becomes ours. We wrote the architecture to enforce it, and we burned the bridge behind us.
Permissions as a design constraint
When you connect Gmail to Wingmnn, we ask for exactly two OAuth scopes: read your email and send email on your behalf (only when you explicitly compose through our interface). That's it. We don't ask for access to your Google Drive. We don't ask for your Google Contacts separately — we discover contacts from your email and calendar activity. We don't ask for anything we don't need, because every unnecessary permission is an unnecessary attack surface and an unnecessary reason for someone to hesitate before connecting. This might seem like a small detail, but it reflects something deeper about how we think. Every permission is a question you're asking the user: 'Do you trust me with this?' The fewer times you ask, the more each answer means. We've seen products that ask for twelve OAuth scopes on first connection. You can almost feel users' trust evaporating as they scroll through the permissions screen. We want the opposite experience: you connect, you see exactly what we're asking for, and it makes sense.
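The two scopes described above can be sketched as the consent URL we'd construct. The scope URIs are Google's published Gmail scope names; the client ID and redirect URI are placeholders, and this is a simplified sketch of Google's OAuth 2.0 authorization request, not our production code.

```python
from urllib.parse import urlencode

# Exactly two scopes: read email, and send only when the user composes.
# These URIs are Google's published Gmail OAuth scope names.
SCOPES = [
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/gmail.send",
]

def consent_url(client_id: str, redirect_uri: str) -> str:
    """Build a Google OAuth consent URL (client_id/redirect_uri are placeholders)."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": " ".join(SCOPES),  # space-delimited, per OAuth 2.0
        "access_type": "offline",
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)

url = consent_url("example-client-id", "https://example.com/callback")
```

The scope list is a constant, not something assembled per feature — so adding a third scope would be a visible, reviewable code change, not a quiet config tweak.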
Delete means delete
This is the one that people don't believe until they test it. When you delete your Wingmnn account, your data is gone. Not archived. Not 'deleted but retained for 90 days in case of legal hold.' Gone. We run a 30-day grace period so you can change your mind — during that time, logging back in cancels the deletion. After that window closes, we purge everything: primary databases, replica databases, backup systems, cached aggregations. The entity graph, your briefing history, your email index, your financial records — all of it. What we retain is genuinely anonymized, aggregated analytics: how many users connected Gmail this month, what the average briefing engagement rate is. Numbers that could describe anyone and identify no one. I know 'delete means delete' should be the bare minimum, not a feature. But look at how many companies make it nearly impossible to actually remove your data, and you'll understand why we think it's worth stating explicitly.
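The grace-period flow above is simple enough to sketch as a state machine. All names here are hypothetical, and a single flag stands in for the real purge across primaries, replicas, backups, and caches — the point is the three transitions: request, cancel-on-login, purge-after-window.

```python
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(days=30)  # the 30-day window described above

class Account:
    """Minimal sketch of the delete / cancel / purge lifecycle."""

    def __init__(self):
        self.deletion_requested_at = None
        self.purged = False

    def request_deletion(self, now: datetime):
        self.deletion_requested_at = now

    def login(self, now: datetime):
        # Logging back in during the grace period cancels the deletion.
        if self.deletion_requested_at and now - self.deletion_requested_at < GRACE_PERIOD:
            self.deletion_requested_at = None

    def purge_if_due(self, now: datetime):
        # After the window closes, everything goes: primaries, replicas,
        # backups, caches. One flag stands in for all of that here.
        if self.deletion_requested_at and now - self.deletion_requested_at >= GRACE_PERIOD:
            self.purged = True

acct = Account()
start = datetime(2026, 1, 1)
acct.request_deletion(start)
acct.login(start + timedelta(days=10))  # day 10: cancels the deletion
cancelled = acct.deletion_requested_at is None  # → True
```

The asymmetry is deliberate: cancelling requires nothing but a login, while purging requires the full window to elapse with no activity at all.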
The question we ask ourselves
There's a question we come back to in every architecture review, every feature discussion, every time we're tempted to take a shortcut: 'If this system were compromised tomorrow — fully, completely, worst-case scenario — what would the attacker get, and what could they do with it?'

The answer we aim for is: they'd get encrypted data they can't read, from accounts they can't access, with financial connections they can't use to move money. That's the bar. Not 'we have a good security team' or 'we follow best practices.' The bar is: even in total failure, the damage is contained by the architecture itself.

We're not there on every vector yet. Security is never finished. But that's the standard we hold ourselves to, and every decision — from which cloud provider to use to how we structure database access to whether an engineer can see production data (they can't, without an audited access event) — flows from that question. Privacy isn't our policy. It's our constraint. And constraints, unlike policies, don't change when the board meets.
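The 'audited access event' mentioned above has a simple shape, sketched here with hypothetical names. An in-memory list stands in for what would really be an append-only, tamper-evident log; the key property is that the audit record is written before the query runs, so untraced access is impossible by construction rather than forbidden by policy.

```python
from datetime import datetime, timezone

AUDIT_LOG = []  # stands in for an append-only, tamper-evident log

def audited_access(engineer: str, reason: str, query):
    """The only code path to production data (names hypothetical).

    The audit record is appended *before* the query executes, so there
    is no branch in which data is read without a trace.
    """
    AUDIT_LOG.append({
        "engineer": engineer,
        "reason": reason,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return query()

# Illustrative use: the query is a callable, so data access cannot
# happen outside this chokepoint.
result = audited_access("alice", "debugging a support ticket", lambda: "row-data")
```

Making the query a callable passed into the chokepoint, rather than letting callers hold a database handle, is what turns the audit from a convention into a constraint.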