Why this exists and why it's different
The third time I sat in a post-incident call and wrote “credentials in the wrong place” as the root cause, I closed the report template and opened a text editor. Not to write another report. To write the spec for the thing I wanted to exist.
The pattern was always the same. A leaver whose access was “revoked” in the ticketing system but never actually removed from the shared vault. An admin account with no audit trail, shared between three people who all knew the password. A four-month exfiltration that nobody caught because there were no quorum requirements on sensitive operations — just trust, and the assumption that trust would hold.
The tools that could have prevented this were either priced at £60k a year (fine if you're a large enterprise, useless if you're a 25-person team trying to do the right thing) or architecturally naive — cloud-synced by default, with access control enforced by policy rather than by cryptography. Most password managers are built around the assumption that the server is trustworthy. I'd spent enough time in incident response to know that assumption eventually breaks.
So I built around a different assumption: the server is hostile. Everything is encrypted client-side before it leaves your device. The server stores ciphertext it cannot read. Access revocation is mathematical, not procedural. Multi-party approval is enforced by the architecture, not by convention.
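The "server stores ciphertext it cannot read" claim can be sketched in a few lines. This is an illustrative stand-in using only Python's standard library — HMAC-SHA256 in counter mode as a keystream plus an encrypt-then-MAC tag — not the project's actual cipher suite (a real implementation would use a vetted AEAD such as XChaCha20-Poly1305); the function names and the scrypt parameters are assumptions.

```python
import hashlib, hmac, os

def derive_key(master_password: str, salt: bytes) -> bytes:
    # Memory-hard KDF so offline guessing against stolen ciphertext is slow.
    return hashlib.scrypt(master_password.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=64)

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # HMAC-SHA256 in counter mode as a PRF keystream (illustrative only).
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_entry(key64: bytes, plaintext: bytes) -> bytes:
    enc_key, mac_key = key64[:32], key64[32:]
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    # Encrypt-then-MAC: the tag lets the client detect server-side tampering.
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt_entry(key64: bytes, blob: bytes) -> bytes:
    enc_key, mac_key = key64[:32], key64[32:]
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct,
                                             hashlib.sha256).digest()):
        raise ValueError("ciphertext tampered with or wrong key")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))

# The server only ever receives the output of encrypt_entry().
salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
blob = encrypt_entry(key, b"db_password=hunter2")
assert decrypt_entry(key, blob) == b"db_password=hunter2"
assert b"hunter2" not in blob  # nothing readable leaves the device
```

Revocation being "mathematical, not procedural" falls out of this shape: rotate the key and re-encrypt, and the old key holder is left with ciphertext they can no longer open, regardless of what any ticket says.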
Every feature traces back to a real incident. The per-entry key derivation — because compromising one entry's key should expose one secret, not the whole vault. The structured offboarding — because “we removed their Slack” is not offboarding. The anomaly detection — because four months is too long to go unnoticed. None of this is theoretical. It's what I wish had existed on the days I needed it most.
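The per-entry key derivation can be illustrated with HKDF-Expand (RFC 5869), built here from the standard library's `hmac`. This is a sketch of the shape of the idea, not the project's actual scheme; the entry IDs, the `"entry:"` context label, and the function names are hypothetical.

```python
import hashlib, hmac

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # HKDF-Expand (RFC 5869): stretch one pseudorandom key into
    # independent per-context keys.
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def entry_key(vault_key: bytes, entry_id: str) -> bytes:
    # Each entry gets its own key, bound to its ID. Because HMAC is one-way,
    # a leaked entry key reveals nothing about the vault key or sibling keys.
    return hkdf_expand(vault_key, b"entry:" + entry_id.encode())

vault_key = hashlib.sha256(b"demo vault key").digest()  # demo value only
k1 = entry_key(vault_key, "prod-db")
k2 = entry_key(vault_key, "ssh-bastion")
assert k1 != k2                               # independent key per entry
assert k1 == entry_key(vault_key, "prod-db")  # deterministic: nothing to escrow
```

The deterministic derivation is the point: there is no table of entry keys to steal or to forget to revoke — keys exist only while the vault key is in memory on the client.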