Promises
aren’t proof.
Cryptography is.
Traditional SaaS asks you to trust audits and policies. Qaxa is zero-knowledge by design, so we never hold your data in plaintext. Don’t rely on paperwork—inspect the code instead.
- No blind trust in the provider
- Source code you can inspect
- Client-side encryption you can verify
Don’t trust us.
Trust the repo.
Security shouldn’t be a black box. Our client code is open source, so your team can inspect how keys are generated and how data is encrypted before it leaves the device.
Inspect the client code (soon)
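The model described above, a key generated on the device and data encrypted before it is uploaded, can be sketched in a few lines. This is an illustrative toy in stdlib Python, not Qaxa's actual scheme (which, per the FAQ below, is PGP-based), and the HMAC keystream cipher here is not production cryptography. It only shows the shape of the guarantee: the key never leaves the client, so a server holding the ciphertext alone learns nothing readable.

```python
import hashlib
import hmac
import secrets

# Toy sketch only: not Qaxa's real (PGP-based) scheme and not
# production crypto. It illustrates client-side encryption: the key
# is generated on the device and only ciphertext is uploaded.

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Pseudorandom keystream from key + nonce (HMAC-SHA256 counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        block = nonce + counter.to_bytes(8, "big")
        out += hmac.new(key, block, hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

# Key is created on the device; only `blob` would travel to the server.
key = secrets.token_bytes(32)
blob = encrypt(key, b"meeting notes")
assert decrypt(key, blob) == b"meeting notes"
```

The point of the sketch is the asymmetry: holding `blob` without `key` leaves an attacker (or the provider) with unintelligible bytes, while the client decrypts trivially.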
Private by
design. Not by
promise.
Qaxa is built so every room, file, and message is encrypted before it leaves your device. No server-side keys. No admin access. No plaintext for AI training.
- No server-side keys
- No AI training on your content
- No platform admin access
Clear answers about the model.
What Qaxa protects—and what it can’t see.
Is everything in Qaxa end-to-end encrypted?
Yes. Messages, files, tasks, notes, and comments are end-to-end encrypted, so only invited participants can read them. Qaxa is built on a zero-knowledge model, which means we cannot read your content.
What does zero-knowledge mean?
It means Qaxa cannot read your content. Your data is encrypted before it leaves your device, and Qaxa does not hold the keys needed to decrypt it. Read more about why Qaxa relies on PGP.
Can Qaxa employees read my data?
No. Qaxa is designed so we cannot read your messages, files, tasks, or notes.
Where is my data encrypted?
On your device. Qaxa encrypts content locally before it is sent to our servers.
What happens if Qaxa's servers are breached?
An attacker would obtain only encrypted data, not readable content. Without the keys, it remains unintelligible.
Does Qaxa see anything at all?
Qaxa may still see limited service metadata needed to operate the system, such as account information and usage records. That is not the same as access to the contents of your rooms, messages, or files.
What if I lose my keys or my device?
Recovery depends on your recovery method, not on Qaxa reading or restoring your data. Without the required recovery credentials, encrypted content cannot be recovered by the provider.
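Why the provider cannot step in can be shown with a short sketch. This is a hypothetical illustration, not Qaxa's documented recovery design: it assumes the content key is derived from a user-held recovery phrase with a key derivation function, so the server can store the salt and ciphertext but can never reconstruct the key on its own.

```python
import hashlib

# Hypothetical sketch, not Qaxa's actual recovery mechanism.
# If the key is derived from a recovery phrase the user alone holds,
# the provider (storing only salt + ciphertext) cannot rebuild it.

def key_from_recovery_phrase(phrase: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256 from the stdlib; real systems may prefer a
    # memory-hard KDF such as scrypt or Argon2.
    return hashlib.pbkdf2_hmac("sha256", phrase.encode(), salt, 600_000)

salt = b"stored-alongside-ciphertext"   # non-secret, kept by the server
key = key_from_recovery_phrase("correct horse battery staple", salt)

# The same phrase and salt always yield the same key; any other
# phrase yields a different key, so losing the phrase loses the data.
assert key == key_from_recovery_phrase("correct horse battery staple", salt)
assert key != key_from_recovery_phrase("wrong phrase", salt)
```

Because the phrase never reaches the server, "the provider cannot recover your content" is a mathematical property of the derivation, not a policy choice.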
Can I verify this myself?
Yes. Our client code is open source, so anyone can inspect how encryption works before data leaves the device. Sensitive software should be verifiable, not built on blind trust.
Is my content used to train AI models?
No. Your files, chats, and room content are not used to train AI models. Qaxa encrypts content before it leaves your device, which means we cannot read it.