Browser Memory Security: Protecting Your Master Key in RAM
Your master key has to live somewhere while you're using Pwdly. Here's the surprisingly nuanced engineering that goes into making sure 'somewhere' isn't a place an attacker can reach.

There's an uncomfortable truth at the heart of every client-side encryption product: while you're using it, your master key has to exist in plaintext somewhere. The whole edifice of zero-knowledge architecture rests on a few hundred milliseconds of cryptographic operations happening in your browser's RAM. An attacker who can read that RAM, even briefly, can read everything. Defending that window is the most underrated engineering work in modern password management.
Where the threats actually live
Browser memory is attacked from three directions. Cross-site scripting (XSS) inside our own application would let injected JavaScript read variables in our context. Malicious or compromised browser extensions can inject scripts and read DOM and JavaScript heap state. And memory disclosure at the operating-system and hardware level (Spectre-class side channels, swap files written to disk, memory dumps from system crashes) can leak buffer contents even with no code execution in our page at all. Each requires a different mitigation.
Defense one: WebAssembly heap isolation
All Pwdly cryptography runs inside a WebAssembly module compiled from Rust using the dalek and libsodium-sumo stacks. The WASM linear memory is a separate buffer from the JavaScript heap. JavaScript can only see into it through explicit copy operations on typed-array views. This means an XSS payload that gains JavaScript execution in our page cannot simply read "the master key variable" — there is no such variable in JavaScript. The key lives inside an opaque memory region the attacker has to actively reach into.
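To make that boundary concrete, here's a minimal TypeScript sketch of how a page might drive such a module. The export names (alloc, derive_session_key, free) are hypothetical, not Pwdly's actual API; the point is that JavaScript only ever holds a pointer (a plain number) and a short-lived typed-array view, never a key object of its own.

```ts
// Hypothetical exports of a crypto WASM module -- illustrative names only.
interface CryptoExports {
  memory: WebAssembly.Memory;
  alloc(len: number): number;                       // returns a pointer into linear memory
  derive_session_key(ptr: number, len: number): number;
  free(ptr: number, len: number): void;             // scrubs and releases the region
}

async function deriveInsideWasm(passphrase: Uint8Array): Promise<void> {
  const { instance } = await WebAssembly.instantiateStreaming(fetch("/crypto.wasm"));
  const wasm = instance.exports as unknown as CryptoExports;

  // The only way JavaScript touches the module's memory is through an explicit,
  // short-lived typed-array view over the exported linear memory buffer.
  const ptr = wasm.alloc(passphrase.length);
  new Uint8Array(wasm.memory.buffer, ptr, passphrase.length).set(passphrase);

  // The derived key never leaves linear memory; JS only ever sees a handle,
  // which is just a number. Dumping the JS heap reveals nothing useful.
  const keyHandle = wasm.derive_session_key(ptr, passphrase.length);
  wasm.free(ptr, passphrase.length); // input copy is wiped inside the module
  void keyHandle;
}
```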
Defense two: dedicated worker context
The crypto WASM module runs inside a dedicated Web Worker. Workers have their own JavaScript context, completely walled off from the main thread's DOM and global scope. Communication happens through structured-clone message passing — strings and ArrayBuffers in, ciphertext out. This means an XSS payload on the main thread cannot reach into the worker's globals or peek at its memory. To extract a key, an attacker would need to compromise the worker itself, which has a far smaller attack surface (no DOM, no third-party scripts, no event listeners on user input).
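In rough strokes, the boundary looks like the sketch below. The message shape and the encryptInWasm helper are illustrative, not Pwdly's real protocol.

```ts
// main-thread side --------------------------------------------------------
const cryptoWorker = new Worker(new URL("./crypto-worker.ts", import.meta.url), {
  type: "module",
});

function encryptSecret(plaintext: Uint8Array): Promise<Uint8Array> {
  return new Promise((resolve) => {
    cryptoWorker.onmessage = (e: MessageEvent<ArrayBuffer>) =>
      resolve(new Uint8Array(e.data));
    // Transfer the buffer so the main thread gives up its own copy entirely.
    cryptoWorker.postMessage({ op: "encrypt", data: plaintext.buffer }, [plaintext.buffer]);
  });
}

// crypto-worker.ts --------------------------------------------------------
// self.onmessage = (e) => {
//   const { op, data } = e.data;                               // structured clone / transfer in
//   const ciphertext = encryptInWasm(new Uint8Array(data));    // hypothetical WASM call
//   self.postMessage(ciphertext.buffer, [ciphertext.buffer]);  // ciphertext out
// };
```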
Defense three: ephemeral buffers and zeroization
Every sensitive buffer — the master key, derived subkeys, plaintext secrets during display — is allocated, used, and then explicitly overwritten with zeros before being released. We use libsodium's sodium_memzero specifically because the compiler is forbidden from optimizing it away. The result is that a memory snapshot taken thirty seconds after a decryption finishes contains nothing but zeros where the key used to be. This protects against post-incident forensics and crash-dump leaks.
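Inside the WASM module that wipe is Rust calling sodium_memzero directly. For the brief moments a plaintext buffer has to cross into JavaScript, for example to be shown on screen, the same discipline might look like the sketch below, using the memzero export from libsodium-wrappers; renderSecret is a hypothetical stand-in for the real UI hook.

```ts
import sodium from "libsodium-wrappers";

// Placeholder for whatever actually puts the value on screen.
declare function renderSecret(secret: Uint8Array): void;

export async function revealOnce(decrypt: () => Uint8Array): Promise<void> {
  await sodium.ready;
  const plaintext = decrypt();
  try {
    renderSecret(plaintext);
  } finally {
    // Overwrite the buffer before releasing it, so a heap snapshot taken
    // later finds only zeros where the secret used to be.
    sodium.memzero(plaintext);
  }
}
```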
Defense four: time-limited derived keys
The master key derived from your mnemonic is the most valuable secret in the system, so we never keep it around. After unlock, we derive a session key, a project-key-unwrapping key, and a few short-lived subkeys, then immediately zeroize the master key. The subkeys themselves expire after a configurable idle period — five minutes by default — and the worker is torn down entirely. Locking your vault doesn't just hide the UI; it destroys the keys.
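The idle-lock behaviour has roughly the following shape; the class, the timings, and the zeroize-keys message are illustrative rather than Pwdly's actual code.

```ts
const IDLE_LIMIT_MS = 5 * 60 * 1000; // five-minute default, configurable per vault

class VaultSession {
  private idleTimer: ReturnType<typeof setTimeout> | undefined;

  constructor(private readonly worker: Worker) {
    this.touch();
  }

  // Called on any user activity; pushes the expiry window forward.
  touch(): void {
    if (this.idleTimer !== undefined) clearTimeout(this.idleTimer);
    this.idleTimer = setTimeout(() => this.lock(), IDLE_LIMIT_MS);
  }

  // Locking is destructive: ask the worker to wipe its key material, then tear
  // the whole worker context down once it confirms. Unlocking later means
  // re-deriving everything from scratch.
  lock(): void {
    this.worker.addEventListener(
      "message",
      (e: MessageEvent) => {
        if (e.data === "keys-zeroized") this.worker.terminate();
      },
      { once: true },
    );
    this.worker.postMessage({ op: "zeroize-keys" }); // hypothetical message
  }
}
```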
Defense five: a hostile CSP
Pwdly ships with one of the strictest Content Security Policies you'll find on a production app. No inline scripts. No eval. No third-party origins for scripts, styles, or fonts. No remote analytics with code execution. We self-host every dependency. The CSP runs in report-only mode in staging and is fully enforced in production, and any violation triggers an alert. The result is that even a successful XSS finding has very few primitives to work with.
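A policy in that spirit looks roughly like the following; the directive list is an approximation of the posture described above, not a copy of Pwdly's production header.

```ts
const CONTENT_SECURITY_POLICY = [
  "default-src 'none'",
  "script-src 'self'",        // no inline scripts, no eval, no third-party origins
  "style-src 'self'",
  "font-src 'self'",
  "img-src 'self'",
  "connect-src 'self'",
  "worker-src 'self'",
  "frame-ancestors 'none'",
  "base-uri 'none'",
  "report-uri /csp-violation", // violation reports feed the alerting
].join("; ");

// Sent on every response, e.g.:
// res.setHeader("Content-Security-Policy", CONTENT_SECURITY_POLICY);
// Staging sends the same value as Content-Security-Policy-Report-Only instead.
```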
What we can't fully defend
We're honest about the limits. A malicious browser extension with full host permissions on our domain can — by design of the extension API — inject code into our context. The browser is the trust boundary, and we can't move it. Our best mitigation is to make this attack visible: we display warnings when we detect content-script injection patterns, and we recommend a hardened browser profile (or our standalone desktop app) for high-sensitivity teams. The same applies to physical access to an unlocked machine: we can't out-engineer someone sitting at your keyboard.
The point of all this
Memory security in the browser is layered, imperfect, and absolutely worth doing. Each defense we've described raises the cost of an attack by an order of magnitude. Stack five of them and the practical attack surface shrinks to a small set of high-effort scenarios that we can talk about openly. That's the goal: not a magic bullet, but a system where every layer has to fail simultaneously for your secrets to leak. Your master key deserves that much paranoia.


