KevinChasse's comments | Hacker News

I built Whispered, a web experiment in ephemeral, user-driven posts. Every post is born with 120 seconds to live, and the community decides what survives: upvotes add 60 seconds, downvotes subtract 15. No post can live longer than 5 minutes. The idea is to explore attention, judgment, and fleeting content in a simple, dynamic way. Minimal UI — just posts, timers, and a constantly evolving feed. I’d love feedback on the mechanics, engagement, and whether the Death Clock creates compelling tension.
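A minimal sketch of the death-clock arithmetic described above (constant and class names are mine, not from the actual implementation):

```python
from dataclasses import dataclass
import time

BIRTH_TTL = 120        # seconds a new post starts with
UPVOTE_BONUS = 60      # seconds added per upvote
DOWNVOTE_PENALTY = 15  # seconds removed per downvote
MAX_LIFE = 300         # hard cap: no post outlives 5 minutes

@dataclass
class Post:
    created_at: float
    expires_at: float

    @classmethod
    def new(cls, now=None):
        now = time.time() if now is None else now
        return cls(created_at=now, expires_at=now + BIRTH_TTL)

    def vote(self, up: bool):
        delta = UPVOTE_BONUS if up else -DOWNVOTE_PENALTY
        # Clamp against the 5-minute ceiling, measured from creation.
        self.expires_at = min(self.expires_at + delta,
                              self.created_at + MAX_LIFE)

    def alive(self, now=None):
        now = time.time() if now is None else now
        return now < self.expires_at
```

Under these rules a post needs a steady stream of upvotes just to approach the cap, which is presumably where the tension comes from.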


Bastion’s cryptography isn’t AI-generated. The system follows well-established cryptographic primitives and protocols: PBKDF2-HMAC-SHA512 for deterministic password derivation, Argon2id for local key stretching, AES-256-GCM for encryption, and Shamir Secret Sharing over a prime field for secret splits. All design decisions are documented, and the code is open-source for verification.
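A rough sketch of the deterministic-derivation step using only Python's stdlib PBKDF2; the Argon2id stretching and Shamir split are out of scope here, and the salt layout is illustrative, not Bastion's actual format:

```python
import hashlib

def derive_site_key(master: bytes, site: str, version: int = 1,
                    rounds: int = 210_000) -> bytes:
    """Deterministically derive 64 bytes of site-specific material.

    Sketch only: the real system also applies memory-hard stretching
    and rejection sampling. The salt encodes the site and a version
    counter, giving domain separation and storage-free rotation.
    """
    salt = f"bastion/v{version}/{site}".encode()
    return hashlib.pbkdf2_hmac("sha512", master, salt, rounds)

# Same inputs always yield the same key; bumping the version rotates it.
k1 = derive_site_key(b"root-secret", "example.com")
k2 = derive_site_key(b"root-secret", "example.com", version=2)
assert k1 != k2 and k1 == derive_site_key(b"root-secret", "example.com")
```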


Interesting approach. I like that this is explicit about human recovery rather than pretending crypto alone solves catastrophe. That said, this design and fully stateless systems like mine (deterministic derivation, no escrow) are solving opposite failure modes. Shamir-based social recovery assumes: trusted third parties remain reachable, they are willing and able to cooperate, and recovery is an exceptional event. Stateless systems assume the inverse: no one can be relied on, recovery is impossible by design, and the primary threat is silent compromise rather than lockout. Neither is “better” universally; they’re value judgments. What I appreciate here is that the tradeoffs are made explicit instead of buried behind UX. One open question I’d be curious about: how do you reason about coercion risk over time (friends change, incentives change), and do you see this as something users should periodically re-shard as relationships evolve?


thanks for your thorough review and congrats on your launch! for my personal use case, I'm not worried about coercion, but many have highlighted it as a real risk. my answer to that is to do what you suggest: update my contact list yearly, send new ZIP files with bundles, and ask them to delete the previous ones.


Most prior attempts reduce to hash(master || site). Bastion treats password generation as a cryptographic protocol with explicit invariants, not a convenience function.

An important note:

Hashing ≠ memory-hard derivation
Hashing ≠ unbiased sampling
Hashing ≠ domain separation
Hashing ≠ rotation without storage
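On the unbiased-sampling point: a hedged sketch of rejection sampling over a charset, which avoids the modulo bias of naive `hash % len(alphabet)` mapping. The alphabet and the HMAC counter stream are illustrative, not Bastion's actual scheme:

```python
import hashlib, hmac

ALPHABET = ("abcdefghijklmnopqrstuvwxyz"
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "0123456789!@#")  # 65 symbols

def sample_chars(key: bytes, n: int) -> str:
    """Map key material to ALPHABET without modulo bias.

    Bytes at or above the largest multiple of len(ALPHABET) below 256
    are rejected, so every character is equally likely. More bytes are
    generated on demand from an HMAC counter stream.
    """
    limit = 256 - (256 % len(ALPHABET))  # rejection threshold
    out, counter = [], 0
    while len(out) < n:
        block = hmac.new(key, counter.to_bytes(4, "big"),
                         hashlib.sha512).digest()
        counter += 1
        for b in block:
            if b < limit:
                out.append(ALPHABET[b % len(ALPHABET)])
                if len(out) == n:
                    break
    return "".join(out)
```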


Rotation is explicit and deterministic via the version parameter. Old passwords can be regenerated for rollback; new ones don’t require storage.


But you have to remember a version parameter per password??


Bastion does not treat the master as a “password.” It is a cryptographic root secret equivalent to a 256-bit key. If you downgrade it to a human-memorable string, you are violating the security model. Argon2id + 210k PBKDF2 rounds + rejection sampling makes brute force economically brutal.


Neither does 1P for storage; it masks the password with a 256-bit key. The master password merely makes unlocking easier, and 1P will soon support passkey unlock anyway. I feel you have designed this program around a strawman rather than how vendors in this space actually implement their security models.


Bastion isn’t designed for convenience or multi-device sync — it’s a deterministic, stateless cryptographic protocol. The master isn’t a human-memorable password; it’s a 256-bit root secret. Lowering it to a “password” breaks the threat model. Unlike consumer vaults, Bastion explicitly enforces domain-separated salts, memory-hard derivation (Argon2id + PBKDF2), unbiased sampling, and versioned rotation — all provable invariants, not heuristic convenience. Syncing or masking passwords like 1P is a different design class: it trades third-party trust for usability. This isn’t a strawman — it’s an architectural choice to remove server-side attack surfaces and guarantee deterministic, stateless password generation.


You're just repeating yourself with AI slop, but staying incorrect on the point, which is another good reason to avoid this (at least with 1P I know I can talk to someone that doesn't respond in AI slop and actually has backbone). 1P Vaults are encrypted with a high entropy key just like your tool without needing to make a trade off. The master password aspect of 1P is a convenience, I imagine the same would be said about Bastion as you can simply lock and unlock a vault with a password.


Bastion has the same failure model as a hardware wallet or SSH private key. If you want recoverability, you accept third-party trust. Bastion refuses that trade.


Interesting approach. Exposing high-level goals rather than UI actions definitely reduces token overhead, but reproducible comparisons with open-source setups would strengthen the claim. Also, remote browsers introduce a new attack surface—sandboxing helps, but I’d like to see clear isolation guarantees against malicious pages or rogue scripts.


Nice catalog. One subtle thing I’ve found in building deterministic, stateless systems is that atomic filesystem and memory operations are the only way to safely compute or persist secrets without locks. Combining rename/link/O_EXCL patterns with ephemeral in-memory buffers ensures that sensitive data is never partially written to disk, which reduces race conditions and side-channel exposure in multi-process workflows.
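A minimal sketch of the write-temp-then-rename pattern described above, using Python's stdlib (`tempfile.mkstemp` creates the file with O_EXCL semantics, and `os.replace` provides the atomic rename):

```python
import os, tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Write data so readers never observe a partially written file.

    The temp file is created in the destination directory so the
    rename never crosses a filesystem boundary; contents are fsynced
    before the rename so the swap only happens once data is durable.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # durable before the rename
        os.replace(tmp, path)     # atomic, even if path already exists
    except BaseException:
        os.unlink(tmp)            # clean up the orphaned temp file
        raise
```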


Nice work. One thing I've noticed with locally checking extensions against threat lists is that the verification process itself can become a target. Stateless, deterministic verification — where hashes or IDs are derived on-device and never stored centrally — reduces risk of supply chain or server-side compromise. It’s a subtle design point, but it can prevent a malicious actor from using the verification system itself to exfiltrate data.


Great point. The current setup is exactly what you're describing, a fully local verification with no phone-home behavior.

The CLI/GUI tools I'm building read your locally installed extensions, extract their IDs, and check them against the CSV (which you can clone/download). No data leaves your machine during the scan.

The only "central" piece is the GitHub-hosted CSV itself, which is just a static file anyone can audit, fork, or host themselves. No API calls, no telemetry, no server lookups.
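A hedged sketch of what such a purely local check might look like; the CSV layout (ID in the first column) and function names are my assumptions, not the tool's actual code:

```python
import csv

def load_threat_ids(csv_path: str) -> set:
    """Load known-malicious extension IDs from a local CSV file."""
    with open(csv_path, newline="") as f:
        return {row[0].strip() for row in csv.reader(f) if row}

def scan(installed_ids, csv_path):
    """Purely local check: no network calls, just set membership."""
    bad = load_threat_ids(csv_path)
    return [ext_id for ext_id in installed_ids if ext_id in bad]
```

Because the threat list is a static file, the worst a compromised repo can do is serve bad CSV rows; the scan itself never leaves the machine.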

You're right that this design prevents the verification tool from becoming an attack vector. Even if my repo got compromised, the worst case is a bad CSV; your local scan process stays isolated.

I'm also looking at surfacing critical permissions for locally installed extensions: things like "access to all websites," "read clipboard," etc. That way users can make informed decisions about what to keep based on what's actually authorized, even if an extension isn't in the malicious database yet.

Appreciate the security-minded feedback.

