
I don’t really understand how we are supposed to believe in e2ee in closed proprietary apps. Even if some trusted auditor confirms they have plumbed in libsignal correctly, we have no way of knowing that their rendering code is free of content scanning hooks.

We know the technology exists. Apple had it all polished and ready to go for image scanning. I suppose the only thing in which we can place our faith is that it would be such an enormous scandal to be caught in the act that WhatsApp et al daren’t even try it.

(There is something to be said for e2ee: it protects you against an attack on Meta’s servers. Anyone who gets a shell will have nothing more than random data. Anyone who finds a hard drive in the data centre dumpster will have nothing more than a paperweight.)


The unfortunate fact about E2EE messaging is that it is hard to do. Even if you do have reproducible builds, the user is likely to make some critical mistake. What proportion of, say, Signal users actually compare safety numbers? There is no reason to worry about software integrity if the system is already insecure due to poor usability.

Sure, we should all be doing PGP on Tails with verified key fingerprints. But how many people can actually do that?


I've been making this argument for a long time, and it's never popular.

People want to believe in E2EE, it's almost like religion at this point.

Protecting people is synonymous with E2EE, even if you can't verify it and it can potentially be broken.

I was even more controversial and singled out Signal as an example: https://blog.dijit.sh/i-don-t-trust-signal/


There are good reasons not to trust Signal. The very first line of their privacy & terms page says "Signal is designed to never collect or store any sensitive information", but then they started collecting and permanently storing sensitive user data in the cloud and never updated that page. Much more recently they started collecting and storing message content in the cloud for some users, but they still refuse to update that page. I'm pretty sure it's a big fat dead canary warning users away from Signal. Any service that markets itself to whistleblowers and activists and then outright lies to them about the risks they take when using it can't be trusted for anything.

Same, my default MO is assuming 'e2ee' is broken and unsafe by default. Anything that I truly don't want sent over the wire would be discussed in person, in the dark, in a root cellar, underwater. Not that I've ever been in the position to relay juicy info like that. Hyperbole, I know, but my trust begins at zero.

With e2ee, please remember that it is important to define who the ends are.

Perhaps your e2ee is only securing your data in transit if their servers are considered the other end.

Also, one thing people seem to misunderstand is that for most applications the conversation itself is not very interesting; the metadata (who to whom, when, how many messages, etc.) is 100x more valuable.


We don't even know whether the passwords are stored in plain text.

Hah, yes I switched over as soon as they started showing the scenes behind the scenes behind the scenes.

I worked on the set of an electric shaver commercial once. I wouldn't say out loud that the production team were up themselves, but in addition to the regular crew there was a second director on set making a "making of" documentary about the production process. For a shaver commercial.


It would be good to see a real example. There’s a sketch of one in the README.md but I’d be interested to see how it works in real life with something complicated.

  > Add users with authentication

  > No, not like that

  > Closer, but I don’t want avatars

  > I need email validation too

  > Use something off the shelf?
Someone here was saying this the other day: a lot of what might seem like public commits to main are really more like private commits to a feature branch. Once everything works, you squash it all down to a final version ready for review and commit to main.

It’s unclear what the “squash” process is for “make me a foo” + “no not like that”.


Yeah the squash question is the whole thing. If your commit history is "do X" -> "no, not like that" -> "closer" then your final commit message is just "do X" with no trace of why certain approaches were rejected. Which is arguably the most useful part of the conversation.

I imagine you could use AI to create a "squash prompt" as well, then verify with a diff that the "squash commit" results in the same code.
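
A minimal sketch of that diff check, assuming a hypothetical regeneration step: the model's output from the squash prompt would be compared against the incrementally built version, and only the comparison itself is shown here (plain difflib).

```python
# Sketch of the diff check described above. The code produced by a
# (hypothetical) model call on the squash prompt is compared against
# the incrementally built version.
import difflib

def same_code(incremental: str, regenerated: str) -> bool:
    """True iff the squash-prompt regeneration matches the incremental build."""
    diff = list(difflib.unified_diff(
        incremental.splitlines(),
        regenerated.splitlines(),
        lineterm="",
    ))
    return not diff  # an empty diff means the squash reproduced the code

code = "def add_user(email):\n    return validate(email)\n"
assert same_code(code, code)              # identical regeneration passes
assert not same_code(code, code + "\nx")  # any drift is caught
```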

> It’s unclear what the “squash” process is for “make me a foo” + “no not like that”.

Commit your specs, not your prompts. When a change is correct, any information of value contained in the prompt should be reflected in the spec.


The problem I have is that when you squash code patches, the result does the same thing:

  PR.patch = a.patch | b.patch
  exec(PR.patch) = exec(a.patch | b.patch)
When you squash the spec you potentially do not get the same thing:

  PR.md = a.md | b.md
  ai(PR.md) != ai(a.md) | ai(b.md)

This is more like squashing the commit log messages, and those are typically rewritten, not merely concatenated.

In a way that matches what you describe, using modern Python as an example, the prompt is equivalent to:

  dependencies = ["foo"]
While the code itself is more like:

  $ wc -l uv.lock
  245
You need both.
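
The asymmetry above can be sketched with a toy, deterministic stand-in for patch application; no analogous composition law holds for ai(prompt), which is the point, so the model side is not shown.

```python
# Toy illustration: text patches compose deterministically, so the
# squashed patch does the same thing as the sequence. (This squash
# only works because b rewrites exactly what a produced; real patch
# squashing is git's job, not string replacement.)

def apply(text: str, patch: dict) -> str:
    """Apply a minimal search/replace 'patch'."""
    return text.replace(patch["old"], patch["new"])

a = {"old": "foo", "new": "bar"}   # first commit
b = {"old": "bar", "new": "baz"}   # "no, not like that" fix-up

src = "def foo(): ..."
sequential = apply(apply(src, a), b)

# Squash: compose the two replaces into one patch.
squashed_patch = {"old": a["old"], "new": b["new"]}
squashed = apply(src, squashed_patch)

assert sequential == squashed == "def baz(): ..."
```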

If you love helping children and also love moving small rectangles around a larger rectangle then Code Club from The Raspberry Pi Foundation is crushing it, and needs volunteers:

https://codeclub.org/


An excellent plan sir, with just two minor drawbacks…

one: what you created was amazing and we will miss you; and

two: you are dead.

— after RG/DN


I think you are right, but here's a fun counter-example. I recently bought a new robot* to do some of my housework and yet, at around 200 lbs, it required two people to deliver it (strength), get it set up (dexterity), and explain to me how to use it (intelligence).

* https://www.mieleusa.com/product/11614070/w1-front-loading-w...


You don't need a lot of imagination to predict that those jobs can be done by other robots in the not-so-far future.

Yeah, and I think that extends even to trades we see as protected because they often work in novel and unknown settings, like whatever a drunk tradesman rigged up in decades previous.

Eventually it will be more economical to just destroy all those old-world structures entirely, clear the site, and replace it with a new modular world that can be repaired by robots which no longer have to look like humans or fit into human-centric UX paradigms. They can be entirely purpose-built to the task, unlike a human, who will still be of average height and mass, with all the usual pieces and parts, no matter how they are trained.


Most of the “delivery” (getting it from the factory to its final installed location) was done by machine: forklifts, cranes, ships, trucks, and (I'm guessing) a motorized lift on the back of the delivery truck.

“In browsers, the last successful product innovations were tabs and merging search with the URL bar.”

I see the point Ben is making even though there are a lot of nerdier innovations he’s skipping over — credential management, APIs (.closest!), evergreen deployments, plugin ecosystems, privacy guards, etc.

One aspect that model execution and web browsers share is resource usage. A Raspberry Pi, for example, makes for a really great little desktop right up until you need to browse a heavy website. In model space there are a lot of really exciting new labs working on using milliwatts to do inference in the field, for the next generation of signal processing. Local execution of large models gets better every day.

The future is in efficiency.


This is sort of related to a revelation I had once I got into Home Assistant.

The usual idea is that a smart home becomes filled with smart devices, and yet what worked really well for me was having dumb devices with a very smart brain in the middle.

Buttons, switches, lamps, and sensors are commodity Zigbee devices, and the entirety of the logic and programming is done on the Home Assistant server. The downside is latency.


Usually you can bind ZigBee devices together. I have multiple IKEA "rodret" switches bound to generic ZigBee smart plugs from Aliexpress. Works great, with minimal latency.

With zha, you can bind them together from the Home Assistant device page.

I usually favor an architecture that can work without Home Assistant, such as standalone ZigBee dimmers, or contactors that can work with existing wiring. Home Assistant brings automation on top, but it doesn't matter much if it breaks (I mostly notice the shutters not opening with sunrise). Then Internet connectivity can bring additional features, but most things still work if it's down.

I'd say it has been pretty solid for years, and I don't stress too much when I have server issues.


“Based on two words separated by 16 others, the President asserts the independent power to impose tariffs on imports from any country, of any product, at any rate, for any amount of time. Those words cannot bear such weight.”

Zing! Surprisingly spicy writing for such a gravely serious body.

