Hacker News | wtarreau's comments

In my opinion it's the opposite. This type of association is welcome, and they're fine for promoting free software and helping people, but they are exactly like neighborhood associations: they're mostly local, relying on volunteers with limited time who become a new dependency for people who were not using these services before. That's fine for limited use cases but it doesn't scale at all, and it causes a huge duplication of effort (organization, software creation, advertising, etc.), with several of them reinventing stuff that already exists. And associations rarely if ever merge, because association founders most often have a very clear (often idealistic) view of what they're seeking, and are rarely willing to compromise and adopt another association's mostly similar but not exactly identical goals (it often works very much like political parties). GAFAMs would never have existed under this model.

The problem is that such services, proudly run on a low budget with volunteers and recycled hardware, cannot be relied on by companies without risking legal trouble in case of a major incident. So a higher-grade service is still needed, with dedicated funding, and we're back to fragmentation. We must not reproduce the Cloudwatt scheme either: too much money injected into a pipe dream, mostly used to pay lots of consultants who showed up just to confirm their presence and collect their check.

What is needed instead is to sponsor the development of such activities by a few (2-3) well-established competing companies, so as to avoid the usual risk of a monoculture that diverges from what users expect, and to help them reach the point where their offerings can compete with GAFAM's for both end users and enterprises. The contract should make clear that services must rely on open formats, that departing users must be able to retrieve all their data, that software developed under such funding must be open source (though technology acquisition is fine), and that these offerings must become self-sustaining at some point (i.e. a mix of free and paid services). The EU funders should hold enough shares in these activities that their permission is required for a business acquisition, and they can restrict acquisitions to EU-based companies, so that such companies can still grow and seek public funding.

What we need is a few durable big players, not 10,000 incompatible associations each with their own software suite, that no enterprise can trust over the long term, that cannot resist a trivial DDoS for lack of robust infrastructure, and that are not organized enough to run full-stack security audits to make sure user data is properly protected. Those are fine only for friends and family, but that's not what we're missing the most (the proof being that they already exist).


No, it first needs to encourage local investment. Companies that seek investors or get sold do not do it for pleasure, but as a last resort before dying. And in the EU you don't get any offer to save a company with limited commercial activity; many companies die every day for lack of money. It turns out that the US and others are willing to take much more risk and invest enough money to transform such fragile companies into durable ones. And in the case of software it's great, because software is sold all over the world, and the income serves to hire more local people, so in the end it's a way to really develop EU sales to the rest of the world. It would clearly be better if the investors were EU-based, but it's better than nothing that some investors are willing to risk their money on such companies.


It just doesn't work for me: it says "here's the combined image" after ~10s but shows nothing at all. Maybe it's already a victim of its own success?


Blaming the audience makes sense because, after all, they're the ones not getting the message and not asking the presenter to explain it better. But it remains the presenter's failure for not catching their attention and delivering a clearer message.

Every time I gave a presentation, I tried to analyze the failures (including listening to myself when it was recorded, a really painful experience). Certain mistakes, such as slides on a white background that make attendees look at the screen and read instead of watching and listening to the presenter, can be devastating, simply because attendees are naturally attracted by light. It's not the audience's fault, it's the presenter's (and to some extent the tools in use). A good exercise is to pause the slides from time to time during the presentation (i.e. switch to a black one); you'll be amazed how much attention you suddenly capture, as if you were at a theater. It even catches the attention of those who were looking at their smartphones, because the light in the room suddenly changes.

Another difficulty, specific to native English speakers, is that many of them initially underestimate how hard it is for the audience to catch certain expressions (with some people it's very hard to distinguish "can" from "can't", for example, which complicates understanding), idioms, or references to local culture, because such things are part of their daily vocabulary. Of course, after a few public talks, when questions at the end prove there were misunderstandings, they realize that speaking more slowly, articulating a bit more, and avoiding such references does help non-native listeners. Conversely, when you present in a language that is not yours, you stick to a very simple vocabulary, using longer sentences to assemble words into a non-ambiguous meaning. It probably sounds boring to native speakers, but the message probably reaches the audience better.

In any case, it is always the presenter's failure when a message is poorly delivered, and their responsibility to try to improve, however difficult that is. It's just important never to give up.


Thanks for the pointer, it looks particularly interesting. I'm not good with the terminology and it always takes me a while to figure out which properties we're talking about starting from a name. But the times reported in the article look pretty good and are certainly worth considering. One difficulty I'm facing with ebtrees and strings is that I need the position of the first difference, which strcmp() doesn't give you, and if you reimplement it yourself to match multiple bytes at once (I already did), it will quickly upset valgrind and the sanitizers for reading out of bounds. That makes such functions annoying to adopt in various projects, as they require more customization. So I kept the hand-crafted one-byte-at-a-time comparison, but figured that other approaches based on just a comparison (strcmp, as used in rbtrees) could actually be a win for this reason.
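To illustrate what I mean (the name strcmp_pos is mine, not from any existing library), a minimal one-byte-at-a-time comparison that also reports the offset of the first difference could look like this:

```c
#include <stddef.h>

/* Hypothetical helper: behaves like strcmp() (<0, 0, >0) but also
 * stores in *pos the offset of the first differing byte (or of the
 * common NUL when the strings are equal). Reading one byte at a time
 * never goes past the end of either string, so valgrind and the
 * sanitizers stay quiet, at the cost of the multi-byte tricks
 * mentioned above. */
static int strcmp_pos(const char *a, const char *b, size_t *pos)
{
    size_t i = 0;

    while (a[i] && a[i] == b[i])
        i++;
    *pos = i;
    return (unsigned char)a[i] - (unsigned char)b[i];
}
```

The returned position is exactly what an ebtree insertion needs to know at which bit/byte the new key diverges from the existing one.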


The problem is not the performance of the low-level crypto code IMHO, but how it interfaces with the rest, where you cross a myriad of locks (and atomic ops in newer versions) that cost a lot as soon as you're interested in using more than one CPU core :-/


Just to be clear, we don't care at all about 1.0's performance. The tests behind the pretty telling graphs were run on 1.3 only, as that's what users care about.


Absolutely. Sometimes when using OpenSSL in performance tests, you notice that performance varies significantly just by switching to a different memory allocator, which is totally scary.

I hadn't seen the conversation above, thanks for the pointer. It's surreal. I don't see how supporting multiple file formats requires so many allocations. In the worst case you open the file (one malloc and occasionally a few reallocs) and you try to parse it into a struct using a few different decoders. I hope they're not allocating one byte at a time when reading a file...
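For the sake of illustration, the "one malloc per file" approach I'm describing could be sketched like this (my own toy code, error handling simplified, not anyone's actual implementation):

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch: load a whole file into a single heap buffer with one
 * malloc, NUL-terminate it, and let the various format decoders
 * parse from memory afterwards. Returns NULL on any error, and
 * stores the file length in *len on success. */
static char *slurp(const char *path, long *len)
{
    FILE *f = fopen(path, "rb");
    char *buf = NULL;

    if (!f)
        return NULL;

    if (fseek(f, 0, SEEK_END) == 0 && (*len = ftell(f)) >= 0 &&
        fseek(f, 0, SEEK_SET) == 0 &&
        (buf = malloc(*len + 1)) != NULL) {
        if (fread(buf, 1, *len, f) != (size_t)*len) {
            free(buf);
            buf = NULL;
        } else {
            buf[*len] = '\0';   /* convenient for text decoders */
        }
    }
    fclose(f);
    return buf;
}
```

Each candidate decoder then just gets a pointer and a length; no per-byte or per-token allocation is needed while reading.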


Not to mention the catastrophic security that comes with these systems. On a local Ubuntu machine, I found exactly 4 different versions of the sudo binary: one in the host OS and 3 in different snaps (some were identical, but there were 4 distinct versions in total). If they had a reason to differ, it was likely bug fixes, but not all of them were updated, meaning that even after my main OS was updated, there were still 3 bogus binaries exposed to users and waiting for an exploit to happen. I find this the most shocking aspect of these systems (and I'm really not happy with the disrespect for my storage, as you mention).


Why do snaps have sudo at all?


The sudo binaries in the snaps are likely to have their SUID bit stripped, so they won't cause any trouble even if they have known vulnerabilities.
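For what it's worth, whether a given copy still carries the set-uid bit is easy to verify; a tiny sketch in C (paths purely illustrative):

```c
#include <sys/stat.h>

/* Returns 1 if the file at `path` has the set-uid bit, 0 if not,
 * and -1 if it can't be stat'ed. A sudo copy without S_ISUID runs
 * with the caller's privileges, so known vulnerabilities in it
 * can't be used to escalate. */
static int is_suid(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0)
        return -1;
    return (st.st_mode & S_ISUID) != 0;
}
```

The same check is a one-liner with `ls -l` (look for the `s` in the owner-execute position), but the point stands either way: no SUID bit, no privilege escalation through that copy.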


Yep. Actually H1/H2/H3 all have the same head-of-line blocking problem (remember the good old days when everyone was trying to pipeline over H1?), except that H1 generally comes with multiple connections, and H3 runs over QUIC, and it's QUIC that addresses HoL blocking by letting streams progress independently.

