
> HTTP/3 was designed for unstable connections, such as cellphone and satellite networks.

Satellite networks are not a good example. Regular HTTP/1.1, paired with a PEP (performance-enhancing proxy), outperforms HTTP/3 by an order of magnitude.


The specific thing they’re talking about is connection migration (and resumption) across multiple disconnection events — HTTP/1 and 2 do not offer a similar feature.


[flagged]


You will need to back up that accusation, because now you are accusing Daniel Stenberg of being a liar. On paper, HTTP/3 should perform much better than HTTP/1.1 and especially HTTP/2 on links with high packet loss, since TCP handles packet loss very poorly.

https://http3-explained.haxx.se/en/why-quic


[flagged]


> you folks just don't even bother to learn TCP first before shitting on it.

Let me stop you right there. I promise you you're not the only person who really knows how TCP works. The people who made HTTP2 and HTTP3 are clearly smart, knowledgeable folks who have a different perspective than you do. It's OK to disagree with them, but it's a bad look for you to assume that they're ignorant on the subject.


I didn't assume they are ignorant. I assumed that they are fraudulent. They knew they couldn't really improve existing protocols because it's simply not possible, but moved forward anyway for personal gain. Just like everything from Google for the last 20 years: you make a big splash with a new and 'revolutionary' sixth version of instant messaging, get your promo, and move on. And here we are. HTTP2 ran its course; time for HTTP3, because we need the promotions and the clout of 'innovators'.


Not to disagree, but there is something to be learned from failing.


They didn't fail. They got their promotions. It's you, the end user, who is left holding the bag. But fear not, HTTP3 is on the horizon and this time it's going to be glorious!


> Also TCP handles packet loss just fine

That's just plain wrong. I commented in more depth in https://news.ycombinator.com/item?id=39709591. In short, TCP treats packet loss as a congestion signal and slows down. If the packet loss was due to congestion, that's absolutely the correct response and it increases TCP's "goodput". But if the packet was lost due to noise, then it has the opposite effect and goodput plummets to a fraction of what the link is capable of.
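The effect is easy to see in a toy model (not a real TCP stack): additive-increase/multiplicative-decrease congestion control reacting to periodic loss that is *not* caused by congestion. The function name and parameters below are invented for illustration.

```python
def aimd_goodput(loss_every_n_rtts, capacity=100, rtts=1000):
    """Average segments delivered per RTT when every Nth RTT loses a packet.

    Toy AIMD model: halve the window on loss, grow by one segment per RTT
    otherwise. `capacity` is what the link could carry per RTT.
    """
    cwnd, delivered = 1, 0
    for rtt in range(1, rtts + 1):
        delivered += min(cwnd, capacity)
        if rtt % loss_every_n_rtts == 0:
            cwnd = max(1, cwnd // 2)        # loss -> multiplicative decrease
        else:
            cwnd = min(cwnd + 1, capacity)  # no loss -> additive increase
    return delivered / rtts

# Frequent noise-induced loss keeps the window oscillating far below
# the link capacity of 100 segments per RTT; rare loss barely matters.
print(aimd_goodput(loss_every_n_rtts=4))    # a small fraction of capacity
print(aimd_goodput(loss_every_n_rtts=500))  # close to capacity
```

The point is that the sender's reaction, not the lost bytes themselves, is what destroys goodput on a noisy link.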


In the Wireshark logs that I saw, TCP almost immediately resends the lost packet, and slows down exponentially after the resent packet is lost as well.


That HTTP2 is the worst of the bunch is a given. But HTTP3 should, on paper, be able to handle packet loss better than HTTP1.1 and way better than HTTP2.

And SACK does not seem to help under my real-life workloads. Maybe poor implementations; I don't know.


Just a few years ago HTTP2 was the best thing since sliced bread and any criticism was silenced. This begs the question: if HTTP2 was so great, then why did they come up with HTTP3? SACK is not a silver bullet, because when you have a high-latency, high-loss link, nothing really helps. The difference is that the HTTP2/3 folks like to deny reality and claim that they can do better when in fact they can't.


I don't have uniformly high latency; I have high packet loss, and high latency on some packets but not most. And that is something TCP cannot handle without breaking down totally, but some UDP-based protocols can handle it just fine. I don't know about HTTP3, though; that might also fail under those circumstances.


I suspect your case is not that the packets are simply dropped but that the TCP connections are reset. Look up your TCP stack statistics to verify. If that's the case, try to find out whether the resets are made by your side, the source, or the intermediaries.


I was actually curious why SACKs don't resolve the issue, but according to https://stackoverflow.com/questions/67773211/why-do-tcp-sele...:

> Even with selective ACK it is still necessary to get the missing data before forwarding the data stream to the application.


Yes, TCP provides the guarantee that your application will always receive data in the same order it was sent. Your kernel will do the necessary buffering and packet reordering to provide that guarantee.

So SACK might reduce packet resends, but it doesn't prevent the latency hit that comes from having to wait for the data that went missing — even if your application is capable of handling out-of-order data, or simply of tolerating missing data.
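A minimal sketch of the receive side makes this concrete (a simplified model, not real kernel code): segments may arrive, and be selectively acknowledged, out of order, but nothing past a hole is handed to the application until the hole is filled.

```python
class ReassemblyBuffer:
    """Toy model of a TCP receive buffer enforcing in-order delivery."""

    def __init__(self):
        self.next_seq = 0   # next byte offset the application may read
        self.segments = {}  # seq -> data, held until the hole is filled

    def receive(self, seq, data):
        """Store a segment; return whatever is now deliverable in order."""
        self.segments[seq] = data
        out = b""
        while self.next_seq in self.segments:
            chunk = self.segments.pop(self.next_seq)
            out += chunk
            self.next_seq += len(chunk)
        return out

buf = ReassemblyBuffer()
assert buf.receive(0, b"AAAA") == b"AAAA"      # in order: delivered at once
assert buf.receive(8, b"CCCC") == b""          # hole at seq 4: buffered, app waits
assert buf.receive(4, b"BBBB") == b"BBBBCCCC"  # hole filled: both released
```

SACK only tells the sender which segments made it; the receiver's buffer still stalls exactly like this until the retransmission arrives.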


It's possible to build something similar on top of TCP; see Minion [0] for an example. There are multiple reasons why this is less practical than building on top of UDP, the main two being, from my perspective: (1) it requires cooperation from the OS (either in the form of an advanced API, or a privilege level high enough to implement TCP manually), and (2) it falls apart in the presence of TCP middleboxes.

[0] https://dedis.cs.yale.edu/2009/tng/papers/nsdi12-abs/


Which is exactly the same in HTTP2/3. You can't forward a stream when it's missing some data.


Yes, bytes from a *logical* stream need to be delivered in order. But in HTTP2 (3), multiple logical streams are multiplexed on top of one physical TCP (QUIC) connection. In the HTTP2 case this means that a dropped segment from logical stream A will block delivery of subsequent segments from a different logical stream B (which is bad for obvious reasons). QUIC doesn't have this problem, which is a large part of its value proposition.
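The difference can be sketched with a toy comparison (not actual HTTP2/QUIC code; the frame layout is invented): frames for streams A and B are multiplexed, and one frame for A is missing. Over a single ordered byte stream, everything behind the hole waits; with per-stream ordering, only the stream that owns the missing frame stalls.

```python
# Frames in send order: (stream id, frame number within that stream).
frames = [("A", 0), ("B", 0), ("A", 1), ("B", 1)]
arrived = {("A", 0), ("B", 0), ("B", 1)}  # frame ("A", 1) was dropped

def deliverable_tcp_like(frames, arrived):
    """One shared delivery order: stop at the first missing frame."""
    out = []
    for f in frames:
        if f not in arrived:
            break              # head-of-line: everything behind the hole waits
        out.append(f)
    return out

def deliverable_quic_like(frames, arrived):
    """Per-stream delivery order: each stream stalls only at its own hole."""
    out, blocked = [], set()
    for f in frames:
        if f[0] in blocked:
            continue
        if f not in arrived:
            blocked.add(f[0])  # only this stream stalls
            continue
        out.append(f)
    return out

assert deliverable_tcp_like(frames, arrived) == [("A", 0), ("B", 0)]
assert deliverable_quic_like(frames, arrived) == [("A", 0), ("B", 0), ("B", 1)]
```

Stream B's second frame is deliverable under QUIC-like semantics even though a frame for stream A is still in flight.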


Except it doesn't work in practice, and real-world data proves it. Multiplexing streams inside a single TCP connection doesn't magically make your data link less prone to dropped packets or high latency.


I am curious why the kernel does not allow it; there could be an API that gives you fragments of the stream in "events" like {slice: [10000, 10100], data: <blob>} and lets the application have a peek at future data.
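No mainstream kernel exposes such an interface, but the shape of the API the parent describes could look like the following sketch (all names hypothetical; a user-space mock, not a syscall):

```python
from dataclasses import dataclass

@dataclass
class SliceEvent:
    """One out-of-order fragment, surfaced as soon as it arrives."""
    start: int   # byte offset of the fragment within the stream
    end: int     # exclusive end offset
    data: bytes

def slice_events(received_segments):
    """Turn raw (offset, data) arrivals into events in *arrival* order,
    roughly the {slice: [10000, 10100], data: <blob>} idea above."""
    for offset, data in received_segments:
        yield SliceEvent(start=offset, end=offset + len(data), data=data)

# A segment at offset 100 arrives before the 0..100 hole is filled,
# yet the application already gets to peek at it:
events = list(slice_events([(100, b"x" * 50), (0, b"y" * 100)]))
assert (events[0].start, events[0].end) == (100, 150)
assert (events[1].start, events[1].end) == (0, 100)
```

The hard part in a real kernel would be ownership of the receive buffer: once the application has peeked at bytes past a hole, the stack must still keep them for in-order delivery (or redefine the read semantics entirely), which is presumably part of why no such API exists.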


The issue you linked to is an excellent example of why everyone and their dog is becoming a CNA these days. It's the only way to keep CVE spam at bay. The system has been broken by the gamification of CVEs and is in desperate need of reform.


Isn't the QUIC transport the default since v0.6.0? Compared to TCP, it's much better suited for satellite communications.


QUIC is not suited for satellite communications. There are good custom protocols, though, that are FEC-heavy.
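The building block behind FEC-heavy protocols can be sketched with simple XOR parity (a minimal illustration, not any particular satellite protocol): send one parity packet per group so that any single lost packet can be rebuilt locally, instead of paying a retransmission round trip that costs on the order of 600 ms over GEO.

```python
def xor_bytes(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    """Parity packet = XOR of all packets in the group (equal sizes assumed)."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return parity

def recover(received, parity):
    """Rebuild the single missing packet from the survivors plus parity."""
    missing = parity
    for p in received:
        missing = xor_bytes(missing, p)
    return missing

group = [b"pkt1", b"pkt2", b"pkt3"]
parity = make_parity(group)
# Packet 2 is lost in transit; the receiver rebuilds it without a retransmit:
assert recover([group[0], group[2]], parity) == b"pkt2"
```

Real FEC schemes use stronger codes (Reed-Solomon, RaptorQ) that tolerate multiple losses per group, but the trade is the same: spend extra bandwidth up front to avoid latency-expensive retransmissions.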


Anything that involves high latency and/or packet loss. And in general: testing servers.

The question is also a bit odd, to be honest. There's no need to look for a beneficial use case to add support for the next iteration of the HTTP protocol.




Yeah. tldr:

- Both Caddy and nginx are really fast.

- Both can handle high traffic loads.

- Caddy and nginx have different failure modes. Nginx will drop requests. Caddy will hang onto them and serve them even if it takes longer.

(Note that "Caddy" is basically equivalent to "Go" -- we use its std lib HTTP implementation. It's really comparing Go and C.)


You could try to force QUIC which should handle latency a bit better. The priority of TCP/QUIC will be configurable with the upcoming v1.23.5 release:

https://github.com/syncthing/syncthing/pull/8868


OT, but I wish they'd add Mastodon support.

https://github.com/caronc/apprise/issues/586


Mastodon Support added


Let me see what I can do about prioritizing this one a bit.


That would be great! I've only had a quick look at the docs, but the API seems reasonable:

https://docs.joinmastodon.org/methods/apps/oauth/ https://docs.joinmastodon.org/methods/statuses/


It's not the sending of the messages that has me stumped; it's the setup of the server (the hosting of it itself).

Do you know of a public service I could build/test the Apprise plugin against? I would love even temporary access to a server to perfect its design (then my account could be terminated).


You don't need to host a Mastodon instance yourself. Plenty of them are open for registration :)

e.g. https://masto.ai/ or https://fosstodon.org


Thank you! I feel silly for asking when the answer really was right there in my face. I was able to sign up with a service. Apprise will hopefully have Mastodon support next!


There is quic-go [1], but I don't think that it's sufficiently optimized [2-4] to be used for this kind of workload. Caddy will use it to provide HTTP/3 by default [5] in the upcoming 2.6.0 release.

[1]: https://github.com/lucas-clemente/quic-go
[2]: https://github.com/lucas-clemente/quic-go/issues/2877
[3]: https://github.com/lucas-clemente/quic-go/issues/2607
[4]: https://github.com/lucas-clemente/quic-go/issues/341
[5]: https://github.com/caddyserver/caddy/pull/4707


I have also been struggling lately with prepared statements and obscure query planning decisions. The developers of the mssql JDBC driver seem to be determined to force the use of prepexec in their driver, which causes high execution times for typical ORM-generated queries:

https://github.com/microsoft/mssql-jdbc/issues/1196

