I've always thought that Nagle's algorithm puts policy in the kernel, where it doesn't really belong.
If userspace applications want to make latency/throughput tradeoffs, they can already do that with full awareness and control using their own buffers, which often means fewer syscalls as well.
The actual algorithm (which is pretty sensible in the absence of delayed ACK) is fundamentally a feature of the TCP stack, which in most cases lives in the kernel. Implementing the direct equivalent in userspace against the sockets API would require an API for finding out about unacked data, and would be clumsy at best.
With that said, I'm pretty sure it's a feature of the TCP stack only because the TCP stack is the layer where they were trying to solve the problem, and it isn't clear at all that tracking "unacked data" is particularly better than a timer. And of course, if you actually do want to implement application-layer Nagle directly, delayed acks mean that application-level acking is a lot less likely to require an extra packet.
It's kind of in user space though, right? When an application creates a socket, it decides whether or not to set TCP_NODELAY on it. There isn't any kernel/OS-wide setting - it's done on a socket-by-socket basis, no?
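Right - for reference, here's a minimal Python sketch of that per-socket knob (note it's set with `setsockopt` after creating the socket, not at open time):

```python
import socket

# Create a TCP socket; Nagle's algorithm is enabled by default.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable Nagle for this socket only - a per-socket setting,
# not a kernel-wide one.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Reading the option back returns a non-zero value once it's set.
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
```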
Technically yes; practically, userspace apps are written mostly by people who either don't care, or don't want to care, about the lower levels. There is plenty of badly written userspace code that will stay badly written.
And it would be the right choice if it worked. Hell, a simple 20 ms flush timer would've made it work just fine.
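The userspace version of that idea is simple enough to sketch: coalesce small writes in a buffer and flush them as one send. This is purely illustrative (the class and names are made up for this comment, and a real version would drive `flush()` from a recurring ~20 ms timer):

```python
import socket
import threading

class BufferedSender:
    """Illustrative userspace write coalescing: small writes accumulate
    in a buffer and go out as a single send on flush. A sketch, not
    production code."""

    def __init__(self, sock, interval=0.020):
        self.sock = sock
        self.interval = interval  # a real version would flush on this timer
        self.buf = bytearray()
        self.lock = threading.Lock()

    def write(self, data):
        # Cheap append; no syscall per small write.
        with self.lock:
            self.buf += data

    def flush(self):
        # In a real implementation this would be called by a recurring
        # threading.Timer every `interval` seconds; called directly here.
        with self.lock:
            if self.buf:
                self.sock.sendall(bytes(self.buf))
                self.buf.clear()
```

With a socketpair, several small `write()` calls followed by one `flush()` arrive as a single coalesced payload.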
The tradeoff made by one program can affect another program that perhaps needs the opposite tradeoff. Thus we need an arbiter in the kernel that can decide what is more important for the system as a whole. That's my guess, anyway.
I spent around 15 years doing Java exclusively, and after discovering Go (around 2012) I couldn't wait to adopt it and have never looked back since. I don't think Go is perfect, but I find it a much better default pick for most things I've worked on compared to Java. Of course Go has limitations, especially in the type system and error handling, but rather than go back to Java for those cases where it matters most to me, I tend to choose Rust these days. It shares some of the advantages of Java (compared to Go) and also brings major advantages of its own.
I just interpret this whole thing as "developer realises their preferred set of trade-offs matches host A more than host B, so they are switching hosts". This happens fairly frequently in various directions for many different reasons.
Slightly off topic, but related: I wish we could focus less on which git host is "best" and more on figuring out workable interoperability between them. Sadly, it seems less of a technical challenge and more a question of motive.
But it's not about a git host. It's about a discussion / issues / code review host.
All it takes to host git proper is a network-accessible machine with git and ssh. It's the trivial part.
Making it convenient for you to communicate with other varied humans contributing to, or otherwise interested in the code, is the key differentiator. And apparently this is not the part SourceHut prioritizes. No wonder, because it's the hardest part.
> Making it convenient for you to communicate with other varied humans contributing to, or otherwise interested in the code, is the key differentiator. And apparently this is not the part SourceHut prioritizes. No wonder, because it's the hardest part.
Just because you don't seem to understand or agree with another person's priorities doesn't mean that they don't have them. By my reading, contributors to SourceHut absolutely do prioritize tools that humans use to communicate, and in particular ones that have been demonstrated to support complex and nuanced technical discussions.
SourceHut is new, and likely has a ton of competing priorities.
Also, different groups have different preferred styles of communication. (E.g. chat vs email vs forums is a typical divide.) Different places offer different styles, and this is great, because one size often does not fit all.
That said, most people are conditioned by using GitHub, and this sets their default expectations. Then the network effect kicks in.
I'm not sure we have full data on the question, but in my own experience the default expectation for most people regarding version control is to copy the file and rename it. The default expectation for most people regarding collaboration tooling is to send an email. If they consider such things at all.
We can easily define a niche within which GitHub-awareness can be presumed but it's certainly not "most people".
I suspect the majority of people who come to GitHub with intentions other than contributing come either to skim the README for installation instructions, or to complain about a problem. They may not even know about git. This is the wide kind of GitHub-awareness, which still assumes a level of computer literacy above that of most people.
It's also not entirely a technical issue, anyway. In a vacuum the Sourcehut UX might be fine, but if people are used to GitHub-style UX, then they will have a hard time with Sourcehut and end up doing the wrong thing, like emailing the maintainer directly rather than using the mailing lists – through no fault of the mailing lists themselves!
I agree about the discussion / issues aspects. I suspect doing that interoperably would be hard or impossible because of the vastly different feature sets on different hosts.
All I am looking for is a few interoperability features for the repositories themselves. In fact, if I were to describe the most important single feature, it would be something like this:
I am able to do some work in my repo hosted on host A, which was originally cloned from a repo on host B, and offer it back to the author on host B to merge if they wish. I'd like to be able to do this by notifying them (in a way integrated into my workflow) with the same kind of information that `git request-pull` generates. Importantly, I would not need an account on host B for this. Possibly this might need some one-off setup on the original author's side, perhaps adding my repo as a remote.
The use cases I have in mind here are mostly occasional contributions or minor changes. I don't think this would work well for people who are frequent major contributors - they would really need the discussion / issues aspects to collaborate effectively.
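To make the `git request-pull` flow above concrete, here's roughly what it looks like today, with two local directories standing in for host A and host B (paths, names and file contents are all made up for the example):

```shell
# "upstream" stands in for the original author's repo on host B;
# "mine" is my clone on host A with new work in it.
git -c init.defaultBranch=main init -q upstream
cd upstream
echo one > file.txt
git add file.txt
git -c user.name=b -c user.email=b@example.com commit -q -m "initial"
cd ..

git clone -q upstream mine
cd mine
echo two >> file.txt
git add file.txt
git -c user.name=a -c user.email=a@example.com commit -q -m "add feature"

# Produce the pull-request text: the base commit, where to fetch my
# work from, and a shortlog/diffstat for the upstream author to review.
git request-pull origin/HEAD ../mine
```

The missing piece is exactly the delivery step: today that text goes out by email, and there's no host-neutral way to hand it to host B's UI without an account there.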
The UK government almost seems to be deliberately passing multiple pieces of legislation that it knows will be overturned due to the ECHR, because it believes such rulings would strengthen its argument for withdrawing from the convention.
I think "scam" is a bit strong. It maybe offers less value in some scenarios that people assume, so perhaps offers a false sense of security.
People have been saying for many years that ticking the "encrypt at rest" box in your cloud console only protects against things like someone breaking into the data centre, and they are right. On the other hand, it's easy to do, and while arguably not helping much with actual security, it can be a cheap way of meeting policy requirements.
Although I am fortunate enough to be able to afford to subscribe to almost anything I might want to read, I don't like the idea of paywalls in general.
I believe they exclude the less fortunate from access to important resources, so I refuse to give my money to companies that use paywalls, because I see that as rewarding bad behaviour.
Instead, I always pay for sites I find valuable that don't have paywalls (e.g., guardian.co.uk).
I realise this isn't the same view that everyone has, which is fine. Vote with your money.
> I believe they exclude the less fortunate from access to important resources, so I refuse to give my money to companies that use paywalls, because I see that as rewarding bad behaviour.
I agree with what you wrote but I think it is an understatement.
I had the good fortune of spending a number of years in a great coastal community with an excellent library system; the primarily physical assets I used as a child are now mostly digitized.
Where does that leave much of the world that cannot afford these exploitative paywalls?
This is truly a diversity & inclusion issue: giving everyone the same 100-step head start many of us were fortunate to receive. Our tax dollars already funded much of it anyway.
We cannot permit the unjust enrichment of paywall tyranny!
Paywalls seem to work, but at the cost of restricting access for a large group of people. It is evident from sites like Eurogamer, Wikipedia and The Guardian that a more Patreon-like "supporter" model can work just as well without annoying people.
I really like that there is a focus on being able to deploy in k8s. Nice work!
However, some teams (ours) already run their own k8s clusters and would probably want to deploy gitlab in a namespace in there.
I hoped there would be simple example k8s manifests for doing this, but last time I checked I could only find helm charts. We don't use helm and don't want to use it.
If anyone knows of some k8s manifests I can cut and paste to get started, I'd appreciate it. Otherwise, it's going to be a job of creating it all myself, which right now is what's stopping us evaluating GitLab properly.
:wave: I'm a maintainer of the GitLab helm chart. You can certainly use `helm template` to get the output of our chart, and then make use of `kubectl` without issue. We have several large customers using this pattern, and it is expected as an option, due to Helm 2.x's Tiller component being problematic in compliance-regulated industries.
We can certainly work on surfacing this better in our documentation :thumbsup:
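For anyone wanting to try that pattern in the meantime, the rough shape is below. This is a sketch from memory of Helm 2 usage - the flags and `my-values.yaml` are illustrative, and you should check the chart's own docs for the required values:

```shell
# Render the chart to plain manifests, then apply them with kubectl -
# no Tiller needed in the cluster.
helm repo add gitlab https://charts.gitlab.io/
helm fetch gitlab/gitlab --untar

# Helm 2 syntax: --name sets the release name used in the rendered
# manifests; -f supplies your own configuration values.
helm template ./gitlab --name gitlab --namespace gitlab \
    -f my-values.yaml > gitlab.yaml

kubectl apply --namespace gitlab -f gitlab.yaml
```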
Whoops - I think you are suggesting deploying GitLab the application to your existing Kubernetes cluster. The issue, and the highlight from the blog post, was about deploying your applications (in GitLab projects) to a Kubernetes cluster.
Yes, I was talking about deploying to our existing cluster. Pretty much the only requirement for software that we run is that it must run in our cluster.
Ironically, it's often harder to deploy applications that offer their own k8s deployment approaches because they often have their own opinionated ways of using it that don't match our policies.
This looks great, and I look forward to finding time to try it soon.
However, I believe it's misleading to call it "open source". The SSPL is not generally considered to be an open source licence by any meaningful definition; in particular, it does not meet the OSI definition and is incompatible with most licences that do.
I understand the need these days to protect against aggressive cloud providers, but there are other ways to achieve that without becoming completely non-open-source, such as the BSL.
Thanks for the pointer - will check it out. We are total novices at this - we don't even have a licensing attorney. We just picked the SSPL because that's what everyone seemed to suggest to prevent the likes of AWS from cloning it. Now that we've got some visibility, we will take a careful look at this issue.
But at heart we want to build an open-source community while still being a viable business, like Mattermost, Elastic, etc.
This is, I think, Mattermost's answer to the current debate around the ethics of FAANG (and others) using open source software to make lots of money without substantially contributing back to the OS projects financially or in code. https://github.com/mattermost/mattermost-server/blob/master/...
My understanding is that Mattermost is okay with others making money from their software as long as they don't modify it - which will work in practice for some, but not all, small companies, and will be very difficult for the big companies. If the big companies want to modify and use Mattermost-server for free, they are forced to contribute the changes back to the OS project, and can then make as much money as they want. Or they can use option 2: pay Mattermost a bunch of money for the privilege of not contributing code back to the OS project. In other words, FAANG and co. can either contribute to Mattermost financially or in code - their pick.