It appears to be ~3-4 ms faster when counting repositories like redis, which is a rather nice free speed boost for me. Probably related to the GC changes listed here: https://golang.org/doc/go1.12#runtime
I'm impressed every release by Go's GC improvements. IIRC, they started with a terrible conservative GC. It was slow and (particularly on 32-bit platforms) ineffective. But then they made it increasingly precise, then targeted 10 ms pause times, then sub-millisecond, and so on. All with far fewer knobs than the Oracle JVM.
I understand there's still a cost compared to non-GCed languages, but I think it's mostly RAM usage (perhaps affecting CPU cache effectiveness and thus efficiency) rather than tail latency.
I feel that Go really excels at getting out of your way. I rarely find myself fighting with the language or runtime. The few times I have, it was related to native interop and generally meant looking at the internal fields of slices or strings to modify data efficiently, and even that isn't too difficult.
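For example, the sort of internal-field poking I mean looks roughly like the sketch below: zero-copy conversion between string and []byte via reflect's StringHeader/SliceHeader. The function names are mine, and it depends on those headers matching the runtime's internal layout, so treat it as illustrative rather than recommended practice.

```go
package main

import (
	"fmt"
	"reflect"
	"unsafe"
)

// bytesToString reinterprets a []byte as a string without copying.
// Only safe if the byte slice is never mutated afterwards.
func bytesToString(b []byte) string {
	return *(*string)(unsafe.Pointer(&b))
}

// stringToBytes builds a []byte view over a string's backing array.
// Mutating the result is undefined behavior: string data is immutable.
func stringToBytes(s string) []byte {
	sh := (*reflect.StringHeader)(unsafe.Pointer(&s))
	var b []byte
	bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
	bh.Data = sh.Data
	bh.Len = sh.Len
	bh.Cap = sh.Len
	return b
}

func main() {
	fmt.Println(bytesToString([]byte("hello")))
	fmt.Println(len(stringToBytes("world")))
}
```

The point is just that these escape hatches exist when profiling says a copy matters; most code never needs them.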
I often find myself fighting with the language, but the runtime more than makes up for it.
In particular the GC and thread scheduler are great. The fact that they are accessible and digestible by the average Go dev is impressive, and a testament to their authors and their focus.
> But then they made it increasingly precise, then targeted 10 ms pause times, then sub-millisecond, and so on. All with far fewer knobs than the Oracle JVM.
The number of "knobs" in Go's GC is the same as in G1: max pause time and max memory usage. (HotSpot's settings just have more intuitive names than Go's.) More importantly, Go's GC sacrifices a great deal of throughput in pursuit of low latency at all costs, which is not a good tradeoff for most applications.
I haven’t checked since 1.10, but at that time you could definitely see what I presume was the impact of the lack of compaction on heap allocations over time. That is, allocations would take longer the longer your program ran.
This is mitigated quite a bit by how much gets stack allocated, but I imagine there are workloads that this hampers a lot, in particular large in-memory data sets.
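As a rough illustration of the stack-allocation point (function names are mine, and you can check what the compiler decides with `go build -gcflags=-m`):

```go
package main

import "fmt"

// sumLocal keeps its buffer on the stack: escape analysis can prove
// buf never outlives the call, so the GC never sees it.
func sumLocal() int {
	buf := [4]int{1, 2, 3, 4}
	total := 0
	for _, v := range buf {
		total += v
	}
	return total
}

// newBuf forces a heap allocation: the slice escapes via the return
// value, so the GC must manage it. Long-lived data like this is what
// the lack of compaction can affect.
func newBuf() []int {
	return make([]int, 4)
}

func main() {
	fmt.Println(sumLocal(), len(newBuf()))
}
```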
That’s not to disagree with your point; the GC improvements are great for my workloads too. It's only to point out that they come at a cost.
There are GC paradigms that do compaction and do not stop the world (https://www.azul.com/products/zing/). They are expensive in other dimensions though (actual cash).
Again, I'm not disagreeing with the choices the Go team is making on GC; they work great for my workloads, but they aren't without trade-offs for other kinds of workloads.
Also, this release introduces ABI versioning; it should be a first step toward a register-based calling convention, which could deliver a ~5% performance boost.