> crypto: randomly read an extra byte of randomness in some places. (https://go-review.googlesource.com/c/go/+/64451)

Firefox has a similar testing feature called "chaos mode" that tries to shake out bugs by doing things like randomly forcing short file or socket reads or changing hashtable iteration order. Due to the increased chance of buggy behavior or performance issues, this isn't the sort of thing you would do in production (unless you're on Netflix's chaos engineering team :).
I thought an "interesting" part of the change was actually how they made the 50% chance for the extra read work. They select across a closed channel twice (one case per outcome), and it results in a mostly even distribution of chance. Here's a runnable demo with four cases:
    package main

    import "fmt"

    func main() {
        // Receives on a closed channel never block, so every case is
        // always ready and the runtime picks one uniformly at random.
        closedChan := make(chan struct{})
        close(closedChan)
        var a, b, c, d int
        for i := 0; i < 100000; i++ {
            select {
            case <-closedChan:
                a++
            case <-closedChan:
                b++
            case <-closedChan:
                c++
            case <-closedChan:
                d++
            }
        }
        fmt.Println(a, b, c, d) // each counter lands near 25000
    }
Results in an almost even distribution across the select choices!
Interesting. In contrast, the JavaScript standard says Map object iteration must return values in original insertion order. Firefox used to randomize the iteration order (to help find bugs), but (surprise!) it broke some web content. :(
I don't think this is to shake out bugs. This is just to have a more random pseudo-random number generator so that applications using the crypto package for encryption will be more secure.
One really easy way to break any encryption scheme is to crack the random number generator. This works because random number generators are deterministic and you can get them to output the exact same stream of random numbers as they did before if you have the same seed / state. This is presumably about making the output of the random number generator harder to predict.
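For reference, the linked CL boils down to roughly this (a reconstruction of the pattern, not the exact source): with ~50% probability it reads and discards one byte from the random source, so callers can't bake in assumptions about exactly which bytes they'll get:

    package randutil // sketch; the real code lives in the crypto packages

    import (
        "io"
        "sync"
    )

    var (
        closedChanOnce sync.Once
        closedChan     chan struct{}
    )

    // maybeReadByte reads one byte from r with ~50% probability, so
    // callers of crypto APIs cannot rely on a deterministic stream.
    func maybeReadByte(r io.Reader) {
        closedChanOnce.Do(func() {
            closedChan = make(chan struct{})
            close(closedChan)
        })
        select {
        case <-closedChan: // ~50%: read nothing
            return
        case <-closedChan: // ~50%: consume one byte
            var buf [1]byte
            io.ReadFull(r, buf[:])
        }
    }

Either way, the effect is that the stream of bytes a caller sees is no longer reproducible from run to run.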
Wow, vgo seems pretty reasonable. ...I have to take one of my "damn it, go" complaints back.
Interesting to see their critique of bundler et al. AFAICT, isn't vgo's minimal-version selection just old-school Maven resolution plus the semver major version in the import path?
Tools to auto-bump semver by examining the code's own public API would be nice.
The "go release" command is intended to support selecting the right version for a new package release. The golang.org/cmd/api command detects incompatible API changes and so can serve as the basis for an auto-bump feature.
I'd also like to see a command to generate a new major version as a wrapper of the previous major version (or vice versa), to simplify creating major versions that can coexist within the same program.
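Something like this, perhaps; a minimal sketch where example.com/mypkg and Do are entirely hypothetical:

    // Hypothetical v2 of a package, forwarding to v1. Because the major
    // version is part of the import path, both majors can coexist in
    // one build without conflict.
    package mypkg // imported as "example.com/mypkg/v2"

    import v1 "example.com/mypkg"

    // Do preserves the v1 behavior behind the v2 API.
    func Do(x int) int { return v1.Do(x) }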
[EDIT] Will leave the questions up. I have delved deeper into this myself and my questions are basically answered, but I am surprised at the route taken. It seems like everyone should be including the major version in their import path from the very beginning, and in every subsequent version, which seems... ehhhhh.
------------------
Not a Go dev:
Is the mantra of "the new package must be backwards compatible with the old package", which is an underlying requirement for vgo to work, actually followed in the Go ecosystem? It seems crazy to me that a package would have to change its name if it wants to introduce a backwards-incompatible change.
Has this happened in practice? How does it work? Are people going to suffix packages with "Really major major" version numbers separately from the actual version number?
The criticisms of bundler in the proposal seem a bit weird when the solution is to force a new paradigm on developers just to make vgo's job easier.
Why even use semantic versioning at that point, since the major version can never be incremented?
In all big packages I use: yes, it is. This is actually a must, given there was no (standard) version-management system. If you have no versioning system and don't keep your code backwards compatible, you break someone else's code. So you either make a new repository, or you make a subpackage in your current repository with the new, breaking version.
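The yaml package is probably the best-known example of this convention: each breaking major lives at its own gopkg.in path (gopkg.in/yaml.v1, gopkg.in/yaml.v2), so a program can even link both at once. A small sketch against the v2 path:

    package main

    import (
        "fmt"

        // The major version is baked into the import path; the older,
        // incompatible API stays available at gopkg.in/yaml.v1.
        yaml "gopkg.in/yaml.v2"
    )

    func main() {
        out, err := yaml.Marshal(map[string]int{"answer": 42})
        if err != nil {
            fmt.Println("marshal failed:", err)
            return
        }
        fmt.Print(string(out)) // prints "answer: 42"
    }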
The free lunch is over for me. Each of the past four releases delivered a 3-5% performance boost to my Go application. Go 1.11beta1 marks the end of that; it is sadly 1-2% slower than Go 1.10.
Not complaining, just hoping to hear numbers from other developers running it against their production software...
I think I just realized something about golang as I was walking down the hall. It's something that confuses many people, and, unable to make this connection, many respond with unwarranted emotion.
Since Go's interfaces are satisfied structurally (compile-time duck typing), having certain methods is effectively a type annotation marking a given type as implementing an interface. Many programmers, failing to realize this, become outraged at "boilerplate." It is a design trade-off forcing the programmer to spell out every case when implementing an interface, in much the same way that golang's error handling eschews tools that let you create clever catch-alls and requires you to handle everything explicitly. It's just the "no magic" trade-off.
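A minimal illustration (the types and names here are mine, not from any real code base): a type satisfies an interface simply by having the right method set; there is no "implements" declaration anywhere.

    package main

    import "fmt"

    // Stringer is a local stand-in; any interface works the same way.
    type Stringer interface {
        String() string
    }

    type point struct{ x, y int }

    // point satisfies Stringer implicitly, just by having this method.
    func (p point) String() string {
        return fmt.Sprintf("(%d, %d)", p.x, p.y)
    }

    func main() {
        var s Stringer = point{1, 2} // compiles: the method set matches
        fmt.Println(s)               // (1, 2)
    }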
That's not my experience: checking for errors is completely optional. I'm probably just a bad, lazy programmer that leans too hard on the compiler. I like to catch everything and then refine error handling as I get more understanding of the failure modes (I'm from Java; I avoid RuntimeException).
Go makes me feel like I need to understand every possible error case for every line as I type it. I feel like I have to handle it right now, forever, or I'll forget that line can produce an error. I really miss being able to gradually refine error handling, because there I could rely on the compiler to remind me about all the stuff that can go wrong.
Indeed, Go's position is that your style of programming produces fragile programs, and that it is better to consider the failure modes of each expression as you type it, rather than afterwards.
    if err != nil {
        return err
    }

is ceremony that requires no thought and adds no value. There should be sugar for this so it doesn't completely swamp the tiny minority of cases that actually do something.
You can enrich err.Error(), but when you do you lose the type of the original error and any other fields it may have had. This is something the language can and should do more reliably (i.e., sugar could mutate err.Error() and prepend "MyStruct.myMethod failed: ", or better yet add a damn stack trace).
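For reference, the two common workarounds at the time, in a made-up function (loadConfig and the file path are hypothetical; github.com/pkg/errors is a real third-party library):

    package main

    import (
        "fmt"
        "os"

        "github.com/pkg/errors" // popular wrapping library
    )

    // fmt.Errorf adds context but flattens err into a string: the
    // concrete *os.PathError and its fields are gone for callers.
    func loadConfig(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return fmt.Errorf("loadConfig failed: %v", err)
        }
        return f.Close()
    }

    // errors.Wrap keeps the original error reachable via errors.Cause
    // and records a stack trace at the wrap site.
    func loadConfigWrapped(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return errors.Wrap(err, "loadConfigWrapped failed")
        }
        return f.Close()
    }

    func main() {
        err := loadConfigWrapped("missing.conf")
        if pe, ok := errors.Cause(err).(*os.PathError); ok {
            fmt.Println("underlying op:", pe.Op) // prints "open"
        }
    }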
> ...is ceremony that requires no thought and adds no value.
Aw, hell no! That's a very easily detectable (anti)pattern! You can scan even a very large code base for that! As noted in other comments, you should be adding some contextual information at this point.
Contrast this with having too much magic, where you might have absolutely no trace left in the code. You can't usefully search a large code base for all the relevant blanks.
This is generally a bad idea in any production code. You normally want to add context to the error so that your logging captures the issue at the very minimum. You may want to do cleanup in the function etc. I almost never have written this style of code in many years now.
> Depth first, you make the happy path work, then refine error handling. Breadth first, you ensure everything is handled at every moment.
That's correct. I've heard people describe Go as wanting you to code the sad path first, and then backfill the happy stuff (the business logic) once the error handling is complete. This tracks with my experience. That is, when I program that way, I feel like I'm in sync with the language.
It's a real trade-off. I think for more interactive software, a "fuck it and throw an exception" approach can be reasonable. For server software it's death. (You need to service the request to the best of your ability, and make sure that you are handling any failures in the best possible way.)
For interactive software it can be not-so-bad, because it's something like "oh, I passed in the wrong file" or "oh, that menu item is broken." You as a user are entirely in control of the inputs, and the state is usually just a file or two, so it's easy to just try again until things work.
But for something running on a server somewhere, many inputs are beyond the user's control and the state is long-lived and fragile (if my HN or FB account gets somehow b0rked, that's a serious problem for me), so it's important to handle all possible error conditions in the best possible way. In my experience this is something exceptions make very hard to do.
(And I would still prefer explicit error handling even for front-end software, but I think the argument for exceptions is at least stronger there.)
Writing most of your frontend/backend business logic in Go is one possibility, but I think that easily moving complex logic to the web as a platform (especially for web demos; see these examples[0] with gopherjs) is the most compelling one. Generally, I'm excited that this democratizes the web programming space, because the best use cases are often the ones no one has thought of yet.
Ok, ninja edit, this whole comment was based on a misunderstanding and adds no value to the discussion. I've been corrected below, the rest does not need to be preserved. Downvote it to the bottom for me please; I can't seem to delete it.