Go 1.11 Beta 1 is released (groups.google.com)
134 points by _lwad on June 26, 2018 | 70 comments


This change is interesting:

crypto: randomly read an extra byte of randomness in some places.

https://go-review.googlesource.com/c/go/+/64451

Firefox has a similar testing feature called "chaos mode" that tries to shake out bugs by doing things like randomly forcing short file or socket reads or changing hashtable iteration order. Due to the increased chance of buggy behavior or performance issues, this isn't the sort of thing you would do in production (unless you're on Netflix's chaos engineering team. :)


I thought an "interesting" part of the change was actually how they made a 50% chance work for the extra read. They select over two cases receiving from the same closed channel -- since every case is always ready, the runtime's random case selection produces a mostly even distribution.

    // closedChan is already closed, so a receive from it is always
    // ready, and the runtime picks among the ready cases at random.
    closedChan := make(chan struct{})
    close(closedChan)
    var a, b, c, d int
    for i := 0; i < 100000; i++ {
        select {
        case <-closedChan:
            a++
        case <-closedChan:
            b++
        case <-closedChan:
            c++
        case <-closedChan:
            d++
        }
    }
Results in an almost even distribution across the select choices!

https://play.golang.org/p/cnAXBEZJm2B

(fixed playground link)


You're printing `b` three times in that Playground, which makes it look skewed.

    a: 25100
    b: 25080
    c: 25080
    d: 25080
vs

    a: 24973
    b: 25010
    c: 25013
    d: 25004


So it's deterministic. I got the same numbers.


On the Playground or on your PC? The Playground has an execution cache, so the same input code always gives the same (thus "deterministic") result.


Ah OK, it was on the playground. Is it deterministic when not cached?


It is not deterministic on playground if it is not cached.

You can add dummy code to the file to break free from the cache.


Adding some white space also does the trick


> or changing hashtable iteration order.

This is a builtin feature of Go.

https://blog.golang.org/go-maps-in-action
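For anyone unfamiliar: the Go runtime deliberately randomizes map iteration order, and the idiomatic workaround when you need stability is to sort the keys yourself. A small self-contained illustration:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys returns a map's keys in a stable, sorted order -- the
// idiomatic workaround for Go's randomized map iteration.
func sortedKeys(m map[string]int) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	m := map[string]int{"one": 1, "two": 2, "three": 3, "four": 4}

	// Iteration order over a map is randomized on purpose, so this
	// loop may print the keys in a different order on each run.
	for k, v := range m {
		fmt.Println(k, v)
	}

	fmt.Println(sortedKeys(m)) // always prints [four one three two]
}
```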


Interesting. In contrast, the JavaScript standard says Map object iteration must return values in original insertion order. Firefox used to randomize the iteration order (to help find bugs), but (surprise!) it broke some web content. :(

https://bugzilla.mozilla.org/show_bug.cgi?id=743107


I don't think this is to shake out bugs. This is just to have a more random pseudo-random number generator so that applications using the crypto package for encryption will be more secure.


Why would it do it randomly then? Just read in the extra byte if you want extra entropy.


Starting at the link above, click on the #21915 in the commit message, which takes you to https://golang.org/issue/21915.


So it's not for extra entropy at all; it's about forcing code not to rely on the exact read behavior.


There's a reason given.


As in, the reason has nothing to do with entropy, and it is just about breaking code relying on an unspecified behavior.


One really easy way to break any encryption scheme is to crack the random number generator. This works because random number generators are deterministic and you can get them to output the exact same stream of random numbers as they did before if you have the same seed / state. This is presumably about making the output of the random number generator harder to predict.


> This is presumably about making the output of the random number generator harder to predict.

Randomly deciding to add an extra byte of entropy only adds 1 bit of entropy. Adding the extra byte always adds 8 bits of entropy.

There is no cryptographic reason to do this. It is just to break assumptions about the API.


Go 1.11 Release Notes: https://tip.golang.org/doc/go1.11

- Go 1.11 adds experimental, integrated support for package versioning (vgo).

- Go 1.11 adds an experimental port to WebAssembly (js/wasm).



Important to note about `vgo` for beta1 though:

> NOTE: This is not present in go1.11beta1 but will be available in future betas and subsequent releases.


Anyone ever notice that the assignment + declaration syntax looks like a sideways gopher? :=


I've seen it called the Zoidberg operator, too -- but I can see the gopher now that you mention it!


All these wholesome ideas make me feel bad because the name that came naturally to me is crass.


- :=

+ Ɛ:=

Add the missing ears

It actually works as long as no `gofmt` is applied. Code: https://play.golang.org/p/bMDxAq_l6F0


"The gopher operator"


That's what I'll be calling it from now on.


Wow, vgo seems pretty reasonable. ...I have to take one of my "damn it, go" complaints back.

Interesting to see their critique of bundler et al. ... AFAICT, isn't vgo's minimal version selection just old-school Maven resolution plus the semver major version in the import path?

Tools to auto-bump semver by examining the code's own public API would be nice.


The "go release" command is intended to support selecting the right version for a new package release. The golang.org/cmd/api command detects incompatible API changes and so can serve as the basis for an auto-bump feature.

I'd also like to see a command to generate a new major version as a wrapper of the previous major version (or vice versa), to simplify creating major versions that can coexist within the same program.


Oh, it's looking like the ground work for future RISC-V support is happening too.

https://go-review.googlesource.com/c/go/+/106256

That's really good news. :)


Almost complete, but stalled, and based on Go 1.8: https://github.com/riscv/riscv-go


I'm not fully versed in the what the transition to the `vgo` proposal will look like, so pardon my ignorance.

Will it be available behind an environmental variable like the vendor experiment was?


[EDIT] Will leave the questions up. I have delved deeper into this myself and the questions I had are basically answered, but I am surprised at the route taken. It seems like everyone should be including the major version in their import path from the very beginning and in every subsequent version, which seems... ehhhhh.

------------------

Not a Go dev:

Is the mantra "the new package must be backwards compatible with the old package", which is an underlying requirement for vgo to work, actually followed in the Go ecosystem? It seems crazy to me that a package would have to change its name if it wants to introduce a backwards-incompatible change.

Has this happened in practice? How does it work? Are people going to suffix packages with "Really major major" version numbers separately from the actual version number?

The criticisms of bundler in the proposal seem a bit weird when the solution is to force a new paradigm on developers to make vgo's job easier.

Why even use semantic versioning at that point, since a major version can never be incremented?


In the context of versions, I think Hickey puts it best: in practice there are no guarantees of compatibility.

Even a minor update can eventually break someone's code that relied on the bug.

"Spec-ulation – Rich Hickey"

https://www.youtube.com/watch?v=oyLBGkS5ICk


In all big packages I use: yes, it is. This is actually a must given there was no (standard) version management system. If you have no versioning system and don't make your code backwards compatible, you break someone else's code. So, either you make a new repository, or you make a subpackage in your current repository with the new, breaking version.


Free lunch is over for me. Each of the past 4 releases delivered a 3-5% performance boost to my Go application. Go 1.11beta1 marks the end of that: it is sadly 1-2% slower than Go 1.10.

Not complaining, just trying to hear numbers from other developers using their production software...


If something got slower, file a bug with details. We look into all such reports.



I think the next release will have the midstack inlining feature (which has been previewed since Go 1.9)


I think I just realized something about golang as I was walking down the hall. It's something that confuses many people, and many respond with unwarranted emotion because they are unable to make this connection.

Since Go uses Duck Typing, having certain methods is effectively a Type Annotation for a given type implementing an Interface. Many programmers, failing to realize this, become outraged at "boilerplate." It is a design trade-off forcing the programmer to specify all cases when implementing an interface, in much the same way that golang's exception handling eschews tools that let you create clever catch-alls and requires you to specify everything. It's just the "no magic" tradeoff.
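For readers unfamiliar with how this looks in practice, here is a minimal sketch (all names made up): a type satisfies an interface simply by having the right methods, with no explicit "implements" declaration.

```go
package main

import "fmt"

// Speaker is an interface; any type with a Speak() string method
// satisfies it implicitly -- no "implements" declaration needed.
type Speaker interface {
	Speak() string
}

type Gopher struct{}

// Having this method is, in effect, the "type annotation":
// Gopher now satisfies Speaker.
func (Gopher) Speak() string { return "go go go" }

func main() {
	var s Speaker = Gopher{}
	fmt.Println(s.Speak())
}
```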


> requires you to specify everything

That's not my experience. Checking for errors is completely optional. I'm probably just a bad, lazy programmer that leans too hard on the compiler. I like to catch everything and then refine error handling as I get more understanding of failure modes (I'm from Java; I avoid RuntimeException).

Go makes me feel like I need to understand all possible error cases for every line as I type it. I feel like I have to handle it right now, forever, or I'll forget that line can have an error. I really miss being able to gradually refine error handling, because I can rely on the compiler to remind me about all the stuff that can go wrong.


Indeed, Go's position is that your style of programming produces fragile programs, and that it is better to consider the failure modes of each expression as you type it, rather than afterwards.


  if err != nil {
      return nil, err
  }
is ceremony that requires no thought and adds no value. There should be sugar for this so it doesn't completely swamp the tiny minority of cases that actually do something.


I actually rarely type that very snippet. In like 3/4 of the cases, I make the error more specific, like

    if err != nil {
        return nil, fmt.Errorf("could not parse file %s: %v", path, err)
    }
the remainder being either the snippet you put above or more complex error management.


You can enrich err.Error(), but when you do you lose the type of the original error and any other fields it may have had. This is something the language can and should do more reliably (i.e., sugar could mutate err.Error() and prepend "MyStruct.myMethod failed: ", or better yet add a damn stack trace).


I'm always using errors.Wrap or Wrapf otherwise it's hard to find the error cause.

I've just created a live template for that in Go, which just puts my cursor in the wrap message to pass along.


> is ceremony that requires no thought and adds no value.

Aw, hell no! That's a very easily detectable (anti)pattern! You can scan even a very large code base for that! As noted in other comments, you should be adding some contextual information at this point.

Contrast this with having too-much magic, where you might have absolutely no trace in code left. You can't usefully search a large code base for all such relevant blanks.


This is generally a bad idea in any production code. You normally want to add context to the error so that your logging captures the issue at the very minimum. You may want to do cleanup in the function etc. I almost never have written this style of code in many years now.


Seems like breadth first vs depth first search. They will converge at the same answer.

Depth first, you make the happy path work, then refine error handling. Breadth first, you ensure everything is handled at every moment.

I’ll have to think about that a bit. Trivial cases don’t matter. Complex cases are more nuanced.

This is really insightful, thank you.


> Depth first, you make the happy path work, then refine error handling. Breadth first, you ensure everything is handled at every moment.

That's correct. I've heard people describe Go as wanting you to code the sad path first, and then backfill the happy stuff (the business logic) once the error handling is complete. This tracks to my experience. That is, when I program that way, I feel like I'm in sync with the language.


This is the position of the Google c++ style guide, from which I think Go borrowed heavily.


it's a real trade-off. i think for more interactive software, a "fuck it and throw an exception" approach can be reasonable. for server software it's death. (you need to service the request to the best of your ability, and make sure that you are handling any failures in the best possible way.)


> i think for more interactive software, a "fuck it and throw an exception" approach can be reasonable.

Your users probably don't think so


it can be not-so-bad, because it's something like, "oh, i passed in the wrong file" or "oh, that menu item is broken" or something like that. you as a user are entirely in control of the inputs & the state is usually just a file or two, so it's easy to just try again until things work.

but for something running on a server somewhere, many inputs are beyond the user's control & the state is long-lived and fragile (if my HN or FB account gets somehow b0rked, that's a serious problem for me), so it's important to handle all possible error conditions in the best possible way. in my experience this is something exceptions make very hard to do.

(& i would still prefer explicit error handling even for front-end software, but i think the argument for exceptions is at least stronger there.)


You can throw away errors by assigning them to _, or panic on them with a Must helper. But a guru said: why not log everything?

Must(func()(val, err)) -> LogIf(func()(val, err))


...that would be nice, except you need to reimplement LogIf or Must for every single function signature due to the lack of metaprogramming...
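To make that concrete: without generics (as of Go 1.11), a Must-style helper has to be written once per function signature. A toy example with a hypothetical mustAtoi:

```go
package main

import (
	"fmt"
	"strconv"
)

// mustAtoi panics on error. Pre-generics Go needs one such helper
// per function signature -- the complaint in the parent comment.
func mustAtoi(s string) int {
	n, err := strconv.Atoi(s)
	if err != nil {
		panic(err)
	}
	return n
}

func main() {
	fmt.Println(mustAtoi("42"))
}
```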


Then just `panic` first on all errors, and correct it as you go on. This is not really the advised practice, but if it works for you...


Go uses structural typing, as described by Rob Pike.

Duck typing can still lead to runtime errors; structural typing accounts for practically all of that through compile-time checking.


Duck typing is an informal term, no? I think it's fair to think of structural typing as "compile-time duck typing".


don't think so


It's what I meant.


The term you’re looking for is “type inference”, where the compiler determines the type of a variable automatically at compile time.


I think he's referring to not having to explicitly declare if some type implements any interfaces.


What are some good use cases for compiling Golang code to wasm?


Writing most of your front/backend business logic in Go is one possibility, but I think that easily moving complex logic to the web as a platform (especially for web demos; see these[0] examples with gopherjs) is the most compelling possibility. Generally, I'm excited that this democratizes the web programming space, because the best use cases are often the ones no one has thought of yet.

[0]: https://hajimehoshi.github.io/ebiten/


Even just boring stuff like being able to have one object definition somewhere shared between client and server is useful.


Ok, ninja edit, this whole comment was based on a misunderstanding and adds no value to the discussion. I've been corrected below, the rest does not need to be preserved. Downvote it to the bottom for me please; I can't seem to delete it.


They never said vgo would be production ready for 1.11, they said experimental until 1.12: https://github.com/GoogleCloudPlatform/runtimes-common/issue...


Thanks for pointing that out, I've been trying very hard to keep up but somehow missed that detail. Perhaps it should be in the release notes!


I would have found it useful if you left your original comment below the edited disclaimer :)



