rakyll's comments | Hacker News

OpenCensus is not a vendor-specific project. The data OpenCensus collects can be exported to any tracing backend. We already have a Jaeger exporter for Go: https://godoc.org/go.opencensus.io/exporter/jaeger.

Code instrumented with OpenCensus can export to any backend by changing the registered backend.
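A minimal sketch of what "changing the registered backend" looks like, based on the exporter package linked above; the endpoint and service name are hypothetical, and the Options field names may differ between versions of the library:

```go
package main

import (
	"log"

	"go.opencensus.io/exporter/jaeger"
	"go.opencensus.io/trace"
)

func main() {
	// Hypothetical collector endpoint and service name.
	je, err := jaeger.NewExporter(jaeger.Options{
		Endpoint:    "http://localhost:14268",
		ServiceName: "my-service",
	})
	if err != nil {
		log.Fatal(err)
	}
	// Swapping backends means registering a different exporter here;
	// the instrumentation code itself is unchanged.
	trace.RegisterExporter(je)
}
```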


LLVM is a future consideration for Go. One of the main reasons I don't want to tackle (1) right now is the possibility of benefiting from XRay, even though it doesn't sort the case out for the vast majority: the gc users.


Whoops, fixed the typo. Thanks!


There are only two err != nils in the article. What's your point?


Four, actually, but two have been removed for brevity.


Are you referring to the instances where error is a return value, but an error result under the known inputs would be impossible, so it would be pointless to check it? That's not exactly the same as removed for brevity.


Yes - just my quirky sense of humor! It isn't being documented as being safe (that I could find), so it should be checked IMO.


Does the Go standard library document that on any function like you are expecting?

The code for both functions makes it pretty clear that an error will only be returned if the input is invalid, which is not something that will occur with the known constants being fed into them.


Yes, it does, where the guarantee exists: e.g. https://golang.org/pkg/bytes/#Buffer.Write

Looking at the implementation is no guarantee that the implementation won't be changed in future.


> Yes, it does, where the guarantee exists

That isn't quite the same thing. That says that an error is returned only because it is required to conform to the io.Writer interface. Without that requirement, it wouldn't return error in the first place. It tells the reader that they should not be confused as to why writing to a buffer might return an error, when such an operation should fundamentally not return an error.

These other functions in question have very good reasons to return an error: Invalid input.

> Looking at the implementation is no guarantee that the implementation won't be changed in future.

The Go compatibility promise says that the behaviour won't change, except under exceptional circumstances – like a security flaw that cannot be fixed using the original behaviour. It is highly unlikely that is an issue for those particular functions.

But, if you still think it is important, what do you plan to do with an error that might just randomly appear in the future anyway?


> But, if you still think it is important, what do you plan to do with an error that might just randomly appear in the future anyway?

Terminate processing and pass the error back up the call chain. This is basically what exceptions do, and basically what 'if err != nil { return err }' does.
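A minimal sketch of that propagation in Go; parse is a hypothetical helper, and the %w wrapping verb assumes Go 1.13 or later:

```go
package main

import (
	"fmt"
	"strconv"
)

// parse terminates its own processing on failure and hands the
// error up the call chain rather than handling it locally.
func parse(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parse %q: %w", s, err)
	}
	return n * 2, nil
}

func main() {
	if _, err := parse("not-a-number"); err != nil {
		fmt.Println("caller sees:", err)
	}
}
```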

One could also pause processing, interrogate the call chain for what to do about the error condition and then proceed as directed; this is what conditions & restarts do. IMHO it's a better solution, but it requires some syntax to be readable.


> Terminate processing and pass the error back up the call chain. This is basically what exceptions do, and basically what 'if err != nil { return err }' does.

The function in question that could return an error is being called in the main function. Who are you going to pass it to, exactly? error is not a return value on main. If you knew what the error could be, you might find a suitable workaround under those conditions. But since the error does not even exist for you to handle, about all you can do is terminate the application. Which is exactly what will happen in this circumstance anyway, even if you don't check the error result.

I'm still not clear what you are really going to meaningfully do with the error information here even if you did magically start getting errors against the compatibility promise? If you are going to handle errors, you actually have to know what you want to do with them. The following adds nothing to your code.

    if err != nil {
        // I didn't expect that. Oh well.
    }
It's not like anyone is suggesting you should avoid checking errors in all circumstances. We're talking about very specific cases where all of the conditions are known. If GET stops being an HTTP request method, or http://example.com/ is no longer a valid URL, you've got way bigger problems than not gracefully handling the world flipping upside down.


> The function in question that could return an error is being called in the main function. Who are you going to pass it to, exactly?

The operating system: exit with a non-zero exit code, indicating the specific error.
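A sketch of that pattern in Go; run is a hypothetical stand-in for whatever main actually does:

```go
package main

import (
	"fmt"
	"os"
)

// run does the real work and propagates any failure to main.
func run() error {
	return fmt.Errorf("something went wrong")
}

func main() {
	if err := run(); err != nil {
		// Report the specific error, then signal failure to the OS
		// with a non-zero exit code.
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```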


Which is exactly what will happen as the code is written.


Two. The URL is a constant, so it is guaranteed not to produce an error. It is not brevity.

Anyways, the point is I don't understand the point of err != nil bashing here, given there are two programs in the article that only contain one error check each.


As someone who writes a lot of Go I have mixed feelings about it. The problem is that there are implicit errors regardless, and the explicitness of err != nil tricks beginners into thinking that they've covered all edge cases when they haven't. recover() is probably the only way to handle internal panics, and even then it only works if you know that the internal call might call panic().

That said, I can't think of an alternative other than to reduce the number of panic()s hidden in various libraries.
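A minimal sketch of the recover() approach mentioned above; safeCall is a hypothetical wrapper that converts a panic from a library call into an ordinary error:

```go
package main

import "fmt"

// safeCall surfaces an implicit panic as an explicit error value,
// which is about the only way to handle these hidden failure paths.
func safeCall(f func()) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered from panic: %v", r)
		}
	}()
	f()
	return nil
}

func main() {
	err := safeCall(func() { panic("hidden library panic") })
	fmt.Println(err)
}
```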


Avoiding error checking against a properly formatted constant is not avoiding error checking. I cannot follow your argument.


Sorry, I guess I'm not really making one. If anything, I'm commenting that (IMO) explicit error handling isn't as explicit as people think it is. There are internal errors induced by panic() that are really inconvenient.


Panics are supposed to be raised only in an "end all work on this; the code path is FUBAR" situation. On a web server that usually means dropping the connection, logging everything, and firing an automated e-mail to the dev team to notify them there's a bug in the code that needs to be fixed.

All other error conditions are expected and handled. That's what Go's error handling philosophy is all about.


panic should be a "the world is ending" sort of operation.


This is turning into a major tangent, but where is the guarantee documented?


People really hate GoLang's error checking because of the impression that there is a lot of repetition around err != nil statements.


People really hate handling errors


Well, the reason why I'm using a high-level programming language is there are a bunch of mechanical things that the computer will do better and more reliably than me. Otherwise I'd just write assembly.

What sorts of things fall into this category, and what tradeoffs you're willing to make to have the computer do things for you, is a matter of application (and taste), and so we reasonably have many different programming languages. But it's entirely defensible to want, in the abstract, your language to do repetitive and important work for you. That's what computers are good at.


There isn't one reliable way, or a set of reliable ways, to handle errors. A high-level language could handle it for you "reliably" in the sense of throwing exceptions and hoping you'll catch them. That would work; you'd get Java software that throws exceptions all the time during normal operation.

Errors are valid values, handling them is as normal as handling other values you get from functions.


Rust's error handling:

    let foo = try!(something_that_might_error());
    // or soon...
    let foo = something_that_might_error()?;
The try!() macro or the ? operator unwrap a Result<T, E> value, which is basically "either the return value or an error value". If it's a return value (T), the macro/operator just gets the value out of it - but if it's an error (E), it will convert the error into your own function's Result<T, E> return type using the From trait, and return that from your own function immediately.

The practical result? When I'm parsing input or doing other things that are heavy on errors, I can split it out into another function, wrap every call that might fail in try!(), and only have to actually handle the error once. This lets me read my code more clearly - I can see the success case at a glance, but I'm still forced by the compiler to decide whether to handle or return errors, because there's no other way to get the T out of a Result<T, E>.

There are a few other languages which implement this, though they lean towards the functional side a lot more than Rust does.

Experience: I'm writing a small internal certificate authority in Go because it has x.509 and signing support in the standard library. I'm no safer by having roughly 3/5 of my lines be if err != nil { return err; }.


That's actually a great counterexample! Though most day-to-day high-level languages don't have a type system capable of offering this sort of check!


Honestly... you could do it as a primitive type if you wanted. The only "special" bits needed are a piece of syntax which checks if it's an error, and if so returns it, else evaluates to the private value; and a way of unwrapping the actual error value when you want to handle it.

Go already has plenty of complicated language primitives with complex behaviour.


it's not that terrible in languages with abstraction


Like the ones where most people end up not actually handling errors where they matter.


Which languages specifically, and why do exceptions encourage you to drop errors on the floor?


I think people are misinterpreting support for try catch family in some high level languages as lazy approaches.

But the truth is it is the programmers who are lazy; we should not blame a language for providing a feature which some programmers abuse.

I have never seen a Java programmer profiling their code for memory or CPU at work, and here I am running my shitty C code under Valgrind and a profiler all the time.

I guess, it is programmers and not the language.

Go's err != nil approach, though rudimentary, actually forces lazy programmers to complain, since they cannot abuse it.


I had a big group of coworkers from EPFL when I was working for Google Switzerland. At some point, I actually questioned whether we have a bias for these schools.

EPFL or ETH are definitely not something I'd consider as just "Swiss education system". These schools are what Stanford or Berkeley are in their region, and are highly international.


There is not a single mention of Java at the Brillo announcement.

Irrelevant, but flush your biases and assumptions down the toilet. Java can be statically compiled to microcontroller architectures with a minimal runtime footprint. Stop blaming the language; blame the runtime implementations.


Google is building its own hardware; an embedded Linux distro is primarily needed at Google to support our hardware projects for the long term. But, for the first time, the industry is capable of building hardware projects at scale. It's not surprising we reinforce our interest in getting involved by opening up what we're already doing.


Brillo contains a very minimal part of Android for the HAL and the network stack. 64MB is our medium-end target. ART is not in the scope of Brillo.


You should read "low-power devices" as low-end Linux boards. No one is aiming to target the microcontroller market with a Linux/ARM board.


He is right. We run Contiki with 128-bit AES on these systems. We need 30KB of ROM to run a full IP stack.

