Hacker News | jabkobob's comments

So are you saying I can copy and paste some useful functions from your GPL code into my BSD code? And then the next person can use my code (with the pasted code from your project) in their proprietary software, given they stick to the requirements from my BSD license?

As far as I understand, the viral nature of the GPL does not allow this.


Sort of.

If you copy and paste functions into your code, then those functions are still under the GPL, and your code is still under the BSD.

Anybody could then copy and paste your BSD code into their proprietary software, and only have to conform to the BSD licence's requirements.

The main issue with this is that when you mix code from various sources in the same file, it can be very difficult for readers to know which license each section of code carries. For that reason, when I need to include code that has a different license or copyright from the main work, I put it in its own file with its own copyright/license header.


I expect that tracking every single transaction would be a great deterrent to corruption, theft, and tax evasion. On the other hand, a lot of corruption, theft and tax evasion already occurs with electronic payments.


I'd go further and say that even with context my comments on hacker news are practically useless. I see online discussions as a pastime, not something that needs to be conserved on triple redundant backups.


I must disagree with the command-line part. I don't think that the command line is fundamentally more powerful than any other interface.

Why is the command line powerful? Because it offers a large number of utilities that are highly configurable and that can be linked easily.

But you could have just the same expressiveness if the interface were, e.g., a circuit diagram where you connect configurable commands with lines.

You know why the command line uses text input? Because it is the simplest to implement. The only people who need to know how to use the command line are people who need to use software where it doesn't pay off to make a more intuitive interface.


While I agree that the commandline provides "a large number of utilities" that "can be linked easily", I don't think that's the whole story. While you could certainly design a circuit-diagram-style GUI for building commands (so-called "visual programming"), it would be a lot more tedious than typing, just because there's so much more bandwidth available on a 100+ button, two-handed input device than a two-or-three button one-handed input device. Also, a good deal of efficiency comes from terseness: I can imagine a GUI that would make it simple and visually obvious how the different atoms of a regular expression fit together, but such a GUI would spend a lot of visual bandwidth communicating which combinations are legal and which are absurd. Expressing a regular expression as a string gives you absolutely no such feedback, but if you already know the regex you want to use, it's an awful lot faster to type it.

Lastly, the command line gets a good deal of power from meta-programming: most commands deal with a loosely structured stream of text, and a set of commands is a loosely structured stream of text. Specifically in POSIX environments, primitives like command-substitution ("$()" or backticks) and the xargs command are powerful ways to write commands that construct and execute other commands. If your diagram-based UI contains a set of primitives for launching commands and working with text, you're going to have to add a whole new set of primitives for working with diagrams.
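A minimal sketch of that meta-programming, using only standard POSIX primitives (the file-free examples here are invented for illustration):

```shell
# Command substitution: the output of one command becomes text
# inside another command. Here "$(echo world)" expands to "world"
# before the outer echo runs.
echo "hello $(echo world)"

# xargs: build a command line from a stream of text.
# printf emits three names on separate lines; xargs appends them
# all as arguments to "echo team:", producing one command.
printf '%s\n' alice bob carol | xargs echo team:
```

The point is that both mechanisms treat text output as command input, which is exactly the kind of primitive a diagram-based UI would have to reinvent for diagrams.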


As somebody who spent some time working with LabVIEW, I can safely say that drawing a circuit diagram is more difficult and less intuitive than using text. Now, maybe this is the fault of LabVIEW, but I think it's true in the general case as well.


I once had to use LabVIEW. In general, I agree with you, but I got the feeling that G (the language implemented in LabVIEW) might almost be a decent language if the only IDE for it (and the file format) didn't suck so badly.


While numbers don't lie, the conclusions you draw from them can still be incorrect.

Look at the second graph (website visits peaking on Dec 24th for giftcertificatefactory.com). The author concludes the following from this peak: "People favor doing things at the very last possible moment."

But this is nonsense. Such a conclusion would require a model of how people's preferences affect web page statistics. If you don't have a model, your intuition is going to fool you. Let me illustrate this with a simple example:

Assume that in our model world you have two kinds of people: Early-Buyers and Late-Buyers. Early-Buyers buy presents on a random day from Dec 1 to Dec 20. Late-Buyers on the other hand buy presents on a random day from Dec 21-24. Assume that 80% of people are Early-Buyers and 20% of people are Late-Buyers.

If you looked at the number of presents bought per day, you would see that the rates are 25% higher in the days from Dec 21-24. Your intuition will tell you: "People favor buying presents late". But that is not true, because in our model world 80% of the people are actually Early-Buyers!
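A back-of-the-envelope sketch of that model world in Python (the head count of 1000 is arbitrary; only the ratios matter):

```python
# 80% of buyers spread their purchases uniformly over Dec 1-20,
# 20% spread theirs uniformly over Dec 21-24.
total_buyers = 1000

early_per_day = (0.80 * total_buyers) / 20  # 40 purchases/day, Dec 1-20
late_per_day = (0.20 * total_buyers) / 4    # 50 purchases/day, Dec 21-24

# The daily rate in the last four days is 25% higher, even though
# Early-Buyers outnumber Late-Buyers four to one.
print(early_per_day, late_per_day, late_per_day / early_per_day - 1)
# 40.0 50.0 0.25
```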

Now, to explain the web page statistics shown in the article, we would need a more elaborate model. But constructing such a model and working with it is difficult, which is why people avoid thinking about models, just post raw numbers, write whatever their intuition tells them, and then claim it must be true because "numbers don't lie".


Easy, tiger. Maybe it's a problem of vocabulary (non-native English speaker here) with "People favor doing things at the very last possible moment"?

What I meant was that, of all days, people buy the most on the very last one. Which indeed matches our intuition. I didn't mean to imply that (number of people buying in the last 5 days) > (number of people buying earlier during the whole year), as you seem to argue against. I was only observing that the absolute numbers increase day by day starting 5 days before Christmas.

Your Early-Buyer/Late-Buyer model is interesting, but as you said, setting it up would require more thorough research, which was not in the scope of the article.

I thought the data was interesting and wanted to share it with the community, and I had no ambition to draw a complete buying model from it.


The observations presented in the article are interesting, but as you suggest, the problem lies with the attribution of these observations to the broad and sweeping set of "[all] people."

We can consider the fitness example in the same light as your analysis of the gift-card example. My corporate gym -- and pretty much any gym to which I've ever belonged -- gets deluged by New Year's resolution newbies every January. By mid-February, most of them are gone.

At first blush, we may be tempted to suggest that "most people" sign up for gym memberships in January, then gradually lose interest. But in fact, we are simply observing one subset of people at one touchpoint. The set happens to have a dramatic impact, so our minds assign it unduly high weight through a cognitive bias known as the availability heuristic (http://en.wikipedia.org/wiki/Availability_heuristic). But this subset, in fact, may not even be significant in the overall set of gym users and non-gym users. Perhaps "most people" use the gym on a regular basis. Even more likely, perhaps "most people" don't set foot in the gym at all, New Year's or otherwise. All that we've learned, by observing the New Year's subset in isolation, is how the New Year's subset behaves.


I find it funny how they try to use iTunes and the music purchased from iTunes as an example for how this legislation would affect people, even though music purchased from iTunes has been DRM free for a few years.


> you probably won't know your restrictions until you've already put in the work

Do people really start putting significant effort into projects without thinking about legal issues, distribution, etc.? (I don't consider downloading an app and creating a two-page test book "significant effort".)


I'm sure people do, but they shouldn't blame Apple (or anyone else) for their lack of due diligence. The glovebox argument is just one more example of the entitlement issue that Nirvana describes.


I'm amazed how many people actually complain about that funny letter. To me this proves just how right she is. Apparently making fun of Oxford is off-limits for some...


The Objective-C runtime on Mac OS X 10.7 also does this for NSNumber objects.


The problem with his 'noob' example is not only that the comments are overly verbose, but also that his variable names are generic and nearly useless. 'counter', 'pos', and 'ref' are so generic that you always have to read the whole code before you know what's going on. Rename 'counter' to 'bytesProcessed', rename 'pos' to 'startOfBuffer', and use a variable named 'currentByte' to walk through the buffer: with descriptive variable names, many comments become unnecessary.
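A hypothetical sketch of that renaming in Python (the checksum routine is invented for illustration; it is not the code from the article):

```python
def summed_checksum(buf: bytes) -> int:
    """Sum all bytes in the buffer, modulo 256."""
    bytes_processed = 0          # instead of a generic 'counter'
    checksum = 0
    for current_byte in buf:     # instead of indexing with 'pos'
        checksum = (checksum + current_byte) & 0xFF
        bytes_processed += 1
    return checksum

print(summed_checksum(b"abc"))   # 97 + 98 + 99 = 294; 294 & 0xFF = 38
```

With names like these, a comment such as "increment the counter" would add nothing the code doesn't already say.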

