
Sure, but the really lucky break IMO was a decade of INTC repeatedly punching itself in the face w/r to manycore computation. I was at Nvidia from 2001 to 2011 (and now back) and I spent 2006-2011 as one of the very first CUDA programmers. It was pretty obvious within a week or two that this technology was going to be huge, and I more or less made my career with it.

But instead of stepping up to the plate and igniting a Red Queen's Race that would have benefited everyone, INTC first tried to discredit the technology repeatedly, then they built an absolutely dreadful series of decelerators that demonstrated how badly they didn't understand manycore. Eventually, they gave up, and now they're playing catch-up by buying companies that get within striking distance of NVDA rather than building really cool technology from within.

Now if someone threw a large pile of money at AMD again, things could get really interesting IMO. But the piles of stupid money seem biased towards throwing ~$5M per layer at the pets.com of AI companies these days.



Nvidia's coup was in getting people to switch to a different programming model and rewrite their code to achieve the necessary performance. Intel, especially in upper management, is stuffed with people who assumed that was impossible. And faced with new competition from GPUs, the only acceptable response to management was the many-x86, "you don't have to rewrite your code to get performance" approach which didn't actually work out.
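
To make the "different programming model" point concrete, here's a toy sketch of the kind of rewrite involved (not anyone's actual code; the SAXPY-style loop and the names are invented for illustration). The arithmetic is identical; what changes is who expresses the parallelism.

    #include <cuda_runtime.h>

    // CPU version: one thread walks the whole array; the compiler may or may
    // not manage to vectorize the loop.
    void saxpy_cpu(int n, float a, const float* x, float* y) {
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    // CUDA version: the loop body becomes a kernel and each GPU thread owns
    // exactly one element.
    __global__ void saxpy_kernel(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    // Host-side launch over device pointers d_x and d_y.
    void saxpy_gpu(int n, float a, const float* d_x, float* d_y) {
        int threads = 256;
        int blocks  = (n + threads - 1) / threads;
        saxpy_kernel<<<blocks, threads>>>(n, a, d_x, d_y);
    }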

It's not that Intel doesn't have people that recognize the issues, but rather the people who do have that foresight are drowned out by people who don't realize the game has changed. Intel, to be fair, does have the best autovectorizer--but designing vector code from scratch in a purpose-built vector language is still going to produce better results, as shown when ispc beat the vectorizer.
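
For anyone wondering why a purpose-built vector language tends to win: an autovectorizer has to prove the iterations are independent, while ispc and CUDA make independence the default contract. A made-up example of the kind of thing that trips a compiler up (function names invented):

    // Without aliasing information the compiler has to assume dst might overlap
    // src, so it either gives up on vectorizing or emits runtime overlap checks.
    void scale(float* dst, const float* src, int n) {
        for (int i = 0; i < n; ++i)
            dst[i] = 2.0f * src[i];
    }

    // Promising the compiler that the pointers don't alias removes that obstacle...
    void scale_restrict(float* __restrict__ dst, const float* __restrict__ src, int n) {
        for (int i = 0; i < n; ++i)
            dst[i] = 2.0f * src[i];
    }

    // ...but in an SPMD model (ispc, CUDA) per-element independence is what the
    // programmer states up front, so there is nothing left for the compiler to prove.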

But Nvidia can also get drunk on its own kool-aid, just as Intel has. Nvidia's marketing would have you believe that switching to GPUs magically gains you performance, but if your code isn't really amenable to a vector programming style, then GPUs aren't going to speed it up, and the shift from CPU-based supercomputers to GPU-based supercomputers isn't going to leave you happy. There's still room for third-way architectures, and that space is anyone's game.
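
A concrete example of "not amenable" (invented, but typical): anything dominated by a serial dependence chain, like pointer chasing, gains nothing from ten thousand GPU threads.

    struct Node { float value; Node* next; };

    // Each iteration depends on the previous one (p = p->next), so there is no
    // per-element parallelism to hand to a GPU; this stays latency-bound on a CPU.
    float sum_list(const Node* p) {
        float s = 0.0f;
        while (p) {
            s += p->value;
            p = p->next;
        }
        return s;
    }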


People are all full of foresight until their foresight doesn't work, which is most of the time when working at the cutting edge of tech. That's when companies crash and burn. Intel has not, despite missing the boat on big calls some 10 to 15 times now. What does that say about Intel? People think missing the call is a sign of bad management.

Bad management is when your company evaporates because you make one bad call.

Management gets points for surviving and then fighting back despite being wrong. And when you look at Intel's history, there are few companies on the planet who have managed to do that multiple times. They have a good mix of people who know what they are doing technically AND people who do whatever it takes to keep the company from sinking when those bad technical calls happen.

If Nvidia survives whatever its next bad call may be, expect it to start looking more and more like Intel.


"Bad management is when your company evaporates because you make one bad call."

That's a pretty low bar!


Intel was founded in '68. They've been around since the inception of the integrated circuit. Their war chest of patents, dollars, and assets/resources (aka foundries) is helpful in soaking up damage from bad calls in a way that no one else can match.

AMD could never have made the same gamble Intel did with Itanium. There's a long technical argument as to whether the world is better off, in a CPU-design sense, because of that, but I disagree that it's necessarily good management on Intel's part that's allowed it to recover from disaster.

The best management can play the hand they're dealt perfectly and still lose. However, bad management can play the best hand poorly and still win.


If you don't have a fundamentally serial workload (and usually either you don't or you have a lot of them you can parallelize across tasks) and you are willing to write bespoke CUDA code for that workload, Nvidia is telling the truth.

CUDA's sweet spot lies between embarrassingly parallel (for which ASICs and FPGAs rule the world because these are generally pure compute with low memory bandwidth overhead) and serial (for which CPUs are still best), a place I call "annoyingly parallel." There are a lot of workloads in this space in my experience.
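
To give a feel for "annoyingly parallel": even something as simple as summing an array is parallel per element, but combining the partial results takes shared memory and synchronization, which is exactly the work CUDA makes tractable. A rough sketch (kernel name and sizes are illustrative, and it assumes a power-of-two block size with dynamic shared memory at launch):

    // Each block reduces its slice of x into one partial sum; a second pass (or
    // the host) finishes the job. Parallel, but not embarrassingly so.
    __global__ void block_sum(const float* x, float* partial, int n) {
        extern __shared__ float tile[];            // one slot per thread in the block
        int tid = threadIdx.x;
        int i   = blockIdx.x * blockDim.x + tid;

        tile[tid] = (i < n) ? x[i] : 0.0f;
        __syncthreads();

        // Tree reduction: log2(blockDim.x) rounds, each needing a barrier.
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (tid < stride)
                tile[tid] += tile[tid + stride];
            __syncthreads();
        }

        if (tid == 0)
            partial[blockIdx.x] = tile[0];
    }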

But if you don't satisfy both of the aforementioned requirements, and/or you insist on doing this all from someone else's code interfaced through a weakly typed, garbage-collected, global-interpreter-locked language, your mileage will vary greatly (cough deep learning frameworks cough).

Finally, it doesn't matter who's doing it, marchitecturing(tm) drives me nuts too.


They're not lying. To parallelize, you have to code in a different style, though it doesn't have to be CUDA. However, it's easier to enforce that style in a parallel-specific language, and language support for the idioms helps.
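
The style can even be enforced by a library rather than a whole language. A sketch using Thrust, which ships with the CUDA toolkit (the surrounding function and names are made up): the whole contract is that the functor applies to each element independently, and the library decides how to map that onto threads.

    #include <thrust/device_vector.h>
    #include <thrust/transform.h>

    // Applied independently per element; that per-element contract is the "style".
    struct Scale {
        float a;
        __host__ __device__ float operator()(float x) const { return a * x; }
    };

    void scale_in_place(thrust::device_vector<float>& v, float a) {
        thrust::transform(v.begin(), v.end(), v.begin(), Scale{a});
    }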

Controlling the language certainly helps Nvidia's economic moat.


Intel also got burned by Itanium, though maybe if it had been a many-core design as well the payoff would have been worth it. (Looking at the Cell processor the PS3 used, which came about a generation later (Itanium first shipped in 2001, around when Cell development started), the idea was probably around at the time and didn't seem to pay off very well there either...)

Arguably one of the things that Nvidia really got right was learning from those past failures at other companies and making it easier for developers to utilize the platform starting from a standpoint that they were familiar with and helpfully nudging them towards what would run fast in parallel.


> Intel, especially in upper management, is stuffed with people who assumed that was impossible. And faced with new competition from GPUs, the only acceptable response to management was the many-x86, "you don't have to rewrite your code to get performance" approach which didn't actually work out.

I think the big disconnect is thinking they had to make people rewrite code. CUDA often targets entirely new codebases, and in some cases new types of applications.

The "rewriting of code" is mostly for things like AV processing and codecs where there was such a sellable benefit in performance it would have been insane for them not to invest the effort.

Intel was doubly hindered here, because they wanted everything to use x86. Intel had trade secret and patent protections from competitors, and a critical mass of marketshare.

Parallel programming was something that had to fit into that "x86 for everything" mindset rather than being a separate/competing technology to x86. The company that pushed winmodems and software sound cards wasn't going to be able to lead the disruption there.


Intel's SIMD autovectorizer against NV's SIMT was like bringing a sword to a machine-gun war. The fact that Intel's own ispc beat the autovectorizer too should have shown them there was an entirely different class of weaponry they should've been developing. Not only did they not respond adequately, they doubled down on Xeon Phi. That's future textbook material right there.
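
The difference is easiest to see with per-element control flow (toy kernel, names invented): in the SIMT model every thread runs ordinary scalar code and just branches, and the hardware handles divergence; in hand-written SIMD the same logic turns into explicit compare-and-blend masks, which is what an autovectorizer has to synthesize for you.

    // SIMT: a plain branch per element, no masks in the source.
    __global__ void clamp_negatives(float* x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n && x[i] < 0.0f)
            x[i] = 0.0f;
    }
    // The SSE/AVX equivalent is a compare to build a mask plus a blend,
    // all spelled out by the programmer (or inferred by the vectorizer).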

Only now, more than a decade later, do they realize their mistake and try to correct the juggernaut's course. Such glacial mistakes in this industry can deal death blows to even the largest entities.


My pet conspiracy theory is that Intel pushed Xeon Phi so hard in order to sell an expensive but mostly useless HPC system to China. And now they got burnt and are rolling their own tech.


>but rather the people who do have that foresight are drowned out by people who don't realize the game has changed.

They drowned out Pat Gelsinger in the late '00s, then Justin Rattner and many others retired in the early '10s; the rest is history.


> But instead of stepping up to the plate and igniting a Red Queen's Race that would have benefited everyone, INTC first tried to discredit the technology repeatedly, then they built an absolutely dreadful series of decelerators that demonstrated how badly they didn't understand manycore. Eventually, they gave up, and now they're playing catch-up by buying companies that get within striking distance of NVDA rather than building really cool technology from within.

Reminds me of "First they ignore you, then they laugh at you, then they fight you, then you win".

So many companies have this reaction (e.g. RIM with BlackBerry). Wondering if this is some kind of "corporate instinct".


This is "The Innovator's Dilemma" material. TL;DR: yes, it's a perfectly rational (but short-term-biased) reaction that corporations mostly cannot resist having.


The thing is, it works 98% of the time, the scrappy upstart gets laughed out of business.

There is a sort of survivorship bias in focusing on the 2% and assuming that's the norm. Corporations act this way because it generally works.

Like the OP said though, it does lead to arrogance over time, and that's when a fall happens.


A corporation (the management, that is) also acts this way because they know their maneuverability is just about the same as that of the Titanic. So while they (maybe) prepare a response, knowing full well they're already late to the party, they try to discredit the startup, hoping that will at the very least slow it down.

Corporations can't really be as successful at innovating as a startup. A startup is free to build or reshape itself into anything, focus on a single thing, and pivot on a dime. In a corporation the same structures that hold it up and moving are the ones that resist it changing direction or promoting something new. Easy to lose focus and get lost in the red tape.

And that's before you consider the risk a CEO sees in potentially cannibalizing their own (currently successful) business or just throwing money down the drain at 98 losing ideas. Like you said, 2% of ideas may be successful so a corporation would rather let the startup play it out and then buy it if it has potential. Easier to justify to investors.

So corporations innovate when they have nothing to lose and any risk is worth taking. See MS somewhat successfully reinventing itself after seeing mobile and FOSS boom. Private companies also have an easier time innovating because they have no investor pressure. They may be behemoths, but at least they can avoid the "too many cooks in the kitchen" syndrome.


>because they know their maneuverability is just about the same as that of the Titanic.

I know this to be true, however I cannot understand for the life of me why this is the case.

If I was the CEO or CTO and had, say, 5k people under me, you had better believe there would be dozens of little 3-4 person teams doing hard research on threats and coming up with recommendations to get in front of them.

I mean this is basic 1st year MBA SWOT Analysis stuff.


From what I've read, above roughly 150 people [0] things start to break down a bit. Social relationships and coordination break down. You no longer know everybody, decisions aren't based on trust anymore, you cannot maintain a flat hierarchy, etc. The structures that support the "behemoth" with tens of thousands of employees spread across the world make it more rigid. With hundreds of teams, products, services, and managers, office politics becomes a very real thing and people start having their own plans and ambitions. People stop pulling together towards a single goal because there is no single goal anymore.

And having lots of teams "innovating" is also not that great. You'll just end up with a stack of 100 great ideas on your desk but only 2 that might make money. Your job is to guess which 2. Any decision you take will be heavily scrutinized by everyone in the company and by shareholders. You may just go the safe way, the one that worked over the past few years and puts a bonus on the table.

A 10-20-100 person startup with everybody in the same office and a very flat structure will be a lot more agile. The people are all there for that one single purpose, and the dynamic is quite different. Once the goal is reached many just move on. This provides a very different motivation vs. the typical corporate employee.

[0] https://qz.com/846530/something-weird-happens-to-companies-w...


Even if you know what is coming, that doesn't guarantee you can outmaneuver it. When you are dealing with thousands of people, contracts with hundreds to tens of thousands of customers, and infrastructure built on the assumptions that make your existing business tick, a fundamental change that costs a new competitor $0 to make, because they don't have any of that built-up stuff, could cost a fortune for you to counter.

Holding the place of an incumbent has advantages and disadvantages. Sometimes you can't leverage the advantages, and that's when a company gets buried by the upstart the worst.


But all it takes is one of those 1-in-50 shots landing, or overlooking a key feature, to sink your business. That's why it's a real thing, and a 98% success rate is not very good at all.


I was just thinking about that as I wrote it. I think that's correct, and I'm glad you brought it up.


> Now if someone threw a large pile of money at AMD again, things could get really interesting IMO.

That somebody might be Intel itself. I believe that Intel is still battling and has not yet paid the 1.06 billion euro fine the EU imposed on it in 2009 over its conduct against AMD. Hearsay claims that it was the similar US settlement that basically paid for Zen R&D...


Is there a reason for referring to Intel as INTC and Nvidia as NVDA? Is this just the internal Nvidia jargon or something?

It seems gratuitously confusing for readers, and I can't see any benefit to it.


I always wonder what people who feel it worthwhile to shave off 2 letters from a word do with all the free time they gain.


> Is this just the internal Nvidia jargon or something?

Stock symbols


Yes it’s obvious that these cryptic 4-letter abbreviations are ticker symbols, but my point is that most people refer to companies by their names, not their ticker symbols. I’m wondering if using the latter is something common internal to Nvidia, or if there’s some other explanation.


Possibly a hangover from when sending data across the wire was expensive and being able to uniquely identify companies with just a few letters was a huge cost-saving innovation.


In my experience it is common when talking with people that invest in individual stocks--either personally or professionally.


Why would you say INTC and not Intel? Are you deliberately trying to be confusing?


Are you hiring?


Do you mind putting an email in your profile (or by reply)? I'd like to get your opinion on something but would rather not ask here.



