I lucked out a few years ago and ended up at a startup focused on scaling GPU computation for a web service. Before that, I'd spent 15 years as a web/mobile developer: first Perl/PHP for about 5 years, then the last 10 mostly Python/C/C++/Obj-C/C#/Go.
What I can say about that, and about what's currently happening to computation, is that we will all be writing code for vector processors soon. It's a sea change, and the throughput gains are an edge for anyone who can use them now and in the near future. That's what this article is about. Google is the only other company making vector processors for compute at scale, and they have the BEST tools for people making use of them in their cloud service, even though they are generally provisioning Nvidia GPUs.
AWS doesn't ship computers and is lagging behind in its compute services. Nvidia with CUDA changes the landscape in a crazy way. You might not care about it now, but having an understanding of it is, IMO, critical for anyone who plans to be working on computers in the next 2-10 years, unless you have a really untouchable position. Even if you think your position is untouchable, you might be in for a shock when someone blows the doors off your business logic with CUDA and you can't catch up.
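To make "writing code for vector processors" concrete, here's a minimal sketch of what CUDA looks like: a kernel that runs the same operation across thousands of threads at once, which is where the throughput edge comes from. This is a generic vector-add example for illustration (using unified memory to keep it short), not anything from a specific codebase:

    #include <stdio.h>
    #include <cuda_runtime.h>

    // Each thread handles one element; the GPU runs thousands of
    // these in parallel. That's the whole point of a vector processor.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;           // 1M elements
        size_t bytes = n * sizeof(float);

        float *a, *b, *c;
        // Unified memory keeps the example short; production code often
        // manages host/device copies explicitly with cudaMemcpy.
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);     // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Nothing about the algorithm changes; what changes is that one launch fans the work out over the whole chip instead of looping on a single core.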
Regardless of GPU choice, "write once" (or close to it) cross-platform compute shaders[0][1][2] are coming in 2020, and there's no way anyone is going to bump CUDA out of being at the front of that.
[0] https://en.wikipedia.org/wiki/WebGPU
[1] https://www.khronos.org/vulkan/
[2] https://github.com/9ballsyndrome/WebGL_Compute_shader