We're teaching a course at ETH Zurich [1] where --besides the actual payload of solving partial differential equations (PDEs) on GPUs-- we put a lot of emphasis on "tools". Students learn how to use git and submit their homework by pushing to their own GitHub repo; we teach testing and continuous integration, writing documentation, running code on a cluster, etc. In their final project, again submitted as a GitHub repo, they need to make use of all of these skills (and of course solve some PDEs).
Note that excellent work in this space is done by the Software Carpentry project, which has existed since 1998 [2].
As an alumnus, thanks a lot for doing this. Looking back, the things I learned in just my first few weeks in industry made writing code so much more productive - if only someone had shown some of it during an early semester, even just in a teaching-assistant hour, it would have saved so many hours.
I specifically remember when one of the exercises for a compiler lecture contained unit tests the code had to satisfy, and I thought: wow, why didn't I already know about this during the earlier algorithms classes, where I was fumbling around with diff tools to check my output? Let alone proper version control - that would have been a blessing.
In hindsight, it's a bit embarrassing that I didn't bother to, well, just google for it, but neither did my colleagues - I guess we were so busy with exercises and preparing for exams that we just didn't have the time to think further than that.
Thank you very much for the GPU course. Even though my college taught shell usage to some extent, when I asked about GPU programming it was considered a nerd topic back in 2009.
Whilst Julia's foreign function interface is indeed good and it is really easy to call into C, the point is that Julia itself is as fast as C. So you don't need to write any C code to get performance, instead just tune the bottlenecks in Julia itself.
For instance, the standard library of Julia [1] is written in Julia itself (and is very performant) and only calls into external C or Fortran libraries where there are well established code-bases (e.g. BLAS, FFTW). Compare this to, e.g., Python or R where much of the standard library is written in C.
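To make the point concrete, here is a minimal sketch using only standard Julia (nothing from the thread is assumed beyond the language itself): calling into C via `ccall` is a one-liner, yet a plain Julia loop is itself compiled to native code, so performance tuning normally happens in Julia directly.

```julia
# Calling a C library function needs no wrapper code at all:
# `ccall` invokes `strlen` from libc directly.
len = ccall(:strlen, Csize_t, (Cstring,), "hello")  # → 5

# But a hand-written Julia loop is also compiled to native code,
# so the usual workflow is to profile and tune the Julia code
# itself rather than rewrite hot spots in C:
function mysum(xs)
    s = zero(eltype(xs))
    @inbounds for x in xs
        s += x
    end
    return s
end

mysum([1.0, 2.0, 3.0])  # → 6.0
```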
The outlook toward Julia 1.0 was given by Stefan Karpinski in his JuliaCon 2016 talk: https://youtu.be/5gXMpbY1kJY . But yes, the plan is to have a 0.6 and then 1.0.
Aside: femtolisp is used in the Julia parser, and was created by one of (or the) main Julia contributors. A femtolisp REPL is included in the Julia executable:
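For anyone who wants to try it: the femtolisp REPL is reached via the (little-advertised) `--lisp` flag of the `julia` binary. A minimal session looks something like this:

```shell
$ julia --lisp
> (+ 1 2)
3
```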
There are now fellowships in the UK specifically aimed at scientific programmers: https://www.epsrc.ac.uk/funding/calls/rsefellowships/ . Even so, there is currently no real career path for scientific programmers. Hopefully this will change in the future!
[1] https://pde-on-gpu.vaw.ethz.ch/ [2] https://software-carpentry.org/