This is really a fantastic resource - it doesn't hide too much away behind "helper" libraries so you understand where everything is coming from. It's the only beginner resource I've found that covers drawing multiple things: that was always my stumbling block with WebGL!
Shameless self-plug: you should check out my YouTube channel. I have a playlist where I cover WebGL (not WebGL2): https://www.youtube.com/watch?v=oDiSqQT_szo&list=PLPqKsyEGhU.... I've done 108 WebGL screencasts; everything is built up slowly from scratch (so nothing is hidden away), and the series covers a lot for beginners, including drawing multiple things. Cheers!
Can someone explain why the API design is such that it takes roughly 100 lines of code and an understanding of many different concepts (buffers, vertex/fragment shaders, uniforms, attributes, etc.) to draw a monochrome triangle on a canvas?
I'm not trying to be snarky or say WebGL sucks (I have very little experience with it), I'm just wondering why the design choices have culminated into such an API?
Because this is what a modern programmable pipeline looks like. You can go from that same ~100 lines of code to 80,000 triangles and have it still run well.
You need:
1. A way to send opaque data to the GPU (this is a buffer)
2. A way to programmatically transform that data (this is a shader)
3. A way to specify input parameters to those transforms not found in the data (a uniform)
4. A way to decompose that opaque data into tangible per-vertex values (an attribute)
With not much more code, you can go from that to something like e.g. http://magcius.github.io/pbrtview/ using the same exact primitives. This is an example of a custom WebGL2 engine (not using three.js) in action. Leaf through https://github.com/magcius/pbrtview/blob/gh-pages/src/models... and you'll still see the same createBuffer, bufferData, drawArrays. There's not much more that I added on top.
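To make those four pieces concrete, here's a minimal sketch of roughly what those ~100 lines boil down to (my own illustration, not code from the article or from pbrtview; error checking omitted, and it assumes a <canvas id="c"> on the page):

    const gl = document.getElementById('c').getContext('webgl');

    // 1. A buffer: opaque vertex data uploaded to the GPU.
    const buf = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buf);
    gl.bufferData(gl.ARRAY_BUFFER,
        new Float32Array([0, 0.5, -0.5, -0.5, 0.5, -0.5]), gl.STATIC_DRAW);

    // 2. Shaders: small programs that transform that data on the GPU.
    function compile(type, src) {
      const s = gl.createShader(type);
      gl.shaderSource(s, src);
      gl.compileShader(s);
      return s;
    }
    const vs = compile(gl.VERTEX_SHADER,
        'attribute vec2 pos; void main() { gl_Position = vec4(pos, 0.0, 1.0); }');
    const fs = compile(gl.FRAGMENT_SHADER,
        'precision mediump float; uniform vec4 color; void main() { gl_FragColor = color; }');
    const prog = gl.createProgram();
    gl.attachShader(prog, vs);
    gl.attachShader(prog, fs);
    gl.linkProgram(prog);
    gl.useProgram(prog);

    // 3. A uniform: a parameter the shaders read that isn't in the vertex data.
    gl.uniform4f(gl.getUniformLocation(prog, 'color'), 1, 0, 0, 1);

    // 4. An attribute: how the opaque buffer decomposes into per-vertex values.
    const loc = gl.getAttribLocation(prog, 'pos');
    gl.enableVertexAttribArray(loc);
    gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

    gl.drawArrays(gl.TRIANGLES, 0, 3);

Those same calls (createBuffer, bufferData, drawArrays) are exactly what you'll find in the pbrtview source linked above; the data and shaders just get more elaborate.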
The truth is that a tiny amount of API on top could significantly reduce the boilerplate for true beginners: a spec-defined trivial vertex and fragment shader with an associated glBindDefaultShaders() would already go a long way.
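To be clear, no such glBindDefaultShaders() exists in any spec today; the idea is just that the implementation would pre-compile and bind something like this trivial pass-through pair for you (a sketch of what that hypothetical default might look like):

    // Hypothetical sketch only: glBindDefaultShaders() is not a real WebGL call.
    const defaultVertexShader = `
      attribute vec4 a_position;
      void main() { gl_Position = a_position; }`;

    const defaultFragmentShader = `
      precision mediump float;
      void main() { gl_FragColor = vec4(1.0); }  // flat white
    `;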
The truth is that it doesn't really make sense to make the "render one triangle" case easier, since OpenGL only makes sense to use if you're going to go deep on how it works. Accelerating the first two days down to one isn't going to make a huge impact on the overall learning curve (compared with other APIs, where cutting the first two days down to one would be a huge deal).
My two biggest shelf pulls are Real-Time Rendering [0] by Akenine-Möller, Haines and Hoffman, and 3D Math Primer for Graphics and Games [1] by Dunn & Parberry.
I also read a ton of presentations and papers. Highly recommend the famous PBR SIGGRAPH course notes [0], especially the intro to light & physics by Naty Hoffman. GPU-Driven Rendering Pipelines [1] is another recent goodie.
Flexibility and performance are the two driving forces behind the amount of boilerplate you need to get something on-screen. All the GPU API designs have gone in the same direction, and WebGL actually takes less boilerplate than, say, Vulkan or Direct3D.
Back when less boilerplate was needed, we had fixed-function pipelines that couldn't run shader code, and we were bound by all kinds of assumptions and limits. The APIs basically assumed you were doing perspective projection of textured triangles with Phong shading and point lights, for example. Today's APIs assume far less about what you want to do and allow far more customization, at the expense of more setup calls.
That said, getting started really is a total nightmare, and there's lots of room for acknowledging that most people only need one of a few basic pipelines. There's no reason we couldn't have some easy default setups in addition to all the flexibility. API designers would probably argue that convenience templates are not in their purview, but it sure would be nice if GPU interfaces took a step or two in the ease-of-use direction.
GL is the worst of the graphics APIs when it comes to newbie support.
While every other 3D API ships with support for math, textures, models, shaders, fonts, and a basic scene graph, GL leaves it to developers to hunt for 3rd-party libraries.
Khronos tried to change that by creating an SDK page, listing endorsed libraries, but it has not been updated in ages.
WebGL is essentially OpenGL ES: the same API that is used on mobile phones, and a subset of the OpenGL that is used on desktops.
A lot of the verbosity is typically hidden in helper functions or libraries. I've used twgl.js (https://twgljs.org/), and it's much less code (and less potential for stupid mistakes in the boilerplate) than the pure WebGL API.
There are still many concepts to understand, there is no way around it. You're mostly writing programs that are executed on the GPU itself, and you need to understand the limitations of that to write WebGL code.
There are higher-level abstractions on top of WebGL you can use, if you don't want or need the low-level access WebGL gives you.
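For a sense of how much even a thin helper removes, here's roughly what the same monochrome triangle looks like with twgl.js (a sketch from memory of its docs at https://twgljs.org/; exact signatures, e.g. drawBufferInfo's argument order, have varied between versions, so double-check before copying):

    const gl = document.querySelector('canvas').getContext('webgl');
    const vs = 'attribute vec4 position; void main() { gl_Position = position; }';
    const fs = 'precision mediump float; uniform vec4 color;' +
               ' void main() { gl_FragColor = color; }';

    // twgl reflects over the linked program, so attribute/uniform setup collapses:
    const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
    const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
      position: { numComponents: 2, data: [0, 0.5, -0.5, -0.5, 0.5, -0.5] },
    });

    gl.useProgram(programInfo.program);
    twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
    twgl.setUniforms(programInfo, { color: [1, 0, 0, 1] });
    twgl.drawBufferInfo(gl, bufferInfo);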
One point of view: it's because it's a low-level API. Flexibility has been valued over ease of use. In general I think that offering a low-level API is about the only right way to go for a real application platform. You can always build an easier, opinionated high-level API on top of it (like three.js). If the platform only offers a high-level API, you can't go the other way around.
A good example of this issue is CSS. I have a somewhat strong opinion about this, but I really think the fact that we cannot polyfill flexbox / grid layouts is "proof" that this technology is not good enough and that a lower-level API would be the better option (with the current CSS model perhaps built on top of it).
My own rather limited opinion is that accessing the video card via any technology means using the facilities the video card itself exposes, and that making those facilities easier to use has been sidelined in favor of adding the features game developers need to build modern games. There is probably a happy medium between a super-simple API/engine for throwing polygons at the screen and having to build an entire 3D engine pipeline every time you want to use WebGL, but I'm not entirely sure we are there yet. In the meantime, some people use libraries to abstract away some of the uglier parts of WebGL so they can make things more quickly; my favorite is https://threejs.org/
WebGL has been around for quite a while now, but I never see any websites using it. Surely they exist somewhere? What have I been missing? Anyone have a good source of links to things using WebGL?
Lucidchart uses WebGL for the canvas in our editor. The menus around the canvas are Angular, but the actual editor is all WebGL. Full disclosure: I work at Lucid.
Using latest Firefox: why is Google Maps comparatively so bad (zooming only happens in steps when scrolling, and the map takes much longer to load) compared to the Mapbox example mentioned in a neighbouring comment?
Almost all browser games coming out these days. Here's a link with a list if you want to see what they're like:
https://www.crazygames.com/t/webgl
It's quite impressive. Try any of Madalin Stunt Cars, Bullet Force, or Truck Driver Crazy Road 2 in fullscreen, consider that each of them is built by a single developer, and be amazed (I am, at least).
I wonder how much of that performance difference is due to "Three.js vs Unity/Unreal" rather than "WebGL vs Native GLES".
AFAIK Three.js is still missing standard optimizations that are taken for granted in a "real" engine, like batching and occlusion culling, and some optimizations it does support are scarcely used due to poor tooling. Pre-baked lighting and compressed textures are technically supported but there's no easy workflow to pre-generate the necessary data.
I think PlayCanvas is the most robust engine that's built specifically for the web. Only the core runtime is open-source though, to get the full benefit you need to use their proprietary editor and asset pipeline.
I would actually just recommend diving into THREE.js.
It's truly one of the best documented and most "obvious" codebases out there. If all you've got is a solid understanding of JS, you can just start hacking up something nontrivial on THREE.js and you'll come out of it knowing most of WebGL.
And in doing so you'll eventually realize that even WebGL itself is quite "high level" -- there is a _huge_ amount of abstraction that you take for granted in the browser.
> you can just start hacking up something nontrivial on THREE.js and you'll come out of it knowing most of WebGL.
Sorry, but this is wildly untrue. I don't really know where to begin, but you wouldn't even need to touch shaders to accomplish that, so no, you wouldn't know most of WebGL. THREE.js is very much a thick abstraction over WebGL.
> And in doing so you'll eventually realize that even WebGL itself is quite "high level" -- there is a _huge_ amount of abstraction that you take for granted in the browser.
Also not accurate. WebGL is a -very- thin wrapper over OpenGL ES, which is itself as low level as you can go without stepping into something like Vulkan. There's almost no abstraction.
This is true. From experience I can say that the core experience of using WebGL in JS is very similar to using OpenGL ES in C.
That being said, WebGL and the browser do a little more work for you than the GL API does in C, things you'd otherwise need a helper library for. In particular, there's no shared library or header file hell, context creation is easy, image import is easy, and JS will handle memory allocation for you. It eliminates a lot of the annoying stuff needed to get to writing the actual GL code, though you still have to go through the state machine hell of GL itself :)
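As a rough sketch of what "context creation is easy, image import is easy" means in practice (the file name is made up; no helper library involved):

    // One call gets you a context; compare with EGL/GLX/WGL setup in C.
    const gl = document.querySelector('canvas').getContext('webgl2');

    const img = new Image();
    img.src = 'brick.png';   // hypothetical texture file
    img.onload = () => {
      const tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      // The browser decodes the PNG/JPEG for you; in C you'd pull in stb_image or similar.
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
      gl.generateMipmap(gl.TEXTURE_2D);
    };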
While true, only beginners actually use raw GL calls, even in C.
Anyone with experience quickly builds up their own mini-engine to handle loading shaders, images, fonts, and models, to tie everything together per model in a mini scene-graph way, and to work around driver- and GPU-specific bugs...
If you don't need to touch shaders, then you're doing something trivial, like drawing models with a camera.
Once you dip your toes into even something like alpha transparency, z-fighting, depth-buffer precision, or texture-upload hitches, you start to unpack the shader pipeline under the hood. THREE.js makes that super easy to do, because it's much more of a broad library than it is a thick one.
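The usual first step into that is THREE.ShaderMaterial, where you hand Three.js raw GLSL but it still does the buffer/attribute plumbing for you. A rough sketch (the uniform name uTime is my own; Three.js injects position, modelViewMatrix and projectionMatrix into ShaderMaterial shaders):

    const material = new THREE.ShaderMaterial({
      uniforms: { uTime: { value: 0 } },   // uTime is just an illustrative name
      vertexShader: `
        void main() {
          // position / modelViewMatrix / projectionMatrix are provided by Three.js
          gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }`,
      fragmentShader: `
        uniform float uTime;
        void main() {
          gl_FragColor = vec4(abs(sin(uTime)), 0.0, 0.0, 1.0);  // pulsing red
        }`,
    });
    const mesh = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), material);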
> Also not accurate. WebGL is a -very- thin wrapper over OpenGL ES
Conceptually it seems thin, but it's all a lie. WebGL's most awesome feature is that for simple cases you don't realize how much of a lie this is.
In practice, there is shader recompilation, sentinel rewrites and caching, sync fences, format decodes, a full-blown message queue protocol to a render thread, automatic frame flipping and composition, CPU rendering, and the end of the pipe isn't even necessarily OpenGL at all!
But I agree it appears very thin until you actually look at what's happening (or are forced to due to leaky abstractions).
One of the best resources I've read on this is WebGL Insights:
I'd argue that THREE.js is more of a graphics engine by itself. It makes a lot of choices for you (typical forward renderer, classic wasteful cubemap shadow map rather than doing any frustum culling). People confuse it with a "light-weight wrapper" probably because of the generic name and ".js" suffix but don't get confused. You'll learn as much about WebGL as you will using Unity.
I really enjoy THREE.js, but I don't think it gives you the same level of understanding of WebGL that this link does. This is like taking a course on how OpenGL works in the browser.
That being said, the amount of code needed to make a hello world in WebGL2, not including learning GLSL, is a bit daunting, and if you had shown me this tutorial first, I might not have gotten interested in doing any 3D coding in the browser. The Three.js abstraction library makes the task much less daunting and gives you some force multipliers that raw WebGL does not: it makes a lot of assumptions for you while still giving you a fair amount of control over the action.
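For comparison, here's a sketch of roughly the entire Three.js "hello cube" (recent-ish Three.js API from memory, not code from the tutorial):

    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(
        75, window.innerWidth / window.innerHeight, 0.1, 100);
    camera.position.z = 3;

    const cube = new THREE.Mesh(
        new THREE.BoxGeometry(1, 1, 1),
        new THREE.MeshNormalMaterial());   // no lights needed for this material
    scene.add(cube);

    renderer.setAnimationLoop(() => {
      cube.rotation.y += 0.01;
      renderer.render(scene, camera);
    });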
Seriously, we are having immense trouble with securing CPU virtualization and now we are talking about GPU virtualization for unreviewed, unaccounted, automatically downloaded and executed code? Stop this madness.
Use native code and native APIs like OpenGL or Vulkan.
Let's not go too far - Meltdown and Spectre is not an excuse to shut everything down. They existed for years before and I'm sure issues will exist for years after.
Do you really think the WebGL runtimes are safe and don't leak data through a side channel, possibly combined with the running JavaScript code? To make matters worse, the hardware is opaque, running unknown firmware.
A side-channel attack in WebGL similar to Spectre/Meltdown is probably not possible, simply because you don't know when stuff is going to be rendered or how long it takes. You tell the GPU "do this computation", and if it took too long, vsync takes 32 ms instead of 16. Maybe you can get a bit more precision with gl.readPixels, but I seriously doubt it. I've tried to micro-benchmark features and changes and I always get too much noise.
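For reference, the kind of micro-benchmark being described looks roughly like this (a sketch; it assumes a gl context with a program and buffers already bound, and browsers additionally coarsen performance.now() these days):

    // readPixels forces the GPU to finish, so you can bracket a draw with timestamps,
    // but the result folds in the whole pipeline flush and is very noisy.
    const t0 = performance.now();
    gl.drawArrays(gl.TRIANGLES, 0, 3);                      // queue the work
    gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE,    // force a sync by reading a pixel
                  new Uint8Array(4));
    const t1 = performance.now();
    console.log('draw + flush took ~' + (t1 - t0).toFixed(2) + ' ms');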
Best case is you get the source or binary package from your trusted package manager and run/compile that.
If that is not available, and you still really need the result of that piece of code, fetch the code, compile it and run it.
Code distribution has inherent dangers and cannot be safely made extremely effortless and comfortable. Accounting can be bothersome, but is necessary for code.
We already have a trusted source of code in the system (the OS package manager); why duplicate the effort?
Also, it is important to keep the number of authorized sources of code for a system small.
If you insist on using that horrible language, you can deliver a webapp that runs in a JavaScript runtime through the OS package manager. That ensures proper accounting for the users.
In most distros you don't even need a URL, just the name, and you get the program in seconds, including signed code that is vetted by maintainers. Usually you also have many runtime options in the config file, if not in the program itself, but those are details.
When your authorized source of code doesn't agree with a program, then there is a problem, either with your source or with the program. There are too many parameters to describe all scenarios. For example, if distros refuse to provide a certain program, then you most likely are better off without it; if Apple doesn't let you install a program on your iPhone, then you picked the wrong gatekeeper. Apple is a tyrant and puts its interests above yours. If you enter their garden you are on your own, and I care very little for the problems there.
I'm not saying that code delivery is optimal like it is in the most popular distributions. But the Web is not the solution.
Oh, and you also have at least one such authorized source of code unless you have written every bit of software yourself. You might just be unaware of what the sources are.
If you use Google Chrome with auto-updates, for example, you've made Google an authorized source of code for your system. More generally, your browser's developer gets to decide for you what technologies your browser supports by default.
Most OSes facilitate an automatic or semi-automatic update mechanism. The organization that develops those updates and sends them to you is then a source of code, authorized by you by virtue of your installing the OS.
A gatekeeper is a good thing if you have the last say. So you choose one that works in your interest, and you decide what's good for you when you disagree. You already have gatekeepers that work; because they work in your interest and for you, you don't notice or complain.
Not having any gatekeeper basically means letting any code run on your machine. I detailed before why that is a bad idea, from the security perspective and for its other consequences.
First, at least Nvidia's driver is affected in an unspecified manner. [0]
Second, the hardware is opaque and runs unknown firmware. What lies still in there? What will lie in there in the future?
Third, how do these hardware components interact? Do they create vulnerabilities in combination with each other? Maybe side-channel leaks that wouldn't be there otherwise?
Fourth, there are arguments for rejecting these Web technologies that are orthogonal to their security. These Web technologies lead to an ecosystem of unreviewed code that is outside of the direct and indirect control of the user, shifting power to the developers, ultimately leading to centralization.
Considering that GPU architecture and low-level programming is still treated largely as trade secrets and the domain of proprietary blobs, it's hard to argue how virtualization should be trustworthy.
When the OS designers get a good spec about the GPU instruction sets, memory architecture, MMU, and IO-MMU, and the OS designers can be the ones in charge of programming these to be consistent with existing OS protections, then we might start to approach an equivalence with CPUs. Then and only then, we can start worrying about subtle, modern risks like errata where the GPU hardware deviates from the idealized spec or where side-channel attacks take the wind out of our sails.
Right now, we have something much closer to the GPU being another computer with an opaque and proprietary OS and a higher-level "managed code" runtime that accepts jobs defined in a mystery ISA by a proprietary compiler suite embedded inside the "graphics driver" in our own computer. We have to have blind faith that there is any data protection as well as that there aren't any code-injection flaws that would allow malicious data and "shader" code to hijack the GPU and its bus interface to compromise the host computer.
You seem to misunderstand. It's not about not using CPUs or GPUs, but about what code runs on them, and how that code is accounted for. Who vetted for it and who is responsible for it?
When you've exhausted this and are looking for next steps to try, I recommend this youtube series by Sketchpunk Labs: https://www.youtube.com/watch?v=LtFujAtKM5I&list=PLMinhigDWz...