Hacker News | djeric3's comments

Personally, I see no greater advance for humanity than curing diseases. Some of these illnesses have been with us since before humans became modern humans, and now for the first time ever, we have a chance at curing or preventing them. I'd take new disease cures over colonizing Mars.

But more to your point: I understand health care costs are crazy (especially in the US), but I am concerned that capping health care expenditures would disincentivize companies from taking risks on new vaccines and therapies.


Facebook.com, LinkedIn.com, and Google Sheets ship ~1MB of JS compressed for their main pages, which is 5-10 MB of JS uncompressed. So JS parsing time ends up taking hundreds of milliseconds on initial load.

And of course, people want to build even richer experiences (live streaming, 360 media, etc) so there is a lot of appetite for loading more code in the same amount of time.


To be fair, pretty much any platform (operating system) provides rich APIs and basic functionality like built-in UI components. So I don't think there's a lot more pre-existing functionality in a browser vs Windows or Android.


I think you are wrong. The Windows API is very low-level compared to HTML APIs. For example, there is no automatic layout: you must specify the exact size and position of every GUI element, such as a button or input field.


Some apps use a GUI framework and don't combine it with their app into a single file, which makes for a better comparison (e.g. Paltalk does this).


Yes, Facebook already lazily loads code for secondary functionality. More details in this comment https://news.ycombinator.com/item?id=14912871


Please also look at my comment that shows the opposite: Facebook is loading a lot of unnecessary modules: https://news.ycombinator.com/item?id=14916177


Yup, replied above. Thanks for digging into this more :)


Chrome and Firefox both cache hot code in compiled (bytecode) form. This proposal addresses cold code loads. Many web apps update frequently (more than once per day), making caching much less effective and cold code much more common.


Facebook already ships only the code that is needed to render the current page. It goes even further and streams the code in phases, so that the browser renders the UI incrementally and the page becomes interactive as soon as possible. It then pulls in code dynamically for any secondary feature, only if the user interacts with that feature.

Some of the design is documented here https://www.facebook.com/notes/facebook-engineering/bigpipe-...
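The on-demand part can be sketched with a small lazy-module helper. This is a minimal illustration, not Facebook's actual loader; the module names and the `requireLazy` helper are hypothetical:

```javascript
// Minimal sketch of on-demand code loading (names are illustrative;
// the real loader is far more sophisticated).
const loadedModules = new Map();

async function requireLazy(name, loader) {
  // Fetch and evaluate each module at most once, on first use.
  if (!loadedModules.has(name)) {
    loadedModules.set(name, await loader());
  }
  return loadedModules.get(name);
}

// e.g. only pull in a (hypothetical) 360-video player when the user
// actually presses play:
// const { Player360 } = await requireLazy("player360", () => import("./player360.js"));
```

The win is that code for secondary features never hits the network, the parser, or the compiler unless the user actually reaches for that feature.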

Facebook has done a TON of optimization. The fundamental issue is that Facebook is not a webpage, it's a full application. And the Web as an application platform lags behind native platforms with respect to startup performance.

I explained why Facebook.com is really an app in this comment https://news.ycombinator.com/item?id=14912393

Now compare the functionality of the Facebook Android app vs the Facebook desktop webpage (nearly identical) and look at their respective installed sizes (~180MB vs ~7MB).

This proposal will benefit both web apps and web documents. And more importantly, it will allow people to build richer, more sophisticated applications on the web without sweating over extra kilobytes in their JS code size.


Most of the code doesn't have to be loaded immediately. For example, you write in your comment that Facebook can play 360-degree videos, but the player doesn't have to be loaded until the user tries to play such a video.

I decided to look at Facebook's code more closely. It is modular and contains several thousand small modules. For example, on a news feed page the browser loaded 66 JS files with a total size of 5.7 MB, containing 3,062 modules [1].

But it is clear that many of those modules are not necessary to display the page. For example, a module named "XP2PPaymentRequestDeclineController", which is probably related to payments, is not necessary. Neither is a module named "MobileSmsActivationController". Obviously Facebook preloads a lot of code for dialogs that go unused most of the time the page is loaded.

Of course, I understand that it is very difficult to optimize code when a large team is constantly writing new code and everybody has strict deadlines, so there is no time for optimizations, especially ones that require serious refactoring.

[1] https://gist.github.com/codedokode/cb506cee367bdb9e1071bc186...


Facebook.com today loads functionality dynamically. Open the Network panel, interact with a secondary feature, and you will see it load code on-demand.

With respect to your example of unnecessary modules, sometimes the dependency trees between modules are non-obvious. But more to the point, the code served to a user is NOT personalized for each individual user and to their specific newsfeed and UI contents. This is actually a performance optimization. Facebook looks at which modules are most commonly required across most users based on their recent activity, and then bundles those common modules into large packages and pushes them aggressively to the browser. This actually results in a very large loading-time win, but ends up overserving some % of extra modules that are not needed by a specific user.


WebAssembly is well suited for statically typed codebases written in C++ etc. You can't compile JavaScript to WebAssembly. Yes, you could do Web development in a statically typed language, but would you want to?


1. One of my complaints was that yes, WebAssembly for some reason decided to target C++ first.

2. From the FAQs:

> Beyond the MVP, another high-level goal is to improve support for languages other than C/C++. This includes allowing WebAssembly code to allocate and access garbage-collected (JavaScript, DOM, Web API) objects

3. News flash: some people use statically typed languages for web development: TypeScript, Elm, Purescript


> One of my complaints was that yes, WebAssembly for some reason decided to target C++ first.

It's much easier to support C++ in WebAssembly. C++ and other statically typed languages can be compiled ahead of time to low-level instructions that manipulate memory or registers in a virtual CPU.

It's much more difficult to compile dynamic languages. Consider a JavaScript statement like:

  let result = a + b;

If this were a statically typed language, the compiler would know "a" and "b" are integers and could compile the statement into a single integer-add instruction (e.g. i32.add in WASM).

In a dynamically typed language, that "+" symbol could be an integer addition, or a floating-point addition, or a string concatenation depending on the types of "a" and "b". So what should the JS-to-WASM compiler generate? It has to generate different code to handle all the different data types, including throwing an exception for invalid types.

There would be a few problems with WASM code generated by such a compiler:

1) The generated code, with all this extra checking, is not going to be performant.

2) The generated code would be much larger, which would hurt transfer & parsing times.

3) The compiler would essentially be outputting a JavaScript interpreter in WASM by adding all these runtime guards for types.
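To make the problem concrete, here is a sketch in plain JS of the kind of type-dispatching helper a JS-to-WASM compiler would have to emit for every generic "+". It is deliberately simplified: full JS "+" semantics (ToPrimitive on objects, booleans coercing to numbers, etc.) add even more branches.

```javascript
// Sketch of the runtime dispatch needed for a generic "a + b"
// (simplified; real JS "+" has even more cases, e.g. object
// coercion via ToPrimitive).
function genericAdd(a, b) {
  if (typeof a === "number" && typeof b === "number") {
    return a + b;                   // numeric addition
  }
  if (typeof a === "string" || typeof b === "string") {
    return String(a) + String(b);   // string concatenation
  }
  throw new TypeError("unsupported operand types for +");
}
```

Multiply this by every operator and property access in a codebase and you effectively end up shipping an interpreter alongside the program.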

> Beyond the MVP, another high-level goal is to improve support for languages other than C/C++. This includes allowing WebAssembly code to allocate and access garbage-collected (JavaScript, DOM, Web API) objects

Adding GC and DOM interop will help WASM adoption, but you'll still have the issues I described above if you try to compile JS codebases to WASM.

> statically typed languages for web development

Yes, people can use statically typed languages for web dev, and if they compile them to WASM after it has GC and fast DOM interop support, they will get performance wins vs transforming their codebases to plain JS.

But there are productivity advantages to using dynamically typed languages during development, and there are existing, very large web app codebases written in JavaScript that cannot practically be converted to a statically typed language.


> There would be a few problems with WASM code generated by such a compiler:

> 1) the generated code with all this extra checking is not going to be performant, 2) the generated code would be much larger, which would hurt transfer & parsing times, 3) the compiler would essentially be outputting the JavaScript interpreter in WASM by adding all these runtime guards for types.

And, of course, you have full evidence supporting these statements?


I don't. You're welcome to prove me wrong if you want to whip up a basic prototype; I'm vdjeric on GitHub.

My goal is to make sophisticated web apps faster, I'm not married to any particular approach.


> you're welcome to prove me wrong

Ahahah what?

You are the one claiming this. The burden of proof is on you


I would say that the burden of proof is on me to prove that Binary AST is a significant real-world performance win and that it will not cause undue burden on JS engine implementors.

I don't think the "burden of proof" philosophy requires me to disprove every other possible approach, right?

I explained my reasoning for my statements in the comment itself. If you believe WASM could address this use case better, and are so inclined to build a toy proof-of-concept JS-to-WASM compiler, I'd be very interested in seeing it.


Facebook.com does a lot more than show a newsfeed, profiles, and notifications.

  - It contains a fully-fledged messenger application that supports payments, videoconferencing, sharing rich media etc
  - The newsfeed supports interactive 360-degree video, a live video player, mini-games in the newsfeed, and lots of other rich/interactive media
  - It's a gaming platform for 3rd party games
  - It's a discussion forum, groups management system, event planning UI
  - Photo sharing and editing platform, as well as live video streaming tool
  - A platform for businesses to have an online presence (Pages)
  - A peer-to-peer online marketplace (called "Marketplace")
And a dozen other things I can't think of right now. You might say "but I don't use all of those things". That's another tricky part, the fact that every user has a different "configuration" and different types of content in their newsfeed at any given moment, requiring them to be served different sets of JavaScript code.


Most of these features are unnecessary until the user plays a video or a game, opens a dialog, etc.


Right, and that's what happens today, the JS for the secondary functionality is loaded on demand.

Here's what I have in my FB homepage during a random load:

  - Search bar for searching people/groups/posts/pages
  - News ticker
  - Friend requests, Notifications
  - Sidebar ads
  - A rich text editor for sharing my status
  - A newsfeed story with a special "memory - 3 years ago" feature
  - Comments & commenting UI under newsfeed stories
  - Suggestions for "People You May know"
  - A video auto-playing a clip from a friend, with capability to auto-select between tracks of different video quality based on bandwidth (including bandwidth estimator code)
  - And probably a dozen different A/B experiments that I'm a part of
It takes a lot of code just to render all these UI elements. If I interact with any of them, additional code is loaded (you can see this in the Network monitor).

This homescreen UI is as rich as any desktop application and requires no less code to render. The problem being addressed in this proposal is that a native version of this app would start a lot faster than the web version. And that's because a browser will parse all of the code files an app loads on startup (inefficiently, by necessity), while a native app will only read the code for the code paths that are actually executed.

Basically, O(code executed) is a lot better than O(all files containing code that is executed). And this proposal features a more parser-friendly encoding and a change to the parsing semantics.

