
This is the second "law" of dialectical materialism, as formulated by Engels:

"The law of the passage of quantitative changes into qualitative changes"

According to Wikipedia, its roots go back to ancient Greece: https://en.wikipedia.org/wiki/Dialectical_materialism


There is a spin on the same idea when working with data (maths/stats/comp/ML) and having to skirt around the curse of dimensionality. Suppose I have a 5-dimensional observation and I'm wondering whether there are really only 4 dimensions there. One way I check is to do a PCA, then look at the size of the remaining variance along the smallest component (the one at the tail end, when sorting the PCA components by size). If that remaining variance is 0, the answer is easy: it was only ever a 4-dimensional observation after all.

However, in the real world it's never going to be exactly 0. What if it is 1e-10? 1e-3? 0.1? At what size does the variance along that smallest PCA axis count as an additional dimension in my data? The thresholds are domain dependent, but I can say for sure that enough quantity in the extra dimension gives rise to a new dimension, i.e. adds a new quality. Conversely, diminishing the (variance) quantity in the extra dimension eventually removes that dimension (and with total certainty at the limit of 0).

I can extend the logic from this simplest case of linear dependency (where PCA suffices) all the way to the most general case, where I have an arbitrary program (instead of PCA) and the criterion is predicting the values in the extra dimension (with the associated prediction error playing the role of the variance in the PCA case). At some error quantity > 0 I have to admit I have a new dimension (quality).
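The thresholding step above can be sketched in a few lines (a minimal illustration, assuming the PCA has already been run; the function name, variances, and cutoffs are made up for the example):

```javascript
// Count how many PCA components carry "real" variance.
// `variances`: per-component variances, sorted descending.
// `threshold`: the domain-dependent cutoff discussed above.
function effectiveDimension(variances, threshold) {
  return variances.filter((v) => v > threshold).length;
}

// A nominally 5-dimensional observation whose last axis is almost flat:
const variances = [4.2, 2.9, 1.1, 0.6, 1e-10];

effectiveDimension(variances, 1e-6);  // → 4: the 5th axis doesn't count
effectiveDimension(variances, 1e-12); // → 5: lower the bar and it reappears
```

The same quantity-into-quality jump shows up directly: sliding the threshold past the smallest variance changes the (qualitative) dimensionality of the data.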


Interesting, thank you for pointing that out! I've heard this before but never knew the source.


It is also a quote from Stalin.


Engels pre-dates Stalin by a considerable period of time, and we can assume Stalin had read Engels. Safe to say it's Stalin just paraphrasing Engels.


And, further, Engels is just paraphrasing Hegel.


Kant etc.


Not safe to say at all, no. It is such an obvious thing to say, and such an easy observation, that many people have said something of this nature for a very long time, independently of each other. This is basically another phrasing of the question: "how many grains of sand make a pile?"

I'm not impressed by a cheap observation like this, even when phrased in a clever-sounding way. I am impressed when people make new observations where this applies, such as when they are able to model a specific macro system that behaves very differently when the number of inputs is increased by a lot, and show how that is useful for our understanding of nature (including human nature).


Hmm, maybe you could write to Engels to tell him just how unimpressed you are?


I suspect that Stalin read that from Engels. I think that is a reasonable suspicion.


As I understand it, Stalin said, "Quantity is its own kind of quality." But I don't have the original Russian (someone here no doubt does) where he was referring to the USSR's ability to produce arms faster than their opponents even though the quality was lower.


This is a quote by Thomas A. Callaghan Jr., but it is often misattributed to Stalin.

https://klangable.com/blog/quantity-has-a-quality-all-its-ow...


In this[1] work, titled "On Dialectic and Historic Materialism", Stalin references the idea and properly attributes it to Engels.

1 https://c21ch.newcastle.edu.au/stalin/t14/t14_55.htm


And people ask why I still come to this site :-). That is a great link.


Certainly that dialectic principle is broadly known. But it's specifically (mis)attributed to Stalin with reference to wartime production/conscription, and you won't find that in his works, recorded speeches, or the memoirs of contemporaries.

This goes, in fact, for most of his grand quotes. Whatever deep-sounding passages are attributed to him and can't be traced back to Marxist tenets are typically adaptations from the Bible, reflecting his seminary education.


I misread Engels as Hegel. Of course it makes more sense now.


Yep, looks like they are the same.



My then-five-year-old was run over by a human driver. Luckily, in his case the human was paying attention and managed to slow down to ~20 km/h before sending my son flying tens of meters, breaking his maxilla and knocking out all his front teeth, not to mention road rash on every part of his body, from his face and ears to his hips, legs, and shoulders. I think that only his shoes survived to be worn again.

That human driver reacted in about 1.5 seconds, judging by the dashcam footage. I fear that another 0.5 seconds of reaction time might have meant a vastly different outcome. Likewise, I would have had many calmer months had the reaction been 0.5 or 1.0 seconds sooner, as a computer's would have been.

I anxiously await computer-assisted driving to protect my family, even when my family members are the pedestrians. I've since bought a Tesla.


I myself was hit in a very similar way when 4 years old, while crossing a road outside a school, and also lost my front teeth.

My father tells of my clothes being cut off with scissors by the medics in the ambulance, my entire body being bruised, and me having a Joker-like smile from where the car ripped my face open. The scar still itches in cold weather.

He then himself nearly hit a child that ran out in front of him about a decade later and was totally shaken by the experience even though the car didn't actually make contact this time.

One of the reasons (there are many, I highly recommend it) I cycle commuted for years was that I didn't want to put myself in the position of being the driver that hits the kid that I once was.

Computer assisted braking and self driving both seem like good technical solutions to me. I trust computers much more than distracted humans and see the benefit for both the pedestrian who doesn't get hit, and the car driver who doesn't injure someone.


I hope your son is doing okay!

But I don't think your generalization is correct: computers can indeed, in theory, react far faster than humans, but they are just a black-box algorithm that may very well brake abruptly for a plastic bag in the wind, yet not stop at all for a child. Humans are simply much, much better at reading the environment, understanding it based on their internal model of reality, and reacting aptly.

If anything, the "correct" decision would be to equip every car with automatic emergency braking, as that is a sufficiently well-constrained problem for computers, and it could save countless lives by enhancing human capabilities. This feature is available in even lower-end modern cars nowadays.

But FSD is a scam that may very well take lives.


I like how they are doing experiments on whether the car will stop for children on a road with actual children running up and down the sidewalk. (At 1:45 - that's as far as I got.)


Maybe watch a bit further, to where the guy uses his own kids for the test instead and shows that it sees kids at the far end of the road, a long way from the car.


Ideally, it would always stop.

The fact that it sometimes stops before hitting a child is not completely comforting.


I don't think "ideally" is the right mindset to have here. So long as it does better than a human driver and/or better than the current status quo, it's already a great improvement. Of course things can always be better, but just because it isn't ideal doesn't mean it's bad.


> So long as it does better than a human driver and/or better than the current status quo, it's already a great improvement.

I've yet to see evidence that this is the case.

In this particular situation, it's hard to believe that a human driver would have struggled.


Bun uses the JavaScriptCore engine, which is the WebKit/Safari JavaScript engine; the engine is not written in Zig.


This is not efficient. Each iteration creates a new array instance due to the spread operator.


`acc.concat()` also creates a new array instance, so I don't get your point.
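To make the comparison concrete (a minimal sketch; `items` and the doubling operation are made up for illustration): both the spread and `concat` versions allocate a fresh array per iteration, while mutating the accumulator, or simply using `map`, does not.

```javascript
const items = [1, 2, 3];

// Both of these allocate a new array on every iteration, O(n^2) overall:
const viaSpread = items.reduce((acc, x) => [...acc, x * 2], []);
const viaConcat = items.reduce((acc, x) => acc.concat(x * 2), []);

// Mutating the accumulator keeps it linear:
const viaPush = items.reduce((acc, x) => { acc.push(x * 2); return acc; }, []);

// ...though at that point a plain map is the idiomatic choice:
const viaMap = items.map((x) => x * 2);
```

All four produce `[2, 4, 6]`; they differ only in how much intermediate garbage they generate.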


React is slow for the perf targets of VSCode.


React Native != React.

Also, do you consider that VSCode has higher performance requirements than the Xbox dashboard, or Office plugins?


It's still React, and it's still too slow. The only difference is where the render is committed to (DOM vs Native UI)

Having a declarative rendering logic, a rendering runtime, the VDOM, diffing, re-rendering, update scheduling and everything else that React uses under the hood is always going to be slower than tailor-made imperative rendering.

And let's not even go down the rabbit hole which is concurrent mode and suspense, where React is going to be basically a black box more suited to quantum computing with the whole "render x times and settle on a result that it thinks is correct".

I'm not sure if it will take up more resources than the Xbox dashboard or Office plugins, but I am willing to say "yes". The dashboard is quite simple compared to the full-fledged IDE which VSCode is becoming, and I have yet to see the whole Office suite, including the underlying logic, being ported to React Native.

Atom was ported to React at one point, but they quickly abandoned it because of the bad performance. Microsoft followed suit and hasn't even bothered with React for VSCode.


Ah, that is why Teams dropped Electron for React Native!


Microsoft Teams? No they didn't drop Electron for RN/RNW. They dropped Electron for Microsoft Edge WebView2. I might be missing some sarcasm though


No commenting on the React thing, but yes I do think that VSCode has much higher performance requirements than Xbox dashboard or Office plugins.

VSCode has a very technical user base that will complain a lot if there is any kind of sluggish performance when doing live debugging. It's not just to work on small JS projects anymore. I use it to debug CUDA kernels on several processes at the same time and a lot of people have similar push-it-to-the-limit use cases.


If the user base were that technical, with those performance requirements, they wouldn't be using an Electron-based product to start with, when there are plenty of native alternatives, including for CUDA debugging from Nvidia themselves.


I have over 2 decades in this industry and have used almost every IDE that exists.

I work on very large and complex projects ... the current one I'm working with is a massive Angular 12 project.

The fact that it's Electron doesn't matter to me. I am not concerned with it "using much more memory than native" because that has no effect on me.

All I care about is that it works and that it is performant. It is both.


The # symbol denoting private fields has already been finalized and approved to be a legal part of the language. No going back.


Object properties keyed by Symbols (even when defined through classes) are still accessible via e.g. Object.getOwnPropertyDescriptors(myObject).

Before private fields, it was possible to create private state through a WeakMap, which is somewhat clumsy and not very performant.


It is already implemented in the current Firefox, Chrome and Safari 12.


Some TC39 members proposed this. It has in no way been discussed or decided by the TC39 committee yet. The fancy naming is proposed by some as a way to prevent name collisions with the flatMap and flatten array methods used by an old version of the MooTools library - i.e., don't break the web.


Yes, this is by no means normative! Nothing will happen without a long discussion and consensus among dozens of people. TC39 has a meeting later this month where I imagine this will be discussed in more detail.

Also, assuming we can't use flatten, I am sure we will land on a reasonable name that is not smoosh.
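Whatever the final name, the behavior under discussion is straightforward; for reference, it eventually shipped in ES2019 as `flat`/`flatMap` (the array contents below are made up for illustration):

```javascript
const nested = [1, [2, 3], [4, [5]]];

nested.flat();  // → [1, 2, 3, 4, [5]]  (flattens one level by default)
nested.flat(2); // → [1, 2, 3, 4, 5]    (depth argument goes deeper)

// flatMap is a map followed by a depth-1 flatten:
[1, 2, 3].flatMap((x) => [x, x * 10]); // → [1, 10, 2, 20, 3, 30]
```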


I'm curious what examples there are of TC39 saying no. How many proposals have ever been withdrawn? That's a pretty low bar for review. Looking higher: are there examples where TC39 has agreed that the quality of a submission was high enough, but decided the feature was not a good fit for the language?

I feel like TC39 is letting anyone with an interest in adding to the language do so. It may take a long long time, may require a lot of technical back and forth, but I feel like ultimately, once the proposal can meet the technical demands required, TC39 will approve the new feature.

A lot of really good things have happened. But I also worry that the language is out of control. Features like the pipeline operator, or the new smart pipeline operator, are daunting and scary capabilities that would make JS vastly less approachable. I don't know if TC39 has the means or the spirit to say no.

And at some point, I feel like we need some cool-off time to experience what already is. We've done so much, so fast. A moratorium on reinventing the language would give everyone time to get over the culture shock of it all, and time to practice, learn, and discover what the real issues are before carrying on the rapid expansion. A break, at some point, seems in order to me.

