
They don't. I'm using Cloudflare, and 90%+ of the traffic I'm getting is still broken scrapers, a lot of them coming through residential proxies. I don't know what they block, but they're not very good at it. Or, to be fairer: I think the scrapers have gotten really good at what they do because there's real money to be made.

35 years ago, a good chunk of the current EU was under Soviet-imposed totalitarian rule. Spain was a dictatorship until 1975. And it's been just 80 years since WWII.

It always boggles my mind that most Europeans are absolutely convinced that nothing like that could ever happen again. Meanwhile, many people in the US are convinced that the government will be coming for them any minute now.


They literally get arrested for posting memes.

They who and what?

> I believe real numbers to be completely natural,

Most real numbers are not even computable. Doesn't that give you pause?


Why would we expect most real numbers to be computable? It's an idealized continuum. It makes perfect sense that there are way too many points in it for us to be able to compute them all.

It feels less like an expectation and more like an observation: the "leap" from the rationals to the reals is far larger than the leap from the reals to the complex numbers. The complex numbers aren't even a different cardinality.

> for us to be able to compute them all

It's that if you pick a real at random, the odds are vanishingly small that you can compute that one particular number. That large of a barrier to human knowledge is the huge leap.


Maybe I'm getting hung up on words, but my beef is with the parent saying they find real numbers "completely natural".

It's a reasonable assumption that the universe is computable. Most reals aren't, which essentially puts them out of reach - not just in physical terms, but conceptually. If so, I struggle to see the concept as particularly "natural".

We could argue that computable numbers are natural, and that the rest of the reals is just some sort of fever dream.


The idea is we can't actually prove a non-computable real number exists without purposefully having axioms that allow for deriving non-computable things. (We can't prove they don't exist either, without making some strong assumptions).

You can go farther and say that you can't even construct real numbers without strong enough axioms. Theories of first order arithmetic, like Peano arithmetic, can talk about computable reals but not reals in general.

> The idea is we can't actually prove a non-computable real number exists without purposefully having axioms that allow for deriving non-computable things.

Sorry, what do you mean?

The real numbers are uncountable. (If you're talking about constructivism, I guess it's more complicated. There's some discussion at https://mathoverflow.net/questions/30643/are-real-numbers-co... . But that is very niche.)

The set of things we can compute is, for any reasonable definition of computability, countable.
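
To make "computable" concrete: a real x is computable if some program, given a precision n, returns a rational within 2^-n of x. A minimal Python sketch for sqrt(2), using only integer arithmetic (the function name is just for illustration):

    from math import isqrt
    from fractions import Fraction

    def sqrt2_approx(n: int) -> Fraction:
        # precision in -> guaranteed-close rational out
        scale = 1 << (n + 1)             # one guard bit
        root = isqrt(2 * scale * scale)  # floor(scale * sqrt(2))
        return Fraction(root, scale)     # error < 1/scale <= 2**-n

Each such program is a finite string, so there are only countably many of them - which is exactly why the computable reals are countable.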


I hold that the discovery of computation was as significant as the set theory paradoxes and should have produced a similar shift in practice. No one does naive set theory anymore. The same should have happened with classical mathematics, but no one wanted to give up excluded middle, leading to the current situation. Computable reals are the ones that actually exist. Non-computable reals (or any other non-computable mathematical object) exist in the same way Russell's paradoxical set exists, as a string of formal symbols.

Formal reasoning is so powerful you can pretend these things actually exist, but they don’t!

I see you are already familiar with subcountability so you know the rest.


What do you really mean by "exists"? Maybe you mean it has something to do with a calculation in physics, or that we can possibly map it onto some physical experience?

Doesn't that formal string of symbols exist?

Seems like allowing formal strings of symbols that don't necessarily "exist" (or aren't useful for physics) can still lead you to something computable at the end of the day?

Like a meta version of what happens in programming - people often start with "infinite" objects, e.g. `cycle [0,1] = [0,1,0,1...]`, but then extract something finite out of them.
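
A rough Python analogue of that (itertools is standard library):

    from itertools import cycle, islice

    bits = cycle([0, 1])                # the "infinite" object: 0, 1, 0, 1, ...
    first_ten = list(islice(bits, 10))  # extract something finite from it
    print(first_ten)                    # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]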


I am talking about constructivism, but that's not entirely the same as saying the reals are not uncountable. One of the harder things to wrap one's head around in logic is that there is a difference between, so to speak, what a theory thinks is true vs. what is actually true in a model of that theory. It is entirely possible to have a countable model of a theory that thinks it is uncountable. (In fact, the Löwenheim-Skolem theorem guarantees that countable models of first-order theories always exist, though it requires the Axiom of Choice.)

I think that what matters here (and what I think is the natural interpretation of "not every real number is computable") is what the theory thinks is true. That is, we're working with internal notions of everything.

I'd agree with that for practical purposes, but sometimes the external perspective can be enlightening philosophically.

In this case, to actually prove the statement internally that "not every real number is computable", you'd need some non-constructive principle (usually added to the logical system rather than the theory itself). But, the absence of that proof doesn't make its negation provable either ("every real number is computable"). While some schools of constructivism want the negation, others prefer to live in the ambiguity.


Because inexplicably, there's random pixel-level noise baked into the blue area. You can't see it unless you crank up contrast, but it makes the bitmap hard to compress losslessly. If you remove it using threshold blur, it doesn't change the appearance at all, but the size is down to 100 kB. Scale it down to a more reasonable size and you're down to 50 kB.
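
For the curious, a sketch of that kind of cleanup with Pillow (the file names, the threshold, and the blue's RGB value are placeholder assumptions; the idea is just to snap near-miss pixels back to the flat colors so lossless compression can do its job):

    from PIL import Image

    PALETTE = [(255, 255, 255), (0, 82, 155)]  # assumed dominant colors
    THRESHOLD = 12                             # max per-channel deviation

    img = Image.open("logo.png").convert("RGB")
    px = img.load()
    for y in range(img.height):
        for x in range(img.width):
            for color in PALETTE:
                if all(abs(a - b) <= THRESHOLD for a, b in zip(px[x, y], color)):
                    px[x, y] = color  # flatten the hidden noise
                    break
    img.save("logo_clean.png", optimize=True)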

Modern web development never ceases to amaze me.


None of this is due to "modern web development". It's just a dev not checking reasonable asset sizes before deploying/compiling, something that has happened in web, game dev, desktop apps, server containers, etc.

This should be an SVG (a few kB after proper compression), or if properly made as a PNG, it'd probably be around 20 kB.


The dev not having the common sense to check file size and apparently not realising that the PNG format was being grossly misused for this purpose (by not even having a single tone of white for the J and the corners, let alone for the blue background) is modern web development.

Right, so you mean that this is unique and inherent to web dev and specifically modern web dev.

What is that noise, actually? It's clearly not JPEG artifacts. Is it dithering from converting from a higher bit-depth source? There do appear to be very subtle gradients.

I would bet it's from AI upscaling. The dark edges around high-contrast borders, plus the pronounced and slightly off-colour antialiased edges (especially visible on the right side of the J), remind me of upscaling models.

Not even the white is pure. There are at least #FFFFFD, #FFFFFB and #FEFEFE pixels sprinkled all over the #FFFFFF.

I'd bet that it's AI generated, resulting in the funky noise.

Oh, ding ding! Opening in a hex editor, there's the string "Added imperceptible SynthID watermark" in an iTXt chunk. SynthID is apparently a watermark Google attaches to its AI-generated content. This is almost certainly the noise.
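
If you want to reproduce the check without a hex editor, Pillow exposes PNG text chunks (tEXt/zTXt/iTXt) as a dict on the image object (the path is a placeholder):

    from PIL import Image

    img = Image.open("logo.png")
    img.load()                    # make sure trailing chunks are parsed
    for key, value in img.text.items():
        print(f"{key}: {value}")  # the SynthID iTXt chunk shows up here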

Make it an SVG and it's down to 1 kB.

Meh. The room-temperature endurance of modern EEPROMs (e.g., ST M95256) is something like 4 million cycles. If you use a simple ring buffer (reset on overflow, otherwise just appending values), you only need to overwrite a cell once every 32k ticks, which gives you a theoretical run time of 250,000 years with every-minute updates or 4,100 years with every-second updates.
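
A sketch of that wear-leveling scheme in Python pseudocode, against a stand-in for the byte-addressed EEPROM (the M95256 has 32,768 cells):

    CELLS = 32_768             # M95256: 256 Kbit = 32 KB
    eeprom = bytearray(CELLS)  # stand-in for the real device

    class RingLog:
        def __init__(self):
            self.pos = 0

        def tick(self, value: int):
            eeprom[self.pos] = value & 0xFF    # one cell written per tick
            self.pos = (self.pos + 1) % CELLS  # wrap on overflow, so each
                                               # cell wears 1/32768 as fast

At 4 million cycles per cell times 32,768 cells, that's about 1.3 x 10^11 total writes before wear-out, which is where the 250,000-year and 4,100-year figures come from.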

The history of journalism is written by journalists, often in a self-serving way. You'll be hard-pressed to pinpoint the purported golden age of impartial truth-seeking. Early newspapers in the US were often owned by a local railroad tycoon and published hit pieces about his opponents. From the 1960s, this morphed into a way to broadcast the ideological consensus of East Coast Ivy League graduates. Some of their ideas were good and some were bad, but every single day, this consensus influenced which stories made it to the front page and how they were framed.

Weirdly, I think this model was beneficial even in the presence of bias: when everyone read the same news, it helped with social cohesion and national identity, even if the stories themselves presented a particular viewpoint.

But now, everyone can get their own news with their own custom-tailored bias, so there's no special reason to sign up for the biases of The Washington Post or The New York Times unless you want to signal something to your ingroup. I don't think this is as much Bezos' fault as it is just a consequence of the internet evolving into what it is right now: one giant, gelatinous cube of engagement bait.


> The history of journalism is written by journalists, often in a self-serving way. You'll be hard-pressed to pinpoint the purported golden age of impartial truth-seeking.

We generally assume that there is an external reality that can be observed and understood. When someone 'consumes' journalism, how well does that reporting reflect the external reality? How well do people's perceptions match up with what physically happened?

For example: in November 2020 there was an election. Who got more votes, both in the popular vote and in the various states individually that count towards the Electoral College? Who "won" the election?

It turns out that some news organizations, even with their biases, allow their readers/viewers to form a better picture of reality than others:

* https://archive.is/https://www.businessinsider.com/study-wat...

* https://www.nbcnews.com/think/opinion/fox-news-study-compari...

* https://washingtonmonthly.com/2011/06/19/the-most-consistent...

* https://portal.fdu.edu/fdupoll-archive/knowless/final.pdf

* https://portal.fdu.edu/fdupoll-archive/confirmed/final.pdf

* https://www.fdu.edu/academics/centers-institutes/fdu-poll/


> the purported golden age of impartial truth-seeking.

It's constantly been with us since the beginning of the republic. Several of our founding fathers were actually publishers.

> this consensus

Consensus doesn't exist in a vacuum. It's a product of an interest in profiting off the news. It seems obvious from this vantage what the fundamental problem is and why "journalists" are not a homogeneous group with identical outputs and why terms like "main stream" even exist.

> it helped with social cohesion and national identity

Which is why the FBI and CIA target it for manipulation so relentlessly.


I'm so tired of these false equivalences

You brought up the most notorious part of US history (the Gilded Age / the era of yellow journalism) as if it were defining of journalism in general. You would be hard-pressed to pinpoint a time in which there was more bullshit in the media than then. Besides today, of course.

And then you somehow equate this to the 1960s. As if the fact that journalists tended to study at university and therefore share points of view with people who went to university is the same thing as William Randolph Hearst wholly inventing a story about Spain attacking a US ship to convince the public to start a war.

And what we have today, with social media & search monopolies sucking all economic surplus completely out of journalism, plus foreign-run and profit-run influence farms, plus algorithmic custom-tailoring of propaganda, is undoubtedly the worst we have ever seen.


I'd like to know whether there's any objective way to measure how truth-seeking journalism actually is. Otherwise it just turns into people declaring, purely subjectively, that one outlet is "biased" and another is "impartial" or "truth-seeking".

Ultimately, every editorial decision — what to publish, which story to highlight, what angle to frame it from — is a value judgment. And value judgments aren't matters of objective truth.



From my point of view, journalism is, or was, about calling attention to points of reference that we can all agree affect us in a similar way. The way you are framing this is more about agreeing with each other. IMHO, that's not what journalism is about.

In France, at a very young age, we're taught that journalism is not impartial: people must take sides to express interesting opinions. We simply need to read them all: L'Humanité to understand the communist point of view, Le Monde for the socialists, Le Figaro for the conservatives, La Croix for the Christians, etc.

Once you mix all these perspectives on the same events, you get, if not "the truth", a view of the impact of the events on each subgroup in the nation and what each proposes to do about it, and you put some water in your own wine whichever side you're on: when the time comes to vote on policies, having read everyone, you may consider their point of view a bit more.

Thinking "The Washington Post" was "impartial" and "about the truth" before is a pipe dream: they were partial, rational within the confines of their choice ideology, and disagreeing with many subgroups in your country anyway. They just shifted sides but you can find other newspapers now to counter balance.

As long as no newspaper pretends to be impartial and each is clearly identified, the national debate stays healthy, no?


No, man.

> when time comes to vote on policies, having read everyone, you may consider their point of view a bit more.

Trying to be impartial, trying to understand all the points of view, is a noble effort. It's impossible to do, but the process of trying is how you can achieve the best version of truth. Seems like I agree with you here.

And that's what the best newspapers do.

I need people to be making an honest effort to understand all the perspectives and distilling them down for me.

If nobody is doing that, then it makes my job (the job of understanding everyone's perspectives) a lot harder, because it's an exercise in multi-player adversarial thinking.


> But gcc is part of it's training data so of course it spit out an autocomplete of a working compiler /s

Why the sarcasm tag? It is almost certainly trained on several compiler codebases, plus probably dozens of small "toy" C compilers created as hobby / school projects.

It's an interesting benchmark not because the LLM did something novel, but because it evidently stayed focused and maintained consistency long enough for a project of this complexity.


They're not coming from anywhere. It's an LLM-written article, and given how non-specific it is, I imagine the prompt wasn't much more than "write an article about how OpenClaw is changing my life".

And the fact this post has 300+ comments, just like countless LLM-generated articles we get here pretty much daily... I guess proves the point in a way?


That’s another reason there just isn't any point in looking at these articles anymore unless they take you on a trip deep in the weeds of some specific problem or example. We need deep case studies (pro and con), not bulleted lists and talking points.

It's common for compilers to generate mildly unusual code because they translate high-level code into an abstract intermediate representation, run a variety of optimization passes on that representation, and then emit machine-specific code for whatever the optimizations yielded. There's no constraint along the lines of "but select the most logical opcode for this task".

The claim that the code is inefficient is really not substantiated well in this blog post. Sometimes, long-winded assembly actually runs faster because of pipelining, register aliasing, and other quirks. Other times, a "weird" way of zeroing a register may actually take up less space in memory: on x86, `xor eax, eax` encodes in two bytes versus five for `mov eax, 0`.


> The claim that the code is inefficient is really not substantiated well in this blog post.

I didn't run benchmarks, but in the case of clang writing zeros to memory (which are never used thereafter), there's no way that particular code is optimal.

For the gcc output, it seems unlikely that the three versions are all optimal, given the inconsistent strategies used. In particular, the code that sets the output value to 0 or 1 in the size = 3 version is highly unlikely to be optimal in my opinion. I'd be amazed if it is!

Your point that unintuitive code is sometimes actually optimal is well taken though :)


Stefan Kanthak has previously noted that GCC's code generator is quite horrible, in these extensive investigations:

https://skanthak.hier-im-netz.de/gcc.html


Gun frames can be made out of plastic or aluminum, and there are fixtures for benchtop CNC machines that can be used to make them. This is not nearly as complicated as you make it sound. I think Cody Wilson was basically selling a turnkey solution for that, maybe still is.

AFAIK they claim to still be selling general purpose CNC machines that aren't marketed as being for firearms... but only take the money and ghost customers without actually delivering anything.
