Hacker News — pilif's comments

On the other hand, by erroneously treating a SHOULD as a MUST, I would say that Google is the one who's not RFC-compliant


Google is rejecting it to ensure incoming messages aren't spam. SHOULD means "you should do this unless you have a really, really good reason not to." Do they have a good reason not to? It doesn't seem so, meaning Viva is in the wrong here.


No, SHOULD is defined in the RFC, not by colloquial usage. Google is in the wrong, regardless of their "safety" intent.

After all, linguistics is full of examples of words that are spelled the same but have different meanings in different cultures. I'm glad the RFC spelled it out for everyone.


The RFC says a SHOULD is to be treated like a MUST, but well-justified exceptions are allowed.


RFC speak requires you to think for a while about skipping a SHOULD. It doesn't require strong justification.

When producing a message, it SHOULD have the id. With or without it, the message is compliant.

On the other end, we may receive messages with or without. Both are valid. We MUST therefore accept both variations.

The second is a consequence of the first. So yes, Google is the violating party.


No it doesn't, lmao. It's quoted all over this thread and is clearly not in any way like a MUST


If Google's choices are protecting users, they can't be in the wrong. That's the reality of a shared communications infrastructure, regardless of what the docs say.

When the docs disagree with the reality of threat-actor behavior, reality has to win because reality can't be fooled.


Spam senders don’t have pseudorandom number generators?


They're more likely to put in the least amount of effort, or to care the least about how the header is used later on.


Did I miss the part of the RFC that says Google must accept every message? Pretty sure the RFC allows email providers to reject any message they feel like.


The RFC cannot force a mail server to accept spam. You may argue that requiring a Message-ID is a bad anti-spam policy, but it does reduce the amount of spam. In my observation, around half of messages without a Message-ID are spam. I personally would not use this as the only reason to reject a message, but I understand why someone may choose to do so.
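For illustration, that kind of filter (a hypothetical policy sketch, not Gmail's actual logic) boils down to a one-line header check with Python's stdlib email parser:

```python
from email import message_from_string

def missing_message_id(raw: str) -> bool:
    """Flag messages that arrive without a Message-ID header."""
    # Header lookup is case-insensitive, per the email package.
    return message_from_string(raw).get("Message-ID") is None

with_id = "Message-ID: <abc123@example.com>\r\nSubject: hi\r\n\r\nhello"
without_id = "Subject: hi\r\n\r\nhello"
print(missing_message_id(with_id))     # False
print(missing_message_id(without_id))  # True
```

Whether missing Message-ID alone should trigger a hard reject, rather than just contributing to a spam score, is exactly the policy question being debated here.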

The RFC says a SHOULD is to be treated like a MUST, but well-justified exceptions are allowed.


Per RFC2119: 3. SHOULD This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.

So, it's fairly explicit that the sender should use a Message-ID unless there's a good reason not to. The spec is quiet about the recipient's behavior (unless there's another spec that calls it out).


Not a specification, but "be liberal in what you accept" comes to mind (which I always personally hated, but I'm just one shoveler).


Postel's law was a precept of the Internet of the '80s and '90s, when, due to the primitive software engineering practices of the time, implementations couldn't be tested properly. That led to many cases of poor interoperability, and it's no longer a good idea: for example, when HTML5 was designed, they decided to put into the spec how to deal with frequent errors like mismatched closing tags, etc., because all major implementations were "liberal" in what they accepted, but each in a different way.

It’s open source and runs on all kinds of platforms. Original HL1 runs on old Windows and IIRC DOS. Nowhere else


Valve updates HL1 every few years so it runs on contemporary platforms. DOS was ancient history by the time HL came out, you might be getting it mixed up with Quake1


I wouldn’t post some random vulnerability report, but the disclosure timeline at the end was very interesting to me and not putting the vendor in a great light


Isn't this asking for the exact trouble musl wanted to spare you from by disabling dlopen()?


You should care because once your PC is part of a bot network, it’s part of the problem


it's running Microslop Windows, so it's born compromised

it's an OS with constant built-in ads and spyware

it would have been considered malware in the 2000s




I had a KMFMS shirt back in the 90's. Lost it in one move or another, alas.


Not being an incompetent or inexperienced Windows user, I'm vanishingly unlikely to be infected by a bot network trojan... and if that does happen, rest assured, I'll notice it.

Windows Update, on the other hand, is part of my threat model.


what's especially strange to me is that in the more distant past, he was a pretty normal guy - at least as normal as any other linux user. Heck, he had a super great podcast (Linux Action Show).

Something changed in the 2014ish time-frame when it got more and more politically extreme.


what do you think changed culturally around 2014 (I'd say it started a little earlier, maybe 2011)?


His views are the normal ones.


As much as I like to hate on a new OS like the next person, I think it's worth pointing out we're probably not seeing the full picture here:

When trying to reproduce the problem shown in the article by resizing the Safari window currently displaying it, the drag cursor changes shape at the visible border of the window, not the shadow, and consequently dragging works as expected.

https://youtu.be/kNovjjvYP8g

This might be an application- or driver-specific issue, not necessarily a common Tahoe issue.


I'm not sure "it works this way in Application A, and this other way in Application B" is a particularly strong rebuttal.


It wasn't meant as a rebuttal. Just as a point of thought: By showing that at least one application doesn't exhibit the problem, I thought I was showing that the problem might not be related to the Tahoe redesign at all but might have other causes.

It definitely serves to show that this is not a design issue but just a simple bug, and thus has at least some chance of being fixed.

FWIW, I cannot reproduce the issue demonstrated in the original article with any window of any application on my machine (M1 Mac Studio), but I thought that listing a very commonly used application alone would be enough to challenge the article's assertion ("the macOS designers are stupid because they make me do something that doesn't make sense in order to resize windows").


> It wasn't meant as a rebuttal.

“As much as I like to *” is a common way to start a rebuttal (the subsequent “I’m not going to see/do that” is implied by that turn of phrase).

> but I thought that listing a very commonly used application alone would be enough to challenge the article's assertion

So it was a rebuttal? Why the disingenuous doublethink?


This is absolutely true. The demo in the original article seems quite deceptive in that respect. Nobody would attempt to resize a window by launching their cursor at the corner with great speed as the demo shows. The resize pointer seems to show in exactly the right place, and allows for an extra hit area slightly outside the rounded corner — I don’t see any problem with that.

As for the fact that one cannot resize from inside the window, it makes absolute sense for every other corner of the window, where the user would instead be clicking an icon or some other button (try the top right corner of the finder, where the search button sits).

So, while I agree on the whole that Tahoe is a huge step backwards in terms of design, this seems like an odd gripe to point out, as it doesn’t in fact seem to be an issue at all.

Edit: clarification


> As for the fact that one cannot resize from inside the window,

if you check the screencast I posted, you'll see that you can indeed resize from inside the window. Not by a huge margin, but definitely from inside the actual window boundaries.


Indeed, just enough. And the correct resize pointer shows all along the rounded edge, so I agree, this doesn’t seem like the problem it’s made out to be.


> Nobody would attempt to resize a window by launching their cursor at the corner with great speed as the demo shows.

... great speed? Interpolating from the zoom, I would say it's not fast at all.


I’m referring to the demo in the original article. The mouse pointer moves rather rapidly onto the inside of the window. You can just about see the resize pointer flashing as the user does so. I don’t think I ever attempted to resize a window with such erratic mouse movements. Approaching the corner at reasonable speed shows the resize pointer where expected.


> I’m referring to the demo in the original article.

The article from noheger.at? I am also referring to it. My guess is that the pointer speed is exaggerated due to the zoom of the gif, and/or that we are using the mouse in different ways.


Yes, that demo. You can clearly see the resize pointer flashing briefly, but the user continues aiming right inside the window. I’m not sure why he’s not stopping when the resize pointer appears. It seems erratic.


Arguably the feedback via the cursor change is feedback to help you learn, like the icons that appear in the close / minimise / zoom, or stickers on the keys of a musical instrument. You pretty quickly learn which one is which, or you can't use them effectively. At some point you'd hope that common actions become muscle memory.

So if it was something that was learned whilst using the previous version, and worked, I'd argue it wasn't 'erratic'.


Judging by this comment https://news.ycombinator.com/item?id=46599464

It seems to be common.


400k would last me 13 years for a rack, power and 10Gbit/s bandwidth at my colo place (Switzerland, traditionally high prices)


Yes, but that's not their only expense.


Yes, but that’s not the last or only donation they’re receiving either.


Don't bet on receiving money in the future.


It's a community donation-supported project. That's kind of the whole deal.

Regardless, the ongoing interest on $400K alone would be enough to pay colo fees.


Since you've already done the math: will the interest on $400k pay for the colo costs?


At a (fairly modest) 3.3%, it's about $1,100/month.

I don't know what kind of rates are available to non-profits, but with $400k in hand you can find nicer rates than 3.3% (as of today, at least).

that covers quite a few colo possibilities.
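Spelling out the arithmetic (assuming the 3.3% annual yield used above, and ignoring compounding):

```python
principal = 400_000
annual_rate = 0.033  # the 3.3% assumed above
monthly_interest = principal * annual_rate / 12
print(round(monthly_interest))  # 1100
```

At around $1,100/month in interest alone, the principal would stay untouched while covering a typical single-rack colo bill.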


USD money market funds from Vanguard pay about 3.7% now. Personally, I would recommend a 50/50 split between a Bloomberg Agg bond ETF and a high-yield bond ETF. You can easily boost that yield by 100bps with a modest increase in risk.

Another thing overlooked in this debate: Data center costs normally increase at the rate of inflation. This is not included in most estimates. That said, I still agree with the broad sentiment here: 400K USD is plenty of money to run a colo server for 10+ years from the risk-free interest rate.


Stupid question from me: What are their other costs? I'm a total newbie about data center colo setups, but as I understand it, they include power and internet access with ingress and egress. Are you thinking their egress will be very high, and thus they'd need to pay additional bandwidth charges?


Becky was so good for participating in mailing lists. I could slip by as a Unix user even though I was still mostly using Windows as my client OS.


Ha!

I have a Becky backup on an Iomega Zip disk I have to check one day :D


One thing that’s not quite clear to me is how safe it is to generate v7 uuids on the client.

That’s one of the nice properties of v4 uuids: you can make up a primary key for a new entity directly on the client and the database can use it directly. Sure, there is a tiny collision risk, but it’s so small you can mostly get away with ignoring it.

With v7, however, such a large chunk of the uuid is based on the time, so I’m not sure whether it’s still safe to ignore collisions in any application, especially when you consider clients’ clocks to probably be very inaccurate.

Am I overthinking things here?


How many clients requests do you get in the same millisecond?

With UUIDv7 it's split into:

- 48 bits: Unix timestamp in milliseconds

- 12 bits: Sub-millisecond timestamp fraction for additional ordering

- 62 bits: Random data for uniqueness

- 6 bits: Version and variant identifiers

So >4,600,000,000,000,000,000 IDs per fraction of a millisecond.

And imprecise time on the client doesn't matter, because some clients are ahead and some behind, but that doesn't make them more likely to clash.
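The layout above is easy to sketch by hand. Here's a minimal, hand-rolled illustration in Python (the stdlib only gained a built-in uuid7 in very recent versions, so this assembles the 128-bit value manually; the 12-bit sub-millisecond fraction is filled with plain random bits here, which the spec also permits):

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Sketch of a UUIDv7: 48-bit ms timestamp, version/variant, random bits."""
    value = (time.time_ns() // 1_000_000) << 80                  # 48-bit Unix ms timestamp
    value |= 0x7 << 76                                           # 4-bit version field = 7
    value |= (int.from_bytes(os.urandom(2), "big") >> 4) << 64   # 12 random bits (or sub-ms fraction)
    value |= 0b10 << 62                                          # 2-bit variant field
    value |= int.from_bytes(os.urandom(8), "big") >> 2           # 62 random bits
    return uuid.UUID(int=value)

u = uuid7()
print(u.version)  # 7
```

Because the top 48 bits are the millisecond timestamp, IDs generated in different milliseconds sort chronologically; within the same millisecond, ordering falls back to the random bits.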


Does that factor in the birthday paradox?
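For what it's worth, the birthday bound is easy to estimate. With 62 random bits, the approximate collision probability for n IDs drawn in the same millisecond (a hypothetical worst-case workload) is:

```python
def collision_prob(n: int, bits: int = 62) -> float:
    """Birthday-bound approximation: probability of any collision among
    n uniform draws from 2**bits values (valid while the result is small)."""
    return n * (n - 1) / (2.0 * 2 ** bits)

# Even a million IDs generated within a single millisecond:
print(collision_prob(1_000_000))  # ~1.1e-07
```

So yes, the birthday paradox applies, but the quadratic blow-up only bites within one millisecond bucket; at realistic per-millisecond rates the risk stays negligible.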


If the client can generate a uuid4 they can also reuse a known uuid4

