Hacker News | leeter's comments

When MS removed Solitaire and made it an app, that should have been the sign to move.

When they introduced a mobile first UI onto a desktop OS...

When they forced mandatory Microsoft accounts...

When they started saving files that had no business being in OneDrive to the cloud by default and charging people for it...

When they announced the worst AI privacy disaster in computing OS history...

When their updates refused to install cleanly and bricked people's computers to the point of hardware damage...

Seriously thinking I might have Stockholm syndrome at this point. To me the best Windows would be Windows 11's kernel and libraries with Windows 7's UI and apps. Because it's been all downhill (generally) since then.


It's not Stockholm syndrome for a lot of people. Microsoft is so firmly entrenched in so much of the corporate world that you can't get away from them. My mom was in the market for a new laptop recently, and I so badly wanted to set her up with a MacBook Air, but it's not an option because the Sage accounting software she uses for my dad's business is Windows-only. Furthermore, the business itself (a small pawn shop) is forced to use some specific software to manage inventory (I believe it allows police to access the database to track serial numbers when tracing stolen goods or something). That software is a webapp built on antiquated, decades-old technology that only runs in Microsoft Edge's IE-compatibility mode (which has become a more and more difficult incantation to enable over the years), and I believe that mode is only available in the Windows version of Edge.

For me it's currently the minimal-hassle way to make my Steam library runnable. But it feels like we're moving in a good direction thanks to Valve's efforts where one day I may be able to never boot into Windows on my PC.


I've switched to Linux Ubuntu KDE desktop, play my games on Steam Proton, and I'm happy.

> When they introduced a mobile first UI onto a desktop OS…

That's when I jumped to Macs and haven't looked back since. Windows is just a glorified game console to me now, but I have enough fun with PS5/Switch exclusives.

Though macOS is also becoming annoying. Not quite to that breaking point yet, but worrying.

Meanwhile Linuxland seems like a chaos of 10000 people who all think they're right, under an anal overlord

Maybe it's time to dig the Commodore 64 back up? :')

But who cares though, soon AI will make operating systems meaningless, right?


I still have a Win7 VM I fire up sometimes for nostalgia's sake. Beautiful and snappy. Bittersweet.

I still use Windows 7 regularly. And guess what, the virus and bot apocalypse didn't happen when support stopped.

That you know of.

> To me the best windows would be Windows 11's kernel and libraries with Windows 7's UI and apps.

Does anyone know how to achieve that? What happens when you replace the kernel in a Windows 7 installation with the one from Windows 11? What is the manual update procedure for kernels on MS Windows?


Based on the info if you click into them, likely no. I would have expected them to be incidental materials from tunneling, but reading the description that's not the case.


[removed]


Part of the reason, I think, is that Qualcomm and Apple cut their teeth on mobile devices, and yeah wider SIMD is not at all a concern there. It's also possible they haven't even licensed SVE from Arm Holdings and don't really want to spend the money on it.

In Apple's case, they have both the GPU and the NPU to fall back on, and a more closed/controlled ecosystem that breaks backwards compatibility every few years anyway. But Qualcomm is not so lucky; Windows is far more open and far more backwards compatible. I think the bet is that there are enough users who don't need/care about that, but I would question why they would even want Windows in the first place, when macOS, ChromeOS, or even GNU/Linux are available.


A ton of vector math applications these days are high dimensional vector spaces. A good example of that for arm would I guess be something like fingerprint or face id.

Also, it doesn't just speed up vector math. Compilers these days with knowledge of these extensions can auto-vectorize your code, so it has the potential to speed up every for-loop you write.


> A good example of that for arm would I guess be something like fingerprint or face id.

So operations that are not performance critical and are needed once or twice every hour? Are you sure you don't want to include a dedicated cluster of RTX 6090 Ti GPUs to speed them up?


I'd argue that those are actually very performance critical because if it takes 5 seconds to unlock your phone, you're going to get a new phone.

The point is taken, though, that seemingly the performance is fine as it is for these applications. My point was only that you don't need to be running state of the art LLMs to be using vector math with more than 4 dimensions.


Those are extremely performance critical operations. A lot of people use their phone many times an hour.


I believe you're thinking of the x86 Hotpatching hook[1], which doesn't exist on x86-64[2] (in the same form, it uses a x86-64 safe one).

[1] https://devblogs.microsoft.com/oldnewthing/20110921-00/?p=95...

[2] https://devblogs.microsoft.com/oldnewthing/20221109-00/?p=10...


yes, that's it. Thanks for clarifying


Almost assuredly, given that 10.0 was released on 32bit PPC... and was built around Carbon, not Cocoa... yeah it's changed just a wee bit.


I remember failing an interview with the optimization team of a large fruit-trademarked computer maker because I couldn't explain why the x87 stack was a bad design. TBF they were looking for someone with a master's, not someone just graduating with a BS. But now I know... honestly, I'm still not 100% sure what they were looking for in an answer. I assume something about register renaming, memory, and cycle efficiency.


Having given a zillion interviews, I expect that they weren't looking for the One True Answer, but were interested in seeing if you discussed plausible reasons in an informed way, as well as seeing what areas you focused on (e.g., do you discuss compiler issues or architecture issues). Saying "I dunno" is bad, especially after hints like "what about ..." and spouting complete nonsense is also bad.

(I'm just commenting on interviews in general, and this is in no way a criticism of your response.)


I think I said something about the stack efficiency. I was a kid who barely understood out-of-order execution; register renaming and the rest were well beyond me. It was also a long time ago, so recollections are fuzzy. But what I do recall is that they didn't prompt anything. I suspect the only reason I got the interview is that I had done some SSE programming (AVX didn't exist yet, and to give timing context, AltiVec was discussed), and they figured if I was curious enough to do that I might not be garbage.

Edit: Jogging my memory I believe they were explicit at the end of the interview they were looking for a Masters candidate. They did say I was on a good path IIRC. It wasn't a bad interview, but I was very clearly not what they were looking for.


I believe the Academy Awards and a few other things also influence this. The rules for eligibility still very much favor legacy studios IIRC. But with this, that may change? Hard to say. I know that quite a few Netflix movies have had theatrical runs at random mom-and-pop theaters in Cali so they could meet eligibility requirements for the various awards.


A current example (although not Netflix) is The Secret Agent with an award qualification run in NYC and LA before wider release.


Honestly? I expected this to be talking about the MiSTer project FPGA core[1]. That has been tuned so it's capable of running the AREA5150 demo[2] which is an insane challenge (AFAIK the timings of the v20 break that demo). Not saying this isn't cool, but it's definitely not what I was expecting.

[1] https://github.com/MiSTer-devel/PCXT_MiSTer

[2] https://www.youtube.com/watch?v=tOmcgp99fEk


I've said for years that any smart thermostat should have a bimetallic backup that controls maximum ranges and acts in the dumbest way possible. Just max temp and min temp for AC and heat. Nothing that should ever be hit... but there nonetheless.


You could just put a backup dumb thermostat in parallel with the smart one.


I'm reminded of Raymond Chen's many, many blog posts[1][2][3] (there are a lot more) on why TerminateThread is a bad idea. Not surprised at all the same is true elsewhere. I will say this is why, in my own code, I tend to prefer cancellable system calls that are alertable. That way the thread can wake up, check if it needs to die, and then GTFO.

[1] https://devblogs.microsoft.com/oldnewthing/20150814-00/?p=91...

[2] https://devblogs.microsoft.com/oldnewthing/20191101-00/?p=10...

[3] https://devblogs.microsoft.com/oldnewthing/20140808-00/?p=29...

there are a lot more, I'm not linking them all here.


One of my more annoying gotchas on Windows is that, despite this advice being very reasonable sounding, the runtime itself (I believe it actually happens in the kernel) essentially calls TerminateThread on all child threads before running global destructors and atexit hooks. Good luck following this advice when the kernel actively fights you when it comes time to shut down.


So there is a reason that in the C++ spec, if a std::thread is still joinable when its destructor is called, std::terminate is invoked[1]. That reason is exactly this case. If the house is being torn down, it's not safe to try to save the curtains[2]. Just let the house get torn down as quickly as possible. If you want to save the curtains (e.g. do things on the threads before they exit), you need to do it before the end of main, before global destructors start getting called.

[1] https://en.cppreference.com/w/cpp/thread/thread/~thread.html

[2] https://devblogs.microsoft.com/oldnewthing/20120105-00/?p=86...


Global destructors and atexit are called by the C/C++ runtime, Windows has nothing to do with that. The C and C++ specs require that returning from main() has the same effect of ending the process as exit() does, meaning they can’t allow any still-running threads to continue running. Given these constraints, would you prefer the threads to keep running until after global destructors and atexit have run? That would be at least as likely to wreak havoc. No, in C/C++, you need to make sure that other threads are not running anymore before returning from main().


When you return from main(), there shouldn't be any child threads running in the first place. Join your threads and you will be fine.

