I think the problem is exactly the opposite. The federal government has the total combined power and scale that it does because we are a massive and complex modern nation. That's inevitable. The problem we are seeing is that the reins of that power, it turns out, can be held by too few people. The checks and balances have ceased to exist. No one is held accountable, and people are allowed to be above the law.
The power and scale of a government doesn't have to be correlated with the scale of its society. Nations themselves aren't even a necessity.
I get that this is what we have today and all we've had in recent history, but we are ignoring a huge number of possibilities by assuming that being human means always inventing new things, using more resources, creating more weapons, and needing larger and larger governments because someone has to be in charge.
> The federal government has the total combined power and scale that it does because we are a massive and complex modern nation. That's inevitable.
Perhaps massive and complex (I'd say complicated) nation-states inevitably create industrial complexes, but it's certainly not inevitable that nation-states grow so large (or even exist) in 2026.
The idea that we still need sovereign-esque entities spanning entire continents, when we can now communicate and coordinate instantly across them and use cameras to document the truth around us at all times, is just downright silly.
We can reduce states to the size that you can walk across in a day or two, and everybody will be much happier and healthier.
They also bought and killed Texture, a fantastic cross-platform magazine subscription service, to somehow further Apple News. I subscribed to Texture on Android. I wouldn't give a dollar to Apple News even if I was in the Apple ecosystem.
Yes! You are best served by learning what a tool is doing for you by doing it yourself or carefully studying what it uses and obfuscates from you before using the tool. You don't need to construct an entire functioning processor in an HDL, but understanding the basics of digital logic and computer architecture matters if you're EE/CompE. You don't have to write an OS in asm, but understanding assembly and how it gets translated into binary and understanding the basics of resource management, IPC, file systems, etc. is essential if you will ever work in something lower level. If you're a CS major, algorithms and data structures are essential. If you're just learning front end development on your own or in a boot camp, you need to learn HTML and the DOM, events, how CSS works, and some of the core concepts of JS, not just React. You'll be better for it when the tools fail you or a new tool comes along.
Because that's the entire point of college. It's supposed to teach you the fundamentals - how to think, how to problem solve, how to form mental models and adapt them, how things you use actually work. Knowing how different sorting functions work and what the tradeoffs are allows you to pick the best sorting function for your data and hardware. If the tools you have aren't doing the job, you can mend them or build new tools.
I don't think most software houses pay enough attention to engineering time: CI pipelines that take tens of minutes to over an hour, compile times that exceed ten seconds when nothing has changed, startup times that are much more than a few seconds. Focus and fast iteration are super important to writing software, and it seems like a lot of orgs just kinda shrug when these long waits creep into the development process.
That's the thing about investing in scientific research, especially toward the basic-science end of the spectrum - the real benefit is seen years down the line, after technology transfer to public-private partnerships and private industry. It can take years to decades to see the long-term benefit, which is why it needs government backing. It's not sustainable for most players in the private sector to invest in research that is high risk (with respect to applicability), long term, or both. This also makes it easy to cast doubt on the value of research being done now or recently - we don't have a ton of concrete results to show for it yet. The best numbers to look at would probably be emigration / immigration of PhDs, papers published in top-tier journals and the universities associated with them, and where conferences are being held.
This is interesting, but how do you bootstrap it? How does this little software enclave get key material in without it transiting untrusted memory? From a file? I guess the attacker this guards against can read parts of memory remotely but doesn't have RCE. Seems like a better approach would be an explicitly separate allocator and message-passing boundaries. Maybe a new way to launch an isolated goroutine with limited, copying channels.
What's the reason for moving from ASCII CHAR to UTF16 WCHAR rather than UTF8 CHAR? I wouldn't think any parts of the codebase that don't need to render the string or worry about character counts would need to be modified.
Edit: https://devblogs.microsoft.com/oldnewthing/20190830-00/?p=10... seems the justification was that UTF-8 didn't exist yet? Not totally accurate, but it wasn't fully standardized. Also that other article seems to imply Windows 95 used UTF16 (or UCS2, but either way 16-bit chars) so I'm confused about porting code being a problem. Was it that the APIs in 95 were still kind of a halfway point?
Windows NT started supporting Unicode before UTF-8 was invented, back when Unicode was fundamentally 16-bit.
As a result, in the Microsoft world, WCHAR meant "supports Unicode" and CHAR meant "doesn't support Unicode yet".
By the way, UTF-16 also didn't exist yet: Windows started with UCS-2. Though I think the name "UCS-2" also didn't exist yet -- AFAIK that name was only introduced in Unicode 2.0 together with UCS-4/UTF-32 and UTF-16 -- in Unicode 1.0, the 16-bit encoding was just called "Unicode" as there were no other encodings of unicode.
> Windows NT started supporting unicode before UTF-8 was invented
That's not true; UTF-8 predates Windows NT. It's just that the jump from ASCII to UCS-2 (not even real UTF-16) was much easier and more natural, and at the time a lot of people really thought it would be enough. Java made the same mistake around the same time. I was still having this very discussion with older die-hard Windows developers as late as 2015; for a lot of them, 2 bytes per symbol was still all you could possibly need.
Windows NT started development in 1988 and the public beta was released in July 1992 which happened before Ken Thompson devised UTF-8 on a napkin in September 1992. Rob Pike gave a UTF-8 presentation at USENIX January 1993.
Windows NT's general release was July 1993, so it's not realistic to replace all the UCS-2 code with UTF-8 after January 1993 and have it ready in less than 6 months. Even Linux didn't have UTF-8 support in July 1993.
UTF-8 was invented in 1992 and was first published in 1993. Windows NT 3.1 had its first public demo in 1991, was scheduled for release in 1992 and was released in 1993.
Technically UTF-8 was invented before the first Windows NT release, but they would have had to rework a nearly finished and already delayed OS.
Also keep in mind that ISO’s official answer was UTF-1 not UTF-8, and UTF-8 wasn’t formally accepted as part of the Unicode and ISO standards until 1996. And early versions of UTF-8 still allowed the full 31 bit range of the original ISO 10646 repertoire, before it was limited to the 21 bit range of UTF-16. Also, a lot of early UTF-8 implementations were actually what we now call CESU-8, or had various other infelicities (such as accepting overlong encodings, nowadays commonly disabled as a security risk). So even in 1993, I’m not sure it was yet clear that UTF-8 was going to win.
Oh god, this again. One word: "History". No one thought we would need more than 16 bits (65k chars) to represent all the world's written languages. Then it happened. There must be no fewer than a thousand individually authored blog posts and technical articles on this matter. Win32, Java, and Qt all suffer from the same UTF-16 internal representation. There has been endless discussion over the last 10 years about how to change these frameworks to use a UTF-8 internal representation. It is a crazy hard problem.
The tragic part is how brief the period of time was between “ascii and a mess of code pages” and the problem actually getting solved with Unicode 2.0 and UTF-8.
Unicode 1.0 was in 1991, UTF-8 happened a year later, and Unicode 2.0 (where more than 65,536 characters became “official”, and UTF-8 was the recommended choice) was in 1996.
That means if you were green-fielding a new bit of tech in 1991, you likely decided 16 bits per character was the correct approach. But in 1992 it started to become clear that a variable-width encoding (with 8 bits as the base character size) was on the horizon. And by 1996 it was clear that fixed-width 16-bit characters were a mistake.
But that 5-year window was an extremely critical time in computing history: Windows NT was invented, and so were Java, JavaScript, and a bunch of other things. By then it was too late: huge swaths of what would become today's technical landscape had already set the problem in stone.
UNIXes only ended up with the "right" technical choice because it was already too hard to move from ASCII to 16-bit characters… but that laziness in moving off of ASCII ultimately paid off as it became clear that 16 bits per character was the wrong choice in the first place. Otherwise UNIX would have met the same fate.
Let's say the only devices that will run YouTube are running i/pad/visionOS or Android, that those will only run on controlled hardware, and that the hardware will only run signed code. Now let's say the only way to get the YouTube client is through the controlled app stores on those platforms. You can build a chain of trust, tied to something like a TPM in the device at one end and signing keys held by Apple or Google at the other, that makes it very difficult to get access to the client implementation and the key material and run something like the client in an environment that would allow it to provide convincing evidence that it is a trusted client. As long as you have the hardware and software in your hands it's probably not impossible, but it can be made just a few steps shy.