
Who else is excited that we might revive the 80s Cambrian explosion of different systems and architectures? Back then there were so many options.


Hmmm... the chances for that are pretty slim, I'm afraid. "Apple Silicon" is not a new system, it's just one of the large incumbents switching to another architecture (which is also not a first, this now being their fourth architecture, after 680x0, PowerPC and x86). In the desktop/notebook market, Wintel and Apple are firmly entrenched, with only ChromeOS and Linux challenging them - plus a few less significant OSes (FreeBSD, ReactOS anyone?). For mobile devices, we had a bit of a "Cambrian explosion", unfortunately followed by a very quick extinction, which left us with another duopoly. Here, too, there are free alternatives, which however have very marginal market share.

As for actual CPU architectures, there are only two that really matter at the moment: x86/AMD64 and ARM. It's of course very cool that ARM has proved itself flexible enough to be used in everything from (almost) the smallest embedded devices to supercomputers (not to mention the Apple M1), but there's not as much diversity as there was in the 80s either...


Not only is it an incumbent switching to another architecture; it's an incumbent switching to another incumbent architecture. ARM is older than PowerPC and almost as old as the Macintosh itself; it came out in 1985.


The 64-bit AArch64 ISA has very little in common with the original ISA from '85.


It makes about as much sense as calling humans "lactating fish"


Since the category "fish" isn't a clade - it's possible to evolve into no longer being a fish - it's more comparable to a specific generation of ARM chips, like perhaps ARM32, than it is to the ARM line in general. It would be weird to say "64-bit ARMv5" in the same way that it would be weird to say "lactating fish". But it is not weird to say "64-bit ARM", for the same reason it isn't weird to say "lactating euteleostome."


You guys are going to have to eat these words when fishermen off the coast of Madagascar pull up an example of Pisces lactatus.


Is that the scientific designation of mermaid fantasies of lonely fishermen?


I appreciate that you used my bad joke as a fact-checking opportunity to spread some scientific knowledge :)


We take our analogies very seriously here, no jokes sir.


While this is a better analogy and worth reading, "fish" is funnier.


I gather that it's true that ARM hasn't been as good about backwards compatibility as some of its competitors, but was ARMv8 really so much of a jump from ARMv7 that one can't count it as part of the same line of processors anymore?


They weren't horrible either: AArch64 is incompatible with AArch32, but you can still implement both on the same chip with shared internals.

AMD didn't have to extend x86 the way they did, but without buy-in from Intel there was no way forward unless they went the route they did: unless both had agreed to shift to UEFI at the same time and agreed on an ISA, it wasn't going to happen. This is why even a modern x86-64 processor has to boot up in real mode... because there was no guarantee that the x64 extensions were going to take off, AMD had to maintain that strict compatibility to be competitive.

AArch64 had no such constraint, because there is no universal boot protocol for ARM: insofar as the UEFI firmware or boot loader sets the CPU in a state the OS can use, it's fine. The fact that there is one IP holder helped as well.
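
For a sense of what one such per-OS contract looks like, here is a rough sketch of the header the Linux arm64 boot protocol expects at the start of a kernel Image (field names follow the kernel's booting document; illustrative, not authoritative). The loader just checks the magic, honors text_offset/image_size, puts the core in a defined state and branches to it - UEFI, U-Boot, or a vendor's own loader can all satisfy that equally well, with nothing like the x86 reset-into-real-mode ritual baked into the ISA:

    #include <stdint.h>

    /* Sketch of the Linux arm64 Image header (see Documentation/arm64/booting
       in the kernel tree). */
    struct arm64_image_header {
        uint32_t code0;        /* executable code (branch to the entry point) */
        uint32_t code1;        /* executable code */
        uint64_t text_offset;  /* image load offset from a 2 MiB aligned base */
        uint64_t image_size;   /* effective image size, little endian */
        uint64_t flags;        /* kernel flags: endianness, page size, ... */
        uint64_t res2, res3, res4;
        uint32_t magic;        /* 0x644d5241, i.e. "ARM\x64" */
        uint32_t res5;         /* reserved (used for the PE/COFF offset) */
    };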

That said, could AMD make an x86-64 processor without real mode or compatibility mode support? Yes, they could. In fact I would hope that the processors they ship to console manufacturers fit that bill. There is a lot they could strip out if they only intend to support x86-64.


The v7->v8 jump was the biggest one in the history of ARM; it's a total redesign, and they only kept the name.


Well, that and the ability to run v7 code if you switch it into aarch32 mode.

So one could be forgiven, I should think, for thinking the shift was more comparable to x86 -> amd64 than it was to x86 -> ia64.


Short answer is yes. Just one significant example: all instructions are 32 bits long, and there is no Thumb.

If you read Patterson and Hennessy (Arm edition), there is, I think, a slightly wistful throwaway comment that AArch64 has more in common with their vision of MIPS than with the original Arm approach.

Elsewhere you've commented that it's more similar to x86 -> x64 than x86 -> Itanium - which may be true, but Itanium was a huge change. However, AArch64 is philosophically different from 32-bit Arm, so it's not really like x86 -> x64 at all, which was basically about extending a 32-bit architecture to be 64-bit.


There's a sort of category problem underlying what you're saying though, perhaps fueled by the fact that ARM has more of a mix-and-match thing going on than Intel chips do.

aarch64 isn't really an equivalent category to x64, because it describes only one portion of the whole ARMv8 spec. ARMv8 still includes the 32-bit instructions and the Thumb. I realize you did mention Thumb, but you incorrectly indicated that it doesn't appear at all in ARMv8. As a counterexample, Apple's first 64-bit chip, the A7, supports all three instruction sets. This was how the iPhone 5S, which had an ARMv8 CPU, was able to natively run software that had been compiled for the ARMv7-based iPhone 5.
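
To make the "three instruction sets" point concrete, a minimal sketch: GCC and Clang predefine macros that tell you which one a file is being compiled for, and an ARMv8 chip like the A7 can run objects built from any of them:

    #include <stdio.h>

    /* Compiler-defined macros (GCC/Clang) for the three ARMv8-A instruction sets. */
    #if defined(__aarch64__)
    #  define ISA "A64 (aarch64)"
    #elif defined(__arm__) && defined(__thumb__)
    #  define ISA "T32 (Thumb)"
    #elif defined(__arm__)
    #  define ISA "A32 (classic 32-bit ARM)"
    #else
    #  define ISA "not an ARM target"
    #endif

    int main(void) { puts(ISA); return 0; }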

A better analogue to aarch64 would be just the long mode portion of x64. The tricky thing is that ARM chips are allowed to drop support for the 32-bit portions of the ISA, as Apple did a few years later with the A11. Like leeter said in the sibling post, though, x64 chip manufacturers don't necessarily have the option to drop support for legacy mode or real mode.

I think that's a fairly important distinction to make for the purposes of this discussion. I wasn't ever really talking about just aarch64; I was talking about all of ARM.


> Not only is it an incumbent switching to another architecture; it's an incumbent switching to another incumbent architecture. ARM is older than PowerPC and almost as old as the Macintosh itself; it came out in 1985.

> I gather that it's true that ARM hasn't been as good about backwards compatibility as some of its competitors, but was ARMv8 really so much of a jump from ARMv7 that one can't count it as part of the same line of processors anymore?

> I wasn't ever really talking about just aarch64; I was talking about all of ARM.

M1 is AArch64 only. You incorrectly brought ARMv8 into the discussion. AArch32 is irrelevant in the context of the M1.

It's fair to highlight the worse backwards compatibility, but then you can't bring back AArch32, which Apple dropped years ago, to try to claim that the M1 somehow uses an old architecture.


> AArch32 is irrelevant in the context of the M1.

Is it? It's not like Apple moving MacBooks to M1 happened in a vacuum. M1 is only the latest in a whole series of Apple ARM chips, about half of which were non-aarch64.

That context actually seems extremely relevant to me; it demonstrates that Apple is not just jumping wholesale to a brand new architecture. They migrated the way large companies usually do: slowly, incrementally, testing the waters as they go. And aarch64 was absolutely not involved in the formative stages (which are arguably the most important bits) of that process. It hadn't even come into existence yet when Apple released their first product based on Apple Silicon. Heck, you can make a case that the process's roots go way back before Apple Silicon, all the way back to ~1990, when Apple co-founded ARM Ltd. for what became the Newton.

Note, too, that the person I was originally replying to didn't say "M1", they said "Apple Silicon." In the interest of leaving the goalpost in one place, I followed that precedent.


Your point now seems to be that M1 is the latest in a line of processors with ISAs designed by Arm limited. I'll agree with that and leave it there.


It is a jump. There is plenty to dislike about ARMv7.


I'd regard it as quite impressive that no one seemed to notice that Arm switched to a more modern 64-bit architecture (AArch64) that has very little in common with its predecessors.


We're getting the open-source RISC-V, which seems more promising long-term than ARM.


We'll see. The ARM architecture is now about 36 years old. I believe RISC-V originated about 10 years ago. I think MIPS started about 40 years ago, but I believe it has finally ground to a stop.


The way I see it is that x86 is still around despite ARM, so ARM will still be around despite RISC-V. No reason why all three can't exist.


Not sure why you'd say that - especially if you look at Arm v9 and the fact that the architecture is starting to make inroads into the server market.

RISC-V is open source, which is great in some respects but also not helpful in others.


You assume RISC-V will be open source when it reaches consumers' hands.


Just the design. Firmware and software probably won't be, except for FOSS-purist products.


The original Apple I / II before the Macintosh used the MOS 6502 processor.


6502 is arguably a "proto-ARM", so one could say Apple has come full circle.


It's arguably a proto-RISC architecture (e.g. ADD has to be coded explicitly from CLC and one or more ADCs, the "register file" is memory locations 00-FF, etc.), but it has little to do with ARM.
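
A rough C model of the first point (illustrative, not cycle-accurate): the 6502's only addition instruction is ADC, add-with-carry, so a plain add is spelled CLC (clear carry) followed by ADC:

    #include <stdint.h>
    #include <stdbool.h>

    /* ADC: always folds the carry flag into the sum. */
    static uint8_t adc(uint8_t a, uint8_t m, bool *carry) {
        unsigned sum = (unsigned)a + m + (*carry ? 1u : 0u);
        *carry = sum > 0xFF;          /* carry out, chains multi-byte adds */
        return (uint8_t)sum;
    }

    /* A plain 8-bit add is the idiom CLC; ADC. */
    static uint8_t add(uint8_t a, uint8_t m) {
        bool carry = false;           /* CLC */
        return adc(a, m, &carry);     /* ADC */
    }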


My understanding was that much of the design of ARM was literally based on 6502: https://en.wikipedia.org/wiki/ARM_architecture#Design_concep...

Edit: Granted, Sophie Wilson, one of the designers of ARM, is on record stating that the 6502 didn't inspire anything in particular, besides being one of the few inputs to her pool of ideas (the 16032 and Berkeley RISC being the others): https://people.cs.clemson.edu/~mark/admired_designs.html#wil... So... arguably :)


> My understanding was that much of the design of ARM was literally based on 6502

Huh, hadn't seen that previously. I'd still call that an influence on ARM rather than a proto-ARM, but fair enough.


What do you mean “proto ARM”? I thought the 6502 was based on the MC6800 (the predecessor to the 68k)?


https://en.wikipedia.org/wiki/ARM_architecture#Design_concep...

Parts of ARM were modeled after 6502, it having been the processor used in the company's first successful microcomputer.


No, the 6502 was Chuck Peddle's baby - not related.

It had quite a nice, simple instruction set.


PowerPC/IBM is still a big player in the server/HPC market. They do many cool things with their architectures, since cost is less of a factor (dynamic SMP, switchable endianness, OMI), but they suck to build code for from an out-of-the-box-experience standpoint.


That's POWER, a different ISA; there are still some PowerPC embedded CPUs, but it's just slowly dying.


True, I was just going off the Debian package architecture naming, PPC/ppc64el.


Power and PowerPC are essentially the same ISA; I don't know what that post has a problem with.


Apple's ARM chips have a partially different instruction set from other ARM devices, so it's not just ARM - something to consider.


This is the first I have heard of Apple doing this, and I feel like, in my position, I would have heard of this... I have just spent some time searching around myself trying to find any such reference, and the closest I could find was the opposite: an article from Electrical Engineering Journal that said that Apple could have, but stated they didn't need to and pretty strongly implied they didn't, even going so far as to claim that they couldn't in any drastic way due to restrictions that ARM places on licensees, "even Apple".

https://www.eejournal.com/article/whats-inside-apple-silicon...

Can you provide some more information on this? I would love to be able to hit them on this, as this would actually be really upsetting to a lot of people I know who work on toolchains.


https://blog.adafruit.com/2021/01/15/the-secret-apple-m1-amx...

The rumor I've heard is that Apple is keeping their custom extensions to the ISA undocumented in deference to ARM's desire not to have the instruction set just completely fragment into a bunch of mutually incompatible company-specific dialects.

It's worth noting that the article you link predates the public release of the M1 by a good 10 months. Given how secretive Apple tends to be about these sorts of things, one can only assume that it was based almost entirely on rumor and conjecture.


Undocumented or not, they would be hard to hide: I would think you could scan through MacOS binaries and find them, if they exist. (I guess it's still possible they exist even if you don't find them, maybe unused or only produced by JITs, but that doesn't sound very useful.)


Yup. If you follow the links from that article, you'll get to the site of the person who found and documented them. It doesn't look like it took too much effort.

But it's not really about trying to prevent anyone from discovering that these opcodes exist. It's about trying to discourage their widespread use. If it's undocumented, then they don't have to support it, and anyone who's expecting support knows to steer clear. That gives them more freedom to change the behavior of this coprocessor in future iterations of the chip. And people can still get at them, because Apple uses them in system libraries such as the OS X implementation of BLAS.
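
For example, the sanctioned way to benefit from the coprocessor is to go through those libraries rather than emit the opcodes yourself. A minimal sketch (assuming macOS and the Accelerate framework) that multiplies two matrices through the bundled BLAS, which reportedly ends up on AMX for eligible sizes on M1:

    /* Build with: clang matmul.c -framework Accelerate */
    #include <Accelerate/Accelerate.h>
    #include <stdio.h>

    int main(void) {
        double a[4] = {1, 2, 3, 4};   /* 2x2, row-major */
        double b[4] = {5, 6, 7, 8};
        double c[4] = {0};

        /* C = 1.0 * A * B + 0.0 * C */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2, 1.0, a, 2, b, 2, 0.0, c, 2);

        printf("%g %g\n%g %g\n", c[0], c[1], c[2], c[3]);
        return 0;
    }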


Every ARM licensee does this though; they license the core designs from ARM and add features (including additional instructions) around them to package into an SoC. It's just that Apple has the scale to design their own SoCs instead of buying one from Qualcomm or Samsung.


Most licensees do not, in fact, add their own instructions.


Which "most"? There is most as in number of cores shipped, and most as in number of organizations who have a license.

On the second, I have no doubt you are correct - I know of several organizations that have licensed ARM just to ensure they have a long-term plan to get more chips without the CPU going obsolete again (one company has spent billions porting software that was working perfectly on a 16-bit CPU that went obsolete - there was plenty of CPU for any foreseeable feature, but no ability to get more). These want something standard - they are kind of hoping that they can combine a production run with someone else in 10 years when they need more supply and thus save money on setup fees.

The first is a lot harder. The big players ship a lot of CPUs, and they have the volumes to make some customization for their use case worth it. However, I don't know how to get real numbers.


Back then code was usually closely tied to the hardware, with very little abstraction. Nowadays, even if you write in a low-level language, it's not difficult to target a wide array of devices if you go through standard interfaces.

Proprietary software is probably the main reason we haven't had a whole lot of diversity in ISAs over the past couple of decades (see: Itanium). It's no coincidence that ARM's mainstream explosion is tied to Linux (be it GNU/ or Android/).
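
As a small illustration of "going through standard interfaces", the same source below builds unchanged for x86-64 or AArch64; only the toolchain invocation changes (the cross-compile command is just an example and assumes a cross toolchain is installed):

    /* hello.c: sticks to standard C, so it doesn't care which ISA it lands on. */
    #include <stdio.h>

    int main(void) {
        printf("pointer size: %zu bits\n", (size_t)(8 * sizeof(void *)));
        return 0;
    }

    /* Example builds:
     *   cc hello.c -o hello                            (native)
     *   aarch64-linux-gnu-gcc hello.c -o hello.arm64   (cross for ARM64)
     */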


ARM's first explosion was in PDAs, not running Linux. SA110 and XScale.

A ton of ARM hardware is embedded cores running VxWorks or EmBed, M0 through M4. Yes, phones are the dominant core consumer here, but there is a whole bunch of embedded/IoT stuff shipping ARM cores every day that will never see Linux installed.


And it's always fun to remember ARM's second explosion: Nintendo. For a while, the most popular device using ARM was the Game Boy Advance.


> that will never see Linux installed.

Right up until you see someone run Doom on it.


Back then C was a high-level language. Programmers regularly dropped down to assembly (or even raw machine bytes) when they needed the best performance. Now C is considered low-level and compilers can optimize much better than you can in almost all cases, so most programmers are only vaguely aware of assembly.

Though you are correct, a lot of abstraction today makes things portable in ways that in the past they were not. The abstraction has a small performance and memory cost which wouldn't have been acceptable then, but today it is in the noise (cache misses are much more important, and good abstractions avoid them).


> Now C is considered low-level and compilers can optimize much better than you can in almost all cases, so most programmers are only vaguely aware of assembly.

This is not true; compilers don't generate super-optimized asm output from C. It's actually not that optimizable, because e.g. the memory access is too low-level, so there are aliasing issues.

But optimizing doesn't actually help most programs on modern CPUs, so it's not worth improving this.
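
A stock illustration of the aliasing point (a sketch, nothing exotic): in the first function the compiler must assume the store to a[i] could modify *scale, so it reloads *scale on every iteration and is reluctant to vectorize; restrict removes that assumption:

    /* Compile with -O2 and compare the generated loops. */
    void scale_all(int *a, const int *scale, int n) {
        for (int i = 0; i < n; i++)
            a[i] *= *scale;        /* *scale reloaded each iteration */
    }

    void scale_all_restrict(int * restrict a, const int * restrict scale, int n) {
        for (int i = 0; i < n; i++)
            a[i] *= *scale;        /* load of *scale can be hoisted out */
    }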


Look in the microcontroller space if you want more "diversity". There are 4-bit MCUs, 8 and 16-bit ones with banked/paged memory, Harvard architectures, non-byte instruction sizes, etc.


I would love to see a CPU Renaissance like this. Back then we had tons of variety: 680x0, x86, Rx000, various Lisp machines, vector computers, VLIW and Multiflow, SPARC, VAX, early ARM, message-passing machines, 1-bit multiprocessors, hypercubes, WD CPUs, and later an explosion of interesting RISC architectures... It was a really interesting and enjoyable era.


As someone who programmed at that time, I can tell you it was also very hard to write even small production programs.

Today I do things in a half-an-hour with Python that would have taken me days - maybe weeks! - to accomplish in 1978.

Each little vendor had their own janky tooling. Compilers cost hundreds of 1970s dollars (until Borland's $49 Turbo Pascal, over $150 in today's money).

Don't get me wrong. I was very unhappy when Intel dominated everything. The fact that ARM, an open-source architecture, is now eating Intel's lunch makes me happy.

But I'd honestly be glad if everyone just settled on ARM and were done with it. It was fun messing with all these weird processors (my first team leader job was writing an operating system for a pocket computer running the 65816 processor!), but it meant that actually getting work done was very slow.


I mostly agree with your overall argument, but the "mostly" qualification goes along with a small but important correction:

>The fact that ARM, an open-source architecture

ARM is in no way open; it's fully proprietary. Unlike x86 it is not vertically integrated and is available for anyone to license all the way to the architectural level, and that's huge. But said licenses certainly are not free either, nor Free.

There are promising, actually open architectures; in particular, OpenPOWER and RISC-V come to mind as interesting, with a lot of solid work behind them. So that's one small remaining opening IMO; even if it's more work on the dev side, I wouldn't mind having those stick around and get more competitive.


SPARC is open, and not only that, Sun shared some actual cores that were used in production. I don't know why Power gets mentioned but SPARC doesn't.


Probably because few would consider SPARC promising, since, unlike Power and certainly RISC-V, it's pretty much dead?


Picking a CPU is not just about the CPU architecture. It is mainly about the ecosystem around that processor. ARM has a huge amount of IP, bus fabrics, compilers, operating systems, boot loaders, and people you can hire with knowledge of all of that. There are far more people out there with ARM experience than SPARC. I don't really see anybody interested in POWER outside of IBM and the chips they sell.


We won't get a CPU Renaissance, but we are seeing a new era of "hybrid processors", aka dedicated processors running a custom ISA.

For example, Huawei's Kirin NPU.


My bet is we will have a small explosion of cheap consumer laptops running ARM, but more as a marketing ploy to ride the hype train around Apple's ARM computers being much better than Intel (even though those ARM chips won't compare to Apple Silicon - but like I said, sales).


How is the ARM build of Windows?


Before, we had PCs and phones; now we only have phones in a different case. It looks quite the opposite to me... :(



Yes, I still have a working VIA C7 netbook.


Must be terrifying for build and release engineers.


Doubt.

CPU hasn't been the limiting hardware in a decade. I think Intel stagnated because people have prioritized spending money on GPUs, memory, and SSDs.

Even when I'm writing an intensive program, I'm using multiple cores, so a single-threaded benefit is useless to me.

I have half a mind to think the M1 is a marketing gimmick, because making a better processor was low-hanging fruit that CPU companies aren't trying to compete on (outside of price).


> Even when I'm writing an intensive program, I'm using multiple cores, so a single-threaded benefit is useless to me.

Maybe poorly worded, but I would like to point out that a single-threaded performance gain is never useless, not even with a highly parallel workload.

I would think that, due to multi-stage CPU caches and IPC overhead, something done within one core/thread will be way more efficient.
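
A rough Amdahl's-law way to put numbers on that, assuming a made-up 10% serial fraction:

    speedup(n cores) = 1 / (s + (1 - s)/n)
    s = 0.10, n = 8   ->  1 / (0.10 + 0.90/8) ≈ 4.7x
    n -> infinity     ->  capped at 1/s = 10x

So faster single-threaded execution is what raises the ceiling that extra cores alone can never get past.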


Maybe, but this seems to be a bug with Apple's IO toolkit on x86, so it's unrelated (other than x86 support on macOS already falling apart, which is completely unexpected, considering the quality of the rest of the OS after recent releases).


This is a bug on M1.



