For future archaeologists wondering why everyone is talking about Philae, the title of this thread used to be something to the effect of "The rad-hard CPU used in Philae".
An 8051-family microcontroller - the CPU architecture that's found almost everywhere, including space. I wonder if they're running Forth too, as this old discussion I found also mentions the 80C32 and RTX2010 together with Forth:
The 8052AH-BASIC is better known because of a Byte Magazine article, but there was also a commercially available 8051 variant that was mask-programmed with a built-in Forth interpreter. I've never seen one.
The yearly Forth Day meeting is happening right now, and Chuck Moore (inventor of Forth, designer of the RTX2010) is going to give his annual fireside chat in a matter of hours. I'm honored to be in attendance.
So if HN has any questions, this is a good time to ask them. The hangout is here:
Sam, Chuck designed the Novix but the Harris RTX 2000, which is very similar, was designed by others.
These are 16-bit chips (the Novix addressed 128 KB) that use dedicated stack memories, so that's three ports to memory in total. The RTX puts the stacks on-chip.
The nice thing about these CPUs is they can do stack, ALU, and return operations in parallel; interrupts are cheap, timings are predictable, and the programming model is nice if you like Forth. One disadvantage of the design is that the clock rate is limited: the RAM fetch and instruction processing have to fit between clock pulses, meaning the RAM has to be roughly twice as fast as the clock.
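If it helps to picture the "stack, ALU, and return in parallel" part, here's a toy sketch in Python - just an illustration of the idea, not the actual RTX2010 microarchitecture or instruction encoding. Because the data stack, return stack and ALU are separate units, one encoded instruction can drive all of them in the same cycle, e.g. an add that also returns from the current word:

    # Toy model: one combined "add and return" instruction. The real chips
    # select which units are active via instruction bit fields; here it's
    # just a single function doing all the updates in one "cycle".
    def step_add_and_return(data_stack, return_stack, pc):
        b = data_stack.pop()
        a = data_stack.pop()
        data_stack.append(a + b)   # ALU op plus data-stack update
        return return_stack.pop()  # subroutine return in the same cycle

    data, ret = [3, 4], [0x0100]
    pc = step_add_and_return(data, ret, pc=0x0200)
    print(data, hex(pc))           # [7] 0x100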
I'm sure Mr. Moore feels vindicated that the fruits of his design work have powered humanity's first comet landing ever.
Still, it's telling of the industry's regard for Forth that the manufacturer itself manages to misspell the language as "Fourth" on the chip's official product page.
To be fair, Moore originally wanted the language to be called "Fourth":
> The file holding the interpreter was labeled FORTH, for 4th (next) generation software - but the operating system restricted file names to 5 characters.
I wonder how hard it is to create a radiation-hardened CPU. Is the premise that radiation can flip bits in the on-die cache and registers? Would simply making those redundant be a solution? Or could a single flash of radiation invalidate all memory on the device?
There are all sorts of problems you have to deal with. Memory flipping is one of the most visible, but the radiation will also damage the lattice of the processor, making some transistors harder to flip on and some much easier to flip on, leading to transient glitches, etc.
Redundancy is definitely one solution, which is one reason why spacecraft tend to have multiple processors and/or processors with redundant logic pipelines. They also tend to use different substrates that are harder for cosmic rays to affect; different, more expensive casings that offer more shielding; and larger, older, more vetted processes. For example, this chip uses a 1 micron process, which is a lot bigger than the process used in current CPUs, which means there's a lot more mass in each transistor to soak up the effects of radiation, making it harder to cause damage.
Or we can use alternative semiconductors that are more radiation-resistant. That's one of many reasons why materials discovery is still very important, even if the first chips in a new material never match the performance of silicon.
Why would you need silicon for that? Even if it were the only process we use now (which it isn't; germanium-based photodiodes exist, for example), materials research could conceivably produce other materials that work too.
There are even more exotic examples to think of. Eyes turn light into electricity without (AFAIK) Silicon. Maybe part of that physics/chemistry can be practically used elsewhere?
>Eyes turn light into electricity without (AFAIK) Silicon.
Correct. A photon hits a chromophore bound to an opsin protein and isomerizes the chromophore from 11-cis to all-trans, which changes the structure of the opsin protein, which starts the cascade of activity leading to sight. We could definitely use chemical detectors to take advantage of this or a similar process.
Germanium diodes are still made because they have a much smaller forward voltage drop than silicon diodes do (roughly 0.3V nominal versus 0.7V). This has its uses.
I work with optical detectors. There are certainly other materials that can be used for imaging. There are some things going for silicon -- depending on the application of course. Notably, it's sensitive to the visible wavelength range, up into the UV, and can be turned into detectors with extremely low self-noise. It can be manufactured at a level of purity that allows for extremely low electrical leakage that would otherwise add noise. A compound of two or more elements can't reach this purity, because that would require an absolutely perfect stoichiometric mixture.
Large "scientific" CCD's benefit from large geometries (typ. 25 micron pixels), which probably contributes to radiation hardening. But it's hard to protect a chip from radiation without keeping it in the dark.
There's also the fact that silicon has been refined so much thanks to the overall semiconductor industry.
Cool, where do you work? I work for an image sensor company (CMOS, not CCD), though I'm a pure software engineer (wafer and assembly test software, among other tools), so I don't know that much about device physics, only what I've gleaned from company-wide presentations and such.
Theoretically you can make an integrated circuit with any semiconductor, but I don't know if anyone has been brave enough to try it.
The smallest integrated circuit is a diode; it has half a transistor :). OK, a diode is not actually an integrated circuit, but if you can use a material to make a diode, you can probably use the same material to make an integrated circuit with enough money, time and ingenuity.
Some diodes are made of germanium. It has a very low band gap, which is useful for "crystal radios" ( http://en.wikipedia.org/wiki/Crystal_radio#Crystal_detector ). Perhaps this could be useful for making very low voltage ICs, to reduce power and heat. But perhaps there is a technical problem that I don't know about.
LEDs are made from many different semiconductors. The band gap of the material is related to the color ( http://en.wikipedia.org/wiki/Light-emitting_diode#Colors_and... ). With these you could make high voltage ICs (like 10V?). Perhaps that would improve the noise margin? But perhaps there is a technical problem that I don't know about.
[kens: If you are reading this, I'd love to see a technical post about this subject.]
> Half a year after Kilby, Robert Noyce at Fairchild Semiconductor developed his own idea of an integrated circuit that solved many practical problems Kilby's had not. Noyce's design was made of silicon, whereas Kilby's chip was made of germanium. Noyce credited Kurt Lehovec of Sprague Electric for the principle of p–n junction isolation caused by the action of a biased p–n junction (the diode) as a key concept behind the IC.
> "Germanium differs from silicon in that the supply for germanium is limited by the availability of exploitable sources, while the supply of silicon is only limited by production capacity since silicon comes from ordinary sand or quartz. As a result, while silicon could be bought in 1998 for less than $10 per kg,[20] the price of 1 kg of germanium was then almost $800.[20]"
So nearly 2 orders of magnitude price difference. In 1998 admittedly, but still. (Someone should come up with a better example for the Wikipedia article)
I know that, but I don't think every material has the interesting properties that Si does for image sensors - namely, it needs to turn photons of the right wavelength range into electrons.
> As a photodiode, an LED is sensitive to wavelengths equal to or shorter than the predominant wavelength it emits. For example, a green LED is sensitive to blue light and to some green light, but not to yellow or red light.
If you want to measure light across all visible wavelengths, you must use a semiconductor with a low band gap, for example the ones named in the other comments: germanium, silicon or gallium arsenide.
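For a rough sense of the numbers (a back-of-the-envelope sketch; the band gap figures below are approximate room-temperature values), the long-wavelength cutoff of a photodetector is about h*c / E_gap, and all three of those materials cover the visible range:

    # Cutoff wavelength ~ h*c / band gap, with h*c ~ 1240 eV*nm.
    H_C_EV_NM = 1239.84
    band_gap_ev = {"Ge": 0.67, "Si": 1.12, "GaAs": 1.42}  # approximate values
    for material, eg in band_gap_ev.items():
        print(f"{material}: Eg ~ {eg} eV -> sensitive up to ~{H_C_EV_NM / eg:.0f} nm")
    # Ge ~ 1850 nm, Si ~ 1107 nm, GaAs ~ 873 nm; visible light is ~380-750 nm.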
Yes, one of the premises is that it flips bits, a phenomenon called Single Event Upset (SEU). E.g., the open source LEON FT (http://www.gaisler.com/index.php/products/processors/leon3ft) processor is designed to detect and correct such errors, in addition to being manufactured with materials that block radiation. In spacecraft, such radiation-hardened processors are often used in addition to redundancy.
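To make the detect-and-correct part concrete, here's a minimal sketch of the idea behind an error-correcting code - a plain Hamming(7,4) toy example in Python, not the LEON3FT's actual EDAC scheme. It just shows how a few parity bits let you locate and flip back a single upset bit:

    def hamming74_encode(d1, d2, d3, d4):
        """Protect 4 data bits with 3 parity bits (code positions 1..7)."""
        p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
        p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_correct(c):
        """Fix at most one flipped bit and return the 4 data bits."""
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3   # 0 = clean, else 1-based error position
        if syndrome:
            c[syndrome - 1] ^= 1          # flip the upset bit back
        return [c[2], c[4], c[5], c[6]]

    code = hamming74_encode(1, 0, 1, 1)
    code[4] ^= 1                           # simulate an SEU flipping one bit
    assert hamming74_correct(code) == [1, 0, 1, 1]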
One issue with only using redundancy to overcome errors from radiation is that you'll have a hard time determining which processor has an error - especially when several processors are affected at the same time.
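That's why the classic arrangement is triple modular redundancy rather than dual: with two copies you can only detect a disagreement, while with three a majority vote both masks the fault and tells you which unit to distrust. A hedged sketch (the unit numbering is just for illustration):

    def tmr_vote(a, b, c):
        """Return (majority result, index of the disagreeing unit or None)."""
        if a == b == c:
            return a, None
        if a == b:
            return a, 2      # unit 2 disagrees
        if a == c:
            return a, 1
        if b == c:
            return b, 0
        return None, None    # all three differ: unrecoverable, reset/failover

    result, suspect = tmr_vote(0x2A, 0x2A, 0x3A)
    print(hex(result), "suspect unit:", suspect)   # 0x2a suspect unit: 2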
If it's rad-hard, it's export-controlled in the US. See the US Munitions List, category XV, section (d), "Radiation-hardened microelectronic circuits".
So mass market products aren't made radiation-hard. Rad-hard CPUs more modern than that FORTH engine exist; Atmel makes a rad-hard SPARC. But they're produced in tiny quantities and are thus very expensive.
Hah, an old friend. A company I worked with had a Novix in 1986/1987 for prototyping purposes (an early number plate recognition system). Before we got it we had to sign an unreasonable amount of paper. Rumor had it this was the chip powering the Tomahawk cruise missile.
Brought a smile to my face too: I had a summer job in 1986 at FORTH, Inc., and spent some quality time with my officemate's cmForth manual. (cmForth was Chuck Moore's new Forth dialect for the Novix. IIRC the manual said the multiply-step instruction didn't quite work and gave a software workaround.)
>Nice to see this neat-but-underappreciated architecture in the news.
It's already appreciated.
Let me recall here the 'scientist' character (from A&B Strugatsky) who explains the World as a huge inertial mass with respect to the good or evil within it.
Tertium non datur is a principle of our mind, sometimes used in computers.
I looked on Alibaba and Shenzhen suppliers have the HS9-RTX-2010-RH for sale at 10 cents a chip, able to supply up to 100,000 chips per month.
How do they do this? They're lying. Various Alibaba vendors list literally every part number they can find as something they are selling. If someone wants it, then they see if they can actually get it.
It's quite strange that there is no backlash against those manufacturers that don't make their parts or their datasheets available. They are just a pain in the butt, and they drive our buying power down.
Can someone point to resources comparing stack machines and register based machines? The RTX2010 technical documentation talks about all the advantages of a stack machine but doesn't say anything about its shortcomings.
A lot of the usual criticisms don't apply. This code will never be JIT-compiled. Instructions take either 1 or 2 cycles depending on whether memory is accessed, so there are no pipelining or caching concerns.
Actually in this thing the top two elements of the stack are held in registers. So if you wanted to use the ANSI C compiler instead of FORTH, you might just think of it as a two-register machine (there are actually quite a few more) with a few special stack-manipulation instructions.
Optimal register allocation is an NP-complete problem. By doing away with registers, you don't have to solve it at all.
How that compares on a practical level, I have no idea. But it's one of the reasons the JVM has a stack-based model and no registers: it makes writing compilers for JVM bytecode a lot easier.
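A small illustration of that point (a Python sketch with made-up PUSH/ADD/etc. mnemonics, not real JVM bytecode): generating stack code for an expression is just a post-order walk of the tree, with no register allocation step anywhere.

    def emit_stack_code(node, out):
        """node is either a variable name or a tuple (op, left, right)."""
        if isinstance(node, str):
            out.append(f"PUSH {node}")
        else:
            op, left, right = node
            emit_stack_code(left, out)
            emit_stack_code(right, out)
            out.append(op)        # operands are already on top of the stack
        return out

    # (a + b) * (c - d)
    expr = ("MUL", ("ADD", "a", "b"), ("SUB", "c", "d"))
    print("\n".join(emit_stack_code(expr, [])))
    # PUSH a / PUSH b / ADD / PUSH c / PUSH d / SUB / MUL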
This isn't quite right - we don't in general really care whether we're allocating optimally (for example, if we have 16 registers we don't care that we could have squeezed into 10), and heuristic-based approaches are practically very useful (and cheap).
In terms of compiler writing, if you wanted to write any optimisations you'd probably want to manipulate 3-address code internally, if only because the data-flow-based analyses would be easier (and then you would emit stack code in the last step).
This might be a reason why the JVM designers decided to use a stack machine, but I doubt it's an important one - they'd easily be able to get ahold of a few decent compiler writers if they needed to.
I don't doubt the JVM designers have good compiler writers. I rather think that they wanted the bytecode to be accessible to write for other compiler writers, such as the team that implemented JSP servlets a century ago, but also perhaps with a future vision for the alternative JVM languages.
Don't forget bytecode verification, which is easier with just a stack rather than a stack + registers. Java was not originally envisioned to become the default server-side + JIT technology it eventually became; it was meant for embedded apps, which meant small, simple bytecode.
Interesting. I agree with your statement regarding it being an NP-complete problem, as intuitively it seems that way, but do you have a reference by chance? Thanks!
OK, so we can view register allocation as an instance of graph colouring: create a graph where the nodes are your 'virtual registers' and there's an edge between two nodes if those registers are both live at the same time. Then we can run on n registers iff we can colour the graph with n colours. Graph colouring is one of the first identified NP-complete problems.
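A minimal sketch of that view in Python (the virtual register names and the greedy heuristic are just for illustration; real allocators use fancier heuristics, and it's finding an optimal colouring that is NP-complete):

    def greedy_colour(interference, k):
        """interference: node -> set of neighbours. Returns node -> register,
        or raises when the heuristic runs out of registers (a spill in practice)."""
        colours = {}
        for node in sorted(interference, key=lambda n: -len(interference[n])):
            taken = {colours[n] for n in interference[node] if n in colours}
            free = [c for c in range(k) if c not in taken]
            if not free:
                raise RuntimeError(f"would have to spill {node}")
            colours[node] = free[0]
        return colours

    # v1 is live alongside both v2 and v3, but v2 and v3 never overlap,
    # so two machine registers are enough.
    graph = {"v1": {"v2", "v3"}, "v2": {"v1"}, "v3": {"v1"}}
    print(greedy_colour(graph, k=2))   # e.g. {'v1': 0, 'v2': 1, 'v3': 1}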