
a program running on a 480 megahertz microcontroller can quite reasonably crash a million times a second; even a single 80-character error log line would be 80 megabytes per second or 7 terabytes per day. and it would be reasonable to record more telemetry from a crash than a single line
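a quick check of the arithmetic above, in python (the 1 MHz crash rate and 80-character line are the hypothetical figures from the comment, not measurements):

```python
# hypothetical crash loop: one 80-character log line per crash, a million crashes/second
crashes_per_second = 1_000_000
line_bytes = 80

bytes_per_second = crashes_per_second * line_bytes   # 80 MB/s
bytes_per_day = bytes_per_second * 24 * 60 * 60      # ~6.9 TB/day, i.e. roughly 7 TB

print(bytes_per_second / 1e6, "MB/s")
print(bytes_per_day / 1e12, "TB/day")
```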


How far we have come. Back in my day, a program could only crash a few thousand times a second.


To err is human. To really screw things up we had to invent microcontrollers.


It is amazing how far we have come. On my own, I can only really screw up a few times per day, unless I am really trying. On a really, really bad day, maybe a few dozen times? How inefficient.


A washing machine is probably running at 48 MHz, not 480 MHz, but even if it were running its core at 480 MHz, its network interface is probably not going to be able to output TCP packets at 1 MHz.


we are faced with the assertion that the washing machine was sending 3.7 gigabytes per day of data

someone asserted that that is an unreasonable amount of data for a washing machine to generate

i'm pointing out an easy way a washing machine might generate 2000 times more data than that

the fact that the network interface is possibly insufficient to send the data out is irrelevant to the question of whether the washing machine can or cannot generate it in the first place, which is what was being discussed

however, i will point out that if it's using tcp, unless it opens a new tcp connection for each telemetry message, the tcp stack will batch together many telemetry messages into a single tcp segment, probably about 1500 bytes worth
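a rough sketch of that batching effect, using typical ethernet sizes (the MTU and header sizes here are illustrative assumptions, not measurements from any particular washing machine):

```python
# illustrative numbers: standard ethernet MTU minus IPv4 and TCP headers
mtu = 1500
ip_header = 20
tcp_header = 20
mss = mtu - ip_header - tcp_header   # 1460 bytes of payload per segment

msg_bytes = 80                       # one 80-character telemetry line
msgs_per_segment = mss // msg_bytes  # ~18 messages coalesced per segment

print(mss, msgs_per_segment)
```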

— ⁂ —

of course you can control a washing machine with an 8051, or an eprom and a register clocked from the power line (see jeff laughton's printing press controller at https://laughtonelectronics.com/Arcana/One-bit%20computer/On...), or for that matter a mechanical timer or, as i've done, by unplugging the power cord and pulling a rubber drain plug when you think the agitator motor has been running for long enough. but more powerful control systems enable new functionality

historically it is true that manufacturers have used low-spec microcontrollers because more powerful ones were too expensive. today digi-key will sell you a 500-megahertz i.mx rt1010 cortex-m7 from philips/nxp, with 128 kilobytes and dc/dc conversion on-chip, for under four dollars, roughly π dollars in fact https://www.digikey.com/en/products/detail/nxp-usa-inc/MIMXR.... home appliances and motor control are two of the application areas the datasheet claims it's 'specifically useful' for. unlike an 8051, you can program it in micropython and single-step it over a debugging umbilical, and once it's deployed, it can send you surveillance data over the internet. and you can get cheaper and better chips on lcsc if you can read datasheets in chinese

oops, did i say surveillance data

i meant telemetry. telemetry, telemetry, telemetry

for better or worse this unlocks a lot of temptation for manufacturers to put ridiculously powerful cpus in things where they only serve to cause headaches to the consumer

other comments in this thread have even pointed out non-evil ways this could make manufacturers more profitable


maybe it's the error logging itself that crashes and then triggers ...


yeah, you usually try to provide some kind of delay if the MCU can detect that it rebooted from a crash / watchdog timeout


as well you should, and also on restarting failed tasks, but maybe somebody forgot
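a minimal sketch of that kind of boot delay, assuming a hypothetical `reset_cause()` and a persisted crash counter (both names are made up for illustration; real firmware would read a reset-cause register and store the counter in non-volatile memory):

```python
import time

def boot_delay(crash_count, base=0.5, cap=60.0):
    """Exponential backoff: double the delay per consecutive crash, up to a cap."""
    return min(cap, base * (2 ** crash_count))

# on startup, a firmware might do something like:
#   if reset_cause() == "watchdog":
#       time.sleep(boot_delay(saved_crash_count))
# which turns a megahertz crash loop into, at worst, a restart every few seconds
print([boot_delay(n) for n in range(4)])   # [0.5, 1.0, 2.0, 4.0]
```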


Really megabytes, or is that just a typo? If it's real, how could 80 characters possibly generate so much data?


My napkin math:

    freq = 480 megahertz                            // 480000000 Hz
    size = 80 * 1 byte                              // 80 bytes
    cycles_per_write = 400                          // 100 to print, 300 to fail
    writes_per_sec = freq / cycles_per_write        // 120mHz
    duration = 24 hr
    (writes_per_sec * size * duration) to terabytes // 8.2 tb
Seems somewhat plausible to hit >7TB of writes assuming no compression of the data.


i think that instead of 120 millihertz you mean 1.2 megahertz? and your calculation actually works out to 8.3 terabytes? otherwise i agree

btw the units(1) program is useful for things like this

    You have: 480 megahertz / 400 * 80 bytes * 1 day
    You want: terabytes
     * 8.2944
     / 0.12056327


it's not a tumor. i mean a typo. 80 bytes multiplied by one million times per second equals 80 million bytes (also known as 80 megabytes) per second


You are right. Thank you for the clarification.


sure, happy to oblige


Uplinks are usually somewhat restricted.


yeah, and maybe someone was depending on that historical situation without realizing it



