
Encryption is not future proof (schemes previously thought to be secure have been broken). Writing zeros is future proof.


On an SSD, writing zeroes may only trigger a remap of the NAND cells. There is a paper where data has been recovered in exactly that situation... So writing zeroes hasn't been future proof for a decade or longer.


In general, overwriting an LBA on a SSD will not cause a write to the same physical flash memory cells, no matter what the data pattern is. If you want to guarantee that the old version of the data is actually erased from the physical medium, you need to issue a drive sanitize command.
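A sketch of what issuing such a sanitize looks like from Linux with nvme-cli (device path is a placeholder and flag spellings should be verified against your nvme-cli version; this erases the entire drive, so it's shown as a function that is never invoked):

```shell
# Sketch only: NVMe Sanitize via nvme-cli. Defined but NOT invoked here,
# because running it destroys all data on the target drive.
nvme_sanitize_sketch() {
    dev="$1"                                # e.g. /dev/nvme0 (placeholder)
    nvme id-ctrl "$dev" | grep -i sanicap   # which sanitize actions the controller supports
    nvme sanitize "$dev" --sanact=2         # action 2 = block erase of all NAND
    nvme sanitize-log "$dev"                # poll the sanitize status log until complete
}
```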


Pretty sure people have discovered that is not reliable on some drives too, that it may only destroy the logical to physical map, and not change the actual data cells.

It is surprisingly difficult to ensure deliberate data loss.


> Pretty sure people have discovered that is not reliable on some drives too, that it may only destroy the logical to physical map, and not change the actual data cells.

This can be the case for something like an ATA Secure Erase command, which is why the Sanitize commands were introduced to ATA, SCSI and NVMe. Those do explicitly mandate that all user data be erased, including from all caches and any storage media that is not normally accessible to the host system (ie. old blocks that haven't been garbage collected yet).
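On the ATA side, recent hdparm exposes the Sanitize feature set directly (flag names per hdparm(8); check support with the status query first). Again an uninvoked sketch, since this wipes the whole drive:

```shell
# Sketch only: ATA Sanitize via hdparm. Defined but NOT invoked here.
ata_sanitize_sketch() {
    dev="$1"                          # e.g. /dev/sdX (placeholder)
    hdparm --sanitize-status "$dev"   # confirm the drive supports Sanitize
    hdparm --yes-i-know-what-i-am-doing --sanitize-block-erase "$dev"
    hdparm --sanitize-status "$dev"   # re-check; repeat until it reports completion
}
```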


Absolutely not. Writing random data is future proof.


1) Writing all zeros is generally considered more SSD-friendly than random data. The exact reasons for this are complex, in part because the behavior of SSD controllers varies significantly for all-zero blocks. But, while absolutely inferior to using TRIM, there is reason to believe that writing all zeroes is less likely to lead to premature wear than random data.

2) While it's been "common knowledge" since Gutmann that data from old writes can be recovered (thus the advice to write multiple passes of random data), this turns out to have been iffy in Gutmann's day and an outright myth today. Multiple university teams have tried and failed to recover data using advanced techniques (such as SEM tomography) after a single zero pass. Generally the success rate for single bits is only slightly better than random chance. Gutmann himself criticized multi-pass overwriting as "a kind of voodoo incantation to banish evil spirits" and unnecessary today.

3) By far the larger concern in data recovery, for platters as well as SSDs, is caches and remapping performed in the firmware. As a result, the ATA secure erase command is the best way to destroy data, because it allows the controller to employ its special knowledge of the architecture of the drive. However, ATA SE has been found to be extremely inconsistently implemented, especially on consumer hard drives.

The inability to reliably verify good completion of ATA SE is a major contributor to the preference for "self-encrypting" drives, in which ATA SE can be reliably achieved by clearing the internal crypto information, and to the US government's recommendation that drives can only reliably be cleared by physical destruction.

Physical destruction is probably your best bet as well, because self-encrypting enterprise drives come at a substantial price premium and you still lack insight into the quality of their firmware. In other words, the price of a drive with an assured good ATA SE implementation is probably higher than the price of a cheap drive plus the one you'll replace it with after you crush it.
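For concreteness, the ATA SE sequence is usually driven with hdparm roughly like this (an uninvoked sketch; flags per hdparm(8), and as noted above you can't fully trust the result without insight into the drive's firmware):

```shell
# Sketch only: the classic ATA Security Erase sequence. Defined, NOT run.
ata_secure_erase_sketch() {
    dev="$1"                          # e.g. /dev/sdX (placeholder)
    hdparm -I "$dev" | grep -i erase  # supported? enhanced? time estimate?
    hdparm --user-master u --security-set-pass p "$dev"   # set a temporary password
    hdparm --user-master u --security-erase p "$dev"      # issue the erase
    # On a self-encrypting drive, the "enhanced" variant typically just
    # discards the internal crypto key, which is why SEDs finish near-instantly.
}
```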


in regards to 2):

It's true that multiple overwrites are overkill. But for SSDs it has been shown that it's possible to read data after a full overwrite [1].

[1] https://static.usenix.org/event/fast11/tech/full_papers/Wei....


The data recovered in this paper, though, was recovered by direct readout of flash chips in order to locate pages which had not actually been overwritten at all. This is a very different kind of problem and attack than the one that led to multiple-pass overwrites and falls into my point 3. The reason that multi-pass overwriting can be effective on SSDs is because the increased number of write operations encourages the SSD controller to remap more blocks in and out of the page space which increases physical coverage of the overwrite.

There is a potential benefit to multi-pass random writes to SSDs in this case, but this paper shows exactly why you shouldn't rely on it: the improvement in security from random overwrites is stochastic at best and cannot be guaranteed without full knowledge of the behavior of the controller, as can be seen in the drives in the paper that still contained remnant data after many passes.

As the paper finds, multi-pass overwrite is not a valid technique to sanitize SSDs, and is still cargo-cult security.
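For reference, the multi-pass random overwrite under discussion is what coreutils shred(1) performs. A safe demo on a throwaway file (on a real SSD, per the above, the passes give no physical-coverage guarantee):

```shell
# Multi-pass overwrite demo on a regular file, not a device.
f=$(mktemp)
dd if=/dev/urandom of="$f" bs=4096 count=4 2>/dev/null   # stand-in "old data"
shred -n 3 -z "$f"                  # 3 random passes, then a final zero pass
# The final zero pass means the file's first 16384 bytes are now all zero:
cmp -n 16384 "$f" /dev/zero && result=zeroed || result=remnant
rm -f "$f"
echo "$result"                      # prints "zeroed"
```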


Yes, like I already said, multi-pass is not a good way to sanitize SSDs. However, it does directly contradict your stance that data is irrecoverable after a full overwrite. It doesn't really matter that it's done via a direct flash chip readout; literally anyone can do that. In comparison, the cost of an SEM (which can't read out platters) approaches a million dollars.


Not writing the data in the first place is future proof.


Melting the platters is the next best option.


Writing encrypted zeros is more future proofish


True, but you can do both.




