
Modern (well, post-ZFS) filesystems operate by moving through state changes in which data is not (immediately) destroyed; older versions of the data remain available for various purposes. Similar to an ACID-compliant database, something like a backup or recovery process can still access older snapshots of the filesystem, for various values of "older" that might range from milliseconds to seconds to years.

With that in mind, you can see how we get into a scenario where deleting a file requires a small amount of storage to record both the old and new states before it can actually free up space by releasing the old state. There is supposed to be an escape hatch for getting yourself out of a situation where there isn't even enough storage for this little bit of recordkeeping, but either the author didn't know whatever trick is needed or the filesystem code wasn't well behaved in this area (it's a corner case that isn't often tested).



I'm most surprised by the lack of testing. Macs tend to ship with much smaller SSDs than other computers because that's how Apple makes money ($600 for 1.5TB of flash vs. $100/2TB if you buy an NVMe SSD), so I'd expect that people run out of space pretty frequently.


And if you make the experience broken and frustrating, people will throw the whole computer away and buy a new one, since the storage can't be upgraded.


Not to mention potentially paying for your cloud storage for life.


Don't forget to make it so nothing they have works properly with any other brand's devices, so the next one they buy must also be Apple.

Rinse and repeat.


It feels like insanity that the default configuration of any filesystem intended for laymen can fail to delete a file due to anything other than an I/O error. If you want to keep a snapshot, at least bypass it when disk space runs out? How many customers do the vendors think would prefer the alternative?!


It's not really just keeping snapshots that is the issue, usually. It's normal FS operation, meant to prevent data corruption if an operation is interrupted, plus various space-saving measures. Some FSs link files together when storing bulk data so that identical blocks between them are only stored once, which means any one of those files can only be fully deleted when all of them are. Some FSs log actions to disk before and after performing them so that they can be restarted if interrupted. Some FSs do genuinely keep files on disk if they're already referenced in a snapshot even after you delete them – this is one instance where a modal about the issue should probably pop up if disk space is low. And some OSes really really really want to move things to .Trash-1000 or something else stupid instead of deleting them.
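If you're on ZFS and want to see which of these mechanisms is actually holding your space, the space-accounting properties break it down. A rough sketch, assuming a hypothetical dataset named tank/home (names are placeholders):

    # how much space is pinned by snapshots vs. the live dataset vs. reservations
    zfs list -o name,used,usedbysnapshots,usedbydataset,usedbyrefreservation tank/home

    # list the snapshots themselves, sorted by how much space each one holds exclusively
    zfs list -rt snapshot -o name,used -s used tank/home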


Pretty much by the time you get to 100% full on ZFS, latency is going to be atrocious anyway, but from my understanding there are multiple steps (from simplest to worst case) that ZFS permits if you do hit the error (rough commands for steps 2 and 4 are sketched after the list):

1. Just remove some files - ZFS will attempt to do the right thing

2. Remove old snapshots

3. Mount the drive from another system (so nothing tries writing to it), then remove some files, reboot back to normal

4. Use `zfs send` to copy the data you want to keep to another, bigger drive temporarily, then either prune the data or, if you already filtered out the old snapshots, recreate the original pool and reload it with `zfs send` from that copy.
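A hedged sketch of what steps 2 and 4 might look like from the shell, assuming a pool named tank and a spare pool named backup (all names are placeholders, not from the thread), and assuming you verify the copy before destroying anything:

    # step 2: destroy old snapshots you can live without
    zfs destroy tank/data@old-snapshot

    # step 4: evacuate to a bigger pool, rebuild, and restore
    zfs snapshot -r tank@evacuate
    zfs send -R tank@evacuate | zfs receive -F backup/tank
    zpool destroy tank                  # only after verifying the copy!
    zpool create tank mirror sdb sdc    # recreate with your actual vdev layout (example disks)
    zfs send -R backup/tank@evacuate | zfs receive -F tank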


Modern defrag seems very cumbersome xD


Defragmentation, and the ability to do it at all, are not free.

You can have cheap defrag but a comparatively brittle filesystem by making things modifiable in place.

You can have a filesystem whose primary value is "never lose your data", but in exchange defragmentation is expensive.


I don't buy this? What does defragmentation have to do with snapshotting? Defragmentation is just a rearrangement of the underlying blocks. Wouldn't snapshots just get moved around?


The problem is that you have to track down every pointer that refers to a specific block.

With snapshotting, especially with filesystems that can only write data through snapshots (like ZFS), blocks can be referred to by many pointers.

It's similar to evaluating the liveness of an object in a GC, except you're now operating on a possibly gigantic heap of very... pointer-ful objects that you have to rewrite - which goes against the core principle of ZFS, which is data safety. You're essentially doing a huge history rewrite on something like a git repo with billions of small objects, and doing it safely means you have to rewrite every metadata block that in any way refers to a given data block - and then rewrite every metadata block pointing to those metadata blocks.


But more pointers is just more cost, not an outright inability to do it. The debate wasn't over whether defragmentation itself is costly. The question was whether merely making defragmentation possible would impose a cost on the rest of the system. So far you've only explained why defragmentation on a snapshotting volume would be expensive with typical schemes, which is entirely uncontroversial. But you've explained neither why you believe defragmentation would be impossible (no "ability to do it") under your scheme, nor why other schemes couldn't make it possible "for free".

In fact, the main difficulty with garbage collectors is maintaining real-time performance. Throw that constraint out, and the game changes entirely.


I never claimed it's impossible - I claimed it's expensive. Prohibitively expensive, as the team at Sun found out when they attempted it, and offline defrag becomes easy with a two-space approach, which is essentially "zfs send to a separate device".

You can attempt to add an extra indirection layer, but it does not really reduce fragmentation; it just lets you remap existing blocks to another location at the cost of an extra lookup. This is in fact implemented in ZFS as the solution for the erroneous addition of a vdev, allowing device removal, though due to the performance cost it's oriented mostly at "oops, I added the device wrongly, let me quickly revert" (rough sketch below).
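For concreteness, a hedged sketch of that flow, assuming a pool named tank and a plain-disk vdev that qualifies for removal (both placeholders):

    zpool add tank sdx       # oops, added a disk as a new top-level vdev by mistake
    zpool remove tank sdx    # evacuates its data to the other vdevs, leaving an indirect mapping behind
    zpool status tank        # the removed device shows up as an "indirect" vdev; reads through it pay the extra lookup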


If by "not able to" you meant "prohibitively expensive" - well, I also don't see why it's prohibitively expensive even without indirection. Moving blocks would seem to be a matter of (a) copy the data, (b) back up the old pointers, (c) update the pointers in-place, (d) mark the block move as committed, (e) and delete the old data/backups. If you crash in the middle you have the backup metadata journaled there to restore from. No indirection. What am I missing? I feel like you might have unstated assumptions somewhere?


My bad - I'm a bit too into the topic and sometimes forget what other people might not know ^^;

You're missing the part where (c) is forbidden by the design of the filesystem, because ZFS is not just "copy on write" by default (like BTRFS, which has an in-place rewrite option, IIRC), nor an LVM/device-mapper snapshot, which similarly doesn't have strong invariants around CoW.

ZFS writes data to disk in two ways - a (logically) write-ahead log called the ZFS Intent Log (which handles synchronous writes and is only read back on pool import), and transaction group sync (txg sync), where all newly written data is linked into a new metadata tree that shares structure with the previous TXG's metadata tree (so unchanged branches are shared), and the pointer to the head of the tree is committed into an on-disk circular buffer of at least 128 pointers.

Every snapshot in ZFS is essentially a pointer to such a metadata tree - all writes in ZFS are done by creating a new snapshot; the named snapshots are just rooted in different places in the filesystem. This means you can sometimes recover even from a catastrophic software bug (for example, the master branch for a few commits had a bug that accidentally changed the on-disk layout of some structures - one person running master hit it and ended up with a pool that could not be imported... but the design meant they could tell ZFS import to "rewind" to a TXG sync number from before the bug).
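That rewind is exposed through `zpool import`'s recovery options; a hedged sketch, assuming a pool named tank (placeholder):

    zpool import -F -n tank   # dry run: would discarding the last few transactions make the pool importable?
    zpool import -F tank      # actually roll back to an earlier transaction group and import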

Updating the blocks in place violates design invariants - once you violate them, the data safety guarantees are no longer guarantees. And this makes it at minimum an offline operation, and at that point the type of client that needs in-place defragmentation can reasonably do the two-space trick (if you're big enough to make that infeasible, you're probably big enough to easily throw in at least an extra JBOD and relieve fragmentation pressure).

To make the later paragraphs understandable (beware: ZFS internals as I remember them):

ZFS is constructed of multiple layers[1] - from the bottom (somewhat simplified; a `zdb` sketch for poking at these layers follows the list):

1. SPA (Storage Pool Allocator) - what implements "vdevs"; the only layer that actually deals with blocks. It implements access to block devices, mirroring, RAIDZ, dRAID, etc., and exposes a single block-oriented interface upwards.

2. DMU (Data Management Unit) - an object-oriented storage system. Turns a bunch of blocks into an object-oriented PUT/GET/PATCH/DELETE-like setup with 128bit object IDs. Also handles base metadata - the immutable/write-once trees that turn "here's a 1GB blob of data" into 512b to 1MB portions on disk. For any given metadata tree/snapshot there are no in-place changes - modifying an object "in place" means that the new txg sync has, for a given object ID, a new tree of blocks that shares as much structure with the previous one as possible.

3. DSL / ZIL / ZAP - provide basic structures on top of the DMU. The DSL is what gives you the ability to name datasets and snapshots, the ZIL handles the write-ahead log for dsync/fsync, and ZAP provides a key-value store inside DMU objects.

4. ZPL / ZVOL / Lustre / etc. - these are the parts that implement the user-visible filesystem. ZPL is the ZFS POSIX Layer, a POSIX-compatible filesystem implemented over the object storage. ZVOL does something similar but presents an emulated block device. Lustre-on-ZFS likewise talks directly to the ZFS object layer instead of implementing MDT/OST on top of POSIX files again.
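If you want to poke at these layers yourself, `zdb` (the read-only debugger that ships with ZFS) exposes the DMU/SPA view directly. A hedged example, assuming a pool tank with a dataset tank/home (placeholders):

    zdb -dd tank/home    # list the DMU objects (dnodes) backing the dataset, with their types and sizes
    zdb -bb tank         # traverse all block pointers in the pool and summarize space usage by type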

You could, in theory, add an extra indirection layer just for defragmentation, but this in turn creates a problematic layering violation (something found at Sun when they tried to implement BPR, block pointer rewrite) - because suddenly the SPA layer (the layer that actually handles block-level addressing) needs to understand the DMU's internals (or a layer between the two needs bi-directional knowledge). This makes for possibly brittle code, so again - possible, but against the overarching goals of the project.

The "vdev removal indirection" works because it doesn't really care about location - it allocates space from other vdevs and just ensures that all SPA addresses that have ID of the removed vdev, point to data allocated on other vdevs. It doesn't need to know how the SPA addresses are used by DMU objects


I appreciate the long explanation of ZFS, but I don't feel most of it really matters for the discussion here:

> Updating the blocks in place violates design invariants - once you violate them, the data safety guarantees are no longer guarantees.

Again - you can copy blocks prior to deleting anything, and commit them atomically, without losing safety. The fact that you (or ZFS) don't wish to do that doesn't mean it's somehow impossible.

> the type of client that needs in-place defragmentation can reasonably do the two-space trick (if you're big enough to make that infeasible, you're probably big enough to easily throw in at least an extra JBOD and relieve fragmentation pressure)

You're moving the goalposts drastically here. It's quite a leap to go from "has a bit of free space on each drive" to "can throw in more disks at whim", and the discussion wasn't about "only for these types of clients".

And, in any case, this is all pretty irrelevant to whether ZFS could support defragmentation.

> this makes it at minimum an offline operation

See, that's your underlying assumption that you never stated. You want defragmentation to happen fully online, while the volume is still in use. What you're really trying to argue is "fully online defragmentation is prohibitive for ZFS", but you instead made the sweeping claim that "defragmentation is prohibitive for snapshotted filesystems in general".


You're hung up on the word "impossible", which I never used.

I did say that there are trade-offs and that some goals can make things like defragmentation expensive.

ZFS's main design goal was that nothing short of (extensive) physical damage should allow destruction of users' data. Everything else was secondary. As such, the project was not interested, ever, in supporting in-place updates.

You can design a system with other goals, or with goals that are more flexible. But I'd argue that's why BTRFS got its undying reputation for data loss - it was more flexible, and that unfortunately also opened the way for more data-loss bugs.


> You're hung up on the word "impossible", which I never used.

That's not true; that was only at the beginning -- "impossible" was simply what I originally took (and would still take, but I digress) your initial comment of "ability to defragment is not free" to be saying. It's literally saying that if you don't pay a cost (presumably in performance or reliability), then you become unable to defragment. That sounded like impossibility, hence the initial discussion.

Later you said you actually meant it'd be "prohibitively expensive". Which is fine, but then I argued against that too. So now I'm arguing against 2 things: impossibility and prohibitive-expensiveness, neither of which I'm hung up on.

> ZFS's main design goal was that nothing short of (extensive) physical damage should allow destruction of users' data. Everything else was secondary.

Tongue only halfway in cheek, but why do you keep referring to ZFS like it's GodFS? The discussion was about "filesystems" but you keep moving the goalposts to "ZFS". Somehow it appears you feel that if ZFS couldn't achieve something then nothing else possibly could?

Analogy: imagine if you'd claimed "button interfaces are prohibitively expensive for electric cars", I had objected to that assertion, and then you kept presenting "but Tesla switched to touchscreens because they turned out cheaper!" as evidence. That's how this conversation feels. Just because Tesla/ZFS has issues with something that doesn't mean it's somehow inherently prohibitive.

> As such, the project was not interested, ever, in supporting in-place updates.

Again: are we talking online-only, or are you allowing offline defrag? You keep avoiding making your assumptions explicit.

If you mean offline: it's completely irrelevant what the project is interested in doing. By analogy, Microsoft was not interested, ever, in allowing NTFS partitions to be moved or split or merged either, yet third-party vendors have supported those operations just fine. And on the same filesystem too, not merely a similar one!

If you mean online: you'd probably hit some intrinsic trade-off eventually, but I'm skeptical it's at this particular juncture. Just because ZFS may have made something infeasible with its current implementation, that doesn't mean another implementation couldn't have... done an even better job? For example, even with the current on-disk structure of ZFS (let alone a better one), even if a defragmentation-supporting implementation couldn't achieve 100% throughput while defragmentation is ongoing, surely it could at least sustain some throughput during a defrag so that the pool doesn't need to go entirely offline? That would be a strict improvement over the current situation.

> But I'd argue that's why BTRFS got its undying reputation for data loss - it was more flexible, and that unfortunately also opened the way for more data-loss bugs.

Hang on... a bug in the implementation is a whole different beast. We were discussing design features. Implementation bugs are... not in that picture. I'm pretty sure most people reading your earlier comments would get the impression that by "brittleness" you were referring to accidents like I/O failures & user error, not bugs in the implementation!

Finally... you might enjoy [1]. ;)

[1] https://www.reddit.com/r/zfs/comments/1826lgs/psa_its_not_bl...


i've filled up a zfs array to the point where i could not delete files.

the trick is to truncate a large enough file, or enough small files, to zero.

not sure if this is a universal shell trick, but it worked on those i tried: "> filename"


For reasons I am completely unwilling to research, just doing `> filename` has not worked for me in a while.

Since then I memorized this: `cat /dev/null >! filename`, and it has worked on systems with zsh and bash.


That seems to be zsh-specific syntax that is like ">" except that it overrides the CLOBBER setting[1].

However, it won't work in bash. It will create a file named "!" with the same contents as "filename". It is equivalent to "cat /dev/null filename > !". (Bash lets you put the redirection almost anywhere, including between one argument and another.)
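A quick way to see the difference for yourself (hedged sketch; "some.log" is just a placeholder file that already exists):

    bash -c 'cat /dev/null >! some.log; ls'                # bash: creates a file literally named "!" containing some.log's contents
    zsh -c 'setopt NO_CLOBBER; cat /dev/null >! some.log'  # zsh: ">!" overrides NO_CLOBBER and truncates some.log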

---

[1] See https://zsh.sourceforge.io/Doc/Release/Redirection.html


Yikes, then I remembered wrong about bash, thank you.

In that case I'll just always use `truncate -s0` then. Safest option to remember without having to carry around context about which shell is running the script, it seems.


"truncate -s0 filename"

I believe "> filename" only works correctly if you're root (at least in my experience, if I remember correctly).

EDIT: To remove <> from filename placeholder which might be confusing, and to put commands in quotes.


Oh yes, that one also worked everywhere I tried, thanks for reminding me.


Pleasure.

It saved me just yesterday when I needed to truncate hundreds of gigabytes of Docker logs on a system that had been having some issues for a while but I didn't want to recreate containers.

"truncate -s 0 /var/lib/docker/containers/**/*-json.log"

Will truncate all of the json logs for all of the containers on the host to 0 bytes.

Of course the system should have had logging configured better (rotation, limits, remote log) in the first place, but it isn't my system.

EDIT: Missing double-star.*
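For reference, a hedged sketch of the "configured better" part via Docker's daemon config (limits are examples, not from this system; existing containers need to be recreated for it to take effect):

    # cap the json-file log driver at 3 files of 10 MB per container
    cat >/etc/docker/daemon.json <<'EOF'
    {
      "log-driver": "json-file",
      "log-opts": { "max-size": "10m", "max-file": "3" }
    }
    EOF
    systemctl restart docker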


Simple to verify with strace -f bash -c "> file":

    openat(AT_FDCWD, "file", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
man 2 openat:

    O_TRUNC
        If the file already exists and is a regular file and the
        access mode allows writing (i.e., is O_RDWR or O_WRONLY) it
        will be truncated to length 0.
        ...


Sure, but I just get an interactive prompt when I type `> file` and I honestly don't care to troubleshoot. ¯\_(ツ)_/¯


Probably you are using zsh and need:

    MULTIOS=1 > file
- zsh isn't POSIX compatible by default


I see. But in this case it's best to just memorize `truncate -s0` which is shell-neutral.


Ok, we'll leave that a mystery then!


Depending on the environment you can also use the truncate command. This will work if the file is open as well.

https://man7.org/linux/man-pages/man1/truncate.1.html


It'd be better to do ": >filename"

: is a shell built-in for most shells that does nothing.



