
Pretty much by the time you get to 100% full on ZFS, the latency is going to be atrocious anyway, but from my understanding there are multiple remedies (from simplest to worst case) that ZFS permits if you do hit the out-of-space error:

1. Just remove some files - ZFS will attempt to do the right thing

2. Remove old snapshots

3. Mount the drive from another system (so nothing tries writing to it), then remove some files, reboot back to normal

4. Use `zfs send` to copy the data you want to keep to another, bigger drive temporarily, then either prune the data in place or, if you already filtered out old snapshots during the send, recreate the original pool and reload it with `zfs send` from the copy (sketched below).
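
A minimal sketch of that last option, assuming a scratch pool with enough room; the pool, dataset, and snapshot names here are only examples, and you'd want to verify the received copy before destroying anything:

    # snapshot everything so the whole pool can be replicated
    zfs snapshot -r tank@evacuate

    # copy it (recursively, with properties) to a larger scratch pool
    zfs send -R tank@evacuate | zfs receive -u scratch/tank

    # once verified: recreate the original pool and load the data back
    zpool destroy tank
    zpool create tank <new vdev layout>
    zfs send -R scratch/tank@evacuate | zfs receive -F tank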



Modern defrag seems very cumbersome xD


Defragmentation, and the ability to do it at all, are not free.

You can have cheap defrag but comparatively brittle filesystems by making things modifiable in place.

You can have a filesystem that has as its primary value "never lose your data", but in exchange defragmentation is expensive.


I don't buy this? What does defragmentation have to do with snapshotting? Defragmentation is just a rearrangement of the underlying blocks. Wouldn't snapshots just get moved around?


The problem is that you have to track down every pointer pointing to a specific block.

With snapshotting, especially with filesystems that can only write data through snapshots (like ZFS), blocks can be referred to by many pointers.

It's similar to evaluating liveness of an object in a GC, except you're now operating on a possibly gigantic heap with very... pointer-ful objects that you have to rewrite - which goes against the core principle of ZFS, which is data safety. You're essentially doing a huge history rewrite on something like a git repo with billions of small objects, and doing it safely means you have to rewrite every metadata block that in any way refers to a given data block - and rewrite every metadata block pointing to those metadata blocks.


But more pointers is just more cost, not outright inability to do it. The debate wasn't over whether defragmentation itself is costly. The question was whether merely making defragmentation possible would impose a cost on the rest of the system. So far you've only explained why defragmentation on a snapshotting volume would be expensive with typical schemes, which is entirely uncontroversial. But you explain neither why you believe defragmentation would be impossible (no "ability to do it") with your scheme, nor why you believe other schemes couldn't make it possible "for free".

In fact, the main difficulty with garbage collectors is maintaining real-time performance. Throw that constraint out, and the game changes entirely.


I never claimed it's impossible - I claimed it's expensive. Prohibitively expensive, as the team at Sun found out when they attempted it, while offline defrag becomes easy with a two-space approach, which is essentially "zfs send to a separate device".

You can attempt to add an extra indirection layer, but it does not really reduce fragmentation, it just lets you remap existing blocks to another location at the cost of an extra lookup. This is in fact implemented in ZFS as the solution for the erroneous addition of a vdev, allowing device removal, though due to the performance cost it's oriented mostly at "oops, I added the device wrongly, let me quickly revert".
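
For reference, that device-removal path looks roughly like this (pool and device names are only examples; after removal, the evacuated vdev lives on as an "indirect" mapping, which is the extra lookup mentioned above):

    # oops - added a plain disk as a new top-level vdev by mistake
    zpool add tank sdx

    # remove it again; ZFS copies its data to the remaining vdevs
    # and leaves an indirect mapping for the old block addresses
    zpool remove tank sdx

    # the indirect vdev shows up in the pool layout afterwards
    zpool status -v tank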


If by "not able to" you meant "prohibitively expensive" - well, I also don't see why it's prohibitively expensive even without indirection. Moving blocks would seem to be a matter of (a) copy the data, (b) back up the old pointers, (c) update the pointers in-place, (d) mark the block move as committed, (e) and delete the old data/backups. If you crash in the middle you have the backup metadata journaled there to restore from. No indirection. What am I missing? I feel like you might have unstated assumptions somewhere?


My bad - I'm a bit too into the topic and sometimes forget what other people might not know ^^;

You're missing the part where (c) is forbidden by the design of the filesystem, because ZFS is not just "Copy on Write" by default (like BTRFS, which has an in-place rewrite option, IIRC), nor an LVM/device-mapper snapshot, which similarly doesn't have strong invariants on CoW.

ZFS writes data to disk in two ways - a (logically) write-ahead log called the ZFS Intent Log (which handles synchronous writes and is only read back on pool import), and the transaction group sync (txgsync), where all newly written data is linked into a new metadata tree that shares structure with the previous TXG's metadata tree (so unchanged branches are shared), and the pointer to the head of the tree is committed into an on-disk circular buffer of at least 128 pointers.

Every snapshot in ZFS is essentially a pointer to such a metadata tree - all writes in ZFS are done by creating a new snapshot; the named snapshots are just rooted in different places in the filesystem. This means the pool can sometimes survive even a catastrophic software bug. For example, the master branch had, for a few commits, a bug that accidentally changed the on-disk layout of some structures - one person ran master, hit it, and ended up with a pool that could not be imported... but the design meant they could tell ZFS import to "rewind" to a TXG sync number from before the bug.
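
That rewind is exposed through zpool import's recovery options. A sketch, assuming the pool is named tank and the TXG number is a placeholder; -F rolls back a few transaction groups, -T rewinds to a specific one and is very much a last resort:

    # try a normal recovery import, discarding the last few txgs
    zpool import -F tank

    # or rewind to a specific txg from before the bad writes,
    # read-only first so nothing new gets committed on top
    zpool import -o readonly=on -T 1234567 tank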

Updating the blocks in place violates design invariants - once you violate them, the data safety guarantees are no longer guarantees. That also makes defragmentation at minimum an offline operation, and at that point the type of client that needs in-place defragmentation can reasonably do the two-space trick (if you're big enough that that's infeasible, you're probably big enough to throw in at least an extra JBOD and relieve the fragmentation pressure).

To make the later paragraphs understandable (beware, ZFS internals as I remember them):

ZFS is constructed of multiple layers[1] - from the bottom (somewhat simplified):

1. SPA (Storage Pool Allocator) - what implements "vdevs" - the only layer that actually deals with blocks. It implements access to block devices, mirroring, RAIDz, draid, etc. and exposes a single block-oriented interface upwards.

2. DMU (Data Management Unit) - an object-oriented storage system. It turns a bunch of blocks into an object-oriented PUT/GET/PATCH/DELETE-like setup, with 128-bit object IDs. It also handles the base metadata - the immutable/write-once trees for turning "here's a 1GB blob of data" into 512B to 1MB portions on disk. For any given metadata tree/snapshot there are no in-place changes - modifying an object "in place" means that the new txgsync has, for a given object ID, a new tree of blocks that shares as much structure with the previous one as possible.

3. DSL / ZIL / ZAP - provide basic structures on top of the DMU. The DSL is what gives you the "naming" ability for datasets and snapshots, the ZIL handles the write-ahead log for O_DSYNC/fsync, and the ZAP provides a key-value store inside DMU objects.

4. ZPL / ZVOL / Lustre / etc. - the parts that implement the user-visible filesystem. ZPL is the ZFS POSIX Layer, a POSIX-compatible filesystem implemented over the object storage. ZVOL does something similar but presents an emulated block device. Lustre-on-ZFS likewise talks directly to the ZFS object layer instead of implementing MDT/OST on top of POSIX files again.
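
One way to poke at that ZPL-on-DMU layering yourself, assuming zdb is available and using an example dataset name: for a ZPL filesystem the inode number reported by ls is the DMU object ID, and zdb can dump that object directly:

    # inode number of a file on a ZFS dataset
    ls -i /tank/data/file.bin
    #  -> 12345 /tank/data/file.bin

    # dump the backing DMU object (dnode, block pointers, indirection levels)
    zdb -dddd tank/data 12345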

You could, in theory, add an extra indirection layer just for defragmentation, but this in turn creates a problematic layering violation (something found at Sun when they tried to implement BPR, block pointer rewrite) - because suddenly the SPA layer (the layer that actually handles block-level addressing) needs to understand the DMU's internals (or a layer between the two needs bi-directional knowledge). This makes for possibly brittle code, so again - possible, but against the overarching goals of the project.

The "vdev removal indirection" works because it doesn't really care about location - it allocates space from other vdevs and just ensures that all SPA addresses that have ID of the removed vdev, point to data allocated on other vdevs. It doesn't need to know how the SPA addresses are used by DMU objects


I appreciate the long explanation of ZFS, but I don't feel most of it really matters for the discussion here:

> Updating the blocks in place violates design invariants - once you violate them, the data safety guarantees are no longer guarantees.

Again - you can copy blocks prior to deleting anything, and commit them atomically, without losing safety. The fact that you (or ZFS) don't wish to do that doesn't mean it's somehow impossible.

> the type of client that needs in-place defragmentation can reasonably do the two-space trick (if you're big enough, to make that infeasible, you're probably big enough to easily throw in an extra JBOD at least and relieve fragmentation pressure).

You're moving goalposts drastically here. It's quite a leap to go from "has a bit of free space on each drive" to "can throw in more disks at whim", and the discussion wasn't about "only for these types of clients".

And, in any case, this is all pretty irrelevant to whether ZFS could support defragmentation.

> this makes it into minimally offline operation

See, that's your underlying assumption that you never stated. You want defragmentation to happen fully online, while the volume is still in use. What you're really trying to argue is "fully online defragmentation is prohibitive for ZFS", but you instead made the wide-sweeping claim that "defragmentation is prohibitive for snapshotted filesystems in general".


You're hung on the word "impossible" which I never used.

I did say that there are trade offs and that some goals can make things like defragmentation expensive.

ZFS's main design goal was that nothing short of (extensive) physical damage should allow destruction of users' data. Everything else was secondary. As such, the project was never interested in supporting in-place updates.

You can design a system with other goals, or ones that are more flexible. But I'd argue that's why BTRFS got its undying reputation for data loss - they were more flexible, and that unfortunately also opened the way for more data-loss bugs.


> You're hung on the word "impossible" which I never used.

That's not true. That was only at the beginning - "impossible" is simply what I originally took (and would still take, but I digress) your initial comment of "ability to defragment is not free" to be saying. It's literally saying that if you don't pay a cost (presumably performance or reliability), then you become unable to defragment. That sounded like impossibility, hence the initial discussion.

Later you said you actually meant it'd be "prohibitively expensive". Which is fine, but then I argued against that too. So now I'm arguing against 2 things: impossibility and prohibitive-expensiveness, neither of which I'm hung up on.

> ZFS' main design was that it nothing short of (extensive) physical damage should allow destruction of users data. Everything else was secondary.

Tongue only halfway in cheek, but why do you keep referring to ZFS like it's GodFS? The discussion was about "filesystems" but you keep moving the goalposts to "ZFS". Somehow it appears you feel that if ZFS couldn't achieve something then nothing else possibly could?

Analogy: imagine if you'd claimed "button interfaces are prohibitively expensive for electric cars", I had objected to that assertion, and then you kept presenting "but Tesla switched to touchscreens because they turned out cheaper!" as evidence. That's how this conversation feels. Just because Tesla/ZFS has issues with something that doesn't mean it's somehow inherently prohibitive.

> As such, the project was not interested, ever, in supporting in-place updates.

Again: are we talking online-only, or are you allowing offline defrag? You keep avoiding making your assumptions explicit.

If you mean offline: it's completely irrelevant what the project is interested in doing. By analogy, Microsoft was not interested, ever, in allowing NTFS partitions to be moved or split or merged either, yet third-party vendors have supported those operations just fine. And on the same filesystem too, not merely a similar one!

If you mean online: you'd probably hit some intrinsic trade-off eventually, but I'm skeptical it's at this particular juncture. Just because ZFS may have made something infeasible with its current implementation, that doesn't mean another implementation couldn't have... done an even better job? E.g., even with the current on-disk structure of ZFS (let alone a better one), even if a defragmentation-supporting implementation might not achieve 100% throughput while a defragmentation is ongoing, surely it could at least get some throughput during a defrag so that it doesn't need to go entirely offline? That would be a strict improvement over the current situation.

> But I'd argue that's why BTRFS got undying reputation for data loss - they were more flexible, and that unfortunately also opened way for more data loss bugs.

Hang on... a bug in the implementation is a whole different beast. We were discussing design features. Implementation bugs are... not in that picture. I'm pretty sure most people reading your earlier comments would get the impression that by "brittleness" you were referring to accidents like I/O failures & user error, not bugs in the implementation!

Finally... you might enjoy [1]. ;)

[1] https://www.reddit.com/r/zfs/comments/1826lgs/psa_its_not_bl...



