In lieu of ZFS replication (send and receive), that is. I still use ZFS as a file system.
Also, I have a 1.5 TB directory containing a lot of redundant backup data. Restic archived it down to 400 GB; as a ZFS dataset, it would have taken ~4x the size.
Honestly, backups up to 100 TB are better done with tools such as Restic than with file-system stream backups: lower hardware requirements, rich repository-management support, cloud integration, portable repositories, better and more widely trusted encryption, etc. Go-based tools ship as static binaries with no dependencies, so you can recover your data in the future on any x86 platform.
Fully agree on using tools such as Restic! The `--exclude-caches` flag is really helpful in keeping backups small: it makes Restic skip directories containing a CACHEDIR.TAG file, which includes Rust compile target directories. Combined with a small exclude list for browser caches and other temporary storage, this makes backup deltas far smaller than in the average ZFS setup. (And no, creating a new ZFS filesystem for every cache directory and excluding each of them from snapshots is not really a solution.)
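For anyone who hasn't used it: a minimal sketch of how the CACHEDIR.TAG mechanism works. The tag file just needs the standard signature line from the CACHEDIR.TAG spec (Cargo writes it into `target/` automatically); the repository path and exclude patterns in the commented-out Restic invocation are placeholders for your own setup.

```shell
# Mark a directory as a cache so tools honoring CACHEDIR.TAG (e.g. Restic
# with --exclude-caches) skip it. The signature line below is the one
# required by the spec.
mkdir -p myproject/cache
printf 'Signature: 8a477f597d28d172789f06886806bc55\n' > myproject/cache/CACHEDIR.TAG

# Hypothetical backup run (repo path and exclude list are placeholders):
# restic -r /mnt/backup/repo backup ~/myproject \
#     --exclude-caches \
#     --exclude "$HOME/.cache"
```

With that tag in place, the directory is dropped from every future snapshot without maintaining it in an exclude file.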