OpenZFS launch (open-zfs.org)
293 points by mahrens on Sept 17, 2013 | hide | past | favorite | 163 comments


I suspect an unmentioned goal of this project is eventually to make the installation of OpenZFS on Linux (and other *ix operating systems) quick and simple, legally running around Oracle's license restrictions.[1]

Unsurprisingly, the list of supporting companies[2] does not include Oracle, which surely isn't happy about this project.

--

[1] The source code upon which OpenZFS is based was provided by Oracle (Sun) under the CDDL license, which prevents OpenZFS from being distributed in binary form as part of the Linux kernel.

[2] http://www.open-zfs.org/wiki/Companies


Installing ZFS on Linux is quite simple.

  sudo apt-get install linux-headers-`uname -r` linux-headers-generic build-essential
  sudo apt-add-repository ppa:zfs-native/stable
  sudo apt-get update
  sudo apt-get install ubuntu-zfs

The goal of OpenZFS is to foster cooperation between the various groups using ZFS.


That takes care of post install.

- How do I install onto ZFS?

- How do I maintain this? Since ZFS is outside the mainline kernel, how do I know whoever compiles the ZFS packages will keep track of (distribution)'s kernel package?


Good questions. Booting from Linux ZFS is a bit more of an advanced project. Here is the guide for Ubuntu: https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubu...

The ZFS packages are not fully pre-compiled. They use DKMS, so the kernel-specific parts are recompiled whenever the kernel changes. This greatly reduces the maintenance work, and it's a system that other out-of-tree kernel modules have used for a decade. It's not perfect, but it does largely reduce the distribution-specific maintenance burden.
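As a rough illustration (the module version and kernel release below are hypothetical), you can see what DKMS has built for each installed kernel, and rebuild by hand if needed:

```shell
# List the modules DKMS manages and which kernels they're built for
dkms status

# DKMS normally rebuilds automatically on kernel upgrade, but a module
# can also be built for a specific kernel by hand:
dkms install -m zfs -v 0.6.2 -k 3.11.0-12-generic
```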

Of course, when a new kernel version comes out, ZFS may have to make adjustments to support it and ZFS may lag a little bit. However, currently there are significant resources being dedicated to ZFS on Linux development and packaging and for the last 18 months they've kept up pretty well.

It isn't perfect. I wish that ZFS could be in the Linux source tree. I saw ZFS on Linux when it became available, but I wasn't ready to try it until last year. However, third-party file systems do have a long tradition, such as AFS, vxfs, and numerous SAN file systems. ZFS on Linux seems to be doing very well.


Adding a PPA introduces one more point of trust. That makes it less of an easy option.


With the current licence (CDDL) it is not going to be included in the repositories of main distros anyway.


It's actually being integrated into Debian (right now it's in the NEW queue, i.e. it's under final copyright review). See this bug report:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=686447


I don't think it has much to do with the license; CDDL is based on the Mozilla Public License, after all (https://en.wikipedia.org/wiki/Common_Development_and_Distrib...). The license being different from the kernel's prevents distribution of kernel images with built-in ZFS support, but this can be worked around by installing it later, just as is already done with proprietary GPU drivers.


What about RHEL or Fedora?



You can already install on ZFS in FreeBSD.


It's a great pity more ZFS advocates aren't taking this as an opportunity to get better acquainted with FreeBSD.

I've been running FreeBSD on my home server for about a decade now (even before ZFS) and have loved it. But I used to run Linux VMs on top of it, and it's only recently that I've decided to go fully FreeBSD on it, using jails instead of hardware virtualisation. I honestly can't understand why I waited so long to do so. It's proven to be a far more elegant solution for what I needed.

While I do still run Linux on my desktop and work with Solaris and Linux in my day job, FreeBSD seems a vastly overlooked alternative these days, which I think is a great pity. It's stable, proven, and dead easy to administer. But to each their own, I guess.


My story is somewhat similar to yours, and I also regret not giving FreeBSD a try sooner. I guess hardware has a lot to do with many people's reluctance in using FreeBSD, and in that regard I was lucky that my netbook was compatible right out of the box so I didn't experience any of the issues others had.


May I ask which netbook you have?


I have an Acer Aspire One ZG5: Intel Atom N270 at 1.6GHz, 1 GB RAM, 160 GB HDD.

I bought it about 5 years ago for a whopping $99, but it has an issue with shutting down about 30 to 50 seconds after POST.

I figure that I got my $99 worth out of it, and have acquired a new HP Envy 15. Unfortunately, it's equipped with a Broadcom 4313 wireless adaptor, so I haven't yet taken the time to get the WiFi card working in FreeBSD, but I intend to do so soon, time permitting.


I also agree. I run a pretty heterogeneous set of servers, mostly Linux, with a single FreeBSD machine using ZFS with about 100TB on it, and I really like it. There are a lot of conveniences missing from Linux, but overall FreeBSD seems better thought out and orchestrated. It also has some really cool technologies that receive little fanfare, while newly arrived Linux equivalents get hailed as the "next best thing™".


Given your sentiments about FreeBSD, I'd imagine you'd find an OpenIndiana based platform even more compelling.


While I haven't used OpenIndiana specifically, I have used OpenSolaris and a great number of its forks, and honestly I didn't like them much (preferring pure Solaris).

However, even that aside, I couldn't see myself preferring OpenIndiana; FreeBSD is a very different kind of 'UNIX'. While SunOS does historically have some roots in BSD, it's still very much a SysV-style UNIX; and as much as I use SysV-like systems daily for work, my true love is BSD.


ZFS is actually part of FreeBSD since FreeBSD 7, so you don't even need to install anything else!


I thought this too, but I think the real issue is that they want to extend and add new features to ZFS and the original structure of the project isn't appropriate. The SPL interface isn't ideal either.


Nothing stopping you from writing a script that automatically compiles and installs the module when you're installing your distro.


There are lots of things stopping me from doing that. Like ensuring I don't break everything I touch. Not everybody is a sysadmin.


I think the Debian install is almost as easy as that - download a .deb package and install it, and it pulls in and installs and compiles everything else.


Homebrew?


I built a 16TB raidz home office server this weekend using Ubuntu and ZFS on Linux[0]. It worked great out of the box. I was even able to import a pool created on another server without any problem. Of course, your mileage may vary.

I was using FreeNAS previously (mainly for the ZFS support, to keep my data safe without spending a bunch on RAID controllers) and kept getting bogged down by feeling the need to grok jails. I think jails are terrific in theory, but a pain to work with if you're not intimately familiar with them. Maybe it's just the way it works on FreeNAS, but newly created jails (by default on FreeNAS) were getting new virtual IP addresses, which really threw me for a loop. Add to that the frustration of trying to get all the permissions correct just to make a few different services work together, and it started to get really painful.

The drop-dead simplicity of setting up exactly what I had previously on a fresh Ubuntu box with native ZFS port really warmed my cockles.

[0] http://zfsonlinux.org/


Think of jails as VMs without the overhead of having the same OS in memory multiple times. Similarly, none of the guests can use the host's IP.

That said, many people work around that by simply binding the jail to an unused loopback address (127.0.0.0/8) and then using a firewall such as pf to redirect specific ports to the given jail, like here: http://blog.burghardt.pl/2009/01/multiple-freebsd-jails-shar...
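A minimal sketch of that setup (the interface name, jail address, and port below are invented for illustration): the jail's service binds to a loopback alias, and pf rewrites incoming traffic to it.

```shell
# /etc/pf.conf fragment -- hypothetical jail bound to loopback alias 127.0.1.1
ext_if = "em0"
jail_ip = "127.0.1.1"

# Redirect incoming HTTP on the external interface into the jail
rdr pass on $ext_if proto tcp from any to ($ext_if) port 80 -> $jail_ip port 80
```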


Ah! I never thought of using a loopback address. Next time around I'll give it a shot.


...warmed my cockles.

Pray tell, kind sir, from which corner of our good planet does the phrase "warmed my cockles" hail?



What's the advantage of using ZFS RAIDZ over mdadm? I thought that mdadm was more flexible in growing your RAID array.


I have been doing lots of research on this recently and here is the main thing that makes ZFS win every time:

When you have a RAID of any kind you need to periodically scrub it, meaning compare data on each drive byte by byte to all other drives (let's assume we are talking just about mirroring). So if you have two drives in an mdadm array and the scrubbing process finds that a block differs from drive A to drive B, and neither drive reports an error, then the scrubber simply takes the block from the highest numbered drive and makes that the correct data, copying it to the other drive. What's worse is that even if you use 3 or more drives, Linux software RAID does the same thing, despite having more info available. On the other hand, ZFS does the scrubbing by checksums, so it knows which drive has the correct copy of the block.
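A toy sketch in Python (my own illustration, not ZFS code) of why the checksum matters: with two mirrored copies and no checksum, "repair" is a guess, while a checksum stored separately from the data identifies the good copy.

```python
# Toy illustration (not ZFS code): repairing a silently corrupted mirror.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

good = b"important block"
bad = b"important blocc"  # silent corruption: no drive error reported

def mirror_repair(copies):
    # Checksum-less mirror scrub: just pick one copy (here, the first
    # drive) and overwrite the others with it -- possibly destroying
    # the only good copy.
    return [copies[0]] * len(copies)

def checksummed_repair(copies, expected):
    # ZFS-style scrub: the independently stored checksum identifies
    # the good copy, which is rewritten over the corrupt one.
    chosen = next(c for c in copies if sha256(c) == expected)
    return [chosen] * len(copies)

drives = [bad, good]  # the corruption happened to land on drive 0
assert mirror_repair(drives) == [bad, bad]                       # good data lost
assert checksummed_repair(drives, sha256(good)) == [good, good]  # self-healed
```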

How often does this happen? According to what I have been reading, without ECC RAM and without ZFS, your machines get roughly one corrupt bit per day. In other words, that could be a few corrupt files per week.

My conclusion is that as I am building my NAS, I want ECC RAM and ZFS for things I cannot easily replicate.


Just to make it clear: raid-5/6 mdadm arrays do the right thing when repairing/checking/scrubbing data. They write the correct data if one of the drives has a corrupted block.

https://raid.wiki.kernel.org/index.php/RAID_Administration

  How often does this happen? According to what I have been reading, without ECC RAM and without ZFS, your machines get roughly one corrupt bit per day. In other words, that could be a few corrupt files per week.
This is complete nonsense without more data to back it up.


> Just to make it clear. raid-5/6 mdadm arrays does the right thing when repairing/checking/scrubbing data.

This is inherent to RAID-5/6. Doesn't really have anything to do with mdadm other than mdadm implements RAID-5/6. And now you probably have a write hole.


Here is one of the sources: http://linas.org/linux/raid.html


Just to make it clear: on raid 5/6 parity isn't checked on reads, so to get your "right thing when repairing/checking/scrubbing data" you'd have to do a full parity rebuild. This isn't anything like what ZFS does.


Thank you, this was very helpful!


The documentation for raidZ is wonderful, and the commands are logical.

mdadm is a pain, and doesn't provide block-level checksums, so you can easily get silent corruption.


It's really the integrity checking. As you say, mdadm is much better suited if you need to change the geometry of your array, add disks, and so on. It's much handier for a smaller business that can't afford a second set of disks to build a second array on when they want to reshape.

We ran btrfs on top of mdadm, getting both integrity checking and flexibility (although the integrity checking just tells you that something is wrong).


There is a great advantage to combining the filesystem with the disk mapper. You don't have to use different commands to add and grow disks and the partitions upon those disks. Your filesystem knows about what it's living on and stores data accordingly. ZFS has more advanced file system properties, like sending snapshots, even of block devices. BTRFS is still working on feature parity with this. ZFS is much more stable than other FS with similar features.

The big disadvantage is the memory and CPU requirement. If your server has plenty of memory and CPU, I'd use ZFS. If you're running on an ARM NAS with 128MB RAM, I'd use something less fancy.


I think the primary advantage is that ZFS collapses all the standard filesystem abstractions. With mdadm or hardware raid you have a raid controller (which could be mdadm), volume manager (i.e. lvm), and filesystem (ext4, xfs, etc). ZFS combines all of that into one. It's really a different philosophy, but means that things like creating a new filesystem is almost instant (and CoW, snapshots, replication are all easy - although perhaps that's possible with the traditional abstractions as well).
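For illustration (pool, device, and volume names below are made up), the traditional layered stack versus ZFS's collapsed one might look roughly like this:

```shell
# Traditional Linux stack: three layers, three toolsets
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
pvcreate /dev/md0 && vgcreate vg0 /dev/md0
lvcreate -L 100G -n data vg0
mkfs.ext4 /dev/vg0/data

# ZFS: one command creates the redundant pool...
zpool create tank mirror /dev/sda /dev/sdb
# ...and new filesystems within it are nearly instant, with no up-front sizing
zfs create tank/data
```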


I think the RAIDZ is also less susceptible to data loss due to the RAID layer and the filesystem being tightly integrated.


As a side note, I have wanted to do a project like this for a while. What hardware did you end up using?


I do the same and use an HP ProLiant microserver (it was dirt cheap). ZFSOnLinux just worked out of the box and has kept working ever since.

Word to the wise: Read about the block size of your disks, mine are newer and needed a block size different than the default but I didn't know about it, and now they are slower than they could be. I don't remember details, but you will definitely find it in a cursory search.


Newer disks use 4K instead of 512 byte sectors like older drives, so when you partition your disks you need to take care to align them accordingly.


Exactly. ZFS has a parameter for this, which you need to set manually.
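That parameter is ashift, set when the pool (strictly, the vdev) is created; ashift=12 means 2^12 = 4096-byte alignment for 4K-sector drives. A hedged example (pool and device names invented):

```shell
# Force 4K alignment at pool creation; ashift cannot be changed
# for a vdev after the fact
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
```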


Same as StavrosK -- HP Micro Server. The 16TB box is an N40L. I bought it awhile ago, but haven't needed extra storage until recently.

I also have an N36L running FreeNAS/ZFS on 4x2TB disks which I've used for about 1.5 years. It finally filled up (SO has data intensive profession) so I was forced to pony up. It's still being used every day and has no signs of failing.

They're both great home servers. I sometimes wish they had more CPU, but for the money, HP is basically giving them away.


As a side-side note, the HP Microservers (N54L[*]) are currently under a cashback promotion, AFAIK in the UK ('til end of Sep) and Italy ('til end of Oct). In Italy it's typically €199 with €40 cashback (so €159); in the UK I think it's £190 with £90 cashback, so around £100 final. I don't know if there's something similar in other countries.

[*] There are 3 HP Microserver versions AFAIK: the old N40L, the N54L, and the just-released Gen8, which is way more expensive ($500 vs $200).


As another point of reference, I have a ProLiant Microserver running FreeBSD with 4x3TB drives in RAID-Z. I installed a custom BIOS that removes the limitations on the optical SATA port and run a small single drive for the OS off that.

It's a dream to use and gives me 8TB of usable storage. I can easily hit saturation of the NIC and it's nice and small and quiet, fitting in the bottom corner of my bookshelf.


What about patent encumbrance? Has the squabble between Sun(now Oracle) and NetApp been completely resolved? According to this[1], it was a quiet draw-down, followed by NetApp suing some hardware manufacturers selling ZFS.

Is the patent issue laid to rest?

[1] http://en.swpat.org/wiki/NetApp's_filesystem_patents


CDDL has patent shielding. Oracle is on the hook for those patents, not users of Sun's (now Oracle's) CDDL-licensed code.


I have trouble understanding the position of OpenZFS.

My understanding is that Oracle ZFS, which can not be integrated in mainline Linux, can still be distributed as a separate project (zfsonlinux.org).

On the other hand, Linux kernel developers have started btrfs, which is inspired by, but incompatible with, ZFS.

So, what is this project? I can only imagine that this is either a white room reimplementation of ZFS, or a fork before the license changed (but I think it was CDDL from the start).

A more interesting question would be: who should use or develop this? IMHO this will never be on par with "the" ZFS, so btrfs is where everyone's energy should go.

(Also, by reimplementing a ZFS product you're supporting them in a way.)


You should read the link, it answers many of your questions here. For example, it's not a reimplementation of ZFS, it's an organization to coordinate between the many different groups that are actively using ZFS in their products.

Parts of btrfs may be inspired by ZFS, but btrfs doesn't even aspire to some of ZFS's niceties like raidz3. And if you're ever looking at using 4TB disks, a third parity drive should be a requirement.

I'm a fan of btrfs, but I'm a much much bigger fan of ZFS. ZFS will almost certainly be at the base of our next storage buildout, and btrfs probably will not. The only thing keeping btrfs relevant is that the GPL and CDDL interact poorly. But when there's a great, well tested, and higher tech code base, why should people abandon it? ZFS is used by many many people in production, btrfs by very few, and even if btrfs hits all its development roadmap it won't be the equal of ZFS.


I read the link, and I disagree that it answers these questions. It is presented as "the truly open source successor to the ZFS project", which is why I understood it to be a fork.

I agree that btrfs is not ready for production (and I'm not ready to hand my precious bytes to it yet), and will probably never be as feature rich as ZFS. But the licensing issue will always exist, and Linux needs a modern file system -- btrfs.

That said, a lot of people/businesses use ZFS on Linux in production, so it's nice that there is a central place where they can find documentation about it.


My apologies, on reading again, my first sentence comes across far snarkier than I meant it to be! I thought that the announcement was clear, but reading again, I can see some ambiguities.

I do agree that Linux probably could use something better than ext4. But Linux also needs something like ZFS.

If Linux's license makes it too difficult to run ZFS, then I can run FreeBSD, Illumos, OpenIndiana, or whatever other open source OS I want in order to get ZFS. But I can't replace ZFS with btrfs, and it doesn't look like btrfs wants to be able to replace ZFS.


Well, it's like a fork, but the other way around. After Oracle bought Sun, the other projects that had adopted ZFS continued fixing it and adding new features.

So FreeBSD has its version, Illumos (a fork of Solaris) has its own, Delphix its own, and so on. Those projects were using patches from each other, but managing all of that became problematic.

So they basically designated one central place for ZFS development, which all of the projects will use.

So now, instead of many ZFS forks, there are just two:

- Oracle's ZFS, which is now closed source

- OpenZFS, which will now be the official open source ZFS that all open source systems use

Unfortunately it will still be CDDL, since no one in the project has the power to change that. Relicensing would require Oracle and all contributors to agree to the change.


And to make things even more interesting/confusing, Oracle is one of the forces behind btrfs.


That is only partially true. btrfs was founded by Chris Mason before he joined Oracle. They simply let him continue hacking on it. He left Oracle in June of last year and works at Fusion IO. I'm sure Oracle has someone hacking on btrfs, but no one there was seriously pushing it (that I'm aware of) other than Chris Mason.


Do you have any evidence of this? I've tried to corroborate it and from what I can tell Chris Mason himself is quoted as saying he started it shortly after joining Oracle. Additionally Oracle seems to be pretty involved with btrfs and has devoted a lot of resources.


Yes, this is in no way encouraging, but at least it's GPL, and as long as they pay the developer I won't mind. If they stop its development, I'm sure Red Hat or IBM will pick it up.


OpenZFS is a fork from before Oracle took ZFS closed again. It still has the CDDL license. OpenZFS is directly used by the people who maintain ZFS on various platforms (such as ZFS on Linux).

The end ZFS implementation can be used many places and ways. For instance, my employer believes that ZFS on Linux is safer and more reliable than btrfs is. Likewise, it is used on FreeBSD and various illumos (open source fork of Solaris) systems as the default file system.

While Oracle has presumably added new features to their ZFS, numerous companies have been working on OpenZFS for some time now and adding their own sets of new features that Oracle doesn't have, such as LZ4 compression.


Basically, this is a ZFS fork, forced by the stupidity of Oracle.

The whole issue is that Oracle stopped publishing the source code of new ZFS versions some years ago: there are new ZFS "versions" from Oracle (pool versions 29 through 34, which include some new backward-incompatible features), but no new open source code has been released. The open source ZFS code has been almost stagnant/in maintenance mode all this time.

OpenZFS is an attempt to keep open source ZFS alive. Since Oracle doesn't work with open source anymore, the people who use ZFS (the BSDs, Nexenta, etc.) have to do their own thing and evolve the filesystem in their own way.


The opensource ZFS code has been almost stagnant/in mainteinance mode all this time.

We must have been looking at different ZFS projects. ZFS development was far from stagnant. :)


My understanding is that btrfs is limited to only the Linux platform today, whilst ZFS is not.

That alone seems, to me, sufficient motivation for this project.


And aren't there a bunch of newer ZFS features that Oracle has implemented that are not OSS? So, those are known only in documentation?


ZFS has the notion of version numbers - for both the zpool and filesystem. The last FOSS releases are zpool 28 and zfs 5. There have been subsequent releases of ZFS by Oracle, but versions 28 and 5 are the latest used by all the open source implementations.

What the FOSS community has done is to add "feature flags" to ZFS instead of constantly bumping the version number. So encryption is a feature on top of zfs, but the encryption introduced in FreeNAS for ZFS isn't the same as the encryption in Oracle's zpool v30.
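In practice (pool name invented), the difference shows up in the tools: a feature-flags pool reports individual feature states rather than bumping a single version number.

```shell
# Lists the legacy versions and the feature flags this implementation supports
zpool upgrade -v

# On a feature-flags pool, individual features are disabled/enabled/active:
zpool get feature@lz4_compress tank
```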


I'm glad to see open-zfs.org for exactly the reasons you mention.

I've been running MacZFS for years, but it's currently way behind, at zpool version 8, zfs version 2. At one point I put a FreeBSD machine in the basement, with plans to stream snapshot diffs for backup, but the version mismatches prevented me from doing this in a smart (zfs-based, rather than rsync-based) way.

Looks like this central coordination point ought to help the porting of features/versions from the Linux branch to the Mac branch.


Have you looked at Zevo? It's at Zpool 28 on OS X.

Regarding OpenZFS, I thought it was quite surprising that the FreeBSD version wasn't used as a basis for a port, I imagined they had more in common.


Actually, Oracle was the company that started development on btrfs, and still does contribute, so using btrfs would also be supporting them in a way. I'd argue that developing for zfs is the way forward because the underlying license is less restrictive than the GPL, and can be used in more Free and Open operating systems, making it easier to use the best kernel for the task at hand.


It will easily take another 10 years before btrfs is where ZFS is now, reliability- and feature-wise, and it might not have all the features even then.


We've had a very positive experience with btrfs, using it on a redundant system for a few years. It's a lot more flexible than ZFS in some respects. For instance, all snapshots are writable and can themselves be snapshotted. You can build up a graph of snapshots, which is not possible in ZFS, which enforces a strict hierarchy. We ran btrfs on top of mdadm to get RAID and integrity checking.

btrfs can do things like copy particular files using copy on write, which is really cool. btrfs also supports offline deduplication, which isn't supported in zfs. This is very useful if you want to do the deduplication when the system is not otherwise being used and avoids the overheads of keeping hashes in memory all the time.

I think that 10 years behind is exaggerating where it is at. For instance, people using zfs on linux often have problems with running out of memory and so on, even on systems with very large memory.


Modifiable snapshots are called clones in ZFS. Everything you said about snapshots is possible with ZFS with clones+snapshots. Clones are just as instant as snapshots.
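Roughly (dataset names invented), the ZFS equivalent of a writable snapshot looks like this:

```shell
# Take a read-only snapshot, then make a writable clone of it
zfs snapshot tank/data@monday
zfs clone tank/data@monday tank/data-experiment

# promote swaps the parent/child relationship, making the clone
# the origin so the old parent can eventually be destroyed
zfs promote tank/data-experiment
```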

Running ZFS on Linux is not a good idea. That's why I'm currently using OpenSolaris, and maybe switching to FreeBSD in the future, for file and db servers.

The problem with the btrfs time estimate is that it takes a long time for a filesystem to become reliable enough that you can put important data on it. ZFS has crossed that threshold; it will take years and years for btrfs.


ZFS doesn't support an arbitrary graph of snapshots, even with cloning. There is a "zfs promote" command that can be used, but can only be used in certain circumstances. For example, I wanted to take the latest backup of a system and rsync older and older backups onto that backup, making a snapshot each time. I then wanted the snapshot of the newest data to be the "HEAD". I also wanted to gradually delete the older data. This setup was not possible with ZFS, because the child-parent relationships were in the wrong order. With btrfs it was simple.


And it's not like some of the groups using ZFS today would help btrfs along, because of its license. Kind of a shame we couldn't all agree on something with a neutral license.


GPL is hardly a neutral license. If anything, it's as extreme copyleft as there is. CDDL is actually quite neutral, and the compatibility issues between the two have more to do with the GPL, which is incompatible with pretty much all licenses other than BSD/Apache/MIT/X11-style minimal licenses. Hence ZFS can be safely adopted by FreeBSD and Mac OS X.


I guess I should have written that better: I believe btrfs is NOT under a neutral license, and that's the reason the ZFS crowd won't migrate or help.


You know why this is huge? It's the first piece of kernel code that can be shared by different open-source operating systems in their kernel codebases!

I hope more technologies adopt this model. While a democratic open-source ecosystem is the rule in userland these days, in kernel land it is not, and that may force us to use one OS instead of another because of some feature only that OS has, even if we'd rather install the other one.

For instance, I love Linux, but I also love the BSDs and want them to grow as much as Linux did. If the good things created in one OS could also be used in another, via a proper port to that kernel, we might not be pushed to accept one OS in favor of another and be stuck with it!

I hope this movement makes its way to other kernel-land technologies! Great news!


Not quite the first, but likely to end up one of the most widespread. The DRM graphics drivers are actually older than ZFS and have been MIT-licensed from the beginning (they started from older MIT-licensed X11 projects). There are also possibly others from the BSD projects (network drivers and such) that are even older than those.


Yeah, but in the cases you cite, the interested party takes the source and implements it in its own codebase itself,

e.g. Microsoft using the BSD network layer on Windows, and FreeBSD porting Linux DRM to its own kernel.

This is different: it's organized as a project that targets several OSes, with different implementations for each kernel, but sharing only one core codebase. That tends to be much more stable and cheaper for everybody, less bug-prone, etc.

I haven't heard of anything like it.


ZFS is really a good thing, even if you never use it.

Why?

It raised the bar on filesystems and other filesystems have innovated in response.


I would tentatively disagree with "raising the bar" and describe it using my favorite MySQL/Postgres analogy: they have fundamentally different philosophies but do about the same thing.

For example, if you want to do something software-RAID-ish, ZFS has the philosophy that it should be done at the filesystem layer, not as a virtual device like every other Linux filesystem, ever. The ability to do RAID is not a new feature, but embedding RAID in the filesystem layer itself is. Linux-style virtual RAID devices don't care if you build a FAT32 on top of /dev/md0.

There are other examples of the same philosophy in ZFS. For example, everywhere else on Linux, if you want some manner of "volume manager" you simply use LVM. ZFS has its own interesting little volume manager, which relates to snapshots.

It's exactly the same with encryption. Every other implementation on Linux uses a loop device and your choice of algo. ZFS shoves all that inside the filesystem.

Another philosophical decision: every other Linux filesystem doesn't scrub but only fscks metadata, so logically ZFS implements the exact opposite.

Although ZFS supporters are technically telling the truth when they run around saying that only ZFS can provide software RAID, or that only ZFS has a volume manager while ext2/3/4 does not, it's not relevant. I've had LVM and software RAID and all that for many years on existing Linux stuff.

One of the few true features ZFS provides is allowing ridiculously big filesystems. Which is cool.

It is mostly a philosophical difference between modularity and monolithic design, with pretty much everything else being modular, and ZFS being extremely monolithic.

In that way, I don't think ZFS has prodded any innovation at all in other filesystems, other than maybe btrfs, which I haven't been following because my data is too valuable to experiment on and filesystems aren't my thing. I don't see the iso9660 FS driver adding native volume management, snapshotting, software RAID, and encryption any time soon.


> For example, if you want to do something software raid-ish, ZFS has the philosophy that should be done at the filesystem layer, not as a virtual device like every other linux filesystem, ever.

Not exactly true. RAID and mirroring logically sit at the zpool layer, and therefore anything on top of a given zpool has the zpool's RAID/mirror characteristics. This may be a filesystem, but it could also mean a zvol too. A zvol would be analogous to your /dev/md0 block device in that you could put a FAT32 filesystem on top of a zvol and still benefit from the underlying redundancy, parity, and checksumming features of ZFS.

Addendum: Strictly speaking, redundancy (RAID or mirror) configuration is on a vdev, and a zpool is comprised of one or more vdevs over which data is striped.
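A hedged sketch of the zvol point (names and sizes invented): a zvol is a block device carved out of the pool, so a foreign filesystem on top of it still inherits ZFS's redundancy and checksumming underneath.

```shell
# Create a 10 GB block device backed by the (redundant) pool
zfs create -V 10G tank/vol0

# The zvol appears as a device node and can host any filesystem,
# e.g. FAT32 as in the comment above
mkfs.vfat /dev/zvol/tank/vol0
```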


ZFS shoves all that inside the filesystem. [...] I've had LVM and software RAID and all that for many years on existing linux stuff.

I suggest you study the design of ZFS and also NetApp's WAFL to understand why they are combining what have traditionally been separate layers into one larger system.

The short version is that this cross-cutting of layers enables substantial optimizations and new features that aren't possible when everything is kept to strict interfaces.


But your conclusion is basically what I wrote, that one design is monolithic and one design is modular.

Yes in theory you could probably come up with a weird pathological scenario where a monolithic design is slower and a modular design is faster. But that usually doesn't happen.

Usually, turning a modular design into a monolithic design for a tiny performance gain turns into an epic disaster/mistake. Maybe the whole ZFS thing will be an exception. Probably not.


Yes in theory you could probably come up with a weird pathological scenario where a monolithic design is slower and a modular design is faster. But that usually doesn't happen. [...] Usually, turning a modular design into a monolithic design for a tiny performance gain turns into an epic disaster/mistake.

Well, think about this. Suppose you're running RAID-1 with two drives, and you've got some filesystem (maybe ext4, but that doesn't matter) running on top of that. You create one huge file, and then a little while later you delete it. And right after that, one of the disks dies, and you replace it.

In this case, your RAID layer doesn't know that most of the data written to the original drive is junk, and that the only really important bits are some inodes and directory entries consuming a few MB near the end of the disk. It has to re-mirror the entire drive from the original to the replacement before they are in-sync again and you are fully protected. Even with modern drives, that leaves a large window of time that you're not protected.

If, on the other hand, your RAID layer has a thicker interface to the filesystem than just a dumb block store, it can just mirror the little bit of metadata, and within seconds you're in sync again and fully protected.

That's just one example. There are many more. Go read the stories about people complaining about RAID-5 and RAID-6 performance.
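You can see the thin-interface cost directly in the disk-replacement commands (pool and device names hypothetical): md must copy the whole device, while ZFS resilvers only allocated blocks.

```shell
# md: re-mirrors every sector of the device, used or not
mdadm /dev/md0 --add /dev/sdd

# ZFS: walks the block tree and copies only live data
zpool replace tank /dev/sdb /dev/sdd
zpool status tank   # reports how much was resilvered, and how quickly
```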


I don't know much about RAID and other systems, but does anything other than ZFS read data from disk, determine that it was corrupt, go to another disk and get the same data and verify that the hash is good and then automatically fix the corrupt data? That seems like a killer feature to me.
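That's exactly what ZFS does on every read, and you can force a full pass over the pool with a scrub (pool name made up):

```shell
# Verify every block against its checksum; repair from the mirror/parity copy
zpool scrub tank

# The CKSUM column counts corruptions that were detected (and silently fixed)
zpool status -v tank
```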


Hmmm. I run NTFS and truecrypt on top of raidz zpool allocations (zvols, virtual block devices) all the time, block storage doled out over iSCSI from my ageing Nexenta box.

I do think you're too quickly dismissing the advantages of the fs being aware of the underlying block storage provider, to the point of being able to grow elastically from common storage rather than preallocate at partition time.

I'd much rather the whole md layer be replaced by zpool even if zfs itself is not used.


Years later, I'm still waiting for the supposedly better algorithmic underpinnings of btrfs to pay off. I also understand the conceptual benefits of technology layering as done in Linux (RAID and/or LVM, FS), but ZFS's vertical integration and monolithic set of tools are SO nice and consistent.


Yup, what happened with butter fs?? It was supposed to be better than zfs, wasn't it?


It exists and it's pretty good so far, but it's still far from the functionality and polish of ZFS.


How does this and ZFS on Linux compare? How are they related? As a casual bystander, this is all very confusing.


ZFS on Linux is OpenZFS ported to Linux. As various OpenZFS users (FreeBSD, Nexenta, etc) add features to OpenZFS the code will be merged upstream then adopted downstream by the other OpenZFS users (including ZFS on Linux).



Still CDDL, so still cannot be included in the main Linux kernel tree?


The legal issue is a matter of some debate (see below). In practice, I don't think the Benevolent Dictator wants ZFS in the main Linux kernel tree at the moment.

http://zfsonlinux.org/faq.html#WhatAboutTheLicensingIssue


There's no debate. The CDDL has never been compatible with the GPL... and CDDL'd code will never be a part of the kernel.

http://www.gnu.org/licenses/license-list.html#CDDL

http://www.groklaw.net/articlebasic.php?story=20041205023636...

http://www.groklaw.net/article.php?story=20050205022937327

In this link Linus talks about loadable module licensing.. gives an example of AFS, which he says he did not think counted as a derived work (and therefore did not need to be licensed under the GPL).. very similar to the ZFS case.

https://lkml.org/lkml/2003/12/3/228


Here's a much better link.. someone collected various emails from Linus on loadable modules:

http://linuxmafia.com/faq/Kernel/proprietary-kernel-modules....


You don't see Linus's viewpoint as contrary to the one on gnu.org? It seems like Linus is saying that the GPL allows AFS to be used with Linux (because it is not a "derived work"). How is ZFS different from AFS in this respect?


No, I don't. The gnu link says CDDL is not compatible with the GPL. What Linus is saying is, if a kernel loadable module relies on kernel internals, then it may count as a derived work, and must be licensed under a GPL-compatible license. If it does not -- for example, it is a filesystem that was ported to Linux (like AFS, and zfs fits here too IMO), then it may not be a derived work, and can be licensed under a non-gpl compatible license (like the CDDL).

Obviously, including the AFS or zfs into the source WOULD DEFINITELY create a derived work, and would require AFS/ZFS to be licensed under the GPL.

So Linus is only explaining how a kernel loadable module could be licensed under a GPL-incompatible license.


Well written. I will add that it's pretty clear cut that ZFS on Linux would be a derived work of the kernel, because it contains changes specific to Linux internals to make it work. That's why zfsonlinux only distributes module sources. Since it never distributes a binary, it can't run afoul of licensing.


I don't understand how putting the code into the same repository/makefile changes whether it's derivative in terms of copyright.


It's really more than simply adding it to a repository. I can store two text files that have nothing to do with each other on my HD or in a repo or wherever, and I would not be creating a derived work.

But if I have file A (kernel source) and file B (zfs code), and I compile A+B into a binary (the kernel image), then I have a single work that has been derived from A and B.

When it's suggested that ZFS be added to the kernel repo, what's really being said is that a single work (kernel+zfs) should be created.

In contrast, ZFS is currently a kernel loadable module. We have the kernel binary, and the module binary.. two separate works. (What Linus was clarifying was how integrated the module could be with the kernel and it still be considered a separate work.)


So what if it was inside the official kernel repo but defaulted to building as a module, and anyone distributing binaries left it on the default? Would that actually satisfy the licenses or are there clauses that would get in the way?


Good question. I think this is getting too specific for me to comment on. IANAL.


From my mostly second-hand knowledge, in practice even lawyers specializing in IP wouldn't have a solid answer for how to make this distinction, without looking at a specific case in detail. If it came up in a trial, the two sides would make a version of the arguments presented here: one would emphasize that the sources have now been "added to the kernel tree", a unified project managed with close integration etc. etc., while the other side would argue they were merely placed alongside the kernel sources in a version control system, like collecting short stories in an anthology.


Let me ask a slightly different question then. Does the GPLv2 ever try to control anything that is not derived from the GPL source code? Some of the FSF's saber-rattling seems to imply that either the answer is yes, or they're being misleading, or they have a completely ridiculous definition of derivation.


A derivative work is one that extends upon an original work. That's a simple definition of a derivative work, but it doesn't include any clear examples.

The FSF gives the example that linking creates a derivative work, and incorporates that line of thinking into the LGPL. The reasoning is that a linked work's existence is based upon the original work, and it cannot exist without it. As such, linking is an easy example where the line into derivative work has been crossed.

In the end, it will be up to the courts to decide what is or isn't a derivative work in software. The statutory definition is incomplete and the concept of derivative work is thus interpreted with reference to explanatory case law. Each time a music company wins a lawsuit against remixes, derivative work extends its grasp. Each time a game like WoW wins a lawsuit against bot software, one more step is taken.

In light of the precedential cases, I consider the FSF's example of linking to be a quite conservative definition of derivation. It might not be true every time and for every possible use of linking, but it should be true enough in the general case. Is there a strong argument against that interpretation?


Yes, if you link to a GPL'd library your program must also be GPL'd. That is why the LGPL was created.

Only example that immediately jumps to mind.

Edit: rephrased for clarity


Don't they claim that the linking rule works via derivation? As far as I understand it, the FSF would tell you that anything linked is always derivative. But that doesn't mean it's true. If you could prove that a particular instance of linking to a library was not derivative, would they still claim your program had to be GPL?

As I understand it, the LGPL exists to 1. provide legal certainty and 2. allow some amount of external derivation if necessary.

And it's easy to create an artificial dynamic-linking case where there is provably no derivation, using multiple libraries with the same API.


Don't they claim that the linking rule works via derivation?

Yes.. it was the closest thing I could think of where the two pieces of software are fairly separated. I mean you could have a huge proprietary program, and a developer calls gsl_pow_int() from the GNU scientific library, and the entire program must be licensed under the GPL.

I think that's about as close as you're going to get.

If you're looking for a case where the FSF said a piece of software had to be licensed under the GPL, even though it was NOT a derivative work, I don't think you'll find it. The reason it must be a derivative is copyright law.. the GPL can't unilaterally change that.


The GPL can't extend copyright law, but it can refuse to let you distribute.

It's possible to make a license that would say "can't be distributed with other software that does X".

But good, I'm glad there's nothing like that in the licenses here that I missed. Just the normal derivation-based questions.


Yes, you are right.. I didn't think of that case.. They cover this question in the FSF GPL faq: http://www.gnu.org/licenses/gpl-faq.html#MereAggregation

And they do have a requirement if you do so: The only condition is that you cannot release the aggregate under a license that prohibits users from exercising rights that each program's individual license would grant them.


If you take a Makefile which, say, Linus had written and tack on some of your own rules, you have almost certainly (unless the original Makefile was something trivial like ::) created a derived work.


If you distribute a single source tarball containing code files with different licenses, all of them apply. Each license may impose different limits on what you can do with its files, and the most restrictive license's terms effectively govern the distribution as a whole.

Besides the real legal issues, this also causes unnecessary confusion for users and those creating packages for binary distribution. Therefore, using multiple incompatible licenses in a single source tree is usually avoided.


>If you distribute a single source tarball with code files with different licenses, all of them apply.

Yes, the tarball distribution must respect all the licenses of files inside it, but once the tarball is unpacked it goes back to individual licenses per file. I understand how it's a big mess when it gets to binaries, but do any licenses actually place restrictions on what they can be tarballed with?


Let's forget the licensing entirely. ZFS was built on top of the Solaris VFS layer (virtual file system), which is entirely different from the Linux VFS. Taking the code in the Solaris kernel that provides ZFS and porting it to the Linux VFS is a massive undertaking that would require rewriting most of the "stable" guts of ZFS.

Honestly other than the fuse stuff, I doubt ZFS will ever be in Linux as it would require a rewrite of such a large portion of code, it would be impractical.


> Taking the code in the Solaris kernel that provides ZFS and porting it to the Linux VFS is a massive undertaking that would require rewriting most of the "stable" guts of ZFS.

And it's been done and working quite well considering. There still are improvements to be made, but progress is happening.

See slide 9 of this in particular: http://www.slideshare.net/MatthewAhrens/open-zfs-linuxcon


I don't mean adding solaris / illumos interfaces (see slide 8 on your link) into the Linux kernel. That will simply never be accepted upstream. I mean properly porting it from the Solaris to the Linux VFS without a shim ie a native port.

That still has not been done. It is a ridiculous amount of work.


Wait, if the current Linux ZFS module doesn't interface with the Linux VFS, then how does the filesystem even work? Did they write a Solaris VFS -> Linux VFS bridge?


Yes, as evidenced on slide 8 of that link.

"""
Solaris Porting Layer - adds stable Solaris/Illumos interfaces:
* Taskqs, lists, condition variables, rwlocks, memory allocators, etc.
* Layers on top of Linux equivalents if available
* Solaris-specific interfaces were implemented from scratch
"""

It doesn't (currently) use the Linux page cache, which causes quite a few ancillary issues. The idea is awesome, but this will simply never be able to be "natively" in linux without a rewrite of much of the core.


ZFS is the best thing that has happened in storage! Big thanks to the developers! Happily using Western Digital 25AV 2.5" disks; will probably move to the WD Red series. Total remote backup time with ZFS incremental snapshots is negligible.
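For the curious, the incremental-backup flow being praised here is roughly this (dataset and host names are made up):

```shell
# Initial full replication to the backup host
zfs snapshot tank/data@mon
zfs send tank/data@mon | ssh backuphost zfs recv backup/data

# Later: ship only the blocks that changed since @mon
zfs snapshot tank/data@tue
zfs send -i tank/data@mon tank/data@tue | ssh backuphost zfs recv backup/data
```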


Are they going to reconcile incompatibilities with GPL by adding additional license? Since OpenZFS is not associated with Oracle, who owns it (as in giving it a new / additional license?). It's a really silly situation since according to the authors they never intended to use the license to exclude Linux. But that's what happened.


Unfortunately, Sun (now Oracle) provided the original source code under the CDDL, so re-licensing it would require Oracle's approval, which is unlikely to happen.

ZFS's license is compatible with those of FreeBSD and Illumos, which are very stable operating systems. Given that ZFS is most likely to be used for a SAN or a NAS box, you can quite easily use FreeBSD for those boxes and Linux for your application servers if you choose.


So essentially Oracle still owns the original rights on it? How does it work with derivatives? Let's say OpenZFS over the years will move far away from Oracle's ZFS. Will it sill be indirectly controlled by them as in not allowing to relicense it?


Generally it's a complicated area. Generally the only safe way to sneak out from under an existing license would be a black-box rewrite, done by people who hadn't looked at the source for the original version. Otherwise the original author could claim that it's a derivative work, and thus falls under the terms of the original license.

The CDDL in particular specifies that any modifications (changes, additions or deletions to the source code or their files) are also under the CDDL.

See http://web.archive.org/web/20090305064954/http://www.sun.com... 3.2,3.4 along with their definition of Modification.

However, even a black-box rewrite could still fall foul of any patents granted to the original creators.


> the only safe way to sneak out from under an existing license would be a black-box rewrite

It still doesn't protect you of patent lawsuits.


> However, even a black-box rewrite could still fall foul of any patents granted to the original creators.

Well, OpenZFS is already vulnerable to it. So if Oracle will decide to sabotage it, it easily can.


[deleted]


If CDDL prevents patent abuse by Oracle for derivatives - then great.


My understanding (although I'm not certain) is that a license to use and modify the software also implicitly grants a license to use the patents contained in the original code.


Yup, section 2 essentially grants that.


I dabbled in ZFS on FreeBSD + OpenSolaris 3 years back. It was nice and all, but it hasn't been worth the overhead of running another OS to get its features since. I'm therefore glad to see some unity in the ZFS community to create more trust around its use in Linux, and proud to see my beloved Gentoo in the list of standard-bearers! Bring on the unrivalled pragmatism. Quack quack.

Observation: Gentoo packages still point to http://zfsonlinux.org/ not to http://open-zfs.org/ .. is this the same code? I suppose so.

Further observation: The kernel code looks like, as packaged by Gentoo, it can only be compiled as a module. Generally I disable LKMs on production systems. Grumble.


it hasn't been worth the overhead of running another OS to get its features since.

That's a rather cavalier attitude towards the integrity of your data. :/

I can replace operating systems. It's a lot more difficult to replace lost or silently-corrupted data. That makes data integrity one of my prime concerns, which it should be for any production systems.


That's a rather cavalier attitude towards the integrity of your data ... data integrity one of my prime concerns, which it should be for any production systems.

Different projects have different resources and requirements. There's far more ways than just ZFS to provide for data redundancy and integrity (TMTOWTDI).

(Edit: why the downvote? Geez.)


Downvote wasn't me. I agree with the sentiment regarding requirements.

However, I disagree with the bit regarding data redundancy and integrity. You can do it other ways, but that doesn't make it a good idea; it's a bit like Greenspun's Tenth Rule, but for data. ZFS, or something like it (and there isn't anything else like it), is the foundation of any modern setup where data is important.

Because it's not on Linux is a terrible reason. If your data is important, then you'll need to look elsewhere than Linux for the servers where the data sleeps. The importance of data requires it.

If data is relatively unimportant, then you're right. There are few domains where that's true nowadays though.


I disagree with the bit regarding data redundancy and integrity.

You are welcome to disagree but I'd like to see some reasoning.

You can do it other ways, but that doesn't make it a good idea; it's a bit like Greenspun's Tenth Rule, but for data.

Had to go searching for that rule, which seems to be Lisp-snobbery which is clearly somewhat justified in theory but almost irrelevant in practice. Right tool for the job, and all that. It's such a broken metaphor for storage consistency or availability that I'm not going to comment further.

ZFS, or something like it (and there isn't anything else like it), is the foundation of any modern setup

Do you honestly view ZFS as the be-all and end-all of data storage? That would be ... sad. Other filesystems can offer snapshots and high availability, as can other elements within a storage system. For example, in Linux, DRBD is a block device driver that provides even stronger availability guarantees than any conventional (~single-host-homed) filesystem. Likewise, LVM2 has provided block-layer snapshots for ages. Similarly, Linux is unsurprisingly the most vibrant platform for cluster filesystems. Then there are also other great general-purpose tools such as RAID, signatures/checksums, and so on.

If your data is important, then you'll need to look elsewhere than Linux for the servers where the data sleeps.

That's just ridiculous. I guess you're going to tell me most of the world's data lives on ZFS? Google uses ZFS? Facebook uses ZFS? Yahoo uses ZFS? Let's be realistic here: you're absolutely and demonstrably wrong, and have provided no compelling argument.


which seems to be Lisp-snobbery which is clearly somewhat justified in theory but almost irrelevant in practice

I agree, but you're missing the forest for the trees here. Please accept my arguments in good faith.

Do you honestly view ZFS as the be-all and end-all of data storage?

For local storage? Right now? Yes, it's the best we have.

provided no compelling argument

How many filesystems have Merkle trees? You need something like them to avoid phantom reads, phantom writes, and silent corruption.

How many filesystems have duplicate metadata blocks, duplicate [what's analogous to] the superblock several times, and can duplicate data a user-specified number of times? And then check their validity using the Merkle tree property above to validate reads?

How many filesystems offer free and instant snapshots? As many as you want? Those things are wonderful for databases.

How many filesystems offer software RAID? Hardware RAID is a dodgy idea, because it's a complex binary blob in firmware you have no insight into when something goes wrong (speaking from bitter experience, things go wrong). Furthermore some hardware RAID suffers from a write hole.

How many filesystems are transactional? And allow you to roll back if a transaction becomes unfixably corrupted? How many can replicate? How many use SSDs efficiently? How many have been in heavy industrial use for years?

ZFS has all that (not some of it, that's the point), and more. There's nothing else like it. btrfs probably will be one day as well, but not yet.

So, no, it's not ridiculous. I've been down this trail of tears before, and ZFS has made life so much better. At least I don't need to dread a number in my database silently flipping a digit anymore -- if that scenario doesn't give you the hives, then I really don't know what to say.
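To make the snapshot point concrete, this is the kind of safety net being described (dataset name hypothetical):

```shell
# Instant, essentially free snapshot before a risky schema migration
zfs snapshot tank/db@pre-migration

# If something silently corrupts the data, roll the dataset back atomically
zfs rollback tank/db@pre-migration
```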


Many things can corrupt your data ... outside of the filesystem. You seem unswervingly fixated on ZFS for some reason. This is simply wrong. If there's any forest-missing going on for tree fascination, it's with you.


I make an argument in good faith and get a nonsensical passive-aggressive blow-off in return. You should be ashamed.


I fully recognize ZFS's great feature set, it's just a tool though, and only represents one potential solution, appropriate for certain requirements, within one layer of a storage subsystem. If paranoid levels of data integrity are an end-to-end requirement, ZFS isn't a magic bullet.


The site is missing the Mac OS X version, called Zevo: http://getgreenbytes.com/solutions/zevo/. It was developed by Don Brady, formerly a Senior Software Engineer at Apple.


And it's at Zpool 28 already.


Not GPL compatible and patents from my understanding. Is there any way they can expect large scale deployments or linux integration?


ZFS on Linux large scale deployments: 55 Petabytes at LLNL

http://www.slideshare.net/MatthewAhrens/open-zfs-linuxcon

See slide 3; last slide has a little more detail.

ZFS on illumos: Nexenta claims 1.5 Exabytes under management (across multiple deployments)

http://billroth.ulitzer.com/node/2461630


FWIW, my understanding is that CDDL, as a MPL-derived license, had a patent licensing clause and therefore the patents don't matter as long as it's using that set of source code. So a clean room rewrite might run afoul of patents, but the existing CDDL licensed code is okay. (See CDDL, section 2.1(b), but keep in mind section 2.1(d) where modifications and deleted code are not covered.)


Can I please, please, please have a Windows port?

At work, we use Windows boxes, and I would love to use ZFS on them.


The operating systems that are getting support are mostly or at least partly open-source, so good luck with that.


That's probably not as big of a detriment as the high level functionality Windows expects to be in the filesystem layer that unix doesn't (or puts in VFS). From what I read, this is why Microsoft themselves have a hard time replacing NTFS.


There's having ZFS running as a native Windows file system...

...and then there's running ZFS on my Windows systems, which I must use a special API to access.

The second one isn't ideal, but it would be awesome to have.


There is http://code.google.com/p/zfs-win/, but it is not really useful.


You might be surprised at what WinServer 2012 already supports in that area: http://www.microsoft.com/en-us/server-cloud/windows-server/s...


You might be surprised that if you pay more money, Microsoft has more to sell you.

No, I'm not. I like FOSS, and ZFS seems excellent - I wish it made it to Windows. Like, Windows 7, for consumers and businesses...


FORK YEAH, this is what I've been waiting for the last 2 or so years. <3


I am curious. What are you using ZFS for?


I'm personally using it on my set of freebsd jailed boxes to provide quota-ing in a useful way, de-duplication of shared files in those quotaed trees, compression of things like the ports tree, etc. Subvolumes are sexy as hell in how you can have one filesystem, but give lots of different attributes to various bits and pieces.
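A sketch of those per-dataset attributes, with made-up dataset names:

```shell
# One pool, different policies per subtree
zfs set quota=20G tank/jails/web1      # per-jail space limit
zfs set dedup=on tank/jails            # shared files stored once
zfs set compression=on tank/ports      # cheap win on text-heavy trees
```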



I know. I mostly ask because a lot of people these days keep nearly every kind of data in databases, and ZFS usually isn't the best fit for that (depending on the exact use case, of course). It's by far one of the most exciting file systems for storing "raw" data, but if you have another layer on top, your database system can (again, based on the exact use case) become an unbearable overhead performance-wise.


I have a lot of respect for those working on this project, but realistically, if you use Linux, using an out-of-tree filesystem is just asking for pain-- lots of it. I would never use this on a production system. You know how painful out-of-tree video drivers are? Yeah. Imagine that, only now with the potential for data loss and divergent on-disk formats. And if it's your root fs, you can forget about booting if there's a problem.

Sure ZFS has a great reputation, but a lot of that came from how well-integrated it was into Solaris and how much QA was done on it. Neither of those things were ever true (or are going to be true in the future) for the various ZFS-on-Linux projects (yes, there are multiple.)

The comments about btrfs are about 5 years out of date. SuSE has already shipped btrfs in their "stable" 11.1 distribution, and Red Hat is going to do so in RHEL7. Give it a chance.


> The comments about btrfs are about 5 years out of date.

Not really. I've tried it, and it still has pain points I'd not like to have in my filesystem. It's like ZFS almost a decade ago (and I'm not talking about features)... although ZFS on Linux vs. btrfs on Linux... right now I'd still go with btrfs.

> Neither of those things were ever true (or are going to be true in the future) for the various ZFS-on-Linux projects (yes, there are multiple.)

I believe there is a shift with regard to this, as demonstrated by the Gentoo project's integration of ZFS.


> Not really. I've tried it, and it still has pain points I'd not like to have in my filesystem. It's like ZFS almost a decade ago (and I'm not talking about features)... although ZFS on Linux vs. btrfs on Linux... right now I'd still go with btrfs.

Care to elaborate? I've never tried ZFS, but been very happy with btrfs for my smalltime personal usage, I'm wondering why people find it so painful in comparison.


Well, there's just the general lack luster performance: http://www.phoronix.com/scan.php?page=article&item=linux_311...

I've also seen particularly bad pain points when doing things like using it with an NFS server.


ZFS also has "general lackluster performance" in areas like using memory (it requires tons of it). It's inherent in the design of a copy-on-write filesystem.

According to Ted Unangst: "ZFS wants a lot of memory. A lot lot lot of memory. So much memory, the kernel address space has trouble wrapping its arms around ZFS. I haven't studied it extensively, but the hack of pushing some of the cache off into higher memory and accessing it through a small window may even work." See http://www.tedunangst.com/flak/post/ZFS-on-OpenBSD

Different filesystems are good for different things. If you want a filesystem that has subvolumes, copy-on-write snapshots, built-in RAID, transactions, space-efficient packing of small files, batch deduplication, checksums on data and metadata, and so forth, you have to pay a price. Just the same way that running Apache with all the bells and whistles is not going to be as fast as nginx.
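For what it's worth, on ZFS on Linux you can at least bound the memory appetite via the ARC-size module parameter (the 2 GiB value is just an example):

```shell
# Cap the ARC at 2 GiB at runtime
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max

# Or persist it across reboots
echo "options zfs zfs_arc_max=2147483648" >> /etc/modprobe.d/zfs.conf
```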


> ZFS also has "general lackluster performance" in areas like using memory (it requires tons of it). It's inherent in the design of a copy-on-write filesystem.

Those benchmarks aren't about CPU or memory consumption. These days a good filesystem probably should trade memory and CPU for increased performance. Those benchmarks are about throughput/latency.

> Just the same way as running Apache with all the bells and whistles is not going to be as fast as ngnix.

...except that ZFS generally performs very well compared to other filesystems. When it first came out it had all kinds of ugly corner cases where it performed poorly, but it seems to do great these days.


I originally had a lot of text here, but let's just leave it at this... something being in Gentoo doesn't mean it's in any way mainstream. I mean that in the nicest possible way (Gentoo can be fun).


Sorry if you got the idea that I was asserting Gentoo == mainstream. I was merely suggesting that maybe it might be heading towards mainstreaming, and pointing to the Gentoo integration as a step that might make that easier.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: