
This was user error, and a major misunderstanding and mischaracterization of the issue, which caused some unexpected panic.


It's insane and horribly disrespectful. I don't understand the animus either. I just sent a $ donation to the maintainer.

The response to this bug is completely over the top. He found a security issue in an optional feature, immediately fixed it over the New Year holiday, and provided clear documentation about who was affected and how to address it. That's exactly how responsible disclosure should work.

The level of hostility - especially over adding optional features that people can simply choose not to use - suggests this is more about bandwagoning than legitimate criticism. We should be supporting developers who maintain critical open source infrastructure, not attacking them over a prompt response to a contained issue.


Donating in times of stress is a great idea, just donated as well.


ZFS on Linux is absolutely fine in high-performance and critical computing applications.

I also owned a Thumper and Thor running Solaris in 2009. Much prefer Linux and the hardware solutions today.


This is a bit ranty.

ZFS is maturing on Linux and the codebase and general focus is on Linux today. Hardware compatibility and mindshare are also big factors.

It seems like a subset of the TrueNAS community reveled in the fact that Core was FreeBSD-based. Maybe just a bit contrarian.

I've had meh experiences supporting, repairing and transitioning installations away from ixSystems. I wouldn't advocate their offerings in the first place, but it does seem like you have unrealistic expectations from ixSystems.


There are definitely better ways to benchmark.

I design similar NVMe-based ZFS solutions for specialized media+entertainment and biosciences workloads and have put massive time into the platform and tuning needs.

Also think about who will be consuming the data. I've used an RDMA-enabled SMB stack and client-side tuning to get the best I/O characteristics out of these systems.


What methods/tools are you using to benchmark your ZFS systems?


It depends on the use case. For high-speed microscopes, I may get a request that says, "we need to support 4.2 gigabytes/second of continuous ingest for an 18-hour imaging run." In those situations, it's best to test with realistic data.
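A spec like that also pins down the total capacity you have to plan for. A quick back-of-the-envelope check, using the numbers from the example above:

```shell
# Total data volume for a sustained-ingest run:
# rate (GB/s) x seconds/hour x hours
awk 'BEGIN {
  rate_gbps = 4.2   # GB/s, from the microscope example
  hours     = 18
  total_gb  = rate_gbps * 3600 * hours
  printf "Total ingest: %.0f GB (~%.1f TB)\n", total_gb, total_gb / 1000
}'
```

That's roughly 272 TB for a single run, before snapshots or parity overhead, which is why capacity planning and throughput testing go hand in hand.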

For general video and media workloads, it may be something like, "we have to accommodate 40 editors working over 10GbE (2 x 100GbE at the server) and minimize contention while ingesting from these other sources".

I work with iozone to establish a baseline. I also have a "frametest" utility that helps mimic some of the video workload characteristics.
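For reference, a minimal iozone baseline on a ZFS dataset might look like the following. This is a sketch: the pool path and sizes are placeholders, and the command is echoed here rather than run, since the real benchmark needs a mounted pool. Pick a file size well beyond RAM so the ARC doesn't mask disk speed.

```shell
# Sketch of an iozone sequential baseline (hypothetical paths/sizes).
# -i 0 = write/rewrite tests, -i 1 = read/reread tests
# -r   = record size (match the dataset's zfs recordsize)
# -s   = file size (much larger than RAM, so ARC doesn't dominate)
# -e   = include fsync/flush time in the results
RECSIZE=1M
FILESIZE=64G
TARGET=/tank/bench/testfile   # placeholder dataset path
echo iozone -i 0 -i 1 -r "$RECSIZE" -s "$FILESIZE" -e -f "$TARGET"
```

On a live system, drop the `echo` and run the command directly against the pool under test.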


I was there as the lead Linux engineer with the hosting firm CGI outsourced this effort to...


Can you tell us more?


We had the CGI Massachusetts state exchange business, which led to [additional exchange work](https://www.datacenterknowledge.com/archives/2014/02/25/mana...) in the lead-up. There were many last-minute changes and very little load testing, but I was instrumental in getting core VMware infrastructure onto HPE equipment instead of Supermicro. The firm was a private cloud and infrastructure services group, [LogicWorks](https://www.logicworks.com).


Syncoid is the way. Zrepl was more fraught in my use cases and significantly more complicated.
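For anyone unfamiliar, a typical syncoid replication run is a one-liner. Dataset names and hosts below are placeholders; `-r` replicates child datasets recursively, and `--no-sync-snap` replicates from existing snapshots (e.g. ones sanoid already created) instead of taking a new one per run. Echoed here as a sketch rather than executed:

```shell
# Sketch: pull-style ZFS replication from a remote host to a local
# backup pool (hypothetical hosts and dataset names).
echo syncoid -r --no-sync-snap root@nas:tank/data backup/tank-data
```

Remove the `echo` and drop this in cron (or a systemd timer) for scheduled replication.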


Oh, that's a bit cluttered...


Can you give some specifics if you have a sec?

(Not trying to be smart or anything; genuinely trying to get feedback. Random users on HN are a really good and different audience from the one we usually hear from, so even a couple of bullet points would help if you have a few minutes.)

Disclaimer: I work at Netdata on ML.


Yeah, that's a myth now. It's not current advice.


72% is my rule of thumb for write-heavy production workloads (my absolute limit would be 75%), but it depends on record size, raidz level, whether the workload is mostly writes or mostly reads, how big your files are, how many snapshots you have, whether you have a dedicated ZIL device, and much more. For a home NAS (movies, etc.) you can easily go up to 85%; if it's a nearly-WORM workload, maybe 90%, but resilvering can then take days (weeks?), depending on the raidz level or mirror layout.

>Yeah, that's a myth now. It's not current advice.

It's not, and you know it. Keep it under 72%, believe me, if you want a performant ZFS (especially if you delete files and have many snapshots; check the YouTube video linked at the end).

>>Keep pool space under 80% utilization to maintain pool performance. Currently, pool performance can degrade when a pool is very full and file systems are updated frequently, such as on a busy mail server. Full pools might cause a performance penalty, but no other issues. If the primary workload is immutable files (write once, never remove), then you can keep a pool in the 95-96% utilization range. Keep in mind that even with mostly static content in the 95-96% range, write, read, and resilvering performance might suffer.

https://web.archive.org/web/20150905142644/http://www.solari...

And under no circumstances go over 90%:

https://openzfs.github.io/openzfs-docs/Performance%20and%20T...

>An introduction to the implementation of ZFS - Kirk McKusick

https://www.youtube.com/watch?v=TQe-nnJPNF8
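A simple way to keep an eye on these thresholds is to parse pool capacity and warn past the rule-of-thumb line. Sample `zpool list -H -o name,capacity` output is hard-coded below as a stand-in (pool names are made up); pipe in the real command on a live system:

```shell
# Warn when any pool crosses the 72% rule of thumb.
# On a live system, replace the printf with:
#   zpool list -H -o name,capacity
printf 'tank\t68%%\nmedia\t83%%\n' | awk -F'\t' '
{
  cap = $2; sub(/%/, "", cap)          # strip the trailing "%"
  if (cap + 0 > 72)
    printf "WARNING: pool %s is at %s%% (keep it under 72%%)\n", $1, cap
}'
```

Run from cron, this gives you a nudge well before the hard 90% cliff the OpenZFS docs warn about.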


What about it?


tosh submits Wikipedia links for karma points. I don't see why this person always does this. Really old ones, too.

I flag every single post.

