Hacker News

I'm not a sysadmin but recently started using CoreOS to deploy small web apps. Could anyone explain to me like I'm 5 what's the difference between those cluster schedulers and something like CoreOS' fleet (https://github.com/coreos/fleet)?


The fleet maintainers have taken a hard stance on keeping fleet simple, with "just enough" features. That line has been drawn at resource scheduling: fleet sticks to unit colocation, simple fan-out via conflicts, machine metadata, and pinning to a specific machine ID. If you need to do hardcore bin packing, one of the other schedulers will serve you better. Fleet is a great tool for bootstrapping those schedulers. For example, in Tectonic, fleet is used to run the Kubernetes control plane and related services.
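As a rough sketch of what those features look like in practice, fleet units are systemd unit files with an extra [X-Fleet] section; the service name, image, and metadata tag below are made up for illustration:

```ini
# myapp@.service -- a hypothetical fleet unit template
[Unit]
Description=My web app container

[Service]
ExecStartPre=-/usr/bin/docker rm -f myapp-%i
ExecStart=/usr/bin/docker run --name myapp-%i -p 8080:80 example/myapp
ExecStop=/usr/bin/docker stop myapp-%i

[X-Fleet]
# Simple fan-out: never put two instances of this template on one machine
Conflicts=myapp@*.service
# Only schedule onto machines tagged with this metadata
MachineMetadata=role=web
# Or pin to one machine by its ID instead:
# MachineID=2c9de9a8...
```

Note there is no CPU/memory accounting anywhere in there, which is exactly the resource-scheduling line fleet declines to cross.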


I think they are trying to achieve the same thing, with differences in the API and the richness of each ecosystem.

http://www.slideshare.net/teemow1/container-orchestration

https://groups.google.com/forum/#!msg/coreos-dev/nHK8irdnmM0...


Actually, having spoken with one of the CoreOS guys recently about this, it seems that their concerns are a bit lower-level. Where these resource managers concern themselves with the problem space of resource management as well as orchestration, Fleet takes the position of a "distributed systemd," without much else in the way of provided porcelain.

For those of us who are old school HPC people, it's probably more reasonable to think of Fleet as a dynamic always-running manifestation of xCAT or, perhaps, Fabric (of fabfile.py fame) or similar. For example, I've heard of people installing and running Mesos with it, which might seem like a bit of cluster scheduler self-satisfaction at first until one understands the reasoning.


Fleet is only for running services. It is effectively a distributed systemd.

Mesos can schedule resources for multiple different frameworks. It can run services with the Marathon or Aurora frameworks, run cron jobs and batch pipelines with Chronos or Aurora, and also schedule distributed tasks for frameworks like Hadoop or Spark.
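To make that concrete, here's a minimal sketch of a long-running service as a Marathon JSON app definition (the app id and image are hypothetical):

```json
{
  "id": "/myapp",
  "cpus": 0.5,
  "mem": 256,
  "instances": 3,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "example/myapp:latest" }
  }
}
```

You POST something like that to Marathon's /v2/apps endpoint, and Marathon accepts Mesos resource offers until three instances matching the cpu/memory request are running. The explicit cpus/mem fields are the resource-scheduling piece that fleet units don't have.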



