Does this mean I can run containers natively on Mac? And I don't need a VirtualBox VM running on my Mac to launch containers? This would be huge for me, and is always a big factor in my mind when I consider switching back to Linux.
Edit: Docker on Mac has never felt as snappy as on Linux, because of the VM, though I have no hard numbers. Networking is a PITA, but it's not hard to figure out. The other main thing I hate is I have to give up a bunch of RAM to the VM that my containers may or may not use, instead of sharing with the host like on Linux.
> Docker on Mac has never felt as snappy as on Linux
It's extremely slow compared to Linux, and I'm pointing my finger at the virtualization layer without any hard evidence, because it's the most likely suspect.
With all this focus on sandboxing apps of late, I'm wondering how far the OSX kernel is from having a feature set that resembles cgroups and network namespaces.
I used docker inside a vagrant+virtualbox VM running ubuntu, on macOS, for a few years. It's more reliable, and more debuggable, than Docker for Mac. It's the "easy", automatic, supposedly-transparent storage and networking layers that make Docker for Mac so flaky.
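For anyone curious what that setup looks like, a rough sketch (the box name and the Docker install method here are illustrative guesses, not necessarily what was used):

```
# Bring up an Ubuntu VM with Vagrant + VirtualBox, then install Docker inside it.
vagrant init ubuntu/focal64            # any recent Ubuntu box works
vagrant up --provider virtualbox
vagrant ssh -c 'curl -fsSL https://get.docker.com | sudo sh'
# From then on, docker commands run inside the VM:
vagrant ssh -c 'sudo docker run --rm hello-world'
```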
> We aren't even close to a point where someone can just pretend a container is a magical box that just works.
I challenge this assertion. While it is of course true that there are situations that require a deeper knowledge of Docker, this is also not universally true.
Many projects these days have a Getting Started doc that has some options like:
- Build from source
- Install this package
- Use Docker
I often choose the Docker option because I know I'll (most likely) get a working version of the project in 5 minutes with minimal effort. I might not even fully understand the project yet or its architecture (much less that of Docker), but I can get it up and running with `docker pull` and `docker run`.
In many cases, I'll never need to know anything more.
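As a concrete sketch of that "Use Docker" path (the image name is a made-up stand-in, not a real project):

```
# Grab a prebuilt image and run it, mapping a port to the host.
docker pull example/someproject:latest
docker run --rm -p 8080:8080 example/someproject:latest
# Often that's all it takes to try a project out.
```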
I've personally spent more time using docker to build my own stuff, so I've had to learn more. But for many folks, it absolutely is a magical box that just works, and that's perfectly ok.
I agree that all abstractions tend to eventually leak. But depending on why you're using Docker, you may never have a reason to encounter that leakage.
Just because CorporationX says “don’t worry about it we got you bruh” doesn’t mean technologists - people who actively work with technology and write software - should be excused for just throwing up their hands and saying “it’s just ducking magic I don’t know how it works”.
I’m not talking about understanding it to the level of being able to contribute a patch to the project.
I’m talking about understanding that containers are inherently tied to the kernel, and thus are limited to running software written for the same kernel as the host running the container.
It isn’t rocket science. I literally explained it in one sentence, and I’ve never used docker in my life.
This is on the same level of knowledge as “no, you can’t just take an iPhone app and run it on Android” or “no, you can’t just take a SQL Server query and run it on anything that vaguely speaks SQL”.
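The kernel-sharing point is easy to see for yourself on a Linux box, since there is no separate guest kernel inside a container (the version string below is just an example):

```
uname -r                         # host kernel, e.g. 5.4.0-xx-generic
docker run --rm alpine uname -r  # the same kernel version, reported from inside the container
```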
> Just because CorporationX says “don’t worry about it we got you bruh” doesn’t mean technologists - people who actively work with technology and write software - should be excused for just throwing up their hands and saying “it’s just ducking magic I don’t know how it works”.
Entire industries are built on the premise that "don't worry about it, we got you". I'm not saying that it's appropriate to be completely blind/unaware of what you're using, but there's a line somewhere that's surprisingly difficult to draw in 2020.
I don't think anyone would argue that learning more is a bad thing. But the more salient point is that for many, it's just not necessary.
If you're doing work that requires a deeper knowledge of the thing, then of course you should learn it. If you're not doing work that requires this knowledge, it'd be a waste of time, the most precious commodity available to us.
Others have made the comparison to learning assembly. Useful? Sure. Necessary in 2020? Usually not.
> It isn’t rocket science. I literally explained it in one sentence, and I’ve never used docker in my life.
This is what you said:
> ... if docker ran “natively” it’d mean using kernel hooks provided by xnu, which means you’d be able to run another instance of macOS in a container.
Not only does this tell the reader nothing practical about how Docker actually works, it doesn't even address the parent comment in a useful/informative way.
You followed this up with a statement that is a borderline personal attack on the parent comment.
I mean this as constructively as possible, but you need to work on your delivery, and ask yourself what you're trying to accomplish with these comments. So far, they've been unhelpful and borderline abusive.
The “one sentence” I was referring to is right above the bit you quoted:
> I’m talking about understanding that containers are inherently tied to the kernel, and thus are limited to running software written for the same kernel as the host running the container.
You've lost me. This sentence does not explain how Docker, or containers work.
It explains one aspect/limitation related to the execution of code in a container, but does not foster a deeper understanding of container architecture for the uninformed reader, and is actually somewhat misleading considering Docker's use of a VM behind the scenes in some situations (in which case the "host running the container" is technically the VM, not the user's PC).
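To make that concrete: with Docker Desktop on a Mac, the kernel a container sees is the Linux VM's, not Darwin's. Roughly (exact version strings will differ):

```
uname -sr                         # on the Mac itself: Darwin <xnu kernel version>
docker run --rm alpine uname -sr  # inside a container: Linux <linuxkit VM kernel version>
```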
I sense that you have useful knowledge to share. I'm afraid you've missed the mark. Instead of spreading vitriol and asking why everyone around you is so dumb, focus that energy on sharing that knowledge!
> considering Docker's use of a VM behind the scenes in some situations (in which case the "host running the container" is technically the VM, not the user's PC).
That is exactly the point. If you want to run Linux binaries you need a Linux container. On windows or macOS that means a Linux vm.
If you want to run windows binaries you need a windows container.
Conversely if you had a macOS container you’d only be able to run macOS binaries.
This is my point. It’s not a hard concept to understand. I’m not asking people to learn about cgroups or Chroots or network namespaces or any of that.
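A quick way to see which kernel an image targets is the OS/architecture recorded in its metadata; a hedged example (the tags are real public images, but output will vary by machine):

```
docker image inspect --format '{{.Os}}/{{.Architecture}}' alpine
# -> linux/amd64 (or linux/arm64, etc.)

# Pulling a Windows-only image on a Linux daemon fails for exactly this reason:
docker pull mcr.microsoft.com/windows/nanoserver:ltsc2022
# -> "no matching manifest for linux/amd64 ..." (roughly)
```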
One, it might be totally fine to run macOS binaries! If your code is portable to macOS and Windows, you might still want to use Docker for dependency management, network isolation, orchestration of multiple processes, etc., but you might not care what the actual host OS is. (Just like how people are interested in running ARM binaries, even though Docker started out as x86-64.) At my day job, all the stuff we put in Docker is either Python (generally portable), Java (generally portable), or Go (built-in cross-compilation support). It's absolutely sensible to do local dev on a Mac and then deploy on Linux in prod - it's perfectly sensible to do so without Docker in the picture, and plenty of people do just that.
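The Go case in particular makes "dev on a Mac, deploy on Linux" trivial; a sketch, with a made-up package path and image tag:

```
# Cross-compile a Linux binary from a Mac (or anywhere else):
GOOS=linux GOARCH=amd64 go build -o myservice-linux ./cmd/myservice
# Or build a Linux image for that platform directly:
docker buildx build --platform linux/amd64 -t myservice:latest .
```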
So, maybe all the people you're yelling at understand the concept you think they don't, and they're okay with it.
Two, it's not at all true that to run Linux binaries on non-Linux, you need a Linux VM. WSL1 is an existence proof against this on Windows, as is the Linuxulator on FreeBSD, as are LX-branded zones on SmartOS. Linux itself has a "personality" mechanism for running code from non-Linux UNIXes. You could do the same thing on macOS, and teach the kernel to handle a good-enough subset of the Linux system call interface - it would be far less work than adding containerization (namespacing and resource isolation) in the first place, so I'm not sure why you're so hung up about this.
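FreeBSD's Linuxulator is a nice illustration of that approach; roughly how it's enabled (module and rc knobs from memory, so treat this as a sketch):

```
# Load the Linux syscall-translation layer and keep it enabled across reboots (FreeBSD):
kldload linux64
sysrc linux_enable="YES"
# With a Linux userland installed, unmodified Linux binaries then run
# directly against the FreeBSD kernel -- no VM involved.
```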
> Two, it's not at all true that to run Linux binaries on non-Linux, you need a Linux VM.
So (a), this entire thread, the entire post, is about Docker. (b) WSL1 worked so well that Microsoft not only abandoned that approach for WSL2, they also never used it for containers on Windows. Hence, Windows native containers are, drum roll... Windows.
Because we use abstraction as a way of lowering the barrier to entry, reducing how proficient people need to be to do the job, and driving overall costs down?
> which means you’d be able to run another instance of macOS in a container
This is not true, for multiple reasons. Strictly speaking it only means you'd be able to run another instance of Darwin in a container. And, as you surely know because your tone of voice implies you bear immense knowledge, a Docker-style container is not a full OS: it doesn't run an init or normal system daemons, so it wouldn't even be a full instance of Darwin, and it wouldn't have to support functionality only needed by launchd or system daemons (e.g. WindowServer). It would just need to let you run a standalone program in a chroot + separate network, PID, and IPC namespaces + apply resource controls.
Furthermore, since most people are using Docker for developing software that's going to run on Linux, there would be no real need to virtualize the parts of XNU that aren't also provided on Linux - notably all the Mach stuff. You'd just need to provide a BSD-style syscall API to programs in a container.
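In other words, on Linux the core of what Docker does reduces to a handful of kernel primitives. A very rough sketch (the rootfs path is a placeholder, and real runtimes do far more):

```
# New PID, network, IPC, UTS, and mount namespaces, rooted in an extracted image filesystem:
sudo unshare --pid --fork --net --ipc --uts --mount \
    chroot /path/to/rootfs /bin/sh
# Resource controls (CPU, memory, etc.) are then layered on top via cgroups.
```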
There’s no technical limitation stopping you from running init or system daemons inside a container, it’s just an anti-pattern and missing the point of a container in most cases.
I mean, they do provide the hooks via Hypervisor.framework. I'm not 100% familiar, but if it's anything like KVM then running a Linux VM like that shouldn't have that much overhead.
I deploy on Linux, but dev on macOS without a VM or Docker or anything. If you're not doing anything OS-dependent, which most web apps aren't, you can run everything natively.
Just about everyone I know who has a Mac, myself included, develops _for_ Linux. What's nice is that I can push, pull and run Linux images on my Mac.
If the containers were native macOS Docker images, it would be about as useful as native Docker on Windows. Which I'm sure is great for the few people who need it - but pretty useless for most people.
I sure wouldn't mind if it were a bit snappier, though. It's plenty fast enough for my needs at the moment.
You wanted Apple to add container support to the OSX kernel? Hah. I wonder if the virtualization API that Apple is pushing performs better than Hyperkit.
People are griping at you that you have no idea what you're talking about; smh, don't listen to them. All I have to say to them is: look at the Wine project running Windows software natively on Linux. Don't underestimate nerds who have a vision in mind. And look at Kubernetes deprecating Docker. At the highest level of application development, all these details don't matter. I'm using all GNU command line tools compiled for my Mac; I'm sure we could figure something out to increase containerization efficiency on Mac. ¯\_(ツ)_/¯
Docker Desktop already ran on Macs. This is specifically for the new Apple Silicon (M1) support. It's not native, technically, but it feels native because of the way Docker Desktop works: they manage the VM for you, so you don't have to.
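You can see that arrangement from the CLI: the daemon the Mac client talks to identifies itself as the Docker Desktop VM (exact output varies by version):

```
docker info --format '{{.OperatingSystem}} / {{.KernelVersion}}'
# e.g. Docker Desktop / 5.x.x-linuxkit
```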
Can you run a container of Windows on an x86 machine? The answer is no, and for the same reason it won’t work on ARM. A “container” is not a virtual machine, you can only run the same Linux executables you would on a normal Linux system.
That said, as another person commented, you can run Windows for ARM in a VM on an Apple M1.
There's no nested virtualization at present, so no virtualization support is exposed to VMs; as a result, on Windows on an M1 only WSL1 works. Docker's Linux containers on Windows require WSL2 instead.
Docker Windows containers aren't available on arm64 Windows yet, but stay tuned...
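For reference, checking and switching a distro's WSL version is a one-liner on the Windows side (the distro name is just an example):

```
wsl --list --verbose        # shows each distro's WSL version (1 or 2)
wsl --set-version Ubuntu 2  # Docker's WSL2 backend needs version 2
```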
It actually did at one point; before Docker Desktop there was "Docker Toolbox", which required separate virtualization software. The installer came with VirtualBox by default, but there were options to use Parallels and VMware as well. This is probably what GP is thinking of.
I'm actually using a version of this setup today in order to run Docker on OS X 10.9.
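For the curious, the Toolbox-era workflow looked roughly like this, via docker-machine (the machine name is arbitrary):

```
# Create a boot2docker VM in VirtualBox and point the docker CLI at it:
docker-machine create --driver virtualbox default
eval "$(docker-machine env default)"
docker run --rm hello-world   # actually runs inside the VirtualBox VM
```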