
Does this mean I can run containers natively on Mac? And I don't need a VirtualBox VM running on my Mac to launch containers? This would be huge for me, and it's always a big factor in my mind when I consider switching back to Linux.

Edit: Docker on Mac has never felt as snappy as on Linux, because of the VM, though I have no hard numbers. Networking is a PITA, but it's not hard to figure out. The other main thing I hate is I have to give up a bunch of RAM to the VM that my containers may or may not use, instead of sharing with the host like on Linux.



> Docker on Mac has never felt as snappy as on Linux

It's extremely slow compared to Linux and I'm pointing my fingers at the virtualization layer without any hard evidence because it's the most likely suspect.

With all this focus on sandboxing apps of late, I'm wondering how far the OSX kernel is from having a feature set that resembles cgroups and network namespaces.


I used docker inside a vagrant+virtualbox VM running Ubuntu, on macOS, for a few years. It's more reliable, and more debuggable, than Docker for Mac. It's the "easy", automatic, transparent storage and networking layers that make Docker for Mac so flaky.


Do you have your setup documented anywhere?


They have containers already in a sense: whatever the iOS Simulator.app uses is not a VM, and it's a container in more ways than one.


It's a launchd namespace. It's not meant to be secure in the same way a container would be, but it could be used for something.


That is not exactly designed to deliver any kind of isolation though - it just runs normal processes


Under the hood there is a linux vm running.


Too bad, if they had it running natively I would actively consider switching back to mac.

Glad to see they are making progress on the M1 port though, my team will be excited.


... if docker ran “natively” it’d mean using kernel hooks provided by xnu, which means you’d be able to run another instance of macOS in a container.

How is it so many people use docker but have zero fucking clue about how it works?


Docker obscures the implementation details. Part of this is by design: don't worry about the hard stuff, because Docker does it for you.

It's completely unsurprising that many folks haven't spent the time to dive into the internals. In many cases, because they don't need to.

Such is the nature of abstraction and higher level frameworks.


We aren't even close to a point where someone can just pretend a container is a magical box that just works.

All abstractions leak. This is a relatively recent, quite leaky one.


> We aren't even close to a point where someone can just pretend a container is a magical box that just works.

I challenge this assertion. While it is of course true that there are situations that require a deeper knowledge of Docker, this is also not universally true.

Many projects these days have a Getting Started doc that has some options like:

- Build from source

- Install this package

- Use Docker

I often choose the Docker option because I know I'll (most likely) get a working version of the project in 5 minutes with minimal effort. I might not even fully understand the project yet or its architecture (much less that of Docker), but I can get it up and running with `docker pull` and `docker run`.

In many cases, I'll never need to know anything more.

I've personally spent more time using docker to build my own stuff, so I've had to learn more. But for many folks, it absolutely is a magical box that just works, and that's perfectly ok.

I agree that all abstractions tend to eventually leak. But depending on why you're using Docker, you may never have a reason to encounter that leakage.


Just because CorporationX says “don’t worry about it we got you bruh” doesn’t mean technologists - people who actively work with technology and write software - should be excused for just throwing up their hands and saying “it’s just ducking magic I don’t know how it works”.

I’m not talking about understanding it to the level of being able to contribute a patch to the project.

I’m talking about understanding that containers are inherently tied to the kernel, and thus are limited to running software written for the same kernel as the host running the container.

It isn’t rocket science. I literally explained it in one sentence, and I’ve never used docker in my life.

This is along the same level of knowledge as "no, you can't just take an iPhone app and run it on Android" or "no, you can't just take a SQL Server query and run it on anything that vaguely knows SQL".
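The SQL-dialect half of that analogy is easy to demonstrate; a minimal sketch using Python's built-in sqlite3 (the table and data are made up for illustration):

```python
import sqlite3

# SQL Server's T-SQL spells row limits as "SELECT TOP n ...";
# SQLite doesn't know that syntax and expects "... LIMIT n" instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

try:
    conn.execute("SELECT TOP 5 x FROM t")  # the SQL Server spelling
except sqlite3.OperationalError as e:
    print("SQLite rejects the T-SQL form:", e)

rows = conn.execute("SELECT x FROM t LIMIT 5").fetchall()  # the SQLite spelling
print(len(rows))
```

Same idea, different dialect; a query written for one engine won't necessarily run on another just because both "know SQL".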


> Just because CorporationX says “don’t worry about it we got you bruh” doesn’t mean technologists - people who actively work with technology and write software - should be excused for just throwing up their hands and saying “it’s just ducking magic I don’t know how it works”.

Entire industries are built on the premise that "don't worry about it, we got you". I'm not saying that it's appropriate to be completely blind/unaware of what you're using, but there's a line somewhere that's surprisingly difficult to draw in 2020.

I don't think anyone would argue that learning more is a bad thing. But the more salient point is that for many, it's just not necessary.

If you're doing work that requires a deeper knowledge of the thing, then of course you should learn it. If you're not doing work that requires this knowledge, it'd be a waste of time, the most precious commodity available to us.

Others have made the comparison to learning assembly. Useful? Sure. Necessary in 2020? Usually not.

> It isn’t rocket science. I literally explained it in one sentence, and I’ve never used docker in my life.

This is what you said:

> ... if docker ran “natively” it’d mean using kernel hooks provided by xnu, which means you’d be able to run another instance of macOS in a container.

Not only does this tell the reader nothing practical about how Docker actually works, it doesn't even address the parent comment in a useful/informative way.

You followed this up with a statement that is a borderline personal attack on the parent comment.

I mean this as constructively as possible, but you need to work on your delivery, and ask yourself what you're trying to accomplish with these comments. So far, they've been unhelpful and borderline abusive.


Leading by example.

You have a lot more patience than I do, unfortunately I think this person just wants to be mad.


> This is what you said:

The “one sentence” I was referring to is right above the bit you quoted:

> I’m talking about understanding that containers are inherently tied to the kernel, and thus are limited to running software written for the same kernel as the host running the container.


You've lost me. This sentence does not explain how Docker, or containers, work.

It explains one aspect/limitation related to the execution of code in a container, but does not foster a deeper understanding of container architecture for the uninformed reader, and is actually somewhat misleading considering Docker's use of a VM behind the scenes in some situations (in which case the "host running the container" is technically the VM, not the user's PC).

I sense that you have useful knowledge to share. I'm afraid you've missed the mark. Instead of spreading vitriol and asking why everyone around you is so dumb, focus that energy on sharing that knowledge!


> considering Docker's use of a VM behind the scenes in some situations (in which case the "host running the container" is technically the VM, not the user's PC).

That is exactly the point. If you want to run Linux binaries you need a Linux container. On windows or macOS that means a Linux vm.

If you want to run windows binaries you need a windows container.

Conversely if you had a macOS container you’d only be able to run macOS binaries.

This is my point. It’s not a hard concept to understand. I’m not asking people to learn about cgroups or Chroots or network namespaces or any of that.


One, it might be totally fine to run macOS binaries! If your code is portable to macOS and Windows, you might still want to use Docker for dependency management, network isolation, orchestration of multiple processes, etc., but you might not care what the actual host OS is. (Just like how people are interested in running ARM binaries, even though Docker started out as x86-64.) At my day job, all the stuff we put in Docker is either Python (generally portable), Java (generally portable), or Go (built-in cross-compilation support). It's absolutely sensible to do local dev on a Mac and then deploy on Linux in prod - it's perfectly sensible to do so without Docker in the picture, and plenty of people do just that.

So, maybe all the people you're yelling at understand the concept you think they don't, and they're okay with it.

Two, it's not at all true that to run Linux binaries on non-Linux, you need a Linux VM. WSL1 is an existence proof against this on Windows, as is the Linuxulator on FreeBSD, as are LX-branded zones on SmartOS. Linux itself has a "personality" mechanism for running code from non-Linux UNIXes. You could do the same thing on macOS, and teach the kernel to handle a good-enough subset of the Linux system call interface - it would be far less work than adding containerization (namespacing and resource isolation) in the first place, so I'm not sure why you're so hung up about this.
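A WSL1-style translation layer is, conceptually, a dispatch table from Linux syscall numbers to native host implementations; a toy user-space sketch in Python (the real thing lives in the kernel and covers hundreds of calls, the handlers here are illustrative stand-ins):

```python
import os

# Toy model of the WSL1/Linuxulator idea: trap the guest's Linux
# syscalls and route each number to an equivalent native
# implementation, instead of booting a Linux VM.

def sys_write(fd, buf):
    return os.write(fd, buf)   # delegate to the host's own write

def sys_getpid():
    return os.getpid()

# Numbers follow the x86-64 Linux syscall ABI.
LINUX_SYSCALLS = {
    1: sys_write,    # write(2)
    39: sys_getpid,  # getpid(2)
}

def emulate(nr, *args):
    handler = LINUX_SYSCALLS.get(nr)
    if handler is None:
        raise NotImplementedError(f"Linux syscall {nr} is not translated")
    return handler(*args)

emulate(1, 1, b"hello from the translation layer\n")
```

A good-enough table of these mappings is what lets a foreign kernel run Linux binaries with no VM in the picture.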


> Two, it's not at all true that to run Linux binaries on non-Linux, you need a Linux VM.

So (a), this entire thread, the entire post, is about docker. (b) WSL1 worked so well, Microsoft not only abandoned that approach for WSL2, they also never used that approach for containers on Windows. Hence, Windows native containers are, drum roll... Windows.


> “no you can’t just take a SQLServer query and run it on anything that vaguely knows “SQL”.

Not with that attitude you can't. https://aws.amazon.com/blogs/opensource/want-more-postgresql...

The world depends on people who don't take "no you can't" for an answer.


Because we use abstraction as a way of lowering the barrier to entry, reducing how proficient people need to be to work in the career, and driving overall costs down?


Lowering the entrance requirements doesn't mean you should STAY at the entrance.


You keep going, I’ll take another door.

I know enough Docker to create and launch a dev environment, but I don’t need anything more than that since I don’t do deployments.

Likewise you don’t need to learn about browser rendering since you barely interact with it.


Do you know the details of the x86 architecture? Why so toxic?


> which means you’d be able to run another instance of macOS in a container

This is not true, for multiple reasons. Strictly speaking it only means you'd be able to run another instance of Darwin in a container. And, as you surely know because your tone of voice implies you bear immense knowledge, a Docker-style container is not a full OS: it doesn't run an init or normal system daemons, so it wouldn't even be a full instance of Darwin, so it wouldn't have to support functionality only needed by launchd or system daemons (e.g. WindowServer). It would just need to let you run a standalone program in a chroot + separate network, PID, and IPC namespaces + apply resource controls.

Furthermore, since most people are using Docker for developing software that's going to run on Linux, there would be no real need to virtualize the parts of XNU that aren't also provided on Linux - notably all the Mach stuff. You'd just need to provide a BSD-style syscall API to programs in a container.


> it doesn't run an init or normal system daemons

There’s no technical limitation stopping you from running init or system daemons inside a container, it’s just an anti-pattern and missing the point of a container in most cases.


Because it's not necessary to know for day-to-day work. Which is what most software aspires to achieve.


For the same reason that the general dev doesn't have in-depth knowledge about every OS they use: abstractions.


Because you don't need to?


It's called abstraction, and it is what enables you to live in modern society without knowing the technical details of every facet of your life.

Furthermore, not being able to hook into xnu to virtualize macos is a business decision, not a technical limitation.


Docker can't run natively since it uses linux kernel features not available on a mac. Apple would need to do a lot of work first to make it happen.


If Docker for Mac ran natively it would be more or less useless. Nobody deploys production code on macOS hosts anymore.

That it is all so smoothly powered by Linux is what makes it a great product.


I develop exclusively on Linux, but it would be nice if my teammates who don't didn't melt their laptops doing local dev on mac.

If apple gave a shit about non apple developers they would provide the kernel hooks to help make it possible.

Also, nobody deploys on Mac hosts because virtualizing macOS in a cloud environment is against the ToS, so options are expensive and limited.


I mean, they do provide the hooks via Hypervisor.framework. I'm not 100% familiar, but if it's anything like KVM then running a linux VM like that shouldn't have that much overhead.


I deploy on Linux, but dev on macOS without a VM or docker or anything. If you're not doing anything OS dependent, which most web apps aren't, you can run everything natively.


Isolation is another important reason people use docker.

I don't want my developers to have to manage a complex dev stack and its dependencies.

I would argue that this is the primary use case for local docker dev these days.


Just about everyone I know that has a Mac, me included, develops _for_ Linux. What is nice is that I can push, pull and run Linux images on my Mac.

If the containers were native macOS Docker images it would be about as useful as native Docker on Windows. Which I'm sure is great for the few people that need it, but pretty useless for most people.

But I sure wouldn't mind if it was a bit snappier. It is plenty fast enough for my needs atm, though.


I believe they would just need to virtualize what isn't already provided by xnu-darwin.

This may already be the case, but the memory allocation for docker vm on mac really handicaps things.


You wanted Apple to add container support to the OSX kernel? Hah. I wonder if the virtualization API that Apple is pushing performs better than Hyperkit.


> HyperKit currently only supports macOS using the Hypervisor.framework.

Do people really not know how shit works these days?


It’s great that you know about how Docker works. How about being a little more informative in your comments here?


Your hostility towards everything and everyone doesn't engender productive discussions.


If you've got something to say, just spit it out.


People griefing at you that you have no idea what you're talking about, smh, don't listen to them. All I have to say to them is: look at the Wine project running Windows software natively on Linux. Don't underestimate nerds who have a vision in mind. And look at Kubernetes deprecating Docker. At the highest level of application development, all these details don't matter. I'm using all GNU command line tools compiled for my Mac, I'm sure we could figure something out to increase containerization efficiency on Mac. ¯\_(ツ)_/¯


Dang, wishful thinking that they were going native on M1. ;_;


Docker Desktop already ran on Macs. This is specifically for the new Apple Silicon support (M1). It's not native, technically, but it feels native the way Docker Desktop works. Basically they manage the VM for you, so you don't have to.


It doesn't feel native at all; performance and networking are very suboptimal.

Haven't tested it on M1 yet, but I doubt the networking challenges will disappear.


There is overhead for sure but if you use x86 containers on an Intel Mac or arm64 containers on an Apple Silicon mac, it's pretty performant.


Can you share any details on M1 perf and resource consumption? Docker vm on mac is noticeably slower and consumes a lot of resources in my experience.


The only thing I can say with confidence is that if you're using ARM containers it is really fast, probably thanks to the Apple Silicon.

I imagine memory consumption is on par with running on Intel. I don't think Docker Desktop can really change that.


Honestly memory is the primary constraint for most of my mac devs running our docker envs.


Can this run a container of windows on Mac M1?


Can you run a container of Windows on an x86 machine? The answer is no, and for the same reason it won’t work on ARM. A “container” is not a virtual machine, you can only run the same Linux executables you would on a normal Linux system.

That said, as another person commented, you can run Windows for ARM in a VM on an Apple M1.


You can run Windows For ARM on M1 Macs right now, I doubt you can get docker to do this without lots of manual effort.


Got a link to proof of this?



Question, is there persistence with that? Or are changes lost once it is closed?


I've been using the ACVM app with a Windows 10 VMDK file. Changes are indeed persisted to the VMDK.


There's no nested virtualization currently, so no virtualization support is provided to VMs; on Windows on an M1 only WSL1 works. Docker Linux containers on Windows require WSL2 instead.

Docker Windows containers aren't available on arm64 Windows yet, but stay tuned...


I'm confused by this comment. I am running Docker Desktop on Mac and have multiple containers running?


It is running in a VM.


Docker has never needed a VirtualBox VM to launch containers on your Mac. It does its own virtualization internally, and will continue to do so.


It actually did at one point; before Docker Desktop there was "Docker Toolbox", which required separate virtualization software. The installer came with VirtualBox by default, but there were options to use Parallels and VMWare as well. This is probably what GP is thinking of.

I'm actually using a version of this setup today in order to run Docker on OS X 10.9.



