This is not about a single Docker file vs a setup script. If you read my post you will see that I describe the steps the author took. And they are plenty.
My guess is that you did not read the article at all.
He was "building Docker images for each of the services". So not a single one. 10 of them. And he signed up for a commercial registry to host them. An additional service he depends on now.
Yet even a single Docker file would not be as simple as a setup script. A setup script on the host OS would install some packages that the host OS will keep up to date. Using a Docker image instead puts the burden on you to keep it up to date.
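To make the trade-off concrete, here's a rough sketch (the package, image and container names are illustrative, not taken from the article):

    # On a Debian-ish host, the distro's update machinery (e.g. unattended-upgrades)
    # keeps patching whatever the setup script installed:
    apt-get install -y nginx      # the setup script installs it once
    apt-get upgrade -y            # routine host updates keep it patched

    # Inside an image the same package is frozen at build time; picking up a
    # security fix means rebuilding and redeploying the image yourself:
    docker build --pull -t myservice .
    docker stop myservice && docker rm myservice
    docker run -d --name myservice myservice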
I agree with this statement that Dockerizing creates more dependencies that you need to track. But...
> A setup script on the host OS would install some packages that the host OS will keep up to date.
This is simply not as easy as you make it out to be. Installing dozens of services from the OS inherently creates a nest of dependencies that is hard to reproduce explicitly on other systems.
Docker, by contrast, provides explicit, isolated environments for each service, so it's far easier to reproduce on other systems. This appeals to me for cloud environments, but Docker on the desktop might be a bit too far for me...
Yes, isolation is a big win. It means I can update the “os” each service resides on independently of each other, so I don’t have to tackle 10 upgrades at once.
It also removes attack vectors and the weirdness that happens when a package sees optional dependencies on the system. E.g., if I need LDAP for one thing, I don't have services in other containers trying to work with LDAP.
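As a rough illustration of that independence (paths and tags are hypothetical):

    # Bumping one service's base OS is a one-line change in that service's
    # Dockerfile (say FROM alpine:3.8 -> FROM alpine:3.9), followed by a
    # rebuild of that single image; the other nine containers are untouched:
    docker build -t api:latest services/api
    docker stop api && docker rm api
    docker run -d --name api api:latest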
Now every time a package in Alpine gets an update you have to update all 10 containers. Because you will have no way of knowing if that package impacts the security of the service running in that container.
Yes, most docker enthusiasts don't do this. They run a bunch of containers full of security holes.
I expect this to become a hot topic as soon as we start witnessing data breaches that have outdated containers as their source.
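To spell out what that maintenance burden looks like in practice (the registry and service names are made up, not from the article):

    # Every base-image security update means rebuilding, pushing and
    # redeploying each of the ten images:
    for svc in web api worker db-backup; do   # ...and the other six
        docker build --pull -t "registry.example.com/$svc:latest" "services/$svc"
        docker push "registry.example.com/$svc:latest"
    done
    # ...then recreating every container from its freshly built image.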
> Now every time a package in Alpine gets an update you have to update all 10 containers. Because you will have no way of knowing if that package impacts the security of the service running in that container.
That's pretty much the baseline when dealing with any software system, whether it's a bare metal install of a distro, a distro running on a VM, or software running in a container.
> Now every time a package in Alpine gets an update you have to update all 10 containers.
All it takes is inheriting the latest version of an image and running docker build prior to redeploying.
I mean, this stuff is handled automatically by any CI/CD pipeline.
If you don't care about running reproducible containers you can also sh into a container and upgrade it yourself.
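Roughly either of these, where the image and container names are placeholders:

    # Rebuild against the latest base image and redeploy:
    docker build --pull -t myservice:latest .
    docker stop myservice && docker rm myservice
    docker run -d --name myservice myservice:latest

    # Or, giving up reproducibility, patch the running (Alpine-based) container in place:
    docker exec myservice sh -c 'apk update && apk upgrade'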
Do you also complain about package managers such as deb or rpm because most debian and redhat users run a bunch of software full of security holes?
Software updates are not a container issue; they are a software deployment issue. I mean, when you complain about keeping packages updated you are in fact complaining about the OS running in the base image.
> That's pretty much the baseline when dealing with any software system
Exactly. And now instead of one system, he has 11.
> All it takes is inheriting the latest version of an image
He is not using "an image". From the article: "After the Alpine Linux 3.9.1 release I noticed the official Docker images had not been updated so I built my own."
> I mean, this stuff is handled automatically by any CI/CD pipeline.
He has not described any CI/CD pipeline involved in his infrastructure. Yet another thing he has to build.
> you can also sh into a container and upgrade it yourself
> He was "building Docker images for each of the services". So not a single one. 10 of them.
10 services, 10 installers, 10 installations.
Where exactly do you see any problem or issue?
> even a single Docker file would not be as simple as a setup script. A setup script on the host OS would install some packages that the host OS will keep up to date. Using a Docker image instead puts the burden on you to keep it up to date.
That's simply wrong on many levels. Yes, a single Dockerfile is as simple as (if not simpler than) a setup script. A Dockerfile is a setup script.
And yes, you can update individual containers or even build updated images.
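For the sake of argument, a minimal sketch of that equivalence (the packages are arbitrary):

    # A host setup script and the Dockerfile for the same service end up
    # holding the same lines:
    #
    #   setup.sh:     apk add nginx openssl
    #   Dockerfile:   FROM alpine:3.9
    #                 RUN apk add --no-cache nginx openssl
    #
    # Building and running the containerised version is then:
    docker build -t myservice .
    docker run -d --name myservice myservice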
Again, you seem to be commenting on stuff you know nothing about.
> Yes, a single Dockerfile is as simple as (if not simpler than) a setup script. A Dockerfile is a setup script.
Sure, but:
a) you have 10 setup scripts rather than 1. This would make sense if you actually wanted different dependencies, OS setup, or whatever for each of your 10 services. But if you've decided to standardise on a common baseline set of dependencies for the sake of consistency (which is a valid choice), then why repeat them 10 times over? (The usual counter, a shared base image, is sketched after point b.)
b) You have the extra intermediate artifacts of the images which just give you one more thing to get out of date, go wrong, or slow down your process. Rather than run script -> get updated things, it's run script -> generate images and then deploy those images. Sure, it's all automatable, but what's it gaining you for this use case?
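To be fair to the obvious counter on (a): the usual answer is a shared base image rather than 10 copies of the same setup steps. A hypothetical sketch (names and paths are made up):

    # One shared base image carries the common baseline:
    #
    #   base/Dockerfile:           FROM alpine:3.9
    #                              RUN apk add --no-cache ca-certificates tzdata
    #
    #   services/web/Dockerfile:   FROM mybase:3.9
    #                              COPY web /usr/local/bin/web
    #                              CMD ["/usr/local/bin/web"]
    #
    # so each service Dockerfile shrinks to a FROM line plus whatever is
    # genuinely unique to that service:
    docker build -t mybase:3.9 base/
    docker build -t web:latest services/web/

That still leaves the intermediate image artifacts from (b), though.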
If you have a single setup script to build, package and deploy all 10 services, and you can't build and/or deploy each service independently, then you have more important things to worry about than figuring out how containers are used in the real world.
Actually, it is, because you're criticizing proper deployment strategies, which are not specific to containers, using a use case that has many technical red flags. You can't simply criticize deployment best practices by giving a blatant anti-pattern as an example. And do note that this has nothing to do with containers at all, because it applies equally well to VM and bare-metal deployments.
To have a productive discussion you have to actually engage. If there's really a "blatant anti-pattern" then it shouldn't be so hard to explain what's wrong with it. Your replies so far have been no more substantial than "you're wrong".