Hacker News | btmcnellis's comments

A Pontiac Aztek.


He was president and COO from 1997 to 2004 and CEO from 2007 to 2013.


Numenor was explicitly an allusion to Atlantis.


Slack supports it, though I suspect most enterprises (on Slack and elsewhere) use SSO instead, if they’re actually paying for the enterprise tier.


I'm shocked that no one has said Terraform yet. It has its own declarative DSL, which some people complain about (because people complain about everything), but it works well for what it's intended to do.

Providers can be created for anything with an API, from the major cloud providers to k8s to anything else.

No agent is required: it just writes state to a file, and then it diffs that file against the actual state every time it runs. (In practice, you'll probably want to put that state in a remote location like an S3 bucket, but that's very easy to do. And if you're the only one using it, you can just save it locally, which is the default behavior.)
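As a rough sketch (the bucket and table names here are made up), pointing state at S3 is just a backend block:

```hcl
terraform {
  backend "s3" {
    bucket = "my-company-tfstate"      # hypothetical bucket name
    key    = "prod/terraform.tfstate"  # path of the state file within the bucket
    region = "us-east-1"
    # optional: a DynamoDB table for state locking when several people run applies
    dynamodb_table = "terraform-locks" # hypothetical table name
  }
}
```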

Depending on your use case for Ansible, it could be a very good fit.


Terraform is great for certain tasks, but even they advise against using it for local execution. Whatever you use it for, you really need a provider, and the module system isn't very intuitive either. Ansible has/had potential, but it sucks in a lot of ways too. Unfortunately, as much as I dislike certain aspects of it, it really is the best generic automation tool available at the moment.


Yeah, if you need to manage individual servers/VMs, it's not a great fit. I've used cloud-init files to configure EC2 instances on startup with things like packages and SSH keys, and that works pretty well if you can treat those servers as if they're immutable. But if you need to get in there and run something, it's not quite a replacement for Ansible.
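For illustration, a user_data stanza along those lines (the AMI ID, package, and SSH key are placeholders) might look like:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  # cloud-init consumes this on first boot: install packages, authorize a key
  user_data = <<-EOT
    #cloud-config
    packages:
      - nginx
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...placeholder user@example.com
  EOT
}
```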


Ansible works for very small-scale projects, but it doesn't scale to larger, unreliable clusters. You probably need to move to Saltstack if you need any real scale.


Yet people use it daily to manage 10k-100k+ servers/devices.

The term "scaling" has very different meanings depending on context and how one product scales is very different from another.

You could set up a context that favors push vs. pull and vice versa; you can also see different products scaling well or poorly depending on slight variations in context and implementation.


I am highly dubious that it would be possible to manage 100k servers, especially when interacting with large numbers of them at a time. The way Tower collects results in a thread pool, assuming success, simply does not work at scale. I tried and tried; I fixed many bugs and got to about 4,000 hosts before changing to another platform.

If almost every server is reliable, I am sure it would work fine. That is not going to happen at scale.


Terraform does the job but it's pretty dirty and unreliable. I've had so many cases where a plan looks all great, PR gets approved and merged, and then something happens during the apply causing it to fail because all validation is done in the cloud API, not in the provider code.

It's another tool pretending to be declarative.


Yeah, we had a PR bomb recently in a way that "plan" could have *trivially* caught had it used any of the "Get*" APIs from the cloud provider to ask about the current situ.

I appreciate that "the map is not the terrain," and that "plan" is speculating about a future configuration of the world, but come on -- if "terraform plan" is going to require _live credentials_ to run, and then only use those to enumerate the active regions, what are we even doing here?!


Shouldn't a `terraform plan` tell you that? If not, then the actual state of the infra differs from what's in the Terraform state. I've had issues with version changes in the past, needing to update state files and all that malarkey.


No, that's kind of my point. Terraform looks sexy and declarative on the surface but it's really just turning HCL into cloud API calls where the actual logic happens. Once you've got a few hundred lines the wheels start falling off. If it were truly declarative it wouldn't need to store what it knows about the existing infrastructure in a tfstate file.

Terraform started off as a cool idea with good principles and over time has morphed into a shitty scripting language for managing multi-cloud infra without clickops.


I'll do you one better: it's turning HCL into *an opaque golang intermediary*[1] of cloud API calls

It's like a game of telephone where every new participant in the chain is one more place for "let me help you" to turn into "what the hell was that?"

1 = and that's not even getting into the tire fire of the providers being maintained either by some Internet rando or by an already overloaded team trying to get PRs through and out to release. I believe the recent "we're not reviewing PRs anymore, exhausted" was scoped to the hashicorp/terraform repo specifically, but it could very easily also apply to every code-gen shim that sits between TF and the underlying cloud SDK.


You'll find a lot of places use Terraform alongside a config management tool. Terraform is great for building out cloud infra (not just instances but load balancers, object storage, etc.), but when it comes to maintaining system configuration and application state, it's less optimal.


> Eventually you outgrow the cloud provider and go into a data center. By that point you’re talking number of racks vs VMs.

How many companies actually hit this stage? I can only think of a few, and usually it's because they have very specific hardware requirements (e.g. Dropbox's whole business is file storage, or if you're doing something that requires tons of GPUs).


> How many companies actually hit this stage?

In practical terms you should expect to never hit it. The point where such scaling (also the main value add of aws/azure) really matters and you start looking at an entire DC to lease, you've arrived in the realm of speculative fiction. You should not plan to get there, just as you should not plan to get a winning lottery ticket.


Companies rarely hit this. I've done work at a few startups and we never hit that point.

Also, it's about the priorities and goals of the company. Security and control are the main reasons I see companies migrate to data centers. Generally, things like GitHub Enterprise are being used.


This is true if the slides are for a presentation, but unfortunately, many people in business (*cough* consultants *cough*) use the deck as their deliverable and cram all the information in there. A memo or report would be better in most cases, but the culture is what it is, and at the end of the day, if the CEO expects a deck, the CEO is going to get a deck.


The big problem with one-person packages isn't so much security as it is support. I have been burned more than once by old applications where key features rely on random packages with one maintainer who disappeared years ago. At least with a group, you have options to keep things moving without having to fork the library yourself.

(Of course the root cause here is arguably too much reliance on third-party dependencies, but searchable dropdowns are _such_ a pain to make on your own, and it's so tempting...)

The Sangria GraphQL library in Scala ran into a version of this. The libraries were primarily maintained by one person, who wrote the vast majority of the code and was the only person with write privileges in the main repos. Sadly, he passed away unexpectedly, and it took months (maybe a year or so) before his colleagues and other contributors were able to get access to the GitHub org.


Well, for what it's worth, we have a lot of dependencies maintained by Microsoft, of all companies, with lots of production-breaking bugs that they're not too interested in fixing or letting us fix. Even getting fully-functional PRs (with good test coverage and community support) looked at takes a lot of work and time, let alone getting fixes after reporting issues.

One of those packages is a JS package that is hosted by them, so we can't even fork it and host ourselves.

On the other hand, with simple packages that get abandoned, we just fork, publish ourselves with another name or namespaced, and it's solved.


Solo maintainer vs. organization is definitely an imperfect heuristic for long-term support. But it's a decent approximation for dependencies that are low ROI but potentially high impact if they break, like a UI widget that gets used everywhere in your app.

It's the problem with any third-party dependency (ask anyone who's used certain Google products). But then if you build everything in-house, a) it's expensive, and b) you end up with homegrown frameworks written by somebody who left the company five years ago, which everyone is now afraid to touch.

The laws of software thermodynamics come for all of us. Eventually, old systems decay, and you need to roll up your sleeves and do the work to keep them going.


> But it's a decent approximation for dependencies that are low ROI but potentially high impact if they break, like a UI widget that gets used everywhere in your app.

Not really; it's not decent at all. What is a great approximation, however, is the heuristic presented by the grandparent poster: projects that are easy to audit, easy to fork (if necessary), and that don't have outrageously large dependency trees. Everything else is a liability.


There is a big difference between finding work for yourself and finding enough work to keep a whole team employed, as you would be doing if you ran an agency.


Python is a lot older and a lot more widely used, so it’s not as easy to just change things.


Python 3 is pretty close to Go in age and was purposely compatibility-breaking, so it could easily have changed things then, and the developers definitely knew about the problem at that point.


The difference between 2 and 3 is way smaller than people make it out to be. I migrated a large Python 2.7 project to Python 3.8 and it wasn't a particularly painful experience. I feel that way more effort should have been directed at making 2 and 3 compatible.


It took more than a decade for the ecosystem to migrate. IIRC, a lot of effort went into making them compatible, but some things are just fundamentally incompatible and also very difficult to automate (e.g., string encodings).
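As a toy sketch of that point (not from the thread): Python 2 silently coerced between byte strings and text, so code like the following "worked" there, while Python 3 forces you to decide what the bytes actually encode, a decision no migration tool could make for you.

```python
data = "café".encode("utf-8")  # bytes, e.g. read from a socket or file

try:
    # In Python 2 this concatenation silently succeeded;
    # in Python 3, mixing str and bytes raises TypeError.
    greeting = "hello " + data
except TypeError:
    # The fix requires knowing the encoding -- here we assume UTF-8.
    greeting = "hello " + data.decode("utf-8")

print(greeting)  # hello café
```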


I learnt Python in 2013, when Python 3 still had some serious growing pains. I switched from 2 to 3 almost overnight circa 2015 or 2016.

In those last few years, I couldn't help but feel they were very close to making 2 and 3 play nice together, but the idea ironically lost traction because Python 3 adoption had accelerated so much.


Python 3 is easy for a Python 2 programmer to pick up, but that doesn’t mean it’s easy to port large systems from one to the other.

