Related to this topic: When running integration/e2e tests, setting up the environment (all the required services, data stores etc.) in the right sequence, loading them with test data and so forth can be thorny to automate.
Good automation around preparing/provisioning the testing environment is a necessary companion to the testing tools/frameworks themselves.
Most commonly, fully-capable testing environments aren't available during the inner loop of development (where the dev setup can usually only run unit tests or integration tests for 1-2 services + a database).
Because of this, people tend to rely solely on their CI pipelines to run integ/e2e tests, which can slow things down a lot when one of those tests fails (since the write/run/debug loop has to go through the CI pipeline).
As an industry, I think we should start taking automation and developer productivity more seriously—not least when it comes to writing and debugging tests for complex distributed systems. The more we can lower the marginal cost of writing and running tests, the more effective our test suites will become over time.
Shameless plug: My company (https://garden.io/) is developing a framework and toolchain to bring the full capabilities of a CI pipeline to the inner loop of development, so that developers can efficiently run all/any test suites (including integ/e2e tests) in their personal dev environments.
We do this by capturing the full dependency graph (builds, deploys, tests, DB seeding etc.) of the system in a way that can power CI, preview environments and inner-loop development.
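To illustrate the dependency-graph idea, here's a minimal sketch (purely hypothetical task names, and not Garden's actual configuration format or engine) of how a stack's tasks can be topologically ordered so each step runs only after its dependencies:

```python
from graphlib import TopologicalSorter

# Hypothetical task graph: each entry maps a task to the tasks it depends on.
# Names are illustrative only.
tasks = {
    "build-api":  [],
    "deploy-db":  [],
    "seed-db":    ["deploy-db"],
    "deploy-api": ["build-api", "deploy-db"],
    "e2e-tests":  ["deploy-api", "seed-db"],
}

# static_order() yields every task after all of its dependencies,
# so builds, deploys, seeding and tests can run (or be parallelised) safely.
order = list(TopologicalSorter(tasks).static_order())
print(order)
```

The same graph can then drive CI, preview environments, or an inner-loop watcher, since the ordering logic is identical in all three cases.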
The issue isn't tooling, it's hardware resources and in some cases licensing.
While the problem isn't inherently trivial, the part the tooling can solve is: the order of startup, which is usually handled with a wait-for-it startup script, since these services generally talk to each other over the network.
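A wait-for-it style check is only a few lines; here's a hedged sketch (hypothetical helper, assuming the dependency speaks TCP on a known port):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0) -> bool:
    """Poll until host:port accepts TCP connections; give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True  # the service is up and accepting connections
        except OSError:
            time.sleep(0.2)  # not up yet; retry shortly
    return False
```

A real setup would chain checks like this in dependency order (DB, then services that need the DB, and so on) before kicking off the test suite.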
The real challenges, like determining seed data, are too project-specific to be abstracted away.
Not to take away from your problem domain: it would be nice to have a framework so that developers don't have to wire up the plumbing for these automations manually anymore. It's just not going to solve the issue that developers will have to wait for the CI pipeline to get their results.
> The issue isn't tooling, it's hardware resources and in some cases licensing.
Hardware resources are definitely an issue. That's why we generally recommend using remote development environments, which aren't as resource-constrained as the local dev machine. Making that comparably smooth to the local dev experience (e.g. for live reloading of services without rebuilding containers) needs some clever tooling (which is partly the reason we're building our product).
With production-like remote dev environments, you get the same capabilities as your CI environment, but can run test suites ad hoc (and without having to spin them up and tear them down for every test run).
There's no fundamental reason why CI environments should have capabilities that individual dev environments can't have—it's all a matter of automation in the end.
> The real challenges, like determining seed data etc is too project specific to be abstracted away.
Very much agree with that! The generic stuff (dependencies, parallel processing, waiting for things to spin up etc.) should be taken care of by the tooling, but without constraining the project-specific stuff (which is highly individual).
> Hardware resources are definitely an issue. That's why we generally recommend using remote development environments, which aren't as resource-constrained
On the contrary, I almost always advocate for having a rack full of machines in a/the office, running some kind of workload management (kubernetes, vmware/proxmox or a combination of the two).
Hardware is dirt cheap, and plenty fast these days.
If you have an office, chances are you already have a server room (enterprise-grade network switches require cooling anyway), so you might as well throw a bunch of physical machines in there.
The only real issue I see is that most developers have literally no idea of the runtime resources needed by their code, for a number of reasons (like runtimes hiding that kind of information, and the general mindset that pushing out new releases/features is more important than tuning existing ones), so in the cloud developers will just provision bigger and bigger environments. It's all fun and games until two things happen: (A) environment provisioning in the cloud takes a long time, just like on developer machines, and (B) company earnings are eroded by cloud infrastructure bills (on-prem infrastructure OTOH provides tax shielding).
I would expand your point to add that another missing key part of dev on big distributed platforms is being able to run parts of the system locally.
For some shops it is a lot harder than a simple docker-compose (think of envs with tens or hundreds of microservices); no laptop can handle such a load, and it is critical for devs to be able to work on their own machine, otherwise you lose a lot of time with shared dev envs, SSH tunnels or rsync...
I agree that CI pipelines are a pain in a distributed system, but I hear more complaints from devs not being able to locally test some new feature in serviceA that depends on 10 other services + DBs to work.
Running locally is great, but I would already be happy if I could step through a CI/CD pipeline with a debugger. This includes stepping through the services the pipeline calls. With breakpoints, too.
Garden supports in-cluster building, using buildkit or kaniko.
This way, you don't need to have Docker or k8s running on your dev machine as you're working.
It also automates the process of redeploying services and re-running tests as you're coding (since it leverages the build/deploy/test dependencies in your stack).
We also provide hot reloading of running services, which brings a similarly fast feedback loop as with local dev.
The idea is to have a dev environment that has the same capabilities as the CI environment, and to be able to run any/all of your tests without having to go through your CI system (which generally involves a lot more waiting).
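The watch-and-rerun part of such a loop can be sketched roughly like this (a simplified polling watcher, purely illustrative and not Garden's actual implementation; the `on_change` callback stands in for "redeploy the affected services and re-run their tests"):

```python
import os
import time

def snapshot(root: str) -> dict:
    """Map every file under `root` to its last-modified time."""
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtimes[path] = os.path.getmtime(path)
    return mtimes

def changed_paths(before: dict, after: dict) -> list:
    """Files that were added, removed or modified between two snapshots."""
    return sorted(p for p in before.keys() | after.keys()
                  if before.get(p) != after.get(p))

def watch(root: str, on_change, interval: float = 1.0) -> None:
    """Poll `root` for edits and invoke `on_change` with the changed paths."""
    seen = snapshot(root)
    while True:
        time.sleep(interval)
        current = snapshot(root)
        changed = changed_paths(seen, current)
        if changed:
            on_change(changed)  # e.g. rebuild/redeploy downstream tasks
        seen = current
```

In practice you'd combine this with the dependency graph, so a change to one service only re-runs the builds, deploys and tests downstream of it.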
We already have a home-grown kubernetes dev environment in which every developer/QA can spin up all of our services in a dedicated namespace, but it's a bit tedious and spaghetti-like, as it grew organically over time (from a 15-dev team to a 70+ one). Garden looks like a nice alternative solution. Do you think Garden Core is enough to get started? (We like to get our hands dirty.)
Sounds like Garden Core could be a great fit here.
The motivation behind Garden was that, like you, we had built our own home-grown kubernetes dev environments, but felt like there should be a polished, general-purpose framework + tool for this sort of thing.
Hi, another Gardener here. Garden Core should indeed be enough to get started. I'm trying to keep this as factual and non-pitchy as possible for the sake of providing context—the enterprise product gets you:
• RBAC and secrets management (also makes it possible to control which users have access to which types of environments)
• Direct integration with GitHub or GitLab, so you could trigger something to happen in Garden based on a VCS event
We have an easier time categorising a disorder as a disease rather than a moral failure when we see clear neurobiological correlates.
But as we know (or at least currently believe), the separation of disorders into "hardware" (non-moral/impersonal) and "software" (moral/personal) is ultimately an illusion: The material substrate for our personality is precisely our neurobiology.
For example, ΔFosB overexpression in the Nucleus Accumbens following repeated reward stimulus is "the most significant biomolecular mechanism in addiction since its viral or genetic overexpression (through chronic addictive drug use) in D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for many of the neural adaptations and behavioral effects (e.g., expression-dependent increases in self-administration and reward sensitization) seen in drug addiction" (https://en.wikipedia.org/wiki/FOSB#Role_in_addiction)
If we reach a point where we also find sufficiently convincing neural correlates for the more "high-level" aspects of addictive psychology and personality, wouldn't that eventually lead to us treating the whole complex in non-moral terms?
My feeling is that we intuitively choose the conceptual structure that we feel is most functional, given our state of knowledge. Morality is just another model for predicting and interacting with the behaviour in question, albeit less formal and more heuristic-based.
Even if the mouse were quicker on average for long jumps, the moving of the hand from mouse to keyboard feels like more of a combo breaker, more distracting. Speed is very important, but equally important is for the editing experience to have a low attention footprint to leave more space for the train of thought behind the changes being made.
As someone who uses a keyboard 98% of the time, but who uses a GUI editor and the mouse when it makes sense, I'd say the mouse is useful for several things, and what it IS useful for WILL be faster than what someone can do with a keyboard.
The clear win is using a powerful GUI-based editor that has tons of keyboard commands. Bonus points for being on Windows or Linux (i.e., not Mac OS) so you can navigate/BROWSE the menus from the keyboard.
I don't know Sublime myself; I'll have to check it out. For some reason I thought it was Mac-only, but I see that it's cross-platform. My editor-of-choice for years has been Visual Slickedit, but I'm not married to it. I just haven't found anything half as good in years of searching. And I keep trying other editors, too, since there are things I find imperfect about it (the scripting language is usable, for instance, but has its own syntax and quirks).
Tried Zeus probably 4 years ago. I really have tried a lot of options, but I haven't been back to Zeus after the first try. I can't remember any specifics of what turned me away from it, though the thing that SlickEdit does better than almost everyone else is tagging. I use boost::shared_ptr (and now the C++11 version) a LOT, and so completion is useless in anything that doesn't know how to complete a template class. ctags and its ilk, last time I checked, were worthless in this regard, and I have a vague sense that Zeus relied on those for tagging? Yes, looking at the web site, you're using ctags. Will "shared_ptr<Foo> foo; foo->" autocomplete for members of Foo?
SlickEdit also Just Works with tagging. You put the files into a project, and everything is instantly cross-referenced. "Where is Foo::init() used? No, I don't want Blah::init(), just Foo::init()." SlickEdit, at least SOME of the time, can get that right, and with no configuration of external tools.
Aside from that, I think one of the greatest drawbacks of Slickedit is that it isn't open source, so when things go wrong I can't just fix it. Slickedit IS cross-platform, at least, with native Linux and Mac OS versions, and that ALSO is important to me.
Lua is the Right Answer for scripting, IMO. The rest of the options are actually a liability for me, since it means that if I'm editing someone else's script, I might have to deal with Python, TCL, or JavaScript.
I'm rambling now. I feel like I have an "Editor Manifesto" in my head that wants to get out, but I don't have time right now to do it justice. It's been years since I've looked, so I'll take another look at Zeus when I get a chance, just as I'll take another glance at Sublime and the rest.
I'll have to give that a try. I've had countless people tell me it COULD be done, but when I drilled down what they were really telling me was that there were keyboard shortcuts.
If you're telling me I can hit Ctrl-F2, then that's at least a step in the right direction. Thing is, it's several steps behind where I would be on Windows (or most Linux desktops), because I can just hit Alt-F and it opens the file menu, or Alt-T for the tools menu, or what-have-you.
I don't use the menu much so I tend to use the trackpad for that. You could also try KeyCue [http://www.ergonis.com/products/keycue/ ]. You press the Command key for a while and all the options are shown in a popup.
Nice! I'll check that out the next time I'm forced to work on a Mac (it happens when I do any iOS development...oh how I hate Apple...). KeyCue is a step in the right direction, UI-wise, regardless.
His point about touchpads is spot on though. I often find myself jumping around by clicking when I code with my laptop and no peripherals (there's no "combo breaker" since my hands are sitting so close to the trackpad anyway), but when I get home and plug in my mouse and external keyboard, it becomes much more of a hassle to click and I switch to a more keyboard-centric workflow.
I tend to memorize the spatial location of keys (end, home, parentheses, cursor keys) and of course, can touch type. Going back from using the mouse to touch typing is so immediate that I basically never notice it.
I think your last sentence supports Sublime Text even more than it supports Vi(m).
In fact, it's the opposite. Using keyboard commands for movement is more distracting (and measurably slower) than using the mouse. It is precisely that the mouse is less distracting that tricks the mind into thinking the keyboard is faster. See AskTog:
[If you think this is incorrect, providing links to scientific evidence is more productive than downvoting or stating that your own subjective experience is different.]
You're not reading some of the most cogent arguments here: most vim/Emacs/etc power users couldn't even tell you what keyboard command to use precisely because they don't think about it. It's reflex, muscle memory, completely out of mind so that it doesn't get in the way of getting code from your mind to the machine. Maybe it's that way for some mousers as well, but given that code is text, it's kind of doubtful. There's also the fact that if you have to break your concentration to context switch to the mouse, you will lose your flow.
Try coding in an editor, any editor, for 8, 9, 12 hours a day and see if you don't start forgetting the keyboard shortcuts because they become reflexive. Vi and Emacs are just hyperdesigned to enhance this effect.
EDIT: And I don't care that you link to Bruce Tognazzini, a GUI designer, when millennia of musicians have known that they don't think about what combinations of fingers they press to get an F#, they just play a glissando with an F# in the middle.
The thinking is subconscious: you don't know you're doing it and you "forget" how long it takes, which is why you perceive the keyboard as faster. If you bother to read the research I've linked to, it covers all of your objections. I know it's difficult to let go of subjective impressions, but the stopwatch is always right compared to our own internal sense of time.
It's not an appeal to authority: that would be if I were appealing to the authority for the authority's sake; but I am appealing to scientific research presented by the authority. I'm fairly certain you'd agree that appealing to scientific evidence is valid, as you've tried to present it to me as part of your argument. I think the downvotes are because you running some kind of "experiment" on your own is different from a human interface researcher's actual research.
Fair point, however the choice of editor/editing mode is a personal one. Are my personal results of my personal experiment not the only ones that matter to my personal choice?
I rock back and forth, depending on the languages I'm using. Emacs does more, but is in my experience more kludgy, even after having in aggregate spent many days customizing it (maybe it would rub better if I wasn't hardwired to the vi/vim modal editing/motions method). Vim feels cleaner and less annoying, but sometimes the easy integration with external tools tips the balance in emacs' favor. E.g. I use vim for Rails apps (where I didn't really feel enough difference from vim + terminal), but if I were writing Clojure, stuff like the repl integration would probably mean emacs.
Sublime is really pretty, and has great functionality out of the box.
Ultimately, vi/vim's contribution is the modal editing method (and derived things like motions), which can be transplanted to any other IDE or editor that cares to support it. But for the fundamentals - editing and switching between files - I've so far found vim to be the cleanest and most natural.
Emacs can be pretty pretty--perhaps not actively pretty, but very elegant in a minimalist sort of way. I'm pretty happy with how mine looks[1], for example.
Indeed! I disagree with people who think vim and emacs are ugly: they have their own nice 8bit-esque aesthetic which I'd expect hackers to like, given that they've chosen to stare at terminals all day (I've rolled my own colorscheme for both vim and emacs). And they make good use of screen real-estate, very important when coding on a laptop.
I haven't tried Evil, in my last emacs phase it wasn't mature yet IIRC - maybe I should give that a go and see if it tips the balance yet again.
> Emacs can be pretty pretty--perhaps not actively pretty, but very elegant in a minimalist sort of way. I'm pretty happy with how mine looks[1], for example.
Obligatory vim can be pretty too, and I am happy with how mine looks.
> What's your theme, BTW? I also like your custom status bar shape.
My screenshot doesn't have custom status bar shapes. GP does. Anyway, for that, you need powerline (available for both vim and emacs).
My color theme is ir_black https://github.com/wgibbs/vim-irblack If you are using it in terminal, you also need to set up ir_black theme for the terminal.
If you want ⌘S, ⌘C, ⌘V etc., try Macvim (https://code.google.com/p/macvim/). I use it for longer coding sessions (more colors, faster rendering when in fullscreen with split panes), and terminal vim over ssh when working on a remote server. Also, I map Caps-lock to ESC on my mac, easier to reach (and who uses Caps-lock anyway).
I wonder if this could have some great research applications. A year ago I watched some lectures from Stanford on ethobiology (i.e. the branch dealing with the biological processes underlying behavior) on YouTube by Robert Sapolsky, and he talked a lot about the difficulty of figuring out what parts of the brain (and which interplays of brain centers) are responsible for behavioral patterns, especially when it comes to the more complex things. One joke was something like: "You know that feeling when someone calls you and you don't really want to talk, but feel uncomfortable with saying that actually you're busy and would rather just read a book than talk? I think we've found the brain center for that." We have learned a lot from what happens when people have certain parts of their brains damaged. But maybe we could learn much, much more about the brain by being able to fiddle around with many different kinds of signals to different brain centers, and trying out hypotheses by stimulating several centers simultaneously in order to produce (or not produce) certain behaviors?
> Without the state, corporations (and capitalism) are not possible.
A state is not necessary for enforcing contracts - that could be done by private parties, as in anarcho-capitalism. Also: Cooperatives can exist within a capitalist system, but the converse is not true. So I guess it comes down to whether or not all property as defined by the status quo should be redistributed or reallocated, presumably by force; if that were not the case, there wouldn't really be any disagreement between left-anarchists and anarcho-capitalists, right?
I should have been clearer: I'm not advocating anarcho-capitalism, just having an academic discussion. My point was that a state isn't necessary to enforce contracts, although the competing entities that would theoretically replace the state are quite state-like in many ways as you point out. The Icelandic Commonwealth was anarcho-capitalistic though, wasn't it (anarchy + property rights)? Worked okay for a few hundred years.
I do see your point that anarcho-capitalism isn't really anarchism, though. The societies they envision are radically different.
Fair enough. Sorry, dealing with an endless stream of ancaps who demand that yes, they are 'the real' anarchists and yes, their vision of the future would lead to grand utopia.
> The Icelandic Commonwealth was anarcho-capitalistic though, wasn't it (anarchy + property rights)? Worked okay for a few hundred years.
I do know that Iceland is discussed in these circles, and I only know that both sides go "yes it is no it isn't." I haven't studied it enough to make my own call. Primarily, as far as I'm concerned, if it takes a state form, it's just as bad, so I haven't spent any time in this area.
I know you're trolling, but for the benefit of everyone else, anarchism actually existed in real-world Catalonia, Ukraine, and arguably Paris, for multiple years. You are factually incorrect.
I don't have a ton of interest in repeating left-anarchist/anarcho-capitalist arguments with you though. It's clear you have the left-anarchist bullet points down.
I chalk up my anarchist period to indiscretions of youth. Nowadays I am more concerned about choice and innovation in government than in abolishing government.
If you want a lisp that compiles to JS, ClojureScript is also a good option. It's ready for use, and there's a lot of smart people working on making it better. The Clojure/ClojureScript community is also very good - intelligent and friendly.
Good post! Generalizing the point, maybe a good description of sophistication is: using one's intuition in a natural way to explore the boundaries of one's knowledge, having previously refined that intuition by rigorous study and experiential learning. It's through an increasingly refined intuition about things within our horizon of knowledge that we're able to focus our conscious thought on the border of that horizon and expand it.