Hacker News | ventana's comments

I assume you would use Oracle Cloud if, for some reason possibly related to legal or competition concerns, you cannot use AWS, GCP, or Azure. It's hard for me to imagine a startup that needs cloud and would onboard to Oracle Cloud rather than one of the top 3 providers.

I actually like Claude's Co-Authored-By: line very much. Even in my personal repositories, where I'm the sole author and the sole reader, I'd like to know whether an older commit I'm looking at was vibe coded, which implies possibly lower quality or weird logical issues in the code.

So, my personal rule is: if I implemented a feature with Claude, I'll ask it to commit the code and it will add Co-Authored-By. If I made the change manually, I'll commit it myself.
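For what it's worth, the trailer is also trivial to search for later; a quick sketch (the commit message here is made up, and the exact trailer text Claude Code emits may differ):

```shell
# A commit made on Claude's behalf carries the trailer as a second
# message paragraph...
git commit -m "Add retry logic to the fetcher" \
           -m "Co-Authored-By: Claude <noreply@anthropic.com>"

# ...so the vibe-coded commits can be listed later:
git log --grep="Co-Authored-By: Claude" --oneline
```

That makes the "was this one vibe coded?" question answerable in one command, even years later.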


One thing I learned over the years is that the closer my setup is to the default one, the better. I tried switching to the latest and greatest replacements, such as ack or ripgrep for grep, or httpie for curl, just to always return to the default options. Often, the return was caused by the frustration of not having the new tools installed on whatever random server I sshed into. It's probably just me being unable to persevere in keeping my environment customized, and I'm happy to see these alternative tools evolve and work for other people.

This sort of thing is a constant tension, and the optimum is likely different for every individual, but it's also important not to ignore genuine improvements for the sake of comfort/familiarity.

I suspect, in general, age has a fair amount to do with it (I certainly notice it in myself), but either way I think it's worth evaluating new things every so often.

Something like rg specifically can be really tricky to evaluate because it does basically the same thing as the built-in grep, but sometimes just being faster crosses a threshold where you can use it in ways you couldn't previously.

E.g. some kind of find-as-you-type system: if it took 1 s per letter it would be genuinely unusable, but at 50 ms it crosses that threshold and becomes an option. Stuff like that.


Story of my life, basically. It is just too much effort to keep my customizations preserved.

I would say some of the 'newer' tools like rg and jq are just about essential.
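jq especially has no real built-in equivalent for working with JSON on the command line. A one-line illustration (the JSON document here is made up):

```shell
# Extract a single field from a JSON document; -r strips the quotes.
echo '{"tool": "jq", "essential": true}' | jq -r '.tool'
# prints: jq
```

Piping curl output through a jq filter is the kind of thing that's painful enough with grep/sed that the extra tool earns its keep.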

I might be missing the point of the article, but from what I understand, the TL;DR is "cover your code with tests", be it unit tests, functional tests, or mutation tests.

Each of these approaches is just fine and widely used, and none of them can be called "automated verification", which, if my understanding of the term is correct, is more about mathematically proving that the program works as expected.

The article mostly talks about automatic test generation.


That's actually one thing that always prevented me from following the standard pathway of "write a design document first, get it approved, then execute" during my years at Google.

I cannot write a realistic, non-hand-wavy design document without having a proof of concept working, because even if I try, I will need to convince myself that this part and this part and that part will work, and the only way to do that is to write actual code. And once you've done that, you pretty much have the code ready, so why bother writing a design doc?

Some of my best design documents (in terms of performance consequences) were either completely trivial from a code-complexity point of view, so that I did not actually need to write the code to see the system working, or were written after I already had a quick-and-dirty implementation working.


That’s why I either started with the ports and adapters pattern or quickly refactored into it on spikes.

You don’t have to choose which flavor of DDD/Clean/… you want to drink; just use some method that keeps domains and use cases separate from the implementation.

With just the shapes and domain-level tests, the first pass on a spec is easier (at least for me), and I also found the feedback was better.

I am sure there are other patterns that do the same, but the trick is to let the problem domain drive, not to choose any particular set of rules.

Keeping the core domain as a fixed point does that for me.


I am very similar in this respect; however, once I get to a place where I am implementing something very similar to something from my past, it becomes easier to draft a doc first because I have been down that path before.


The article has so many "it's this, not that" contradictions (I counted seven!) that I seriously suspect it was written with a lot of assistance from LLMs.

One thing not mentioned in the article is that now that many software engineers are back in their offices, we get the regular fall/spring viral infections spreading among employees who feel obliged to come in even with mild cold symptoms. If RTO is about productivity, I wonder whether anyone has accounted for the productivity drop caused by viruses in the workplace.



I'm an ex-FAANG engineer working for a smaller (but still big enough) company.

At work we use one of the less popular solutions, available both as a plugin for vscode and as a claude code-like terminal tool. The code I work on is mostly Golang, plus some older C++ that uses a lot of custom libraries. For Golang, the AI is doing pretty well, especially on simple tasks like implementing some REST API, so I would estimate the upper bound of the productivity gain to be maybe 3x for the trivial code.

Since I'm still responsible for the result, I cannot just YOLO and commit the code, so whenever I work on simple things, I'm effectively a code reviewer for the majority of the time. That is probably what prevents me from going above 3x productivity: after each code-review session I still need a break, so I go get coffee or something. It's still much faster than writing all the code manually, but the mental load is also higher, which requires more breaks.

One nontrivial consequence is that the expectations are adapting to the new performance, so it's not like we are getting more free time because we are producing the code faster. Not at all.

For the C++ codebase though, in the rare cases when I need to change something there, it's pretty much business as usual; I won't trust the code it generates, and would rather write what I need manually.

Now, for personal projects, it's a completely different story. For the past few months or so, I haven't written any code for my personal projects manually, except for maybe a few trivial changes. I don't review the generated code either; I just make sure it works as I expect. Since I'm probably too lazy to configure a proper multi-agent workflow, what I found works great for me is: first ask Claude for the plan, then copy-paste the plan to Codex, feed its feedback back to Claude, and repeat until they agree; this process also helps me stay in the loop. Then, when Claude implements the plan and makes a commit, I copy-paste the commit sha to Codex and ask it to review, and it very often finds real issues that I probably would've missed.

It's hard to estimate the productivity gain of this new process, mostly because I would never have started the majority of the projects I worked on these past few months without Claude. But for those I would've started anyway, I think I'm somewhere near 4-5x compared to writing the code manually.

One important point here is that, both at work and at home, it's never a "single prompt" result. I think about the high level design and have an understanding of how things will work before I start talking to the agent. I don't think the current state of technology allows developing things in one shot, and I'm not sure this will change soon.

My overall attitude towards AI code generation is quite positive so far: I think, for me, the joy of having something working so soon, and the fact that it follows my design, outweighs the fact that I did not actually write the code.

One very real consequence of that is that I miss writing code by hand. I've started going through older Advent of Code years where I still have some unsolved days, and even solving some LeetCode problems (only interesting ones!), just for the feeling of writing code the way we all did before.

