
Interesting read, and it may be working well for Raycast, but it didn’t resonate with me. The main drawback of code reviews, as they correctly note, is a lack of velocity due to engineers looking at code review as a “not my real work” thing. That’s a cultural issue in my opinion.

At my current startup, we are trying something different: the code reviewer fetches the feature branch and writes tests for the change as part of the review. As ridiculous as that sounds, it’s been working better than we’d imagined.

First, it’s really forced us to design modular and testable APIs. Separation of concerns at the file or library level gives a false sense of good, modular, reusable design. However, having an engineer write a test for your implementation (essentially a second client to consume your interface) really puts this to the test (!). The quality of feedback emerging from this process is often much richer than what a “skim the diff” code review offers.
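To make the "second client" idea concrete, here's a toy sketch of what a reviewer-written test can look like. Everything here is made up for illustration (we don't actually have a `parse_duration` helper); the point is that the reviewer exercises the interface from the outside and probes edge cases the author may not have thought about:

```python
def parse_duration(s: str) -> int:
    """The (hypothetical) implementation under review: '1h30m' -> seconds."""
    units = {"h": 3600, "m": 60, "s": 1}
    if not s:
        raise ValueError("empty duration")
    total, num = 0, ""
    for ch in s:
        if ch.isdigit():
            num += ch
        elif ch in units and num:
            total += int(num) * units[ch]
            num = ""
        else:
            raise ValueError(f"bad duration: {s!r}")
    if num:
        raise ValueError(f"missing unit in: {s!r}")
    return total

# Reviewer-written test: consume the API as a second client,
# poking at inputs the author's own happy-path tests might skip.
def test_parse_duration():
    assert parse_duration("1h30m") == 5400
    assert parse_duration("45s") == 45
    for bad in ("", "90", "h1", "1x"):
        try:
            parse_duration(bad)
        except ValueError:
            pass
        else:
            raise AssertionError(f"expected ValueError for {bad!r}")

test_parse_duration()
```

If the reviewer can't write a test like this without reaching into internals or standing up half the app, that's exactly the design feedback we want surfaced before merge.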

Additionally, writing the test suite is an “unavoidable” task for engineers. This helps in two ways. First, we don’t put it off as an “end of the day” item, as happens with traditional code reviews, where you can spend as much or as little time as you have (review is a ‘compressible’ task; writing tests is not). Second, people send smaller, focused, easy-to-test PRs as a courtesy to the reviewer - and perhaps in part due to what we call MARBs... mutually-assured review-bombs :)



> At my current startup, we are trying something different: the code reviewer fetches the feature branch and writes tests for the change as part of the review. As ridiculous as that sounds, it’s been working better than we’d imagined.

How is this different from the "traditional" QA-writing-tests-for-devs process?


QA writing unit tests for devs is not “traditional” - it’s a signal to leave the place ASAP.

As an engineer, you’re expected to write, review, and test code. We simply shuffle responsibilities such that you usually test other engineers’ code and not your own (there is some discretion involved - for very small changes we often write or tweak the test ourselves)


Yeah, that's why I put "traditional" in quotes. Rather "outdated", I guess? But whatever - the question is how this is different. The engineer who writes the code is still not the one who tests it, or am I misunderstanding this?


Ack. You’re right that the engineer who writes the code is not the one who writes the test, but recall that all of this happens at the code-review phase.

When QA finds holes while writing unit tests (rare), it goes on a backlog. For us, if the reviewer finds a bug or finds the code hard to test, your code isn’t getting merged. We catch both errors and design issues (especially premature optimization and overly large API surface area) very early. As a result, our engineers habitually write simple, well-documented code with concise APIs at a well-defined level of abstraction on the first revision (we also follow some other heuristics to make this obvious), so we rarely need more than two revisions (6 total in the last 3 months, if I’m querying correctly) despite the additional ‘write tests’ step from the reviewer.

We found that engineers take great joy in trying to break each other’s code, but also enjoy collaborating in a scratch-each-other’s-back way when they get to give useful feedback. We’ve been able to channel that into great test coverage (around 99%, FWIW). We’re pre-launch, so ultimately we may uncover bugs in production at the same rate as traditional code-reviewing shops ¯\_(ツ)_/¯, but we’ve had close to zero issues needing manual intervention in months of early user testing.



