Hacker News | mgreene's comments

For the lowest-risk changes, as measured by review comment activity, that appears to already be largely happening.


Not an unreasonable thing to say. I did provide data to back up my point of view, so folks are at least free to disagree on the details rather than just the high-level take.


Building on Microsoft's earlier analysis (https://pdfs.semanticscholar.org/c079/0dc547c56ca48b78bc418b...), our data, which is based on an objective risk measure, confirms similar findings about code review efficacy with respect to finding bugs.


The paper's title is a bit provocative, but I think the findings are interesting, mainly around the gap between what developers have long believed review provides and what is actually happening.

You do bring up a good point about using change defect rate, though. I wish the researchers had cited that as the preferred unit of measurement. I did some research on change defect rates in popular open source projects, and they're all over the map, ranging from ~12% to ~40% [1].

The future I'd like to see is one where we, as developers, use objective measures to justify the time invested in review. This is going to be increasingly important as agents start banging out small bug-fix tickets.
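To make "change defect rate" concrete, here is a minimal sketch of one way to estimate it from commit history. This is purely illustrative: the `is_bug_fix` keyword heuristic, the sample commits, and the fix-to-change linkage are all assumptions for the example, not Shepherdly's actual methodology.

```python
import re

# Hypothetical commit log: (sha, message, shas of earlier changes this commit fixes).
# In practice the fix linkage would come from issue trackers or SZZ-style blame analysis.
commits = [
    ("a1", "Add login endpoint", []),
    ("b2", "Fix NPE in login endpoint", ["a1"]),
    ("c3", "Refactor session store", []),
    ("d4", "Update docs", []),
]

BUGFIX_RE = re.compile(r"\b(fix|bug|hotfix|revert)\b", re.IGNORECASE)

def is_bug_fix(message):
    """Crude heuristic: treat commits with fix-like keywords as bug fixes."""
    return bool(BUGFIX_RE.search(message))

def change_defect_rate(commits):
    """Fraction of non-fix changes that were later implicated by a bug-fix commit."""
    defective = set()
    for _, message, fixes in commits:
        if is_bug_fix(message):
            defective.update(fixes)
    non_fix = [sha for sha, message, _ in commits if not is_bug_fix(message)]
    if not non_fix:
        return 0.0
    return len(defective & set(non_fix)) / len(non_fix)

print(change_defect_rate(commits))  # 1 of 3 non-fix changes was later fixed -> ~0.33
```

The interesting part is less the arithmetic than the linkage step: how you attribute a bug fix back to the change that introduced the defect dominates the resulting rate, which is likely part of why published numbers vary so widely.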

[1] https://www.shepherdly.io/post/benchmarking-risk-quality-kpi...


Shepherdly is a bug prediction platform for pull requests. We wanted to see how risky PRs were managed in open source repositories. How do you think it compares to yours?


WHOOP | Backend, Full stack, iOS Engineers | Boston | Full-time | ONSITE (Remote for COVID)

WHOOP is a fitness tracker that has the goal of optimizing performance for everybody from professional athletes to everyday people hoping to stay a little bit healthier.

* Software Jobs: https://www.whoop.com/careers/

* Tech stack: AWS, Kubernetes, Kafka, Cassandra, Postgres, Java (Backend), Kotlin (Android), React, Swift (iOS).

* Interview process: Initial informational conversation with a recruiter, followed by a remote technical screen. Total interview process is about 3 hours. Our technical interviews are oriented around real technical problems our teams work on.

* Recent Press: https://www.cnbc.com/2020/06/24/pga-tour-procures-smart-band...


How much of the c3.xlarge memory should be reserved for file system caching?


The c1.xlarge referenced in the article, which has more disk than the c3.xlarge, really doesn't have room for OS cache, unfortunately. You can try experimenting with lowering the heap to leave more for the OS, but we found that HBase needed all the heap we could give it to avoid OOMs.

I haven't used the c3.xlarge in production because it has so much less disk. But for that reason you could probably get away with less heap in the region server, leaving more for the OS. However, keep in mind that the HBase block cache is optimized for the HBase use case, whereas the OS cache is not. Some profiling has been done, and the block cache usually performs better -- see http://hadoop-hbase.blogspot.com/2012/12/hbase-profiling.htm... for example. So I would value it over OS cache on a low-memory system.

The ideal, in my opinion, is the i2.4xlarge. You can give it a 25GB heap, which is manageable with Java 7's G1 GC, providing plenty of block cache, while still leaving about 100GB to split between the DataNode, OS cache, and anything else you want to run. I'll cover that in another post.
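A heap setup along those lines might look like the following in hbase-env.sh. This is a sketch under the stated assumptions (25GB heap, G1 GC); the specific pause-time and occupancy values are illustrative placeholders to tune per workload, not settings from the post.

```shell
# hbase-env.sh (sketch for an i2.4xlarge-class region server; values illustrative)
# Fixed 25GB heap so G1 regions are sized once, plus basic G1 tuning knobs.
export HBASE_REGIONSERVER_OPTS="-Xms25g -Xmx25g \
  -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=100 \
  -XX:InitiatingHeapOccupancyPercent=65"
```

With the heap pinned at 25GB, the remaining ~100GB of instance memory stays available for the DataNode heap and OS page cache, which is the split described above.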

