
You seem to be under the impression that there is some 'advanced technology' out there that will magically solve your problems that the US fails to have.

In my experience, many people have a quasi-religious belief in the capability of modern medicine to perform what would otherwise be called a miracle. This belief is typically held without any evidence whatsoever.

In reality, there is a plethora of conditions, some very common and serious, that medicine simply has no idea how to treat. The set of completely treatable or curable conditions is much, much smaller than the set of all possible diseases, yet people act as if it's the opposite. This is why things like 'evidence-based' medicine are so dangerous: we don't have evidence for the vast majority of impactful conditions, and simply ignoring patients with these conditions is not a workable solution.



The messaging in the US most commonly used to justify the lack of universal coverage and unreasonable cost of care is that we pay the most because we get the best treatments and best doctors and best outcomes, so I don’t think it’s fair to blame laymen for believing that.

Also, it’s not just patients that think this way. (Or, at least, if the clinicians know, they aren’t saying much to their patients.)

I’ve had docs gush about amazing wonder drugs, then I go and read the actual Phase 3 trial data on the patient information sheet and it has a 15% response (not remission) rate. I’ve been told I’m being given a “gold standard” treatment—but not that the “gold standard” response rate is actually only about 33%, and in another ~33% of cases it makes things worse.

I’ve had doctors refer me for surgery, tell me about how amazing the surgeon is, what a great job they’ll do, that if their own kids were sick they’d send them to this person. When I ask for hard data on the surgeon’s actual success rate for this type of surgery, well, they don’t track that—but look, just trust me, the guy’s realllyyy good.

Out of dozens of specialists I’ve seen over the years, I’ve only had one ever explicitly acknowledge that, yes, I had a real problem, but modern medicine just was not advanced enough yet to identify the cause, so they’re just kind of winging it. For the rest, there are “many new options”, “great responses”, “positive outcomes”, “extremely effective”—or there’s nothing wrong with you, it’s all in your head, and the princess is in another castle.


> When I ask for hard data on the surgeon’s actual success rate for this type of surgery, well, they don’t track that—but look, just trust me, the guy’s realllyyy good.

For good reason. Tracking of clinical outcomes is the wet dream of insurance companies. It's very toxic to the healthcare system, because it pushes practitioners to focus on easy cases where a good outcome is expected and causes major inequalities in access to healthcare.

I'm not saying the present situation is ideal, but for the system as a whole in its current form, tracking clinical outcomes is a very bad idea.


To some degree, there's an even more toxic element of this already in play with the amount of weight the wider US medical system puts behind patient satisfaction surveys.

Many times, things that a patient wants and would make a patient happy are medically contraindicated and lead to worse outcomes, yet there's immense pressure on clinicians to maintain patient satisfaction metrics.

I'm not disagreeing with you at all; more suggesting that we're currently relying on metrics that are even more perilous than actual clinical outcomes.


In patient satisfaction surveys, treatments like invasive surgery usually rank very low, while treatments like massage rank very high. One probably saved your life, the other just felt nice, and yet the second one is rated higher.

It's complete insanity to even begin comparing treatments that are so different.


Also getting antibiotics for something gets rated pretty high, even if it did worse than nothing.


This is even true when we're not talking about patient satisfaction. There are definite conflicts for things like infection control, where what we probably should be measuring (process measures) conflicts with what patient advocacy groups care about and are pushing for (deaths).


I could see that problem occurring if the metric was “what is the success rate of everything that Surgeon X does”. I can’t see that problem occurring for “Surgeon X performing Procedure Y has N% of patients reporting relief and M% of patients reporting complications after the surgery”. What am I missing here?

Edit: Follow-up question: notwithstanding the dysfunction of congress and the ability of companies to find loopholes, and assuming no universal health care to eliminate the role of insurers, surely a solution would be to prohibit the use of this information in the same way that ACA prohibits the use of pre-existing conditions to deny coverage?


I'm guessing you haven't heard of Goodhart's Law? (https://en.wikipedia.org/wiki/Goodhart%27s_law) Under your proposal, surgeons will be incentivized to selectively operate on easier patients and minimize their complication rates while not performing surgery on very sick patients who may also need the same surgery.

Different surgeons in different areas treat different kinds of patients. It's hard to accurately measure anything in a meaningful way that should influence decision making. To use your example, surgeon Z may also perform procedure Y but has (N-5)% of patients reporting relief and (M+5)% of patients reporting post-op complications. However, surgeon Z works at a community hospital and treats a poorer patient population with more co-morbidities. Can you really say if surgeon X is better than surgeon Z?
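To make the case-mix point concrete, here's a toy simulation (every number is invented for illustration) of two equally skilled surgeons whose raw success rates differ only because of the patients they see:

```python
import random

random.seed(0)

def simulate(n_patients, frac_high_risk, skill=0.0):
    """Simulate a surgeon's raw success rate.

    Assumed baseline success: 95% for low-risk patients, 70% for
    high-risk. `skill` shifts both probabilities (0.0 = average
    surgeon). All numbers are made up for illustration.
    """
    successes = 0
    for _ in range(n_patients):
        high_risk = random.random() < frac_high_risk
        p = (0.70 if high_risk else 0.95) + skill
        successes += random.random() < p
    return successes / n_patients

# Surgeon X: suburban hospital, 10% high-risk cases.
# Surgeon Z: community hospital, 60% high-risk cases. Same skill.
print(simulate(10_000, 0.10))  # ~0.92
print(simulate(10_000, 0.60))  # ~0.80
```

Identical `skill`, yet surgeon Z's unadjusted numbers look about 12 points worse, purely because of who walks in the door.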


This is also known as confounding by indication, and is something the pharmacoepidemiology field contends with a lot.

And it shows up all the time in things like the Consumer Reports hospital rankings and the like, where hospitals with particularly uncomplicated patient populations come off looking like they're the best hospitals.


Is it really too "hard" to perform rigorous statistical analysis? Why can't you factor in patient genetics + background + circumstances to come up with some expected chance of success for each procedure? (In fact, isn't that why doctors have such detailed patient histories?) Isn't it the doctor's job to estimate and inform the patient of the expected outcomes?

A doctor's historical success rate exceeds their expected success rate on average => good (or lucky) doctor.
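That rule of thumb is essentially an observed/expected (O/E) ratio. A minimal sketch with invented numbers, which assumes away the genuinely hard part (estimating each patient's expected success probability):

```python
def oe_ratio(cases):
    """Observed/expected success ratio for one surgeon.

    Each case is (succeeded: bool, expected_p: float), where
    expected_p is a hypothetical risk-adjusted prediction of
    success for that patient. O/E > 1 means the surgeon beat
    expectations; O/E < 1 means they fell short.
    """
    observed = sum(ok for ok, _ in cases)
    expected = sum(p for _, p in cases)
    return observed / expected

# Invented data: one surgeon takes hard cases and beats the odds,
# another takes easy cases and falls slightly short of them.
hard = [(True, 0.6), (True, 0.5), (False, 0.4), (True, 0.7)]
easy = [(True, 0.95), (True, 0.95), (True, 0.9), (False, 0.9)]

print(round(oe_ratio(hard), 2))  # 1.36 -- above expectations
print(round(oe_ratio(easy), 2))  # 0.81 -- below expectations
```

Note that the entire burden of fairness lands on `expected_p`: if the risk model is wrong or gameable, the O/E ratio inherits that flaw.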


Yes, it's absolutely very difficult to do rigorous statistical analysis.

Genetics gets oversimplified for non-physicians. It's cool that we can diagnose and predict the likelihood of getting Huntington's disease using our knowledge of genetics, but extremely few diseases are this simple. There are huge swaths of the human genome that we don't understand but that are likely playing some important role in the regulation of other genes and diseases. We are nowhere close to being able to look at a patient's genome to predict anything useful, outside of a handful of exceptions.

Patient histories are honestly often garbage—I say that as a physician. I look through dozens of patients' charts every day, and there are constantly errors, incomplete documentation, and fragmented records across multiple institutions. Just last week I read a chart for a patient who had a documented hysterectomy from years ago. The brand new CT scan I saw showed a perfectly normal uterus. Once something goes in a patient's history it's nearly impossible to correct or remove. If some doctor from ages ago said the patient is allergic to medication X, but the patient denies it, what do I do? Usually, we opt to leave the allergy listed out of fear of the consequences if the patient is wrong.


Yup. As a patient, I once found a bunch of bogus allergies in my records that I was able to get fixed, but I know there's other stuff I've tried to fix where the doc wasn't interested in listening. He felt he had "proved" I was mistaken/faking when he hadn't listened in the first place. How am I supposed to identify the evil item in a challenge when it isn't even present? (I had said "ham". The challenge was with pork. I didn't say pork--whatever the actual evil item was, it was something that got added in the process of turning pork into ham.) How do you get a mistake like that out of your records?


I've had doctors make up things as egregious as my height.

I had one doc I had only met remotely who entered a height into my online, shared record that was several inches shorter than I was...back when I was an 11-year-old child. It's been 25 years since I was that short.

I pressed them that they could at least have asked me how tall I am, or even consulted the previous entries by other doctors in their same system.

There wasn't even an attempt at an excuse. It was plain, simple negligence.

That doctor had also been insisting I needed to let him do an exploratory surgery, despite never even having had me come to the office in person, so I noped right out of there and started telling that story to everyone I thought might consider seeing him.


Are you aware of any attempts to use machine learning in medical analysis and outcome prediction? I feel like this is one of the few applications where it could shine. I have no formal training in data science, but everything I've read so far seems to indicate that noisy and unreliable data is not an insurmountable problem.

All of this talk about how "hard" the statistical analysis is, is strange to me. Maybe "advanced" would be a better term? If you get a patient with a contradictory medical history that somehow also contradicts what they are telling you, simply adjust your expected chance of success appropriately (to zero perhaps). In that extreme case, if you get a good outcome, congrats you got lucky. If you don't, it should have 0 impact on how you are evaluated as a doctor.


Before you can even do the statistical analysis you suggest, you need large amounts of high quality data—which we don't have. One place where the US (and the world?) gets data privacy right is in healthcare, but unfortunately that also means it's nearly impossible to create the data sets we need to do the statistical analysis you want.

Institutions face severe penalties for wrongfully sharing patient data, so most opt to just not share any data. Any research that is performed is done internally on local populations with de-identified data sets. A few brave institutions go well out of their way to create and share de-identified data sets publicly, but these data sets still undersample the general population. This is a critical problem because certain diseases are highly prevalent in certain regions (e.g., Lyme disease in New England) but unheard of in other regions (e.g., Lyme disease in Colorado). If your ML model is trained on data largely from New England, it's going to diagnose a patient with the classic "target-shaped" rash with Lyme disease even if the patient is from Colorado (high false positive rate). If the model is trained on data from Colorado, it will underdiagnose Lyme disease in patients from New England (high false negative rate). The only way I know to overcome this problem is to create even larger data sets, but this just isn't possible with data privacy laws.
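The regional problem here is just Bayes' rule: the same model, with the same sensitivity and specificity, yields wildly different positive predictive values as disease prevalence changes. A tiny sketch (all accuracy and prevalence numbers invented for illustration):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule: the probability
    that a patient flagged by the model actually has the disease."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Identical model, very different regional base rates (made up):
print(ppv(0.9, 0.95, 0.05))    # New-England-like prevalence: ~0.49
print(ppv(0.9, 0.95, 0.0005))  # Colorado-like prevalence: ~0.009
```

With the same 90%/95% accuracy figures, a positive call is roughly a coin flip in the high-prevalence region and almost certainly a false positive in the low-prevalence one, which is why a model trained on one region's data misfires on the other's patients.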


> If the model is trained on data from Colorado, it will underdiagnose Lyme disease in patients from New England (high false negative rate). The only way I know to overcome this problem is to create even larger data sets, but this just isn't possible with data privacy laws.

My understanding is that high dimensionality in the domain is exactly where ML excels (so add location as an input), and that is exactly what a medical diagnosis involves. Perhaps the legislation will get there one day.


Re data privacy, as of last summer in the UK we had to opt out of data sharing of our NHS records with third parties. I am conflicted about this because as you suggested, large amounts of data could be beneficial for things like statistical analysis and machine learning to help medical research. On the other hand, what we know about human nature seems to indicate that there will already be sociopaths slavering over this data in order to enrich themselves at the expense of everybody else.

https://inews.co.uk/opinion/nhs-data-shared-third-parties-we...

Anyway, it's out there now... so watch this space, I guess!


Statistics are not the problem. The data is. 'Big healthcare data' does not exist. Building what you are thinking of would require huge data-gathering capabilities that are very clearly out of reach.

Most big medical companies do a lot of data science (Kaiser and others). Very efficient from a managerial point of view. Totally useless, medically speaking.


I am.

In that I have several CDC funded projects using machine learning in medical analysis and outcome prediction.

Even on extremely well curated data sets this is a fairly hard problem.


> I feel like this is one of the few applications where it could shine.

Read “The Alignment Problem”, a very good just-above-pop-sci-level book about machine learning. It has one example where an ML model determined that seniors with COPD were at reduced risk from pneumonia, an obviously nonsensical result. Patients with COPD had better outcomes than average because doctors know they need careful attention right away and treat them aggressively.
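A toy simulation of that failure mode (all probabilities invented): patients with the risk factor are intrinsically higher-risk, but they always get aggressive care, and that care slashes mortality, so the raw data makes the risk factor look protective.

```python
import random

random.seed(1)

# Invented numbers. COPD patients are intrinsically HIGHER risk,
# but doctors always admit them, and admission cuts risk tenfold.
rows = []
for _ in range(100_000):
    copd = random.random() < 0.2
    base_risk = 0.30 if copd else 0.05
    admitted = copd or random.random() < 0.1   # COPD -> always admitted
    risk = base_risk * (0.1 if admitted else 1.0)
    died = random.random() < risk
    rows.append((copd, died))

def death_rate(flag):
    """Raw death rate for the group with copd == flag."""
    sub = [died for copd, died in rows if copd == flag]
    return sum(sub) / len(sub)

# The naive correlation flips: COPD looks *safer* (~0.03 vs ~0.045).
print(death_rate(True), death_rate(False))
```

A model trained on outcome data like this would learn "COPD = low risk" and recommend sending exactly the wrong patients home, because the protective treatment decisions are baked into the labels.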


Yeah: https://www.nature.com/articles/d41586-019-03228-6

I’m reluctantly pessimistic about humanity’s near-term capability to appropriately weight input from technology as imperfect but inscrutable as current ML.


Everyone and their mother is working on ML applications in medicine.


> patient histories are often garbage

This is a little like programmers calling the programming language they invented and write in every day garbage. Separate from patient-reported histories, you medical doctors are the ones documenting these histories and hold the decision making power for how it’s done!


I think your analogy is a bit off here.

Your first sentence would be the equivalent of the inventor of patient-history keeping calling patient-history keeping a garbage tool. Which would actually be a totally OK thing to do and say, if suffixed with "and unfortunately, so far, nobody has come up with a better tool, and it's not for lack of trying".

I'm assuming you didn't mean "you medical doctors" in the sense it's easy to read it in. In any case, what you are doing here is telling one doctor that he is bad at writing and reading medical histories, when in fact he is the one telling you how he spots other doctors' mistakes and tries to correct them. This is like telling a developer "you developers are the ones writing bad code, and you hold the decision-making power for how it's done," when that developer is actually someone who tries to make things better, both through his own maintainably written code (medical histories) and by helping others in code review to keep bad code out of prod (finding errors in existing medical histories and trying to correct them).

That can be very discouraging, being thrown in with the bad apples. And even good apples can have a bad day or misunderstand something. But I guess you are perfect and have never produced a bug in your life.


The number of factors you'd have to consider to achieve that is huge enough that it makes it completely unrealistic, both practically and financially speaking. In fact, I'm 100% convinced that it's currently impossible to build such a system, given the extent of medical knowledge.


I'm skeptical. I concede it's probably a hard problem but there is an entire field dedicated to hard statistical problems called data science. What is the point of having detailed patient histories and data if it can't be used to inform decisions?


If data science were that effective on healthcare records, we'd know it by now and everyone would be doing it. There is no 'big healthcare data'. We gather mostly noise, and records are full of blatant mistakes. Medicine is still more art than science today, and records mostly give you a 'feeling' for the patient's condition in light of your MD education, with a little hard data sprinkled on top.


This is a fallacy. At any point in history you can say “if X field was so good, we’d have Y by now”. In 1925 you could’ve said, “if biology’s understanding of bacteria is so good, we’d have antibiotics by now”. Within 5 years, they did.

There is certainly noise in healthcare data, especially when patient-reported, but is it noise to say that a patient who had procedure X later does or doesn't have serious complications? Medical care and its consequences can be analyzed, and that's not noise.

And big healthcare data has lagged, partially because privacy concerns trump sharing. There are companies selling anonymized medical records for basically every American now, though. Big data is coming.


> Big data is coming

Big _bad_ data... Let's see how we fare in 5y, then. My prediction as a clinician with a special interest in stats: close to zero medical progress. But insurance priced by a ML algorithm, and much greater efficiency in coverage and claim denials.


Due to the Affordable Care Act (Obamacare), medical insurers have very little flexibility in pricing policies. There's not much point in using ML for pricing.

https://www.healthcare.gov/how-plans-set-your-premiums/

The more likely use case for ML is detecting insurance fraud patterns.


> The more likely use case for ML is detecting insurance fraud patterns.

I think that's covered under "much greater efficiency in coverage and claim denials."


The first flu vaccine came about in 1945. Knowing as much as we did about viruses then, you might think we would have a cure for influenza (or the common cold) by now. Here we are, almost 80 years later... big data may be coming, but if it takes that long it won't be in my lifetime.


My impression is that data science on healthcare records is mostly illegal due to HIPAA and other regulations.


It's not illegal. It's just very complicated to do correctly, and you risk large fines if you do it incorrectly.


For most practical purposes, that has the same effect.


We mostly don't have detailed patient histories. Most of the relevant clinical data that would be needed for such a rigorous statistical analysis isn't recorded as discrete data elements; it's just unstructured text. NLP can be used to extract concept codes, but the error rate is high, so you still need an experienced (expensive) human to manually fix the errors. No one wants to pay for that.

The codes we do have are mostly CPT4 and ICD-10 for billing purposes. Those are generally pretty accurate, but not detailed enough to reliably assess whether one surgeon is better than another at a particular procedure.


As someone who works on the methodological basis for a number of "Observed/Expected" healthcare metrics in the infection control and antibiotic stewardship space: this is pretty hard, even when you are trying to do a rigorous statistical analysis.


Yup, most of the data is junk, and even if the data isn’t junk - it usually doesn’t at all mean what you think it means.


Goodhart's law means you need to gather more data, not less. The more and diverse metrics you have, the more Goodhart pressure you can take.


> Under your proposal, surgeons will be incentivized to selectively operate on easier patients and minimize their complication rates while not performing surgery on very sick patients who may also need the same surgery.

Isn't that already the case, even without formal tracking of outcomes?


Does the fact it's already a problem mean it's ok to make it worse?


No, I haven’t, thanks for mentioning it. Would Goodhart’s Law actually apply here though, since right now the data (as far as I can tell) isn’t being measured at all?

To your counter-example, maybe the metrics I described aren’t good enough and should integrate some disease severity criteria or site-weighting or comorbidity score (although as one continues to subdivide the population this way eventually you end up with n=1 and the results are useless again) but like, surely we should be trying to measure something other than the good feels and word-of-mouth of people who have to work with each other?

It physically hurts my brain when I think about how we measure the dumbest shit in software engineering, like which shade of blue to use to improve clickthroughs[0], but when it comes to even attempting quantification of activities which are literally life or death, sorry, too hard, can’t do it. Surgeons will refuse to do hard procedures, insurers will destroy careers, EMRs are full of bad data (so what is the point of the bloody records if they have become that useless‽), surveying patients would cost too much…

I will ultimately defer to the experience of people in the field—I am not a physician or statistician—but sometimes I feel like I’m just being fed arguments repurposed from the bad cops playbook. Oh, we can’t ever possibly start quantifying individual officers’ use of force, because some parts of the city have more crime, and if we do that then those officers will look worse, so they will stop responding to violent calls in those areas, and there’ll be even more crime, so get off our backs man and stop trying to create more objective metrics for accountability.

To be clear I don’t think you are arguing in bad faith and I don’t intend my statement about accountability to suggest that you personally are trying to avoid it or shield bad actors or anything. What you are saying is probably true and I may be wrong to challenge it at all since I have no personal insight into what is going on behind the scenes, and I genuinely appreciate you answering my questions from your perspective and giving me additional perspectives and things to think about. It just feels so, so frustrating as a patient. All I want is some ability to measure risk that’s better than looking up studies on procedure X on pubmed that I’m unqualified to interpret (and which don’t apply anyway because the lead author of the research won’t be doing my procedure), or shaking the magic eight ball.

If I were a physician, I would absolutely want to track the shit out of my own patient outcomes so I could improve, and the amount of resistance that seems to exist (this is not the first time I’ve talked to docs about this and received similar fatalistic answers) is just baffling to me.

We’re not talking about Frogger here, metrics aren’t some high score, if you have an 80% complication rate for some procedure that isn’t necessarily a reflection on you as a practitioner but it would suggest that there is a problem that needs to be identified (bad procedure, bad training, bad support, bad patient, bad luck). Right now, it seems like no one really knows.

This isn’t bullshit alternative medicine, so why, when I scratch beneath the surface, does it so often feel like it is anyway?

[0] https://www.zeldman.com/2009/03/20/41-shades-of-blue/


I share your frustration and agree with a lot of your points, and in fact I was motivated to solve a lot of these problems while in med school. My thought was that all you need is technical expertise on healthcare data to revolutionize the field. We have the technical expertise, but we just don't have the healthcare data for several reasons.

First, patient privacy laws (while a net good) scare institutions from sharing high quality data. The best you'll get is small batches of de-identified data released infrequently. Patient notes are unlikely to ever be released in large quantities since they can so easily pinpoint some patients.

Second, you need to coordinate thousands of physicians and/or healthcare facilities across the US (or world) to record data on their own performance in a standardized way. Many hospitals do this on some agreed upon metrics (30-day readmission rate, hospital-acquired pneumonia rate, average HbA1c level for a doctor's diabetic patients etc.) largely because they're used to determine government funding/penalties. But at the end of the day, there's no direct incentive for physicians or institutions to collect any other data on their own performance and release it publicly. In fact, there are more risks to doing this than benefits. To solve this problem you need to tie hospital funding with requirements to collect and publicly share performance data while also mitigating punishment.

To physicians' credit, many of us are actually motivated to at least privately collect data on our own performance so that we can improve. But this is incredibly difficult and time consuming—especially for those of us who come into contact with dozens and dozens of patients every day. Sure, better data collection tools would dramatically help us monitor our own metrics, but the only entity with the cash to purchase or create these tools is the hospital, and its reply is going to be, "What's the ROI?" And the answer is honestly probably negative. You may suggest buying/building small relatively inexpensive tools (as I've personally tried), but the hospital isn't interested. Like most large enterprises, hospitals want long-term contracts, dedicated support teams, and tried and true tools. Small tools pose too much of a security risk and maintenance headache.


In my mind it gets worse when the procedure is identified: a difficult shoulder surgery gets refused by an orthopedic surgeon because it might result in a mediocre outcome and lower his 'success' rate. The patient can't get the surgery because no surgeon wants to 'risk' his success rate numbers.


Seems like a good system to me. If a doctor expects a lower than average success rate for performing some specific operation, he should let some other doctor do it anyway.


In my mind it is not a good system, because if the surgeon is looking at a much-more-difficult-than-average circumstance, then he could reasonably expect a lower-than-average result, but one that might be much better than average for a case so difficult.

As a separate observation, any time data is kept, turned into metrics that then become the basis for goals ("I want to have a better-than-average success rate, as a surgeon") then the system gets gamed.

I had a boss once propose to down-rate agile teams that didn't finish everything they took on in a sprint. He apparently didn't realize that teams would immediately game the system by taking on less actual work. They could up their 'point' estimates for each task and always get the work done.


I think you are right that there is a natural and understandable psychological resistance to a data-based evaluation system. I understand that the doctor may realize something about the patient that will lower his/her chances of success.

I'm arguing that a sufficiently comprehensive system would take into account whatever that doctor realized (and perhaps much more) and compensate for it when determining expected outcomes.


This assumes a level of knowledge that simply doesn't exist.

A simple illustration: I have medical issues with no meaningful diagnosis, despite seeing many doctors. If the medical community can't figure out what's wrong, how can they have an understanding of all the relevant factors in determining risk?


> I have medical issues with no meaningful diagnosis, despite seeing many doctors.

This isn't directly related to your point, but why is this even the norm? Why are you going from human to human seeking answers to some arcane mystery like you're on a Skyrim quest?

Imagine a system where you wouldn't need to see any doctors. You would type in all of your symptoms as accurately as you can with as much detail as you can, perhaps with a timeline, and as output you'd get the most likely causes (a diagnosis). Maybe that system would even have your medical history (and that of other people with related diagnoses) to make better predictions.

It seems to me that a system like this would be significantly better than going to a couple of humans that are arbitrarily local to you and asking them to figure it out. It is slowly improving all the time just in the form of Google and WebMD. Even before 2010 I accurately diagnosed myself with Bell's palsy with the internet and the doctor begrudgingly asked how I had made that diagnosis. Confirmation from the human doctor was nice but redundant.


I'm guessing that the bureaucracy of such a system would be a significant burden. To avoid having the system badly gamed, you'd need a second evaluation, yes? By a _neutral_ party (not, e.g., a doctor working in the same practice), at added cost and time. Not to mention I've heard more than one surgeon say "That was more difficult than I expected based on the imaging ... once we got in there we found <>".

I'm also arguing that it's not just a psychological resistance to a data-based evaluation system: people understand the system would be subject to being gamed, and the overall quality of the work would actually suffer. (A bit analogous to how peer review and the tenure game have interfered with good science practices.)


Other doctors? And where will you find those? No one will ever touch you, except if you pay more. Pay more for worse outcomes, really.


Agreed, if there is another doctor who feels they could have better success. And this does happen; a doctor will say "I don't feel comfortable doing this".

But there may be no other doctor.

Think of it this way: if you're a doctor specializing in the treatment of septicemia, a lot of your patients will die. If you're really good, you'll likely get the hardest cases. So your "success rate" may be lower than that of another doctor who isn't as good but doesn't see such tough cases.


If a doctor estimates that this particular patient+disease+procedure combination has a lower chance of success than the average patient who needs a similar category of procedure, then unless they're severely mistaken, every other doctor will also estimate the same way and refuse that difficult case.


Surgeons would steer away from difficult, lower-percentage-outcome procedures which is precisely what you don't want.

You want risk-takers who repeatedly tackle the surgeries and (ideally) get a more positive outcome percentage than a newbie.


Meaning they will not take over my risky surgery, and I will still be choosing among risk takers, but with clear stats in my hands.


Surgeons will concentrate on easy procedures and will basically all have an almost identical track record of quasi perfection. So there will be nobody left to perform procedure Y, where Y has an intrinsically high rate of failure. Same thing for difficult patients. No one will touch Mr X who's got a complex problem.


Doctors that work on older patients or worse cases will have worse numbers. Usually that effect far outweighs the skill difference between doctors.


Is the aggregate number reported, i.e. aggregated across all surgeons for procedure Y?


It's also a bad idea because the best surgeons often take on the most difficult cases; you can't necessarily compare doctor to doctor without knowing the types of patients they treat.


Have one person do nothing but estimate the difficulty of a case (and be tracked and judged for the quality of their estimates), and have the surgeon be tracked and judged on their relative performance given the difficulty.

Then you could make a career out of doing well on difficult cases, or out of doing better-than-average on easy cases, and either would be viable.


There are some helpful questions that surgeons will answer, and the answers have steered me away from more than one: How many of these surgeries do you do per year? (An answer of 50 or above is good.) What is YOUR rate of <specific complication mentioned in the consent>? Keep pushing until they tell you their rate, not the overall rate.


They almost universally don't know their own rate. I don't know my own rates of complications. But, if I want you to go away because you seem to be the kind of guy that will come back to bite me, I'll gladly tell you I have a huge rate of complication.


For a colonoscopy I asked about the risk of a perforated bowel. Their first response was, “that is meaningless, if it happens to you the rate is 100%.” I said if the chances are 50:50, then I am not doing it. They said it has happened twice in their career and based on my lack of risk factors, it would not happen to me.

For my daughter's tonsillectomy, the doctor was very happy to share how her stats for post-surgery bleeding compared to both other doctors in her group and the national average. But I live in a Boston suburb and every doctor is a lecturer at either Harvard or Mass General.

Another question to ask is will an intern take part in the surgery. At teaching hospitals the answer is almost always yes. You can ask if they operate at any other hospitals, and again the answer is almost always yes, they operate at a suburban, non-teaching hospital where they will be the only one operating.

I got a little bit humbled at Boston Children's Hospital. I was doing some Googling about the risks of a CAT scan and asked if they did low-dose ones. They informed me that they in fact invented that procedure. Sure enough, the paper I was looking at was authored by a doctor on their staff.


I have 10+ years of clinical experience in academic hospitals, and have worked in Boston at Brigham and Women's. From this experience, I can tell you 2 things:

1. Being a lecturer at Harvard does not correlate with being a skillful clinician

2. Your view of the clinical system is very skewed, and will bring you more risks than benefits.


Completely agree with point 1. There are many researchers in the area who also like to practice, which is what led me to ask the number-of-surgeries question.

My bias is towards surgery at a good regional hospital (Newton-Wellesley for example) with a surgeon who teaches downtown and does lots of surgeries.

What is my skew that is bringing me more risk? I used to think all doctors were about the same. Now I realize that is about as true as saying all baseball players are the same. There are hall-of-famers as well as some who could be sent down to the minors. The trick is figuring out who is who, because other docs won't say.


> The trick is figuring out who is who, because other docs won’t say

Precisely. And I assure you, as a patient you can't possibly figure out who's who. Your bias is you think you can.


unfortunately if you need a rare surgery (to treat a rare condition), this doesn't work very well. it also isn't enough for surgeries with subjective outcomes (such as vaginoplasty.) with the former you look for a competent surgeon with many good outcomes on related surgeries of similar complexity, and who keeps up with or participates in research. with the latter... image boards? word of mouth? whoever your insurance covers? I get hung up on that kind of choice.


Wouldn't that be rectifiable by also tracking the statistics of cases which a practitioner reneges on or refuses?

Then somebody with high success rate and high refusal rate stands out as a red flag.


> For good reason. Tracking of clinical outcomes is the wet dream of insurance companies.

Sounds like a sufficient reason for single-payer healthcare and single-payer malpractice compensation, even if all the imagined downsides were real.


On the other hand: tracking clinical outcomes actually tells us if a treatment actually works - a net positive for the species, I'd have thought.


Doing RCTs of a treatment tells you if it works. Without random assignment (or some clever experimental design) tracking outcomes really doesn't tell you anything.


You can forbid insurance from using that data by policy. It sounds like the same argument against measuring teacher performance because "toxic".


Risk adjusted scoring is entirely viable in this age of data science, there is just no appetite for it. Doctors fight very hard against it because who actually wants to be held accountable for outcomes?

Insurance companies really aren't the villains in the US healthcare system, they're going to make money no matter what because they pass cost increases on to their subscribers and are capped in how much profit they can make via regulation.


Risk adjusted scoring is currently absolutely not viable. You underestimate the messiness of the healthcare system by a huge margin. We don't even manage to record basic vital signs consistently, so believe me when I tell you that you can forget about any kind of nice statistical trick given the weakness of our data gathering processes. Plus it's not a matter of objectivity. Surgeons will subjectively assess that doing easy cases will be better. And in addition, they'll be correct. That's what really matters.


I literally was at a startup that could do this successfully, so I'm not underestimating the messiness.


I doubt we're speaking of the same thing, here.


>Risk adjusted scoring is entirely viable in this age of data science

It's just not viable; I don't think it's even possible. As soon as a metric becomes tracked, people are incentivized to game it.

From my own experience as a very high-volume eBay seller, mandating a certain return rate led us to simply discourage customers from using the (convenient, well-designed) integrated returns systems. Mandating that only a tiny fraction of a percentage of items can be cancelled due to being out of stock leads to sellers sending either the wrong item or a fake tracking number (this gets us all the time on AliExpress).

If data-driven software companies can't handle it for something as simple as eCommerce, I have no idea how the medical industry is supposed to get it right.


A good portion of the largest insurance companies are non-profits, look at Blue Cross Blue Shield and their affiliated companies. They still make tons of money, keep tons of cash on hand, enjoy the same high salaries as for-profits (not saying they necessarily shouldn't), and get special tax statuses/breaks.

And the for-profit ones are making plenty of money whatever regulations they're subject to:

>During 2010, Health Care Service Corporation, the parent company of BCBS in Texas, Oklahoma, New Mexico, Montana and Illinois, nearly doubled its income to $1.09 billion in 2010, and began four years of billion-dollar profits.

I'm not saying they're villains, but "they're going to make money no matter what" isn't a compelling argument to me, and I have precisely 0 faith in the government to meaningfully regulate them.


Risk adjusted scoring is done in some areas where we have the data for this (healthcare associated infections and antibiotic usage). And this is a place where hospitals and doctors actively do want it to work, because there are financial penalties associated with it.

It's still a fairly hard problem. I've had several very clever data scientists on teams who have gone "Oh, this is just an X problem..." and then 9 months later they're still trying to get a model to perform better than "Just take the average".
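The "beat the average" baseline mentioned above can be made concrete. A standard sanity check (a sketch with invented data, not any real model) is to compare a risk model's Brier score, the mean squared error of its predicted probabilities, against simply predicting the overall rate for every patient:

```python
def brier(preds, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes;
    lower is better."""
    return sum((p - o) ** 2 for p, o in zip(preds, outcomes)) / len(outcomes)

outcomes = [1, 0, 1, 1, 0, 1, 1, 1]        # 1 = complication-free (made up)
base_rate = sum(outcomes) / len(outcomes)  # 0.75

# Baseline: predict the grand mean for everyone ("just take the average")
baseline_score = brier([base_rate] * len(outcomes), outcomes)

# Hypothetical risk model's per-patient predictions
model_preds = [0.9, 0.2, 0.8, 0.7, 0.4, 0.85, 0.9, 0.8]
model_score = brier(model_preds, outcomes)

print(model_score < baseline_score)  # the model only earns its keep if True
```

With messy real-world clinical data, models frequently fail this comparison, which is the point being made: the hard part is not fitting a model but beating the trivial baseline at all.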


But if the alternative to a 15% response rate is “no treatment” then you are getting the gold standard. In many other countries those medicines aren’t even considered or paid for.


Only if Gold Standard means "not quite as bad as the other options," which I'm not sure is a definition that'd stick.


But that's true in general.

What's another way to describe the "best doctor in the country"?

"Marginally better than the next best option".


If it has a word like 'standard' in it, you would assume there is a measure or definition for that standard, not just 'not the worst.'

Best doctor in the country can quite easily still be "bronze standard."

There's an expectation of quality in saying something is Gold Standard, it's not an entirely arbitrary label.


Maybe it’s a lingo thing. “Standard of care” is used all the time and just means “standard” in the sense of “best supported by data”.

But the standard of care might just be “give morphine to ease pain until death”.


All of this points to one thing

You are being sold

Thanks for sharing, love the % success rate question for surgeons.


I also think that doctors tend to dumb-down and downplay the actual truth of the effectiveness of treatments. Partly because for the majority of patients, being told that a treatment is realllyyy good can actually make it more effective, by the placebo effect. It also makes the patient happier and more confident in the abilities of the doctor.

I noticed that since I got "Dr." put in front of my name on my medical records, doctors tend to firstly ask what I'm a "Dr." in, and secondly tell it to me straight.


I've noticed that too. I went to the ER with a dog bite and the triage nurse was keen to know if I was a medical doctor before she looked at it.


Every doctor is the best.


The problem with empowering people medically is that 10% of people will benefit from it but 90% of people will get info from Facebook and infomercials and fall victim to quacks. Hence everything being gated behind credentials and prescriptions (along with some good old regulatory capture).

I disagree with the characterization of medicine not being able to treat so many things. Many things are incurable but a lot of medicine/public health is so effective we barely think about it. Of course we are going to notice and pay more attention to the things medicine sucks at treating, because they’re real problems that inflict a lot of pain due to the lack of treatment.

But medicine is very good at treating plenty of things like infections (of many different kinds), traumatic/acute injuries, and many disabilities. Most of the chronic issues that medicine fails to address are simply lifestyle issues that medicine tries to alleviate the symptoms for. Yes there are certain conditions medicine doesn’t begin to fully understand like Alzheimer’s or various chronic pain conditions, or where treatment is still pretty middling like Cancer, but a lot of the biggest things are treatable very well - we just don’t notice them much because they are treated so well.


I feel like autoimmune, endocrine, and GI issues, and various combinations of those, are the biggest frustration to people, along with complicated surgeries/replacements that we just haven't figured out yet. But those first three are horribly complicated and interrelated systems whose disorders affect probably close to a third of the population, maybe half. Just autoimmune thyroid issues affect about a fifth of women. Not to mention all thyroid issues, autoimmune diabetes, other diabetes, Crohn's, IBS, fibromyalgia, maybe long COVID, etc. You are right to point out that many suffering from those conditions wouldn't even be alive without modern treatments and that we don't notice all that modern medicine can solve. But we also need to acknowledge there are many people for whom we don't have answers yet, and we are using evidence-based medicine to say that because we don't have studies yet, there is nothing to be tried. I also think diet and lifestyle changes get under-suggested due to EBM. No one seems as eager to fund a study that tests a diet or activity to treat a condition as they are to fund investigation into a new drug, surgery, or diagnostic equipment.


> I disagree with the characterization of medicine not being able to treat so many things. Many things are incurable but a lot of medicine/public health is so effective we barely think about it. Of course we are going to notice and pay more attention to the things medicine sucks at treating, because they’re real problems that inflict a lot of pain due to the lack of treatment.

> But medicine is very good at treating plenty of things like infections (of many different kinds), traumatic/acute injuries, and many disabilities.

I agree that medicine is very good at treating infections and traumatic/acute injuries. Which disabilities are you referring to? Outside of these categories, what can we effectively treat or cure? It seems to be very little.


Hip replacements? Cataract surgery? Disfigurements (cosmetic surgery)? - just off the top of my head. My own mother had "cosmetic" surgeries to re-set her toes which had been deformed by decades of fashionable shoes, and to remove varicose veins which are painful. Occasionally one reads about someone's hearing, sight, or power of speech being restored also.

Maybe only a few things, but still affecting millions, if not tens of millions of people.

I also think that medicine is a victim of its own success, in two ways.

One, how many funerals of people in their thirties to fifties does the average person go to, these days, compared to say the 1920s? It's hard to see things that don't happen.

Two, success breeds hubris which breeds a sense of being right whatever the evidence may say.


I agree on all of the above as useful as well.


I feel proponents of evidence based medicine are perfectly aware of the enormity of the problem, and are working hard to improve the situation.


But the point is that laymen are not, and they put too much faith, on average, into our medical institutions.

Our understanding of the human body has advanced enormously with the advent of modern science, but it is still far less complete than most people probably realize when they interact with doctors. Not to mention systemic issues (common to any technical discipline) where medical professionals have to effectively practice with a degree of faith because no one has time to actually review the literature underpinning any given consensus, and that occasionally breeds long lived orthodoxies which do more harm than good...


Who exactly should the laymen put their faith into if not medical institutions? That, and the scientific process, are all we’ve got.


Medical practice and research can be decades away from each other. Use your own judgement. Smoking was a weight cure before it wasn't.


>In my experience, many people have a quasi-religious belief in the capability of modern medicine to perform what would otherwise be called a miracle. This belief is typically held without any evidence whatsoever.

I found the same thing for science in general. When I did my PhD and saw how the sausage was made, I was blown away by how obviously unscientific and irrational the entire process of science was.


> many people have a quasi-religious belief in the capability of modern medicine to perform what would otherwise be called a miracle.

That's probably because miracles are being pulled off on occasion.


Since when is "evidence based medicine" defined as ignoring patients with currently untreatable conditions? There are enormous amounts of funding and effort constantly devoted to developing new treatments. I'm not sure how else you want to practice medicine other than "evidence based".


> many people have a quasi-religious belief in the capability of modern medicine to perform what would otherwise be called a miracle. This belief is typically held without any evidence whatsoever.

The thing is, application of the germ theory of disease and Harvey's theory of circulation of the blood by action of the heart (and consequent developed understanding of the role of the blood, and of blood types) did produce miracles. Reliably safe milk and meat. Penicillin. Reliably useful blood transfusions. The tetanus vaccine. The polio vaccine.

The great polio epidemic was only four-ish generations ago. In my childhood I knew one or two people in iron lungs, having contracted polio before the vaccine. The vaccine was miraculous to every parent at the time.

The evidence has been culturally transmitted through the generations.


"This is why things like 'evidence based' medicine is so dangerous -- we don't have evidence for the vast majority of impactful conditions, simply ignoring patients with these conditions is not a workable solution."

Citation needed.


What kind of citation are you looking for? It's just obvious to anyone who works in the medical field. There are only evidence-based treatment guidelines for a minority of conditions, and those often don't account for individual variations between real patients. So physicians often have to resort to trial and error in order to find an effective treatment.


Ignoring them is unethical. Treating them with an unscientific treatment is unethical.


And the ethical alternative is, for chronic conditions, someone who suffers for their entire life and is not allowed to do anything about it?

Look at every wastebasket diagnosis (yes, that's a real term) out there. There is no "ethical", approved treatment. In fact, there's not even an understanding of what the condition is. Instead, doctors work down a list of bad ideas with their patients: all the various medications, supplements, and even surgeries that have ever reputedly worked. Many have uncertain evidence, many more have no evidence at all. Some patients eventually hit on something that works for them. Others don't.

According to your short statement: that's unethical. Bad. Stop!

So what's the alternative? Suicide? Doing nothing is intolerable.


I think you're being a little unfair to wastebasket diagnoses. you need something for insurance codes, for drug indications, for publishing research on. having a bucket of similar syndromes is a start for drilling down further. and often you can treat things supportively, even if you can't modify the disease itself.

doctors need to be up front with patients about wastebaskets though, and rule out other diagnoses. it's wrong to chalk someone's fits up to FND until you've ruled out epilepsy and other organic causes, for example. and even things like FND are probably "real", we just don't know enough about them yet.


It really depends on how wastebasket diagnoses are used. I've seen doctors lean on them without telling the patient, and the patient turned out to have some other valid diagnosis. It's an awful situation and significantly erodes relationships.

Even someone's fits might not be an FND after ruling everything else out. There are atypical presentations of organic diseases we don't have tests for. It's fine to use wastebasket codes as long as the patient understands, but I've also seen doctors lean on certain things really early in notes (eg FNDs) without much consideration, and it's a little much to me.


I'm criticizing the parent's comment that it's not ethical to treat people with "unscientific" treatments. This is in the context of an article criticizing phenylephrine, which differs from really, deeply unscientific stuff like, say, acupuncture or homeopathic remedies, in that it has studies going both ways but the balance (according to meta-analyses) is that it's useless as a decongestant, and is thus unscientific to have on the shelves.

How does that relate to wastebasket syndromes? At least for the one I have (a migraine variant) -- every single accepted treatment falls in the same basket. Some evidence, but not enough that it's really a good idea to use it. Unless, that is, the syndrome is ruining your life.

And behind this argument that yet more things should be taken off the shelves and regulated, I'll note that the US has one of the most restrictive, patient-unfriendly regulatory atmospheres in the world. It's goddamned ridiculous, pardon my French, that the "solution" to phenylephrine not being a good decongestant would be to regulate it so that it can't be sold without a prescription. Doubly so, in a country with a healthcare industry that's so thoroughly corrupt and dysfunctional that a vast swathe of patients can't afford to even go to a doctor to get whatever tenuous recommendation they may have. (Phenylephrine, by the way, has a number of uses other than decongestion.)

... that was a rant. But this system is truly screwed up, that fact has affected my life quite negatively, and it's annoying that the knee-jerk reaction so many people have is to keep playing along with this completely broken ethical system.


And note that such things don't always remain wastebasket diagnosis. Sometimes we figure out what's actually going on.

Personally, I think many of the cases where something works for one patient but not another is actually saying there's more than one possible cause for the situation.


Or, that the cause is known but there are several potential mechanisms behind it. Usually those mechanisms are not well understood and may in turn be triggered by something that may not have been explained by science yet -- an infection, genetic abnormality or even an injury earlier in life.


Depends heavily on your definition of unscientific. There are many treatment modalities that are considered "ineffective" simply because they don't work en masse on large populations, i.e. they don't scale for identified conditions. This can be as much a problem with the diagnostics and labels that create the cohorts as with the effectiveness of the treatments.


In practice the rest of my life is probably going to be playing whack-a-mole with symptoms without any understanding of what's causing the underlying issues. I have no meaningful diagnosis so by your standard there can be no treatment.


> This is why things like 'evidence based' medicine is so dangerous -- we don't have evidence for the vast majority of impactful conditions, simply ignoring patients with these conditions is not a workable solution.

Maybe that's true at a population level, but I can take it. Just tell me the truth. If you can't treat me, then just say that. I don't want to be "ignored," but I'm not interested in being placated, either.

This, basically: https://www.youtube.com/watch?v=NyugCJ40IIw


Indeed.

My experience has been that 75%-90% of US doctors are borderline incompetent and that I can and do routinely out-diagnose my own maladies better than they do. Of course this also means I can lead them by the nose to get them to diagnose anything I please. Which is horrifying - they are Epic Fail if I can do that.

Admittedly, I was at one time planning to become a doctor myself, so I ravenously consumed everything about biology and medicine as a teen, but it seems very few doctors were anything like that, with the passion I had.

Even worse, few seem to know what the scientific method is, let alone practice it in any way as doctors. This is equally horrifying.


Honestly, the "lead them by the nose" situation is a large improvement over many of the doctors I've met, who will actively ignore any evidence that doesn't fit their favourite diagnosis. Classic example: my partner was impaled through the leg in a workplace accident and had to go to the ER. The doctor declared that the wound was, in fact, a diabetic sore and that my partner had faked the accident to cover it up. Since the "diabetes" had reached the point of developing sores, he was prescribing insulin. However, the insurance company, "as a formality", insisted that he measure her blood sugar. After TWENTY blood tests came back with a perfectly healthy blood sugar level, he stuck with the diabetes diagnosis and just prescribed a different drug for it.

A runner-up was a co-worker who WAS diabetic after a pancreatic infection. He later went to the doctor to complain about some knee pain. The doctor looked at the symptoms (e.g. joint pain, diabetes, shortness of breath) and diagnosed asymptomatic obesity. The "asymptomatic" part comes from the fact that my co-worker was built like David Bowie. However, the doctor declared that the remaining symptoms pointed towards obesity and that losing forty pounds would clear up all his issues.


For reals. I have alpha and beta thalassemia minor. It causes funny bloodwork (namely, it looks like I'm anemic). When I first got my results, doctors wanted me to see an oncologist and cardiologist and do all these tests and procedures. I thought it was odd because my bloodwork had always come back this way, so I researched a bunch of conditions that could cause the numbers I had (because my numbers weren't quite conducive to cancer or heart problems either). Anyway, I realized after making lots of lists and ruling things out that I likely had mild thalassemia, and when I finally got tested, it turned out I did.

No surprise there given my ethnic background (which frankly should have been a dead giveaway, because thalassemias are not uncommon at all), but I saved myself from being put on iron supplementation which is already potentially dangerous for a man, but especially dangerous to someone with thalassemia.


The closest we currently get to scientific method in medicine is via double-blind large randomized trials, which is not applicable for a single doctor's practice.


> In my experience, many people have a quasi-religious belief in the capability of modern medicine to perform what would otherwise be called a miracle. This belief is typically held without any evidence whatsoever.

Modern medical results would absolutely be viewed as a miracle to someone just a few decades back. Something like 90% of cancer cases are either cured or successfully suppressed (to the extent that the sufferer ends up dying of some other cause). Almost all endemic diseases have vaccines. Virtually no one dies of a bacterial infection today. Even most autoimmune disorders have effective treatments now.

The fact that there are problems yet to solve in medicine, and remaining voodoo in its practice, still doesn't change the fact that we're living in a miraculous age.


> Something like 90% of cancer cases are either cured or successfully suppressed (to the extent that the sufferer ends up dying of some other cause).

This is not correct. It's true in the US for prostate cancer, which is one of the most notoriously treatable forms of cancer, but it's not true for cancer at large.

(It's also not true for prostate cancer in many other developed countries, which actually have a worse track record at treating cancer than the US does)


90% of cancer cases are cured or suppressed? I have to call BS. Cancer is the number two cause of death in the US.


UK data from 2010-2011:

"Half (50%) of people diagnosed with cancer in England and Wales survive their disease for ten years or more" [0]

Since lots of cancers and lots of deaths are in old people, 10 year survival is quite a high bar.

[0] https://www.cancerresearchuk.org/health-professional/cancer-...


> "Half (50%) of people diagnosed with cancer in England and Wales survive their disease for ten years or more" [0]

Lumping all forms of cancer together is misleading, because cancers have dramatically different mortality rates. You need to separate by type of cancer, or else you're really just measuring the relative prevalence of different cancers.

As it turns out, the UK has a relatively low survival rate of cancers compared to other developed countries, including the US.

https://www.thelancet.com/journals/lancet/article/PIIS0140-6...


That just goes to show how unfair the medical system there is. Not everyone can afford treatment, and those who cannot are already otherwise more at risk due to the affordability of processed foods imposing unhealthy ”lifestyle choices” as well as downright hazardous living and working conditions.


> That just goes to show how unfair the medical system there is. Not everyone can afford treatment, and those who cannot are already otherwise more at risk due to the affordability of processed foods imposing unhealthy ”lifestyle choices” as well as downright hazardous living and working conditions.

I get that this explanation fits with a common preconception of the US, but it doesn't bear out in reality. The US has a higher survival rate for all common types of cancer than all other developed countries, and this has been consistently the case for the last three decades.

https://www.thelancet.com/journals/lancet/article/PIIS0140-6...


> Even most autoimmune disorders have effective treatments now.

As someone who has autoimmune diseases, you must be getting your information from a source I'm not familiar with. Autoimmune diseases are a long game of guess, test, and adjust.


Evidence-based is good if you can get it. If not, things that are merely suspected to help, or even experimenting on your own body, can be worth it.

For example, even if there is no evidence that a better diet, reducing EMF exposure, strength exercise, drinking pure water, etc. will help your condition, there is no harm and fairly low cost in trying.

Stay sceptical and open minded too.


We have miracle cures for many things, but medicine is worse than ever.

I grew up with ER doctors. There are dozens of things that would have killed you in 1990 that you'll walk away from today.

But in the slow transition from a professional discipline to a sort of IT help desk for health delivery, billing comes first, and even that sucks.



