> Note that "You don't stop securing it just because you've found one good option" is NOT the same thing as saying "You don't stop securing it until you've closed every possible security gap and compromised usability".
It seems to me that 'defense-in-depth' can very easily become a mantra that inevitably means drifting towards the latter. What are the real guidelines for telling when enough is enough? Because in my experience, people who can even articulate anything along those lines are way, way fewer than people who make appeals to defense-in-depth.
And I think this is part of the problem: without a principled way to assess what is gratuitous, repeated appeals to defense-in-depth will lead to security practices that heavily favor having more measures in place over having a good UX. This is because the environments where information security is most valued are already organizations that, frankly, do not give a shit about UX. The customers for cybersecurity products are massive bureaucracies: large enterprises, governments, and militaries. The vendors that sell those products are embedded in a broader market where no software really has to be usable, because there's a fundamental disconnect between the purchasing decision and the use of the software. For all B2B software, the user is not the customer, and it shows in a thousand ways. In infosec, things are tilted even further: lots of easy routes to compliance that are terrible for UX are falsely perceived and presented as strictly required, perhaps even by law.
The idea that in a B2B market that primarily serves large organizations and governments, you will get any organic weighing and balancing of security against usability 'for free' is sheer fantasy. So where is the real counterweight to advice that, on its own, amounts to 'always add more, unless you have a good reason not to'?
I think this is a more elegantly stated version of my argument.
It's also why I strongly link this argument to cancer. It's an idea that grows unbounded until it's harmful, and by the time the organization realizes the harm, it's often too late to change.
> I think this is a more elegantly stated version of my argument.
Yep. I think your argument pretty much conveyed the same thing, along with a lot of anger and frustration.
I also agree with that anger and frustration. I've felt the same rage before, when I've been hit with blockers or UX degradation related to nominal or actual attempts to improve security. Restrictions that are ill-motivated (or whose motivations are just not clearly or convincingly communicated) are infuriating.
> by the time the organization realizes the harm, it's often too late to change.
This worry is the twin of the rage for me: the sense that I can't do anything about it and it's never going to get better. A dreadful, reluctant admission to myself that the only way to stop the continual degradation of my work life will be to uproot myself: give up my job and everything I do like about it, leave behind people I enjoy working with, and reduce the amount of contact I have with them.
Happily, engaging directly with my company's infosec department often gives me hope and allays these fears somewhat. But generally, online discussion with people who implement security controls tends to reinforce my worry that, to borrow your metaphor, the disease is systemic and terminal.
Most 'cybersecurity professionals' (those visible online, at least) transparently do not give a shit about UX, display flagrantly antagonistic attitudes toward users and developers, and talk often about defense-in-depth without ever articulating any inherent limit on appeals to it beyond 'well, don't bother with measures that don't increase security at all'. All of it sends strong signals that people who value UX, DX, autonomy, morale, and well-being, to the extent they are present in infosec at all, are outliers who do not belong and have no hope of being effective.
And then the response to someone openly including a dimension of emotionality in an argument about a security measure they feel is gratuitous and cumbersome is
> Did... did [a cumbersome security measure] hurt you?
Like, seriously? Yes. Indeed it did and does.
But more than the security measures themselves, the pervasive attitude conveyed by that belittling question is the bigger problem. And it generates many of the smaller ones.