Why devs don't like threat modeling: a cognitive science hypothesis

Webs

I've mentioned this study before, but Sweller et al. (1998) point out that humans are bad at complex reasoning, particularly at holding long chains of reasoning in working memory. They're especially bad when they have no previous experience to reference.

Sweller and Co. looked at chess players who were asked to reproduce board configurations. Experts reproduced the configurations more accurately than novices, as long as those configurations came from previous matches they had played. Given random board configurations, the experts fared no better than the novices. The experts were relying on prior experiences stored in long-term memory to reproduce the boards; the novices had to use means-ends analysis (MEA) to logic their way through.

That MEA taxes or exceeds the capacity of working memory, a state we call cognitive overload. Working memory is limited, so once you burn it up, that's it: your brain can't brain beyond that limit. Long-term memory comes in with the assist by automating chunks of knowledge.

It's why reading letters and sounds as a kid is so taxing, but as adults we read and produce words much more effortlessly. It's why driving at first seems like there's so much to pay attention to, but as parts of the process become automatic, it demands less cognitive load.

Threat modeling is also a complex process. It requires reproducing the architecture, knowing where and how to recognize what is at risk and why, and knowing how to mitigate those threats. A dev might know their architecture (knowing part of it is more likely than all of it), but knowing what is at risk, why, and how to mitigate that risk is a different domain of knowledge. So pulling from the infinite chasm of "what is a threat," with little previous experience to draw from, means they're likely engaging in MEA to figure it out, which leads to cognitive overload.

Threat modeling, especially for those of us with wild brains, can venture to all sorts of places when you ask "What is a threat?" The first time I did it, my answer was "Elephants." It should be no surprise that things requiring lots of cognitive effort tend to be avoided. As a dev, I can diagram the architecture, but if all the steps beyond that are amorphous and overwhelming, chances are I won't do it. Something to consider the next time you're in infosec wondering why devs don't do more threat modeling.
