Rational Crime and Subjective Probability
Here’s a personal tidbit about me – I’m a bit of a true crime buff. I often listen to true crime podcasts and audio essays while unwinding at the end of the day, or during my time in the gym. (My wife finds listening to podcasts about horrific crimes an incomprehensible way of winding down, nor does she understand how I can possibly enjoy horror films.) One thing I’ve noticed is that in many of the cases I’ve heard, the criminal drastically overestimates their odds of getting away with the crime.
One of my favorite shows in the genre is called The Casual Criminalist. In this show the host, Simon Whistler, is provided with a script from one of his writers that he’s never seen or read before recording. He then reads it off for the first time on the show, frequently breaking from the script to add in some of his own side commentary. Very often, this takes the form of him absolutely roasting the criminal featured in the episode for their ineptitude both in committing their crime and in their attempts to cover up what they did, such as this case involving a young man. More famously (or infamously), there is the case of Leopold and Loeb, two young men convinced they were capable of masterminding the perfect crime, but whose actual performance didn’t exactly rise to the level of Professor Moriarty.
In a recent post I once again touched on Gary Becker‘s model of criminal behavior. To briefly recap, Becker modeled choices about committing crimes as a form of rational behavior. Criminals consider the expected payoff of their crime, and compare it to the expected cost of committing the crime. The expected cost is a function of the probability and severity of punishment. A severe punishment might provide little deterrence if the odds of being punished are minuscule. And near-certain punishment might also provide little deterrence if the punishment itself is trivial. To put more specific numbers to it, a 0.1% chance of a $1,000 penalty provides as much deterrence as a guaranteed $1 penalty – not much. In the case of Leopold and Loeb, even though their crime carried the possibility of a death penalty or a life sentence, their (false) certainty that they could evade detection nullified any deterrence.
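The arithmetic above can be sketched in a few lines of code. This is just a minimal illustration of Becker-style expected-cost reasoning, with hypothetical numbers of my own choosing – not a claim about how Becker formalized his model:

```python
# Minimal sketch of Becker-style deterrence arithmetic (illustrative numbers only).

def expected_cost(p_punishment: float, penalty: float) -> float:
    """Expected cost of a crime: probability of punishment times its severity."""
    return p_punishment * penalty

# A 0.1% chance of a $1,000 penalty deters exactly as much as a guaranteed $1 penalty.
assert expected_cost(0.001, 1_000) == expected_cost(1.0, 1.0) == 1.0

def commits_crime(payoff: float, p_punishment: float, penalty: float) -> bool:
    # A rational criminal (in the economist's sense) acts when the expected
    # payoff exceeds the expected cost -- using their *subjective* probability.
    return payoff > expected_cost(p_punishment, penalty)

# Even a severe penalty deters nothing if the criminal is (falsely) certain
# they won't be caught: a $100,000 penalty at subjective odds of 0.01% is
# an expected cost of only $10.
print(commits_crime(payoff=50, p_punishment=0.0001, penalty=100_000))
```

The key design point is that `p_punishment` here is the criminal’s own belief, not the true probability – which is exactly the wedge the rest of this post turns on.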
This highlights an important point. When speaking of criminals rationally accounting for the probability of punishment, what’s relevant is the criminal’s own subjective estimate of the probability of being caught and punished. When economists talk about people making “rational” decisions, that does not mean their decisions can’t be mistaken or ill-informed. If a criminal drastically underestimates their likelihood of being caught, their estimate of the expected costs of the crime will be lower than the actual costs. As a result, they might end up carrying out a crime that is “not worth it” – one they themselves would have avoided had they judged the odds correctly.
This does not show that the criminal was behaving “irrationally” as economists use the term. But if criminals systematically underestimate their likelihood of being caught and punished, that would lead to an overproduction of crime – criminal acts that would have been deterred by an accurate understanding of the likely outcome will fail to be deterred. This provides one argument in favor of stringent punishment for crimes. If criminals underestimate the expected costs of their behavior because they systematically underestimate the odds of punishment, the only way to raise the expected cost is to increase the severity of punishment.
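That compensation argument is easy to make concrete with a quick calculation. The numbers below are hypothetical, chosen only to show how the severity of punishment has to scale up by the same factor that the subjective odds are scaled down:

```python
# Sketch: how much severity must rise to offset a criminal's underestimate of
# the odds of punishment, holding the expected cost fixed. Hypothetical numbers.

def compensating_penalty(target_expected_cost: float, subjective_p: float) -> float:
    """Penalty needed so that subjective_p * penalty equals the target expected cost."""
    return target_expected_cost / subjective_p

# Suppose deterrence requires an expected cost of $10,000. The true odds of
# punishment are 50%, but criminals believe they are only 10%.
true_p, subjective_p = 0.50, 0.10

penalty_if_beliefs_accurate = compensating_penalty(10_000, true_p)
penalty_given_underestimate = compensating_penalty(10_000, subjective_p)

# Underestimating the odds by a factor of five forces the penalty up by the
# same factor: $20,000 becomes $100,000.
print(penalty_if_beliefs_accurate, penalty_given_underestimate)
```

In other words, under this simple expected-cost framing, every factor by which criminals discount the odds of punishment must be matched by an equal factor of added severity to keep deterrence constant.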
But sometimes, criminals are able to more accurately assess their odds of getting caught. Consider the case of Joseph DeAngelo, known by many monikers, most famously as the Golden State Killer. His criminal activities carried on from 1974 to 1986. He was finally apprehended in 2018, 32 years after his spree ended. DeAngelo, it turned out, was a police officer. As a result, he knew far better than most how to avoid detection and how investigations would be carried out. He was very careful, for example, to never leave fingerprints behind. But there’s something particularly unusual about DeAngelo – the 32-year gap between the end of his crime spree and his capture. In those decades, he didn’t strike again. This is noteworthy – serial killers almost never stop. A serial killer who stops killing is so unusual that it’s usually taken as a sign that the killer has themselves died or perhaps been arrested for some other crime.
So why did DeAngelo stop? Well, shortly after his final act as the Golden State Killer, something happened that made waves in the law enforcement world – the first court case in which DNA evidence was used to secure a murder conviction. This put DNA evidence on the radar for law enforcement officers everywhere, including DeAngelo. And he realized what this meant. While he had taken great care to avoid leaving evidence that police investigators could link to him, he also knew his DNA would be present at previous crime scenes and very likely at new ones as well. Suddenly, the “probability of punishment” variable shot up drastically for him, and combined with the quite severe punishment that would await him for his acts, carrying out further crimes was “not worth it” anymore.
And thus a very brutal and pathological serial killer whose criminal acts had been intensifying and escalating for over a decade suddenly stopped. This makes perfect sense under Becker’s theory of crime. And while rational choice theory isn’t a perfect model for all human behavior in all circumstances, I think this shows it explains a much wider range of behavior than most give it credit for.