Beyond buzzwords: the role of the behavioural sciences in dispute resolution

4 May 2021
London International Disputes Week

For lawyers, the impact of these biases on judges and arbitrators is particularly interesting, whether out of genuine concern for the quality of the judicial decision-making process or as a way of playing into those biases as an advocacy tool (or both). Of course, it helps that ‘behavioural law’ studies also make for great headlines and buzzwords. Who isn’t interested in a story about judges rejecting parole applications more often when they are hungry? Or judges punishing prisoners with longer sentences after rolling a pair of dice rigged to land on a high number?

Proceed with caution

It can be tempting to take these headlines and buzzwords at face value; it is not uncommon to hear someone refer in broad terms to “a study they heard about which showed that [insert controversial finding]”. However, in the light of the complexity of the underlying subject matter, it is important to proceed with caution.

For example, the study linking the outcome of parole decisions to judges’ food breaks is cited often (admittedly, including by myself) and could leave even the biggest optimist disillusioned with the justice system. However, as has been pointed out before (see this insightful piece), more detail on the study is required to properly evaluate its findings. In particular, it is not entirely clear which specific factor influenced the judges’ decisions (was it food, or rest, or mood?), whether the observed effect can be replicated outside the narrow conditions of the study, and whether there are artefacts within the underlying dataset and analysis which might have contributed to the controversial result. Whilst the ‘headline’ finding of the study as it is often presented (“judges make harsher decisions when they are hangry”) might sound appealing, the science may point to a softer conclusion or, at least, one subject to certain caveats.

Applying behavioural science concepts in legal practice

Using the behavioural sciences in (legal) practice can be a complex exercise, both for decision-makers trying to de-bias their decision-making and for advocates seeking an edge by playing into those biases. To employ these concepts effectively, it is necessary to grasp (i) what, exactly, the phenomenon behind the buzzword or headline entails, (ii) how it is likely to affect a decision-maker, and (iii) how it might be neutralised or, conversely, exploited.

The answers to these questions are often not straightforward, for a number of reasons.

First, as alluded to above, it is usually necessary to scrutinise the experimental design and determine how much weight to accord to a study’s findings. This is often easier said than done. Effective scientific critique requires knowledge and skills in which lawyers do not typically receive formal training (though, of course, some sifting can be done on the basis of common sense).

In practice, it can also be difficult to determine where to draw the line. I once wrote a paper on the subconscious impact of repeat appointments on an arbitrator’s ability to remain impartial. I found a study whose results fully supported my hypothesis, but whose method relied on an experiment conducted on around 100 college graduates. Was the study conclusive evidence of the (in)ability of highly experienced arbitrators to exercise impartial judgement after a number of appointments by the same party? Probably not, but might it have some probative value? And what about a subsequent study conducted on professional auditors? Was that robust enough to support my conclusions? Even when methodological issues (such as sample size and participant demographics) are unproblematic, the ‘replication crisis’ (i.e. the phenomenon that the results of many studies have not been successfully replicated) leaves a degree of uncertainty about the implications of a study’s findings – how does one decide which studies to ‘trust’?

Second, it is not always clear how a given concept in the behavioural sciences translates to a legal setting. In other words, it is not a given that findings from studies conducted on ‘laypersons’ necessarily apply to judges and arbitrators. For example, trained legal decision-makers may demonstrate greater immunity to certain biases (and greater susceptibility to others) than laypersons, and the design of legal proceedings may stimulate reflective thought, such that the proceedings themselves have a natural de-biasing effect. Some studies suggest, for example, that judges might be better than average at limiting the impact of the representativeness heuristic and framing effects on their decision-making, although they were certainly not immune to a number of other biases in experimental settings. For practical reasons, the specific effects of biases on judges and arbitrators are extremely difficult to test in the ‘real world’, though fortunately more work is being done in closely equivalent lab settings.

Third, if these hurdles are overcome, there is still the question of how a specific bias affects the legal decision-maker in a particular situation. The precise effects of even very robust concepts such as anchoring[1] might be complex to identify and apply in practice: e.g., what actually anchors the decision-maker and when? Is it the claimant’s inflated damages claim, the respondent’s inflated counter-claim, or even the paragraph numbering of the submissions?

The role of the behavioural sciences in dispute resolution

The factors identified above highlight that it can be challenging to determine which heuristics and biases play a role in legal decision-making and what their impact is. For the avoidance of doubt: this is not a plea to disregard behavioural law or the application of concepts from behavioural science to dispute resolution. Quite the opposite. It is clear that heuristics and biases affect legal decision-making. Not only is it fascinating to investigate the extent of this impact further; it is also in the interests of justice to improve the quality of decision-making. However, it is key that the practical implications of these complex concepts are not unduly simplified and that any conclusions that are drawn are, in fact, backed by the science. To know whether that is the case, it is important to look beyond the buzzwords – even if that may ultimately lead to a less catchy headline.

A session on ‘Psychology of Decision Making in Dispute Resolution’ is taking place on Wednesday 12 May at 15:30 and, in preparation for the session, you are invited to participate in a short survey on decision making. The survey closes for submissions on 6 May 2021.

This blog was written by Rutger Metsch, an associate at Herbert Smith Freehills LLP, a member of LIDW.


[1] The phenomenon that exposure to a seemingly irrelevant number influences the decision-maker’s subsequent numerical judgment. E.g., in the study discussed above, judges on average handed down longer prison sentences after rolling loaded dice that showed a high number (the anchor) than after rolling dice that showed a low number.