Am working now on a few situations where the benefits (and challenges) of using algorithms to automate decisions have come up, e.g. monitoring/escalation, university admissions, credit, etc. So I was intrigued by an article in last month's HBR (hbr.org/2017/04/creating-simple-rules-for-complex-decisions) on simple scoring rules for judges.
Human beings have all sorts of biases when making decisions under uncertainty (which is what this is). Left to their own devices, they also exhibit tremendous variability in outcomes that the evidence at hand cannot explain ("hunch" is highly person- and moment-dependent). So there is a lot of attraction to a big-data solution, or even a small and simple one like the checklist in the article.

But any such algorithm can only be calibrated inside its comfort zone, whatever that is. Within that zone, it stands to reason it will often improve outcomes, and consistency besides. Too little thought, however, goes into determining that comfort zone, and into making the algorithm self-aware enough to escalate when it switches from interpolation within its calibrated zone to extrapolation outside it. In particular, don't trust any "simple" scorecard that is silently structurally linear, for instance anything where you just "add up" points from different questions or categories. The world is inherently nonlinear, and a lack of engagement with where linearity is a good surface of best fit (which is what such an algorithm provides) means a lack of escalation.
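To make the point concrete, here is a minimal sketch of what an "escalate when extrapolating" guard around a purely additive scorecard could look like. Everything in it is hypothetical: the feature names, weights, cutoff, and the crude per-feature range check standing in for the calibrated comfort zone are illustrative assumptions, not anything from the HBR article or a real scoring system.

```python
"""Toy additive scorecard with an 'escalate when extrapolating' guard.

All names, weights, and the range-based comfort-zone check are
hypothetical and purely illustrative.
"""

from dataclasses import dataclass


@dataclass
class CalibratedScorecard:
    weights: dict[str, float]                        # points per feature (structurally linear)
    feature_ranges: dict[str, tuple[float, float]]   # ranges observed in the calibration data

    def in_comfort_zone(self, case: dict[str, float]) -> bool:
        """Crude proxy for the calibrated zone: every feature must lie within
        the range seen during calibration. A real check would need something
        richer (data density, leverage, combinations of features, ...)."""
        return all(
            lo <= case[name] <= hi
            for name, (lo, hi) in self.feature_ranges.items()
        )

    def score(self, case: dict[str, float]) -> float:
        # "Just add up points" -- a linear surface of best fit.
        return sum(w * case[name] for name, w in self.weights.items())

    def decide(self, case: dict[str, float], cutoff: float) -> str:
        if not self.in_comfort_zone(case):
            # Outside the calibrated zone the linear fit is extrapolating,
            # so hand the case back to a human rather than trust the score.
            return "escalate to human review"
        return "approve" if self.score(case) >= cutoff else "decline"


if __name__ == "__main__":
    card = CalibratedScorecard(
        weights={"income": 0.002, "age": 0.5},
        feature_ranges={"income": (20_000, 150_000), "age": (21, 70)},
    )
    print(card.decide({"income": 60_000, "age": 35}, cutoff=100))   # within the calibrated zone
    print(card.decide({"income": 900_000, "age": 35}, cutoff=100))  # extrapolating -> escalate
```

The guard here is deliberately simplistic; the broader point is only that the escalation logic has to be designed in explicitly, because the additive score itself gives no signal that it has left the region where it was fitted.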
Martin Pergler, Principal, Balanced Risk Strategies, Ltd.