Happy New Year; had a busy fall and I see I didn't get around to posting.
Two upcoming conference presentations this month; in both, I'll focus on parallels between the private and public sectors, with an emphasis on risk management in organizations that straddle that dividing line in terms of governance, and therefore decision processes and risk appetite.
The request was innocuous and not atypical: can you help our company develop risk reporting, and put in place a "small risk function" to drive it? Sure, mature risk organizations frame their raison d'être more broadly, but many successful risk programs have as their genesis a request for better risk reporting.
However, further detail (through an intermediary) was not encouraging. The risk function was to report organizationally to the head of public relations, so as to stay "on message" with stakeholders. Not exactly what a risk professional brought up on a diet of risk-management independence and the three lines of defense is keen to hear. I nearly declined to initiate discussions, strongly suspecting this might pervert risk management into risk whitewashing.
But, a couple of conversations later, we're reshaping it into something that does make sense. It's a company that has been stuck in deterministic, head-in-the-sand, don't-ask-don't-tell thinking. Stakeholders are getting restless, and when they don't get good answers to their risk-related questions, they supply their own, less-than-perfect ones. It's not clear yet whether there is a professional collaboration to be had, but we've progressed to a much more mutually satisfactory conversation about how to pragmatically start a dialogue about risk between top management and stakeholders, focused not on staying "on message" but on broadening the message. For me as a consultant, it's a good lesson in humility. Just because the initial framing of the issues rubbed uncomfortably against some core tenets of risk management as conventionally framed doesn't mean there isn't a meaningful opportunity for better engagement with risk and uncertainty.
[Another excerpt from my book. A bit unpolished, but sharing as-is since I've seen several companies falling into this trap recently. Footnotes have ended up just pasted in smaller font with FN: before them]
In [a previous section, not included in this excerpt], we discussed the usefulness of considering a range of expert opinions — predictions, if you will — for important drivers of uncertainty in your business. We also discussed the benefits of bringing experts together, and engaging in a dialogue about the rationale behind their opinions. We even discussed formalized techniques where the dialogue is interspersed with iterative polling, to see whether the discussion increases consensus or hardens into concrete lack of consensus that can underpin useful alternate scenarios to consider.
Unfortunately, the old saying applies: “if all you’ve got is a hammer, everything looks like a nail”. Thoughtful executives who decide to bring probabilistic thinking to their decision-making routinely misuse expert polls like this to bootstrap a probabilistic range in a flawed way.
Consider the example in the following table, showing the range of near-term Canadian-to-US-dollar exchange-rate forecasts published by major Canadian banks in late February 2018. These numbers represent the expected (or in some cases, most likely) exchange rates from the banks’ models, which are often quite complex and based on various macroeconomic, financial (e.g. interest rate), and momentum/sentiment-driven indicators the banks monitor or predict as well.
An executive needing a base-case assumption for their short-term financial plan could do a lot worse than to use the average or median of these figures, or even to consistently use the prediction of one bank they have chosen, without cherry-picking. But suppose they have now “got religion” about probabilistic thinking, and want not just a single point-estimate base case, but a probability distribution, or at least error bars around the base-case estimate. They have a hammer - probabilistic thinking - and see 6 nails: 6 bank estimates.
It seems beautifully set up to do something like the following: throw out the extremes (the 1.21 and one of the 1.30s for Q2) as potential outliers and create a range from the rest: “The dollar will most likely be between 1.22 and 1.30”. Or do something mathematically fancier, like calculating quartiles [FN: In this case, this yields roughly the same range. By the way, there are different algorithms for calculating quartiles and other percentiles; if you really want to do this, you should for instance read up on the difference between Excel's QUARTILE.INC and QUARTILE.EXC functions. But in this instance, the issue is elsewhere.] from the “sample” and using them as whiskers around the median: “P25/median/P75 = 1.22/1.26/1.30”.
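As an aside, the footnote's point about quartile algorithms is easy to see outside Excel too. Here is a minimal sketch in Python, using hypothetical forecast numbers consistent with the range above (numpy's `method` parameter plays the role of the QUARTILE.INC/QUARTILE.EXC distinction):

```python
# Illustrative only: six hypothetical bank point forecasts, roughly
# matching the range described in the text (not the actual 2018 data).
import numpy as np

forecasts = np.array([1.21, 1.22, 1.25, 1.27, 1.30, 1.30])

# numpy's `method` parameter mirrors Excel's two quartile functions:
# "linear" (the default) corresponds to QUARTILE.INC,
# "weibull" corresponds to QUARTILE.EXC.
q_inc = np.percentile(forecasts, [25, 50, 75], method="linear")
q_exc = np.percentile(forecasts, [25, 50, 75], method="weibull")

print("QUARTILE.INC-style P25/median/P75:", q_inc)
print("QUARTILE.EXC-style P25/median/P75:", q_exc)
```

The two conventions disagree by fractions of a cent in the whiskers (the exclusive version is slightly wider); as the footnote says, though, the real issue is elsewhere.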
This is, however, alchemy. It looks impressive, but is meaningless. Each bank’s estimate is in itself the expected value of that bank’s probability distribution of the outcome. The bank may not have actually calculated it probabilistically, or even describe it that way, but implicit in its prediction is a sense that this is the “best single guess” that balances out reasons it might be lower or higher. Building a range from these point estimates systematically averages away any thinking the banks may have done about plausible ranges of outcomes.
If you’re not persuaded, consider this thought experiment. Suppose there were only one bank, which as a result of its deep internal analysis decided the dollar was exactly equally likely to be worth 1.0 or 1.5, precisely. If asked for a single point prediction, it would be quite reasonable for the bank to say “1.25”, the average. The alchemy described above would take that one data point as gospel, and say a dollar will be worth exactly 1.25, when in fact the analysis would actually be saying that 1.25 has zero likelihood; it’s either much lower or higher. This example is extreme, but smaller versions of that type of thinking are layered into each bank’s prediction.
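The effect is easy to demonstrate with a small simulation, using made-up internal distributions for the banks: the spread of the published point forecasts is much narrower than the spread of outcomes the banks' (hypothetical) internal views actually allow for.

```python
# A minimal simulation of the "alchemy" problem, with made-up numbers:
# each bank's published forecast is the mean of its own internal
# distribution, so the spread of published forecasts understates the
# spread of outcomes each bank actually considers plausible.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical internal views: each bank believes the rate is normally
# distributed around its own mean, with a ~4-cent standard deviation.
bank_means = np.array([1.22, 1.24, 1.25, 1.26, 1.28, 1.30])
internal_sd = 0.04

# Spread implied by treating the six point forecasts as a "sample":
spread_of_forecasts = bank_means.max() - bank_means.min()

# Spread of outcomes if you instead pool draws from the banks'
# internal distributions (10,000 draws per bank):
draws = rng.normal(bank_means.repeat(10_000), internal_sd)
p5, p95 = np.percentile(draws, [5, 95])

print(f"range of point forecasts:        {spread_of_forecasts:.3f}")
print(f"pooled 5th-95th percentile width: {p95 - p5:.3f}")
```

With these (invented) assumptions, the pooled 5th-to-95th percentile range comes out roughly twice as wide as the range of the point forecasts themselves.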
So - what can you do?
First, looking at the range of predictions, and using heuristics like throwing away the extremes and seeing what is left, is a good *qualitative* indicator of the level of confidence you should have in any prediction. As it happens, the banks’ average predictions usually end up fairly close to each other, within a couple of cents.[FN: The fact that at this point in time they were not has hit the popular press, for instance https://www.theglobeandmail.com/report-on-business/top-business-stories/crazy-as-a-loon-the-chasm-in-forecasts-for-the-canadiandollar/article38019810/] The fact that the range is now 8+ cents is itself indicative of a greater degree of uncertainty.
Second, if you can, go to the source (or to other sources) to gain a true range of predictions, ideally with scenarios and rationales. For a Canadian company at this time, it is much more important to understand *how low or high could the Canadian dollar plausibly go*, based on how different factors evolve. If you want, you can use sampling on scenarios to give you a range, in the manner discussed in Section [X.X]. As mentioned there, there are all sorts of biases associated with selection and weighting of the scenarios, but you are not falling into the trap of reducing the range of uncertainty just because you don’t see it.
Third, look for something other than nails, and bring out your screwdriver or glue or whatnot. For instance, looking at historical US/Canadian dollar exchange rates, it turns out that about 20% of the time, the rate changes by more than ±5% over a 3-month period. And on average every second year, over some 3-month period the rate jumps by about 10%. Some version of this type of analysis is probably more useful for putting semi-probabilistic error bars around any chosen single prediction than alchemy on the range of banks’ expected-value predictions.
[FN: Warning: as discussed in [xxx], a bug as well as a feature is that the outcomes of such analysis depend on the choices you made in structuring it. In this instance, it is based on looking at the ratio of month-end exchange rates to the rate 3 months previous, over the past 10 years. And the 20% figure refers to 10% of the time having a decrease of 5% or more, and 10% of the time an increase of 5% or more. Whether this is the right framing depends on the business problem it is being applied to. For instance, the ideal analysis is different if the goal is to estimate how much quarterly financial results will be influenced by (unhedged) currency exposures integral to the business than if it is to stress-test how much could be gained or lost on a single contract that leaves a revenue-cost currency mismatch open over a period of 3 months. That is outside the scope of this book.]
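For readers who want to try this on their own data, the footnote's analysis can be sketched as follows. A synthetic random walk stands in here for actual month-end USD/CAD history, so the printed share is illustrative, not the 20% figure quoted above:

```python
# Sketch of the 3-month-change analysis described in the text, assuming
# you have a monthly series of month-end exchange rates. A synthetic
# random walk stands in for real history; the 5% threshold and the
# 10-year window come from the text.
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for ~10 years of month-end rates (120 observations).
monthly_rates = 1.25 * np.exp(np.cumsum(rng.normal(0, 0.02, size=120)))

# Ratio of each month-end rate to the rate 3 months earlier, minus 1.
three_month_change = monthly_rates[3:] / monthly_rates[:-3] - 1.0

# Share of overlapping 3-month windows with a move larger than +/-5%.
share_large_moves = np.mean(np.abs(three_month_change) > 0.05)

print(f"share of 3-month periods with a >5% move: {share_large_moves:.1%}")
```

Swapping in an actual rate series is a one-line change; as the footnote warns, the framing choices (window length, overlapping vs. non-overlapping periods, lookback horizon) drive the answer.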
I'm making two upcoming conference presentations, both keynote speeches, one followed by moderating an executive panel. I will hopefully be able to post material soon afterwards; contact me if interested earlier.
My former colleagues at McKinsey have just published an article on stress-testing for nonfinancial companies. Unsurprisingly, I very much agree with them. Whether the goal is stress-testing (a specific risk-management centred viewpoint) or strategic planning, companies need to think more in scenarios to effectively get their arms around the uncertainty they need to navigate.
I've written about this on these blog pages and in an article planned for a Canadian mainline business publication (where it ultimately ended up on the cutting room floor after internal conflict in the editorial board in which I was collateral damage) in the context of the election of Mr Trump as president.
Ultimately, stress testing, scenario planning, and risk management live in the same ecosystem, just in different ecological niches. Ideally they work together, and which one is a priority to exercise for a given company, or provides the best path towards a more holistic approach, depends on the circumstances.
I'm a member of Umbrex, a network of independent consultants with a background in the top strategic consulting firms. While there are a number of companies out there that uberize the consulting model (i.e., do the client development front-end, the billing/support back-end, and subcontract independent consultants to do the actual work), Umbrex is more of a consultants' pub, or actually members-only club. We may form joint teams when an opportunity warrants, but above all, we exchange ideas, and refer our colleagues to trusted experts or service providers when needed. From a client service perspective, it erodes one of the biggest structural advantages of work in a large consulting firm: the ability to quickly find Someone who knows about the Widget market in Micronesia when that's what unexpectedly bubbled up in the project.
We also support and challenge each other on the big picture as well as the nuts and bolts of running an independent consulting practice. A lot of this is on a private mailing list, but the founder of Umbrex, Will Bachman, recently launched a publicly-accessible podcast, and this week I'm his guest, talking both about what I do and how I work.
For anyone else consulting independently, or interested in what it's like, Will's interview with me is #19 of a growing library of episodes, all of which can be listened to not only on the website but as a regular podcast through iTunes or other channels.
Belief is stronger than analysis. This is well known to those who study cognitive biases (Kahneman, Slovic, Ariely, etc.), but it is interesting to see it confirmed with an experiment involving politics and math: moderately challenging calculations are more likely to be "done wrong" when the results go against the subject's political beliefs than when the results are politically neutral. And the more "numerate" the subject, the stronger the effect.
I think this is an important part of the answer for those of us who look at the political situation (anywhere...) and are amazed at how "sane" people robustly maintain their belief systems, impervious to "evidence", and at the importance of patently flimsy echo-chamber talking points in anchoring those beliefs.
The underlying research is at http://static1.1.sqspcdn.com/…/138…/wp_draft_1.5_9_14_13.pdf The math task was Bayesian inversion of a contingency table, something humans are notoriously bad at (e.g. with health treatments). I'm not 100% convinced the hypothesis "people do math wrong" (an observation about the quality of Kahneman's thinking-slow system) is proven, versus an alternative hypothesis of "people don't bother to engage thinking-slow if political beliefs provide a salient thinking-fast answer".
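For concreteness, the study's task type can be sketched as a 2x2 contingency table where the intuitively salient comparison (raw counts) gives the opposite answer to the correct one (conditional rates). The counts below are illustrative, in the style of the skin-cream version of the task:

```python
# A sketch of the contingency-table task: given counts, the correct
# read requires comparing conditional rates, not raw counts. Counts
# are illustrative, structured so the big "improved with treatment"
# number is misleading.

# rows: treatment / control; cols: improved / worsened
table = {
    ("treatment", "improved"): 223,
    ("treatment", "worsened"): 75,
    ("control", "improved"): 107,
    ("control", "worsened"): 21,
}

def improvement_rate(group: str) -> float:
    """P(improved | group), the conditional rate the intuitive read skips."""
    improved = table[(group, "improved")]
    worsened = table[(group, "worsened")]
    return improved / (improved + worsened)

treated = improvement_rate("treatment")
untreated = improvement_rate("control")

# The intuitive (wrong) read: 223 > 107, so the treatment works.
# The correct read: the control group improved at a higher *rate*.
print(f"treated: {treated:.2f}, untreated: {untreated:.2f}")
```

The thinking-fast answer is available from a single glance at the largest cell; the thinking-slow answer requires two divisions and a comparison, which is exactly the gap the study exploits.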
I am now working on a few situations where the benefits (and challenges) of using algorithms to automate decisions have come up, e.g. monitoring/escalation, university admissions, credit, etc. So I was intrigued by an article in last month's HBR (hbr.org/2017/04/creating-simple-rules-for-complex-decisions) on scoring systems used by judges.
Human beings have all sorts of biases when making decisions under uncertainty (which is what this is). And, left to their own devices, they exhibit a tremendous variability of outcomes not explainable by any evidence provided ("hunch" is highly person- and moment-dependent). So there is a lot of attraction to a big-data solution, or even a small and simple one like the checklist in the article. But all such algorithms can only be calibrated inside their comfort zone, whatever it is. Within that zone, it makes sense that they will often improve outcomes, and, by the way, consistency.
But not enough thought is put into determining that comfort zone, and into making the algorithms self-aware enough to escalate when they switch from interpolation within their calibrated zone to extrapolation outside it. In particular, don't trust any "simple" scorecard that is silently structurally linear, for instance anything where you just "add up" points from different questions or categories. The world is inherently nonlinear. And lack of engagement with where linearity is a good surface of best fit (which is what such an algorithm provides) means lack of escalation.
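A minimal sketch of what "self-aware enough to escalate" could mean in practice: an additive (and therefore linear) scorecard paired with a guard that flags any case whose inputs fall outside the ranges it was calibrated on. All names, point weights, and ranges here are hypothetical:

```python
# Hypothetical additive scorecard with an extrapolation guard.
from dataclasses import dataclass

@dataclass
class CalibratedRange:
    low: float
    high: float

    def contains(self, x: float) -> bool:
        return self.low <= x <= self.high

# Input ranges observed in the (hypothetical) calibration data.
CALIBRATION = {
    "age": CalibratedRange(18, 65),
    "prior_incidents": CalibratedRange(0, 5),
}

# Additive point weights: this is exactly the "just add up points"
# structure the text warns about.
POINTS = {"age": 0.1, "prior_incidents": 2.0}

def score(case: dict) -> tuple[float, bool]:
    """Return (score, escalate). Escalate whenever any input falls
    outside its calibrated range: inside those bounds the linear fit
    is interpolating; outside them it is extrapolating blind."""
    escalate = any(
        not CALIBRATION[k].contains(v) for k, v in case.items()
    )
    total = sum(POINTS[k] * v for k, v in case.items())
    return total, escalate

print(score({"age": 40, "prior_incidents": 1}))   # inside: no escalation
print(score({"age": 40, "prior_incidents": 12}))  # outside: escalate
```

The guard does not make the scorecard nonlinear; it simply makes explicit where the linear surface of best fit was never checked, which is the escalation the text argues for.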
An article in the WSJ (quoted also on vox.com) reports that the Trump transition team pressured government staff to generate unrealistically rosy economic forecasts. "[W]hat’s unusual about the administration’s forecasts isn’t just their relative optimism but also the process by which they were derived. [...] they weren’t derived by any process at all. Instead of letting economists build a forecast, Trump’s budget was put together with transition officials telling the CEA staff the growth targets that their budget would produce and asking them to backfill other estimates off those figures.”
Setting aside questions of delimitations of responsibility and effective expert collaboration, this also goes to the heart of the difference between uncertainty and risk. Navigating uncertainty involves thoughtfully exploring the range of paths the future may follow, and defining and working towards reasonable objectives consistent with that. A "what would you have to believe" scenario can be part of that, but it's hazardous if you bully others to make it your most important or even only one. Managing risks involves exploring concrete reasons why you may fail to achieve your existing objectives. Constructing a forecast that lets you meet them and then working backwards to deduce exactly why your assumptions need to be "too rosy" to achieve that can be extremely helpful. Of course, merely forcing your expert collaborators to feed you back a narrative that falls in line with your marketing message is neither.
Principal, Balanced Risk Strategies, Ltd.