A while back, I did an interview for Will Bachman's Umbrex podcast, "How to survive and thrive as an independent professional". He recently, and kindly, posted a transcript, which includes the transcriber's well-intentioned but hilarious mishearing. My book project, "Be Your Own Risk Manager", got turned into "Veer On, Risk Manager!" -- which some would say is a pretty accurate verb for risk management, especially around the 2008/2009 crisis. Cynics would say even now! Full transcript below.
The request was innocuous and not atypical. Can you help our company develop risk reporting, and put in place a "small risk function" to drive it? Sure, mature risk organizations frame their raison d'être more broadly, but many successful risk programs have as their genesis a request for better risk reporting.
However, further detail (provided through an intermediary) was not encouraging. The risk function was to report organizationally to the head of public relations, so as to stay "on message" with stakeholders. Not exactly what a risk professional brought up on a diet of risk-management independence and three lines of defense is keen to hear. I nearly declined to initiate discussions, strongly suspecting this might be a perversion of risk management into risk whitewashing.
But, a couple of conversations later, we're reshaping it into something that does make sense. It's a company that has been stuck in deterministic, head-in-the-sand, don't-ask-don't-tell thinking. Stakeholders are getting restless, and when they don't get good answers to their risk-related questions, they supply their own, less-than-perfect ones. It's not yet clear whether there is a professional collaboration to be had, but we've progressed to a much more mutually satisfactory conversation about how to pragmatically start a top-management-and-stakeholders dialogue about risk, focused not on staying "on message" but on broadening the message. For me as a consultant, it's a good lesson in humility. Just because the initial framing of the issues rubbed uncomfortably against some core tenets of risk management as conventionally framed doesn't mean there isn't a meaningful opportunity for better engagement with risk and uncertainty.
[Another excerpt from my book. A bit unpolished, but I'm sharing it as-is since I've seen several companies fall into this trap recently. Footnotes have ended up pasted inline, marked with FN: before them]
In [a previous section, not included in this excerpt], we discussed the usefulness of considering a range of expert opinions — predictions, if you will — for important drivers of uncertainty in your business. We also discussed the benefits of bringing experts together, and engaging in a dialogue about the rationale behind their opinions. We even discussed formalized techniques where the dialogue is interspersed with iterative polling, to see whether the discussion increases consensus or hardens into concrete lack of consensus that can underpin useful alternate scenarios to consider.
Unfortunately, there is the old saying, “if all you’ve got is a hammer, everything looks like a nail”. Thoughtful executives who decide to bring probabilistic thinking to their decisionmaking routinely abuse expert polls like this, bootstrapping a probabilistic range in a flawed way.
Consider the example in the following table, showing the range of near-term Canadian-to-US-dollar exchange rate forecasts published by major Canadian banks in late February 2018. These numbers represent the expected (or in some cases, most likely) exchange rates from the banks’ models, which are often quite complex and based on various macroeconomic, financial (e.g. interest rate), and momentum/sentiment-driven indicators the banks monitor or predict as well.
An executive needing to stick a base-case assumption into their short-term financial plan could do a lot worse than putting in the average or median of these figures, or even just consistently using the prediction of one bank they have chosen, without cherry-picking. But suppose they have now “got religion” about probabilistic thinking, and want not just a single point-estimate base case prediction, but a probability distribution, or at least error bars on the base-case estimate. They have a hammer - probabilistic thinking - and see 6 nails: 6 bank estimates.
It seems beautifully set up to do something like the following: throw out the extremes (the 1.21 and one of the 1.30s for Q2) as potential outliers and create a range from the rest. “The dollar will most likely be between 1.22 and 1.30”. Or do something mathematically fancier, like calculating quartiles [FN: In this case, this yields roughly the same range. By the way, there are different algorithms for calculating quartiles and other percentiles; if you really want to do this, you should for instance read up on the difference between Excel's QUARTILE.INC and QUARTILE.EXC functions. But in this instance, the issue lies elsewhere.] from the “sample” and using them as whiskers around the median: “P25/median/P75 = 1.22/1.26/1.30”.
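To make the mechanics concrete, here is a minimal sketch of that calculation. The forecast numbers below are hypothetical stand-ins (the actual table is not reproduced in this excerpt), chosen to be roughly consistent with the figures quoted above; numpy's default percentile method corresponds to Excel's QUARTILE.INC.

```python
import numpy as np

# Hypothetical bank point forecasts (CAD per USD) -- stand-ins roughly consistent
# with the ranges quoted in the text; the actual table is not reproduced here.
forecasts = np.array([1.21, 1.22, 1.25, 1.27, 1.30, 1.30])

# The "alchemy" described above, made concrete: treat the six expected-value
# forecasts as if they were a sample from the distribution of outcomes.
trimmed = np.sort(forecasts)[1:-1]                      # throw out the extremes
p25, p50, p75 = np.percentile(forecasts, [25, 50, 75])  # default method matches QUARTILE.INC

print(f"Trimmed range: {trimmed.min():.2f} to {trimmed.max():.2f}")
print(f"P25 / median / P75: {p25:.3f} / {p50:.3f} / {p75:.3f}")
```

Mechanically this works; the problem, as the next paragraph argues, lies in what the inputs actually are.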
This is, however, alchemy. It looks impressive, but is meaningless. Each bank’s estimate is itself the expected value of that bank’s implicit probability distribution for the outcome. They may not have actually calculated it probabilistically, or even think of it that way, but implicit in the prediction is a sense that this is the “best single guess” that balances out reasons it might be lower or higher. That point estimate systematically averages away any thinking they may have done about plausible ranges of outcomes.
If you’re not persuaded, consider this thought experiment. Suppose there were only one bank, which as a result of its deep internal analysis decided the dollar was exactly equally likely to end up at 1.0 or 1.5, precisely. If asked for a single point prediction, it would be quite reasonable for the bank to say “1.25”, the average. The alchemy described above would take that one data point as gospel, and say the dollar will be worth exactly 1.25, when in fact the analysis would actually be saying 1.25 has zero likelihood - it’s either much lower or much higher. This example is extreme, but smaller-scale versions of that type of thinking underpin each bank’s prediction.
So - what can you do?
First, looking at the range of predictions and using heuristics like throwing away the extremes and seeing what is left is a good *qualitative* indicator of the level of confidence you should have in any prediction. As it happens, the banks’ average predictions usually end up fairly close to each other, within a couple of cents.[FN: The fact that at this point in time they were not has hit the popular press, for instance https://www.theglobeandmail.com/report-on-business/top-business-stories/crazy-as-a-loon-the-chasm-in-forecasts-for-the-canadiandollar/article38019810/] The fact that the range is now 8+ cents is itself indicative of a greater degree of uncertainty.
Second, if you can, go to the source (or to other sources) to get a true range of predictions, ideally with scenarios and rationales. For a Canadian company at this time, it is much more important to understand *how low or high the Canadian dollar could plausibly go*, based on how different factors evolve. If you want, you can use sampling on scenarios to give you a range, in the manner discussed in Section [X.X]. As mentioned there, there are all sorts of biases associated with the selection and weighting of the scenarios, but at least you are not falling into the trap of narrowing the range of uncertainty just because you don’t see it.
Third, look for something other than nails, and bring out your screwdriver or glue or whatnot. For instance, looking at historical US/Canadian dollar exchange rates, it turns out that about 20% of the time, the rate changes by more than +-5% over a 3-month period. And on average every second year, the rate moves by about 10% over some 3-month period. Some version of this type of analysis is probably more useful for putting semi-probabilistic error bars around any chosen single prediction than alchemy on the range of the banks’ expected-value predictions.
[FN: Warning: as discussed in [xxx], a bug as well as a feature is that the outcomes of such an analysis depend on the choices you made in structuring it. In this instance, it is based on looking at the ratio of month-end exchange rates to the rate 3 months previous, over the past 10 years. And the 20% figure refers to 10% of the time having a decrease of 5% or more, and 10% of the time an increase of 5% or more. Whether this is the right framing to use depends on the business problem it is being applied to. For instance, the ideal analysis is different if the goal is to estimate how much quarterly financial results will be influenced by (unhedged) currency exposures integral to the business, compared to stress testing how much could be gained or lost on a single contract that generates a revenue-cost currency mismatch open over a period of 3 months. That is outside the scope of this book.]
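For concreteness, here is a minimal sketch of the type of historical analysis described in the third point above, assuming you have a series of month-end USD/CAD rates to hand. The file name and column name are placeholders of mine; any monthly series spanning roughly ten years will do.

```python
import pandas as pd

# Hypothetical input: ~10 years of month-end USD/CAD exchange rates with a date
# index and a single "rate" column. The file name is a placeholder.
rates = pd.read_csv("usdcad_month_end.csv", index_col=0, parse_dates=True)["rate"]

# Ratio of each month-end rate to the rate 3 months earlier, as a percentage change.
chg_3m = (rates / rates.shift(3) - 1.0).dropna()

# How often does the rate move by more than +/-5% over a 3-month period?
freq_large_move = (chg_3m.abs() > 0.05).mean()

# Rough semi-probabilistic error bars: e.g. the 10th/90th percentiles of 3-month changes.
p10, p90 = chg_3m.quantile([0.10, 0.90])

print(f"Share of 3-month windows with a move beyond +/-5%: {freq_large_move:.0%}")
print(f"3-month change, 10th to 90th percentile: {p10:+.1%} to {p90:+.1%}")
```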
I am working now on a few situations where the benefits (and challenges) of using algorithms to automate decisions have come up, e.g. monitoring/escalation, university admissions, credit, etc. So I was intrigued by an article in last month's HBR (hbr.org/2017/04/creating-simple-rules-for-complex-decisions) on scoring systems used by judges.
Human beings have all sorts of biases when making decisions under uncertainty (which is what this is). And, left to their own devices, they exhibit tremendous variability of outcomes not explainable by any evidence provided ("hunch" is highly person- and moment-dependent). So there is a lot of attraction to a big-data solution, or even a small & simple one like the checklist in the article. But all such algorithms can only be calibrated inside their comfort zone, whatever it is. Within that zone, it makes sense that they will often improve outcomes - and, by the way, consistency.
But not enough thought is put into determining that comfort zone, and into making the algorithms self-aware enough to escalate when they switch from interpolation within their calibrated zone to extrapolation outside it. In particular, don't trust any "simple" scorecard that is silently structurally linear - for instance, anything where you just "add up" points from different questions or categories. The world is inherently nonlinear. And a lack of engagement with where linearity is a good surface of best fit (which is what such an algorithm provides) means a lack of escalation.
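To illustrate what I mean, here is a minimal sketch of the escalation idea: a plain additive scorecard, plus a guard that flags any case whose inputs fall outside the range the scorecard was calibrated on. The feature names, weights, and ranges are hypothetical, purely for illustration.

```python
# Hypothetical weights for a structurally linear, "add up the points" scorecard.
SCORE_WEIGHTS = {"prior_incidents": 3.0, "age": -0.1, "years_employed": -0.5}

# Input ranges observed in the calibration data -- the "comfort zone".
CALIBRATION_RANGES = {
    "prior_incidents": (0, 5),
    "age": (18, 70),
    "years_employed": (0, 40),
}

def score_case(case: dict) -> tuple[float, bool]:
    """Return (score, needs_escalation) for a single case.

    The score is a weighted sum -- exactly the kind of silently linear rule the
    text warns about. The escalation flag is the usually-missing piece: it
    signals when the rule is extrapolating rather than interpolating.
    """
    score = sum(weight * case[name] for name, weight in SCORE_WEIGHTS.items())
    extrapolating = any(
        not (lo <= case[name] <= hi) for name, (lo, hi) in CALIBRATION_RANGES.items()
    )
    return score, extrapolating

# A case inside the comfort zone is scored normally; one outside it gets flagged
# for human review rather than being silently trusted to the linear rule.
print(score_case({"prior_incidents": 1, "age": 35, "years_employed": 10}))
print(score_case({"prior_incidents": 12, "age": 35, "years_employed": 10}))
```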
Since I consult both in risk management and in decisionmaking under uncertainty, I often (including this week!) get asked what the difference between the two terms is. It's not that easy to answer without resorting to a "we're gonna define it this way, and damn the consequences if other people use the term to mean something a bit different" brute-force solution.
Here's an excerpt (from the book I'm writing) on how I see the difference. Basically, it's not about "when is an uncertainty a risk" and more about "where are we in the decisionmaking process" and "is there an expected base case future trajectory (and, in particular, a set of objectives)". Warning: 1400 words.
Comments welcome. We need to be pragmatic rather than prescriptive about this.
As you and your friends and colleagues digest the election news, are you thinking through alternative scenarios of what the Trump victory *might* mean for the U.S., for you, for your company, for the world? Or are you merely thinking of what it *most likely will* mean, and adjusting your single, baseline scenario based on the most persuasive random information coming your way?
Most executives I know presumed Trump wouldn't win. Nevertheless, optimal risk management or management-under-uncertainty should have made all of us develop and continually adjust multiple possible scenarios. Not many did. If you didn't have multiple election outcome scenarios 2 months ago, did the narrowing of the odds in the past few weeks prompt you to start thinking in that direction? Did you take any anticipatory actions to prepare for whichever scenario was not your expected or preferred one?
More importantly, are you getting away from baseline-only thinking now? There are multiple dimensions against which to tease apart what will happen going forward, for instance:
There is much angst about the anti-elite masses making "crazy, irrational, uninformed" choices such as Brexit or supporting Trump. It's not actually irrational.
In some risk modeling work I do, you calculate the likely range (actually the probability distribution) of a company's financial results going forward and compare it to the bare minimum the company actually needs to achieve to survive. The smaller that safety margin, the less risk the company can take if failure is to remain tolerably unlikely. Paradoxically, at a certain point it makes sense to take *more* risk: when the baseline outcome is actually below the bare minimum needed, what in (American) football is called a Hail Mary pass makes rational sense. A (say) 25% chance of success is better than guaranteed (continued) failure.
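A minimal simulation sketch of that logic, with made-up numbers of my own: once the expected result sits below the survival threshold, the riskier strategy can have a markedly higher chance of survival even with a worse expected outcome.

```python
import numpy as np

# Illustrative numbers only: a company whose expected result sits below the bare
# minimum it needs to survive. Compare a "safe" strategy with a riskier one.
rng = np.random.default_rng(0)
survival_minimum = 100.0

# Safe strategy: expected result 90, low volatility -> failure is nearly certain.
safe = rng.normal(loc=90, scale=5, size=100_000)

# Hail Mary: an even lower expected result, but much higher volatility.
hail_mary = rng.normal(loc=85, scale=30, size=100_000)

print(f"P(survive | safe):      {(safe >= survival_minimum).mean():.0%}")
print(f"P(survive | Hail Mary): {(hail_mary >= survival_minimum).mean():.0%}")
```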
The same is true for the sizable economically-dispossessed, security-concerned, and angry chunk of the population today, whether UK, US, or elsewhere. The current trendline is unacceptable to them, so any alternative is better. A high-risk "crazy, uninformed" one is quite rational -- especially given the paucity of choice.
The establishment strategy of trying to starve the disruptive alternative of its support base merely by highlighting its "craziness" will have limited effectiveness, especially over the long term. What's needed is a better alternative whose baseline outcome is more broadly embraced, reducing the attractiveness of a blow-it-all-up Hail Mary.
How good really is your company's risk management? The Brexit situation provides a good litmus test.
The Leave win was surprising, but with polls running close to 50-50 in recent weeks, and information markets pegging the Brexit likelihood at over 20%, a company with prudent and effective strategic risk management cannot say it was "shocked", as in this article from today's WSJ.
I know two companies that anticipated the possibility of Brexit in their last scenario planning or strategic risk assessment exercises. One was a UK-based company, the other a North American one with low direct UK exposure, but correctly fearing a Brexit win would roil financial markets much more broadly. Did your company do something similar? Are you now fleshing out a couple of different possible Brexit evolution narratives going forward and what they might mean for your company, including your value chain partners, in the next few years? Or did your company just assume and trust it "wouldn't happen" and you're now scrambling to think it through?
More broadly, are you considering a longer-term "End of the Globalization Era" doomsday scenario? Something that covers a fundamental reshaping of the EU, systematically increased protectionism in the U.S., and a global re-emergence of trade barriers? How it affects not only your direct economic drivers, but those of your customers, your suppliers, your competitors, and other stakeholders? You may not like that scenario, you may not even truly believe in it. But it's in the realm of possibility, and if your company is doing good strategic risk management, in the current environment it should be front and centre in your risk management and strategic planning. What steps should you take now? What plans should you start preparing? What instabilities will harm you and which ones may present an opportunity?
The risk management world is full of checklists, frameworks, and diagnostics on the quality of ERM or risk management more broadly. Sometimes a simple litmus test provided by fate is equally powerful.
I'm not an actuary, but I do occasionally work with institutional investors, where the tradeoff between market risk (volatility as well as systemic macro risk) and runout/longevity risk is important, and has a significant impact on optimal portfolio construction and risk management in closely held ownership stakes.
Approaching this from the side of personal financial planning, I've appreciated the writing of Prof. Moshe Milevsky in Toronto ("Are You a Stock or a Bond?"), including the thinking in his book "Pensionize Your Nest Egg" with Alexandra Macqueen. The shift from DB to DC pensions is opening up a longevity-risk can of worms that many people are not sufficiently concerned about.
Over at the blog of my old colleague (from 25 years ago!) Michael James, I've done some quick analysis that shows, very roughly (with crude assumptions), that for a typical North American retiree, longevity risk protection can be worth about as much as an extra 3% per year of investment returns. That value is not insignificant given reasonable after-tax, real (post-inflation) portfolio return expectations -- and it is ripe for capture (and often in fact captured in large part) by higher and less transparent product fee levels.
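A toy version of that kind of equivalence is sketched below: how much extra annual return would a self-managed retiree need to match the payout of a pooled (annuitized) solution? The horizons and the assumed real return are illustrative assumptions of mine, not the assumptions behind the analysis referenced above; the point is only the shape of the calculation.

```python
# Toy equivalence calculation: extra return a self-managed retiree would need to
# match a pooled payout. All numbers below are illustrative assumptions.

def level_payout(principal: float, rate: float, years: int) -> float:
    """Level annual payout that exhausts `principal` over `years` at `rate`."""
    return principal * rate / (1 - (1 + rate) ** -years)

principal = 1.0          # normalize savings to 1
real_return = 0.02       # assumed real, after-tax portfolio return
expected_horizon = 20    # pooled solution funds roughly life expectancy
safe_horizon = 30        # self-manager must plan for living well past it

pooled = level_payout(principal, real_return, expected_horizon)
solo = level_payout(principal, real_return, safe_horizon)

# Bisect for the return the self-manager would need to match the pooled payout.
lo, hi = real_return, 0.15
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if level_payout(principal, mid, safe_horizon) < pooled else (lo, mid)

print(f"Pooled payout: {pooled:.1%} of savings per year; solo: {solo:.1%}")
print(f"Return needed to match the pooled payout: {lo:.1%} "
      f"(vs. {real_return:.0%} assumed), i.e. roughly {lo - real_return:.1%}/yr of value")
```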
Principal, Balanced Risk Strategies, Ltd.