I'm making two upcoming conference presentations, both keynote speeches; one is followed by my moderating an executive panel. I hope to be able to post the material soon afterwards; contact me if you're interested earlier.
My former colleagues at McKinsey have just published an article on stress-testing for nonfinancial companies. Unsurprisingly, I very much agree with them. Whether the goal is stress-testing (a specific risk-management centred viewpoint) or strategic planning, companies need to think more in scenarios to effectively get their arms around the uncertainty they need to navigate.
I've written about this on these blog pages, and in an article planned for a Canadian mainline business publication (where it ultimately ended up on the cutting-room floor after an internal conflict on the editorial board, in which I was collateral damage), in the context of the election of Mr Trump as president.
Ultimately, stress testing, scenario planning, and risk management live in the same ecosystem, just in different ecological niches. Ideally they work together, and which one is a priority to exercise for a given company, or provides the best path towards a more holistic approach, depends on the circumstances.
I'm a member of Umbrex, a network of independent consultants with backgrounds in the top strategy consulting firms. While there are a number of companies out there that uberize the consulting model (i.e., do the client-development front end and the billing/support back end, and subcontract independent consultants to do the actual work), Umbrex is more of a consultants' pub, or, more accurately, a members-only club. We may form joint teams when an opportunity warrants, but above all we exchange ideas, and refer our colleagues to trusted experts or service providers when needed. From a client-service perspective, it erodes one of the biggest structural advantages of working in a large consulting firm: the ability to quickly find someone who knows about the Widget market in Micronesia when that's what unexpectedly bubbled up in the project.
We also support and challenge each other on the big picture as well as the nuts and bolts of running an independent consulting practice. A lot of this is on a private mailing list, but the founder of Umbrex, Will Bachman, recently launched a publicly-accessible podcast, and this week I'm his guest, talking both about what I do and how I work.
For anyone else consulting independently, or interested in what it's like, Will's interview with me is #19 of a growing library of episodes, all of which can be listened to not only on the website but as a regular podcast through iTunes or other channels.
Belief is stronger than analysis. This is well known to those who study cognitive biases (Kahneman, Slovic, Ariely, etc.), but it's interesting to see it confirmed by an experiment involving politics and math: moderately challenging calculations are more likely to be "done wrong" if the results go against the subject's political beliefs than if the results are politically neutral. And the more "numerate" the subject, the stronger the effect.
I think this is an important part of the answer for those of us who look at the political situation (anywhere...) and are amazed at how "sane" people robustly maintain their belief systems impervious to "evidence", and at the importance of patently flimsy echo-chamber talking points in anchoring those beliefs.
The underlying research is at http://static1.1.sqspcdn.com/…/138…/wp_draft_1.5_9_14_13.pdf The math task was Bayesian inversion of a contingency table, something humans are notoriously bad at (e.g., with health treatments). I'm not 100% convinced the hypothesis that "people do math wrong" (a quality observation about Kahneman's thinking-slow system) is proven, versus the alternative hypothesis that "people don't bother to engage thinking-slow if political beliefs provide a salient thinking-fast answer".
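To make the task concrete, here's a minimal sketch of the kind of contingency-table inversion involved. The counts below are purely illustrative (not taken from the paper): the trap is that comparing raw counts (223 improved with treatment vs. 107 without) suggests the treatment works, while comparing the conditional rates shows the opposite.

```python
# Illustrative contingency-table inversion: does a treatment help?
# Counts are hypothetical, chosen so raw counts and rates point in
# opposite directions (the pattern such experiments exploit).

def improvement_rates(table):
    """Return P(improved | treated) and P(improved | untreated)
    from a dict keyed by (row, column) counts."""
    p_treated = table[("treated", "improved")] / (
        table[("treated", "improved")] + table[("treated", "worse")]
    )
    p_untreated = table[("untreated", "improved")] / (
        table[("untreated", "improved")] + table[("untreated", "worse")]
    )
    return p_treated, p_untreated

table = {
    ("treated", "improved"): 223, ("treated", "worse"): 75,
    ("untreated", "improved"): 107, ("untreated", "worse"): 21,
}

p_t, p_u = improvement_rates(table)
# Raw counts favour the treatment (223 > 107), but the rates
# (~0.75 vs ~0.84) show the untreated group actually did better.
```

The thinking-fast answer (compare the big numbers) and the thinking-slow answer (compare the rates) disagree, which is exactly what makes the task diagnostic.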
I'm working now on a few situations where the benefits (and challenges) of using algorithms to automate decisions have come up, e.g., monitoring/escalation, university admissions, credit. So I was intrigued by an article in last month's HBR (hbr.org/2017/04/creating-simple-rules-for-complex-decisions) on scoring systems used by judges.
Human beings have all sorts of biases when making decisions under uncertainty (which is what this is). And, left to their own devices, they exhibit tremendous variability of outcomes not explainable by any evidence provided (a "hunch" is highly person- and moment-dependent). So there is a lot of attraction to a big-data solution, or even a small and simple one like the checklist in the article. But all such algorithms can only be calibrated inside their comfort zone, whatever it is. Within that zone, it makes sense that they will often improve outcomes, and consistency as well.
But not enough thought is put into determining that comfort zone, and into making the algorithms self-aware enough to escalate when they switch from interpolation within their calibrated zone to extrapolation outside it. In particular, don't trust any "simple" scorecard that is silently structurally linear, for instance anything where you just "add up" points from different questions or categories. The world is inherently nonlinear. And a lack of engagement with where linearity is a good surface of best fit (which is what such an algorithm provides) means a lack of escalation.
An article in the WSJ (quoted also on vox.com) reports that the Trump transition team pressured government staff to generate unrealistically rosy economic forecasts. "[W]hat’s unusual about the administration’s forecasts isn’t just their relative optimism but also the process by which they were derived. [...] they weren’t derived by any process at all. Instead of letting economists build a forecast, Trump’s budget was put together with transition officials telling the CEA staff the growth targets that their budget would produce and asking them to backfill other estimates off those figures.”
Setting aside questions of delimitation of responsibility and effective expert collaboration, this also goes to the heart of the difference between uncertainty and risk. Navigating uncertainty involves thoughtfully exploring the range of paths the future may follow, and defining and working towards reasonable objectives consistent with that. A "what would you have to believe" scenario can be part of that, but it's hazardous if you bully others into making it your most important, or even only, one. Managing risks involves exploring concrete reasons why you may fail to achieve your existing objectives. Constructing a forecast that lets you meet them, and then working backwards to deduce exactly how "rosy" your assumptions have to be to get there, can be extremely helpful. Of course, merely forcing your expert collaborators to feed you back a narrative that falls in line with your marketing message is neither.
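The "what would you have to believe" mechanics can be as simple as inverting a growth formula. A sketch with made-up numbers: given a current value and a target, back out the compound growth rate you'd implicitly be assuming, then ask whether that rate is plausible.

```python
# "What would you have to believe": back out the implied compound annual
# growth rate (CAGR) from a target. Numbers are purely illustrative.

def implied_cagr(current, target, years):
    """Growth rate you'd have to believe in to reach `target`
    from `current` in `years` (compounded annually)."""
    return (target / current) ** (1.0 / years) - 1.0

# e.g. a budget claiming revenue grows from 100 to ~161 in 5 years
# is implicitly assuming roughly 10% annual growth.
rate = implied_cagr(100.0, 161.051, 5)
```

Used honestly, this surfaces the assumption for scrutiny; used the way the article describes, the rate is dictated first and the "analysis" is backfilled around it.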
Since I consult both in risk management and in decisionmaking under uncertainty, I often (including this week!) get asked what the difference between the two terms is. It's not that easy to answer without resorting to a brute-force solution: "we're gonna define it this way, and damn the consequences if other people use the term to mean something a bit different."
Here's an excerpt (from the book I'm writing) on how I see the difference. Basically, it's less about "when is an uncertainty a risk" and more about "where are we in the decisionmaking process" and "is there an expected base-case future trajectory (and, in particular, a set of objectives)". Warning: 1400 words.
Comments welcome. We need to be pragmatic rather than prescriptive about this.
Balanced Risk Strategies has changed web hosting providers. Content, including blog postings, has been fully copied over. Unfortunately, blog post comments (there were not many...) have not made the transition.
The good news is that the new provider has a much better blog platform, including commenting and spam protection.
If by chance you subscribed to this blog via an RSS feed, you will have to change it -- see the RSS button at right. Apologies for the inconvenience.
As you and your friends and colleagues digest the election news, are you thinking alternative scenarios of what the Trump victory *might* mean for the U.S., for you, for your company, for the world? Or are you merely thinking of what it *most likely will* mean, and adjusting your single, baseline scenario based on the most persuasive random information coming your way?
Most executives I know presumed Trump wouldn't win. Nevertheless, optimal risk management, or management under uncertainty, should have made all of us develop and continually adjust multiple possible scenarios. Not many did. If you didn't have multiple election-outcome scenarios two months ago, did the narrowing of the odds in the past few weeks prompt you to start thinking in that direction? Did you take any anticipatory actions to prepare for whichever scenario was not your expected or preferred one?
More importantly, are you getting away from baseline-only thinking now? There are multiple dimensions against which to tease apart what will happen going forward, for instance:
Principal, Balanced Risk Strategies, Ltd.