I'm a member of Umbrex, a network of independent consultants with backgrounds in the top strategic consulting firms. While a number of companies out there uberize the consulting model (i.e., do the client-development front end and the billing/support back end, and subcontract independent consultants to do the actual work), Umbrex is more of a consultants' pub, or really a members-only club. We may form joint teams when an opportunity warrants, but above all we exchange ideas and refer our colleagues to trusted experts or service providers when needed. From a client service perspective, this erodes one of the biggest structural advantages of working in a large consulting firm: the ability to quickly find someone who knows about the Widget market in Micronesia when that's what unexpectedly bubbles up in a project.
We also support and challenge each other on the big picture as well as the nuts and bolts of running an independent consulting practice. A lot of this is on a private mailing list, but the founder of Umbrex, Will Bachman, recently launched a publicly-accessible podcast, and this week I'm his guest, talking both about what I do and how I work.
For anyone else consulting independently, or interested in what it's like, Will's interview with me is #19 of a growing library of episodes, all of which can be listened to not only on the website but also as a regular podcast through iTunes or other channels.
Belief is stronger than analysis. This is well known to those who study cognitive biases (Kahneman, Slovic, Ariely, etc.), but it is interesting to see it confirmed in an experiment involving politics and math: moderately challenging calculations are more likely to be "done wrong" if the results go against the subject's political beliefs than if the results are politically neutral. And the more "numerate" the subject, the stronger the effect.
I think this is an important part of the answer for those of us who look at the political situation (anywhere...) and are amazed at how "sane" people robustly maintain their belief systems, impervious to "evidence", and at how patently flimsy echo-chamber talking points serve to anchor those beliefs.
The underlying research is at http://static1.1.sqspcdn.com/…/138…/wp_draft_1.5_9_14_13.pdf The math task was Bayesian inversion of a contingency table, something humans are bad at (e.g., in evaluating health treatments). I'm not 100% convinced the hypothesis that "people do math wrong" (a failure within Kahneman's thinking-slow system) is proven over the alternative hypothesis that "people don't bother to engage thinking-slow if political beliefs provide a salient thinking-fast answer".
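To make concrete the kind of calculation involved, here is a minimal sketch of Bayesian inversion in the classic health-treatment setting. The function name and all the numbers are my own illustrative assumptions, not taken from the study:

```python
def bayes_invert(p_pos_given_disease, p_pos_given_healthy, p_disease):
    """P(disease | positive test) via Bayes' rule, inverting the
    conditional probabilities you'd read off a contingency table."""
    p_pos = (p_pos_given_disease * p_disease
             + p_pos_given_healthy * (1 - p_disease))
    return p_pos_given_disease * p_disease / p_pos

# A test that is 90% sensitive and 90% specific, for a condition
# affecting 1% of the population:
print(bayes_invert(0.90, 0.10, 0.01))  # ≈ 0.083
```

The counterintuitive part, and the part people reliably get wrong, is that even a "90% accurate" test leaves the probability of disease given a positive result below 10% when the base rate is low: thinking-fast latches onto the 90%, and thinking-slow has to be engaged to do the inversion.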
I am working now on a few situations where the benefits (and challenges) of using algorithms to automate decisions have come up, e.g., monitoring/escalation, university admissions, credit, etc. So I was intrigued by an article in last month's HBR (hbr.org/2017/04/creating-simple-rules-for-complex-decisions) on scoring systems used by judges.
Human beings have all sorts of biases when making decisions under uncertainty (which is what this is). And, left to their own devices, they exhibit tremendous variability of outcomes not explainable by any evidence provided (a "hunch" is highly person- and moment-dependent). So there is a lot of attraction to a big-data solution, or even a small and simple one like the checklist in the article. But any such algorithm can only be calibrated inside its comfort zone, whatever that is. Within that zone, it makes sense that it will often improve outcomes, and consistency as well.
But not enough thought is put into determining that comfort zone, and into making the algorithms self-aware enough to escalate when they switch from interpolation within their calibrated zone to extrapolation outside it. In particular, don't trust any "simple" scorecard that is silently structurally linear, for instance anything where you just "add up" points from different questions or categories. The world is inherently nonlinear. And lack of engagement with where linearity is a good surface of best fit (which is what such an algorithm is fitting) means lack of escalation.
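A minimal sketch of what such self-awareness could look like: an additive scorecard that also checks whether each input lies within the range seen in its calibration data, and flags for human review whenever it would be extrapolating. The weights, field names, and ranges are entirely made up for illustration:

```python
# Hypothetical additive scorecard with a crude "comfort zone" check.
WEIGHTS = {"age": 0.5, "prior_incidents": 2.0, "tenure_years": -0.25}
CALIBRATION_RANGES = {  # min/max observed in the calibration data
    "age": (18, 70), "prior_incidents": (0, 5), "tenure_years": (0, 30),
}

def score(applicant):
    """Return (score, needs_human_review)."""
    # Escalate whenever any input falls outside the calibrated zone,
    # i.e. whenever the linear fit would be extrapolating.
    extrapolating = any(
        not (lo <= applicant[k] <= hi)
        for k, (lo, hi) in CALIBRATION_RANGES.items()
    )
    total = sum(w * applicant[k] for k, w in WEIGHTS.items())
    return total, extrapolating

print(score({"age": 40, "prior_incidents": 1, "tenure_years": 10}))
# (19.5, False): inside the comfort zone, the score can be used
print(score({"age": 85, "prior_incidents": 9, "tenure_years": 2}))
# (60.0, True): extrapolating -- escalate to a human
```

The per-field range check is deliberately crude; a real implementation would also flag combinations of inputs that were never jointly observed, since a point can be inside every marginal range yet far outside the calibration data.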
An article in the WSJ (quoted also on vox.com) reports that the Trump transition team pressured government staff to generate unrealistically rosy economic forecasts. "[W]hat’s unusual about the administration’s forecasts isn’t just their relative optimism but also the process by which they were derived. [...] they weren’t derived by any process at all. Instead of letting economists build a forecast, Trump’s budget was put together with transition officials telling the CEA staff the growth targets that their budget would produce and asking them to backfill other estimates off those figures.”
Setting aside questions of delimitation of responsibility and effective expert collaboration, this also goes to the heart of the difference between uncertainty and risk. Navigating uncertainty involves thoughtfully exploring the range of paths the future may follow, and defining and working towards reasonable objectives consistent with that range. A "what would you have to believe" scenario can be part of that, but it's hazardous if you bully others into making it your most important or even only one. Managing risk involves exploring concrete reasons why you may fail to achieve your existing objectives. Constructing a forecast that lets you meet them, and then working backwards to see exactly how "rosy" your assumptions need to be to achieve it, can be extremely helpful. Of course, merely forcing your expert collaborators to feed you back a narrative that falls in line with your marketing message is neither.
Since I consult both in risk management and in decisionmaking under uncertainty, I often (including this week!) get asked what the difference between the two terms is. It's not that easy to answer without resorting to the brute-force solution of "we're going to define it this way, and damn the consequences if other people use the terms to mean something a bit different".
Here's an excerpt (from the book I'm writing) on how I see the difference. Basically, it's less about "when is an uncertainty a risk" and more about "where are we in the decisionmaking process" and "is there an expected base-case future trajectory (and, in particular, a set of objectives)". Warning: 1400 words.
Comments welcome. We need to be pragmatic rather than prescriptive about this.
Balanced Risk Strategies has changed web hosting providers. Content, including blog postings, has been fully copied over. Unfortunately, blog post comments (there were not many...) have not made the transition.
The good news is that the new provider has a much better blog platform, including commenting and spam protection.
If by chance you subscribed to this blog via an RSS feed, you will have to change it -- see the RSS button at right. Apologies for the inconvenience.
As you and your friends and colleagues digest the election news, are you thinking alternative scenarios of what the Trump victory *might* mean for the U.S., for you, for your company, for the world? Or are you merely thinking of what it *most likely will* mean, and adjusting your single, baseline scenario based on the most persuasive random information coming your way?
Most executives I know presumed Trump wouldn't win. Nevertheless, optimal risk management, or management under uncertainty, should have led all of us to develop and continually adjust multiple possible scenarios. Not many did. If you didn't have multiple election-outcome scenarios two months ago, did the narrowing of the odds in the past few weeks prompt you to start thinking in that direction? Did you take any anticipatory actions to prepare for whichever scenario was not your expected or preferred one?
More importantly, are you getting away from baseline-only thinking now? There are multiple dimensions against which to tease apart what will happen going forward, for instance:
There is much angst about the anti-elite masses making "crazy, irrational, uninformed" choices such as Brexit or supporting Trump. It's not actually irrational.
In some risk modeling work I do, you calculate the likely range (actually the probability distribution) of a company's financial results going forward and compare it to the bare minimum the company actually needs to achieve to survive. The smaller that safety margin, the less risk the company can take if failure is to remain tolerably unlikely. Paradoxically, at a certain point it makes sense to take *more* risk: when the baseline outcome is actually below the bare minimum needed, what in (American) football is called a Hail Mary pass makes rational sense. A (say) 25% chance of success is better than guaranteed (continued) failure.
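The Hail Mary logic can be sketched in a few lines. The threshold and the two outcome distributions below are made-up numbers, chosen only to show why survival probability, not expected value, drives the choice:

```python
# Toy illustration: survival requires a result of at least 100 (arbitrary units).
SURVIVAL_THRESHOLD = 100

# Safe strategy: tight distribution around a baseline of 80 -- below the bar.
safe_outcomes = {70: 0.25, 80: 0.50, 90: 0.25}
# Risky strategy: usually worse, but a 25% shot at clearing the bar.
risky_outcomes = {40: 0.75, 130: 0.25}

def p_survive(outcomes):
    """Probability that the result meets or exceeds the survival threshold."""
    return sum(p for value, p in outcomes.items() if value >= SURVIVAL_THRESHOLD)

print(p_survive(safe_outcomes))   # 0.0  -- guaranteed (continued) failure
print(p_survive(risky_outcomes))  # 0.25 -- the Hail Mary is rational
```

Note that the safe strategy has the higher expected value (80 versus 62.5 here), yet zero probability of survival; once the baseline sits below the bare minimum, maximizing expected value and maximizing survival odds point in opposite directions.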
The same is true for the sizable economically-dispossessed, security-concerned, and angry chunk of the population today, whether UK, US, or elsewhere. The current trendline is unacceptable to them, so any alternative is better. A high-risk "crazy, uninformed" one is quite rational -- especially given the paucity of choice.
The establishment strategy of trying to starve the disruptive alternative of its support base merely by highlighting its "craziness" will have limited effectiveness, especially over the long term. What's needed is a better alternative, one whose baseline outcome is more broadly embraced, to reduce the attractiveness of a blow-it-all-up Hail Mary.
How good really is your company's risk management? The Brexit situation provides a good litmus test.
The Leave win was surprising, but with polls running close to 50-50 in recent weeks, and information markets pegging the Brexit likelihood at over 20%, a company with prudent and effective strategic risk management cannot say it was "shocked", as in this article from today's WSJ.
I know of two companies that anticipated the possibility of Brexit in their last scenario planning or strategic risk assessment exercises. One was a UK-based company; the other was a North American one with low direct UK exposure, but correctly fearing a Brexit win would roil financial markets much more broadly. Did your company do something similar? Are you now fleshing out a couple of different possible Brexit evolution narratives going forward and what they might mean for your company, including your value chain partners, in the next few years? Or did your company just assume and trust it "wouldn't happen", leaving you scrambling to think it through now?
More broadly, are you considering a longer-term "End of the Globalization Era" doomsday scenario? One that covers a fundamental reshaping of the EU, systematically increased protectionism in the U.S., and a global re-emergence of trade barriers? How would it affect not only your direct economic drivers, but those of your customers, your suppliers, your competitors, and other stakeholders? You may not like that scenario; you may not even truly believe in it. But it's in the realm of possibility, and if your company is doing good strategic risk management, in the current environment it should be front and centre in your risk management and strategic planning. What steps should you take now? What plans should you start preparing? Which instabilities will harm you, and which ones may present an opportunity?
The risk management world is full of checklists, frameworks, and diagnostics on the quality of ERM or risk management more broadly. Sometimes a simple litmus test provided by fate is equally powerful.
I'm not an actuary, but do occasionally work with institutional investors, where the tradeoff between market risk (volatility as well as systemic macro risk) and runout/longevity risk is important, and has significant impact on optimal portfolio construction and risk management in closely held ownership stakes.
Approaching this from the side of personal financial planning, I've appreciated the writing of Prof. Moshe Milevsky in Toronto ("Are you a stock or a bond?"), including the thinking in his book "Pensionize your nest egg" with Alexandra Macqueen. The shift from defined-benefit (DB) to defined-contribution (DC) pensions is opening up a longevity risk can of worms many people are not sufficiently concerned about.
Over at the blog of my old colleague (from 25 years ago!) Michael James, I've done some quick analysis showing, very roughly (with crude assumptions), that for a typical North American retiree, longevity risk protection can be worth about as much as an extra 3% per year of investment returns. That value is far from insignificant given reasonable after-tax, real (post-inflation) portfolio return expectations -- and it is ripe for capture (and often in fact captured in large part) by higher and less transparent product fee levels.
Principal, Balanced Risk Strategies, Ltd.