A throwaway note on learning myopia and strategic fear

2022-08-02 Fiction

Your horizon of worry should be dictated by the speed with which you can impact your environment; the bigger the change needed to avoid something, the earlier it becomes too late. This much is obvious, but besides the all-too-familiar short-term bias of people and organizations, there's a separate issue: most organizations are really bad at estimating how much time it takes to change their environment or themselves.

This is partly because long-term prediction is exponentially harder in this sort of divergent system (in convergent systems, by contrast, the long term is easier to predict, but that's another story). Non-linearity not being a concept managerial systems are very comfortable with, a reasonable ability to estimate the next task gets extrapolated into a presumed ability to estimate the whole project, and, well, no.

There's a deeper reason for this, which is simply the fact that any organization's decision-making system tends to be optimized for a certain frequency - certain decisions are made quarter to quarter, others weekly, and so on. Learning tends to happen from cycle to cycle: to a first approximation, every meeting looks at what went right or wrong since the last meeting, adjusts the overall plan, and moves forward. This optimizes learning at that time scale, but, precisely because of the non-linear nature of any large project, the organization basically learns to skillfully navigate a road that's leading nowhere.

Specific tools can help here, but the key missing piece is an architectural one. Most organizations lack formal meta-learning: while there are often almost more people making decisions than implementing them, there's usually nobody tasked with evaluating, diagnosing, and improving those decision-making processes. Adding metrics, dashboards, or other "data-driven decision-making" tools can help once issues have been identified, but there's seldom any data gathering on decision-making quality across the organization, much less any formal analysis of it. In the absence of this, tools and initiatives are at best shots in the dark - not in itself the most inspiring of decision-making processes.
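
To make that concrete, here is a minimal sketch of what gathering decision-quality data could look like. The schema, field names, and example are hypothetical illustrations of the idea, not a description of any existing tool:

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    # Hypothetical schema: log each significant decision at the moment
    # it's made, together with the explicit, falsifiable prediction it
    # rests on, so it can be scored against reality later.
    @dataclass
    class DecisionRecord:
        decision: str               # what was decided
        rationale: str              # why, as stated at the time
        prediction: str             # the expected, checkable outcome
        review_at: datetime         # when to check the prediction
        outcome: str | None = None  # filled in at review time
        made_at: datetime = field(default_factory=datetime.now)

    log: list[DecisionRecord] = []

    log.append(DecisionRecord(
        decision="Ship feature X before refactoring the pipeline",
        rationale="The refactor is estimated at two sprints; X is contractual",
        prediction="X ships by end of Q3 with no pipeline incidents",
        review_at=datetime.now() + timedelta(days=90),
    ))

The point isn't the schema but the discipline: without the prediction recorded at decision time, there's nothing to analyze later.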

Part of this gap is a matter of training and tools, but part of it is, as I said, architectural. No decision-making group or person should be expected to monitor, analyze, and improve their processes on their own. It's not impossible — for example, nobody becomes a good chess player without studying their own weaknesses systematically and ruthlessly — but it's not quite how most people are trained or evaluated. We are simply not given (or don't give ourselves) the budget of time, tools, and trust that would make this sort of meta-learning process possible. Perhaps the biggest benefit of an AI-driven decision-making infrastructure is simply that it is by nature an extremely well-logged decision-making system: it doesn't need to be better than humans, and might in fact be, at least at the beginning, a wrapper around mostly-human decision making, but the characteristics and even the limitations of AIs make them much more amenable to analysis and improvement.
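
A hedged sketch of what "a wrapper around mostly-human decision making" might mean in practice: the human (or model) still makes the call, but the infrastructure leaves an analyzable trace of every invocation. All names here are illustrative assumptions:

    import functools
    import json
    from datetime import datetime, timezone

    def logged_decision(func):
        """Wrap any decision function - human-driven or AI-driven -
        so every call records its inputs, output, and timestamp."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            print(json.dumps({  # stand-in for a real log sink
                "decision_fn": func.__name__,
                "inputs": {"args": repr(args), "kwargs": repr(kwargs)},
                "output": repr(result),
                "at": datetime.now(timezone.utc).isoformat(),
            }))
            return result
        return wrapper

    @logged_decision
    def prioritize_backlog(items):
        # Hypothetical stand-in: today this might be a human judgment
        # call entered through some interface; the wrapper doesn't care.
        return sorted(items, key=len)

    prioritize_backlog(["migrate the database", "fix login bug", "renew certs"])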

The first step, in any case, is organizational, not technological. If a process under the guidance of a group — either a development team or the C-suite — isn't performing as you think it should, the first question to ask is whose job it is to record and analyze their collective thought processes. If the answer is "no-one" or "themselves, as part of their regular job," then you can't expect them to get better very quickly. Either give them that job explicitly and accept the trade-off — a much lower output of hopefully better decisions — or give somebody else the responsibility (including access to the data and information, and protection when they, unavoidably, piss somebody off).

And in the meantime, look at everything currently on the horizon and consider being more worried than you are now. We're all slower than we believe, so it's always later than you think.
