Q: What ongoing economic crisis?
A: Markets might or might not have crashed (depending on how long it takes me to post this), and macroeconomic signals remain "Nervous" rather than "Bleak," but: if you believe the rule of law, scientific capabilities, solid procedural and technological infrastructure, technical expertise, reliable data, and a fluid trade of goods, services, and knowledge are important economic assets — and if you don't then I'm not sure what you make of the last few centuries — then all of those key economic assets are being actively torched as you read this.
We are already poorer. It hasn't yet hit mainstream financial and economic metrics because we don't track assets like the sanity of the person in charge of US trade policy — not because it isn't a factor in the world economy, but because no influential enough institution has gotten around to acknowledging the need. A world in which there's a nonzero chance that an erratic mercantilist will take control of the Federal Reserve is already a world in trouble.
Q: Alright, so we're all poorer, fear will rise, demand will fall. It happens. Time to cut losses and retrench, right?
A: It would be, if retrenching worked; it won't, at least not for these assets. Expertise, information, and decision-making capabilities aren't fungible with other resources. An organization (or individual, or country) that "saves money" by investing less in its own thinking will make worse decisions, get worse outcomes, and suffer even more from an already bad situation. When you're lost in the woods you don't throw away your compass so you can carry less weight.
This doesn't mean retrenching in some areas won't be the right response (although it's the right response much less often than it appears). It does mean that without a clearer understanding of what's going on than the one the current default cognitive infrastructure provides, it's very unlikely you will retrench on the right fronts in the right way.
Plenty of organizations cutting costs by replacing expensive in-house expertise with commodity-grade buggy LLMs will become cautionary tales before too long. It's always a good rule in life to try to avoid becoming a cautionary tale.
Q: Then what?
A: It's not a twenty-slide consulting project, but here are some notes on how one goes about rebuilding an organization's mind:
Although there are more precise languages to speak about this, a good analogy is the computer network of a large organization that just discovered it's been hacked by an adversary intent on sabotage rather than destruction. Some or all of its information might have been leaked. Some or all of its data might be corrupted. Key systems might not work while pretending they do. And much of it might still be under somebody else's control.
It's tempting, in the abstract, to throw away everything and start from scratch; it's what you should do with something like a single phone or a laptop. But large organizations are too complex for this, and they cannot just stop everything while they rebuild themselves.
Remember that in this analogy the computer network isn't just your organization but the much much wider network of information and expertise it depends on, from news and technical papers to financial regulation, legal systems, and, yes, software infrastructures.
What security engineers usually do in these situations is establish or (re)build a clean enclave - a basic core of systems known to be safe, usually because they have been built from scratch. Using that as a bridgehead, they incrementally recover, clean, or rebuild different systems and services, prioritizing key functionalities first and making sure not to put in place the same vulnerabilities that made the attack possible in the first place. This is rarely a quick process, and it almost never results in an exact copy of the original system, but, carefully done, it can produce a more robust one.
So where to begin?
The key cognitive asset in any organization is almost always implicit and therefore vulnerable: the causal model sustaining the organization's ability to fulfill its purpose (which might be nothing more than "survive"). The term causal model carries a set of technical meanings but, formally or not, every organization acts as if it had a blueprint of the world that says this thing we want to happen is driven by these other things, which are driven by these other things. Some of those you might know precisely, some not at all; some you might have complete control over, others only a flimsy hope of nudging.
This causal model is what makes an organization work. When it's dysfunctional — not the explicit model, if it has one, but the one implied by its actions — then the organization will at best flail and at worst die.
Causal models can be dysfunctional in different ways. An incomplete list, particularly focused on 2025:
- The world changed and your model didn't.
- Somebody convinced or is trying to force you to use a different, broken model.
- Some of the information channels that feed into the model are broken or malicious.
- Some of the expertise nodes that guide how you interpret and use the model are broken or malicious.
The first step is to write down explicitly the causal model driving the organization's goals: the mechanisms of "this happens because of these other things." This can be done in more or less detail, using more or less sophisticated conceptual tools. The more detailed and complete the better, but even a few arrows on a whiteboard can be clarifying for a group that has never thought about it in that way before.
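As a purely illustrative sketch of what "writing it down" can mean in its most minimal form, here is a toy causal model as a plain data structure. Every node name and edge below is an invented placeholder, not a recommendation about what your model should contain:

```python
# A toy causal model: each outcome maps to the factors believed to drive it.
# All node names are hypothetical examples; a real model comes from the
# expert-consensus mapping described in the text, not from a template.
causal_model = {
    "revenue": ["customer_demand", "pricing"],
    "customer_demand": ["market_conditions", "brand_trust"],
    "brand_trust": ["product_quality", "public_discourse"],
    "product_quality": ["in_house_expertise"],
    "market_conditions": [],   # outside our control, still worth tracking
    "public_discourse": [],
    "in_house_expertise": [],
    "pricing": [],
}

def upstream(node, model):
    """Return every factor that ultimately drives `node`."""
    seen = set()
    stack = list(model.get(node, []))
    while stack:
        cause = stack.pop()
        if cause not in seen:
            seen.add(cause)
            stack.extend(model.get(cause, []))
    return seen

print(sorted(upstream("revenue", causal_model)))
```

Even this crude form forces the useful questions: which arrows are you sure of, which are guesses, and which nodes have no incoming arrows because they sit outside your control.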
The key point is that this isn't a map of how the organization works. It's a map of the external world, not of organizational practices. You can't assume the organization fits its environment; that's the reason you're doing this in the first place. Mapping the organization instead of the world outside it is like using a compromised computer system to verify the integrity of your network. So instead of looking at internal processes and data, go back to the expert consensus: look for people and sources who are authorities on your area of the world (many of whom might indeed be sitting next to you) and use their knowledge to put together, from scratch, a map of how it works.
Q: But, given everything written above about the damage to collective cognitive infrastructures, how do you choose those experts and sources?
A: Quoting Nero Wolfe, "experience as guided by intelligence." Which is another way of saying that there's no platform metric, algorithm, or publication venue that can make the process automatic.
There are certainly useful heuristics, e.g. if a person or venue says something obviously and glaringly false, then by definition they aren't acting in intellectual good faith and you can simply stop paying attention to them. And you can usually trust people you have identified as experts in a field to recommend other experts in fields they can be good judges of (that last caveat is very important; a lot of our problems come from people who should have known better trusting the opinion of, e.g., people good at financial engineering on things like sociopolitics or biology).
The bottom line is that most social markers of expertise have, to different degrees in different contexts, been co-opted, and to build your causal model you need to develop and exercise the ability to find good sources of expertise on your own. The main usefulness of the process I'm describing is that it gives you a semi-systematic context in which you have to.
Once you've finished writing that model, the second step is to figure out how to plug it into the world. A causal model is, in a way, a map of everything that could happen: if this, then that; but if this other thing, then something else. To use it, an organization needs to know what is happening, so it can decide what to do to influence what will happen.
Once again it's important to emphasize that this isn't an inventory of the organization's existing information sources. You've built a map of a part of the world, and now you have to list which sources are and aren't accessible and reliable for every component of that map. All the caveats mentioned above still apply. If anything, you need to be even more careful in your evaluation, and rely more on expert advice than on convenience or cultural salience.
And never, ever, ever fall into the fallacy of thinking that just because a field in your database or a headline in a newsletter has the same name as a component in your causal model, it's a direct measurement of it. People and organizations, even with the best intentions, often have to use proxies for the things they would ideally like to know, and as those results are packaged and distributed through the media, word of mouth, or even an organization's own spreadsheets and analytical systems, this gets forgotten, more or less deliberately, at huge cost. It's better to accept that you don't know something and act accordingly than to use a bad proxy just to have something.
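One way to keep yourself honest about this, sketched below with entirely hypothetical component and source names: for each model component, record not just where the information comes from but whether it's a direct measurement or only a proxy, and leave explicit gaps rather than inventing coverage.

```python
# Map causal-model components to information sources, being explicit about
# whether each source measures the thing itself or only a proxy for it.
# Every name here is an invented placeholder for illustration.
sources = {
    "customer_demand": {"source": "sales pipeline", "kind": "proxy"},
    "brand_trust": {"source": "survey panel", "kind": "proxy"},
    "market_conditions": {"source": "official statistics", "kind": "direct"},
    # No reliable source yet: an explicit gap beats a bad proxy.
    "public_discourse": None,
}

def coverage_report(components, sources):
    """List components with no source, and components covered only by proxies."""
    gaps = [c for c in components if sources.get(c) is None]
    proxies = [c for c in components
               if sources.get(c) and sources[c]["kind"] == "proxy"]
    return gaps, proxies

gaps, proxies = coverage_report(list(sources), sources)
print("no source:", gaps)
print("proxy only:", proxies)
```

The point of the exercise isn't the code; it's that the `kind` field and the explicit `None` make the difference between a name and a measurement impossible to quietly forget.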
As an aside, using genAI as a source of expertise or information compounds all of these problems: it can't distinguish between expertise and popularity, and it very much can't distinguish between a term and what it refers to. LLMs have their uses, but this isn't one of them.
Once you have at least a sketch of a causal model of the part of the world you work in, and some idea of what information sources you can use to keep track of it, you have the foundation of a rebuilt cognitive system: a clean, hopefully robust, at least more realistic map of your organization's environment.
I'm closing this post here, because the third step is to go carefully over your organization's processes and compare them with what you know of the world, and that depends very much on the details of each organization. But generally speaking, everything an organization does makes sense or not, can work or not, based on how well it fits the way the world works, the quality of its informational inputs, and the depth of the expertise deployed. There's a natural process of degradation over time as organizations end up evaluating all of these aspects against their own internal norms rather than a separate map of the world — e.g. how much data you have and how a certain metric has moved this quarter, not whether the metric means what its name implies or whether what you did really has a solid chance of changing what will happen.
The last few years — not for the first time, nor the last — have accelerated this process enormously, not because the internal tendency has become worse, but because the world, and the external cognitive infrastructures organizations implicitly rely on, have changed too much too quickly. Hence the need for this sort of deliberate review and repair, and why it can't be done without first building the causal model to review against.
Q: And at the end of all of this exploration?
A: You'll still be in the world where you started. Probably knowing it better. Hopefully with a clearer sense of what agency you have and how to seek more. These days that feeling can make a big difference. Reason enough to sit down, write down your organization's purpose, and spend some time just looking at the world with fresh eyes.