Cognitive culture, the tech market collapse, and what to focus 2023 startups on

2022-11-11

A beginning is the time for taking the most delicate care that the balances are correct.
(first epigraph of Dune, by Frank Herbert)

Much of the strategic and technical investment in AI has the goal, explicit or implicit, of making an organization qualitatively smarter as a whole. Now that valuations are falling left and right, it's perhaps more obvious how rarely this happens - how much of the value placed on these investments derived either from general market correlations or from the hope, rather than the experience, of radical organizational cognitive improvements.

Anything that makes you smarter is more valuable in a crisis, not less. So why isn't demand for AI exploding in the profitable sense of the expression?

The first level of explanation is that most companies don't have experience with this sort of organizational cognitive enhancement. When they pay for the technology it's a forward-looking bet on new improvements, and in a crisis those are the first investments to be shut down (I'd argue that for many companies this is sub-optimal, but that's a different discussion). The question, then, is a more general one: why is it that most companies don't have the kind of concrete experience of increased organizational smarts that would make them trust the idea even during a crisis?

I know the technology, and more so the mathematics, work, and I usually explain this difficulty in using AI to increase organizational intelligence in terms of psychology and politics, a framework that I believe is very useful. There's a complementary one, though, based on the usual anthropological observation that implicit patterns of interaction can be at least as influential as explicit rules, and are usually more difficult to change precisely because they tend to be invisible.

This shows up most obviously in how hard it is to fight harassment of all sorts in organizations where it exists, but the same pattern has a cognitive version: the implicit ways in which an organization gathers information, argues, and decides can matter at least as much as its formal processes, and are just as hard to see.

To put it another way: if culture is a key aspect of a company, its cognitive culture is a key aspect of that culture — one that interacts in multiple ways with the rest of it, from formal practices to informal politics. It's just less often considered, and it's extremely difficult to change. You can't just tell everybody that from now on everything will be data-driven and send them links to tutorials if everything else they do, from the documents they are expected to write to the language they speak in to the way they know they are informally, unconsciously evaluated by their managers and peers, remains more or less the same. Every cognitive culture has its own tradeoffs: changing how you think means spending more time thinking about some things and less about others. There are opportunity costs; there is expertise no longer useful and expertise now painfully lacking. Ways of evaluating people change. It takes, among other things, an enormous amount of internal trust, especially bottom-up: if you have thrived or survived working in a certain way, if your managers have rewarded you for doing things one way and not another, it takes more than a statement that you will now be evaluated differently before you will risk a painful and, at first, difficult change.

None of this is to impugn the good faith of these attempts; insofar as you can consider an organization to be the sort of collective entity that honestly wants to do something, then, yes, more often than not organizations honestly want to do it. But, just as deep psychological change takes more than an honest, willful attempt, there are (far) more failures than successes in deep organizational cognitive change.

Now for an optimistic observation: this is exponentially easier the smaller and newer the organization is. Can you make a 1000-, 100-, or even 50-person organization AI-driven in the most interesting and competitively powerful sense of the concept? Based on the above, I'd say "only with heroic levels of effort, money, and luck." Can you make a five-person organization truly AI-driven? Yes! It's intellectually challenging and not really comfortable at first — it's not "buy the right software and a very cool slide deck" — but it's so much easier that it enters the field of the almost repeatable.

The catch is that it's only easy at the beginning of an organization. As soon as you start scaling, whatever cognitive culture you have is much more likely to get worse than to get better, or even to stay the same. Every new person, piece of software, and process has its own assumptions, culture, and ways of doing things. This diversity is very powerful, but adapting each new resource to your existing cognitive culture takes time and effort, and even then the adaptation is only partial. The more you grow, the more you end up with a cognitive culture halfway between your original one and the average of your recruiting pool (and infrastructural environment), so whatever competitive advantage you want to derive from the way your organization thinks as a whole has to be there at the beginning, and in excess of what you think you will need. As organizations get larger they get more powerful, more resourceful, with a deeper expertise pool, more contacts, more data, more hardware, better everything - except their culture, cognitive and otherwise. That dilutes over time, so to be great at scale you need to aim at first to be unjustifiably good. I use "unjustifiably" deliberately, because investment culture — there's that word again — rewards growth-focused early investment and iteration, a sort of good-enough-for-next-month approach that's defensible (I do have caveats) for things like software, but not for the architecture of the collective brain.
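A toy model makes the dilution dynamic concrete. Everything in the sketch below is an illustrative assumption rather than a measurement: a 0-to-1 "culture level" scale, a recruiting pool averaging 0.2, and a rule where each hire arrives at the pool average and adapts only partway toward the team culture they find.

```python
# Toy model of cognitive-culture dilution during growth. All numbers and
# the adaptation rule are illustrative assumptions, not measurements.

def culture_after_growth(founders=5, hires=995, pool=0.2, adaptation=0.5):
    """Average culture level after growing from `founders` people to
    `founders + hires`, assuming each hire arrives at the recruiting-pool
    average and adapts only partway toward the team culture they find."""
    team = [1.0] * founders  # founders start at culture level 1.0
    for _ in range(hires):
        current_avg = sum(team) / len(team)
        # A new hire moves `adaptation` of the way from the pool average
        # toward the current team average, then becomes part of the team.
        team.append(pool + adaptation * (current_avg - pool))
    return sum(team) / len(team)

for n_hires in (5, 45, 95, 995):
    avg = culture_after_growth(hires=n_hires)
    print(f"{5 + n_hires:4d} people: average culture = {avg:.2f}")
```

Under these assumptions the founding 1.0 decays toward the 0.2 pool average as headcount grows; a higher `adaptation` slows the drift but doesn't reverse it, which is the quantitative version of "aim at first to be unjustifiably good."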

Next year is likely to be one in which VCs invest little but, at the same time, don't expect a lot. If you can get around the former, the latter can give you an advantage: time. Time not to grow, but to work and rework in complete detail the culture and infrastructure of how your fewer-than-ten-person company will still think when it has a ten-digit market cap. Not doing the seemingly overkill work when you can might kill you later, when you need it and can't.

As a sort of postscript, there's One Weird Trick that can help large organizations do something similar: build it inside. Very much not a new division, much less an AI Department, a C-level organization, or anything like that. Give 2-6 people an office and have them work full time on designing an organization from scratch, one with an overkill cognitive culture. Once they have worked out the details, start shifting resources there, very, very carefully. You don't want to reassign whole branches of the org chart: that would hurt the new organization's cognitive culture more than it would help it. Reassign or hire people one by one, giving them time to adjust and learn. Give them time to build or adapt the software they need. Be a patient VC, not a CEO. Even better, be a patient researcher watching an experiment. And as it goes well and grows, let it grow - defend it from internal pressures, prevent it from being dismembered and absorbed in a vain attempt by the rest of the organization to "eat the culture" by getting its people assigned to their teams.

If it works, when it works, the old organization will go away, and you'll have a new one with the same name, the same assets, the same everything, except a much better cognitive culture. It might sound like an insane way of doing it, but it's not slower or more expensive than trying to shift an already large organization. And whether you're incubating your replacement in-house or not, another version is already growing outside.