Three ways to sabotage a company using AI

2024-06-14

Whether you've just infiltrated a company or have been undermining it from the inside for years, AI tools and the excitement around them have opened up new and powerful opportunities to wreak havoc on your target.

Here are three of the most interesting ones:

Content Flooding

The easiest way to clog a company is to flood it with useless information and time-wasting meetings. This was already known to our predecessors at the OSS (later reborn as the CIA), who wrote the timeless Simple Sabotage Field Manual; it remains the corporate saboteur's version of The Art of War.

And what do most of the popular AI tools do? They make it easier to write emails, reports, and slides! Some of them even facilitate creating superficial queries to databases, multiplying the number of quick "insights" that can be reported. It would be hard to imagine a more powerful tool for a saboteur. Where padding a report with paragraphs of bland verbosity and pages of irrelevant context once took time and energy, now the most mind-numbing text or endless deck can be had literally for the asking.

What's more, you don't need to do most of the asking yourself either. The ubiquity of these tools and the pressures and incentives of corporate life are pushing everybody to write, post, and present more, simply to keep up with their peers. The result is a self-inflicted distributed denial of attention that has already made many public platforms unusable and is beginning to impact internal decision-making systems.

The experienced saboteur will recall that the cost in time of reading longer emails and sitting through longer meetings, while certainly damaging, isn't the primary way in which content flooding degrades a target's performance. With increased content comes higher cognitive load. The more extraneous language, details, and facts there are in a piece of content, the more work the individual and the group must do to extract useful information from it and turn that information into effective decisions. It also becomes more likely that they will focus on irrelevant details, misunderstand the overall picture, or make decisions they would never make if working from a terse summary tailored for people who are assumed to be already more than familiar with the relevant context and knowledge.

All a saboteur needs to do is stay on the lookout for proposals to set requirements and limits on content density and length, and quietly derail them. The personal cost of creating and padding content has fallen nearly to zero, so as long as there is no strong and explicit corporate discipline to prevent it, we can expect content flooding to do most of our work for us.

Decoy Metrics

A professional might rightly protest that decoy metrics don't belong in this list: they are, after all, as old as measurement-driven management itself. It's in the nature of the world that the easiest things to measure are the ones that are gameable or irrelevant to the business's ultimate success, and it's in the nature of corporate culture that making a nice-sounding number go up is the easiest way to define, perform, and evaluate management. This happy coincidence sometimes makes it difficult to distinguish the fellow saboteur from the innocent career-maximizing manager.

But beware! There's a serious danger to us hidden next to the more facile uses of AI. Ruthlessly deployed with political support, AI makes it easier than ever — or, luckily, merely less difficult — to find and monitor metrics that, however unfamiliar or laborious to obtain, have large causal impacts on headline numbers. A more worrisome scenario would be hard to imagine: good causal metrics make management more effective, streamline reporting and decision-making, and, generally speaking, act as a profound hindrance to our work.

Let's not fall into despair, though. AI makes obtaining causally effective metrics less difficult, not easy: incumbent effects, personal incentives, and the sheer conceptual jump needed to shift corporate culture around this idea are all on the side of the saboteur. AI's ability to make it easier to query databases and generate reports based on arbitrary views of the data will tend to multiply the number of decoy metrics, and there is also a large crop of new metrics related to the AI systems themselves. The cautious saboteur will quietly redirect investments and organizational changes away from corporate self-understanding and towards doubling down on management based on the measurement of atomized metrics of unexamined impact. Only a very incompetent or very unlucky saboteur risks failing at this.

Expertise Draining

Stalling a target's activities is good. Making it feel good and successful while doing it is better. But the ne plus ultra of the saboteur is to simultaneously remove from a company not just its core competitive advantages but also the means by which it can increase or recover them.

Today the core competitive advantage is expertise — the deep expertise that makes competitors uselessly throw money and time at a problem that can only be solved through understanding and experience. At first sight, then, AI is the ultimate enemy of the long-term saboteur, as it's a technology that takes the development, sophistication, and scale of expertise to unprecedented levels. The organization that knows what it does better than everybody else will not just have an advantage regardless of what else the saboteur might do to it: it will have a natural defense against most forms of sabotage.

Should saboteurs therefore attempt to prevent their targets from adopting AI? No.

Managers deeply dislike expertise. Expertise is expensive. Expertise is slow and tricky to accumulate. Expertise is extremely hard to measure and manage. Expertise is opaque, stubborn, and unwieldy. It's no surprise, then, that as AI providers offer the possibility of replacing (after a fashion) in-house expertise with simpler calls to third-party APIs that are not just easier to manage but also fashionable with media and investors, companies are rushing to take advantage of it.

The official line is that this is meant to increase the amount of expertise a company applies, but we saboteurs know how seldom that happens. Cheap, fast, more-or-less-equivalent expertise is a fantastic way to improve margins, lower headcount, scale easily, and achieve other undoubtedly good short-term things. It also leads companies — wherever there isn't a deliberate decision to push back against it — to stop investing in developing new expertise: partly because of changed incentives, and partly because it's much harder to deepen your understanding of a process when you're asking somebody else's AI to think about it for you.

Outsourcing a process also outsources improving it, and forfeits any possibility of building a competitive advantage on it.

This is the saboteur's dream: in exchange for tempting short-term advantages, the target loses the desire and the ability to develop proprietary expertise over the long term. It's a leaner and shinier company now. For those who have seen both solid behemoths and nimble startups burned to ashes by the flash fire of true innovation, the signs couldn't be darker or more evident. It takes very little to nudge a target in this direction; often it would be harder to prevent them from doing it.

What can we conclude about the impact of AI on the subtler aspects of corporate sabotage? The technology might be novel, but the basics of the art remain the same, and so does our diagnosis: yes, AI holds the potential to make the saboteur's work much harder, or even to undo years of systematic effort. But the forces of inertia, culture, and conservatism are, as always, on the side of the saboteur; as always, the limits of what companies can imagine and perform are much narrower than the technological potential. The careful and ambitious practitioner will be able to leverage corporate culture and inertia, and use AI to reach new heights in this storied profession.