Automate with Care

2024-10-09

The link: Do U.S. Ports Need More Automation? (from Construction Physics)

What it says: It looks at the relatively low performance of U.S. ports compared to their worldwide peers, and examines to what degree this can be attributed to their comparatively low levels of automation. The results are mixed: not all of the most efficient ports in the world, or even in a given country, are the most automated, although some are; automation projects have sometimes increased performance, but in other cases have had little or even negative impact.

Why?: It pays to read the article (as usual for the newsletter), but there are some common themes. Automation projects usually reduce labor costs, which is the basic appeal, but they are sometimes reliable only under a narrower set of conditions (e.g., calm weather, or only when working with specific sets of inputs). Beyond that, even when an automated activity is more efficient than the manual one, this doesn't necessarily mean it will increase overall performance: without changes upstream and downstream of the activity, its increased efficiency might remain unexploited, or even prove counterproductive by creating downstream jams.
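To make that bottleneck logic concrete, here is a minimal sketch (not from the article; the stage names and throughput rates are hypothetical) of a serial pipeline whose end-to-end throughput is set by its slowest stage. Speeding up a stage that isn't the constraint doesn't move the system-level number; it just feeds work into the jam faster.

```python
# Minimal sketch (hypothetical stage names and rates): a serial pipeline moves
# at the pace of its slowest stage, so automating a non-bottleneck stage does
# not raise end-to-end throughput, it only piles up work downstream.

def system_throughput(stage_rates):
    """End-to-end throughput of a serial pipeline is bounded by its slowest stage."""
    return min(stage_rates.values())

# Hypothetical port-like pipeline: containers per hour at each stage.
baseline = {"crane_unload": 30, "yard_transfer": 25, "gate_dispatch": 40}
print(system_throughput(baseline))  # 25 -- yard transfer is the constraint

# Automating the cranes (not the bottleneck) doubles their rate...
automated_cranes = {**baseline, "crane_unload": 60}
print(system_throughput(automated_cranes))  # still 25 -- no overall gain, and the
# yard now has to absorb a faster arrival stream (the "downstream jam")

# Only relieving the actual constraint moves the system-level number.
automated_yard = {**baseline, "yard_transfer": 35}
print(system_throughput(automated_yard))  # 30 -- and the bottleneck shifts to the cranes
```

The numbers are invented, but the shape of the result is the point: the minimum is what the business actually sees, which is why the gains from automating a single activity can evaporate at the system level.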

The larger context: "AI" is a more common label than "automation," but either way it's where a lot of the money and attention are nowadays, so this example is an important one. And it generalizes very directly even to pure knowledge work. The industry is littered with examples of low- or negative-ROI projects, some of which even worked in a technical sense, either because they focused on activities that weren't profit-limiting factors (e.g., speeding up processes that sat outside any critical path), or because they required upstream and downstream adjustments that negated any overall performance improvement.

The takeaway: None of this is to say that AI/automation isn't a tremendously powerful set of technologies ("AI" is really a cultural label for an always-shifting, vaguely defined set of technologies, but that's a different article). But complex systems, from software infrastructure to a financial company to a port, are, well, complex. You can't blindly change the performance and reliability profiles of individual components and assume this will translate transparently into overall system performance. If you're lucky, you sometimes get supralinear effects: improve a key subsystem by a few percentage points and the whole system's output multiplies. If you're unlucky, you improve one part of the system and end up breaking the rest. Or, rather than luck, think of it in terms of engineering: if you have studied and understand the performance characteristics of your system, you will likely be able to predict ahead of time the impact of, say, replacing a manual process with an AI-directed one, or at least to set up the right scaffolding to run the experiment in a safe, low-cost way. If you haven't done that analysis beforehand, then any AI project faces the twin dangers of losing you money by not working or, worse, losing you money through the side effects of working. It's one of the main reasons why AI improvements are often most effective in "simpler" companies, where a top-down understanding of the key processes and metrics makes it easier to figure out which activity-level improvements would have well-leveraged system-level impacts.