This is not without consequences.
LLMs are built on very sophisticated and expensive technical infrastructure; this gives them a high-tech aura that extends, inaccurately, to what they generate. Text, code, and other LLM output is plentiful, cheap, easy to customize, and often works well enough at first. It also breaks under pressure. The moment you ask an LLM to elaborate on an answer, to extrapolate to a new context, or to meet any demand on its ability not just to write or code but to understand what it's writing or coding about, it collapses in the most damaging way possible: silently.
From an engineering point of view, LLMs are sophisticated tools that output plentiful but low-quality language-like gizmos. Maybe the technology will improve and they'll be able to create intellectually robust output. I suspect whatever AI ends up being able to do that will not look at all like an LLM, but that's irrelevant: right now there's no LLM on the planet that generates cognitively high-quality output you can expand on with confidence.
And that's fine! Not every computer has to have deep-space-rated reliability. Not every email has to read as if it were written by Terry Pratchett.
What I want to focus on, because it's often overlooked, is this:
The sorts of things LLMs create can serve as raw inputs or temporary scaffolding, but they often end up being used as critical infrastructure themselves.
It's easiest to see it when thinking about code. A one-off script used to poke at a new API is an example of an ephemeral tool, like a quick first study from an artist. It doesn't have to work very well or be robust, extensible, and well-understood in order to be useful. You'll only use it a few times, maybe just during a single afternoon, and throw it away. Fast and good enough is ideal.
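For concreteness, here's a minimal sketch of the kind of throwaway probe I mean; the endpoint and response shape are made up for illustration:

```python
# Disposable probe: hit an endpoint, eyeball the response shape, move on.
# The URL and query parameters are hypothetical; substitute whatever API
# you're actually exploring.
import json
import urllib.request

with urllib.request.urlopen("https://api.example.com/v1/items?limit=3") as resp:
    data = json.load(resp)

# Print just enough to see the structure. No error handling, no tests, no docs.
print(json.dumps(data, indent=2)[:2000])
```

Nobody should ever have to maintain this, and that's exactly the point.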
But not all code is like that. Production code, of course, but also things like developer tooling, runs frequently and on critical paths. Robustness, extensibility, ease of understanding, performance profile: those are all high-impact factors in your company's profitability and growth potential. That's long-term, high-ROI capital: the time it takes to get a first version is immaterial compared with the expertise and thought you put into it. Getting something that sort-of-works twice as fast is, over the life cycle of its impact on your company, a catastrophic underinvestment compared with something that takes longer to build and works much better. This applies as much to a one-day bugfix as to a one-quarter feature.
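A back-of-envelope calculation makes the asymmetry concrete. The numbers below are entirely made up; only the shape of the comparison matters:

```python
# Lifecycle cost of a daily-use internal tool, in engineer-days.
# All figures are hypothetical illustrations, not measurements.

build_fast, build_careful = 5, 10     # days to build each version
friction_fast = 15 / (60 * 8)         # 15 min/day lost to flakiness, as a fraction of a workday
friction_careful = 2 / (60 * 8)       # 2 min/day
lifetime = 2 * 250                    # used daily over two working years

cost_fast = build_fast + friction_fast * lifetime           # ~20.6 days
cost_careful = build_careful + friction_careful * lifetime  # ~12.1 days
print(f"fast and flaky: {cost_fast:.1f} engineer-days")
print(f"slow and solid: {cost_careful:.1f} engineer-days")
```

Halving the build time looks great on day one and is irrelevant by month six, and this toy model doesn't even count the cost of decisions made on top of a tool that's subtly wrong.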
Outputs other than code have a subtler but even more meaningful impact on the performance of a company. The takeaway of a data analysis or a presentation, even a single graph on a slide, can inform the strategy and day-to-day operations of a company for months: a chart, a slide, or a paragraph in a report can become a key component of the wetware that's often the most critical part of a company's collective intelligence. It, too, is a high-ROI, high-sensitivity piece of infrastructure, whether it's referenced every day or just internalized from a meeting into a shared assumption.
Using an LLM interface to get a quick data analysis or summarize reports is probably fine for the analytical and strategic equivalent of throwaway scripts, like generating a stack of slick graphs and slides for a presentation to a customer who isn't well-versed in the data. But anything you're going to use to decide what to build, how to run a process, or even which goals are feasible and how to reach them may well make the difference between success and failure for the organization as a whole.
Deploying a tool that promises to "get to insights faster" or "democratize analysis" (even if it delivered on that promise, which is a questionable premise on its own) is a deeply self-sabotaging form of capital misallocation. Technology hasn't changed the basics of financial optimization: the higher the multiplier between the performance of a piece of your infrastructure and your ultimate goals, the more you want to invest in it. For a key piece of code, tooling, analysis, or reporting, the speed and cost savings of an LLM-assisted process are insignificant compared to the total investment you will, or should, make in time, resources, and expertise to make it as good as possible. What's worse, the first draft generated by an LLM is so unreliable and commonplace that it'd be better to start from scratch anyway.
You don't build a high-performance engine by going to a low-cost supplier, looking for pieces that are more or less OK, and then trying to patch them into something good. You find the money and the people to get high-performing parts the first time, so you have a chance of building a high-performance whole. Showing a customer an LLM-generated report with some editing and checking by a human might be the optimal cost-speed-quality compromise. But any presentation or code that might shape the long-run behavior and performance of a company needs to be built as slowly and expensively as feasible, for as long as the marginal dollar and hour buy you a better output. You're going to put it in your servers and in your brain. "Cheap and fast" isn't what you want to focus on for anything that goes there.
None of this means "don't use AIs": many cutting-edge products can't be built without some form of high-end AI (although those don't usually look like the natural-language, prompt-driven systems most people call AI these days). It doesn't even mean "don't use LLMs." There's a place in most organizations for cheap, customizable, low-quality outputs, for both internal and external use.
What it does mean is that you have to understand which of your code, data analyses, words, and slides will become part of your core infrastructure and which won't, and resist the temptation to use sophisticated tools to craft low-quality components for the parts of your company that determine how well it runs, or even whether or not it'll explode.