"Second order effect" is just another way of saying "Oops."
(An earlier version of this post got out of hand, so I'll just sketch it.)
- Copilot and other Large Language Models excel at finding and rephrasing relevant fragments of solutions to common implementation problems.
- As most developers at most companies spend most of their time implementing ideas that have already been implemented a thousand times (don't let hype inflation convince you otherwise), LLMs do increase their productivity.
- Unavoidably, managers respond to this by spending less on developers (layoffs, replacing them with more junior hires, assigning more projects to the same people, and so on), which makes next quarter's margins larger and the relevant stakeholders happy.
- That's where we are right now.
- But using an LLM isn't a sophisticated, high-capex investment in productivity: everybody can do it and everybody will.
- The newer, smaller, hungrier, or just more desperate companies, though, will look at the same reduced labor costs and, instead of thinking of their margins, think of their market share and lower their prices.
- Now the managers who downsized will have to lower prices too. Profits go down, and the relevant stakeholders are less happy.
- Even worse for them, LLMs don't just increase productivity: they lower barriers to entry at the low end of the skill curve. This adds extra downward pressure on labor costs and margins.
- So most likely you'll end up with more developers than you started with, just to keep profits up. Low-margin markets with low barriers to entry can be mean.
The economics-minded reader will have seen this coming a mile away, and will already know the overall impact once the dust settles:
- Larger but not more profitable companies.
- More developers paid much worse.
- Cheaper and more plentiful software of the "a couple of tutorials and good Stack Overflow queries" kind.
This isn't a contrarian pitch; it's the way this sort of technological impact usually goes. Copilot isn't an industrial robot, it's Excel: a huge productivity booster in the right context, but cheap and easy enough that if you use it in your job you aren't paid more for the extra productivity; you're required to use it in order to have a job at all.
A hopeful note for those who enjoy writing software and would like to be paid well for it: LLMs are very, very bad at understanding the world. They are very good at coming back with plausible answers to common questions, but the more you ask novel questions that combine ideas and explore hypotheses, the more obvious it becomes that they are just making stuff up. That's not bad engineering: they are linguistic models, and making stuff up as they go is what they do.
If the value you add is not mainly in implementing a solution but in understanding a part of the world well enough that you can come up with a novel one, that's a skill you'll still be able to charge well for. In a way it's a return to an older view — although always more aspirational than real — of computer programming as an intellectual tool to think about problems, not the problem we have to figure out.
To end on a curmudgeonly note, I consider even the short-term impact of coding assistants a negative from a software quality point of view: so far I've seen them lead to code that's written faster (good), less well understood by the developers (bad), longer (bad), and less conceptually elegant (very bad). They get feature request tickets done faster at the cost of maintainability, system integrity, and cumulative knowledge development.
In short, it's going to be an absolute hit across the industry.