Let's start with a General Theory of Hiring Filters.
Like any filtering algorithm, the output of whatever AI you use to help you select candidates will depend on three independent sets of features:
Ideally, your algorithm will focus exclusively on features correlated with performance. This is harder than it looks — partly for reasons we'll touch upon later — so in practice algorithms are trained to replicate existing hiring patterns and therefore their performance. The key problem of hiring isn't that it is expensive but rather that it's difficult to get right. Even organizations like NFL and NBA teams, with lots of resources, data, and clear performance metrics, get draft choices wrong surprisingly often.
Non-gameable uncorrelated features is another way of saying bias. Features like gender, race, age, body type, mobility handicaps, etc, are usually entirely unrelated to job performance yet have strong impact on hiring and promotion patterns. Whether in AI models or more manual filtering processes, this is very illegal, quite immoral, and rather destructive to hiring performance.
What about gameable features uncorrelated with performance? Those are very seldom discussed because many organizations think of them as correlated with performance. Think about things like speech patterns, dress codes, or, closer to usual (for now) AI filters, CV formatting and word choice.
Let's look at that last one. Many hiring filters, automated and otherwise, will reject or at least disfavor people with badly formatted CVs; "how to format your CV" is a staple of job-seeking advice, even in jobs where formatting documents is not a key activity (and even where it is, it's eminently easier to train than other factors people aren't rejected for early in the process). The usual argument to do this is that it shows consciousness and orientation to detail; this would be somewhat defensible if not for the fact that it's much much easier to pretend to be detail-oriented while formatting your CV than it is while learning a complex skill. To put it in another way, the way somebody with a PhD has formatted their CV has zero informational content about their orientation to detail or any other cognitive skill once you factor in that they have a PhD. The same, at a smaller but still sufficient scale, can be said of other skills that take less time and effort but are still much more demanding than getting the right CV format.
What gameable non-correlated features do convey information about is the candidate's willingness to, well, dedicate time and energy to game features not correlated with performance. This is a measure of how much the person wants the job — useful information, to be sure, for companies to negotiate wages and conditions with — and, critically, can be highly predictive of on-the-job success in the sense of getting promotions and so on, but is at best noninformative about their ability in their expertise domain. A distressingly large part of corporate life is getting the right document formatted in the right way with the right phrasing and, often, getting to "the right conclusion," never mind jumping through various hoops that are unrelated to business performance but that serve well your career. This is true in every human organization from armies to prison gangs to governments. That this happens and is a problem is not the point of this post.
The point is that AI filters, in their current large language model-driven implementations, are eminently gameable in ways uncorrelated with performance. The key term is adversarial attacks: just as you can make semantically meaningless changes to images to fool classifiers, you can make semantically meaningless (or even detrimental) changes to texts to manipulate HR AIs. This doesn't need to be a "one WEIRD trick to get hired FOR SURE''; it's enough that it's generally possible and generally useful, which means that, at the margin, HR AIs don't add new features correlated with performance — they are using the same texts HR experts look at — and they often fall into "traditional" forms of bias, and do add new and exciting gameable features uncorrelated with performance.
By definition gameable uncorrelated features favor people who are happy to game them. As a whole, you are lowering the relative impact of work performance and increasing the relative impact of being the sort of person who thrives on figuring out how to play the system rather than ruthlessly focused on domain expertise. You might get both, of course, but you're nudging the scales in the wrong direction.
The bad thing is that this lowers future overall performance.
The really bad thing is that you won't notice, because the people you hire will actually, I suspect, do quite well career-wise. The more impact easily-gameable AIs have on career paths, the more useful it is to learn to game them, and the less useful it is to do pretty much anything else. You can get very far in business (as in everything else) with a minimum of competence and a maximum of charm, and building AIs that can be charmed — and what else is an adversarial attack but a charming gesture that convinces you of something you have no evidence for? — just adds another angle.
This isn't the fault of HR departments or AI developers: lacking good authoritative models of work performance — mostly because most companies' internal models are rather bad, particularly for strategic and intellectual work, and I'm not blaming anybody: it is hard — there's always a temptation to do a thing because we have to do something and it is a something, not because we have strong ex ante reasons to believe it'll work.
At its core this is an intellectual problem. Our hiring AIs have the performance they have because our understanding of on-the-job performance is what it is. It's not nothing, but it bounds what we can do with AIs. Properly speaking, the role of HR overlaps strongly with every operational model and measure: what else is HR but understanding how to best help people do what they do, and how can HR experts do that without a detailed understanding of what it is they do? How can we get AI-level hiring performance without AI-level models of work activities, which ultimately mean AI-level models of companies as a whole?
I fully expect AI to revolutionize HR, not because of what it'll help automate — as I said above, I believe at this point it'll just make things worse even if they look better under legacy metrics — but because of what it'll help understand. A revolution in the self-understanding of companies, which will be impossible without the integration of expertise from HR, AI designers, PM experts, and a long, long list of other domain experts, while difficult and expensive at first, will profoundly change the way candidates are evaluated. This time for the better.