Blaming algorithms is the topical angle, but let's not blame them for the United debacle. If anything, algorithms might be the way to reduce how often things like this happen.
What made it possible for a passenger to be hit and dragged off a plane to spare an airline a hiccup in its personnel logistics wasn't the fact that the organization implements and follows quantitative algorithms, but the fact that it's an organization. By definition, organizations are built to make human behavior uniform and explicitly determined.
A modern bureaucratic state is an algorithm so that bureaucrats will behave in homogeneous, predictable ways.
A modern army is an algorithm so that people with weapons will behave in homogeneous, predictable ways.
And a modern company is an algorithm so that employees will behave in homogeneous, predictable ways.
It's not as if companies used to be loose federations of autonomous decision-making agents applying both utilitarian and ethical calculus to their every interaction with customers. The lower you are in an organization's hierarchy, the less leeway you have to deviate from rules, no matter how silly or evil they prove to be in a specific context, and customers (or, for that matter, civilians in combat areas) rarely if ever interact with anybody who has much power.
That's perhaps a structural, and certainly a very old, problem in how humans more or less manage to scale up our social organizations. The specific problem in Dao's case was simply that the rules were awful, both ethically ("don't beat up people who are behaving according to the law just because it'll save you some money") and commercially ("don't do things that will get people viscerally and virally angry with you somewhere with cameras, which nowadays is anywhere with people.")
Part of the blame could be attributed to United CEO Oscar Munoz and his tenuous grasp of at least simulated forms of empathy, as manifested by his first, and probably most sincere, reaction. But hoping organizations will behave ethically or efficiently when and because they have ethical and efficient leaders is precisely what rules exist to make unnecessary: one of the major points of a Republic is that there are rules constraining even the highest-ranking officers, so we limit both the temptation and the costs of unethical behavior.
Something of a work in progress.
So, yes, rules are or can be useful to prevent the sort of thing that happened to Dao. And to focus on current technology, algorithms can be an important part of this. In a perhaps better world, rules would be mostly about goals and values, not methods, and you would trust the people on the ground to choose well what to do and how to do it. In practice, employees, soldiers, etc., have very little flexibility to shape their own behavior, due to a combination of the advantages of homogeneity and predictability, the real or perceived scarcity of people you'd trust to make those choices while only lightly constrained, and maybe the fact that for many people the point of getting to the top is partly to tell other people what to do. To blame this on algorithms is to ignore that it has always been the case.
What algorithms can do is make those rules more flexible without sacrificing predictability and homogeneity. While it's true that algorithmic decision-making can behave counterproductively in unexpected cases, that's equally true of every system of rules. But algorithms can take into account more aspects of a situation than any reasonable rule book could handle. As long as you haven't given your employees the power to override rules, it's irrelevant whether the algorithm can make better ethical choices than they could — the incremental improvement happens because it can make a better ethical choice than a static rule book.
In the case of United, it'd be entirely possible for an algorithm to learn to predict and take into account the optics of a given situation. Sentiment analysis and prediction is, after all, a very active area of application and research. "How will this look on Twitter?" can be part of the utility function maximized by an algorithm, just as much as cost or time efficiencies.
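To make that concrete, here's a minimal, hypothetical sketch of what "pricing in the optics" could look like. The Option class, the backlash_risk numbers, and the BACKLASH_COST weight are all invented for illustration; backlash_risk stands in for the output of whatever sentiment- or PR-prediction model the organization trusts.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One possible action, e.g. 'bump a seated passenger' or 'rebook the crew'."""
    label: str
    operational_cost: float  # dollars the airline expects to spend
    backlash_risk: float     # model-predicted probability of a viral PR incident, 0..1

# Hypothetical weight: expected dollar damage of a full-blown PR disaster.
# In a real system this would be estimated, not hard-coded.
BACKLASH_COST = 1_000_000.0

def utility(option: Option) -> float:
    """Lower is better: operational cost plus expected reputational cost."""
    return option.operational_cost + option.backlash_risk * BACKLASH_COST

options = [
    Option("forcibly remove a seated passenger", operational_cost=800.0, backlash_risk=0.05),
    Option("raise the voluntary-bump offer", operational_cost=2_000.0, backlash_risk=0.001),
    Option("rebook the crew on another carrier", operational_cost=1_500.0, backlash_risk=0.0),
]

best = min(options, key=utility)
print(best.label)  # the cheap-but-risky option stops looking cheap once backlash is priced in
```

The point of the sketch isn't the numbers; it's that once an expected reputational cost sits in the same score as the operational cost, the "cheapest" option is no longer automatically the one that ends up on YouTube.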
It feels quite dystopian to think that, say, ride-hailing companies should need machine learning models to prevent them from suddenly canceling trips for pregnant women headed to the hospital in order to pick up a more profitable trip elsewhere; shouldn't that be obvious to everybody from Uber drivers to Uber CEOs? Yes, it should. And no, it isn't. Putting "morality" (or at least "a vague sense of what's likely to make half the Internet think you're scum") in code that can be reviewed, as — in the best case — a redundant backup to a humane and reasonable corporate culture, is what we already do in every organization. What we can and should do is teach algorithms to try to predict the ethical and PR impact of every recommendation they make, and take that into account.
Whether they'll be better than humans at this isn't the point. The point is that, as long as we're going to have rules and organizations where people don't have much flexibility not to follow them, the behavioral boundaries of those organizations will be defined by that set of rules, and algorithms can function as more flexible and careful, and hence more humane, rules.
The problem isn't that people do what computers tell them to do (if you want, you can say that the root problem is when people do bad things other people tell them to do, but that has nothing to do with computers, algorithms, or AI). Computers do what people tell them. We just need to, and can, tell them to be more ethical, or at least to always take into account how the unavoidable YouTube video will look.