For all of its explosive growth, AI has yet to make deep inroads into management (as opposed to operations) and into theoretical science beyond data analysis. But both frontiers will be pushed back faster and further than most people think, and with unprecedented impact.
The difficulty so far lies in the imbalance between the complexity of the systems we want to control and the limits on experimentation. Computers are superb at learning how to do even extremely complex things, like playing Go or chess, as long as they are given the freedom to experiment. Even complex physical tasks like driving a car are beginning to be manageable through a combination of massive data sets, good simulators, and very large amounts of money spent having cars drive themselves around.
Companies have much smaller margins of error: you can't test thousands of business strategies until a neural network learns what works for you. Scientific research, for example in medicine, faces a related set of issues. Although medicine is a profoundly experimental science, our tools are so inelegant compared with the (often underestimated in the popular press) complexity of the human body that we have comparatively little information about what's going on, and relatively blunt tools with which to try to steer it.
The next breakthrough, or rather what's already moving through the "early adopter" phase, lies precisely in algorithms focused on learning how to act on systems with a minimum of experimentation. The combination of causal models (a type of probabilistic model with some simple but powerful mathematical extensions) and increasingly flexible probabilistic programming systems is moving us, as described in Judea Pearl's technical report, from tools that let us figure out what we know about what's going on based on what we see, to tools that let us ask what will happen if we do something to the system, or what would have happened if we had — all while integrating, in an efficient way, both the available data and the conceptual knowledge of human experts.
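The distinction between seeing and doing can be illustrated with a toy structural causal model. Everything below — the variables, the coefficients, the "marketing spend vs. sales" framing — is invented for illustration: a hidden confounder makes the observed correlation between an action and an outcome larger than the action's true causal effect, and simulating an intervention (Pearl's do-operator) recovers the truth.

```python
import random

random.seed(0)

def sample(do_x=None):
    """One draw from a toy structural causal model.

    Z -> X, Z -> Y, X -> Y.  Passing do_x cuts the Z -> X edge,
    which is exactly what an intervention do(X = x) does.
    """
    z = random.gauss(0, 1)                     # hidden market conditions (confounder)
    x = z + random.gauss(0, 1) if do_x is None else do_x   # marketing spend
    y = 2.0 * x + 3.0 * z + random.gauss(0, 1)             # sales: true effect of X is 2.0
    return x, y

N = 200_000

# Observational query E[Y | X ~ 1]: selects draws where spend happened to be
# high, which also selects for favorable market conditions Z.
obs = [y for x, y in (sample() for _ in range(N)) if 0.9 < x < 1.1]
e_obs = sum(obs) / len(obs)          # analytically 3.5: confounded estimate

# Interventional query E[Y | do(X = 1)]: we set the spend ourselves,
# so Z no longer leaks into the comparison.
intv = [sample(do_x=1.0)[1] for _ in range(N)]
e_do = sum(intv) / len(intv)         # analytically 2.0: the true causal effect

print(f"E[Y | X = 1]     ~ {e_obs:.2f}")
print(f"E[Y | do(X = 1)] ~ {e_do:.2f}")
```

A model trained only to predict from passive data answers the first query; a causal model also answers the second, which is the one a manager setting a budget actually needs.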
It all sounds significantly more abstract, and much less exciting, than self-driving cars, but its impact will be seismic. These days managers and scientists make decisions using superhuman amounts of information; nobody in a competitive organization is expected, or allowed, to operate using only the information they can hold in their head. In a comparatively short time, the only competitive organizations will be those that make decisions using data-driven causal models, allowing them to simulate the results, and know the uncertainties, of different actions. Despite their apparent simplicity compared with the ambiguous richness of our intuitions, properly built causal models prove to be more effective, dynamic, and, perhaps above all, scalable than anything a human could master on their own, much less run in their head.
High-level jobs will change, not just in how they are performed but also, to an important degree, in their very nature. Even more significantly, organizations that deploy these technologies will be consistently and qualitatively better at making decisions at all levels, in a cumulative, constantly improving way. Expect the leading business and research organizations to move faster, more effectively, and in stranger ways than ever before, with the gap between winners and losers becoming larger and harder to overcome regardless of money or geography.
That's probably the only constant in the history of the still-ongoing IT revolution: it's not the ability to create new technology or the money to pay for it that defines competitive advantage — after all, most (near-)cutting-edge AI, except in specific industry verticals, is freely available — but the willingness, or rather the eagerness, with which an organization chooses to adopt it.