From ON Magazine, Issue No. 4, 2009
By Andrew Odlyzko
Technology prediction is inherently hard. And it is even harder to predict how society will react to a new product or service.
Potential customers may sneer at a new technology, as happened a decade ago with application service providers (ASPs). Or they may embrace it, as seems to be happening with today’s incarnation of ASPs, cloud computing. (Of course, it is still too early to tell if what we see is truly an enthusiastic embrace, or simply hype generated to stimulate an enthusiastic embrace.) The presence of complicated feedback loops—hype can inspire creation of new applications, which will make a service more attractive and persuade people to try it—makes the task of prediction even harder. So it is no wonder that “progress by mistake” is not just frequent, but almost a rule.
An unexpected killer app
Technology can surprise on the upside as well as the downside. E-mail, which was specifically excluded from the design criteria for the ARPANET, became the “killer app” of that network as well as its descendant, the Internet. Who could have known at the time that the computer mouse, demonstrated by Doug Engelbart more than 40 years ago, would today still be the key device for human-computer interaction? And the World Wide Web, now 20 years old, spread slowly for several years, until the release of the Mosaic browser made it widely accessible—and then it caught fire. But even then, in the first few years, there was considerable speculation that even better tools for accessing information over the Internet might emerge.
What can we conclude from the long history of failed technology predictions? Wide experimentation is certainly called for, as well as maximizing the flexibility of new technologies, in order to accommodate demands that one did not foresee initially. One should not count on serendipity, but be prepared for it. And, of course, we should ride the technology curve, taking advantage of Moore’s Law and similar laws that provide predictable progress in information technologies, at rates that vary from field to field.
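The idea of "riding the technology curve" can be made concrete with a small calculation. The sketch below projects capacity growth under a Moore's-Law-style doubling curve; the two-year doubling period is an illustrative assumption, since actual rates have varied by field and era, as the article notes.

```python
def projected_capacity(base, years, doubling_period=2.0):
    """Project capacity under exponential (Moore's-Law-style) growth.

    The doubling period is an assumption; historically it has ranged
    from roughly 1.5 to 2 years and differs across processing,
    storage, and bandwidth.
    """
    return base * 2 ** (years / doubling_period)

# Ten years at a two-year doubling period gives a 32x increase.
print(projected_capacity(1.0, 10))  # → 32.0
```

Planning around this kind of compounding, rather than around hoped-for breakthroughs, is the "predictable progress" the article recommends exploiting.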
Under- and overestimating
Aside from the widely accepted principles above, there are a few other patterns that one can discern in the history of predictions about technology. Thus, although general technology forecasting is unreliable, some predictions have proven correct over an extended period of time. A fairly persistent pattern is the underestimation of the continuing increases in processing power, storage capacities, and communication bandwidth, and overestimation of the extent to which computers can be made to reason like people.
A striking example of this dichotomy is provided by J. C. R. Licklider’s book “Libraries of the Future,” published in 1965. Licklider has the best claim of anybody to be called the “grandfather of the Internet,” as he was the first one to point to computers as being primarily communication devices, not just computing ones, and he set up the program that led to the creation of the ARPANET. In his book, he made many predictions. Some, about development of computer networks, and about digital libraries becoming feasible around the year 2000, are among the finest examples of futurology. But those were based primarily on extrapolations from basic technology trends. Many of his forecasts were wrong, in particular those based on expectations that computers would acquire intelligence.
A similar pattern appears in other areas: Speech recognition has made great strides, primarily by exploiting more powerful technology to do massive pattern matching, rather than by the methods pursued in the 1960s of trying to get computers to understand human speech. Language translation followed the same pattern. And so did chess. The best computer chess programs can handily beat the best human players today, but not by imitating human thought processes. (That presents us with a mystery: Why are there no contests involving pairings of people and computers on each side?)
Human, not artificial, intelligence
With the Web, too, brute force has triumphed, although that brute force is directed by human intelligence, in the form of clever algorithms. (Clever algorithms were also needed for the advances in speech recognition, language translation, and chess.) The popularity of the Web obtained a substantial boost from the appearance of AltaVista, the first popular search engine. AltaVista’s breakthrough, later improved on by Google, was to demonstrate that with sufficient computing, storage, and communications resources, one could do effective, automated crawling and indexing. But AltaVista’s managers, for what seemed to be good business reasons at the time, made the misstep of switching their focus to making AltaVista a portal, and thus facilitated Google’s rise to dominance. Google succeeds largely through use of massive resources, with direction from clever methods, but not ones drawn from conventional AI.
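The "clever methods directing massive resources" that powered Google's rise are widely identified with link analysis, most famously PageRank. As a minimal sketch of the idea, the power-iteration implementation below ranks pages by the links pointing at them; the toy graph and damping value are illustrative assumptions, not a description of any production system.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank by power iteration.

    links maps each page to the pages it links to; for simplicity,
    every page is assumed to have at least one outgoing link.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform distribution
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            # Each page passes a damped share of its rank to its targets.
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

# Toy graph: two pages link to a hub, which links back to both.
toy = {"a": ["hub"], "b": ["hub"], "hub": ["a", "b"]}
ranks = pagerank(toy)
# The hub, with two inbound links, ends up ranked highest.
```

The algorithm itself is simple linear algebra; the "massive resources" come in when iterating it over billions of pages, which is exactly the brute-force-guided-by-human-intelligence pattern the article describes.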
The Web is evolving rapidly. And there are hopes for major breakthroughs based on computer understanding of the growing volume of digitized data. Yet, if we go by historical precedents, such hopes will be disappointed. Computing, storage, and communications are all progressing rapidly, even if somewhat less rapidly than they did a decade ago. Hence, it is most reasonable to expect that the incremental improvements they provide (together with improvements in standard data mining, visualization, databases, and related algorithms) will be the main contributors to the Web’s evolution.