Jeff Harrow is the author and editor of the Web-based multimedia "Harrow Technology Report" journal and Webcast, available at TheHarrowGroup.com. He also co-authored the book "The Disappearance of Telecommunications." For more than 17 years, beginning with "The Rapidly Changing Face of Computing," the Web's first and longest-running weekly multimedia technology journal, he has shared with people across the globe his fascination with technology and his sense of wonder at the innovations and trends of contemporary computing and the growing number of technologies that drive them. Harrow has been the senior technologist for the corporate strategy groups of both Compaq and Digital Equipment Corp. He invented and implemented the first iconic network management prototype for DECnet networks. He now works with businesses and industry groups to help them better understand the strategic implications of our contemporary and future computing environments.
This interview is in two parts; Part 1 ran Wednesday.
Q. Dot-coms have bombed. Now nanotechnology is touted as the basis for a new economy. Are we in for the bursting of yet another bubble?
A. Unrealistic expectations are rarely met over the long term. Many people felt that the dot-com era was unrealistic, yet the allure of magically rising stock prices fueled the eventual conflagration. The same could happen with nanotechnology, but perhaps we have learned to combine our excitement over "the next big thing" with reasonable and rational expectations and business practices. The "science" will come at its own pace -- how we finance it, and profit from it, could well benefit from the dot-bomb lessons of the past. Just as with science, there's no pot of gold at the end of the economic rainbow.
Q. Moore's Law and Metcalfe's Law delineate an exponential growth in memory, processing speed, storage and other computer capacities. Where is it all going? What is the end point? Why do we need so much computing power on our desktops? What drives what -- does technology drive the cycle-consuming applications, or vice versa?
A. There are always bottlenecks. Taking computers as an example, at any point in time we may have been stymied by not having enough processing power, or memory, or disk space, or bandwidth, or even ideas of how to consume all of the resources that happened to exist at a given moment. But because each of these (and many more) technologies advance along their individual curves, the mix of our overall technological capabilities keeps expanding, and this continues to open incredible new opportunities for those who are willing to color outside the lines.
For example, at a particular moment in time, a college student wrote a program and distributed it over the Internet, and changed the economics and business model for the entire music distribution industry (Napster). This could not have happened without the computing power, storage and bandwidth that happened to come together at that time.
Similarly, as these basic computing and communications capabilities have continued to grow in capacity, other brilliant minds used the new capabilities to create the DivX compression algorithm (which allows "good enough" movies to be stored and distributed online) and file-format-independent peer-to-peer networks (such as Kazaa), which are beginning to change the video industry in the same manner.
The point is that in a circular fashion, technology drives innovation, while innovation also enables and drives technology, but it's all sparked and fueled by the innovative minds of individuals. Technology remains open-ended. For example, as we have approached certain limits in how we build semiconductors, or in how we store magnetic information, we have ALWAYS found ways "through" or "around" them. And I see no indication that this will slow down.
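The "exponential curves" Harrow describes are easy to underestimate. As a rough illustration (my own sketch, not from the interview, and assuming a doubling period of about two years, which is in the commonly cited 18-to-24-month range for Moore's Law), a capacity that doubles on a fixed schedule grows about 32-fold in a single decade:

```python
def capacity_after(years, start=1.0, doubling_years=2.0):
    """Capacity after `years`, assuming it doubles every `doubling_years`."""
    return start * 2 ** (years / doubling_years)

# Ten years at a two-year doubling period is five doublings: a 32x gain.
print(capacity_after(10))  # 32.0
```

Because processing, memory, storage and bandwidth each ride their own such curve, the combined capability mix compounds even faster than any single component, which is why Napster-style inflection points seem to arrive suddenly.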
Q. The battle rages between commercial interests and champions of the ethos of free content and open-source software. How do you envisage the field 10 years from now?
A. The free content of the Internet, financed in part by the dot-com era of easy money, was probably necessary to bootstrap the early Internet into demonstrating its new potential and value to people and businesses. But while it's tempting to subscribe to slogans such as "information wants to be free," the longer-term reality is that if individuals and businesses are not compensated for the information that they present, there will eventually be little information available.
This is not to say that advertising or traditional "subscriptions," or even the still struggling system of "micropayments" for each tidbit, are the roads to success. Innovation will also play a dramatic role as numerous techniques are tried and refined. But overall, people are willing to pay for value, and the next decade will find a continuing series of experiments in how the information marketplace and its consumers come together.
Q. Adapting to rapid technological change is disorienting. Toffler called it "future shock." Can you compare people's reactions to new technologies today -- to their reactions, say, 20 years ago?
A. It's all a matter of rate of change. At the beginning of the industrial revolution, the parents on the farms could not understand the changes that their children brought home with them from the cities, where the pace of innovation far exceeded the generations-long rural change process. Twenty years ago, at the time of the birth of the PC, most people in industrialized nations accommodated dramatically more change each year than an early industrial-age farmer would have seen in his or her lifetime. Yet both probably felt about the same amount of "future shock," because it's relative. The "20 years ago" person had become accustomed to that year's results of the exponential growth of technology, and so was prepared for that then-current rate of change.
Similarly, today, schoolchildren happily take the most sophisticated of computing technologies in stride, while many of their parents still flounder at setting the clock on the VCR -- because the kids simply know no other rate of change. It's in the perception.
That said, given that so many technological changes are exponential in nature, it's increasingly difficult for people to be comfortable with the amount of change that will occur in their own lifetime. Today's schoolchildren will see more technological change in the next 20 years than I have seen in my lifetime to date; it will be fascinating to see how they (and I) cope.
Q. What's your take on e-books? Why didn't they take off? Is there a more general lesson here?
A. The e-books of the past few years have been an imperfect solution looking for a problem. There's certainly value in the concept of an e-book, a self-contained electronic "document" whose content can change at a whim, either from internal information or from the world at large. Travelers could carry an entire library with them and never run out of reading material. Textbooks could reside in the e-book and save the backs of backpack-toting students. Industrial manuals could always be on hand (in-hand!) and up-to-date. And more.
Indeed, for certain categories, such as for industrial manuals, the e-book has already proven valuable. But when it comes to the general case, consumers found that the restrictions of the first e-books outweighed their benefits. They were expensive. They were fragile. Their battery life was very limited. They were not as comfortable to hold or to read from as a traditional book. There were several incompatible standards and formats, meaning that content was available only from limited outlets, and only a fraction of the content that was available in traditional books was available in e-book form. Very restrictive.
The lesson is that (most) people won't usually buy technology for technology's sake. On the other hand, use a technology to significantly improve the right elements of a product or service, or its price, and stand back.
Q. What are the engines of innovation? What drives people to innovate, to invent, to think outside the box and to lead others to adopt their vision?
A. People are the engines of innovation. The desire to look over the horizon, to connect the dots in new ways, and to color outside the lines is what drives human progress in its myriad dimensions. People want to do things more easily, become more profitable, or simply do something new, and these are the seeds of innovation. Today, the building blocks that people innovate with can be far more complex than those in the past. You can create a more interesting innovation out of an integrated circuit that contains 42 million transistors today -- a Pentium 4 -- than you could out of a few discrete transistors 30 years ago.
Or today's building blocks can be far more basic (such as using atomic force microscopes to push individual atoms around into just the right structure). These differences in scale determine, in part, why today's innovations seem more dramatic.
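The 30-year scale comparison can be checked with back-of-the-envelope arithmetic. As an illustration (my numbers, not the interview's: Intel's 4004 of 1971 had roughly 2,300 transistors, against the roughly 42 million Harrow cites for the Pentium 4 of 2000), the gap works out to about 14 doublings in 29 years, or one doubling roughly every two years:

```python
import math

# Approximate transistor counts: Intel 4004 (1971) vs. Pentium 4 (2000).
early, late = 2_300, 42_000_000

doublings = math.log2(late / early)  # ~14.2 doublings of transistor count
years = 2000 - 1971                  # 29 years between the two chips
print(round(years / doublings, 1))   # ~2.0 years per doubling
```

That pace is consistent with the usual statement of Moore's Law, which is what makes today's building blocks so much richer than those of a generation ago.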
But at its heart, innovation is a human concept, and it takes good ideas and persuasion to convince people to adopt the resulting changes. Machines don't (yet) innovate. And they may never do so, unless they develop that spark of self-awareness that (so far) uniquely characterizes living things.
Even if we get to the point where we convince our computers to write their own programs, at this point, it does not seem that they will go beyond the goals that we set for them. They may be able to try superhuman numbers of combinations before arriving at just the right one to address a defined problem, but they won't go beyond the problem. Not the machines we know today, at any rate.
On the other hand, some people, such as National Medal of Technology recipient Ray Kurzweil, believe that the exponential increase in the capabilities of our machines -- which some estimate will reach the complexity of the human brain within a few decades -- may result in those machines becoming self-aware.