Internet Connectedness: Unequaled Incubator for Innovation or the Death of Privacy and Civility?

On February 3, 2015, the World Affairs Council of Philadelphia, Drexel University, and The Franklin Institute presented a symposium entitled, rather hyperbolically, 21st Century Technology: New Beginning, or Terrible End. The sessions addressed issues of particular interest to “millennials,” as well as the future of the Internet, genetics in medicine, energy and climate change, space exploration (with special guest speaker Col. Buzz Aldrin), and the transformation of healthcare by “big data.” Council President Craig Snyder challenged panelists and audience alike to think broadly not only about the promises of technology, which, he explained, are exemplified by his childhood experiences of the Apollo Moon landings and Star Trek, but also about the dystopian visions of the Terminator movies and other works of science fiction that explore the Faustian defeat of humankind at the hands (or robotic claws) of its own inventions. The following essay is my response to this challenge, as well as to the questions Mr. Snyder posed about Internet technology and its future.

*           *           *           *           *

As a student, scholar, teacher and practitioner of innovation and of the science, engineering and technologies that it produces, and, like Craig Snyder, as a child of the first Star Trek generation, I regard that television program and its progeny as far more than a catalog of communicators, warp drives, tricorders, transporters, phasers and talking computers – the tools and toys of the 23rd Century. For me, and for Gene Roddenberry, the series’ creator, Star Trek represents a view of how humans address our most basic social and ethical dilemmas, even as we continue to expand the horizons of our understanding of the universe, and translate that knowledge into new, more powerful technologies.

Star Trek, for all its futuristic toys, also dealt directly with racism, sexism, militarism, the Cold War, ecological destruction, population control, biotechnology, and many other issues of enduring social importance. Like many writers of science fiction, Star Trek’s creators recognized that all basic scientific knowledge, and most of the technology derived from it, is morally neutral – neither intrinsically good nor inherently evil. Rather, it is the way in which such technology is used that imparts its moral value, and it is thus up to society to implement ethical frameworks for the use of technology.

Sadly, social and legal institutions are ill-equipped to deal with the rapid pace of technological change in our world. Like religion and countless other social institutions, the law is conservative, anchored in traditions of the past, and deeply rooted in precedent. The principle of stare decisis requires that jurists look to the past, so that disputes are resolved in consistent and predictable ways. Science and technology, however, are forward looking, spawning new ways of thinking, of interacting, and of creating value, and accelerating this process by the doubling and redoubling mathematics of exponential growth. This fundamental difference of values and approach means that our institutions and laws will never keep pace with our technology (nor have they ever), because any attempt to regulate technological progress that excludes scientists and technologists, and that refuses to acknowledge the validity of the very methods of science, is doomed to failure in both the long and the short run.
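To make that doubling arithmetic concrete, here is a minimal sketch in Python; the 18-month doubling period is my own illustrative assumption, in the spirit of Moore’s law, not a figure from the symposium:

    # A minimal sketch of exponential doubling. The 18-month doubling
    # period is an illustrative assumption in the spirit of Moore's law.
    DOUBLING_PERIOD_YEARS = 1.5

    def capability(years: float) -> float:
        """Relative capability after `years`, starting from 1.0."""
        return 2 ** (years / DOUBLING_PERIOD_YEARS)

    for years in (3, 15, 30):
        print(f"After {years:2d} years: {capability(years):,.0f}x")
    # After  3 years: 4x
    # After 15 years: 1,024x
    # After 30 years: 1,048,576x

A precedent-bound institution that improves roughly linearly simply cannot track a curve like that.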

The same impulses that gave us Napster and mass piracy of music online have also given rise to a worldwide resurgence in the popularity of classical music, and the renewed viability of great orchestras and opera companies. The same Big Data tools that permit the intrusive surveillance of the NSA, Google, and Facebook also suggest movies we may want to view, books that we should read, and public health initiatives that may stop pandemics before they spread. The same services that spread distracting cat videos bring us thousands of TED Talks, tens of thousands of free university classes, and the Khan Academy.

There has always been a sense of unease about the human costs of innovation. Writing in the early 20th Century, Martin Heidegger complained that because of a new innovation, people in European cities and towns were no longer engaging in valuable face-to-face social interaction. The advance in question? Indoor plumbing, because it meant that people no longer went to a public well to fetch water and discuss the news of the day. One might also imagine a philosopher in Imperial Rome expressing the same misgivings about the new public fountains, fed by aqueduct from a distant spring, because people no longer went to the river to wash clothes, fetch water, and discuss the matters of the day. And so it goes…each new technology giving rise to new advantages, while changing older traditions and obviating older methods. Online dating replaced singles bars, which replaced church socials, which replaced arranged marriages, and each generation bemoans the loss of its hallowed traditions (but, as FPRI President Alan Luxenberg has said, “you can’t sit shiva for the death of each tradition”).

Karl Fisch, a teacher in Colorado, collected some reactions of education officials to attempts to bring new technologies into the classroom. School board members and administrators railed against slates, paper, pencils, straight pens, fountain pens, ball-point pens, mimeographs, filmstrips, movies, calculators, computers, Internet connections, and iPads. Now it’s Wikipedia and social media. Each time, these educators worried that some valuable skill (like preparing bark on which to write, or sharpening goose quills to make pens) would be forever lost, and along with it, some important social relationship devalued or altered, to the detriment of our younger generation, which would end up lazy, stupid, and ill-prepared to lead our nation. Needless to say, their worst fears of technologically induced civilizational decay were never realized.

Mr. Snyder’s first discussion question concerned the nature of “connectedness” and how it is changing interpersonal communications and relationships. He asked, “But can’t it be argued that the Internet is a corrosive force on human intellect, distracting us with endless volumes of cute cat videos, pornography, and status updates about where your friends are having breakfast?”

In 1989, while I was starting my first company, I asked the question: what if all of the world’s knowledge could be placed 18” from every child’s nose, accessible just by asking questions? That question led to Homework Helper. Before there was a Google, before there was a “dot com”, before Apple’s Siri, my colleagues and I sought to democratize access to knowledge, confident that this would lead to great good. We saw students asking computers English language questions in those early days, and getting answers drawn from over 600 authoritative publications. Our systems didn’t just help with homework – they transformed it. We tend to forget that when we ask a question of a search engine, the answer is something that another person created and published, and that this is one of the most human interactions we can have, even if we never meet the person who answered the question.

“But,” I hear constantly, “the Internet is filled with false and disgusting information, time-wasting garbage, and outright hateful lies.” Absolutely. Just like every bookshop and library. Just as the response to disgusting views and vitriolic invective should not be restrictions on speech but more freedom of speech, so, too, the response to cat videos, pornography, and trivia should be TED Talks, webcasts of academic conferences and lectures, and even more open and free university courses online.

We now take for granted that knowledge about every subject is online somewhere. We search for, and find, answers about batting averages as easily as we learn how to repair a leaky faucet or build a helicopter. Soon, a farmer in a village in the Central African Republic will use her mobile phone to learn new ways to support her family, and will take out a micro-loan to start a new venture or to send her daughter to school.

It may be that we cannot enjoy the benefits that networking brings without the dark side, but we should not forget that many of the now-accepted innovations of the Internet, such as streaming video and online payment systems, originated in the pornography industry. As individuals and families, we may differ in our opinions about the moral worth of the content, but when, as happened a month ago, we can convene a summit for high school students from Philadelphia and Taiwan using live video over the Internet, you will have a difficult time convincing me that the harms outweigh the benefits of global connectedness.

The second question concerned the future of privacy: “What do you think are the most important privacy issues involving the use of technology and do you think it is possible that privacy, as it has traditionally been understood, will be maintained as technology moves forward?”

Privacy, as “traditionally understood,” was very simple: if you want to keep something private, it should not pass your own lips. From 1890 until about 1990, we in the United States lived in an anomalous privacy bubble, created more or less as a backlash against new information technologies by a few lawyers and judges. From Louis Brandeis’ famous Harvard Law Review article, to Dean Prosser’s Restatement of Torts, to Justice William O. Douglas’ Griswold decision, to our “reasonable expectation of privacy, objectively defined,” we departed from the norms of human history and sought to live both privately and anonymously in our urban environments.

Advancing technology, however, began to strike back almost as soon as a right to privacy was pronounced. Wiretaps, pen registers, electronic bugs – the tools of both spies and law enforcement officers – found their way into other, less savory employments. Our “reasonable expectations” proved a slippery slope, and the limits of privacy began to contract from the moment we first asserted them.

When it comes to what our sense of privacy on the Internet should be, I think it was best characterized by Scott McNealy, co-founder of Sun Microsystems, in a 1999 response to a question from a reporter: “You have zero privacy anyway. Get over it.” If we just knew and accepted that advice, I suspect that many of us, at least those born before 1990, would behave very differently online. Instead, we behave in some ways as if we do not believe that our neighbors can see through the panes of glass in the sides of our houses and cars.

There is, however, one hope in this panopticon world: encryption, universally and easily available. The cypherpunk philosophy may be the only refuge of private and free expression and association in a networked world. That, too, has its dark side, since the same encryption that protects my political activity may also protect terrorists plotting and criminals conspiring. Apple’s decision to provide strong encryption in the iPhone and iPad has drawn sharp criticism from the head of the FBI, and, from a member of Congress, predictions of the death of a child. There is a balance to the value of privacy, but one that is not universally understood. Even the recently announced restrictions on the storage of electronic surveillance information by the NSA do not strike a balance that addresses the chilling effect of universal surveillance, and so this is an issue that will stay at the center of public debate for many years to come.
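To see just how easily available strong encryption has become, consider this minimal sketch in Python; the choice of the third-party “cryptography” library and its Fernet recipe is mine, for illustration, and any real deployment would demand far more care with key handling:

    # A minimal sketch of readily available symmetric encryption, using
    # the third-party Python "cryptography" library (pip install cryptography).
    from cryptography.fernet import Fernet

    # Generate a random key; anyone who holds it can read the message.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    token = cipher.encrypt(b"Meet me at the public well at noon.")
    print(token)                  # ciphertext: opaque without the key
    print(cipher.decrypt(token))  # b'Meet me at the public well at noon.'

A dozen lines suffice to put a private channel in anyone’s hands, which is exactly why the debate over encryption’s dark side will not go away.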

Mr. Snyder’s third question concerned “AI” – artificial intelligence. He asked, “Do you see a future with intelligent machines as more like the Jetsons or The Terminator and The Matrix?”

Artificial intelligence is hard. That may be because actual intelligence is a scarce commodity, and our ability to define intelligence, and to understand how it works, is in its infancy. From there, it is a long way to truly intelligent systems. Researchers, however, are actively pursuing the goal, and some optimistic projections place the first glimmers of success within our lifetimes. Today, we have lots of simpleton machines that do amazing things. It has been over a decade since the best chess player in the world was a human. Watson, IBM’s newest AI project, won Jeopardy against some very smart people. And yet, if you ask Watson to catch a Frisbee, it can’t. Ask the chess-champion program Deep Blue to play poker, and you will clean it out on the first hand. These machines are idiot savants.

Isaac Asimov, though, had the right idea when he developed the three (later four) laws of robotics. Those laws amount to a moral code for machines (robots here can be taken to be intelligent machines, whether they look like Robbie the Robot or a Roomba vacuum):

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The first three laws appear in Asimov’s I, Robot (1950); the Zeroth Law was added later, in Robots and Empire (1985). We humans can’t agree on even the most basic definitions of what is ethical, and we have been at it for over ten thousand years, so we had better be quite explicit about what is permissible for our machines. We had better not leave this one to chance. A deliberately oversimplified sketch of what “explicit” means appears below.
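Here is that sketch, in Python; every name and predicate in it is my own invention, and each one hides an unsolved problem, which is rather the point:

    # A deliberately oversimplified, hypothetical encoding of Asimov's laws
    # as an explicit, priority-ordered rule check. Each boolean below hides
    # an unsolved problem: deciding what counts as "harm" is the hard part.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_humanity: bool   # Zeroth Law concern
        harms_a_human: bool    # First Law concern
        disobeys_order: bool   # Second Law concern
        endangers_robot: bool  # Third Law concern

    def permissible(action: Action) -> tuple[bool, str]:
        """Apply the laws in strict priority order; the first violation wins."""
        if action.harms_humanity:
            return False, "violates the Zeroth Law"
        if action.harms_a_human:
            return False, "violates the First Law"
        if action.disobeys_order:
            return False, "violates the Second Law"
        if action.endangers_robot:
            return False, "violates the Third Law"
        return True, "permitted"

    print(permissible(Action(False, True, False, False)))
    # (False, 'violates the First Law')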

As for whether computers, or networks of computers will ever become super-intelligent and conscious…most bets are that they will, but that their evolution can be guided, as long as those working in the field are capable of long-term thought about such issues. Business and government organizations have a terrible track record when it comes to accepting prior restraints on their activities, so we will have to be especially careful. That is the message being supported by Elon Musk, Stephen Hawking, Bill Gates, and thousands of others, including this author (https://futureoflife.org/misc/open_letter). The Long Now Foundation is hard at work on much of that territory, as well (https://www.longnow.org).

In 1968, the Apollo 8 astronauts snapped a photograph, “Earthrise,” that, in an instant, changed our perspective on our place in the Universe (https://goo.gl/8e4tMq). Suddenly, we were all in it together. When Armstrong and Aldrin left the Moon, they left behind a plaque that read, “We came in peace for all mankind.”

So, is 21st Century technology a new beginning, or a terrible end? In a paper published in 1979, Hans Jonas, a professor of philosophy at the New School, concluded, “One part of the ethics of technology is precisely to guard the space in which any ethics can operate. For the rest, it must grapple with the cross-currents of value in the complexity of life.”

Technology is a fundamental part of what it means to be human. We are a lazy species. We have always used our minds to invent technologies that save us work and make life a bit more enjoyable. For almost all of history and prehistory, the pace of technical change could be measured in the tens or hundreds of human generations – too slow to be noticed, remembered, or much considered. It is only in the past five hundred years that our accumulation of knowledge has allowed those who stand on the shoulders of giants to noticeably change their world, and ours. Only in the past two hundred years have our inventions and their consequences come to be potential threats to our survival and that of our planetary ecosystems. We are a young species, just barely out of our hunter-gatherer tribes, having to grapple with space flight, atomic weapons and climate instability — we’d better hope that we are quick learners.