The Virtue of the Open
Reading Lawrence Lessig's Code and Other Laws of Cyberspace and Future of Ideas
Charles Muller, Toyo Gakuen University
December 16, 2002
Thirty spokes join together in the hub.
It is because of what is not there that the cart is useful.
Clay is formed into a vessel.
It is because of its openness that the vessel is useful.
Cut doors and windows to make a room.
It is because of its openness that the room is useful.
Therefore, what is present is used for profit.
But it is in absence that there is usefulness.
Daodejing, Chapter 11
There have been few junctures in our history during which we have experienced an advance in communications technology as rapid as that which has occurred during the past decade. This flood of changes has exerted a pronounced influence on every realm of intellectual activity: politics, economy, science, the arts, education, technology, and so forth. We have instantaneously become both the publishers and recipients of endless megabytes of new kinds of information, delivered with a speed, variety, and volume that we could not have dreamed of fifteen years ago.
This is true in a special sense for those of us for whom research is our primary occupation—who work in the academic institutions where the Internet was originally conceived. Even the least computer-savvy among us has little recourse but to go onto the Web on a regular basis, even if just for the purpose of doing e-mail, filling out applications, placing or responding to job ads, etc. Those at the other end of the spectrum—the digerati who are taking full advantage of computerized resources—may already be spending upwards of 90% of their research time in cyberspace. The Web is here to stay, and its role in the future of publication and data sharing is destined to become close to exclusive.
Despite the seemingly limitless new opportunities for information sharing presented in cyberspace, many of those directly involved with the creation of Web resources at the technical level have been harboring a growing concern regarding movements on the part of governments and large corporations to impose constraints that have the potential to radically transform the present environment—an environment that allows for the free exchange of ideas and innovation. Such constraints, if fully implemented, could well serve to radically damage the Web's unparalleled potential as a resource for the creation, sharing, and preservation of cultural property.
One person who has become especially concerned about these trends, and who has gained considerable renown for his lucid articulation of the potential harm such trends might bring to our culture, is Lawrence Lessig, a scholar from the Stanford Law School.1 Lessig has written two books, both of which have become the subject of extensive discussion in the fields of IT, cyberlaw, and economics. In his first book, entitled Code and Other Laws of Cyberspace (Basic Books, 1999), Lessig argues against the pervasively held misconception that the WWW is something that is by its own "nature" free and open. He explains, to the contrary, how the Web actually has no intrinsic nature—that it can, through its underlying code, be programmed for whatever purpose is desirable to those who have the power to control its destiny. The code that runs the Internet can easily be rewritten such that it can serve as an instrument of control, to an extent not previously seen in our history. This control can be wielded by our governments for political ends, or by corporations who desire to exact control over the marketplace.
In his second book, The Future of Ideas: The Fate of the Commons in a Connected World (Random House, 2001), Lessig builds on the arguments developed in Code, further addressing the problem of how the initial free and open architecture of the Web—precisely that which has allowed for the flood of creativity and innovation that we have witnessed over the past decade—represents a dire threat to those major corporations of the Old Economy that have made their billions based on their control of the flow of information and cultural assets. He outlines the rapid and far-reaching moves these corporations are taking to ensure the continuity and further extension of their market control—a control that can only be assumed at the expense of the freedom and openness of the Web for both its publishers and consumers. Since what is mainly exchanged on the Web is information (as opposed to material objects), control over the Net entails control over the flow of ideas.
Central to the discourse in Future of Ideas is the notion of the commons, "...a resource to which anyone within the relevant community has a right without obtaining the permission of anyone else." (Future of Ideas, pp. 24-25) Commons are often free of charge, but not necessarily so; their defining character is their capacity to be shared by people without discrimination. For example, we may pay fees for the usage of parks, highways, and telephones, but access to these areas held in common is not subject to discriminatory treatment by a single controlling authority—at least not in a free and reasonably well-governed society.
We can readily understand the seminal role in our society of "real space" commons such as highways, parks, and other kinds of public spaces. But the chief concern of Future of Ideas is the intellectual commons—the open pool of ideas in art, literature, music, commerce, technology, and so on, which Lessig takes to be the lifeblood of the creativity of any thriving society. It is the shared space that allows us to freely draw upon, assimilate, copy, translate, and build upon the ideas of Shakespeare, Brahms, Einstein, or any other creative master of the past whose works are not closed off by the restrictions of copyrights, licenses, or patents. In a culture where these ideas are not readily accessible, creativity (in art, music, literature, etc.) and innovation (in science and technology) are stifled.
The first section of Future of Ideas, entitled "dot.commons," discusses, using an array of examples, the role that commons have played in healthy societies as resources for creativity: artistic and literary, as well as scientific and technological. Those of us who have studied little about the U.S. Constitution since our youth cannot help but come away from this discussion with a healthy new respect for the remarkable foresight demonstrated by the authors of the Constitution—most notably Thomas Jefferson—who so clearly recognized the vital role played in a culture by the intellectual (or innovation) commons, and the importance of the role of the government in maintaining a well balanced mixture of freedom and control.2
The Three Layers of the Web
To provide a framework for understanding the Internet's design, Lessig introduces a three-layered model for describing communication systems3 that consists of: (1) The physical (or "hardware") layer, which is constituted by the wires and connectors across which communication travels. (2) The middle layer—the "logical" or "code" layer—the computer programming that makes the hardware run. (3) The content layer, situated on top of the first two, which is the information (images, music, texts, movies, etc.) that gets transmitted across the wires.
The original architects of both the Internet (the communications network that is the environment for the Web) and the World Wide Web (the aspect of the Internet that functions on the basic protocols of HTTP and HTML, allowing the publication and transfer of data with graphical representation) consciously designed them to allow for a maximum degree of openness and flexibility. While a fair degree of control is instituted at the physical and content levels (the wires are owned by someone; so is much of the content), the code level has been kept—at least until recently—almost completely open. This design is known as e2e or "end-to-end," which means that the software applications that do things with the content reside almost exclusively in machines at the ends of the network. The code that passes information along the way takes very few actions upon it. The network hub machines and the code that handle the information are "dumb"—they don't know whether they are passing images, text, or music. They don't know if the material is personal correspondence, research data, pornography, or anti-government political discourse.4
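For readers who find the end-to-end idea abstract, it can be sketched in a few lines of code. The following Python fragment is a deliberately simplified toy, not a description of any real router or protocol; the function names are hypothetical, chosen only for illustration. The point is that the relay in the middle forwards bytes unchanged, with no logic that inspects or discriminates by content, while all the "intelligence" resides in the endpoint application:

```python
# A toy illustration of the end-to-end (e2e) principle: the network
# core forwards data without examining it; only the endpoints decide
# what the bytes mean. All names here are hypothetical.

def relay(payload: bytes) -> bytes:
    """A 'dumb' network hub: passes every payload through unchanged.
    It has no idea whether the bytes are text, music, or images."""
    return payload

def text_endpoint(payload: bytes) -> str:
    """An 'intelligent' endpoint application: it, not the network,
    decides to interpret the bytes as text."""
    return payload.decode("utf-8")

# The same relay carries any kind of content without discrimination.
message = relay("research data".encode("utf-8"))
song = relay(b"\x49\x44\x33")  # first bytes of an MP3 file's ID3 tag

print(text_endpoint(message))  # prints: research data
```

A network built this way cannot favor one kind of content over another, because nothing in the middle knows what the content is—which is exactly why control, when it comes, must be written into the code layer.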
This decision to maintain openness at the logical level has become the basis for a groundswell of code innovation over the past decade. Thousands of new applications have been created to perform myriad new functions that were, only a decade ago, inconceivable. These innovations in code have in turn made possible the appearance of an incredible amount of new content: new, in the sense of being newly produced, as well as in the sense of being new in character. This open architecture stands in contrast to that of related media such as cable TV, which is fully controlled at all three levels, or the telephone system, which is controlled at the code and physical layers (see table in Future of Ideas, p. 25).
While many writers, artists, scholars, entrepreneurs—and perhaps even some intelligent government officials—may see the Internet as an unparalleled opportunity to advance knowledge, enrich culture, spur the economy, and facilitate government, the Net is also seen, especially by businesses that have built themselves on the principles of the Old Economy, as a disruptive technology that has given birth to an environment that can with remarkable speed and unpredictability transform the entire makeup of a given market. Just look, for example, at what Amazon.com did to the business of book selling; or what Napster almost did to music distribution.
During the Web's earliest years, it was not taken seriously by most established institutions and corporations, and so during this time the vanguard of the New—innovators of every stripe and color—were able to do their work largely unimpeded by the power of the Old. The open competition between similar applications engendered rapid and varied innovation. However, those of the Old who wanted to continue to do business as usual soon realized that they could not afford to stand by idly, and so they have begun to move forcefully to limit the free flow of content, and to suppress software innovation. Both kinds of control are enforced at the middle layer of the Net—the code level.
At the moment not much discriminatory control has been implemented at the physical layer. It would be quite possible to do so (for example, the way that cable TV companies regulate access to their content), but with the variety of physical connections still available for the Net, the physical layer cannot be locked down overnight. However, at the logical layer, it is quite easy to write programs that are themselves locked down, and which serve to constrain the distribution of content. This makes it possible to place near-perfect controls on what can be done on the Net.
"But," we might ask, "isn't it quite possible for this code to be broken?" After all, proprietary software can be hacked, reverse-engineered, and in the very worst case replicated in its function by a different programming code. Here, a pivotal external factor comes into play. For when Lessig warns of an Internet controlled by code, he has two kinds of "code" in mind: the first is the "West Coast" code written by the computer programmers—the software. The second is the "East Coast" code written by the lawmakers—the legal code (Code and Other Laws of Cyberspace, p. 53). When used in concert, these two can exercise a completeness of control the likes of which we have never witnessed in the history of our culture.
Of course, the "West Coast" hackers can open up almost any program that they have access to. And any first-rate programmer can write code that emulates the function of a preexisting program. But the viability of this approach diminishes significantly when the Old has the money and power to exercise its dominion in the realm of "East Coast" code—the legal code, in the form of patents, licenses, copyrights, and plain old legal (and not-so-legal) bully tactics. License and patent control are especially effective when a company gains a monopolistic level of influence over a defined segment of any market, thus making the writing of competing forms of code economically, as well as legally, unviable. Simply put, once controls have been put into place at the logical level, it is relatively easy to take care of the remainder by having a ready army of highly paid lawyers on hand, and to press the buzzword of cyberlaw bullying: abuse of "IP" (Intellectual Property), a threat that makes network administrators at ISPs and university campuses shut down sites faster than intellectuals labeled as communists ran for cover during the era of McCarthyism.
There are numerous examples of corporate thuggery related by Lessig in Future of Ideas, but few as telling as the cases of MP3.com and Napster, two programs developed for the free sharing of music among music lovers. In both cases, these services were barely up and running before they were attacked by the high-powered legal teams of the music recording industry. And while it is true that much of the sharing that occurred on Napster involved music that the user had not purchased, in the case of MP3.com, the system operated on the precondition of one's already owning the CD. Nevertheless, in both cases, the industry was able to capitalize on the ignorance and weaknesses of the courts to quickly and decisively shut down sharing before actual evidence of business damages had even been investigated, much less proven. (pp. 192-196)
Even more frightening is the case of CPHack, a small program that did nothing more than perform a critical analysis of Mattel's Cyber Patrol (a porn-guard software for minors) to see if it really provided the level of protection that its owners claimed. Not only did the U.S. company Mattel employ obedient judges to shut down the Canadian CPHack at a moment's notice: the court's punishment was extended to those who merely linked to the program's site. When the legal weaknesses of the case began to show themselves, Mattel simply bought out the rights to the software. (pp. 184-186) Lessig cites numerous other cases regarding textual, digital, and musical materials where laws regarding intellectual property rights were invoked quickly, often with astoundingly stiff penalties attached.
Lessig does not dispute the basic principle of copyright or patent; nor does he deny the basic need for protection of IP. He clearly acknowledges and explains the necessity of these legal mechanisms to ensure that those who choose to create will receive enough remuneration to encourage them to pursue their work. His argument here, though, based on a historical overview of the legal trajectory of the constitutional notion of intellectual property, is that prior to its application on the Internet, copyright law has always had a good deal of flexibility built into it—what is known as "fair use."
In "real space," as distinguished from cyberspace, when one sings a song, copies a CD, places a poster in one's window with a copyrighted icon, or copies a certain portion of a book, image, etc., the courts have always allowed for a certain amount of leeway, acknowledging the wisdom of the framers of the Constitution in recognizing the need for a free flow of ideas in an intellectually healthy society. And regardless of what laws are actually on the books, unless the copyright holder can demonstrate clear harm caused by the copyist, copyright charges are seldom prosecuted... that is, in real space. Also, up until relatively recently, copyrights did not extend far beyond the author's death.
But in cyberspace, without rational explanation, and most importantly, without the requisite "proof of harm" demanded by the Constitution, the injunctions against copyright violations have gone berserk. Lessig shows case after case where even the slightest hint of the filing of a copyright suit on the part of a large corporation has scared network managers and Internet providers into removing content—even when no actual copyright violation exists. To make things worse, as a result of new laws that are largely the product of lobbying by Hollywood, terms of copyright are being extended up to a hundred years after the creator's death. Thus,
Each time, it is said...that Mickey Mouse is about to fall into the public domain, the term of copyright for Mickey Mouse is extended...You might think there is something a bit unfair about a regime where Disney can make millions off stories that have fallen into the public domain, but no one else can make money off Disney's work—apparently forever. (p. 107)
Compounding the anxiety here is the fact that not only is the free flow of intellectual and artistic content being shut down by excessive prosecution of copyright restrictions, but a parallel movement is well under way at the logical layer of the Net. In the case of content, the struggle of the Old against the New usually involves corporations that held predominant positions in the entertainment and publication industries. In struggles at the code layer, it is quite often the case that the "Old" forces are relatively new. That is, the companies that have staked out territory at the logical layer of the Internet, and who would like to seal it off with their own code (as well as their own content), are more recently arrived players, such as Microsoft and AOL. On the other hand, the New in this case are the newest of the New—startup corporations and independent programmers everywhere who are working to develop new ways of doing things—especially with a vision toward maintaining the open character of the Web.
Freedom Fighters: The Open Coders
Although the issue of control over cyberspace has only recently begun to appear as a matter of widespread concern, there are numerous groups and individuals in the IT community who have been deeply concerned about, and actively engaged in resisting, these trends since at least the mid-eighties. Most important here are the members of the various "open-code" movements: writers of software who recognize that the precise reason for the incredible growth of computing has been the relative openness of its code in critical areas.
Much of the impetus of the open code movement can be attributed to the energy and insights of Richard Stallman of the GNU Project, who, after founding the Free Software Foundation [FSF] in 1985, has written, together with his colleagues, mountains of open code under the General Public License [GPL] for software, a license that prevents derivative software from being closed off from the free software commons.5 Apart from Stallman's GNU Project, there presently exists a wide range of projects grouped under the rubric of "open source." Moreover, the basic functional layer of the Web itself runs almost completely on code and standards that are open, including HTML and HTTP, and some of its most seminal applications, such as Apache, Perl, and Sendmail, are open-code software. The open code development that many believe holds the greatest potential for maintaining some degree of innovation in cyberspace is the Linux operating system.6 At first utilized mostly by computer professionals on servers, it is now a desktop OS that is supported by thousands of applications, and has reached a point of refinement where it can be installed and used by the average computer user.
As with copyright on content, Lessig does not take the position that no software should be patented or licensed. Just as with the authors of novels, a certain amount of legal protection is necessary to ensure that companies and individuals can make a living at writing code. But he fears that the trends being established whereby large corporations engaging in monopolistic practices are able to shut down competing code writers, if left unchecked, will have the effect of stifling virtually all innovation in the code realm. It is a well-established fact that innovation inevitably comes to a halt under the dominion of monopolies.7
Lessig thus advocates caution on the part of judges, legislators, and patent officials against moving too quickly to lock down everything produced outside the domain of a few favored corporations. There is, he admonishes, always an opportunity to enact such laws afterward, once harm has been clearly demonstrated. But to shut down all innovation without having duly checked first is dangerous, since laws are far more difficult and expensive to undo once they have been established.
The future of ideas as foretold by Lessig in the concluding chapters of his book is a decidedly dark one, as he offers a series of poignant examples of one-sided success stories on the part of the powers of the Old to shut down the innovation and creativity of the New, with as yet little recognition on the part of legislators and judges of the degree of damage being done to the commons. As seen in the especially frightening Mattel case, "the law has become a tool for effectively disabling the ability of others to criticize a corporation" (187). Therefore, "we should be most concerned when existing interests use the legal system to protect themselves against innovation that might threaten them." (217) Hence it is of vital importance that our judges and legislators be properly educated about the special characteristics of cyberspace and the fundamental necessity of maintaining an intellectual commons.
We, as non-programming end users, need not resign ourselves to the conclusion that influence in these matters is beyond our reach. There are concrete actions we may take as individuals to resist these changes, both at the level of code and at the level of content.
First, even if we are not programmers, we can make an impact on trends at the code layer by supporting the growth and development of open-code technologies simply by using them. And it is here that one of Lessig's most interesting observations comes into play: that there are many cases "[w]here the resource has a value because of its openness—where its value increases just because more people use it." (87-88) Many open-code software packages are cost-free, or of minimal price, and those that do cost money are inevitably only a small fraction of the cost of their proprietary counterparts. Even on the Windows platform, there are numerous choices of mainstream software that are open-code based, including the superb Mozilla browser (which also comes equipped with a first-rate e-mail program); office suites such as StarOffice/OpenOffice; the word processor Abiword; and a whole range of text editors. Of course, the most effective way to resist the movement toward simultaneous proprietary control of the desktop and Internet is to begin using some form of Linux—an option that is growing rapidly more realistic for the average computer user.8
And who is in a better position than we humanities scholars to aid in the protection and further enhancement of the content layer? We are in a privileged position to support changes in policy at our universities, to call for the recognition of the importance of creation and preservation of the intellectual commons. Beyond this, as highly trained writers and editors, we are in a distinctly advantageous position to participate in the building of a "creative commons"9 by making efforts to publish quality materials on the Web where others may have free access to them. Of course, we still need to publish copyrighted articles and books in the traditional manner for a variety of reasons, but our concerns for promotion and tenure should not hold us too rigidly to the demands of the old system, when there is so much to be gained by investing in the new one.
Or, we can claim powerlessness and sit back, clinging to the hope that the system will straighten itself out on its own. If Lessig is right, we do so at our own peril.
1. Lessig is a scholar of constitutional law who is a leading figure in the area of copyright theory and cyberlaw. In 2001 he was listed among the "visionaries" in Business Week's "e.biz25," the magazine's roundup of the twenty-five most influential people in electronic business. A tireless advocate for freedom in cyberspace, and against the extension of copyright, Lessig is widely known for his 1997/1998 involvement in the antitrust case United States v. Microsoft, and his more recent advocacy in the Supreme Court copyright-term case, Eldred v. Ashcroft. He also litigated in defense of Napster and MP3.com.
2. Economists distinguish two types of commons: rivalrous and nonrivalrous. The former can be depleted or dominated (e.g., fishing reserves held by a seacoast community) and thus need to be regulated by some kind of communal or governmental system. The latter are exemplified by ideas. Lessig gives the example of Einstein's theory of relativity, which, no matter how much it is used, remains fully available to all, so that no controls should be necessary. Here, we can already see a hint of how the error of enforcing rules designed to protect rivalrous resources on nonrivalrous resources could result in problems. See pages 20-21 in Future of Ideas.
3. Developed by NYU law professor Yochai Benkler.
4. Tim Berners-Lee, the man most directly responsible for implementing this architecture, chose this direction "humbly," as Lessig puts it, mainly because there was no way of predicting, at the outset, what kinds of applications the Web would eventually be used for, and thus he and his collaborators selected the option of not trying to predict, or control, its trajectory. See Future of Ideas, pp. 41-44.
5. The GPL is elaborated in full at http://www.gnu.org/copyleft/gpl.html. Stallman summarizes it like this:
A program is free software, for you, a particular user, if: You have the freedom to run the program, for any purpose. You have the freedom to modify the program to suit your needs (To make this freedom effective in practice, you must have access to the source code, since making changes in a program without the source code is exceedingly difficult.). You have the freedom to redistribute copies, either gratis or for a fee. You have the freedom to distribute modified versions of the program, so that the community can benefit from your improvements. (p. 59)
6. The kernel of Linux was first developed in 1991 by a young Finnish computer science student named Linus Torvalds. After Torvalds made the code for Linux available on the Internet under the GPL, programmers around the world joined in to collaborate by the hundreds, producing what has become the fastest-growing operating system in the world.
7. Lessig gives a few telling examples of earlier monopolistic situations where innovation came to a standstill, most prominently that of the first telephone giant, AT&T. A more recent example can be seen in the 11/28/02 issue of Business Week, where it was noted (p. 13) that during the past few years, software innovation has come to a virtual standstill in all the areas where Microsoft has taken de facto exclusive control, and conversely, that almost all new IT developments during recent years have come in the areas where Microsoft does not yet control the market.
8. While just a few years ago Linux was considered inaccessible to all but computer programmers and skilled hobbyists, the most recent popular Linux "distributions" such as Mandrake 9.0 and Red Hat 8.0 can be installed by persons possessing average computing skills, easily and automatically arranging one's desktop in a "dual-boot" arrangement. This allows ready access to one's prior Windows or Mac system, while at the same time allowing for experimentation with Linux. Once inside Linux, one has recourse to all of the open code applications mentioned above, along with a mind-boggling range of applications that have been created by software companies, research groups, and individual programmers. One of the most notable in this regard is the full-featured Linux mail program Evolution, which looks and acts like Outlook, but with many features not included in the Microsoft original. There are photo editors, scanning software, and CD burners, along with the famous text editor Emacs, which can be used as an HTML/XML editor, a programmer's tool, and for many other purposes.
9. The name Creative Commons has recently been adopted by a group of copyright specialists, IT professionals, and concerned citizens as the name of an organization dedicated to the building and preservation of an intellectual commons on the Internet. They are developing a meta-data "card catalog" registry of open resources, and also providing a variety of new licensing options for those who would like to place their materials on the web with various sorts of limited licensing protections. See http://www.creativecommons.org.