Engineering a Principle: ‘End-to-End’ in the Design of the Internet

Tarleton Gillespie
Department of Communication, Cornell University

forthcoming, Social Studies of Science, v36 n3, June 2006

Abstract

The term ‘end-to-end’ has become a familiar characterization of the architecture of the Internet, not only in engineering discourse, but in contexts as varied as political manifestos, commercial promotions, and legal arguments. Its ubiquity and opacity cloak the complexity of the technology it describes, and stand in for a richer controversy about the details of network design.

This essay considers the appearance, in the 1970s, of the term ‘end-to-end’ in computer science discourse, and how the term became a point of contention within disputes about how to build a packet-switched network. I argue that the resolution of some of those disputes depended on the transformation of the term from descriptor to ‘principle’. This transformation attempted to close specific design debates, and, in the process, made the term dramatically more useful in those discourses beyond engineering that eventually took a keen interest in the design of digital communication networks.

The term, drawn from common parlance and given not only meaning but conviction, was shaped and polished so as to be mobile. As such, it actively managed and aligned disparate structural agendas, and has had subtle consequences for how the Internet has been understood, sold, legislated, and even redesigned.

Keywords: End-to-end, Internet, architecture, language, discourse, network, engineering, representation, technology, law, copyright

Reality is a scarce resource and is, therefore, the site of a constant struggle. If the world is constituted in one way in the service of one set of purposes for one group of people, it is thereby preempted. (Carey 1992: 24)

The Definition of the Internet

In August 2004, the chief copy editor of Wired News announced that the online publication, prone to capitalizing on the Internet, would no longer be capitalizing the word ‘internet’. He reasoned, ‘In the case of internet, web and net, a change in our house style was necessary to put into perspective what the internet is: another medium for delivering and receiving information. That it transformed human communication is beyond dispute. But no more so than moveable type did in its day. Or the radio. Or television’ (Long, 2004). According to a periodical long fascinated with technological novelty, the birth of the Internet is over; its life as a mundane part of our cultural landscape has begun, and we should mark the occasion in language.

This grammatical gesture should certainly not be taken as a definitive end. Despite the way it joins countless other similar gestures to slowly naturalize the Internet1 as a sociotechnical artifact, the process of collectively defining the Internet is still underway. While its founding designers and early enthusiasts may have moved on from their initial proclamations, the process by which the Internet settles in as a component of modern socio-political activity continues: in courtrooms, on the floor of Congress, in corporate boardrooms, in programmers’ cubicles, in college dorm rooms.

This process of negotiation is an element of the ‘interpretive flexibility’ described by Pinch and Bijker (1987). Every technology is shaped by a process of social definition, in which those invested in it struggle not only to implement the technology, but also to narrate what it is and what it is for. These negotiations occur in a range of contexts, and each is associated with specific kinds of consequences: a court decision about appropriate Internet use can powerfully articulate what the Internet is, but so can the design of a new peer-to-peer application meant to defy that decision. If we hope to understand the Internet as a social as well as a technical artifact -- as an ‘Internet culture’2 developing around and alongside the Internet itself -- we must certainly consider these disputes closely. They offer vital insights into the social construction of a communication technology: one still in its adolescence, despite the grammatical dictates of online news services.

In this essay I would like to address a dimension of these disputes that often goes unnoticed, and has largely been overlooked in the literature on the social construction of the Internet. This dimension is hinted at, but obscured by, the stylistic change proposed by Wired News. While these debates continue, something has indeed been stabilized. Even as the cultural, legal, and economic implications of the Internet remain open for discussion, the Internet is increasingly taken to be a particular material thing with an identifiable, persistent, and describable shape. While computer scientists, engineers, and network operators still argue about how this or that element is or should be designed, for the casual user, the Internet simply is.

The fact that the technical infrastructure of the Internet seems increasingly stable is particularly important when it comes to debates about its use and consequences that occur outside of engineering circles. Discussions about appropriate legislation for the Internet, or appropriate new services to market with it, or appropriate ways to use it, partly depend on assumptions about how it works. Much of this negotiation over what the Internet is and should be is conducted by non-engineers: lawyers, politicians, users, educators, parents, and advertisers. Often participants in these debates make claims about how the Internet works as proof that it should be used in a particular way. But how does a non-engineer come to understand the Internet’s material workings, to the extent necessary to participate in such negotiations? How do they know how the Internet works, how it does what it purports to do, how it is designed, and what the implications of its network architecture are? They are by no means experts on the technical dimensions of computer networks, yet they argue about the impact and use of a massively intricate technological system. As such, their claims about what the Internet is as a social phenomenon must largely take on faith what the Internet is as a material artifact. And, I will argue, many of these assumptions about the technology are embedded in the language that accompanies it, tokens that stand in for an understanding of the technology itself and are increasingly taken for granted as the technology becomes commonplace.

Consider a brief example: In AT&T v. City of Portland, the 9th Circuit Court of Appeals deliberated on whether the government authority in charge of allocating cable franchises in Portland, Oregon, had the right to demand that AT&T / TCI open their cable broadband lines to competing Internet service providers (ISPs). The court decided that cable does not fall under the ‘common carrier’ rules that govern other telecommunication services, such as telephony. The details of the case are insignificant for our purposes. What is important is the moment in which the court compares the telecom arrangement to the Internet:

The Internet’s protocols themselves manifest a related principle called ‘end-to-end’: control lies at the ends of the network where the users are, leaving a simple network that is neutral with respect to the data it transmits, like any common carrier. On this rule of the Internet, the codes of the legislator and the programmer agree.3

It is a small moment in an arguably small decision, but a revealing one. There is a blithe matter-of-factness to the court’s shorthand description of the structure of the technology, positing a facile analogy between telephone and Internet network design. In this two-sentence description, one of the most powerful courts in the nation says that the Internet has a certain shape, and lends its authority to the belief that this shape means that it (and systems like it) must be regulated in ways suited to that shape.

The notion that the Internet has an ‘end-to-end architecture’ is an increasingly familiar description of the structure and design of computer networks, one that has been significant in these broader cultural negotiations about what the Internet is and should become. To some extent it is an ‘accurate’ portrayal of the architecture of this technology, but in other ways it is not: the extent to which it is ‘true’ is itself disputed in engineering debates about network design that continue to this day. But, the portrayal is far from neutral. ‘End-to-end’ and similar characterizations are deliberate glosses on the technology, symbolic representations of the very shape of the thing in question.

Where does a term like ‘end-to-end’ come from? The phrase obviously has a commonplace meaning, or constellation of meanings. Most familiar perhaps is the colloquial way of emphasizing the particularly large quantity of things by suggesting we ‘lay them end-to-end’ -- conjuring up visions of a line of somethings, linked-sausage-style, making abstract rings around the planet.4 The term has also enjoyed more specific technical meanings, not in the context of network design but in building construction: the layout of floorboards, perhaps, or the components of a plumbing system. It can also express the distance traveled over a bounded space (‘we walked that trail from end-to-end’), a slightly different connotation that implies not interconnected components, but the complete path they represent.5 But how did this term come to stand in as an icon for the shape of the Internet, inside of engineering debates and out, such that authoritative claims about the technology’s value, purpose, and impact could be built upon it?

In order to reveal the movement and transformation of this term, I collected documents dealing with the architecture of packet-switched networks, technical and otherwise. My interest in this term was first drawn by Lawrence Lessig’s use of it in his arguments on the connection between Internet architecture and its subsequent regulation (Lessig, 2001a, 2002). From his work, and from others making similar non-technical arguments about the consequences of the technical architecture, I traced the term back into the engineering literature in which that architecture was being debated. Very often, these non-engineers pointed to a single reference as the origin of the term, or the key statement of its significance: a 1984 paper by Jerome Saltzer, David Reed, and David Clark. Later, I will discuss how and why this article has come to stand as the definitive statement regarding this term. Inside the engineering debates, I collected over 400 papers from the 1970s, 1980s, and 1990s that used the term ‘end-to-end’ in reference to network design; I drew these primarily from journals published by the Association for Computing Machinery (ACM) made available in their Digital Library, and the journals and conference proceedings series published by the Institute of Electrical and Electronics Engineers (IEEE) made available in their Xplore database. These collections were chosen not because network design was decided exclusively within these journals, but because they offered a broad enough range of the relevant debates to reveal in a broad sense the ways in which the term was being used. When particular papers in other journals were regularly referenced as important to the network design debate, I added them to the collection. In order to more fully understand the role played by Saltzer, Reed, and Clark’s paper, I sought out other papers in which they referenced or defended their original argument, papers in which they were subsequently cited for their contribution, and papers by others that were more skeptical of their argument. Finally, from simple web searches I found examples of the quite common commercial usage of the term; I collected a number of these instances as well, in order to complement the technical, legal, and political uses of the term with examples of its use inside of a promotional discourse.

By considering this diverse, but by no means exhaustive, collection of documents and the way in which this term has been used, defined, and choreographed as part of a technical and political project, I will reveal how ‘end-to-end’ came to serve as a descriptor of the structure of the Internet. First, I will describe the technical debates about network design in which this term began to appear, characterizing a set of different meanings the term seemed initially to encompass. Then, I will look closely at the Saltzer, Reed, and Clark paper that engaged with these debates and attempted to articulate and champion a design principle that, they argued, could unite a scattered set of design strategies -- and gave that principle the name ‘end-to-end’. I will argue that this document reveals a characteristic kind of rhetorical work in engineering, whereby both the materials and the linguistic shape of the artifact are designed together, with the future deployment of both already in mind. Then I will consider a number of contexts in which this term was taken up outside of engineering debates and used to represent the symbolic shape of the Internet, to argue that matching political arrangements should be adopted. Finally, by looking at recent disputes about digital copyright, I will suggest that the conceptual shape of this term and its prevalence as a shorthand for the shape of the Internet has had consequences for the character and outcome of those debates.

At stake, I believe, are the following questions: who designs the ‘architecture’ of the Internet, in the discursive sense? How does this articulation of the technology happen in and across multiple discourses? How do institutional agendas and tensions shape that process? How does a term like this move and change, and who instigates these maneuvers? What are the consequences of this process for the political, legal, cultural, and technical life of the Internet? And finally, how should we account for the role of language in the process of designing, articulating, and speaking about the structure of a technology?

The Debate over Network Architecture

Designing a complex artifact like a distributed computer network is both a material and a discursive feat; as much as engineers design new material arrangements of circuits and chips and wires, they also design arrangements of words, concepts, and meanings (Agre, 1997: 14). The extent to which their proposals are considered, appreciated, and deployed depends both on the way their material designs offer compelling solutions to apparent problems, and the way their discursive designs offer compelling characterizations of what the tool can accomplish, as well as how, and most importantly, why it can do so.

Through the 1970s and into the 1980s, significant research was underway at numerous computer science labs, corporate R&D departments, and independent research communities, all working towards the development of computer networks based on the concept of packet-switching. Packet-switching, the idea that information can be broken up into pieces, flung across a network, and recombined by the receiver, is often taken colloquially as the key innovation that made the Internet possible.6 But, as much as it was a solution to a series of network design challenges, it also posed a new array of problems; it was the beginning of a design strategy, but by no means the end. There were several significant problems of implementation still left to work out: deciding how the network should handle delays, congestion, lost packets, duplicate packets, errors, security, etc., and, more generally, what the network should be responsible for ensuring and where the necessary functions should be located.

It was in the context of these debates that the term ‘end-to-end’ first appeared and took root as a technical term. In these early days of inter-network design, disputes over these problems began to coalesce around two models; as these models began to take shape, they each offered an increasingly coherent vision for how to solve many or even all of these technical challenges. For instance, any network system must be able to sufficiently ensure the reliability of information transmission. There will inevitably be failures in the system; consequently, mechanisms must be in place that will recognize such failures and be able to recreate the lost data transfer operations, such that the information gets through regardless of the interruption. Previous data transmission networks (including, until quite recently, the U.S. telephone system) solved this problem by building in a number of intermediary points between sender and receiver; each of these points is responsible for the information that reaches it, and temporarily preserves a copy of all data that pass through until their arrival at the next point is assured. This model, in which intermediate points ‘maintain state’, resolves interruptions and breakdowns by returning to the last point the data had reached and re-sending them from that point forward. In simpler terms, data moving from point A to point Z must pass through each intermediate point (J, K, L, etc.). If there is a loss of data at point N, the system returns to point M and restarts the data transfer from there.
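
To make the mechanism concrete, the following is a minimal Python sketch of this ‘maintain state’ logic. It is my illustration rather than anything drawn from the design literature, and the deliver function and loss rate are hypothetical stand-ins for a real link.

    import random

    def deliver(sender, receiver, data, loss_rate=0.2):
        # Hypothetical one-hop link: succeeds or fails at random.
        return random.random() > loss_rate

    def hop_by_hop_transfer(data, path):
        # Each node holds a copy of the data until the next node confirms
        # receipt, so a failure forces a resend only from the last
        # confirmed point, never from the origin.
        hop = 0
        while hop < len(path) - 1:
            if deliver(path[hop], path[hop + 1], data):
                hop += 1  # state is now held at the next node
            # on a loss, only this single hop is retried

    hop_by_hop_transfer('message', ['A', 'J', 'K', 'L', 'Z'])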

Some believed that a similar model would be appropriate for packet-switching networks; the hardware at each intermediate node in the network could be given the necessary capacity (data storage, applications that can react to interruptions and recreate messages, etc.) to perform this function. This requires an ‘intelligent’ network (Isenberg 1997), in which each of the interior passage points must be designed to have the necessary resources to manage the flow of data. This model generally came to be known as the ‘virtual circuits’ model, a nod to the circuits used in analog telephony, and was given official life in the X.25 and X.75 proposals made by the Consultative Committee on International Telegraphy and Telephony [CCITT] in 1976 (Abbate, 1999: 148-156).7

Others, however, argued that this strategy would not best take advantage of a packet-switching network; as an alternative, they argued for a ‘datagram’ model of network design. Rather than making the interior passage points responsible for maintaining state and overseeing data transfer, they argued that only the end-points should manage data; the interior points should be ‘dumb’ terminals that merely pass along any information that happens to arrive at their doorsteps. To assure reliability, one end point sends the data; the other end point, upon receiving those data, sends a message confirming receipt. In the event of a system failure, the origin point would not receive confirmation and, after a predetermined amount of time, would send the data again. A and Z oversee the transfer; J, K, and L just do as they’re told; if A never hears back from Z, A starts the whole process over. Proponents argued this ‘stupid’ network would be more efficient, demanding less of its interior passage points, although retransmission would consume more resources, since data must traverse the entire route again.8 (For a discussion of the relative merits of these two approaches, see for example Gitman [1976], Coviello [1979], Lavia and Rhynas [1979], Postel [1980], Boggs et al. [1980], Rybczynski [1980].)
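
The datagram alternative can be sketched in the same hedged spirit; here send_over_network is an invented stand-in for the entire ‘dumb’ interior of the network, which keeps no state at all.

    import random

    def send_over_network(data, loss_rate=0.2):
        # Hypothetical end-to-end path: success means the receiver got
        # the data and its acknowledgement made it back to the sender.
        return random.random() > loss_rate

    def end_to_end_transfer(data, max_tries=10):
        for _ in range(max_tries):
            if send_over_network(data):  # the 'ack' arrived
                return True
            # no ack before the timeout: resend across the entire route
        return False

    end_to_end_transfer('message')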

Though couched almost exclusively in technical terms, this was an argument not only about the ‘best’ solution, but a broader disagreement about what these networks would even be for -- in other words, what criteria define what counts as ‘best’. If the computer network is primarily for the transmission of voice or for real-time streaming of data, in which the arrival of data packets must be timely and sequential, or for remote sharing of computer resources in which delays would hold up other users and waste expensive computer processing cycles, the reliability assurances and internal mechanisms of the virtual circuits model offered greater promise. If, on the other hand, the computer network is primarily for the exchange of data and asynchronous communication, where packet order is of less concern and errors can be rectified without consequence, the datagram model seems preferable.

It’s worth noting that much of this research on datagram approaches came out of an interconnected web of academic computer science programs (MIT’s Laboratory for Computer Science, the Stanford Research Institute, and UCLA, among others), computer manufacturers (particularly Bolt, Beranek and Newman and the Network Analysis Corporation), and military R&D divisions (DARPA and the Defense Communications Agency). Most of the support for the virtual circuits architecture came, not surprisingly, from telephony research labs (such as those at Bell and Bell Canada).9 It should come as little surprise that the telephony corporations tended to point to voice and streaming as the logical uses to which a network would be put, while the computer science programs emphasized asynchronous data exchange and remote computer time-sharing, tending to urge each towards the respective network models. And with the presence of telephony and computer corporations conducting and sponsoring research, this debate was already both implicitly political and quietly commercial: Who would run such networks? This debate can be read as a harbinger of the competition between computer and telephony industries over network dominance that followed.

In a broad sense, the datagram model largely triumphed in the design of the Internet: most of the functions expected of the network are handled by the sender and receiver terminals, and the nodes internal to the network have been largely freed of all functions except receiving and passing along data packets. The Internet does not depend on a dedicated connection in the way that the virtual circuit model proposed. This ‘connectionless’ design strategy is, however, far from total: some routing and congestion control functions are implemented by the routers. In particular, servers (an element of the network that, depending on how you look at it, is either internal to the network or the terminal points, either an ‘end’ or not) do the most to ensure reliable data transmission, impose security, and ease network congestion.

The debate is also far from over. Discussions continue about the proper design of computer networks, and proposals reminiscent of the virtual circuits model regularly resurface. Especially as the Internet has shifted from a limited-access research tool to a public and commercial network, new interests and priorities have emerged that make the old compromises of the datagram model look more like ‘vexing dysfunctionalities’ (David, 2001: 23), and designers have suggested numerous alternatives that depend on asking routers to do much more than send and receive.10

It was during these early debates that the term ‘end-to-end’ began to appear, drawn from its more commonplace meaning to refer to the shape of the computer network or the location of network functions, typically those proposed in the datagram model. And like its more commonplace meanings, the use of the term in these engineering discussions was by no means singular, though its connotations were roughly consistent. For the most part, the term was used to describe features of, or problems with, the network: yet even within this descriptive function, there was significant variety.

For instance, ‘end-to-end’ was sometimes used simply to indicate that two networks were adjacent, and met at a single junction point that was an end-point for each network. This use was most similar to the term’s more general meaning, i.e. sausage links. ‘This paper describes a new 24-channel cable carrier system which is end-to-end compatible with the Western Electric N3’ (Poling, 1967: 642). This usage was more common to discussions of cable networks, and largely disappeared from discussions of computer networks. (For similar uses, see Cristal [1966]; Getgen [1967].)

Much more common was to use ‘end-to-end’ to describe the particular route a single packet of data would or might take in a single instance, from start to finish. ‘An important performance measure of these networks is end-to-end delay, which is the elapsed time from the arrival of a message at its source node to the successful delivery of this message to its destination’ (Wong, 1978: 344). In the early discussions of network design, minimizing delay was of prime importance. What came to be a logical metric for such delays was the time taken by a data packet from start to finish; while the point of packet-switching design is that packets take different routes to their destination, aggregate measures of end-to-end travel time could speak to the network’s average speed and capacity. (For similar uses, see Coviello [1979]; Girard and Ouimet [1983]; Lavia and Rhynas [1979]; Morling, Cain, Neri, Longhi-Gelati, and Natali [1983]; Papantoni-Kazakos [1979]; Tobagi, Borgonovo, and Fratta [1983]; Wong, Sauve, and Field [1982]; Yuill and Pickholtz [1982].)
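
As a small worked example with invented timestamps, the metric Wong describes is simply the difference between arrival and delivery times, averaged over many messages:

    # Hypothetical timestamps, in seconds, for four messages.
    arrival_at_source = [0.0, 1.2, 2.5, 4.0]
    delivery_at_destination = [0.8, 2.1, 3.9, 4.6]

    delays = [d - a for a, d in
              zip(arrival_at_source, delivery_at_destination)]
    mean_end_to_end_delay = sum(delays) / len(delays)  # 0.925 seconds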

A third descriptive use of the term referred to the network’s capacity to handle some functions at the end-points in the system. One common example of this was in discussions about mechanisms for acknowledging the receipt of a complete message. Many proposed that the receiver send an acknowledgement (or ‘ack’) to the sender, to signal a completed transaction. ‘The ETE (end-to-end) Ack serves as a flow regulator between origination and destination and as an indicator to the origination node (upon receiving an ETE Ack) that the destination node has correctly received the message’ (Gitman, 1976: 1258). Though clearly related to the previous use, the emphasis here is on the agency and responsibility of the end-points rather than the character of the route between them. This meaning was widely used in the network design discussions throughout the 1970s. (For similar uses, see Baccelli, Gelenbe, and Plateau [1984]; Belsnes [1977]; Boggs, Shoch, Taft, and Metcalfe [1980]; Campbell [1978]; Cerf and Kahn [1976]; Chan and Georganas [1979]; Chatterjee, Georganas, and Verma [1977]; Davies [1972]; Gien and Zimmerman [1979]; Kleinrock and Kermani [1980]; Postel [1980]; Pouzin [1981]; Voydock and Kent [1983].)

Finally, it is worth noting a fourth use of the term, one that did not begin to appear until the 1980s. As the datagram model of network design championed by computer scientists won out over the virtual circuit model, researchers at several of the corporate telecommunications labs, particularly at Bell, shifted their efforts to developing telephone networks that used digital relays rather than analog switches. There was a great deal of discussion of how to get digital all the way to the ‘last mile’ -- into the home, so that the features offered by digital technology (speed, tailoring services to particular users, interactive functions) would not be hampered by a lingering analog link. In this research, the term ‘end-to-end’ resurfaced:

The world is in the midst of revolutionary changes in the ways in which various types of information are generated, and how they are used. To meet fully the needs of this information age will require corresponding advances in the capacity to store, transfer, process and deliver information in a host of different forms and formats. Many of these advances will come within the framework of the Telecommunications Network of the future. This paper describes the Switched Digital Capability (SDC) which is expected to be a major ingredient in the transition from what to date has been largely a voice network, to a ‘hybrid’ network supportive of a wide range of voice and ‘data’ services. SDC will bring end-to-end switched digital connectivity as an important early step in the evolution of the 100-year old ‘telephone’ network to the integrated services digital network (ISDN) of its second century. (Kelcourse and Siegel 1983: 29)

(For similar uses, see Bhusri [1984]; Decina [1982]; Gruber and Le [1983]; and Rybczynski [1980].) It is worth noting two things: first, the term can just as easily be adopted to describe a network designed according to the virtual circuits logic; it is not inherently matched to one particular design. Second, the term fits quite neatly not only an argument for the design of the telecommunication network, but also a series of claims about social change through technology. This will not be the last time that a congruence of technical and social proclamations built on this term will appear.

‘With neither explicit recognition nor muchconviction...’

In these debates, researchers adopted a number of strategies to make the most persuasive case for this or that argument. Some argued for their position by proposing specific solutions to specific problems, hoping they would prove compelling. Others proposed models for networks, even designed their own, to illuminate the potential of their design strategy. Finally, a few attempted to articulate a design principle, to convince in the abstract rather than on a case-by-case basis. One particular paper that intervened in these design debates adopted this last strategy, and used ‘end-to-end’ to name the proposed design principle. The work undertaken in that paper helped to make this term a powerful and mobile resource within those debates, and gave it the capacity to move beyond engineering disputes about the Internet as a technical phenomenon, entering into political, legal, and economic disputes about the Internet as a social phenomenon.

In April 1981, at the Second International Conference on Distributed Computing Systems, Jerome Saltzer, David Reed, and David Clark presented a paper entitled ‘End-to-End Arguments in System Design’. The paper was later published in 1984, in the ACM journal Transactions on Computer Systems. This was a significant contribution to the discussion, in part because of the profile and credibility of its authors. All three were working at MIT on projects funded by the Department of Defense’s Advanced Research Projects Agency (DARPA): Saltzer and Reed were part of MIT’s Laboratory for Computer Science, while Clark was the Chief Protocol Architect of DARPA’s Internet Project.

Saltzer, Reed, and Clark’s paper is of particular (though by no means singular) importance for the design of the Internet infrastructure, because it articulated, clarified, and helped champion the datagram model; it has been influential in the disciplines of engineering and computer science, becoming one of the ‘greatest hits’11 regularly presented in most introductory Computer Science courses. More importantly, the paper is central to the rhetorical power of the term ‘end-to-end’, as suggested by its regular citation in engineering and non-engineering discussions as the iconic origin of the end-to-end principle, so often taken to be the structure of the Internet. Many who subsequently cite the paper describe it as a landmark contribution to the debates. In 1996, Brian Carpenter authored ‘RFC 1958: Architectural Principles of the Internet’. This ‘request for comments’, intended as part of the written documentation for the Internet, points to Saltzer, Reed, and Clark’s argument as one of the guiding principles for network design, a principle beloved by a community skeptical of principles:

Many members of the Internet community would argue that there is no architecture, but only a tradition, which was not written down for the first 25 years (or at least not by the IAB). However, in very general terms, the community believes that the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network ... The key to global connectivity is the inter-networking layer. The key to exploiting this layer over diverse hardware providing global connectivity is the ‘end to end argument’. (Carpenter, 1996)

The principle is also referenced in RFC 2775 (Carpenter, 2000) in light of the challenge it has faced during the Internet’s growing pains, and again in RFC 3439 (Bush and Meyer, 2002), an update of RFC 1958, which suggests that the end-to-end principle leads directly to the principle of simplicity in network design. In the National Research Council’s 2001 report The Internet’s Coming of Age (NRC 2001), the end-to-end argument is cited under the subheading ‘success by design’. Saltzer, Reed, and Clark’s paper is the only citation for the term, alongside a reference to RFC 1958.12

Saltzer, Reed, and Clark argue that, in nearly all cases, it is best to put the functions of the network in the ends rather than in the intermediary points.13 First, this arrangement of mechanisms is typically more efficient. More importantly, measures performed inside the network are often insufficient; if end-to-end measures are therefore necessary anyway, the internal measures become redundant. For instance, one network might ensure the successful delivery of a message by a ‘hop-by-hop’ acknowledgement system: every time the data made one step in the journey, the two nodes would check that the data was unchanged; if no nodes fail in this responsibility and every individual data packet is assured across every step of the journey, the network may assume that the entire message was successfully delivered. However, the recipient still needs to be sure that the content of the message is identical to the message sent. The internal measures cannot assure that they are identical, since they only encountered individual packets, and might not recognize that data had been changed -- an end-to-end acknowledgement is still necessary. If that is the case, Saltzer, Reed, and Clark (1984: 278-82) argued, then the network designer could do away with all of the internal acknowledgement mechanisms, since the single ‘end-to-end’ confirmation will assure the reliable delivery of all the data across the entire journey. They go on to demonstrate this same redundancy in questions of security, performance and congestion, suppression of duplicate messages, and ensuring message order.
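
The logic of the redundancy argument can be illustrated with an end-to-end integrity check, sketched below; the choice of SHA-256 is a modern convenience of my own, not anything the 1984 paper specifies.

    import hashlib

    def ends_agree(message_sent, message_received):
        # Per-hop checks only ever see individual packets in transit; a
        # comparison computed at the two ends is the only check that can
        # confirm the reassembled message matches what was sent. If this
        # check is needed anyway, the interior machinery is redundant.
        return (hashlib.sha256(message_sent).digest()
                == hashlib.sha256(message_received).digest())

    assert ends_agree(b'complete message', b'complete message')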

Saltzer, Reed, and Clark also note that, if these network functions were located at the ends rather than built into the network, the network would be more amenable to innovation, even to uses and applications unanticipated by network designers. Single users could add capacities to their own terminals, and so long as the network’s role was still the transmission of data packets, the internal points in the network could accommodate them. If, on the other hand, functions were built into the network hardware that imposed specific requirements on the data, the entire network would have to be upgraded to accommodate every new use -- which would tend to discourage all but the most essential innovations.

What do Saltzer, Reed, and Clark accomplish in their paper? In one sense, they merely rename the datagram model. At one point, they explicitly translate the existing debate into their preferred terms:

Much of the debate in the network protocol community over datagrams, virtual circuits, and connectionless protocols is a debate about end-to-end arguments. A modularity argument prizes a reliable, FIFO (first in, first out) sequenced, duplicate-suppressed stream of data as a system component that is easy to build on, and that argument favors virtual circuits. The end-to-end argument claims that centrally provided versions of each of those functions will be incomplete for some applications, and those applications will find it easier to build their own version of the functions starting with datagrams. (Saltzer, Reed, and Clark 1984: 286)

Moreover, they make clear that the solutions they propose here are not their own: ‘The individual examples of end-to-end arguments cited in this paper are not original; they have accumulated over the years’ (Saltzer, Reed, and Clark, 1984: 285). Even the term itself is by no means their own; as I have already suggested, both the common meaning and its specific and multivalent descriptive use in network design were already well established. It had even been used to articulate the datagram model as a normative design strategy.

Saltzer, Reed, and Clark’s contribution to the debate is best understood in more rhetorical terms. In their paper, they engineer a principle for network design, and draw upon the explicit meanings and the array of available connotations for ‘end-to-end’ to characterize that principle. By drawing together and articulating existing design strategies as part of a coherent design principle, Saltzer, Reed, and Clark intervene in the ongoing debate on a more abstract level, building on recognizable engineering problems but answering them with a sweeping claim to solve all engineering problems in the same way.

Consider the introductory paragraph, in which the authors stake out the discursive terrain upon which they hope to intervene, and begin to characterize the principle they will offer:

Choosing the proper boundaries between functions is perhaps the primary activity of the computer system designer. Design principles that provide guidance in this choice of function placement are among the most important tools of a system designer. This paper discusses one class of function placement argument that has been used for many years with neither explicit recognition nor much conviction. However, the emergence of the data communication network as a computer system component has sharpened this line of function placement argument by making more apparent the situations in which and the reasons why it applies. This paper articulates the argument explicitly, so as to examine its nature and to see how general it really is. The argument appeals to application requirements and provides a rationale for moving a function upward in a layered system closer to the application that uses the function. (Saltzer, Reed, and Clark 1984: 277)

The aspiration here is clear: Saltzer, Reed, and Clark aim to articulate ‘design principles’ by gathering under a single rhetorical umbrella a series of engineering arguments ‘with neither explicit recognition nor much conviction’. This characterization is a revealing one; it is the ‘recognition’ and ‘conviction’ that they are after. The task, then, is simple: ‘This paper articulates the argument explicitly’, and in doing so ‘provides a rationale’ for current and future designs. ‘End-to-end’ is transformed from a collection of designs into a claim, an argument, a premise -- from an idea to an ideal. It is something that can, from this point on, stand alone -- and stand for. If accepted as a principle, it can be portable: picked up as a convenient shorthand, easily taught, even offered as a solution to new engineering challenges beyond the design of computer networks.14

Saltzer, Reed, and Clark are also careful to position their principle as the capstone to a series of debates and to link it to several other well-regarded solutions. As noted above, they make explicit reference to the debate between virtual circuit and datagram approaches to network architecture, reframing the debate so that their principle responds to it. They further certify their principle by aligning it with other well-received design principles, most notably the idea of layers:

It is fashionable these days to talk about layered communication protocols, but without clearly defined criteria for assigning functions to layers. Such layerings are desirable to enhance modularity. End-to-end arguments may be viewed as part of a set of rational principles for organizing such layered systems. (Saltzer, Reed, and Clark, 1984: 287)

In doing so, they work to craft their term, and the design strategy it represents, as the underlying principle that will both champion a model of network architecture, and unite a series of squabbles into a single-purpose school of design.

At this point, this account may seem like a classic story about controversy and closure: an ongoing knowledge dispute in which both the claims and the purposes of those claims are up for debate. Fought by competing interests with divergent agendas, the controversy cannot be resolved by any sort of appeal to some convenient ‘truth’ about communication practice or the fundamental ‘nature’ of technological design. Instead, closure of the controversy is brought about by the clever characterization of the problem in new terms, terms that help gather consensus among enough designers that alternatives begin to fall away, and the underlying priorities of those designers are installed as the normative understanding (Collins, 1985; Shapin and Schaffer, 1985; Pinch and Bijker, 1987; Gieryn, 1999). In Latour’s terminology, we might see Saltzer, Reed, and Clark as making themselves into ‘obligatory passage points’ within the field of network design; to the extent that the ‘end-to-end’ principle they engineered and championed might settle in as accepted practice, they would become the spokespersons for the principle, gaining status and credibility in the field as the ‘fathers’ of the network architecture the principle was used to produce (Latour, 1987).

To some extent, this is what Saltzer, Reed, and Clark were up to. Many who subsequently reference the ‘end-to-end argument’, both inside and outside of engineering discourse, point to Saltzer, Reed, and Clark’s article as its origin point, despite the fact that their article admittedly cribbed its claim from others. That previous work is regularly elided in subsequent references. For example, in the National Research Council report, when the end-to-end principle is cited, the footnote to the 1984 paper begins ‘This was first expressed in ...’ (NRC, 2001: 36).

All three of the paper’s authors make regular reference to the argument in their own later work, often speaking as keepers of its true meaning and lamenting the extent to which it is violated.15 In one instance, nearly twenty years after their first paper, they observe that their argument has become ‘part of the vocabulary of network protocol and operating system designers ... a class of system design principles that organize and guide the placement of function within a system ... a strong rationale to justify design choices’ (Reed, Saltzer, and Clark, 1998).

But while the design principle is regularly cited and is now a mainstay in computer science curricula, it certainly did not bring closure to the debates it responded to, at least not in any simple sense. Debates concerning the virtual circuit and datagram models for arranging network functions continued through the 1980s, sometimes with explicit reference to Saltzer, Reed, and Clark’s articulation of the end-to-end principle,16 but just as often without. As Tanenbaum and Van Renesse (1985: 430-431) put it, ‘It is our observation that both arguments are valid, but, depending on whether one is trying to forge a collection of small computers into a virtual uniprocessor or merely access remote data transparently, one or the other will dominate’. These debates erupted again and again in the 1990s around new proposals, like those for active networking and quality of service.17

As a principle, however, the term has been deployed in engineering literature with the ‘conviction’ that Saltzer, Reed, and Clark hoped to give it. In particular contexts it began to stand in as an emblem for a design strategy, and increasingly as the name for the very shape of network architecture. In some ways, ‘end-to-end’ is the architecture of the Internet in name only, or in only the most abstract sense. But this is important in and of itself, in that the symbolic shape of the Internet, with its specifically crafted discursive character, has since been taken up in discourses well beyond engineering and computer science, drawing on the connotations already given to it there, and further reworking it so as to have life inside of those political, legal, and commercial disputes within which the increasingly public Internet would soon find itself.

Making It Move

It is clear that the term ‘end-to-end’ has played an important, though by no means simple, role in the engineering debates over approaches to and purposes of network design. At first a primarily descriptive term, it was transformed into a design principle, though this effort was by no means complete or commanding. But if we stop at this point, we have only half the story. For just as engineers design their material artifacts both to respond to their own concerns and to eventually leave their hands and be used by others, so their language is designed not only to persuade each other but also to leave engineering discourse and participate in wider debates. The deliberate depoliticization of engineering discourse, cleansed of its social ramifications as mere problem-solving, only masks the way in which both artifact and discourse are deliberately built both to grapple with political questions, and to move into other arenas where they may contribute to political projects. The Internet itself is one of those contributions; a term like ‘end-to-end’ is another.

In legal controversies, commercial advertisements, and political screeds, the term ‘end-to-end’ has found purchase, as a symbolic representation of the Internet and its functioning. But, though it invokes roughly the same semantic point as in the engineering discourse, the resonance of the term in each context is quite different. The adoption of ‘end-to-end’ inside of non-technical discourses may seem, at first glance, to be the clumsy appropriation of a precise engineering term by people poorly trained in technical principles, transforming it into a looser, more metaphorical icon. But a closer look at its characterization and strategic deployment, both inside of the engineering discourse and in its later manifestations, suggests that something more subtle and significant is going on. It is the term’s mobility -- its capacity to work across domains, and to both maintain a certain meaning and adapt to the particular connotations of its new discursive context -- that warrants attention. And it speaks to the discursive work involved in the design of technology and the Internet’s social construction as a mobile and powerful artifact.

It is this capacity to move across discourses, to deliver value even as it connects with different elements of a new discourse, and to stay intact despite its semantic fluidity, that I want to consider further. As James Bono notes, technoscientific discourse, while it seems to aspire to be autonomous and insulated, must constantly register with other contemporary discourses; ‘by fixing meanings in highly specific, local, though still plastic, ways, the diachronic dimensions of scientific discourse come to constitute a synchronically coherent, if now metaphorically reordered and situated, language’ (Bono, 1990: 77). Since this means that scientific discourse is inextricably linked to other discourses, these metaphoric associations must tangle and reconcile with broader social and cultural shifts in meaning and values (Bono, 1990: 80).

Political

As the Internet became a cultural phenomenon, self-appointed techno-futurists like John Perry Barlow, Nicholas Negroponte, and Howard Rheingold began to loudly proclaim the potential impact of the new technology in utopian, technomorphic terms. They imagined a dramatic and unavoidable revolution in social and political organization, one that would be induced by the Internet and World Wide Web, and used the technologies as metaphors for the kinds of social arrangements that they believed would sweep away the dead infrastructure of post-industrial modernism. Technology had political implications, according to the Wired crowd, and those implications were undeniably liberatory. It is in these stylized political declarations that the term ‘end-to-end’, dressed in its techno-populist connotations, first found purchase outside of the specific engineering discourse from which it was spawned.

Barlow begins his online manifesto ‘Censorship 2000’ with the now ubiquitous mantra credited to John Gilmore, ‘the Internet treats censorship as though it were a malfunction and routes around it’. He then wonders if the words still apply:

Of course, they remain true -- to the extent that cyberspace exists in an end-to-end data cloud through which any packet may travel to its address by multiple, unfiltered routes. But increasingly, those routes are being channeled and filtered, while their origins and ends get monitored and placed under legal constraint.

That the Internet’s original absence of predetermined information circuits and central switches had political implications was, for some of its many fathers, a feature with intended social consequences. I once asked one of them if he had simply been thinking of designing a system that couldn’t be decapitated by nuclear attack. ‘I can’t speak for the others’, he said, ‘but I was thinking of a system that didn’t have a head’. (Barlow, 2000)

Once willing to insist, with Gilmore, that the technical design of the Internet guaranteed its ability to elude traditional regulation, Barlow laments with some surprise that such control mechanisms are being deployed by states and corporations: ‘I would continue to be sanguine about this global outbreak of legislative and regulatory bit-blockage, maintaining my faith in the ability of the Net to route around it -- and, more important, in the persistent cluelessness of the oppressive -- but I find that the forces of control have become more sophisticated’ (Barlow, 2000). The fact that he is surprised that restrictive governments like China, Myanmar, and Kazakhstan are regulating access to the Internet and even to computer hardware, and that even Western nations like Germany, Switzerland, and the U.S. are demanding that ISPs regulate access to particularly offensive materials, is a reminder that he presumed this end-to-end network to be a guarantee of end-user freedom -- whereas its designers may have meant only locating control at the ends, a subtle but important difference. While Barlow does not reveal which ‘father’ he spoke to, it is important to note that his contrast between free network and controlled network is not a neat match to the principle articulated by Saltzer, Reed, and Clark from which he borrows the term.

As Barlow lists the various ways in which governments and corporations are attempting to impose mechanisms of censorship onto the Internet, he premises his argument on the assumption that the Internet as it was designed -- that is, end-to-end -- was a technical guarantee of individual freedom, a bulwark against censorship. The biggest threat to online speech, as Barlow sees it, is not in aspects of the Internet’s current design, but in the effort of states and powerful corporations to alter that architecture, the

many initiatives that would move us away from the end-to-end, packet-switched model -- the model to which John Gilmore referred in his quote at the beginning of this article -- and toward a circuit-switched network rather like the phone system the Internet is replacing ... the eventual effect likely could be the conversion of the data cloud into a complex of predetermined routes that would be easily monitored and censored either by those who operated them or by the governments within whose jurisdictions they operated. (Barlow, 2000)

Again, the distinction between end-to-end and virtual circuit design, once a question of where control should be placed, becomes a political argument about whether control should be imposed. In Barlow’s essay, there is a neat discursive fit between the populist political arrangements he seeks and the technical design of the network that he believes hands users power -- the one taken to be the model for the other -- and only a change in the architecture will threaten to undercut that freedom.

The bigger point to be made here is the way in which Barlow (and he’s by no means alone) talks about the Internet as end-to-end; the unquestioning use of the term to simply name the material shape of the network both fuels his political argument, and overlooks the extent to which network design is much more complex than the term suggests. What was a design principle, a characterization of a set of similar technical solutions abstracted from their details to make a point, becomes here the de facto name for the Internet’s structure. The icon comes to stand in for the technology, and as such serves as a political rallying cry. The Internet is end-to-end, until someone makes it otherwise.

Not only is the term a powerful symbol, invoking the kind of libertarian, individual freedom Barlow desires, but it obscures the complexity of the network’s design -- a complexity that helps explain the complexity of attempts to regulate it. In another article focused more specifically on copyright, Barlow suggests that ISPs like CompuServe should not be held legally responsible for users uploading copyrighted works without permission, ‘since CompuServe cannot practically exercise much control over the flood of bits that passes between its subscribers’ (Barlow, 1994). This notion of what an ISP can and cannot do is mistaken, in part because of the tendency to think of it as internal to this end-to-end system, and therefore just dumb routers. In fact, ISPs have a great deal of ability to regulate what their subscribers post; when it comes to the process of packet-switching, CompuServe is in some ways an endpoint in the network too.

Legal

The broadly political claims of Barlow and others like him point to, but only begin to really engage with, the complex political and legal debates about the place of the Internet in society. Of particular concern has been the application of existing laws to online communication, a concern that has already reached the highest courts in the United States.18 But even in these increasingly specialized debates, the term ‘end-to-end’ has proven a compelling and persuasive icon of Internet structure.

Lawrence Lessig has been perhaps the most prominent and provocative voice in the debate about digital copyright, and more generally about how law and policy should apply to networked technologies. In an article for the Financial Times he makes a similar point to Barlow’s, arguing that:

The internet was born a ‘neutral network’, but there are pressures that now threaten that neutrality. As network architects have been arguing since the early 1980s, its essential genius was a design that disables central control. ‘Intelligence’ in this network is all vested at the ‘end’ or ‘edge’ of the internet. The protocols that enable the internet itself are as simple as possible; innovation and creativity come from complexity added at the ends. This ‘end-to-end’ design made possible an extraordinary range of innovation. When Tim Berners-Lee started to sell the idea of a ‘World Wide Web’, he did not need to seek the approval of network owners to allow the protocols that built the internet to run. (Lessig, 2002)

Like Barlow, Lessig builds his political argument on the underlying assumption that the Internet is end-to-end, indeed was born that way, and that this design ensures an egalitarian neutrality that is only threatened by changes to that design. Drawing explicitly on Saltzer, Reed, and Clark’s paper (but emphasizing their argument that end-to-end design facilitates innovation, rather than simply being more efficient) he renames the datagram vs. virtual circuits argument as a contrast between a network ‘commons’ and a network that is ‘owned’.

Again, there tends to be a subtle shift from a question of where control is located to whether it can be imposed at all. Sometimes, Lessig (2000) describes these design strategies as ‘principles that govern its control’, but at other moments, the rhetorical temptation of positing a free vs. controlled network proves too seductive: ‘At the core code layer, the network was free. The end-to-end design assured that no network owner could exercise control over the network’ (Lessig, 2001b). It is important to remember that, though the proponents of end-to-end design knew they were implicitly arguing against the likes of Bell Telephone, the principle that control should be built into the ends does not in any way exclude the possibility that the entire network itself might become the domain of, or be ‘owned’ by, a single institution. The principle applied as much to MIT’s internal network as it did to the Internet, according to its proponents. But in Lessig’s rhetoric, an end-to-end network is a free one.

He furthers this argument by drawing a connection between this technical structure and a legal structure with similar sociopolitical aspirations:

We can now see how the end-to-end principle renders the Internet an innovation commons, where innovators can develop and deploy new applications or content without the permission of anyone else. Because of e2e, no one need register an application with ‘the Internet’ before it will run; no permission to use the bandwidth is required. Instead, e2e means the network is designed to assure that the network cannot decide which innovations will run. The system is built -- constituted -- to remain open to whatever innovation comes along. (Lessig, 2001a: 40)

The ‘commons’, a longstanding principle inside of property law, represents those material or cultural resources that do not belong to a particular owner and are potentially available to all. The term has taken on renewed importance for intellectual property law and the controversies around copyright, referring not only to unowned work, but also to those works that once belonged to a copyright owner but have fallen into the public domain after the copyright expired. Here Lessig leans on the rhetorical power of that term, itself an engineered icon inside of legal discourse, to suture together a new metaphor: the ‘innovation commons’. With this techno-legal-discursive hybrid he simultaneously describes the Internet and makes a claim for how it should be judged in the eyes of the law (as a shared public good, rather than, say, as a linked array of privately owned goods or spaces).19 The resonance of ‘end-to-end’ (or its cute abbreviation ‘e2e’, itself a means of further preparing the term for travel) offers a compelling way to fortify this claim, calling explicitly on its sense of ‘neutrality’, ‘free’dom, and ‘open’ness to superimpose the technological arrangement and his favored legal solution.

‘End-to-end’ proves a powerful symbolic term, sometimes for the shape of the Internet with the most potential for resisting restriction, sometimes for the principle of non-discrimination that Lessig uses it to defend. He even extends ‘end-to-end’ metaphorically back to the telephone system, to note that:

when the United States broke up AT&T in 1984, the resulting companies no longer had the freedom to discriminate against other uses of their lines. And when ISPs sought access to the local Bell lines to enable customers to connect to the Internet, the local Bells were required to grant access equally. This enabled a vigorous competition in Internet access, and this competition meant that the network could not behave strategically against this new technology. In effect, through a competitive market, an end-to-end design was created at the physical layer of the telephone network, which meant that an end-to-end design could be layered on top of that. (Lessig, 2001b)

Here the term is far from a technical description: he is referring to a virtual circuits network, albeit a regulated one. ‘End-to-end’ here means the principle of a network freed of restrictions, one disabled from discriminating against uses or users.

It is important to note that the term ‘end-to-end’ brings with it a number of connotations, depending on what is asked of it. These are not inherent to the term itself, but are produced, to the extent that the conceptual shape proposed by ‘end-to-end’ resonates with other spatial and organizational metaphors in our culture. Certainly, it fits neatly with and reinforces other popular metaphors that have been used to describe the Internet: ‘cyberspace’, ‘information superhighway’, ‘electronic frontier’, ‘blogosphere’. But more than that, to the extent that our understanding of hierarchical social arrangements is often represented spatially in terms of centers and peripheries, describing a system as ‘end-to-end’ tends to invoke an aura of populist participation, democratic egalitarianism, openness and neutrality, fairness and inclusiveness. To the extent that systems of power are often characterized in a spatial sense, where those in the center have power and those at the ends do not, ‘end-to-end’ seems to whisper a call for the reversal of that kind of power, a refusal of hierarchy and regulation, a freedom from domination.20 The question of the relative power of endpoints and centerpoints maps well onto the slightly broader question of the relative power between end users and central operators involved in a communication network, and onto the much broader question of the relative power between citizens and mediating institutions in social networks.

Commercial


While the strident hype exemplified by Barlow resonated with ’60s countercultural rhetoric,21 it also dovetailed neatly with the massive investment of dollars and dreams by the dotcoms in the late ’90s. So the least surprising, though probably the most dramatic, repurposing of ‘end-to-end’ is its appearance and circulation in promotional discourse. While Barlow invokes ‘end-to-end’ to remind us of a flattened technical (and perhaps political) hierarchy, and Lessig calls on it to demand a legal regime that holds the Internet to its potential as a public good, corporations eager to sell us Internet and wireless services have borrowed the term to signal a commitment to comprehensive solutions and extensive customer service. Examples are easy to come by:

AT&T and Time Warner today announced the formation of a significant strategic relationship that will include a joint venture to offer AT&T-branded cable telephony service to residential and small business customers over Time Warner’s existing cable television systems in 33 states ... AT&T Chairman and Chief Executive Officer C. Michael Armstrong said, ‘Today’s announcement with Time Warner will significantly advance AT&T’s ability to offer end-to-end ‘any distance’ communications services to American consumers and businesses’.22 [Feb 1, 1999]

. . .

Tune in to Sun Microsystems, Inc. this week at Cable 2000 show (Booth #3765). Sun will demonstrate end-to-end interactive television and broadband Internet solutions to enable cable operators and programming creators to develop, provision and deploy new two-way services and content.23 [May 8, 2000]

. . .

Ericsson is establishing itself as the leading mobile Internet supplier. Unlike any other vendor, Ericsson is able to combine its capabilities in mobile networks, terminals, 3G technologies, multimedia applications, carrier-class IP backbone systems, and professional services to provide a complete, end-to-end solution for any conceivable mobile application.24 [Dec 1, 2000]

. . .

Nokia (NYSE:NOK) and IBM (NYSE:IBM) today announced that they will together provide end-to-end secured business applications on advanced mobile handsets targeted at the enterprise market.25 [Dec 17, 2002]

This use of the term also makes claims about the design of the network; in fact, it speaks of a network that was not even in the purview of the engineers who championed the term. These uses draw together, or perhaps conflate, two sets of connotations: both the sense of locating power at the ends a la Saltzer, Reed, and Clark, and thereby in the hands of users a la Barlow and Lessig, and the more commonplace meaning of extensiveness, comprehensiveness, total coverage. Doing so, with a term that also signals Internet, new technology, progress, and so forth, helps sell the technologies and applications these vendors offer.

Curiously, the use of the term inside of this promotional discourse elides a distinct difference between the networks being offered and the design principle articulated as ‘end-to-end’. When Clark, writing with Marjory Blumenthal, recently revisited the ‘end-to-end’ argument in light of technological and institutional changes surrounding the wider uses of the Internet, he accused hardware manufacturers and service providers of forcing Internet design away from their beloved principle. ISPs, in an attempt to differentiate their services from their competitors, want to offer an array of user-friendly bells and whistles, which tend to require that the network be ‘aware’ of particular users and uses; but ‘any action that an ISP undertakes to enhance its role beyond basic packet forwarding is not likely to be compatible with end to end thinking, since the ISP does not have control over the end-points’ (Blumenthal & Clark, 2001: 18). The kind of consumer synergy inside these commercial ‘end-to-end’ promises is, in Clark’s view, precisely antithetical to end-to-end network design. Nevertheless, the term seems able to move into the sales pitches of such providers without stumbling on this contradiction.

The Implications of ‘End-to-End’: The Copyright Debate

‘End-to-end’ developed its metaphorical resonance inside of engineering discourse. Not only was it deployed as a conceptual benchmark for a philosophy of network design; it also became a commonplace and unproblematic descriptor in more specific engineering discussions. ‘End-to-end’ has become a rationale with its own justification, a living category increasingly obvious to others. It has proven itself to be a successful design strategy, but perhaps only to the extent that a number of successful design strategies have been convened under its symbolic umbrella. Yet each success lends the term discursive force, each use of the term refines and polishes it as a persuasive concept.

This is not the kind of phenomenon for which consequences are easy to pinpoint. They are bound to be subtle and correlational, embedded in the very language we use. The impact of the language used to describe a technology will always travel alongside the technology itself; it is a particular challenge to separate the consequences of the terminology from the consequences of the artifact it describes. As Bazerman (1999: 159) puts it: ‘If Edison’s light and power had not become parts of daily material practice, all the technical, legal, financial, industrial, and public relations bubbles ... would have burst, leaving little meaning, value, or presence ... In an important sense, the material bottom line warrants the meaning of the words. But the meaning, location, and form of the material are, in an equally important sense, mediated by the words. Electricity and incandescent light, when they appeared, became material representations of ideas they were realizing’. It may be a futile exercise to even distinguish between the two, since we never experience either in isolation.

However, I want to highlight certain features of the debates inside of one particular legal controversy -- copyright -- features that are somewhat easier to explain if we assume that the meanings and priorities embedded in ‘end-to-end’ have been taken to heart. Barlow, Lessig, and the marketing divisions of AT&T, Time Warner, Sun, Ericsson, and Nokia found specific value in the term, for describing the Internet in ways amenable to their own agendas. But many others who are similarly responsible for and influential in debates about the purposes and value of the Internet -- copyright lawyers, industry lobbyists, politicians, activists, users -- seem to have embraced, if not the term itself, at least its particular characterization of the shape and character of the Internet. What follows are merely correlations, which admittedly may or may not suggest causation.

In recent debates about copyright law and the distribution of music and other cultural works online, those on both sides have been strikingly obstinate. Rather than looking for assurances that the legal and technological regulation of control would be built to best preserve certain rights and values, users largely refused the idea of control altogether; at the same time, the music and film industries demanded almost total restriction. According to users, information would be free, and there was nothing government or corporations could do about it; according to the industry, information would be free, so drastic measures had to be taken to avoid total collapse. This ideological chasm proved difficult to bridge, even as particular cases arose. When the RIAA sued the makers of the Napster p2p26 file-trading system, fans were certain that Napster could not be shut down; as it became clear that it could and would be, they boasted that more decentralized applications like Gnutella and Kazaa could not be shut down. Steps taken by the industry have come as a series of surprises to the p2p faithful, all of whom seemed sure their particular Titanic was unsinkable. On both sides of the divide, the decentralized, egalitarian, ‘stupid’ network simply meant that the power resided at the ends only. In this case, end-to-end-ness, not the actual design of the network but the iconographic principle it represents and the egalitarian connotations it implies, may have suggested that file-trading would be unstoppable.

Even the culture industry, which obviously had a vested interest in imposing systems of regulation on the online transmission of data it considered to be its own, was surprisingly slow to pursue the bottleneck tactics that it eventually did. In the years before Napster, as the mp3 compression format took root and digital copies of popular music became increasingly available on the web, the RIAA spent an inordinate amount of its time going directly after end-users, trying to make examples of particularly active providers of music. Only when Napster arrived did they petition the courts to hold intermediaries liable. Now, their tactics include going after those intermediaries: pressuring ISPs and universities to regulate use, prosecuting the designers of tools and services. Even those efforts have been meant primarily to force ISPs to reveal the names of particular users, as part of a renewed effort to use high-profile subpoenas to go after end-users.27 This strategy came relatively late in the game -- perhaps too late.

The 1998 Digital Millennium Copyright Act (DMCA) gave ISPs, routers, and search engines a clear protection from liability for copyright violations that might have depended on their material resources. If a commercial service provider or a search engine is alerted to the fact that it is hosting or linking to a website that is in violation of copyright, it need only promptly respond to the ‘take-down’ request in order to avoid even secondary liability. But consider the language used to indicate who enjoys this benefit:

A service provider shall not be liable ... for infringement of copyright by reason of the provider’s transmitting, routing, or providing connections for, material through a system or network controlled or operated by or for the service provider, or by reason of the intermediate and transient storage of that material in the course of such transmitting, routing, or providing connections, if--

(1) the transmission of the material was initiated by or at the direction of a person other than the service provider;

(2) the transmission, routing, provision of connections, or storage is carried out through an automatic technical process without selection of the material by the service provider;

(3) the service provider does not select the recipients of the material except as an automatic response to the request of another person;

(4) no copy of the material made by the service provider in the course of such intermediate or transient storage is maintained on the system or network in a manner ordinarily accessible to anyone other than anticipated recipients, and no such copy is maintained on the system or network in a manner ordinarily accessible to such anticipated recipients for a longer period than is reasonably necessary for the transmission, routing, or provision of connections; and

(5) the material is transmitted through the system or network without modification of its content. (Section 512a, Digital Millennium Copyright Act)

This characterization of the role of intermediaries is primarily built on a history of regulation of telephone networks, where the courts and regulatory agencies generally adopted a convenient distinction between ‘content’ and ‘conduit’. Like the telephone, mechanisms and institutions merely providing a conduit for information cannot be held responsible for that information. It is this distinction that indemnifies the telephone network if I call someone to plan a crime. But the language of the DMCA’s limited liability clause also invokes the principle of ‘end-to-end’, though not in name: the ISP or search engine can only enjoy this limited liability if it merely provides an automatic process of transmission and routing, does not save copies of the transmitted material, and does not modify the material along the way. This is not the actual end-to-end design of network architecture, but is instead the underlying idea of ‘end-to-end’ written into the code of law. In fact, it may be written back into the technology itself: in light of this law, networks whose engineers prefer to add ‘virtual circuit’ features like active networking or quality of service mechanisms will enjoy substantially less indemnity from DMCA copyright charges than networks that choose to remain ‘stupid’, to honor the end-to-end argument.
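The distinction the statute turns on can be sketched in code. The following is a hypothetical illustration, not drawn from any actual provider’s software and certainly not legal advice: the five conditions of Section 512(a) recast as predicates over a single transmission, so that a ‘stupid’, end-to-end-style conduit satisfies all of them, while a network that selects, stores, or rewrites traffic fails at least one.

```python
# Hypothetical sketch: the five conditions of DMCA Section 512(a) as
# predicates. Field names are invented for illustration; this is an
# analogy to the statute's language, not an implementation of anything.

from dataclasses import dataclass

@dataclass
class Transmission:
    """One act of 'transmitting, routing, or providing connections'."""
    initiated_by_user: bool        # (1) initiated by someone other than the provider
    automatic_process: bool        # (2) no selection of the material by the provider
    recipients_not_selected: bool  # (3) recipients not chosen by the provider
    copies_only_transient: bool    # (4) no lasting, generally accessible copies
    content_unmodified: bool       # (5) material transmitted without modification

def qualifies_for_512a(t: Transmission) -> bool:
    """A provider behaving as a 'stupid', end-to-end-style conduit meets
    all five conditions; one that inspects, stores, or rewrites traffic
    fails at least one of them."""
    return (t.initiated_by_user and t.automatic_process
            and t.recipients_not_selected and t.copies_only_transient
            and t.content_unmodified)

# A dumb packet-forwarder qualifies for the safe harbor ...
assert qualifies_for_512a(Transmission(True, True, True, True, True))
# ... while a network that modifies content in transit does not.
assert not qualifies_for_512a(Transmission(True, True, True, True, False))
```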

Conclusion

The struggle over a technology and its social character is fought, then, on both material and symbolic grounds. Those eager to stake a claim for a technology like the Internet aim to wield not only political power over the design process, but also discursive power to circulate compelling metaphors: metaphors that characterize the technology in ways that elevate the uses they prefer and position them best as the keepers of that technology. By highlighting certain features of the technology and obscuring others, they have the power to frame the entire technology in terms of an assumed set of priorities, and to link it not just to a particular agenda, but to quietly align it with other social and political projects.

This points to the need to consider more carefully the relationship between language and technology. First, the dynamics of language in this case are complex. Meaning is both persistent and malleable; interpretation is both polysemic and consequential. We must look at language as it is designed and deployed to intervene in debates about what a technology can and should be. As Nissenbaum (2004: 216) reminds us, ‘our concepts are teleological, not only shaping our thoughts and utterances but facilitating, making awkward, or even depriving us of the facility to think and talk about certain things’. Sally Wyatt recognizes the double complexity of metaphoric language applied to technology: ‘metaphors may convey something about the future functions and technological configurations of the Internet and they also may reveal the political assumptions and aspirations of those who deploy them’ (Wyatt, 2000: 111). ‘End-to-end’ is emblematic not only of the prevalence of spatial metaphors for the Internet that Wyatt is describing, but of language that (over?)emphasizes ‘unstructuredness’ and a matching freedom of movement, at the very moment that structural control mechanisms are being installed.

Second, it is crucial that we expand this attention to the agency behind such discourse, to take into consideration the cultural and institutional contexts of its construction and deployment. This agency is too often cloaked by the language itself: ‘as metaphors become embedded within discourses and as actors become less reflexive in their use and choice of metaphors, it may appear that the metaphors themselves are becoming active agents carrying with them expectations about the future’ (Wyatt, 2000: 111). As Turner insists, discussing the ‘frontier’ iconography common to Internet discourse:

... this rhetoric has emerged less from the mists of literary history than from the deliberate efforts of a particular community of computer manufacturers, software developers, corporate consultants and academics. ... this emerging elite has drawn on the rhetoric of the electronic frontier in order to identify and manage a series of anxieties brought about by broad changes in work practices and personal mobility over the last twenty-five years -- changes triggered by and pervasive in the computer and software industries themselves. (Turner, 1999)

By highlighting the players in these developing communities (in the newsgroups of the Well, in the editing rooms of Wired, on the board of the Electronic Frontier Foundation) Turner pinpoints the value they have sought in crafting and deploying the ‘frontier’ metaphor, the work done to manage its impact on engineering, cultural, and regulatory discourse, and the ways in which the metaphor moved from these specific communities into the broader cultural conversation. Similarly, the engineers debating the proper design of computer networks, and the legal and commercial voices discussing its proper implementation, draw upon particular terms and broader discursive frames to align the technology with their particular purposes. At the same time, we cannot entirely reduce this to the simple, deliberate choices of discrete individuals; as David Edge (1974: 137) expressed it, ‘in some important sense, we do not "choose" a successful metaphor -- rather, it chooses us’.

Finally, this requires us to recognize another way in which technical design is a particularly important process, and specifically a political one. Not only do designers have a great deal of say about the artifacts they produce, but they also intervene through the language they adopt. The material and symbolic shape of the artifact are designed together, and the political valence of that artifact is already being grappled with, though often cloaked in a discourse that performs its political neutrality. The linguistic barrier between engineering and the broader discourses of politics, law, and commerce is a significant one. Engineering discourse has quite deliberately established a distinction between engineering and all things social and political, as a means to preserve its ‘functional autonomy’ (Mulkay, 1976) from the messiness of politics and to maintain a convenient distance from the ethical implications of engineers’ work. But this does not mean the discursive choices do not also embody political standpoints, even as they obscure them.

If ‘end-to-end’ was chosen in part because of its egalitarian connotations, then we must recognize engineers as deliberate political actors. Typical of engineering writing, the language in Saltzer, Reed, and Clark’s paper is deliberately apolitical, as are the papers they were responding to. As Reed remembers it, he and his co-authors ‘attempted to provide the argument in a value neutral way in the paper, so the reader could make his/her own judgment about its relevance to their situation’.28 However, as most computer scientists will also admit, they were keenly aware of the political ramifications of their argument:

The functions that are placed according to the end-to-end argument are functions that are inherently cultural or political -- security, reliability, modifiability, congestion control, routing, privacy -- [they] are end-to-end precisely because they involve human needs, human evaluation, and human benefits that must be traded off against each other ... In fact, the idea of pursuing a thing called ‘the Internet’ (an ur-network-of-networks) was a political choice -- that universal interoperability was achievable and desirable. It’s parallel to ‘One Europe’ or ‘World Government’, though not the same. The engineers involved were not ignorant of the potential implications at the political level of that choice.29

Just as the design was fundamentally though quietly political, so was the symbolic characterization of that design as ‘end-to-end’. The term has connotations, and its selection was no accident. The engineers who adopted it chose the term because it was rhetorically persuasive within a technical dispute, even if they were not explicit about why. And to the extent that the term was persuasive inside of engineering discourse, ‘end-to-end’ could then resonate in much the same way elsewhere. The effort to make it resonate, or resonate-able, was part of the agenda in the Saltzer, Reed, and Clark paper; they used the term in a way that was already obliquely aware and quietly confronting of its political connotations. Not only what is in a term like ‘end-to-end’, but also what is not, helps to map the Internet onto a set of political projects that precede the design of the Internet, draw on it for justification, and carry it forward.

And finally, we should note that the terms chosen by engineers to describe their technologies can drive not only the cultural and discursive life of the technology, but also its subsequent redesign. Discursive tokens like ‘end-to-end’ are not only adopted as commercial pitches or policy shorthand, but also in the description of the structure of the technology as it is being revised and revamped. Even as we continue to ‘design’ the Internet as a social, legal, and political phenomenon, we are also redesigning it technologically. With work on a high-speed Internet2 backbone and the recent proliferation of wireless access networks, metaphors for the material shape of the Internet may eventually be re-materialized in the Internet itself, porting their connotations into future versions of the technology, and transforming the promise they pursue into a self-fulfilling prophecy.

Tarleton Gillespie is an Assistant Professor in the Department of Communication at Cornell University, and is affiliated with the Department of Science and Technology Studies and the Department of Information Science. His first book, Technology Rules: Copyright and the Re-alignment of Digital Culture, will be published by MIT Press in 2006. email: [email protected]

References

Abbate, Janet (1999) Inventing the Internet (Cambridge, MA: MIT Press).
Agre, Phil (1997) Computation and Human Experience (Cambridge: Cambridge University Press).
Baccelli, Francois, Erol Gelenbe & Brigitte Plateau (1984) ‘An End-to-End Approach to the Resequencing Problem’, Journal of the ACM 31/3: 474-85.
Barlow, John Perry (2000) ‘Censorship 2000’, discussion group post, available at http://amsterdam.nettime.org/Lists-Archives/nettime-l-0007/msg00049.html. Last accessed January 13, 2005.
Barlow, John (1994) ‘The Economy of Ideas: A Framework for Rethinking Patents and Copyrights in the Digital Age’, Wired 2.03: 85-90, 126-9, available at http://www.wired.com/wired/archive/2.03/economy.ideas_pr.html. Last accessed January 13, 2005.
Bazerman, Charles (1999) The Languages of Edison’s Light (Cambridge, MA: MIT Press).
Belsnes, Dag (1976) ‘Single-Message Communication’, IEEE Transactions on Communications 24/2: 190-4.
Bhusri, Gurcharan (1984) ‘Considerations for ISDN Planning and Implementation’, IEEE Communications Magazine 22/1: 18-32.
Blumenthal, Marjory & David Clark (2001) ‘Rethinking the Design of the Internet: The End-to-End Argument vs. The Brave New World’, ACM Transactions on Internet Technology 1/1: 70-109.
Boggs, David, John Shoch, Edward Taft & Robert Metcalfe (1980) ‘Pup: An Internetwork Architecture’, IEEE Transactions on Communications 28/4: 612-24.
Bono, James (1990) ‘Science, Discourse, and Literature: The Role/Rule of Metaphor in Science’, in Stuart Peterfreund (ed.), Literature and Science: Theory and Practice (Boston: Northeastern University Press): 59-89.
Bush, R. & Meyer, D. (2002) ‘RFC 3439: Some Internet Architecture Guidelines and Philosophy’, available at http://www.faqs.org/rfcs/rfc3439.html. Last accessed December 5, 2004.
Campbell, Carl (1978) ‘Design and Specification of Cryptographic Capabilities’, IEEE Communications Society Magazine, November: 15-19.
Carey, James W. (1992) Communication as Culture: Essays on Media and Society (New York: Routledge).
Carpenter, Brian (1996) ‘RFC 1958: Architectural Principles of the Internet’, available at http://www.faqs.org/rfcs/rfc1958.html. Last accessed January 13, 2005.
Carpenter, Brian (2000) ‘RFC 2775: Internet Transparency’, available at http://www.faqs.org/rfcs/rfc2775.html. Last accessed January 13, 2005.
Cerf, Vint & Robert Kahn (1974) ‘A Protocol for Packet Network Intercommunication’, IEEE Transactions on Communications 22/5: 637-48.
Chan, Jackson & Nicholas Georganas (1979) ‘Dimensions of Message-Switched Computer-Communication Networks with End-to-End Window Flow Control’, Computer Communications (ACM) 2/4: 161-164.
Chatterjee, A., Nicholas Georganas & P. K. Verma (1977) ‘Analysis of a Packet-Switched Network with End-to-End Congestion Control and Random Routing’, IEEE Transactions on Communications 25/12: 1485-9.
Collins, Harry (1985) Changing Order: Replication and Induction in Scientific Practice (London: Sage Publications).
Coviello, Gino (1979) ‘Comparative Discussion of Circuit- vs. Packet-Switched Voice’, IEEE Transactions on Communications 27/8: 1153-60.
Cristal, Edward (1966) ‘Coupled-Transmission-Line Directional Couplers with Coupled Lines of Unequal Characteristic Impedances’, IEEE Transactions on Microwave Theory and Techniques 14/7: 114-9.
David, Paul (2001) ‘The Beginnings and Prospective Ending of "End-to-End": An Evolutionary Perspective on the Internet’s Architecture’, working paper, available at http://www-econ.stanford.edu/faculty/workp/swp01012.pdf. Last accessed January 13, 2005.
Davies, Donald (1972) ‘The Control of Congestion in Packet-Switching Networks’, IEEE Transactions on Communications 20/3: 546-50.
Decina, Maurizio (1982) ‘Managing ISDN through International Standards Activities’, IEEE Communications Magazine 20/5: 19-25.
Digital Millennium Copyright Act [‘DMCA’] (1998); Pub. L. No. 105-304, 112 Stat. 2860 (1998) (codified at 17 U.S.C. 512 and in various sections of Chapter 12 of Title 17 of the U.S.C.).
Edge, David (1974) ‘Technological Metaphor and Social Control’, New Literary History 6/1: 135-47.
Fry, Michael (1992) ‘Practical Trade-offs for Open Interconnection’, Proceedings of the 1992 ACM Annual Conference on Communications: 1-8. Kansas City, MO.
Getgen, Lawrence (1967) ‘Design of the 46B Cable Carrier Terminal’, IEEE Transactions on Communication Technology 15/4: 637-42.
Gien, Michael & Hubert Zimmerman (1979) ‘Design Principles for Network Interconnection’, in Proceedings of the Sixth Symposium on Data Communications (ACM): 109-19. Pacific Grove, CA.
Gieryn, Thomas (1999) Cultural Boundaries of Science (Chicago: University of Chicago Press).
Gillespie, Tarleton (2004) ‘Copyright and Commerce: The DMCA, Trusted Systems, and the Stabilization of Distribution’, The Information Society 24/4: 239-54.
Girard, Andre & Yves Ouimet (1983) ‘End-to-End Blocking for Circuit-Switched Networks: Polynomial Algorithms for Some Special Cases’, IEEE Transactions on Communications 31/12: 1269-73.
Gitman, Israel (1976) ‘Comparison of Hop-By-Hop and End-To-End Acknowledgment Schemes in Computer Communication Networks’, IEEE Transactions on Communications 24/11: 1258-62.
Gruber, John & Nguyen Le (1983) ‘Performance Requirements for Integrated Voice/Data Networks’, IEEE Journal on Selected Areas in Communications 1/6: 981-1005.
Isenberg, David (1997) ‘The Rise of the Stupid Network’, Computer Telephony, August: 16-26.
Johns, Adrian (1998) The Nature of the Book: Print and Knowledge in the Making (Chicago: University of Chicago Press).
Kelcourse, Frank & Eugene Siegel (1983) ‘Switched Digital Capability: An Overview’, IEEE Communications Magazine, January: 29-31.
Kleinrock, Leonard & Parviz Kermani (1980) ‘Static Flow Control in Store-and-Forward Computer Networks’, IEEE Transactions on Communications 28/2: 271-9.
Lampson, Butler (1983) ‘Hints for Computer System Design’, ACM Operating Systems 15/5: 33-48.
Lantz, Keith, Judy Edighoffer & Bruce Hitson (1986) ‘Towards a Universal Directory Service’, Operating Systems Review 20/2: 43-53.
Latour, Bruno (1987) Science in Action: How to Follow Scientists and Engineers through Society (Cambridge, MA: Harvard University Press).
Lavia, A. & D. J. Rhynas (1979) ‘A Virtual Circuit Based Transaction Service’, ACM Proceedings of the Sixth Symposium on Data Communications: 69-74.
Lessig, Lawrence (2000) ‘It’s the Architecture, Mr. Chairman’, op-ed version of testimony given ‘In the Matter of the AT&T / MediaOne Merger’, available at http://www.lessig.org/content/testimony/cable/Cable.html. Last accessed January 13, 2005.
--- (2001a) The Future of Ideas: The Fate of the Commons in a Connected World (New York: Random House).
--- (2001b) ‘The Internet Under Siege’, Foreign Policy, November 1, available at http://www.lessig.org/content/columns/foreignpolicy1.pdf. Last accessed January 13, 2005.
--- (2002) ‘A Threat to Innovation on the Web’, Financial Times (December 12), available at http://www.interesting-people.org/archives/interesting-people/200212/msg00053.html. Last accessed January 13, 2005.
Long, Tony (2004) ‘It’s Just the "internet" Now’, Wired (August 16), available at http://www.wired.com/news/culture/0,1284,64596,00.html. Last accessed January 13, 2005.
Morling, Richard, Gerald Cain, Giovanni Neri, Marco Longhi-Gelati & Pierantonio Natali (1983) ‘Design of a High-speed Word-Switched Transport Station’, IEEE Journal on Selected Areas in Communications 1/5: 740-50.
Mulkay, Michael (1976) ‘Norms and Ideology of Science’, Social Science Information 15: 637-56.
National Research Council: Committee on the Internet in the Evolving Information Infrastructure, Computer Science and Telecommunications Board (2001) The Internet’s Coming of Age (Washington, D.C.: The National Academies Press).
Nissenbaum, Helen (2004) ‘Hackers and the Contested Ontology of Cyberspace’, New Media and Society 6/2: 195-217.
Papantoni-Kazakos, P. (1979) ‘The Potential of End-to-End Observations in Trouble Localization and Quality Control of Network Links’, IEEE Transactions on Communications 27/1: 16-30.
Pinch, Trevor & Wiebe Bijker (1987) ‘The Social Construction of Facts and Artifacts: Or, How the Sociology of Science and the Sociology of Technology Might Benefit Each Other’, in Wiebe Bijker, Thomas P. Hughes & Trevor Pinch (eds), The Social Construction of Technological Systems (Cambridge, MA: MIT Press): 17-50.
Poling, Philip (1967) ‘Engineering and Application Features of the 4161B Cable Carrier System’, IEEE Transactions on Communication Technology 15/4: 642-7.
Postel, Jon (1980) ‘Internetwork Protocol Approaches’, IEEE Transactions on Communications 28/4: 604-11.
Pouzin, Louis (1981) ‘Methods, Tools, and Observations on Flow Control in Packet-Switched Data Networks’, IEEE Transactions on Communications 29/4: 413-26.
Reed, David (2000) ‘The End of the End-to-End Argument’, online post, available at http://www.reed.com/dprframeweb/dprframe.asp?section=paper&fn=endofendtoend.html. Last accessed January 13, 2005.
Reed, David, Jerome Saltzer & David Clark (1998) ‘Active Networking and End-to-End Arguments’, online commentary, available at http://web.mit.edu/Saltzer/www/publications/endtoend/ANe2ecomment.html. Last accessed January 13, 2005.
Rybczynski, Antony (1980) ‘X.25 Interface and End-to-End Virtual Circuit Service Characteristics’, IEEE Transactions on Communications 28/4: 500-10.
Saltzer, Jerome, David Reed & David Clark (1984) ‘End-to-End Arguments in System Design’, ACM Transactions on Computer Systems 2/4: 277-88. Available at http://www.reed.com/Papers/EndtoEnd.html. Last accessed January 13, 2005.
Shapin, Steven & Simon Schaffer (1985) Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life (Princeton: Princeton University Press).
Svobodova, Liba (1984) ‘File Servers for Network-based Distributed Systems’, ACM Computing Surveys 16/4: 353-98.
Tanenbaum, Andrew & Robbert Van Renesse (1985) ‘Distributed Operating Systems’, ACM Computing Surveys 17/4: 419-70.
Tennenhouse, David, Jonathan Smith, W. David Sincoskie, David Wetherall & Gary Minden (1997) ‘A Survey of Active Network Research’, IEEE Communications Magazine 35/1: 80-86.
Tennenhouse, David & David Wetherall (1996) ‘Toward an Active Network Architecture’, ACM Computer Communication Review 26/2: 5-18.
Tobagi, Fouad, Flaminio Borgonovo & Luigi Fratta (1983) ‘Expressnet: A High-Performance Integrated-Services Local Area Network’, IEEE Journal on Selected Areas in Communications 1/5: 898-913.
Turner, Frederick (1999) ‘Cyberspace as the New Frontier? Mapping the Shifting Boundaries of the Network Society’, posted to the Red Rock News Eater Service, available at http://groups.yahoo.com/group/rre/message/1099. Last accessed January 13, 2005.
Turner, Frederick (2002) ‘From Counterculture to Cyberculture: How Stewart Brand and the Whole Earth Catalog Brought Us Wired Magazine’, Ph.D. dissertation, University of California, San Diego.
Voydock, Victor & Stephen Kent (1983) ‘Security Mechanisms in High-Level Network Protocols’, ACM Computing Surveys 15/2: 135-71.
Wong, J. W. (1978) ‘Queueing Network Modeling of Computer Communication Networks’, ACM Computing Surveys 10/3: 341-51.
Wong, J. W., Jacques Sauve & James Field (1982) ‘A Study of Fairness in Packet-Switching Networks’, IEEE Transactions on Communications 30/2: 346-53.
Wyatt, Sally (2000) ‘Talking About the Future: Metaphors of the Internet’, in Nik Brown, Brian Rappert & Andrew Webster (eds), Contested Futures: A Sociology of Prospective Techno-Science (Aldershot: Ashgate Press): 109-26.
Yuill, Stuart & Raymond Pickholtz (1981) ‘Performance Modeling for Packet Networks with Satellite Overflow Channels’, IEEE Transactions on Communications 29/6: 808-15.

Notes

I want to thank in particular Professors Jerome Saltzer, David Reed, and David Clark for the insights they provided into the intellectual context of their paper. I am also grateful for the helpful comments of Fred Schneider, Chandra Mukerji, Fred Turner, Josh Greenberg, Shay David, and especially the patience and support of Lucy Suchman and Michael Lynch.


1 For the remainder of this essay, I will continue to capitalize ‘Internet’, despite the suggestion of the Wired News copyeditor. I hope that the argument that follows, like others in this field, suggests that this technology is still far from stabilized as a sociocultural phenomenon.

2 I use the term ‘Internet culture’ not to mean ‘cyberculture’, with its implicit contrast to or rejection of realspace activity; instead I use it in the way Adrian Johns (1998) uses ‘print culture’ -- the technology as well as the institutions, conventions, and rules that develop around it and help make it what it is, and give the technological potential its social particularity.

3 AT&T v. City of Portland, 2000 U.S. App. LEXIS 14383, *23 (9th Cir. 22 June, 2000).

4 The Oxford English Dictionary contains one reference to the phrase in this specific meaning, a comment from Tyndall in 1860 that ‘39000 waves of red light placed end to end would make up an inch’.

5 ‘The Air Ground-pipe, laid the whole length of the Green-house ... and reaching from end to end’ (Evelyn, 1664: cited in the Oxford English Dictionary).

6 See, for example, Abbate (1999: 7-42).

7 A particularly useful discussion of these two approaches appears in Postel (1980); in some sense, his discussion is parallel to Saltzer, Reed, and Clark’s, without the normative claim that one design model is preferable over the other. See also Abbate (1999: 156-61).

8 I use transmission reliability merely as one example; while it was perhaps the biggest concern in the debate between the virtual circuit and datagram models, a similar discussion developed in regards to management of network congestion and delay, the process of alerting a receiver that data is on its way, and early considerations of security.

9 Generally, see Abbate (1999).

10 See Blumenthal and Clark (2001) for an excellent discussion of these changes and the pressure they exert on traditional network design principles.

11 Prof. Fred Schneider, informal communication, September 2, 2004.

12 It is important to note that David Clark was a special advisor to the committee that crafted this research document.

13 In their paper, they describe this principle as applying to ‘systems’, which in their discourse include ‘networks’, but also computer systems, operating systems, and the like. Still, nearly all of their examples are of networks, and they speak directly to the debates about network design; in some sense, the expansion of this point to all ‘systems’ is a way to give it even more footing in the network argument -- a principle that is undeniable because it is in fact bigger than the question of network design.

14 For a consideration of their argument, see Tanenbaum and Van Renesse (1985).

15 See Blumenthal and Clark (2001); Reed (2000); Reed, Saltzer, and Clark (1998).

16 For example: Lampson (1983), Lantz, Edighoffer, and Hitson (1986), and Svobodova (1984), who are generally in agreement with the end-to-end principle; Fry (1992), and Tanenbaum and van Renesse (1985), who are somewhat more critical.

17 See, generally, NRC (2001: 107-150). See also Tennenhouse and Wetherall (1996); Tennenhouse, Smith, Sincoskie, Wetherall, and Minden (1997); Reed, Saltzer, and Clark (1998).

18 For more on Internet copyright controversies, see Gillespie (2003).

19 And, in case this use of the term seems too far removed from the engineering discourse in which it originated to even make the comparison, rest assured that we have not traveled so far; Lessig, in his expert testimony for the Napster case, quotes the AT&T v. City of Portland decision in yet another discussion of ‘end-to-end’, and cites Saltzer, Reed, and Clark’s essay as the origin of the term. See ‘Expert Report of Professor Lawrence Lessig Pursuant to Federal Rule of Civil Procedure 26(a)(2)(B)’, June 2000; A&M Records, Inc. v. Napster, Inc., 239 F. 3d 1004 (9th Cir., 2001).

20 Ironically, the Saltzer, Reed, and Clark piece uses both an ‘end-to-end’ spatial metaphor as well as an ‘upper-lower’ one -- but, in their model, the ends reside at the upper level. To the extent that we typically imagine the control of power at a ‘higher’ point than those subject to that power, their use of a ‘high-low’ metaphor disrupts the mapping of their claim onto traditional understandings of control.

21 See Turner (2002).

22 AT&T press release, ‘AT&T and Time Warner Form Strategic Relationship to Offer Cable Telephony’, available at http://articles.findarticles.com/p/articles/mi_m0UNZ/is_1999_Feb_8/ai_53732153. Last accessed March 2, 2005.

23 Sun Microsystems press release, ‘Sun Showcases End-To-End Solutions for Building and Deploying Interactive Digital Television and Broadband Services at NCTA’s Cable 2000’, available at http://www.sun.com/smi/Press/sunflash/2000-05/sunflash.20000508.1.html. Last accessed March 2, 2005.

24 Ericsson press release, ‘Ericsson mobile Internet network architecture -- end-to-end technology overview’, available at http://www.ericsson.hu/press/eripress_2000_12_01_230.shtml. Last accessed February 6, 2003.

25 IBM and Nokia press release, ‘Nokia and IBM to Provide End-To-End Secured Enterprise Business Applications on Advanced Mobile Handsets’, available at http://www.nokiausa.com/about/newsroom/article/1,1046,986,00.html. Last accessed March 2, 2005.

26 ‘p2p’, or ‘peer-to-peer’, is itself another token, and a particularly powerful one, in the copyright debate. It borrows much of its discursive force from ‘end-to-end’, adding a further connotation of egalitarianism with the word ‘peer’. I intend to discuss this example fully in a more thorough version of this analysis.

27 See the memorandum opinion in RIAA v. Verizon, available at http://www.eff.org/Cases/RIAA_v_Verizon/20030121-riaa-v-verizon-order.pdf.

28 Prof. David Reed, personal communication, December 2, 2004.

29 Ibid.