Collaboration with Cloud Computing



CHAPTER 2 Cloud Computing

INFORMATION INCLUDED IN THIS CHAPTER

• Protocols used
• Defining the cloud
• Software as a service
• Infrastructure as a service
• Platform as a service

INTRODUCTION

It's nothing new to suggest that more and more of life revolves around our interactions with computers and, more specifically, with other computers and people over the Internet. We are finding more and more ways to get things done using the Internet connections that have become more and more ubiquitous. Coffee shops, fast-food places, more traditional restaurants, health professionals. What do they all have in common? Often, they have complimentary wireless access, providing us with a way to get online. Additionally, people are getting so-called smartphones, which also provide them with a way to stay online from nearly anywhere they might be. We can complain about texting and driving, but it's not just SMS. How many people do you know who Facebook and drive?

So, what does all this have to do with the cloud? Well, we are more and more connected, and many of the ways we are connected these days are through some sort of "in the cloud" technology. Businesses are also finding ways to streamline their operations and reduce infrastructure, operations footprint, and overall cost. They are doing this in a number of ways because "the cloud" (and I'll stop putting it in quotes soon) has a number of facets. It's not simply a place to stick all of your iTunes library or a bunch of documents you want to share with someone else. It has become a lot more, and there are a number of technologies that have allowed all of this to happen, finally.

What makes me say "finally"? Well, the reality is that technology has been going back and forth between local and remote for a while now. Life started with everything centralized, since all we had were very large systems. There was no way to have local storage, so it had to be centralized. In the 1980s, the arrival of PCs allowed for local storage and a movement away from the exclusively centralized, mainframe approach, although it took some time for local storage to get big enough and cheap enough to really be effective. Applications were a mix of local and centralized, though.

Collaboration with Cloud Computing. © 2014 Elsevier Inc. All rights reserved.

In the 1990s, there was again a movement toward centralized, remote applications as the Web took off. There was a lot of talk about thin clients and dumb terminals making use of applications living on remote servers, using Web interfaces or even remote procedures or methods. The thin client and dumb terminal approach never really took off, and there was still a preponderance of local storage and local applications. The mix of financials and functionality was never really able to make the push to using remote applications over local applications. In the late 1990s and early 2000s, I was working at a large network service provider, and we started talking about offering services "in the cloud," which was the first time I had heard that term. When the dotcom collapse really started taking everyone down, the potential clients and the resources necessary to create those services and applications were just no longer available. Again, services and applications remained local.

Now, we're into the 2010s, and you'll hear "in the cloud" everywhere you go. There is an enormous movement of applications and functionality to a remote model, with remote functionality and remote storage. What is it that has made that possible? Some of it has been the rise in virtualization technology, some has been an improvement in the protocols, and there are also the economies of scale that come with enormous storage and processing power. On top of that, last-mile network services like cable modems, Digital Subscriber Line (DSL), and cellular data networking have provided the ability to have Internet services nearly everywhere you go. All of that storage and processing can be shared across a large number of clients, and it makes sense as a business opportunity as well as a way to cut costs and reduce internal risk to the organization, especially when you consider the large number of potential customers and their ability to consume services from anywhere.

The protocols that made it possible

Cloud services didn't just happen overnight, of course, and there are a number of incremental advances that have taken place in how we offer services and applications over networks that have made cloud services possible. While the origins of many of these concepts go back decades, at least in their conceptual stages, the concrete introduction of some of the really important ones began in the late 1980s, just as the Internet itself was really starting to come together, with all of the disparate research and educational networks being pulled together into a larger, single network that was being called the Internet.


HTTP and HTML

Hypertext has been a concept since at least the 1960s, and it was made possible in the late 1980s with the introduction of a protocol, a language, and a set of applications that brought it all together. Initially, this was done simply to share information between scientists and researchers around the world. Eventually, the rest of the world started to figure out all of the cool and interesting things that could be done using a couple of very simple protocols. The implementation of hypertext in language form is the HyperText Markup Language (HTML). A basic example of HTML would be something like

<html>
<head>
<title>My page</title>
</head>
<body text="#000000" bgcolor="#FFFFFF">
Hello, everybody!
</body>
</html>

You can see that HTML is a reasonably easy-to-read, structured formatting language. It uses tags to indicate elements, and each tag could potentially have parameters that alter the behavior of the tag. An example is the body tag, which has two parameters. The first, text, sets the color of the text, while the second parameter, bgcolor, sets the color of the background of the page. It's not challenging to learn basic HTML, but basic HTML wasn't enough, so it evolved into Dynamic HTML (DHTML), because static text wasn't particularly interesting and it was thought that people wanted movement and interaction in their Web pages. DHTML is a combination of HTML with a client-side scripting language like JavaScript, providing a more interactive experience with what was previously a static page.
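As a minimal sketch of the DHTML idea, the page below combines markup with a small piece of JavaScript so that clicking the paragraph changes its text. The function name greet and the id message are invented for this illustration:

```html
<html>
<head>
<title>My page</title>
<script>
/* greet() and the id "message" are hypothetical names for this sketch */
function greet() {
  document.getElementById("message").innerHTML = "Hello, everybody!";
}
</script>
</head>
<body>
<p id="message" onclick="greet()">Click this text to change it.</p>
</body>
</html>
```

The markup still describes the structure of the page; the script simply rewrites part of that structure in response to a user event, which is essentially all DHTML is.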

It's important to note here that HTML continues to go through a lot of evolution, and the latest version, HTML5, is significantly different from previous versions. The parameters mentioned above are no longer supported in HTML5. HTML5 is part of the continued evolution toward more interactive Web pages. HTML5 has not yet stabilized as a standard at the time of this writing, though candidate recommendations are being released. The expectation is that a final, stable standard will be available in 2014. In addition to markup, HTML5 has support for application programming interfaces (APIs), which previous versions of HTML did not have.
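To make the change concrete, here is a sketch of the earlier page rewritten in the HTML5 style, where the deprecated text and bgcolor parameters are replaced by CSS (the specific colors are just illustrative):

```html
<!DOCTYPE html>
<html>
<head>
<title>My page</title>
<style>
/* CSS now carries the presentation the old text/bgcolor parameters did */
body { color: #000000; background-color: #FFFFFF; }
</style>
</head>
<body>
Hello, everybody!
</body>
</html>
```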

The HTML is the visual representation, but we still need a way to get the HTML to us. Certainly, existing protocols like FTP could have been used, but the existing protocols didn't offer the right level of information necessary to provide functions like detailed errors or redirection. When you are interacting with a Web server, you may generate errors on the server for a number of reasons, including the page not being found or a more serious error with the server itself. You might also generate a server-side error if the programmatic components on the server had an error in them. These types of interactions would be impossible with previous document retrieval protocols like FTP. Additionally, FTP was designed in a way that might


cause problems on the application side. As a result of all of this, Tim Berners-Lee and his team at the European Organization for Nuclear Research (CERN) developed a new protocol called the HyperText Transfer Protocol (HTTP), designed to transfer HTML documents from a server to a client. The initial version of HTTP was very simple, providing a way to make a request to the server and get a response back. Later, the protocol was altered to include more information, like metadata, extended negotiation, and extended operations.

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Connection: keep-alive
Host: www.washere.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36

The listing shows the headers of a sample HTTP request using a current version of the protocol. The original published version was 0.9; version 1.0 was published in 1996, followed by 1.1 in 1997, which, after some modifications in 1999, is where we are today. HTTP is one of the fundamental building blocks of the way applications are delivered today. The methods that HTTP allows, in addition to the data that can be conveyed using HTTP, make it a very flexible protocol for delivering information and functionality in interacting with users. Being reasonably simple and extensible has made it very useful.
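As a sketch of the request/response model described above, the short Python script below stands up a tiny local Web server and issues two GET requests against it: one for a page that exists, and one that produces the detailed "page not found" error that older protocols like FTP could not express. Everything here (the page content, the paths, the port handling) is invented for illustration, using only the Python standard library:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class DemoHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive: one connection serves both requests

    def do_GET(self):
        if self.path == "/":
            payload = b"<html><body>Hello, everybody!</body></html>"
            self.send_response(200)  # success
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)
        else:
            self.send_response(404)  # detailed error: page not found
            self.send_header("Content-Length", "0")
            self.end_headers()

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

# Port 0 asks the OS for any free port, so the sketch never collides.
server = HTTPServer(("127.0.0.1", 0), DemoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/", headers={"Accept": "text/html"})
resp = conn.getresponse()
status_ok, body = resp.status, resp.read()

conn.request("GET", "/no-such-page")
status_missing = conn.getresponse().status

conn.close()
server.shutdown()
print(status_ok, status_missing)
```

Both the rich error (404) and the successful document transfer travel over the same simple request/response exchange, which is exactly the flexibility the chapter describes.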

XML

At this point, we have a way of describing what a page of information should look like and a way of delivering the pages to clients so they can be displayed. There are even programming languages that can be used. One thing missing from what we've been talking about so far is a way of transmitting complex data from the client to the server and vice versa. HTTP allows for passing parameters but, while you can provide a variable name for each parameter, you can't describe the data at all