
INTERNET I.T

TELECOMMUNICATION NETWORK

A telecommunications network is a collection of terminals, links and nodes which connect together to enable telecommunication between users of the terminals. Networks may use circuit switching or message switching. Each terminal in the network must have a unique address so messages or connections can be routed to the correct recipients. The collection of addresses in the network is called the address space.

The links connect the nodes together and are themselves built upon an underlying transmission network which physically pushes the message across the link.

Examples of telecommunications networks are:

- computer networks
- the Internet
- the telephone network
- the global Telex network
- the aeronautical ACARS network

Messages and protocols

Messages are generated by a sending terminal, then pass through the network of links and nodes until they arrive at the destination terminal. It is the job of the intermediate nodes to handle the messages and route them down the correct link toward their final destination.

The messages consist of control (or signaling) and bearer parts which can be sent together or separately. The bearer part is the actual content that the user wishes to transmit (e.g. some encoded speech, or an email) whereas the control part instructs the nodes where and possibly how the message should be routed through the network. A large number of protocols have been developed over the years to specify how each different type of telecommunication network should handle the control and bearer messages to achieve this efficiently.

Components

All telecommunication networks are made up of five basic components that are present in each network environment regardless of type or use. These basic components include terminals, telecommunications processors, telecommunications channels, computers, and telecommunications control software.

Terminals are the starting and stopping points in any telecommunication network environment. Any input or output device that is used to transmit or receive data can be classified as a terminal component. Telecommunications processors support data transmission and reception between terminals and computers by providing a variety of control and support functions (e.g. converting data from digital to analog and back). Telecommunications channels are the way by which data is transmitted and received; they are created through a variety of media, of which the most popular include copper wires and coaxial cables, while fiber-optic cables are increasingly used to bring faster and more robust connections to businesses and homes. In a telecommunication environment, computers are connected through media to perform their communication assignments. Telecommunications control software is present on all networked computers and is responsible for controlling network activities and functionality. Early networks were built without computers, but late in the 20th century their switching centers were computerized or the networks were replaced with computer networks.

Network structure

In general, every telecommunications network conceptually consists of three parts, or planes (so called because they can be thought of as being, and often are, separate overlay networks):

- The control plane carries control information (also known as signalling).
- The data plane (also known as the user plane or bearer plane) carries the network's users' traffic.
- The management plane carries the operations and administration traffic required for network management.

Example: the TCP/IP data network

The data network is used extensively throughout the world to connect individuals and organizations. Data networks can be connected together to allow users seamless access to resources that are hosted outside of the particular provider they are connected to. The Internet is the best example of many data networks from different organizations all operating under a single address space.

Terminals attached to TCP/IP networks are addressed using IP addresses. There are different types of IP address, but the most common is IP Version 4. Each unique address consists of 4 integers between 0 and 255, usually separated by dots when written down, e.g. 82.131.34.56.
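
As a small illustration (not part of the original text), the Python sketch below checks that a dotted-quad string is a well-formed IPv4 address: exactly four integer fields, each between 0 and 255.

# Minimal sketch: check a dotted-quad IPv4 address (four integers, each 0-255).
def is_valid_ipv4(address):
    fields = address.split(".")
    if len(fields) != 4:
        return False
    for field in fields:
        if not field.isdigit() or not 0 <= int(field) <= 255:
            return False
    return True

print(is_valid_ipv4("82.131.34.56"))    # True  (the example address above)
print(is_valid_ipv4("82.131.34.300"))   # False (300 is outside 0-255)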

TCP/IP are the fundamental protocols that provide the control and routing of messages across the data network. There are many different network structures that TCP/IP can be used across to efficiently route messages, for example:

- wide area networks (WAN)
- metropolitan area networks (MAN)
- local area networks (LAN)
- campus area networks (CAN)
- virtual private networks (VPN)

The individual networks connected by a MAN generally belong to a single organization. The equipment that interconnects the network, the links, and the MAN itself are often owned by an association or a network provider that provides or leases the service to others. A MAN is a means for sharing resources at high speeds within the network, and it often provides connections to WAN networks for access to resources outside the scope of the MAN.

Related topics include:

- Active networking
- Access network
- Core network
- Coverage (telecommunication)
- Double-ended synchronization
- Federation (information technology)
- MVNE
- MVNO
- Network node
- Nanoscale network
- Optical fiber
- Submarine communications cable
- Optical mesh network

Wireless Application Protocol

Wireless Application Protocol (WAP) is an open international standard for application-layer communications in a wireless environment. A WAP browser is a web browser designed for small mobile devices such as cell phones.

Before the introduction of WAP, mobile service providers had extremely limited opportunities to offer interactive data services, but needed interactivity to support Internet and Web applications such as:

- Email by mobile phone
- Tracking of stock-market prices
- Sports results
- News headlines
- Music downloads

The Japanese i-mode system offers another major competing wireless data protocol.

The bottom-most protocol in the suite, the WAP Datagram Protocol (WDP), functions as an adaptation layer that makes every data network look a bit like UDP to the upper layers by providing unreliable transport of data with two 16-bit port numbers (origin and destination). All the upper layers view WDP as one and the same protocol, which has several "technical realizations" on top of other "data bearers" such as SMS, USSD, etc. On native IP bearers such as GPRS, UMTS packet-radio service, or PPP on top of a circuit-switched data connection, WDP is in fact exactly UDP.
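
Since WDP on an IP bearer is exactly UDP, a plain UDP datagram is a reasonable way to picture the service WDP offers its upper layers: unreliable delivery plus 16-bit origin and destination ports. The Python sketch below is illustrative only; the destination address stands in for a WAP gateway and the port numbers are arbitrary.

import socket

# Sketch: one unreliable datagram with explicit 16-bit origin and destination
# ports, which is the service WDP exposes to the WAP upper layers. The
# destination address stands in for a WAP gateway; the ports are arbitrary.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP = datagram service
sock.bind(("", 49152))                                     # origin port
sock.sendto(b"wsp-request-bytes", ("127.0.0.1", 9200))     # destination port
sock.close()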

WTLS, an optional layer, provides a public-key cryptography-based security mechanism similar to TLS.

WTP provides transaction support (reliable request/response) adapted to the wireless world. WTP handles packet loss more effectively than TCP; packet loss occurs commonly in 2G wireless technologies in most radio conditions but is misinterpreted by TCP as network congestion.

Finally, one can think of WSP initially as a compressed version of HTTP.

This protocol suite allows a terminal to transmit requests that have an HTTP or HTTPS equivalent to a WAP gateway; the gateway translates requests into plain HTTP.

Wireless Application Environment (WAE)

The WAE space defines application-specific markup languages.

For WAP version 1.X, the primary language of the WAE is WML. In WAP 2.0, the primary markup language is XHTML Mobile Profile.

History

The WAP Forum dates from 1997. It aimed primarily to bring the various wireless technologies together in a standardised protocol. In 2002 the WAP Forum was consolidated (along with many other industry forums) into the Open Mobile Alliance (OMA), which covers virtually all future development of wireless data services.

WAP Push development

WAP Push Process

WAP Push has been incorporated into the specification to allow WAP content to be pushed to the mobile handset with minimum user intervention. A WAP Push is basically a specially encoded message which includes a link to a WAP address. WAP Push is specified on top of WDP; as such, it can be delivered over any WDP-supported bearer, such as GPRS or SMS. Most GSM networks have a wide range of modified processors, but GPRS activation from the network is not generally supported, so WAP Push messages have to be delivered on top of the SMS bearer.

On receiving a WAP Push, a WAP 1.2 (or later) enabled handset will automatically give the user the option to access the WAP content. This is also known as WAP Push SI (Service Indication). A variant, known as WAP Push SL (Service Loading), directly opens the browser to display the WAP content without user interaction. Since this behaviour raises security concerns, some handsets handle WAP Push SL messages in the same way as SI, by providing user interaction.

The network entity that processes WAP Pushes and delivers them over an IP or SMS bearer is known as a Push Proxy Gateway.

ITS Features

Since the processing of the SAP Internet Applications Components (IACs) takes place in the SAP System, the benefits of the SAP infrastructure apply immediately to Web applications. Some of the features of ITS are:

- Scalability
- Handling of logon and user sessions in the SAP System
- Transactional consistency for Web applications
- Multi-language capabilities
- Code page conversions
- Full integration with the ABAP Workbench
- Change and Transport System
- User management and authorization concept

ITS Architecture

The Internet Transaction Server (ITS) is the link between the Web and SAP. It is composed of two separate programs: WGate (Web Gateway) and AGate (Application Gateway), which may reside on the same computer or on separate computers connected by a TCP/IP network.

The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic, wireless and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support electronic mail.

Most traditional communications media including telephone, music, film, and television are reshaped or redefined by the Internet, giving birth to new services such as Voice over Internet Protocol (VoIP) and IPTV. Newspaper, book and other print publishing are adapting to Web site technology, or are reshaped into blogging and web feeds. The Internet has enabled or accelerated new forms of human interactions through instant messaging, Internet forums, and social networking. Online shopping has boomed both for major retail outlets and small artisans and traders. Business-to-business and financial services on the Internet affect supply chains across entire industries.

The origins of the Internet reach back to research of the 1960s, commissioned by the United States government in collaboration with private commercial interests to build robust, fault-tolerant, and distributed computer networks. The funding of a new U.S. backbone by the National Science Foundation in the 1980s, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The commercialization of what was by the 1990s an international network resulted in its popularization and incorporation into virtually every aspect of modern human life. As of 2009, an estimated quarter of Earth's population used the services of the Internet.

The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own standards. Only the overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.

At the IPTO, Licklider's successor Ivan Sutherland in 1965 got Lawrence Roberts to start a project to make a network, and Roberts based the technology on the work of Paul Baran, who had written an exhaustive study for the United States Air Force that recommended packet switching (opposed to circuit switching) to achieve better network robustness and disaster survivability. Roberts had worked at the MIT Lincoln Laboratory originally established to work on the design of the SAGE system. UCLA professor Leonard Kleinrock had provided the theoretical foundations for packet networks in 1962, and later, in the 1970s, for hierarchical routing, concepts which have been the underpinning of the development towards today's Internet.

Sutherland's successor Robert Taylor convinced Roberts to build on his early packet switching successes and come and be the IPTO Chief Scientist. Once there, Roberts prepared a report called Resource Sharing Computer Networks which was approved by Taylor in June 1968 and laid the foundation for the launch of the working ARPANET the following year.

After much work, the first two nodes of what would become the ARPANET were interconnected between Kleinrock's Network Measurement Center at UCLA's School of Engineering and Applied Science and Douglas Engelbart's NLS system at SRI International (SRI) in Menlo Park, California, on 29 October 1969. The third site on the ARPANET was the Culler-Fried Interactive Mathematics center at the University of California at Santa Barbara, and the fourth was the University of Utah Graphics Department. In an early sign of future growth, there were already fifteen sites connected to the young ARPANET by the end of 1971.

In an independent development, Donald Davies at the UK National Physical Laboratory developed the concept of packet switching in the early 1960s, first giving a talk on the subject in 1965, after which the teams in the new field from the two sides of the Atlantic first became acquainted. It was actually Davies' coinage of the words packet and packet switching that was adopted as the standard terminology. Davies also built a packet-switched network in the UK, the Mark I, in 1970. Bolt, Beranek & Newman (BBN), the private contractors for ARPANET, set out to create a separate commercial version after "value added carriers" were legalized in the U.S. The network they established was called Telenet and began operation in 1975, installing free public dial-up access in cities throughout the U.S. Telenet was the first packet-switching network open to the general public.

Following the demonstration that packet switching worked on the ARPANET, the British Post Office, Telenet, DATAPAC and TRANSPAC collaborated to create the first international packet-switched network service, referred to as the International Packet Switched Service (IPSS), in 1978. The collection of X.25-based networks grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. The X.25 packet switching standard was developed in the CCITT (now called ITU-T) around 1976. X.25 was independent of the TCP/IP protocols that arose from the experimental work of DARPA on the ARPANET, Packet Radio Net, and Packet Satellite Net during the same time period.

The early ARPANET ran on the Network Control Program (NCP), implementing the host-to-host connectivity and switching layers of the protocol stack, designed and first implemented in December 1970 by a team called the Network Working Group (NWG) led by Steve Crocker. To respond to the network's rapid growth as more and more locations connected, Vinton Cerf and Robert Kahn developed the first description of the now widely used TCP protocols during 1973 and published a paper on the subject in May 1974. Use of the term "Internet" to describe a single global TCP/IP network originated in December 1974 with the publication of RFC 675, the first full specification of TCP that was written by Vinton Cerf, Yogen Dalal and Carl Sunshine, then at Stanford University. During the next nine years, work proceeded to refine the protocols and to implement them on a wide range of operating systems. The first TCP/IP-based wide-area network was operational by 1 January 1983 when all hosts on the ARPANET were switched over from the older NCP protocols.

[Figure: T3 NSFNET backbone, c. 1992]

In 1985, the United States' National Science Foundation (NSF) commissioned the construction of the NSFNET, a university 56 kilobit/second network backbone using computers called "fuzzballs" by their inventor, David L. Mills. The following year, NSF sponsored the conversion to a higher-speed 1.5 megabit/second network that became operational in 1988. A key decision to use the DARPA TCP/IP protocols was made by Dennis Jennings, then in charge of the Supercomputer program at NSF. The NSFNET backbone was upgraded to 45 Mbps in 1991 and decommissioned in 1995 when it was replaced by new backbone networks operated by commercial Internet Service Providers.

The opening of the NSFNET to other networks began in 1988. The US Federal Networking Council approved the interconnection of the NSFNET to the commercial MCI Mail system in that year and the link was made in the summer of 1989. Other commercial electronic mail services were soon connected, including OnTyme, Telemail and CompuServe. In that same year, three commercial Internet service providers (ISPs) began operations: UUNET, PSINet, and CERFNET. Important, separate networks that offered gateways into, then later merged with, the Internet include Usenet and BITNET. Various other commercial and educational networks, such as Telenet (by that time renamed to Sprintnet), Tymnet, CompuServe and JANET were interconnected with the growing Internet in the 1980s as the TCP/IP protocol became increasingly popular. The adaptability of TCP/IP to existing communication networks allowed for rapid growth. The open availability of the specifications and reference code permitted commercial vendors to build interoperable network components, such as routers, making standardized network gear available from many companies. This aided in the rapid growth of the Internet and the proliferation of local-area networking. It seeded the widespread implementation and rigorous standardization of TCP/IP on UNIX and virtually every other common operating system.

INTERNET TECHNOLOGY PROTOCOLS

The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF). The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. Resulting discussions and final standards are published in a series of publications, each called a Request for Comments (RFC), freely available on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies.

The Internet Standards describe a framework known as the Internet Protocol Suite. This is a model architecture that divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the Application Layer, the space for the application-specific networking methods used in software applications, e.g., a web browser program. Below this top layer, the Transport Layer connects applications on different hosts via the network (e.g., client–server model) with appropriate data exchange methods. Underlying these layers are the core networking technologies, consisting of two layers. The Internet Layer enables computers to identify and locate each other via Internet Protocol (IP) addresses, and allows them to connect to one another via intermediate (transit) networks. Lastly, at the bottom of the architecture is a software layer, the Link Layer, which provides connectivity between hosts on the same local network link, such as a local area network (LAN) or a dial-up connection. The model, also known as TCP/IP, is designed to be independent of the underlying hardware, which the model therefore does not concern itself with in any detail. Other models have been developed, such as the Open Systems Interconnection (OSI) model, but they are not compatible in the details of description or implementation; many similarities exist, however, and the TCP/IP protocols are usually included in the discussion of OSI networking.
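
To make the layering concrete, here is a minimal Python sketch (an illustration, not from the original text) that fetches a page from example.com; the comments note which layer of the Internet Protocol Suite each step relies on.

import socket

# Application Layer: the request is written in HTTP, an application-specific protocol.
request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

# Transport Layer: a TCP stream socket connects this client to the server application.
# Internet Layer: the host name is resolved to an IP address and IP routes the packets.
# Link Layer: the operating system and network hardware move frames on each local link.
with socket.create_connection(("example.com", 80)) as conn:
    conn.sendall(request)
    reply = b""
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break
        reply += chunk

print(reply.split(b"\r\n")[0].decode())   # e.g. "HTTP/1.1 200 OK"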

The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems (IP addresses) for computers on the Internet. IP enables internetworking and essentially establishes the Internet itself. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to ~4.3 billion (2^32) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which is estimated to enter its final stage in approximately 2011. A new protocol version, IPv6, was developed in the mid 1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 is currently in the commercial deployment phase around the world and Internet address registries (RIRs) have begun to urge all resource managers to plan rapid adoption and conversion. IPv6 is not interoperable with IPv4. It essentially establishes a "parallel" version of the Internet not directly accessible with IPv4 software. This means software upgrades or translator facilities are necessary for every networking device that needs to communicate on the IPv6 Internet. Most modern computer operating systems are already converted to operate with both versions of the Internet Protocol. Network infrastructures, however, are still lagging in this development. Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
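
The difference in addressing capacity between the two versions can be seen with Python's standard ipaddress module; the sketch below simply compares the sizes of the IPv4 and IPv6 address spaces.

import ipaddress

# IPv4: 32-bit addresses, roughly 4.3 billion in total.
print(ipaddress.ip_network("0.0.0.0/0").num_addresses)    # 4294967296 == 2**32

# IPv6: 128-bit addresses, a vastly larger space.
print(ipaddress.ip_network("::/0").num_addresses)          # 2**128, about 3.4e38

# The two versions are separate address families; a host needs both stacks
# (or a translator) to reach both.
print(ipaddress.ip_address("82.131.34.56").version)         # 4
print(ipaddress.ip_address("2001:db8::1").version)          # 6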

Structure

The Internet structure and its usage characteristics have been studied extensively. It has been determined that both the Internet IP routing structure and the hypertext links of the World Wide Web are examples of scale-free networks. Similar to the way the commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as GEANT, GLORIAD, Internet2 (successor of the Abilene Network), and the UK's national research and education network JANET. These in turn are built around smaller networks (see also the list of academic computer network organizations).

Many computer scientists describe the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system". The Internet is extremely heterogeneous; for instance, data transfer rates and physical characteristics of connections vary widely. The Internet exhibits "emergent phenomena" that depend on its large-scale organization. For example, data transfer rates exhibit temporal self-similarity. The principles of the routing and addressing methods for traffic in the Internet reach back to their origins in the 1960s, when the eventual scale and popularity of the network could not be anticipated. Thus, the possibility of developing alternative structures is being investigated.

Information

Many people use the terms Internet and World Wide Web, or just the Web, interchangeably, but the two terms are not synonymous. The World Wide Web is a global set of documents, images and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs). URIs symbolically identify services, web servers, file servers, and other databases, and the documents and resources that they provide; clients locate and access them using the Hypertext Transfer Protocol (HTTP), the primary carrier protocol of the Web. HTTP is only one of the hundreds of communication protocols used on the Internet. Web services may also use HTTP to allow software systems to communicate in order to share and exchange business logic and data.

World Wide Web browser software, such as Microsoft's Internet Explorer, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, let users navigate from one web page to another via hyperlinks embedded in the documents. These documents may also contain any combination of computer data, including graphics, sounds, text, video, multimedia and interactive content including games, office applications and scientific demonstrations. Through keyword-driven Internet research using search engines like Yahoo! and Google, users worldwide have easy, instant access to a vast and diverse amount of online information. Compared to printed encyclopedias and traditional libraries, the World Wide Web has enabled the decentralization of information.

The Web has also enabled individuals and organizations to publish ideas and information to a potentially large audience online at greatly reduced expense and time delay. Publishing a web page, a blog, or building a website involves little initial cost and many cost-free services are available. Publishing and maintaining large, professional web sites with attractive, diverse and up-to-date information is still a difficult and expensive proposition, however. Many individuals and some companies and groups use web logs or blogs, which are largely used as easily updatable online diaries. Some commercial organizations encourage staff to communicate advice in their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result. One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public's interest in their work. Collections of personal web pages published by large service providers remain popular, and have become increasingly sophisticated. Whereas operations such as Angelfire and GeoCities have existed since the early days of the Web, newer offerings from, for example, Facebook and MySpace currently have large followings. These operations often brand themselves as social network services rather than simply as web page hosts.

Advertising on popular web pages can be lucrative, and e-commerce or the sale of products and services directly via the Web continues to grow.

When the Web began in the 1990s, a typical web page was stored in completed form on a web server, formatted with HTML, ready to be sent to a user's browser in response to a request. Over time, the process of creating and serving web pages has become more automated and more dynamic. Websites are often created using content management or wiki software with, initially, very little content. Contributors to these systems, who may be paid staff, members of a club or other organization or members of the public, fill underlying databases with content using editing pages designed for that purpose, while casual visitors view and read this content in its final HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.

Communication

Electronic mail, or email, is an important communications service available on the Internet. The concept of sending electronic text messages between parties in a way analogous to mailing letters or memos predates the creation of the Internet. Pictures, documents and other files are sent as email attachments. Emails can be cc-ed to multiple email addresses.

Internet telephony is another common communications service made possible by the creation of the Internet. VoIP stands for Voice-over-Internet Protocol, referring to the protocol that underlies all Internet communication. The idea began in the early 1990s with walkie-talkie-like voice applications for personal computers. In recent years many VoIP systems have become as easy to use and as convenient as a normal telephone. The benefit is that, as the Internet carries the voice traffic, VoIP can be free or cost much less than a traditional telephone call, especially over long distances and especially for those with always-on Internet connections such as cable or ADSL. VoIP is maturing into a competitive alternative to traditional telephone service. Interoperability between different providers has improved and the ability to call or receive a call from a traditional telephone is available. Simple, inexpensive VoIP network adapters are available that eliminate the need for a personal computer.

Voice quality can still vary from call to call but is often equal to and can even exceed that of traditional calls. Remaining problems for VoIP include emergency telephone number dialing and reliability. Currently, a few VoIP providers provide an emergency service, but it is not universally available. Traditional phones are line-powered and operate during a power failure; VoIP does not do so without a backup power source for the phone equipment and the Internet access devices. VoIP has also become increasingly popular for gaming applications, as a form of communication between players. Popular VoIP clients for gaming include Ventrilo and Teamspeak. Wii, PlayStation 3, and Xbox 360 also offer VoIP chat features.

Intel Core i9 Processor Features

- Socket LGA1366 compatible (Intel X58 motherboard compatible)
- 32 nm technology
- Six cores
- 12 MB L3 cache
- Speed 2.4 GHz+

[Figure: screenshots and benchmark details of the Intel Core i9 processor]

Gulftown

Gulftown or Westmere-EP is the codename of a six-core hyper-threaded Intel processor able to run up to 12 threads in parallel. It is based on the Westmere microarchitecture, the 32 nm shrink of Nehalem. Originally rumored to be called the Intel Core i9, it is sold as an Intel Core i7. The first release was the Core i7 980X in the first quarter of 2010, while its server versions are the Xeon 3600- and 5600-series. The i7-970 has recently been released, with a 24x locked multiplier.

First figures indicate that at equivalent clock rates, depending on the software, it has up to 50% higher performance than the identically clocked quad-core Bloomfield Core i7 975. However, consumer software that utilizes six real and six virtual cores is still quite rare, and not every multithreaded program is able to take advantage of this many cores. Despite having 50% more transistors, the CPU strongly benefits from the 32 nm process, drawing the same or even less power (depending on the operating system) than its Bloomfield predecessors with merely four cores. The thermal design power (TDP) of all planned models is stated to be 130 watts.

Gulftown is the first six-core dual-socket processor from Intel, following the quad-core Bloomfield and Gainestown (a.k.a. Nehalem-EP) processors using the same LGA 1366 package, while the earlier Dunnington six-core processor is a Socket 604 based multi-socket processor. The CPUID extended model number is 44 (2Ch) and two product codes are used, 80613 for the UP desktop/server models and 80614 for the Xeon 5600-series DP server models. In some models, only four of the six cores are enabled.

Overview

Brand names: Core i7-9xx, Core i7-9xxX, Xeon 36xx, Xeon 56xx
Cores: 6 (4-6 for the Xeon 56xx)
L3 cache: 12 MB

Multi-core processors

[Figure: diagram of a generic dual-core processor, with CPU-local level 1 caches and a shared, on-die level 2 cache]

[Figure: an Intel Core 2 Duo E6750 dual-core processor]

[Figure: an AMD Athlon X2 6400+ dual-core processor]

A multi-core processor is a single computing component with two or more independent actual processors (called "cores"), which are the units that read and execute program instructions.[1] The data in the instruction tells the processor what to do. The instructions are very basic things like reading data from memory or sending data to the user display, but they are processed so rapidly that we experience the results as the smooth operation of a program. Manufacturers typically integrate the cores onto a single integrated circuit die (known as a chip multiprocessor or CMP), or onto multiple dies in a single chip package.

Processors were originally developed with only one core. A many-core processor is a multi-core processor in which the number of cores is large enough that traditional multi-processor techniques are no longer efficient, largely due to issues with congestion in supplying instructions and data to the many processors. The many-core threshold is roughly in the range of several tens of cores; above this threshold network-on-chip technology is advantageous.

A dual-core processor has two cores (e.g. AMD Phenom II X2, Intel Core Duo), a quad-core processor contains four cores (e.g. AMD Phenom II X4, the Intel 2010 core line that includes three levels of quad-core processors, see i3, i5, and i7 at Intel Core), and a hexa-core processor contains six cores (e.g. AMD Phenom II X6, Intel Core i7 Extreme Edition 980X). A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or may not share caches, and they may implement message passing or shared memory inter-core communication methods. Common network topologies to interconnect cores include bus, ring, two-dimensional mesh, and crossbar. Homogeneous multi-core systems include only identical cores, heterogeneous multi-core systems have cores which are not identical. Just as with single-processor systems, cores in multi-core systems may implement architectures such as superscalar, VLIW, vector processing, SIMD, or multithreading.
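
As a rough illustration of the two inter-core communication styles just mentioned, the Python sketch below contrasts shared memory (threads updating one counter under a lock) with message passing (processes exchanging values over a queue); it is not tied to any particular processor.

import threading
import multiprocessing

# Shared memory: two threads update one counter protected by a lock.
counter = 0
lock = threading.Lock()

def add_shared(n):
    global counter
    for _ in range(n):
        with lock:
            counter += 1

# Message passing: worker processes exchange values over a queue instead of
# sharing state.
def send_result(n, queue):
    queue.put(n)

if __name__ == "__main__":
    threads = [threading.Thread(target=add_shared, args=(1000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("shared-memory total:", counter)        # 2000

    queue = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=send_result, args=(1000, queue)) for _ in range(2)]
    for p in procs:
        p.start()
    total = sum(queue.get() for _ in procs)
    for p in procs:
        p.join()
    print("message-passing total:", total)        # 2000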

Multi-core processors are widely used across many application domains including general-purpose, embedded, network, digital signal processing (DSP), and graphics.

The improvement in performance gained by the use of a multi-core processor depends very much on the software algorithms used and their implementation. In particular, possible gains are limited by the fraction of the software that can be parallelized to run on multiple cores simultaneously; this effect is described by Amdahl's law. In the best case, so-called embarrassingly parallel problems may realize speedup factors near the number of cores, or even more if the problem is split up enough to fit within each core's cache(s), avoiding use of much slower main system memory. Most applications, however, are not accelerated so much unless programmers invest a prohibitive amount of effort in re-factoring the whole problem[2]. The parallelization of software is a significant ongoing topic of research.
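
Amdahl's law can be written as a one-line function: if a fraction p of the work can be parallelized across n cores, the overall speedup is 1 / ((1 - p) + p / n). The values below are purely illustrative.

# Amdahl's law: overall speedup when a fraction p of the work runs on n cores.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Illustrative values only: even 95% parallel code gains well under 6x on 6 cores.
print(round(amdahl_speedup(0.95, 6), 2))      # ~4.8
print(round(amdahl_speedup(0.50, 6), 2))      # ~1.71
print(round(amdahl_speedup(0.95, 1000), 2))   # ~19.63: the serial 5% dominates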

Terminology

The terms multi-core and dual-core most commonly refer to some sort of central processing unit (CPU), but are sometimes also applied to digital signal processors (DSP) and system-on-a-chip (SoC). Additionally, some use these terms to refer only to multi-core microprocessors that are manufactured on the same integrated circuit die. These people generally refer to separate microprocessor dies in the same package by another name, such as multi-chip module. This article uses the terms "multi-core" and "dual-core" for CPUs manufactured on the same integrated circuit, unless otherwise noted.

In contrast to multi-core systems, the term multi-CPU refers to multiple physically separate processing-units (which often contain special circuitry to facilitate communication between each other).

The terms many-core and massively multi-core are sometimes used to describe multi-core architectures with an especially high number of cores (tens or hundreds).

Some systems use many soft microprocessor cores placed on a single FPGA. Each "core" can be considered a "semiconductor intellectual property core" as well as a CPU core.

Development

While manufacturing technology improves, reducing the size of individual gates, physical limits of semiconductor-based microelectronics have become a major design concern. These physical limitations can cause significant heat dissipation and data synchronization problems. Various methods are used to improve CPU performance. Some instruction-level parallelism (ILP) methods such as superscalar pipelining are suitable for many applications, but are inefficient for others that contain difficult-to-predict code. Many applications are better suited to thread-level parallelism (TLP) methods, and multiple independent CPUs are commonly used to increase a system's overall TLP. A combination of increased available space (due to refined manufacturing processes) and the demand for increased TLP led to the development of multi-core CPUs.

Commercial incentives

Several business motives drive the development of dual-core architectures. For decades, it was possible to improve performance of a CPU by shrinking the area of the integrated circuit, which drove down the cost per device on the IC. Alternatively, for the same circuit area, more transistors could be utilized in the design, which increased functionality, especially for CISC architectures.

Eventually these techniques reached their limit and were unable to improve CPU performance further. Multiple processors had to be employed to gain speed in computation. Multiple cores were used on the same chip to improve performance, which could then lead to better sales of CPU chips with two or more cores. Intel has produced a 48-core processor for research in cloud computing.

Technical factors

Since computer manufacturers have long implemented symmetric multiprocessing (SMP) designs using discrete CPUs, the issues regarding implementing multi-core processor architecture and supporting it with software are well known.

Additionally:

Advantages

The proximity of multiple CPU cores on the same die allows the cache coherency circuitry to operate at a much higher clock-rate than is possible if the signals have to travel off-chip. Combining equivalent CPUs on a single die significantly improves the performance of cache snoop (alternative: Bus snooping) operations. Put simply, this means that signals between different CPUs travel shorter distances, and therefore those signals degrade less. These higher-quality signals allow more data to be sent in a given time period, since individual signals can be shorter and do not need to be repeated as often.

The largest boost in performance will likely be noticed in improved response time while running CPU-intensive processes, like antivirus scans, ripping/burning media (requiring file conversion), or file searching. For example, if an automatic virus scan runs while a movie is being watched, the application running the movie is far less likely to be starved of processor power, as the antivirus program will be assigned to a different processor core than the one running the movie playback.

Assuming that the die can fit into the package, physically, the multi-core CPU designs require much less printed circuit board (PCB) space than do multi-chip SMP designs. Also, a dual-core processor uses slightly less power than two coupled single-core processors, principally because of the decreased power required to drive signals external to the chip. Furthermore, the cores share some circuitry, like the L2 cache and the interface to the front side bus (FSB). In terms of competing technologies for the available silicon die area, multi-core design can make use of proven CPU core library designs and produce a product with lower risk of design error than devising a new wider core-design. Also, adding more cache suffers from diminishing returns.

Multi-core chips also allow higher performance at lower energy. This can be a big factor in mobile devices that operate on batteries. Since each core in a multi-core CPU is generally more energy-efficient, the chip becomes more efficient than having a single large monolithic core. This allows higher performance with less energy. The challenge of writing parallel code, however, clearly offsets this benefit.

Disadvantages

Maximizing the utilization of the computing resources provided by multi-core processors requires adjustments both to the operating system (OS) support and to existing application software. Also, the ability of multi-core processors to increase application performance depends on the use of multiple threads within applications. The situation is improving: for example the Valve Corporation's Source engine offers multi-core support and Crytek has developed similar technologies for CryEngine 2, which powers their game, Crysis. Emergent Game Technologies' Gamebryo engine includes their Floodgate technology which simplifies multicore development across game platforms. In addition, Apple Inc.'s latest OS, Mac OS X Snow Leopard has a built-in multi-core facility called Grand Central Dispatch for Intel CPUs.

Integration of a multi-core chip drives chip production yields down and they are more difficult to manage thermally than lower-density single-chip designs. Intel has partially countered this first problem by creating its quad-core designs by combining two dual-core on a single die with a unified cache, hence any two working dual-core dies can be used, as opposed to producing four cores on a single die and requiring all four to work to produce a quad-core. From an architectural point of view, ultimately, single CPU designs may make better use of the silicon surface area than multiprocessing cores, so a development commitment to this architecture may carry the risk of obsolescence. Finally, raw processing power is not the only constraint on system performance. Two processing cores sharing the same system bus and memory bandwidth limits the real-world performance advantage. If a single core is close to being memory-bandwidth limited, going to dual-core might only give 30% to 70% improvement. If memory bandwidth is not a problem, a 90% improvement can be expected. It would be possible for an application that used two CPUs to end up running faster on one dual-core if communication between the CPUs was the limiting factor, which would count as more than 100% improvement.

Hardware

Trends

The general trend in processor development has moved from dual-, tri-, quad-, hexa-, octo-core chips to ones with tens or even hundreds of cores. In addition, multi-core chips mixed with simultaneous multithreading, memory-on-chip, and special-purpose "heterogeneous" cores promise further performance and efficiency gains, especially in processing multimedia, recognition and networking applications. There is also a trend of improving energy-efficiency by focusing on performance-per-watt with advanced fine-grain or ultra fine-grain power management and dynamic voltage and frequency scaling (i.e. laptop computers and portable media players).

Architecture

The composition and balance of the cores in multi-core architecture show great variety. Some architectures use one core design repeated consistently ("homogeneous"), while others use a mixture of different cores, each optimized for a different, "heterogeneous", role.

The article "CPU designers debate multi-core future" by Rick Merritt, EE Times 2008, includes this comment: "Chuck Moore [...] suggested computers should be more like cellphones, using a variety of specialty cores to run modular software scheduled by a high-level applications programming interface."

Software impact

An outdated version of an anti-virus application may create a new thread for a scan process, while its GUI thread waits for commands from the user (e.g. cancel the scan). In such cases, a multicore architecture is of little benefit for the application itself due to the single thread doing all heavy lifting and the inability to balance the work evenly across multiple cores. Programming truly multithreaded code often requires complex co-ordination of threads and can easily introduce subtle and difficult-to-find bugs due to the interleaving of processing on data shared between threads (thread-safety). Consequently, such code is much more difficult to debug than single-threaded code when it breaks. There has been a perceived lack of motivation for writing consumer-level threaded applications because of the relative rarity of consumer-level multiprocessor hardware. Although threaded applications incur little additional performance penalty on single-processor machines, the extra overhead of development has been difficult to justify due to the preponderance of single-processor machines. Also, serial tasks like decoding the entropy encoding algorithms used in video codecs are impossible to parallelize because each result generated is used to help create the next result of the entropy decoding algorithm.
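
The scan-thread situation described above can be sketched in Python; the scan function here is a made-up stand-in, not real anti-virus code. All the heavy work stays on one thread, so extra cores add little.

import threading
import time

# Made-up stand-in for a long-running scan; all the heavy work stays on this
# one thread, so additional cores cannot spread it out.
def scan_files():
    for batch in range(3):
        time.sleep(1)                      # pretend to scan a batch of files
        print("scanned batch", batch)

scan_thread = threading.Thread(target=scan_files)
scan_thread.start()

# The "GUI" thread remains responsive but mostly idle, so the application as a
# whole gains little from a multi-core CPU.
while scan_thread.is_alive():
    print("GUI thread: waiting for user input...")
    time.sleep(1)
scan_thread.join()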

Given the increasing emphasis on multicore chip design, stemming from the grave thermal and power consumption problems posed by any further significant increase in processor clock speeds, the extent to which software can be multithreaded to take advantage of these new chips is likely to be the single greatest constraint on computer performance in the future. If developers are unable to design software to fully exploit the resources provided by multiple cores, then they will ultimately reach an insurmountable performance ceiling.

The telecommunications market had been one of the first that needed a new design of parallel datapath packet processing because there was a very quick adoption of these multiple-core processors for the datapath and the control plane. These MPUs are going to replace the traditional Network Processors that were based on proprietary micro- or pico-code.

Parallel programming techniques can benefit from multiple cores directly. Some existing parallel programming models such as Cilk++, OpenMP, FastFlow, Skandium, and MPI can be used on multi-core platforms. Intel introduced a new abstraction for C++ parallelism called TBB. Other research efforts include the Codeplay Sieve System, Cray's Chapel, Sun's Fortress, and IBM's X10.
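
Python's standard multiprocessing module offers a similarly simple model; the sketch below farms a function out to a small pool of worker processes, one straightforward way for ordinary code to use several cores.

from multiprocessing import Pool

# A CPU-bound function applied independently to many inputs.
def square(x):
    return x * x

if __name__ == "__main__":
    with Pool(processes=4) as pool:              # e.g. one worker process per core
        results = pool.map(square, range(10))    # the work is split across the workers
    print(results)                               # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]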

Multi-core processing has also affected the ability of modern computational software development. Developers programming in newer languages might find that their modern languages do not support multi-core functionality. This then requires the use of numerical libraries to access code written in languages like C and Fortran, which perform math computations faster than newer languages like C#. Intel's MKL and AMD's ACML are written in these native languages and take advantage of multi-core processing.

Managing concurrency acquires a central role in developing parallel applications. The basic steps in designing parallel applications are:

Partitioning 

The partitioning stage of a design is intended to expose opportunities for parallel execution. Hence, the focus is on defining a large number of small tasks in order to yield what is termed a fine-grained decomposition of a problem.

Communication 

The tasks generated by a partition are intended to execute concurrently but cannot, in general, execute independently. The computation to be performed in one task will typically require data associated with another task. Data must then be transferred between tasks so as to allow computation to proceed. This information flow is specified in the communication phase of a design.
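
As a toy example of the two stages (an illustration, not from the original text), the sketch below partitions a list into chunks, has worker processes compute partial sums, and communicates the partial results back over a queue to be combined.

from multiprocessing import Process, Queue

# Communication: each task sends its partial result back through the queue.
def partial_sum(chunk, queue):
    queue.put(sum(chunk))

if __name__ == "__main__":
    data = list(range(1, 101))

    # Partitioning: decompose the problem into small, independent tasks.
    chunks = [data[i:i + 25] for i in range(0, len(data), 25)]

    queue = Queue()
    workers = [Process(target=partial_sum, args=(chunk, queue)) for chunk in chunks]
    for w in workers:
        w.start()

    total = sum(queue.get() for _ in workers)    # combine the communicated results
    for w in workers:
        w.join()

    print(total)                                 # 5050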

Cloud computing

Cloud computing refers to the use and access of multiple server-based computational resources via a digital network (WAN, Internet connection using the World Wide Web, etc.). Cloud users may access the server resources using a computer, netbook, pad computer, smart phone, or other device. In cloud computing, applications are provided and managed by the cloud server and data is also stored remotely in the cloud configuration. Users do not download and install applications on their own device or computer; all processing and storage is maintained by the cloud server. The on-line services may be offered from a cloud provider or by a private organization.

[Figure: cloud computing visual diagram]

Introduction

In the past, computing tasks such as word processing were not possible without the installation of application software on a user's computer. A user bought a license for each application from a software vendor and obtained the right to install the application on one computer system. With the development of local area networks (LAN) and more networking capabilities, the client-server model of computing was born, where server computers with enhanced capabilities and large storage devices could be used to host application services and data for a large workgroup. Typically, in client-server computing, a network-friendly client version of the application was required on client computers which utilized the client system's memory and CPU for processing, even though resultant application data files (such as word processing documents) were stored centrally on the data servers. Multiple user licenses of an application were purchased for use by many users on a network.

Cloud computing differs from the classic client-server model by providing applications from a server that are executed and managed by a client's web browser, with no installed client version of an application required. Centralization gives cloud service providers complete control over the versions of the browser-based applications provided to clients, which removes the need for version upgrades or license management on individual client computing devices. The term "software as a service" (SaaS) is sometimes used to describe application programs offered through cloud computing. A common shorthand for a provided cloud computing service (or even an aggregation of all existing cloud services) is "The Cloud".

Any computer or web-friendly device connected to the Internet may access the same pool of computing power, applications, and files in a cloud-computing environment. Users may remotely store and access personal files such as music, pictures, videos, and bookmarks; play games; or do word processing on a remote server. Data is centrally stored, so the user does not need to carry a storage medium such as a DVD or thumb drive. Desktop applications that connect to internet-host email providers may be considered cloud applications, including web-based Gmail, Hotmail, or Yahoo! email services. Private companies may also make use of their own customized cloud email servers for their employees.

Cloud computing technologies are regarded by some analysts as a technological evolution, while others, such as Richard Stallman, see them as a marketing trap. Consumers now routinely use data-intensive applications driven by cloud technology that may have been previously unavailable due to cost and deployment complexity. In many companies, employees and company departments are bringing a flood of consumer technology into the workplace, which raises legal compliance and security concerns for the corporation that may be relieved by cloud computing.

How it works

A cloud user needs a client device such as a laptop or desktop computer, pad computer, smart phone, or other computing resource with a web browser (or other approved access route) to access a cloud system via the World Wide Web. Typically the user will log into the cloud at a service provider or private company, such as their employer. Cloud computing works on a client-server basis, using web browser protocols. The cloud provides server-based applications and all data services to the user, with output displayed on the client device. If the user wishes to create a document using a word processor, for example, the cloud provides a suitable application running on the server which displays work done by the user on the client web browser display. Memory allocated to the client system's web browser is used to make the application data appear on the client system display, but all computations and changes are recorded by the server, and final results including files created or altered are permanently stored on the cloud servers. Performance of the cloud application is dependent upon the network access, speed and reliability as well as the processing speed of the client device.
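
In code terms, the client side of such an exchange is little more than an HTTP request sent from the user's device; the Python sketch below posts a document to a purely hypothetical cloud endpoint (the URL and JSON fields are invented for illustration), with all processing and storage left to the server.

import json
import urllib.request

# Hypothetical cloud service endpoint (placeholder URL and fields); the server,
# not this client, would do the document processing and storage.
url = "https://cloud.example.com/api/documents"
payload = json.dumps({"title": "notes", "text": "Draft written in the browser"}).encode()

request = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

try:
    with urllib.request.urlopen(request) as response:   # results come back over the network
        print(response.status, response.read().decode())
except OSError as error:                                 # placeholder host, so expect a failure here
    print("cloud service unreachable:", error)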

Since cloud services are web-based, they work on multiple platforms, including Linux, Macintosh, and Windows computers. Smart phones, pads and tablet devices with Internet and World Wide Web access also provide cloud services to telecommuting and mobile users.

A service provider may pool the processing power of multiple remote computers in a cloud to achieve routine tasks such as backing up large amounts of data, word processing, or computationally intensive work. These tasks might normally be difficult, time-consuming, or expensive for an individual user or a small company to accomplish, especially with limited computing resources and funds. With cloud computing, clients require only a simple computer, such as a netbook designed with cloud computing in mind, or even a smartphone, with a connection to the Internet or a company network, in order to make requests to and receive data from the cloud; hence the term "software as a service" (SaaS). Computation and storage are divided among the remote computers in order to handle large volumes of both, so the client need not purchase expensive hardware or software to handle the task. The outcome of the processing task is returned to the client over the network, at a speed dependent on the Internet connection.
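
The "divide the work among remote computers and return the outcome" pattern can be sketched as follows. A local process pool stands in for the provider's machines so that the example runs on a single computer; in a real cloud the chunks would be dispatched over the network to separate servers.

```python
# Sketch of dividing a large job among several workers and gathering the
# results. A local process pool merely stands in for remote machines.
from concurrent.futures import ProcessPoolExecutor

def count_words(chunk: str) -> int:
    """Work done by one (simulated) remote machine."""
    return len(chunk.split())

def cloud_word_count(text: str, machines: int = 4) -> int:
    # Split the job into roughly equal chunks, one per 'machine'.
    words = text.split()
    size = max(1, len(words) // machines)
    chunks = [" ".join(words[i:i + size]) for i in range(0, len(words), size)]
    with ProcessPoolExecutor(max_workers=machines) as pool:
        partial_counts = pool.map(count_words, chunks)   # fan the work out
    return sum(partial_counts)                           # gather the results

if __name__ == "__main__":
    document = "the quick brown fox jumps over the lazy dog " * 10_000
    print("Total words:", cloud_word_count(document))
```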

Technical description

The National Institute of Standards and Technology (NIST) provides a concise and specific definition:

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Cloud computing provides computation, software, data access, and storage services that do not require end-user knowledge of the physical location and configuration of the system that delivers the services. Parallels to this concept can be drawn with the electricity grid, wherein end-users consume power without needing to understand the component devices or infrastructure required to provide the service.

Cloud computing describes a new supplement, consumption, and delivery model for IT services based on Internet protocols, and it typically involves the provisioning of dynamically scalable and often virtualized resources. It is a byproduct and consequence of the ease of access to remote computing sites provided by the Internet. This may take the form of web-based tools or applications that users can access and use through a web browser as if they were programs installed locally on their own computers. Cloud computing providers deliver applications via the Internet; the applications are accessed from a web browser, while the business software and data are stored on servers at a remote location. In some cases, legacy applications (line-of-business applications that until now have been prevalent in thin-client Windows computing) are delivered via a screen-sharing technology such as Citrix XenApp, while the computing resources are consolidated at a remote data center location; in other cases, entire business applications have been coded using web-based technologies such as AJAX.
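
The phrase "dynamically scalable and often virtualized resources" is easiest to picture as a provision-and-release loop. The provider class and scaling rule below are hypothetical, intended only to show the shape of the interaction, not any vendor's actual API.

```python
# Hedged sketch of rapidly provisioned and released resources. The provider
# class, its methods, and the scaling rule are invented for illustration.

class HypotheticalCloudProvider:
    """Stand-in for a provider's provisioning API; not a real service."""

    def __init__(self):
        self.servers = 0

    def provision(self, count: int) -> None:
        self.servers += count
        print(f"provisioned {count} virtual server(s); now running {self.servers}")

    def release(self, count: int) -> None:
        self.servers = max(0, self.servers - count)
        print(f"released {count} virtual server(s); now running {self.servers}")

def autoscale(provider: HypotheticalCloudProvider, requests_per_second: float) -> None:
    """Toy scaling rule: one virtual server per 100 requests/second, minimum one."""
    needed = max(1, round(requests_per_second / 100))
    if needed > provider.servers:
        provider.provision(needed - provider.servers)
    elif needed < provider.servers:
        provider.release(provider.servers - needed)

if __name__ == "__main__":
    cloud = HypotheticalCloudProvider()
    for load in (50, 420, 900, 120):   # simulated request rates
        autoscale(cloud, load)
```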

Most cloud computing infrastructures consist of services delivered through shared data centers. The cloud may appear as a single point of access for consumers' computing needs; notable examples include the iTunes Store and the iOS App Store. Commercial offerings may be required to meet service level agreements (SLAs), although specific terms are less often negotiated by smaller companies.
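
A quick back-of-the-envelope calculation shows what common SLA availability targets mean in practice. The percentages are typical contract figures, not quotes from any specific provider.

```python
# SLA arithmetic: how much downtime a given availability target permits.

MINUTES_PER_MONTH = 30 * 24 * 60   # ~43,200, using a 30-day month
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600

for availability in (99.0, 99.9, 99.99):
    down_fraction = 1 - availability / 100
    print(f"{availability}% uptime allows about "
          f"{down_fraction * MINUTES_PER_MONTH:.0f} min/month "
          f"({down_fraction * MINUTES_PER_YEAR / 60:.1f} h/year) of downtime")
```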

Risks

Cloud computing's users are exposed to risks mainly associated with:

1) Information security and users' privacy

Using a cloud computing service to store data may expose the user to potential violations of privacy. The user's personal information is entrusted to a provider that may reside in a country other than the user's. A malicious cloud provider could access that data in order to perform market research and user profiling. In the case of wireless cloud computing, the security risk increases further because wireless networks offer weaker protection. Where illegal acts such as the misappropriation of personal data occur, the damage to the user can be very serious, and legal remedies and/or refunds may be difficult to obtain if the provider resides in a state other than the user's country.

In the case of industries or corporations, all data stored on external systems is seriously exposed to possible international or cyber espionage.

2) International, political and economic problems

These may arise when public data are freely collected and privately stored in cloud archives located in countries other than those of the cloud's users. Crucial intellectual production and large amounts of personal information are increasingly stored as digital data in private, centralized, and only partially accessible archives. Users are given no guarantee of free future access.

Further issues arise from the concentration of cloud archives in a few rich countries. If not governed by specific international rules:

1. it could increase the digital divide between rich and poor nations (if access to the stored knowledge is not freely ensured to all);

2. since intangible property is considered a strategic factor for modern knowledge-based economies, it could favor big corporations with "polycentric bodies" and "monocentric minds" located only in the "cloud's countries".

3) Continuity of service

By delegating their data management and processing to an external service, users are severely limited whenever those services are not operating. A malfunction also affects a large number of users at once, because these services are often shared across a large network. Because the service relies on a high-speed Internet connection (for both download and upload), even an interruption of the line caused by the user's Internet Service Provider (ISP) brings work to a complete halt.

4) Data migration problems when changing the cloud provider

Another issue relates to data migration, or porting, when a user wants to change cloud provider. There is no defined standard between operators, and such a change is extremely complex. The bankruptcy of the cloud provider's company could also be extremely dangerous for its users.

Overview

Comparisons

Cloud computing shares characteristics with:

1. Autonomic computing — computer systems capable of self-management.

2. Client–server model — client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requesters (clients).

3. Grid computing — "a form of distributed computing and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks."

4. Mainframe computer — powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and financial transaction processing.

5. Utility computing — the "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity" (a metered-usage sketch follows this list).

6. Peer-to-peer — distributed architecture without the need for central coordination, with participants being at the same time both suppliers and consumers of resources (in contrast to the traditional client–server model).

7. Service-oriented computing — cloud computing provides services related to computing while, in a reciprocal manner, service-oriented computing consists of the computing techniques that operate on software as a service.
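
As promised above, here is a small sketch of the "metered service" idea behind utility computing: resource usage is recorded and billed much like an electricity meter. The resource names and rates are invented for illustration.

```python
# Sketch of metered, utility-style billing. Rates and resources are hypothetical.

RATES = {                      # hypothetical price list
    "cpu_hours": 0.05,         # currency units per CPU-hour
    "gb_stored": 0.02,         # per GB-month of storage
    "gb_transferred": 0.09,    # per GB of outbound traffic
}

def monthly_bill(usage: dict) -> float:
    """Multiply each metered quantity by its rate and sum, like a utility bill."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

if __name__ == "__main__":
    customer_usage = {"cpu_hours": 730, "gb_stored": 250, "gb_transferred": 40}
    print(f"Amount due this month: {monthly_bill(customer_usage):.2f}")
```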

Characteristics

The key characteristic of cloud computing is that the computing is "in the cloud"; that is, the processing (and the related data) is not in a specified, known, or static place. This is in contrast to a model in which the processing takes place in one or more specific, known servers. All the other concepts mentioned are supplementary or complementary to this one.

Architecture

[Figure: Cloud computing sample architecture]

Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over a loose coupling mechanism, such as a messaging queue.

The two most significant components of cloud computing architecture are known as the front end and the back end. The front end is the part seen by the client, i.e., the computer user. This includes the client’s network (or computer) and the applications used to access the cloud via a user interface such as a web browser. The back end of the cloud computing architecture is the cloud itself, comprising various computers, servers and data storage devices.
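
The front-end/back-end split and the loose coupling "over a messaging queue" mentioned above can be sketched as follows. An in-process queue and a thread stand in for a network message broker and a data-center worker, so the example runs as-is on one machine.

```python
# Minimal sketch of loosely coupled cloud components: a front end hands
# requests to a back end through a queue rather than calling it directly.
import queue
import threading

requests = queue.Queue()          # the 'messaging queue' between components

def back_end_worker():
    """Back end: pulls requests off the queue and does the processing."""
    while True:
        job = requests.get()
        if job is None:           # shutdown signal
            break
        print("back end processed:", job.upper())
        requests.task_done()

def front_end_submit(user_input: str) -> None:
    """Front end: accepts user input and hands it off without waiting."""
    requests.put(user_input)

if __name__ == "__main__":
    worker = threading.Thread(target=back_end_worker)
    worker.start()
    for text in ("open document", "insert paragraph", "save"):
        front_end_submit(text)
    requests.join()               # wait until the back end has drained the queue
    requests.put(None)            # tell the worker to stop
    worker.join()
```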

History

The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the telephone network ,and later to depict the Internet in computer network diagrams as an abstraction of the underlying

Chiarg Pahuja Page 34100690826551

Page 35: Telecommunication  network2222

INTERNET I.T

infrastructure it representsCloud computing is a natural evolution of the widespread adoption of virtualization, service-oriented architecture, autonomic, and utility computing. Details are abstracted from end-users, who no longer have need for expertise in, or control over, the technology infrastructure "in the cloud" that supports themThe underlying concept of cloud computing dates back to the 1960s, when John McCarthy opined that "computation may someday be organized as a public utility." Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry and the use of public, private, government, and community forms, were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer Utility.

The actual term "cloud" borrows from telephony: telecommunications companies, which until the 1990s offered primarily dedicated point-to-point data circuits, began offering Virtual Private Network (VPN) services with comparable quality of service but at a much lower cost. By switching traffic to balance utilization as they saw fit, they were able to use their overall network bandwidth more effectively. The cloud symbol was used to denote the demarcation point between what was the responsibility of the provider and what was the responsibility of the user. Cloud computing extends this boundary to cover servers as well as the network infrastructure. The first scholarly use of the term "cloud computing" was in a 1997 lecture by Ramnath Chellappa.

After the dot-com bubble, Amazon played a key role in the development of cloud computing by modernizing its data centers, which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements, whereby small, fast-moving "two-pizza teams" could add new features faster and more easily, Amazon initiated a new product development effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006. The term "cloud computing" was brought to wider public attention by then-Google CEO Eric Schmidt at the SES San Jose conference in 2006. It was reported in 2011 that Amazon had thousands of corporate customers, from large ones like Pfizer and Netflix to start-ups. These include many companies that run entirely on Amazon's web services, including Foursquare, a location-based social networking site; Quora, a question-and-answer service; Reddit, a news-sharing site; and BigDoor, a maker of game tools for web publishers.

In 2007, Google, IBM, and a number of universities embarked on a large-scale cloud computing research project. In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. Also in early 2008, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds and for the federation of clouds. In the same year, efforts were focused on providing QoS guarantees (as required by real-time interactive applications) to cloud-based infrastructures, in the framework of the IRMOS European Commission-funded project. By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them" and observed that "[o]rganisations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to cloud computing ... will result in dramatic growth in IT products in some areas and significant reductions in other areas."
