
CYBER SECURITY

Detecting Intrusions in the Cloud Environment

Áine MacDermott, Qi Shi, Madjid Merabti, and Kashif Kifayat

PROTECT: Research Centre for Critical Infrastructure Computer Technology and Protection School of Computing and Mathematical Sciences,

Liverpool John Moores University, Liverpool, L3 3AF, UK

[email protected] and {q.shi,m.merabti,k.kifayat}@ljmu.ac.uk

Abstract—Due to the scalability of resources and performance, as well as improved maintainability, it is apparent that cloud computing will eventually reach IT services that are operating critical infrastructures. Since IT infrastructures have become an integral part of almost all organisations, cloud computing will have a significant impact on them. The scale and dynamic nature of cloud computing pose challenges for their management, including the investigation of malicious activity and/or policy failure. Sufficient security measures are needed to ensure the confidentiality, integrity and availability of data in the cloud. Hosting infrastructure services and storing sensitive data in the cloud environment bring with them security and resilience requirements that existing cloud services are not well placed to address. Protecting sensitive critical infrastructure data in the cloud computing environment, through the development of innovative techniques for detecting intrusions, is the current focus of our work.

Keywords—critical infrastructure protection; SCADA; control system; security; cloud computing; risk management; trust.

I. INTRODUCTION

Critical infrastructures, such as the power grid and water distribution facilities, include a high number of devices spread over a large geographical area. These infrastructures face significant threats due to the growth in the use of industrial control systems such as SCADA (supervisory control and data acquisition) systems, and their integration with networks to coordinate and manage the actions of these devices. While this provides great capabilities for operation, control, and business, this augmented interconnectedness also increases the security risks arising from the cyber-related vulnerabilities these systems possess. The importance of protecting these infrastructures has been particularly highlighted by the increase in advanced persistent threats (APTs) [1], such as ‘Stuxnet’ [2], ‘Duqu’ [3] and ‘Flame’ [4], which were designed to target these control systems and disrupt their functionality. Effective protection of critical infrastructures is therefore crucial, as it is apparent that existing methods do not meet the security requirements of such interconnected infrastructures.

As more sectors adopt cloud services in their computing environment, the trend will also reach IT services operating critical infrastructure. There needs to be an assurance that the cloud computing environment can provide proficient protection of sensitive critical infrastructure data. The reality of today’s advanced malware and targeted attacks is that 100% protection is not realistic. Reducing attack vectors and marginalising the impact of an attack is the practical approach.

Infrastructure vendors will inevitably embrace the benefits offered by cloud computing. Once their services are in the cloud environment, resilient functionality is essential.

This paper provides an overview of the critical infrastructure protection problem, and of how vendors are not currently ready to utilise the cloud computing paradigm. The layout of this paper is as follows: Section II provides a background on critical infrastructures and the vulnerabilities they possess due to the integration of IT into their functions. Section III details the benefits and drawbacks of the different service levels of cloud computing. Existing approaches focused on protecting cloud services are explored in Section IV. Section V describes our observations on tackling this protection problem. Section VI presents our conclusions and areas for future work.

II. BACKGROUND

The US Department of Homeland Security defines critical infrastructure as “the assets, systems, and networks, whether physical or virtual, so vital to the government that their incapacitation or destruction would have a debilitating effect on security, national economic security, public health or safety, or any combination thereof” [5]. Many of the nation’s critical infrastructures have historically been physically and logically separated systems that had little interdependence [6]. However, because of advances in information technology and efforts to improve efficiencies in these systems, infrastructures have become increasingly automated and interlinked. These improvements have created new vulnerabilities relating to equipment failure, human error, as well as physical and cyber-related attacks.

Within the US alone, critical infrastructures include approximately 28,600 networked Federal Deposit Insurance Corporation (FDIC) institutions, two million miles of pipeline, 2,800 power plants (with 300,000 production sites), 104 nuclear power plants, 80,000 dams, 60,000 chemical plants, 87,000 food processing plants, and 1,600 water treatment plants [7]. There has been a growing recognition that control systems are vulnerable to cyber attacks from numerous sources, including hostile governments, terrorist groups, disgruntled employees, and other malicious intruders. Smart attacks and coordinated attacks could have severe impacts to the stability, performance, and economics of the infrastructure.


A. Control System Overview

A process control system frequently used in critical infrastructures and factory automation is a supervisory control and data acquisition (SCADA) system. It monitors switches and valves, controls temperature and pressure conditions, and collects and logs field data. SCADA systems typically monitor and report these values to control room operators. Today’s installations have vulnerabilities through commonly deployed communication channels. These vulnerabilities are escalating due to the adoption of open systems concepts within specific vendor SCADA solutions, the use of the Internet as a communications channel, and the increased integration of common TCP/IP (Transmission Control Protocol/Internet Protocol) protocol-based corporate communication networks with SCADA applications. This trend is amplified by the growing need to link real-time data generated by SCADA applications with business systems to complement the decision-making activities and optimise the production of the company [8].

Systems were originally designed with performance as the priority; security was considered later. They were designed to be efficient and possessed minimal processing power, small memory capacity and unsecured communication capabilities. These systems and their processors, memory, and communication capabilities do not readily allow for security to be added as functionality. When a security breach occurs in a SCADA system, the results are often different from those on traditional IT systems. SCADA systems are rarely patched or updated because of a concern that the patch itself could have a negative impact on the operation of the system. Unexpected or unplanned network traffic can affect operations, and in certain cases result in the improper control of physical devices or outputs [9].

Figure 1 illustrates an arrangement of a critical infrastructure and its components. Human-machine interfaces (HMIs) are used to view the occurrences of the control system. The SCADA network comprises application servers, communication servers and a historian database, and, using communication links, controls and maintains field sensors and actuators. Orders are communicated from the SCADA level downwards to production lines. Low-level sensors are used to gather data in order to provide a view of the situation at the lowest level. This information is propagated upwards to the information system. Most industrial plants now employ networked process historian servers for storing process data and other interfaces. Modern infrastructures largely make use of IT technologies, and wireless sensor networks have become an integral part of virtually any infrastructure.

B. Threats

As control networks evolved, the use of TCP/IP and Ethernet became common place and interfacing to business systems became the norm. The result was that the closed trust model no longer applied, and vulnerabilities in these systems began to appear. In particular, network security problems from the business network and the world at large could be passed onto process and SCADA networks, putting industrial production, environment integrity and human safety at risk [10].

Open communication protocols such as Modbus and DNP3 are used to achieve interoperability. Many SCADA protocols use TCP/IP and provide no additional authentication and protection. The majority of SCADA protocols possess the following vulnerabilities [11]:

• No authentication implemented to control access to stations, substations and sensors.

• No encryption to protect the data flow in networks.

• No access control to regulate system functions.

Figure 1: Critical infrastructure components

Vulnerabilities in the TCP/IP protocol include IP spoofing and man-in-the-middle attacks. Additionally, the standardisation of software and hardware used in SCADA systems potentially makes it easier to mount SCADA-specific attacks, as was evident in the case of Stuxnet.
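To make the absence of these protections concrete, the following sketch builds a standard Modbus/TCP request frame; the framing follows the published Modbus/TCP specification, while the addresses and counts are illustrative only. Nothing in the frame identifies or authenticates the sender: any host that can reach TCP port 502 on a device can issue such a request.

```python
import struct

def modbus_read_holding_registers(transaction_id: int, unit_id: int,
                                  start_addr: int, count: int) -> bytes:
    """Build a Modbus/TCP 'Read Holding Registers' (function 0x03) request.

    Note what is absent: the MBAP header carries only a transaction ID,
    protocol ID, length and unit ID -- no credential, signature or
    session token appears anywhere in the frame.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)  # function code + data
    mbap = struct.pack(">HHHB",
                       transaction_id,   # matches a response to its request
                       0x0000,           # protocol ID (always 0 for Modbus)
                       len(pdu) + 1,     # remaining length: unit ID + PDU
                       unit_id)          # addressed device
    return mbap + pdu

frame = modbus_read_holding_registers(transaction_id=1, unit_id=1,
                                      start_addr=0, count=10)
print(frame.hex())  # 00010000000601030000000a
```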

Stuxnet, discovered in June 2010, is an example of malware created specifically to target control systems. Stuxnet was a computer worm that attacked Iran’s uranium enrichment program. It targeted the SCADA system in the infrastructure and exploited the fact that the Siemens PLC (programmable logic controller) did not require authentication to upload rogue ladder logic, making it easy for the attackers to inject their malicious code into the system [12]. Duqu is a computer worm that was discovered in September 2011 and is related to the Stuxnet worm. It is believed to have been designed to steal sensitive data in order to launch further cyber attacks [1]. ‘Flame’, malware in the same league as Stuxnet, went undetected for almost two years before being discovered in May 2012 [4]. Industrial and political espionage are considered the main purposes of these pieces of malware, which used many methods to gather intelligence and could be modified to change their functionality at any time [10].

This daunting increase in cyber-related attacks specifically targeting the systems which control these infrastructures means that sufficient security metrics need to be developed. However, with the complexity of these systems and their differing environments, achieving this has proven difficult. Detailed knowledge is required for control system attacks, as these systems are highly customised and their configuration and functionality can differ depending upon their environment. Initiatives such as the UK Cyber Security Strategy [13] have been introduced to tackle this problem and provide regulation and activities for protecting these infrastructures.

The UK Cyber Security Strategy, published in November 2011, is an example of the UK government supporting the importance of protecting the nation's critical infrastructure [8]. It emphasises the importance of developing rigorous security standards for IT products and services applied to the government and its public sector network. Additionally, the Centre for the Protection of National Infrastructure (CPNI) is working with infrastructure companies to ensure they take the necessary steps to protect key systems and data.

The United States has taken a similar approach: the Department of Homeland Security is undertaking innovative research to raise awareness of the importance of the nation’s critical infrastructure and to strengthen the ability to protect it. In February 2013, President Barack Obama issued an Executive Order on cyber security and a Presidential Policy Directive on critical infrastructure security and resilience. These two actions were designed to strengthen the security of physical assets.

These protective steps cover people, physical assets, and communication systems that are indispensable for socioeconomic well-being. Critical infrastructure protection methods and resources deter or mitigate attacks against critical infrastructures and focus on protecting those assets considered invaluable to society.

III. CLOUD COMPUTING BENEFITS & DRAWBACKS

Security, control, and trust are issues that deter organisations from fully adopting cloud computing and profiting from its many advantages. Cloud computing is a style of computing where elastic IT-related capabilities are provided as optimised, cost-effective, on-demand, utility-like services to customers using Internet technologies. This can be considered a major evolution of e-business, as cloud computing helps enterprises to create and deliver IT solutions in a more flexible and cost-effective way [25]. As more sectors adopt cloud services in their computing environment, the trend can also reach the IT services operating critical infrastructure.

Cloud computing proposes that, one day, all levels will become virtualised, i.e. “everything-as-a-service.” Critical infrastructure currently makes use of the benefits offered by general IT services, so benefiting from the intricate cloud computing paradigm is expected. Embracing the cloud

TABLE 1: SECURITY REQUIREMENTS AND THREATS

Application level: Software as a Service (SaaS)

Users: a person or organisation who subscribes to a service offered by a cloud provider and is accountable for its use.

Security requirements: access control; communication protection; data protection from exposure; privacy in a multitenant environment; service availability; software security.

Threats: data interruption; exposure in network; interception; modification of data at rest and in transit; privacy breach; session hijacking; traffic flow analysis.

Virtual level: Platform as a Service (PaaS) and Infrastructure as a Service (IaaS)

Users: a person or organisation that deploys software on a cloud infrastructure.

Security requirements: access control; application security; cloud management control security; communication security; data security; secure images; virtual cloud protection.

Threats: connection flooding; DDoS; defacement; disrupting communications; exposure in network; impersonation; programming flaws; software modification; software interruption; session hijacking; traffic flow analysis.

Existing approaches compact similar alerts and correlate alerts coming from heterogeneous platforms on several sites in order to detect intrusions that are more complex.
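A minimal sketch of this kind of alert compaction and cross-site correlation, assuming hypothetical alert fields ('ts', 'src', 'dst', 'type', 'site'), might look like this:

```python
from collections import defaultdict

def compact_alerts(alerts, window=60.0):
    """Fold alerts that share (source, target, type) and arrive within
    `window` seconds of the previous one into a single meta-alert,
    recording the count and the set of reporting sites."""
    groups = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["ts"]):
        bucket = groups[(a["src"], a["dst"], a["type"])]
        # open a new meta-alert if the previous one has gone quiet
        if not bucket or a["ts"] - bucket[-1]["last_ts"] > window:
            bucket.append({"first_ts": a["ts"], "last_ts": a["ts"],
                           "count": 0, "sites": set()})
        meta = bucket[-1]
        meta["last_ts"] = a["ts"]
        meta["count"] += 1
        meta["sites"].add(a["site"])  # alerts may come from several sites
    return dict(groups)

alerts = [
    {"ts": 0.0,  "src": "10.0.0.5", "dst": "hmi-1", "type": "scan", "site": "A"},
    {"ts": 30.0, "src": "10.0.0.5", "dst": "hmi-1", "type": "scan", "site": "B"},
]
print(compact_alerts(alerts))  # one meta-alert: count 2, sites {'A', 'B'}
```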

Cloud computing erodes the perception of perimeter security. It has become impossible to place a virtual moat around an organisation's 'castle', as an abundance of services have been outsourced. Security should be implemented in every layer of the cloud application architecture. A cloud defence strategy needs to be distributed so that it can detect and prevent attacks that originate both within the cloud itself and from users accessing the cloud from different geographic locations through the Internet. Furthermore, there is a lack of collaboration among different components within a cloud provider, or among different cloud providers, for the detection or prevention of attacks. The ability for groups of network sensors to share temporal potential threat data may enable more timely identification of an attacker, as well as a mechanism for shared defensive capabilities.

V. OUR OBSERVATION

In cloud environments, network perimeters will no longer exist from a cloud user's perspective, which renders traditional security protection methods such as firewalls inapplicable to cloud applications. Furthermore, detection techniques for protecting control systems are not sufficient for such specialised environments. Control systems are important for the functioning of critical infrastructure, and their protection is imperative. Applicable security measures need to be tailored to the environments of these infrastructures.

SCADA systems have evolved over the years from being monolithic, to distributed, to networked; the next progression could be to the cloud environment. Research has shown that cloud computing will eventually reach the IT services that are operating critical infrastructures [13, 19, 24]. There is a similarity between critical infrastructure and cloud computing, as both are primarily large distributed systems and may share the same underlying issues. The emergence of the cloud computing paradigm could be beneficial for the operation and performance of these complex infrastructures.

The natural progression of this utilisation is determining what services could be shifted from a critical infrastructure environment to a cloud environment. In addition, how the functionality of these services can be improved, and how the protection issues differ from the traditional critical infrastructure environment, are major considerations.

Critical infrastructure control systems are becoming more interconnected and strive to keep up with technological advances. From surveying the literature, it is clear that critical infrastructure vendors will inevitably take advantage of the benefits offered by cloud computing. This transition is not going to occur overnight, but when it does, it is imperative that vendors are prepared and have sufficient security procedures and policies in place. Determining how this will occur is important. We are currently exploring how services and sensitive data could be shifted to the cloud environment, and assessing the benefits and risks associated with deploying each service [26].

One way in which vendors could utilise the environment is through the deployment of a secure private cloud. The analysis of logs generated by the historian could provide efficient data processing and extraction of system behaviour. This could overcome the challenges associated with processing the massive data sets generated by control systems. The Verizon Data Breach Investigations Report [27] states that, when viewing logs after an incident, in the majority of cases the data was there to be found beforehand; it just needed to be viewed. It is for this reason that we deem effective analysis of the historian logs to be beneficial. However, without stability in the logging process, the option to go back to the past might be lost.

Log generation and storage can be complicated by several factors, including:

• A high number of log sources.

• Inconsistent log content.

• Lack of structure among generated logs.

• Inconsistent formats and timestamps among sources.

• Increasingly large volumes of data.

• Not calculating the proper events per second (EPS) and losing logs due to saturation, as sketched below.
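As a rough illustration of the last point, the sketch below compares the aggregate average and peak events per second against a collector's EPS budget; the source names and figures are hypothetical.

```python
def collector_headroom(sources, collector_eps_limit):
    """Sum average and peak EPS across log sources and check them
    against the collector's budget. Sizing on averages alone is how
    logs get dropped at saturation."""
    avg = sum(s["avg_eps"] for s in sources.values())
    peak = sum(s["peak_eps"] for s in sources.values())
    return {"avg_eps": avg, "peak_eps": peak,
            "avg_ok": avg <= collector_eps_limit,
            "peak_ok": peak <= collector_eps_limit}

sources = {  # illustrative figures only
    "historian":      {"avg_eps": 300, "peak_eps": 1200},
    "hmi_servers":    {"avg_eps": 50,  "peak_eps": 400},
    "field_gateways": {"avg_eps": 80,  "peak_eps": 900},
}
print(collector_headroom(sources, collector_eps_limit=2000))
# peak load (2500 EPS) exceeds the budget even though the average (430) fits
```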

This collection of data could also be used to perform behavioural analysis and to model the flow of information. Looking for trends and subtle changes in the data would be beneficial in achieving state awareness. Behaviour modelling can take place without affecting the system in any way, which is an imperative aspect. By monitoring the evolution of the plant process states, and tracking when the industrial process is entering a critical state, it would be possible to detect attack patterns (known or unknown) that aim to put the process system into a known critical state through sequences of commands. In control system architectures, the major cyber-attack vector is the flow of network commands [28].
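A minimal sketch of this idea expresses critical states as predicates over historian samples and reports passively whenever a sample enters one; the tag names and limits below are purely illustrative and not drawn from any real plant.

```python
# A critical state is a predicate over process variables (tags).
CRITICAL_STATES = [
    ("overpressure_while_valve_shut",
     lambda s: s["pressure_bar"] > 12.0 and not s["relief_valve_open"]),
    ("overtemp_at_high_rpm",
     lambda s: s["temp_c"] > 90.0 and s["pump_rpm"] > 2800),
]

def scan_historian(samples):
    """Yield (timestamp, state_name) whenever a sample *enters* a
    critical state. Samples are read from the historian, so the
    running plant is never touched."""
    active = set()
    for s in samples:
        for name, predicate in CRITICAL_STATES:
            if predicate(s):
                if name not in active:  # report on entry only
                    active.add(name)
                    yield s["ts"], name
            else:
                active.discard(name)

samples = [
    {"ts": 1, "pressure_bar": 9.0,  "relief_valve_open": True,  "temp_c": 70, "pump_rpm": 2000},
    {"ts": 2, "pressure_bar": 12.5, "relief_valve_open": False, "temp_c": 71, "pump_rpm": 2000},
]
for ts, name in scan_historian(samples):
    print(ts, name)  # 2 overpressure_while_valve_shut
```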

This is just one way in which critical infrastructure could utilise the cloud environment. Cloud computing usage is growing, and soon the vast majority of organisations will rely on some form of cloud computing services. This makes cloud computing services critical in themselves. When cyber attacks and cyber disruptions happen, millions of users are affected. Cloud computing is also being adopted in critical sectors such as finance, energy and transport [25].

VI. CONCLUSIONS & FUTURE WORK

It is apparent that cloud computing will eventually reach IT services that are operating critical infrastructures. In the cloud environment, data is stored and managed by a third party. Having to monitor one's data for threats from the cloud provider or from other users makes intrusion detection more demanding. Protecting data from insider or outsider attacks is a challenging research area, as the platform is not under one's control. No satisfactory solutions are available to address the issues of intrusion detection in cloud environments for critical infrastructure system protection. So far, existing work on intrusion detection focuses mainly on network and cloud services or systems. Not much has been done to address the issues of intrusion detection for critical infrastructure systems, and even less effort has been made to explore the impact of combining the two areas, which is the focus of our work.

Our future work involves developing our idea to tackle network availability attacks against the cloud environment. Network attacks such as distributed denial of service (DDoS) aim to exhaust or overwhelm communication and computational resources, resulting in the delay or failure of communication. The current lack of collaboration among different components within a cloud provider, or among different providers, for the detection or prevention of attacks is also an area we aim to focus on. Cloud service providers have the scale and resources to address and prevent cyber attacks in a more professional way than most other organisations. It is important to prevent and mitigate the impact of cyber attacks by also creating logical redundancy; that is, using different layers of defence and separate systems with a different logical structure to cross-check transactions and detect attacks [25].

VII. REFERENCES

[1] R. Brewer, “Protecting critical control systems,” Network Security, vol. 3, pp. 7–10, Mar. 2012.

[2] T. Miyachi, H. Narita, H. Yamada, and H. Furuta, “Myth and reality on control system security revealed by Stuxnet,” in 2011 Proceedings of SICE Annual Conference (SICE), 2011, pp. 1537–1540.

[3] McAfee Labs, “2012 Threats Predictions,” Santa Clara, CA 95054, 2012.

[4] L. Whitney, “Flame can sabotage computers by deleting files, says Symantec,” CNET, 2012. [Online]. Available: http://news.cnet.com/8301-1009_3-57458712-83/flame-can-sabotage-computers-by-deleting-files-says-symantec/. [Accessed: 08-Aug-2012].

[5] Department of Homeland Security, “CIPR Month 2012,” Homeland Security, 2012. [Online]. Available: http://www.dhs.gov/cipr-month-2012. [Accessed: 03-Jan-2013].

[6] S. M. Rinaldi, J. P. Peerenboom, and T. K. Kelly, “Identifying, understanding, and analyzing critical infrastructure interdependencies,” IEEE Control Systems Magazine, vol. 21, no. 6, pp. 11–25, 2001.

[7] A. Miller, “Trends in Process Control Systems Security,” IEEE Security and Privacy Magazine, vol. 3, no. 5, pp. 57–60, Sep. 2005.

[8] T. Kropp, “System threats and vulnerabilities [power system protection],” IEEE Power and Energy Magazine, vol. 4, no. 2, pp. 46–50, 2006.

[9] R. E. Johnson, “Survey of SCADA security challenges and potential attack vectors,” in 2010 International Conference for Internet Technology and Secured Transactions (ICITST), 2010, pp. 1–5.

[10] E. J. Byres, M. Franz, and D. Miller, “The Use of Attack Trees in Assessing Vulnerabilities in SCADA Systems,” in IEEE International Infrastructure Survivability Workshop (IISW’04), 2004.

[11] V. Igure, S. Laughter, and R. Williams, “Security issues in SCADA networks,” Computers & Security, vol. 25, no. 7, pp. 498–506, Oct. 2006.

[12] K. Zetter, “Researchers Release New Exploits to Hijack Critical Infrastructure,” Wired.com, 2012. [Online]. Available: http://www.wired.com/threatlevel/2012/04/exploit-for-quantum-plc/. [Accessed: 10-Apr-2012].

[13] Cabinet Office, “The UK Cyber Security Strategy Protecting and promoting the UK in a digital world,” London, UK, 2011.

[14] M. T. Khorshed, A. B. M. S. Ali, and S. A. Wasimi, “A survey on gaps, threat remediation challenges and some thoughts for proactive attack detection in cloud computing,” Future Generation Computer Systems, vol. 28, no. 6, pp. 833–851, Jun. 2012.

[15] K. Annapureddy, “Security Challenges in Hybrid Cloud Infrastructures,” in Aalto University, T-110.5290 Seminar on Network Security, 2010.

[16] P. Mell and T. Grance, “The NIST Definition of Cloud Computing Recommendations of the National Institute of Standards and Technology,” Gaithersburg, MD 20899-8930, 2011.

[17] Z. Mahmood and C. Agrawal, “Intrusion Detection in Cloud Computing environment using Neural Network,” International Journal of Research in Computer Engineering and Electronics, vol. 1, no. 1, pp. 1–4, 2012.

[18] M. P. K. Shelke, M. S. Sontakke, and A. D. Gawande, “Intrusion Detection System for Cloud Computing,” International Journal of Scientific & Technology Research, vol. 1, no. 4, pp. 67–71, 2012.

[19] S. Taghavi Zargar, H. Takabi, and J. Joshi, “DCDIDP: A Distributed, Collaborative, and Data-driven Intrusion Detection and Prevention Framework for Cloud Computing Environments,” Proceedings of the 7th International Conference on Collaborative Computing: Networking, Applications and Worksharing, pp. 332–341, 2011.

[20] OTE, “Discussion on the Challenges for the Development of a Context for: Secure Cloud Computing for Critical Infrastructure IT,” Greece, 2012.

[21] S. Chen, S. Nepal, and R. Liu, “Secure Connectivity for Intra-cloud and Inter-cloud Communication,” 2011 40th International Conference on Parallel Processing Workshops, pp. 154–159, Sep. 2011.

[22] H. Hamad and M. Al-Hoby, “Managing Intrusion Detection as a Service in Cloud Networks,” International Journal of Computer Applications, vol. 41, no. 1, pp. 35–40, Mar. 2012.

[23] S. N. Dhage, B. B. Meshram, R. Rawat, S. Padawe, M. Paingaokar, and A. Misra, “Intrusion detection system in cloud computing environment,” in Proceedings of the International Conference & Workshop on Emerging Trends in Technology (ICWET ’11), p. 235, 2011.

[24] J. Lee, M. Park, and J. Eom, “Multi-level Intrusion Detection System and log management in Cloud Computing,” 2011 13th International Conference on Advanced Communication Technology (ICACT), no. 1, pp. 552–555, 2011.

[25] M. A. C. Dekker, “Critical Cloud Computing: A CIIP perspective on cloud computing services,” Greece, 2013.

[26] M. Zhou, R. Zhang, W. Xie, W. Qian, and A. Zhou, “Security and Privacy in Cloud Computing: A Survey,” 2010 Sixth International Conference on Semantics, Knowledge and Grids, pp. 105–112, Nov. 2010.

[27] W. Baker, A. Hutton, C. D. Hylender, J. Pamula, M. Spitler, M. Goudie, C. Novak, M. Rosen, P. Tippett, C. Chang, and J. Fisher, “2011 Data Breach Investigations Report,” Verizon, 2011.

[28] A. Carcano, A. Coletta, M. Guglielmi, M. Masera, I. Nai Fovino, and A. Trombetta, “A Multidimensional Critical State Analysis for Detecting Intrusions in SCADA Systems,” IEEE Transactions on Industrial Informatics, vol. 7, no. 2, pp. 179–186, 2011.

Virtualisation Without a Hypervisor in Cloud Infrastructures: An Initial Analysis

William A. R. de Souza and Allan Tomlinson

Information Security Group, Royal Holloway, University of London, Egham Hill, Egham, United Kingdom

[email protected], [email protected]

Abstract—Virtualisation is a fundamental technology for data centres and cloud architectures. The central component of virtualisation is the hypervisor, which may be considered a virtual machine with high privileges that plays a fundamental role in the virtualised environment. In order to perform this role, a hypervisor is built as a large and complex piece of software. Because of this, it has a large attack surface and is a main target for attackers in virtualised environments. Many approaches have been presented to mitigate the threats against hypervisors, e.g. minimising its code, adding extra code to verify its integrity, and hardening it. The NoHype architecture is a new approach to this problem that proposes simply eliminating the hypervisor. In this paper we present an initial analysis of this approach. We show that, although it is a feasible architecture that can be implemented with today's commodity hardware, it does not mitigate all the threats to the hypervisor, it introduces new threats, and it restrains the scalability of cloud architectures.

Keywords—Virtualisation; hypervisor; cloud computing; security.

I. INTRODUCTION

Virtualisation is a technique used to simulate one or more computers on a single physical machine. By physical machine we mean a hardware device, such as a PC, a server or a mobile device. It allows us to run several different environments, with multiple operating systems (guest OSs), on the same physical machine (host hardware).

Two main components of a virtualised environment are the virtual machine (VM) and the virtual machine monitor (VMM), or hypervisor. A virtual machine is a software implementation that provides an isolated software container where an OS and its applications can run [1]. The same resources available in the underlying physical machine (although not necessarily the same amount) are available to the virtual machine. A hypervisor is a highly privileged VM that manages all other virtual machines in the same virtualised environment [1]. It forms a layer between the VMs and the hardware, and controls the guest OSs' access to the machine's resources.

Virtualisation is a central technology in data centres and has laid the foundation for advances in this kind of computer facility, enabling the cloud infrastructure and cloud computing [1].

The three main categories of virtualisation are: Full virtualisation, Paravirtualisation and Hardware-assisted virtualisation [1][18][19].

Full virtualisation provides a complete abstraction of the guest OS, simulating the underlying hardware in such a manner that the guest OS is not aware of the virtualisation and has the impression that all hardware resources are allocated to it. This is achieved by a combination of binary translation and direct execution [18]. Binary translation is a technique that replaces non-virtualisable instructions with new sequences of instructions. To improve performance, user-level code is executed directly on the processor. No modifications are necessary to either the guest OS or the underlying hardware. VMware's virtualisation products and Microsoft Virtual Server are examples of full virtualisation [18].

Paravirtualisation addresses the non-virtualisable instructions problem by modifying the guest OS kernel, replacing these instructions with hypercalls that communicate directly with the virtualisation layer, which provides hypercall interfaces for other critical kernel operations [18]. Thus, the hypercall plays the same role in paravirtualisation that binary translation plays in full virtualisation. For instance, an I/O instruction issued by guest software would be replaced with a new sequence of instructions under binary translation, or transformed into a hypercall if paravirtualisation is being used. A hypercall is similar to a system call in an OS, which is why, in order to use paravirtualisation, it is necessary to modify the guest OS; in this case, commodity OSs cannot be used. Citrix XenServer is an example of paravirtualisation, and some OSs, such as Ubuntu and Red Hat, also offer support for it.

Hardware-assisted virtualisation is a set of features developed by hardware vendors in order to provide hardware mechanisms to simplify the use of virtualisation. It targets privileged instructions and includes a new CPU feature that allows the hypervisor to run in a new root mode below ring 0. Thus privileged and sensitive calls are set to automatically trap to the hypervisor, eliminating the need for either binary translation or hypercalls. Examples of this technology include Intel Virtualization Technology (VT-x) and AMD’s AMD-V [18].


A cloud infrastructure provides a set of resources for customers to run their applications and store their information. By taking advantage of virtualisation, the cloud infrastructure allows several virtual machines, from different customers, to exist in the same physical machine permitting economies of scale and providing a dynamic and scalable set of resources at a cost affordable for customers. So, customers can purchase the amount of resources that they need for their applications.

This main feature of the cloud infrastructure is also the main concern for customers, since the shared environment is suitable for a malicious party to attack assets in the infrastructure [2][3]. Thus, a malicious VM can attack another VM running on the same server, the hypervisor, or the hardware infrastructure, potentially exploiting a wide range of vulnerabilities [4]. Some work has shown that this is possible, as in the case of a successful execution of code on the host from a guest OS in a VMware environment [5], and an exploitation of the Xen hypervisor that allows a backdoor to be included inside it [6].

There are many approaches to mitigating those threats. One approach is adding extra code to the hypervisor in order to verify its integrity [7][8]. Another approach is minimising the hypervisor [9][10] in order to diminish the attack surface, leaving just the essential functionality in place. The most common approach is hardening the hypervisor [11][12][13].

The 'no hypervisor' strategy [14][15] proposes a radical new approach. Rather than defending the hypervisor, its authors remove the attack surface by getting rid of the hypervisor while preserving the semantics of virtualisation. In those works, the authors present an architecture, called NoHype, that is focused on cloud computing. From this point, we will use the terms NoHype architecture and NoHype system interchangeably.

The NoHype architecture is built upon the full virtualisation technique; paravirtualisation and hardware-assisted virtualisation are not directly considered in NoHype. However, although not explicit in [14][15], it is clear that since NoHype eliminates the hypervisor, binary translation, or any other technique to address non-virtualisable instructions, is not needed after the disengagement stage.

In this context, this work presents an initial analysis of how much the no hypervisor strategy can help in a cloud computing infrastructure. We discuss how safe it can be, the threats it can mitigate, the limitations of the model and, most importantly, whether it introduces new threats into the cloud infrastructure. As we will see from the analysis, virtualisation without a hypervisor is a feasible architecture and can be implemented with today's commodity hardware. However, it does not mitigate all the threats posed by a hypervisor and introduces new threats. Besides, it restrains one important feature of cloud architecture: scalability.

The remainder of this work is organised as follows. In Section 2 we discuss background on hypervisors. In Section 3 we explain the hypervisor attack surface. In Section 4 we present the no hypervisor architecture. We conduct a brief analysis of the no hypervisor architecture in Section 5 and present a conclusion and work in progress in Section 6.

II. THE HYPERVISOR

The hypervisor is a VM with elevated privileges that plays a main role in the virtualised environment. Among its tasks are the management of VMs (the guest OSs), scheduling, memory management, maintaining VM state, and creating isolated partitions for VMs. Some required features of a hypervisor are security, since it is a main target for attacks, and on-the-fly resource scalability, i.e., the hypervisor should be able to allocate more resources from the host system without stopping the VM that needs the resource. The performance and scalability of a hypervisor contribute to the quality of virtualisation in a cloud infrastructure.

There are two main types of hypervisors: bare-metal hypervisor (or type 1) and hosted hypervisor (type 2) [1].

The bare-metal hypervisor runs directly on the hardware platform and is a kind of thin OS. It controls and handles the resources of the hardware, scheduling VMs and their access to resources. Besides this, it monitors the guest OSs. Type 1 is preferred in environments that require high efficiency. Some examples of commercial type 1 hypervisors are VMware ESX, Citrix XenServer and Microsoft Hyper-V.

The hosted hypervisor runs on top of an OS environment, as a process. In this sense, it manages and controls resources presented by the underlying OS. It is normally used in systems that require a variety of input/output devices and where efficiency is not a critical factor. Some examples of commercial type 2 hypervisors are Parallels Workstation, Microsoft Virtual Server, VMware Server and VMware Workstation.

III. UNDERSTANDING THE HYPERVISOR ATTACK SURFACE

A VM exit is part of a trap-and-emulate virtualisation implementation and occurs when the guest VM's code tries to execute a privileged instruction. This raises a fault, since the VM is in user mode, so the guest VM's code is interrupted (trap) and the hypervisor code begins executing to handle the privileged instruction (emulate) [1]. VM exits are rather frequent: for instance, in an idle VM running on top of Xen 4.0, VM exits occur ∼600 times/s [15].

In [15] we can see a more detailed experiment on VM exits with the Xen 4.0 hypervisor. The VM exit is the major entry point for attacks on the hypervisor, since a malicious VM could force a VM exit to occur, simulating the execution of privileged instructions, and inject malicious code or cause a malfunction in the hypervisor. By injecting code, a malicious VM can violate the confidentiality, integrity and availability of other VMs and of the hypervisor itself.
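The dispatch logic behind a VM exit can be rendered schematically as below. Real hypervisors implement this in privileged native code; the toy guest, exit reasons and handlers here are hypothetical stand-ins, but the structure shows why every handler is reachable from guest-controlled input.

```python
class GuestVM:
    """Toy stand-in for a hardware-assisted guest (schematic only):
    it replays a scripted sequence of (exit_reason, info) events."""
    def __init__(self, trace):
        self.trace = list(trace)
        self.running = True
    def run_until_exit(self):
        if not self.trace:
            self.running = False
            return "HLT", None
        return self.trace.pop(0)  # hardware would trap into the VMM here
    def kill(self, why):
        self.running = False
        print("killed:", why)

def vm_exit_loop(vm, handlers):
    """Trap-and-emulate core: run the guest until a privileged operation
    traps (a VM exit), emulate it, resume. Each handler parses
    guest-chosen data, which is what makes VM exits the hypervisor's
    main attack surface."""
    while vm.running:
        reason, info = vm.run_until_exit()
        handler = handlers.get(reason)
        if handler:
            handler(info)  # emulate the privileged operation
        elif reason != "HLT":
            vm.kill("unhandled exit: %s" % reason)

handlers = {  # hypothetical exit reasons
    "CPUID":     lambda info: print("emulating CPUID leaf", info),
    "IO_ACCESS": lambda info: print("emulating port I/O at", hex(info)),
}
vm_exit_loop(GuestVM([("CPUID", 0), ("IO_ACCESS", 0x3F8)]), handlers)
```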

IV. NO HYPERVISOR ARCHITECTURE

The main idea of the no hypervisor (NoHype) architecture, as proposed in [14] and [15], is to eliminate the hypervisor attack surface altogether. In this way, there is no longer any need for the virtual machines to interact with a hypervisor while they are executing. However, the NoHype architecture preserves the semantics of virtualisation, since it is still possible to run and manage virtual machines as is done in cloud infrastructures.

No special hardware is necessary to do this, so today's commodity hardware can be used to host the NoHype architecture. Thus, by getting rid of the hypervisor, one gets rid of the attacks to which a hypervisor would be vulnerable.

A. The Threat Model for NoHype Architecture

The objective of NoHype is to protect the cloud infrastructure against attacks perpetrated through or against the hypervisor by the guest VMs. The idea is to eliminate the interaction between VMs and the hypervisor, preventing such attacks. The threat model [16] of the NoHype architecture is shown in Fig. 1 as a Data Flow Diagram (DFD).

In the threat model we can see that the cloud infrastructure provider, the cloud management software and the modified guest OS (key idea 3, below) are assumed not to be malicious, and they are included in a "trust" boundary. The cloud provider modifies the guest OS in accordance with NoHype requirements and makes it available to customers; the process "Modify OS for NoHype" is responsible for this task. The cloud management software offers an interface for customers to manage their VMs. The multiple process "Enable services on VMs" allows starting, stopping, migrating and all other services related to a VM. A VM starts a VM exit, as described in Section 3, so the hypervisor identifies the exit type and executes the suitable action by means of the process "Emulate instructions". After completing the action, the hypervisor returns control of execution to the VM.

The model makes no assumptions about the customers, other than that they are responsible for protecting their applications in a VM. So, the customers are included in an "unknown" boundary.

B. NoHype Key Ideas

NoHype considers the main roles of a hypervisor in today's cloud infrastructure and provides the same functionality by other means, capitalising on the cloud model and on the resources available in commodity hardware. The main resources managed by a hypervisor (its main role) are processor cores, memory, I/O devices, and interrupts and timers. The key ideas of the NoHype infrastructure are to pre-allocate memory and cores, use only virtualised I/O devices, short-circuit the system discovery process, and avoid indirection.

Fig. 1. DFD for the threat model for NoHype architecture.

The key ideas are detailed below [14] [15]:

1) Key idea 1: Pre-allocate memory and cores. A hypervisor dynamically manages memory and processor cores, so VMs can be promised more resources than are actually physically available. Since in the cloud the customer specifies the resources needed before a VM is created, NoHype can pre-allocate processor cores and memory, enforcing memory isolation by means of hardware paging mechanisms (see the sketch after this list).

2) Key idea 2: Use only virtualised I/O devices. Virtualisation software normally emulates I/O devices. NoHype instead dedicates I/O devices to the guest VM, since the devices themselves are virtualised and only a few devices are needed in the cloud infrastructure, such as the network connection (NIC), storage, and graphics card.

3) Key idea 3: Short-circuit the system discovery process. In order to run on different kinds of platform, an OS tries to discover the configuration of the host system. NoHype uses a temporary hypervisor and a modified guest OS (provided by the cloud infrastructure) to allow hardware discovery only during bootup, caching the system configuration data for later use.

4) Key idea 4: Avoid indirection. Hypervisors need to map the virtual view to the real hardware (indirection). NoHype dedicates processor cores to a VM, so a guest VM can access the real processor ID, eliminating the need for indirection.
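A minimal sketch of key idea 1, assuming a hypothetical allocator interface: cores and memory are dedicated at VM creation and overcommit is simply refused, which is what removes the need for run-time multiplexing (and, as discussed in Section V below, what costs elasticity).

```python
class StaticAllocator:
    """Pre-allocation in the NoHype spirit (schematic): cores and memory
    are dedicated when a VM is created and never multiplexed afterwards,
    so no run-time hypervisor is needed to arbitrate them."""
    def __init__(self, total_cores, total_mem_mb):
        self.free_cores = set(range(total_cores))
        self.free_mem_mb = total_mem_mb
        self.vms = {}
    def create_vm(self, name, cores, mem_mb):
        if cores > len(self.free_cores) or mem_mb > self.free_mem_mb:
            # no overcommit: VMs cannot be promised more than exists
            raise RuntimeError("request exceeds free resources")
        assigned = {self.free_cores.pop() for _ in range(cores)}
        self.free_mem_mb -= mem_mb
        self.vms[name] = {"cores": assigned, "mem_mb": mem_mb}
        return self.vms[name]

alloc = StaticAllocator(total_cores=16, total_mem_mb=65536)
print(alloc.create_vm("tenant-a", cores=4, mem_mb=8192))
print(alloc.create_vm("tenant-b", cores=4, mem_mb=8192))
```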

V. AN INITIAL ANALYSIS OF THE NO HYPERVISOR ARCHITECTURE

In [15] there is a security analysis of the NoHype architecture. We complement this analysis and add some new points of view related to the security and operation of a cloud infrastructure using the NoHype system, describing each identified item below.

A. System Management Software

After implementing a NoHype system on a server, we need the system management software to perform some of the hypervisor's tasks, such as starting, stopping and migrating VMs. This software is assumed to be secure in NoHype. However, since NoHype is intended for real-world use, in a cloud environment using the NoHype architecture the system management software is still an important entry point and target for attacks, since it runs in privileged mode. It is noteworthy that there is no interaction between the system management software and the guest VM code in NoHype. Threats to the system management software could come from interaction with other components of the cloud infrastructure, such as the cloud management software, the cloud provider or the modified guest OS, which could previously have been compromised by a malicious party.

B. Temporary Hypervisor

The NoHype architecture proposes to get rid of the hypervisor altogether, but it still needs a hypervisor, even if only during the bootup process. Although the guest VM is disengaged from the temporary hypervisor before it is able to execute its code, if the temporary hypervisor is compromised by a previous attack, all of the security proposed for the NoHype system could be compromised. Moreover, the temporary hypervisor stays active for the whole lifetime of the system, which means it remains a prime target for attacks.

C. Data Cached from the System Discovery Process

NoHype utilises a temporary hypervisor and a modified guest OS to perform the system discovery tasks; system discovery is an important feature of OSs that allows them to run on different hardware platforms. In order to avoid the need for a hypervisor during the lifetime of the guest VM, the data collected during the system discovery process is cached; a guest VM can then query the data as often as it needs. The problem is guaranteeing that this cached data will not be modified by another, malicious VM. This is probably done by the same memory protection mechanisms, such as the EPT, depending on the memory region where the data is cached.

D. Kill VM Routine

The kill VM routine is a NoHype piece of code that is triggered any time a VM performs an illegal action and consequently causes a VM exit; as implemented in the NoHype system, a VM exit is illegal. But since NoHype needs a temporary hypervisor, a VM exit is not always illegal: only once the guest VM is disengaged is a flag set in memory, indicating the illegal status of VM exits. Since it is not clear where in memory this flag resides and what part of the architecture is responsible for managing it, we cannot know how hard it would be for an attacker to change it. An attacker could simply change this flag to allow VM exits and compromise the system. The kill VM routine must also itself be protected.

E. Denial of Service by Means of IPI

In the NoHype system, a VM can send inter-processor interrupts (IPIs) to other cores as much as it wishes. As a consequence, a malicious VM can send several IPIs to a core where a target VM is running, or to core 0, where the system management software runs, as defined by the NoHype system.

In order to mitigate this threat, the NoHype system uses a flag (for each type of IPI) in a shared region of memory, in such a manner that a VM sending an IPI sets the flag and the VM receiving the IPI checks and clears it. A VM receiving an IPI can thus ignore it if the flag is not set. Allegedly, the security of this process is based on the fact that "no VM can access memory of another VM", so an attacker will not be able to set the flags. But it is not clear how a VM can differentiate a legal IPI from an IPI sent by an attacker; since an attacker can itself be a (malicious) VM, it can set its own flags, and the VM receiving the IPI can do nothing but receive it. Besides, since the region of memory that holds the flags is shared, why would the attacker (a malicious VM) not have access to it?
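A schematic rendering of the flag protocol as described in [15] follows (the data structures are hypothetical); the closing comment marks the weakness just discussed.

```python
class IPIFlags:
    """Per-(sender, IPI-type) flags in a shared memory region: a sender
    sets its flag before raising the IPI; the receiver checks and clears
    the flag on delivery, ignoring IPIs whose flag is not set."""
    def __init__(self):
        self.flags = {}  # (sender_core, ipi_type) -> bool

    def send(self, sender_core, ipi_type):
        self.flags[(sender_core, ipi_type)] = True  # then raise the IPI

    def on_receive(self, sender_core, ipi_type):
        if self.flags.pop((sender_core, ipi_type), False):
            return "handle"  # flag was set: treat as a legal IPI
        return "ignore"      # spurious or flood IPI: drop it

bus = IPIFlags()
bus.send(sender_core=2, ipi_type="reschedule")
print(bus.on_receive(2, "reschedule"))  # handle
print(bus.on_receive(2, "reschedule"))  # ignore (no flag set)
# The weakness noted above: a malicious VM that sets its *own* flags
# produces IPIs this check cannot distinguish from legal ones.
```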

An experiment performed by the authors in [15] indicated that this is not a serious problem, since a VM's ability to send IPIs could not stop its targets.

F. Hardware Dependency for Isolation

Isolation in the NoHype system depends on hardware mechanisms. Especially in the case of memory, isolation depends heavily on the correct functioning and implementation of the extended page table (EPT) to guarantee confidentiality and integrity. Thus, the EPT is a critical point in the system and beyond the control of NoHype. It must be considered in the threat model and the associated risks mitigated.

G. Infrastructure Mapping

Since NoHype eliminates the hypervisor, VMs are closer to the underlying hardware. Thus, a malicious VM could map the underlying hardware infrastructure and perform side-channel attacks, as described in [17]. In this way, a malicious VM could identify where its target VM is running in order to perpetrate an attack against it. On its own, this may not be a problem in the NoHype system. But considering the possibility of previous attacks being successful, as suggested in the previous sections, discovering where the target VM is running is an important task performed by an attacker in order to advance his exploitation.

H. Pre-allocation versus Scalability

NoHype capitalises on the fact that, in a cloud environment, customers must determine the amount of resources they want for their applications. So NoHype can assign cores and portions of memory to these customers in advance (pre-allocation), which is the basis of its operation. But the ability to dynamically increase the amount of hardware resources for a customer is one of the main features of a cloud infrastructure, and this is therefore a serious problem in the NoHype architecture.

VI. CONCLUSION AND WORK IN PROGRESS

The NoHype architecture proposes a radical new approach to addressing the matter of security involving the hypervisor: get rid of it! It presents four key ideas: pre-allocate memory and cores, use only virtualised I/O devices, short-circuit the system discovery process, and avoid indirection. Basically, NoHype identifies the main roles of a hypervisor and searches for another manner of doing the same things, in order to eliminate the hypervisor.

Although the NoHype system is a feasible architecture that can be implemented with today's commodity hardware, it has a set of issues that could lead a cloud provider to stay with a traditional hypervisor infrastructure. For instance, it does not mitigate all the threats posed by a hypervisor, it introduces new threats to the virtualised environment and it restrains scalability, which is one important feature of cloud architecture.

Our work establishes an extended threat model for the NoHype architecture and identifies entry points in the architecture which can be attacked or can enable attacks. Using Data Flow Diagrams at deeper levels, we will investigate the data flows within the system, the processes transforming these data, the components involved, and the types of boundaries in NoHype and in the cloud. Ultimately, our goal is to accurately identify the entry points and threats to this architecture with a view to mitigating any vulnerabilities.

REFERENCES

[1] A. Silberschatz, P. Galvin, and G. Gagne, Operating System Concepts, 9th ed. Hoboken, NJ: Wiley, 2013.

[2] M. Christodorescu, R. Sailer, D. L. Schales, D. Sgandurra, and D. Zamboni, "Cloud security is not (just) virtualization security: a short paper," in Proceedings of the 2009 ACM Workshop on Cloud Computing Security, ACM, 2009, pp. 97–102.

[3] A. S. Ibrahim, J. H. Harris, and J. Grundy, "Emerging Security Challenges of Cloud Virtual Infrastructure," in Proceedings of the APSEC 2010 Cloud Workshop, Sydney, Australia, Nov. 2010.

[4] D. Shackleford, Virtualization Security: Protecting Virtualized Environments. Indianapolis, IN: Sybex, 2013.

[5] K. Kortchinsky, “Hacking 3D (and Breaking out of VMWare),” BlackHat USA, 2009.

[6] R. Wojtczuk, “Subverting the Xen hypervisor,” BlackHat USA, 2008.

[7] A. M. Azab, P. Ning, E. C. Sezer, and X. Zhang, "HIMA: A Hypervisor-Based Integrity Measurement Agent," in Proceedings of the 2009 Annual Computer Security Applications Conference, IEEE Computer Society, 2009, pp. 461–470.

[8] A. M. Azab, P. Ning, Z. Wang, X. Jiang, X. Zhang, and N. C. Skalsky, "HyperSentry: Enabling stealthy in-context measurement of hypervisor integrity," in ACM Conference on Computer and Communications Security (CCS), pages 38–49, October 2010.

[9] A. Seshadri, M. Luk, N. Qu, and A. Perrig, "SecVisor: A tiny hypervisor to provide lifetime kernel code integrity for commodity OSes," SIGOPS Oper. Syst. Rev., 41(6):335–350, December 2007.

[10] J. M. McCune, Y. Li, N. Qu, Z. Zhou, A. Datta, V. Gligor, and A. Perrig, "TrustVisor: Efficient TCB reduction and attestation," in IEEE Symposium on Security and Privacy, pages 143–158, May 2010.

[11] C. Li, A. Raghunathan, and N. K. Jha, "Secure virtual machine execution under an untrusted management OS," in Proceedings of the Conference on Cloud Computing (CLOUD), July 2010.

[12] R. Sailer, E. Valdez, T. Jaeger, R. Perez, L. V. Doorn, J. L. Griffin and G. S. Berger, "sHype: Secure hypervisor approach to trusted virtualized systems," Technical Report RC23511, IBM Research, 2005.

[13] U. Steinberg and B. Kauer, "NOVA: A microhypervisor-based secure virtualization architecture," in European Conference on Computer Systems, April 2010.

[14] E. Keller, J. Szefer, J. Rexford, and R. B. Lee, "NoHype: Virtualized cloud infrastructure without the virtualization," in International Symposium on Computer Architecture (ISCA), June 2010.

[15] J. Szefer, E. Keller, R.B. Lee, and J. Rexford, "Eliminating the hypervisor attack surface for a more secure cloud," in Proceedings of the 18th ACM conference on Computer and communications security, ACM, 2011, 401-412.

[16] F. Swiderski and W. Snyder, Threat Modeling. Redmond, WA: Microsoft Press, 2004.

[17] T. Ristenpart, E. Tromer, H. Shacham, and S. Savage, "Hey, you, get off of my cloud: Exploring information leakage in third-party compute clouds," in ACM Conference on Computer and Communications Security (CCS), November 2009.

[18] VMware, Understanding Full Virtualization, Paravirtualization, and Hardware Assist, 2007. [Online]. Available: http://www.vmware.com/files/pdf/VMware_paravirtualization.pdf. [Accessed: 20-Mar-2013].

[19] D. Shackleford, Virtualization Security: Protecting Virtualized Environments. Indianapolis, IN: Sybex, 2013.

A New Method to Mitigate the Impacts of Economical Denial of Sustainability Attacks Against the Cloud

Wael Alosaimi and Khalid Al-Begain

University of South Wales, Pontypridd, CF37 1DL, United Kingdom

{wael.alosaimi, k.begain}@southwales.ac.uk

Abstract—In the cloud era, security has become a renewed source of concern. Distributed Denial of Service (DDoS) attacks, and the Economical Denial of Sustainability (EDoS) attacks that can affect the pay-per-use model, which is one of the most valuable benefits of the cloud, can again become very relevant, especially with the introduction of new enterprise policies such as “Bring Your Own Device” (BYOD). The hypothesis is that such attacks can exploit Identity and Access Management (IAM) vulnerabilities in the BYOD implementations of enterprises which are customers of the cloud. Attackers can gain access to the internal network of an enterprise to generate EDoS attacks against the cloud by exploiting the absence of unified management of the heterogeneous device platforms used in the BYOD environment. This can affect the enterprise itself (direct DDoS) or other enterprises using the cloud service provider (indirect DDoS).

Therefore, this paper presents a novel framework, called the DDoS-Mitigation System (DDoS-MS), which can be used to counter EDoS attacks by testing two packets from the source of requests (legitimate or malicious) to establish the legitimacy of the source. It uses two types of examination: a Graphic Turing Test (GTT) and crypto puzzles.

The novelty of the proposed framework lies in testing only two packets from any source, instead of testing all packets. This decreases the end-to-end latency. Moreover, we use two types of test: one authenticates the user while the other authenticates the packet.

Keywords—BYOD; DDoS; Direct Distributed Denial of Service (DDDoS); Indirect Distributed Denial of Service (IDDoS); EDoS.

I. INTRODUCTION

Cloud computing is a new computing paradigm which involves delivering services and applications to customers on an on-demand basis through the Internet. These applications and services employ huge data centres, owned by cloud service providers (CSPs) around the world, and strong servers connected to create what is known as a "cloud" by hosting web servers and web applications [22]. The cloud has several features that enable it to serve its customers efficiently, including scalability, flexibility, on-demand provisioning, and elasticity [21].

As a result of having these features, cloud customers can obtain some benefits directly when they adopt the cloud. The most important benefits are decreased costs, increased storage capacity, and reduced IT overheads and concerns [31],[22].

Cloud services are offered at diverse levels: Infrastructure as a Service (IaaS), Software as a Service (SaaS), and Platform as a Service (PaaS). In addition, cloud services are classified into three chief deployment models in terms of the type of users that can access them; these models are the private cloud, the public cloud, and the hybrid cloud [28].

Cloud computing offers distinct services to its users; however, security concerns form a major reason for many organisations deciding not to migrate to cloud technology. Various aspects may pose security threats to the cloud user, or even to the cloud provider. These threats are classified into four groups: policy and organisational risks, technical risks, physical security issues, and legal risks [6]. In general, the most important security challenges in cloud computing are:

• Policy and organisational risks, including loss of control [23], compliance risk [25], portability issues [6], and end of service [14].

• Legal issues, including contracts, the design and application of Service Level Agreements (SLAs) [16], data location [22],[25], investigation support [14], and data deletion [6],[27].

• Physical security issues [26],[7],[3].

• Technical risks, including virtualisation vulnerabilities [29], availability, confidentiality and integrity [20], encryption issues [22],[25], job starvation issues [29], data segregation [14], web application security issues [8], multi-tenancy security [6], and network attacks such as Distributed Denial of Service (DDoS), Man-in-the-Middle (MITM) attacks, IP spoofing, and port scanning [19].

II. DISTRIBUTED DENIAL OF SERVICE (DDOS) AND ECONOMIC DDOS

In network security, there are several types of attack that can harm network resources and services. Distributed Denial of Service (DDoS) is one of the best known. It aims to prevent legitimate users from accessing network resources, or to destroy the availability of those resources altogether, "by absorbing all available bandwidth" [13]. To emphasise the economic impact of DDoS, a new type of DDoS attack called Economical Denial of Sustainability (EDoS) was introduced in [10]. EDoS is a "packet flood that stretches the elasticity of metered-services employed by a server, e.g., a cloud-based server" [12]. An EDoS attack can be generated by remotely run bots "to smoothly (with low rate to avoid triggering security alarms) flood a targeted cloud service by undesired requests". Therefore, the cloud service employment "will be scaled up to satisfy the on-demand requests". As the cloud depends on the pay-per-use model, the user's bill will be charged for these faked requests, "leading to service withdrawal or bankruptcy" [1]. In the end, the cloud provider will lose its customers, as they will come to believe that an on-premise data centre is better and cheaper for them than a cloud that forces them to pay for services they did not request. Moreover, the cloud provider must still pay the vendors for the infrastructure regardless of its clients' withdrawal [9],[13]. Hence, providers are affected by EDoS attacks even more than their customers.
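To make the billing amplification concrete, the following back-of-the-envelope sketch uses entirely hypothetical numbers (the bot count, per-bot request rate, and tariff are our assumptions, not figures from this paper) to show how even a low-rate flood inflates a pay-per-use bill:

```python
# Illustrative only: all figures below are assumed, not taken from the paper.
REQUESTS_PER_BOT_PER_HOUR = 60        # low rate per bot, unlikely to trip alarms
BOTS = 10_000                         # size of a modest botnet
PRICE_PER_MILLION_REQUESTS = 0.40     # hypothetical pay-per-use tariff (USD)

hours = 24 * 30                       # one month of sustained low-rate flooding
fake_requests = REQUESTS_PER_BOT_PER_HOUR * BOTS * hours
extra_bill = fake_requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
print(f"{fake_requests:,} fake requests -> ${extra_bill:,.2f} added to the bill")
# 432,000,000 fake requests -> $172.80 in request charges alone, before counting
# the autoscaled instance-hours and bandwidth the cloud provisions to serve them.
```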

Distributed Denial of Service (DDoS) solutions can be classified into two categories: reactive and proactive. A reactive defence waits for an attack to occur and then tries to mitigate its impact. A proactive defence, on the other hand, vets the source of packets before they reach the protected server [17],[4],[13]. Filtering systems are considered reactive solutions, whereas overlay-based techniques are considered proactive. Overlay-based techniques include other components besides filters: they depend on distributed firewalls or nodes (where a node can be a virtual machine or an application), and on hiding the location of the protected server [17],[4],[13].

We now review four proposed frameworks for DDoS mitigation: CLAD, SOS, WebSOS, and DaaS.

The Cloud-based Attack Defense system (CLAD) is "running on cloud infrastructures as a network service to protect Web servers". It depends on the huge cloud infrastructure, which is considered a "super computer"; therefore, "any network-layer attacks to a single CLAD node can be defeated by the whole cloud infrastructure" [18]. CLAD consists of a DNS server and a group of CLAD nodes. "Each CLAD node works as a Web proxy with several control mechanisms, which are admission control, authentication, network-layer filtering, preemption and congestion control mechanisms" [18]. The limitation of this technique is that it "could increase the end-to-end latency due to indirection where the overlay network mediates all the traffic between the clients and the target server" [1].

"Secure Overlay Services (SOS) was the first solution

to explore the idea of using overlay networks for

proactively defending against DoS attacks" [15]. SOS

architecture consists of a set of nodes which are classified

into four groups. The first group is the Secure Overlay

Access Points (SOAP), while the second collection is the

overlay nodes which connect SOAP nodes with the third

group .i.e., Beacon nodes. The last group is the Secret

Servlets. It reduces the possibility of harmful attacks by

"performing intensive filtering near protected network

edges", and by "introducing randomness and anonymity

into the architecture, making it difficult for an attacker to

target nodes along the path to a specific SOS-protected

destination" [11]. Morein, Stavrou, Cook, and Keromytis

[17] noticed that "one of the largest drawbacks to SOS, as

it precludes casual access to a web server by anonymous,

yet benign users" [17].

Morein, Stavrou, Cook, and Keromytis [17] presented an approach called WebSOS. It has the same architecture as SOS but differs in some implementation details. They described it as "an architecture that allows legitimate users to access a web server in the presence of a denial of service attack". The architecture employs a mixture of "Graphic Turing tests, cryptographic protocols for information origin authentication, packet filtering, overlay networks, and consistent hashing to provide service to casual web-browsing users" [17]. The authors state that WebSOS uses Graphic Turing Tests to "distinguish between human users and automated attack zombies". A CAPTCHA (Completely Automated Public Turing test to Tell Computers and Humans Apart) is a "program that can generate and grade tests that most humans can pass, but automated programs cannot. It is implemented at the entry point of the overlay to verify the presence of a human user" [17].

DDoS Mitigation as a Service (DaaS) tackles the DDoS problem by "facilitating the harness of Internet idle resources from any existing or future system/service, without modification, to create a metered-intermediary pool with resource that exceed those of bots" [12]. Al-Haidari, Sqalli, and Salah [1] studied this technique and observed some limitations. First, mobile devices cannot benefit from the service because of their limited computational power. Second, a "puzzle accumulation attack" can be created "when attackers send huge number of requests for puzzles without solving them". Last, "when an attacker requests high difficulty puzzles without solving them", the server creates "a channel with a high difficulty puzzle leading to a problem of difficulty inflation where the legitimate clients also have to solve such high difficulty puzzles" [1].

There are a number of additional methods for tackling EDoS attacks. Al-Haidari, Sqalli, and Salah [1] proposed EDoS-Shield, whose main idea is "to verify whether the requests coming from the users are from a legitimate person or generated by bots". The framework proposed by VivinSandar and Shenai [30] relies on a firewall working as a filter. It receives the request from the client and redirects it to a Puzzle-Server. The Puzzle-Server sends a puzzle to the client, who returns either a correct or an incorrect answer. If the answer is correct, the server sends a positive acknowledgment to the firewall, which adds the client to its white list and forwards the request to the protected server to obtain services. Otherwise, the firewall receives a negative acknowledgment and puts the client in its black list [30].

Al-Haidari, Sqalli, and Salah [2] advocated a solution, as an enhancement to their EDoS-Shield framework, to mitigate EDoS attacks originating from spoofed IP addresses. They made use of the time-to-live (TTL) value found in the IP header to help detect spoofed IP packets. The TTL value indicates the maximum lifetime of an IP packet, preventing it from circling the network forever in the presence of a routing loop. A packet is discarded when its TTL value reaches zero; otherwise, each router the packet passes through decreases the TTL field by one [2].
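The check itself is simple. The sketch below assumes the defence records the TTL observed when a source IP was first verified and treats a later mismatch as a hint of spoofing; the helper names and data structure are our own illustration, not the authors' implementation:

```python
# Minimal sketch of the TTL-based spoofing check described for the enhanced
# EDoS-Shield [2]. The recorded TTL is assumed to be the value observed when
# the source IP was first verified; names are illustrative.

whitelist = {}  # ip -> TTL observed at verification time

def record_verified(ip: str, ttl: int) -> None:
    whitelist[ip] = ttl

def looks_spoofed(ip: str, ttl: int) -> bool:
    """A packet claiming a whitelisted IP but arriving with a different TTL
    has probably travelled a different path, i.e. it may be spoofed."""
    return ip in whitelist and whitelist[ip] != ttl

record_verified("198.51.100.7", 54)
print(looks_spoofed("198.51.100.7", 54))  # False: TTL matches the recorded value
print(looks_spoofed("198.51.100.7", 47))  # True: mismatch suggests spoofing
```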

Finally, the In-Cloud eDDoS Mitigation Web Service (Scrubber Service) framework is introduced as an on-demand service. It depends on the In-Cloud Scrubber Service, which generates and verifies client puzzles (crypto puzzles) to authenticate clients. The user must solve the crypto puzzle by brute force. "The Service Provider switches either in Normal mode or suspected mode depending on the situation. Whenever the Service Provider enters into suspected mode, an on-demand request is being sent to In-Cloud eDDoS mitigation service" [13].

This framework depends on puzzle-based approaches only. Puzzle servers are mostly used to check network-layer DDoS attacks, which are easier to detect than application-layer attacks; network-layer attacks can also be detected from the traffic rate. The framework thus focuses on the bandwidth load rather than the server load, even though the application-layer attack is more harmful in impact and more difficult to detect.
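To make the crypto puzzle concept concrete, here is a toy client puzzle of the hash-preimage kind commonly used for this purpose; the construction, encoding and difficulty parameter are illustrative assumptions, not the scheme of [13]:

```python
# Toy client puzzle: the server sends a nonce and a difficulty k; the client
# brute-forces a counter whose hash has k leading zero bits. Parameters and
# encoding are illustrative assumptions.
import hashlib
import os

def make_puzzle(k: int = 18) -> tuple[bytes, int]:
    return os.urandom(16), k          # (nonce, difficulty)

def solve(nonce: bytes, k: int) -> int:
    """Brute force: try counters until SHA-256(nonce || counter) starts with k zero bits."""
    counter = 0
    while True:
        digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - k) == 0:
            return counter
        counter += 1

def verify(nonce: bytes, k: int, counter: int) -> bool:
    digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - k) == 0

nonce, k = make_puzzle()
answer = solve(nonce, k)               # costs the client ~2^k hashes on average
assert verify(nonce, k, answer)        # costs the server a single hash
```

The asymmetry is the point: verification is one hash for the server, while solving costs the client on the order of 2^k hashes, which rate-limits automated request floods.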

III. IDENTITY AND ACCESS MANAGEMENT

In the cloud model, the application is delivered as a service to customers, generally through a web browser. Therefore, "network-based controls are becoming less relevant" and are augmented or replaced by user access controls, such as privilege management, provisioning, and authentication, on which users must focus in order to protect the data hosted by the cloud provider [24]. Current Identity and Access Management (IAM) methods grant authenticated users access to particular resources based either on the roles of these users in the enterprise (the Role-Based Access Control (RBAC) model) or on specific attributes determined for the users in addition to their roles (the Attribute-Based Access Control (ABAC) model).

The effectiveness of these methods depends on the efficiency of the authentication techniques used and on the basis on which privileges are provided to users: privileges must be granted precisely according to users' positions (roles), and must be offered at the least degree.
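As a minimal sketch of the RBAC/ABAC distinction, the roles, permissions and attributes below are invented for illustration; neither model's policy content comes from the paper:

```python
# Contrast between an RBAC decision (privileges follow the role) and an ABAC
# decision (the role decision is refined by user/context attributes).
# Roles, permissions and attributes are invented for illustration.

ROLE_PERMISSIONS = {
    "lecturer": {"read_grades", "write_grades"},
    "student":  {"read_grades"},
}

def rbac_allows(role: str, permission: str) -> bool:
    return permission in ROLE_PERMISSIONS.get(role, set())

def abac_allows(role: str, permission: str, attributes: dict) -> bool:
    # ABAC example rule: only on-campus, MFA-authenticated sessions may write.
    if not rbac_allows(role, permission):
        return False
    if permission == "write_grades":
        return bool(attributes.get("on_campus") and attributes.get("mfa_passed"))
    return True

print(rbac_allows("student", "write_grades"))                          # False
print(abac_allows("lecturer", "write_grades", {"on_campus": True,
                                               "mfa_passed": True}))   # True
```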

IAM methods are based on the following assumptions:

1. Each user has privileges designated according to his/her roles or attributes.

2. Each user should be assigned no more privileges than those entitled to his/her role.

3. Each resource (object) is accessed only by authorised users, who have the privileges required to access that resource.

4. Each user must be authenticated, using multifactor authentication techniques, before being granted access to the network/website.

5. Each packet must be authenticated, using proper authentication techniques, before being permitted access to the network.

6. The networks must be designed in a way that protects the server against adversaries and provides a good infrastructure for applying efficient IAM methods and policies.

From the above assumptions, it can be seen that there are two levels of authentication. The former, higher level aims to authenticate the user; the proposed framework performs this by using the Graphic Turing Test (GTT) to distinguish between human users and machines, in order to avoid botnet attacks. The latter, lower level authenticates the received packet: some fields in its IP header can be treated as indicators on which the defence system can base its decision to permit or deny the packet. Hence, our framework uses puzzles to detect an unauthorised user who may pass the GTT because he is a human user. To balance control and convenience in our proposed system, users should feel that they are trusted; in fact, they are not, as their packets are tested using puzzles, which means their legitimacy is examined underneath.

IV. BRING YOUR OWN DEVICE (BYOD)

Bring Your Own Device (BYOD) describes a phenomenon that has spread recently among enterprises, small and large. The trend is about allowing employees to bring their own mobile devices to their workplaces in order to access the enterprise's database, file servers, and email. The devices include laptops, tablets, and smartphones [5].

Companies cannot simply forbid this trend, especially given their employees' strong demand to allow it. Employees regard BYOD as an attractive factor that increases their loyalty and productivity. In fact, the discussion nowadays is not about denying or allowing BYOD, but about BYOD management [5].

The most observable benefit to companies that adopt BYOD is cost saving, as a result of avoiding purchasing mobile devices for their employees. Moreover, productivity and loyalty increase among employees, as they are more comfortable using their own devices, which may contain their preferred applications, files, and websites. This increase in productivity, loyalty and satisfaction supports the company and facilitates accomplishing its targets [5].

The problem lies in accessing organisational data from these devices, which are not owned or configured by the organisation. Each kind of device depends on a unique platform with distinctive features. This requires IT staff to be aware of all developments in the device market in terms of newly issued types, updated features, or upgraded software. Even when unified management policies and techniques are applied, the accelerating developments in the field may open holes in the security systems, especially in the interval between the emergence of new devices or features and the response of the IT department to that development. The risks can come from outside the network's firewall, as with any other network, or from inside the network (behind its firewall), because the devices access the network from the internal environment and the users can be diverse.

V. DIRECT DDOS AND INDIRECT DDOS ATTACKS

We can classify the enterprises that allow users to access their networks into three categories: high access-controlled, medium access-controlled, and low access-controlled enterprises.

High access-controlled enterprises ensure that their employees' devices are under the full control of the enterprise. These devices can be issued by the enterprise, or brought by the employees but not allowed to be used unless they are configured and authenticated by the enterprise's IT department.

By medium access-controlled enterprises we mean organisations that have a mixture of staff and other types of users. Universities are an example: they have two types of users, one that can access from inside the network (behind the firewall) and another that can only access from outside the firewall. Low access-controlled enterprises allow users to access their network using their own devices with temporary usernames and passwords. For example, some companies organise exhibitions or open fairs and give the exhibitors and visitors access to a wireless network operated by the company as an attractive service during the exhibition period. In this case, the company needs to impose policies and control access to these services by managing the devices used. However, this is difficult because the exhibition period is short and it is unreasonable to ask users to submit their devices to the company for configuration; in fact, they would not accept this.

Some of these enterprises, whether high, medium, or low access-controlled, may use the cloud. Hence, they can create problems for the cloud when their systems are compromised by attackers. With regard to the DoS concept, a DDoS attack can be mounted against a customer of the cloud by exploiting weaknesses in its IAM system. We classify this attack according to its impact into two types: the Direct DDoS (DDDoS) attack and the Indirect DDoS (IDDoS) attack.

A DDDoS attack affects the customer directly: the customer's own network is compromised, and the attackers can use it as an attack platform against the cloud itself. If the attacker succeeds in generating DDoS attacks against the cloud provider, other customers of that provider will be affected indirectly; in this case we call it an Indirect DDoS (IDDoS) attack. The risk stems from the fact that the cloud provider cannot have full control over what the customer's policy contains.

VI. THE PROPOSED FRAMEWORK

A. DDoS-MS Principles and Techniques

As shown above, each suggested solution for tackling the DDoS issue has its limitations. The proposed framework builds on these previous methods but is designed to mitigate the economic effects of Distributed Denial of Service (DDoS) attacks; it is called the DDoS-Mitigation System (DDoS-MS).

DDoS-MS can be provided as a cloud service. It consists of a firewall (or virtual firewall), one or more verifier nodes, a client puzzle server, a DNS server, green nodes in front of the protected server(s), and a filtering router. The firewall maintains white and black lists of packet sources, populated according to the results of the verification process. The green nodes hide the location of the protected server, and the server does not receive any packet except those forwarded by these nodes through the filtering router. The router forwards only the packets coming from the green nodes and rejects any other packet. Fig. 1 shows the framework's components.

The idea of DDoS-MS is to test the first two packets of each session in two successive stages. The first stage is performed by the verifier node(s), which use the Graphic Turing Test (GTT) to verify the packets. The second is performed by the client puzzle server, which uses a crypto puzzle to verify the source of the packets.
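As a minimal sketch of the state this implies, the firewall's two lists might look as follows; the field names and types are our own illustration, not the paper's specification:

```python
# White and black list entries as described in the text: each records the
# source IP and its TTL, and black-list entries also record the time the
# attack started. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class WhiteEntry:
    ip: str
    ttl: int              # TTL recorded when the source passed verification

@dataclass
class BlackEntry:
    ip: str
    ttl: int              # TTL recorded when the source failed a test
    attack_start: float   # timestamp of the first malicious packet

white_list: dict[str, WhiteEntry] = {}
black_list: dict[str, BlackEntry] = {}
```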

B. The Framework Assumptions

The proposed framework is based on the following assumptions in order to limit its scope:

1. The enterprise (a university) is a customer of a cloud provider, and it allows BYOD.

2. The framework can be used in the customer's system and in the provider's system.

3. The framework is provided as a cloud service to the customer. To prevent spoofed addresses inside the enterprise from attacking the provider, the framework must be deployed in the enterprise.

4. The attacker in this case is a persistent, intelligent attacker, not a random or selective one.

5. The attacker's target is to generate DDoS attacks against the cloud itself, affecting the pay-per-use model by exploiting vulnerabilities in the customer's system.

6. The framework tests the first two packets that come from any source, assuming that the packets are not fragmented, so the TTL values will not change according to the several paths that fragmented packets can take to reach the destination.

The idea behind testing only the first two packets is to build on the EDoS-Shield framework's advantage of reduced end-to-end latency. The role of the verifier node is to verify sources and distinguish legitimate clients from botnets. The second verification, performed by the puzzle server, confirms the legitimacy of the source and strengthens the verification process. DDoS-MS assumes the following scenarios:

C. Scenarios

1) The scenario of testing the first packet

1. The user sends a packet to the protected cloud server.

2. The firewall receives the packet and checks its lists.

3. The packet source's address exists in neither the white list nor the black list, so the firewall forwards the packet to the verifier node.

4. The verifier node sends a Graphic Turing Test (GTT) to the user.

5. If the user passes the test, the verifier node sends a positive acknowledgment to the firewall. Otherwise, the firewall receives a negative acknowledgment.

6. If the firewall receives a negative result from the verifier node, the request is refused and the user's IP address, its TTL (time-to-live) value, and the start time of the attack (a timestamp) are placed in the black list.

7. Otherwise, the firewall forwards the packet to the DNS server, which forwards it to the green nodes and consequently to the protected server through the filtering router. The firewall places the user's IP address and its TTL value in the white list. Lastly, the requested service and data are sent directly to the user.

2) The scenario of testing the second packet (if the source of the second packet is a legitimate user)

1. The firewall receives the packet and checks its lists.

2. The packet source's address exists in the white list. If the packet's TTL value is equal to the TTL value recorded in the white list, the firewall forwards the packet to the client puzzle server. Otherwise, the packet is dropped and its details are simply removed from the white list.

3. The client puzzle server sends a crypto puzzle to the user. If the user passes the test, the client puzzle server sends a positive acknowledgment to the firewall. Otherwise, the firewall receives a negative acknowledgment.

4. If the firewall receives a negative result from the client puzzle server, the request is refused and the user's details are simply removed from the white list.

5. Otherwise, the firewall forwards this packet and all subsequent packets from this user to the protected server, and updates the user's details in the white list.

6. Lastly, the requested service and data are sent directly to the user.

3) The scenario of testing the second packet (if the source of the second packet is malicious)

1. In this scenario, the packet source's address exists in the black list, so the firewall compares the recorded values of the source (TTL and timestamp). If the packet's TTL value is equal to the TTL value registered in the black list, or the start time of the packet is the same as the start time of the previous malicious packet, then the firewall refuses the current request, updates the attacker's details in the black list, and refuses any further packets from this source.

2. Otherwise, the firewall forwards the packet to the verifier node to test it using a Graphic Turing Test (GTT).

3. If the firewall receives a negative result from the verifier node, the attacker's details are updated in the black list, the current request is refused, and any further packets from this source are also refused.

4. Otherwise, the packet is sent to the puzzle server, which sends a crypto puzzle to the user.

5. If the user passes the test, the firewall forwards this packet to the protected server to obtain the requested services and simply removes the user's details from the black list.

6. Otherwise, the current request is refused, the attacker's details are updated in the black list, and any further packets from this source are also refused.

Figure 1. DDoS-MS architecture.
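The three scenarios can be condensed into the following decision logic. This is a simplified sketch of the text above, not the authors' implementation: gtt_passed, puzzle_passed, forward and refuse are placeholder stubs for the verifier node, the client puzzle server, the DNS/green-node/filtering-router path and the refusal action, and the sketch continues the white_list/black_list structures given earlier.

```python
# Condensed sketch of the three scenarios; the helpers below are placeholders
# for the real components described in the text.
import time

def gtt_passed(ip: str) -> bool: return True      # stub: verifier node runs a GTT
def puzzle_passed(ip: str) -> bool: return True   # stub: client puzzle server
def forward(pkt: dict) -> None: pass              # stub: DNS -> green nodes -> router
def refuse(ip: str) -> None: pass                 # stub: refuse this and later packets

def handle_packet(pkt: dict) -> None:
    ip, ttl = pkt["ip"], pkt["ttl"]

    if ip not in white_list and ip not in black_list:   # Scenario 1: first packet
        if gtt_passed(ip):
            white_list[ip] = WhiteEntry(ip, ttl)
            forward(pkt)
        else:
            black_list[ip] = BlackEntry(ip, ttl, time.time())

    elif ip in white_list:                              # Scenario 2: listed as legitimate
        if white_list[ip].ttl != ttl:                   # TTL mismatch: possible spoofing
            del white_list[ip]                          # drop the packet, forget the entry
        elif puzzle_passed(ip):
            forward(pkt)                                # this and all subsequent packets
        else:
            del white_list[ip]

    else:                                               # Scenario 3: listed as malicious
        entry = black_list[ip]
        if entry.ttl == ttl or entry.attack_start == pkt.get("timestamp"):
            refuse(ip)                                  # same attacker: refuse outright
        elif gtt_passed(ip) and puzzle_passed(ip):
            del black_list[ip]                          # passed both tests: rehabilitate
            forward(pkt)
        else:
            refuse(ip)                                  # update black list and refuse
```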


VII. CONCLUSION AND FUTURE WORK

Researchers in the security field need to improve the traditional methods used against attacks, as attackers are developing their skills and overcoming most existing defence solutions. The DDoS-MS solution presented in this paper can be considered an attempt in this regard: it tries to leverage existing solutions while avoiding their limitations, in order to produce a robust and effective solution against EDoS attacks. In the future, a practical proof of concept will be carried out in a reliable environment to demonstrate the efficiency of the framework. The framework's scope will also be expanded to include some of the currently excluded cases, which we will investigate; these cases include using a counter for TTL values, working with dynamic IP addresses, and handling IP packet fragmentation.

A simulation-based evaluation of the proposed system is being implemented. The proposed system will be evaluated by recording the simulation results and monitoring the system performance, in order to decide whether the system can accomplish its targets in countering DDoS attacks effectively or will need to be modified to achieve its desired objectives.

ACKNOWLEDGMENT

The authors would like to thank Taif University in Saudi Arabia for its sponsorship of this study.

REFERENCES

[1] Al-Haidari, F., Sqalli, M., and Salah, K., 2011. EDoS-Shield - A Two-Steps Mitigation Technique against EDoS Attacks in Cloud Computing. In 2011 Fourth IEEE International Conference on Utility and Cloud Computing. IEEE, pp. 49-56.

[2] Al-Haidari, F., Sqalli, M., and Salah, K., 2012. Enhanced EDoS-Shield for Mitigating EDoS Attacks Originating from Spoofed IP Addresses. In 2012 IEEE 11th International Conference on Trust, Security and Privacy in Computing and Communications. IEEE, pp. 1167-1174.

[3] Aslan, T., 2012. Cloud physical security considerations. IBM Cloud. Available at: http://thoughtsoncloud.com/index.php/2012/02/cloud-physical-security-considerations/ [Accessed April 15, 2012].

[4] Beitollahi, H., and Deconinck, G., 2008. FOSeL: Filtering by Helping an Overlay Security Layer to Mitigate DoS Attacks. In 2008 Seventh IEEE International Symposium on Network Computing and Applications. IEEE, pp. 19-28.

[5] Burke, J., 2011. Bring Your Own Device Risks and Rewards. TechRepublic. Available at: http://www.techrepublic.com/blog/tech-manager/bring-your-own-device-risks-and-rewards/7075 [Accessed February 27, 2013].

[6] ENISA, 2009. Cloud Computing Risk Assessment. European Network and Information Security Agency, pp. 9-10. Available at: http://www.enisa.europa.eu/act/rm/files/deliverables/cloud-computing-risk-assessment [Accessed April 11, 2012].

[7] Fortinet, C., 2011. Network and physical security in the cloud. Asia Cloud Forum. Available at: http://www.asiacloudforum.com/content/network-and-physical-security-cloud [Accessed April 14, 2012].

[8] Heng, C., 2011. Security Issues in Writing PHP Scripts - And How PHP 4.1.0 / 4.2.0+ Will Change Your Scripts. Available at: http://www.thesitewizard.com/archive/phpsecurity.shtml [Accessed April 10, 2012].

[9] Hoff, C., 2009. A Couple Of Follow-Ups On The EDoS (Economic Denial Of Sustainability) Concept. Rational Survivability. Available at: http://rationalsecurity.typepad.com/blog/2009/01/a-couple-of-followups-on-my-edos-economic-denial-of-sustainability-concept.html [Accessed April 12, 2013].

[10] Hoff, C., 2008. Cloud Computing Security: From DDoS (Distributed Denial Of Service) to EDoS (Economic Denial of Sustainability). Rational Survivability. Available at: http://rationalsecurity.typepad.com/blog/2008/11/cloud-computing-security-from-ddos-distributed-denial-of-service-to-edos-economic-denial-of-sustaina.html [Accessed April 7, 2013].

[11] Keromytis, A., Misra, V., and Rubenstein, D., 2002. SOS: Secure Overlay Services. In SIGCOMM, pp. 61-72.

[12] Khor, S., and Nakao, A., 2011. DaaS: DDoS Mitigation-as-a-Service. In 2011 IEEE/IPSJ International Symposium on Applications and the Internet, pp. 160-171.

[13] Kumar, M., Sujatha, P., Kalva, V., Nagori, R., Katukojwala, A., and Kumar, M., 2012. Mitigating Economic Denial of Sustainability (EDoS) in Cloud Computing Using In-cloud Scrubber Service. In 2012 Fourth International Conference on Computational Intelligence and Communication Networks, pp. 535-539.

[14] Kuyoro, S., Ibikunle, F., and Awodele, O., 2011. Cloud Computing Security Issues and Challenges. International Journal of Computer Networks (IJCN), 3(5), pp. 247-252.

[15] Lakshminarayanan, K., Adkins, D., Perrig, A., and Stoica, I., 2004. Taming IP Packet Flooding Attacks. ACM SIGCOMM Computer Communication Review, 34(1), pp. 45-50.

[16] McBorrough, W., 2010. The Real Arguments For Cloud Computing. Available at: http://ezinearticles.com/?The-Real-Arguments-For-CloudComputing&id=4333231 [Accessed February 28, 2012].

[17] Morein, W., Stavrou, A., Cook, D., Keromytis, A., Misra, V., and Rubenstein, D., 2003. Using graphic turing tests to counter automated DDoS attacks against web servers. In Proceedings of the 10th ACM Conference on Computer and Communications Security (CCS '03), pp. 8-19.

[18] Ping, D., and Nakao, A., 2010. DDoS defense as a network service. In 2010 IEEE Network Operations and Management Symposium (NOMS 2010), pp. 894-897.

[19] Raju, B., Swarna, P., and Rao, M., 2011. Privacy and Security Issues of Cloud Computing. International Journal, 1(2), pp. 128-136.

[20] Ramgovind, S., Eloff, M., and Smith, E., 2010. The Management of Security in Cloud Computing. In Information Security for South Africa (ISSA). IEEE, pp. 1-7.

[21] Raths, D., 2008. Cloud Computing: Public-Sector Opportunities Emerge. Available at: http://www.govtech.com/gt/articles/387269 [Accessed January 25, 2012].

[22] Rittinghouse, J., and Ransome, J., 2010. Cloud Computing: Implementation, Management, and Security. USA: Taylor and Francis Group, LLC.

[23] Roberts, J.C., and Al-Hamdani, W., 2011. Who can you trust in the cloud? In Proceedings of the 2011 Information Security Curriculum Development Conference (InfoSecCD '11). ACM Press, pp. 15-19.

[24] Sabahi, F., 2011. Virtualization-Level Security in Cloud Computing. In Communications Software and Networks (ICCS). IEEE, pp. 250-254.

[25] Sangroya, A., Kumar, S., Dhok, J., and Varma, V., 2010. Towards Analyzing Data Security Risks in Cloud Computing Environments. In ICISTM, pp. 255-265.

[26] Sitaram, D., and Manjunath, G., 2012. Cloud Security Requirements and Best Practices. In Moving to the Cloud: Developing Apps in the New World of Cloud Computing. USA: Elsevier, p. 309.

[27] Slack, E., 2011. How do you know that "Delete" means Delete in Cloud Storage? Available at: http://www.storageswitzerland.com/Articles/Entries/2011/8/16_How_do_you_know_that_Delete_means_Delete_in_Cloud_Storage.html [Accessed April 9, 2012].

[28] Steve, H., 2008. Cloud Computing Made Clear. Business Week, 59(1).

[29] Vaidya, V., 2009. Virtualization Vulnerabilities and Threats: A Solution White Paper. RedCannon Security, Inc, pp. 1-7. Available at: http://www.redcannon.com/vDefense/VM_security_wp.pdf [Accessed April 13, 2012].

[30] VivinSandar, S., and Shenai, S., 2012. Economical Denial of Sustainability (EDoS) in Cloud Services using HTTP and XML based DDoS Attacks. International Journal of Computer Applications, 41(20), pp. 11-16.

[31] Vouk, M., 2008. Cloud Computing - Issues, Research and Implementations. Journal of Computing and Information Technology, 16(4), pp. 235-246.

Secure Cloud Computing for Critical Infrastructure: A Survey

Younis A. Younis, Madjid Merabti and Kashif Kifayat

School of Computing and Mathematical Sciences,
Liverpool John Moores University,
Liverpool, L3 3AF, UK

[email protected] and {m.merabti, k.kifayat}@ljmu.ac.uk

Abstract—Cloud computing has been considered one of the promising solutions to our increasing demand for accessing and using resources provisioned over the Internet. It offers powerful processing and storage resources as on-demand services, with reduced cost and increased efficiency and performance. All of these features and more encourage enterprises, governments and even critical infrastructure providers to migrate to the cloud. Critical infrastructures, such as power plants and water networks, are considered the backbone of modern societies. However, alongside these promising facilities and benefits, there remain a number of technical barriers that hinder utilizing the cloud, such as security and quality of service. The target of this survey is to explore potential security issues related to securing cloud computing for critical infrastructure providers. It highlights security challenges in cloud computing and investigates the security requirements of various critical infrastructure providers.

Keywords—cloud computing; critical infrastructure; security; limitations

I. INTRODUCTION

In the last few years, we have seen dramatic growth in IT investments, and a new term has come to the surface: cloud computing. The National Institute of Standards and Technology defines cloud computing as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction" [1]. It has five essential characteristics: on-demand self-service, measured service, rapid elasticity, broad network access and resource pooling. It aims at providing the capability to use powerful computing systems while reducing cost and increasing efficiency and performance [1].

However, alongside these promising facilities and benefits, there are still a number of technical barriers that may prevent cloud computing from becoming a truly ubiquitous service, especially where the customer has strict or complex requirements for the security of an infrastructure [2]. The latest cyber-attacks on high-profile firms (Amazon, Google and Sony's PlayStation) and the predictions of more cyber-attacks on cloud infrastructure are threatening to slow the take-off of cloud computing. The number of cyber-attacks is now so large, and their sophistication so great, that many organizations have trouble determining which new threats and vulnerabilities pose the greatest risk and how resources should be allocated to ensure that the most probable and damaging attacks are dealt with first. These security concerns and attacks could slow the growth of the cloud computing market, which is expected to reach $3.2 billion by the end of 2012 in Asia alone, up from $1.87 billion the previous year, while the global market could reach $55 billion in 2014 [3].

Cloud computing gives new hope for meeting the various requirements of service providers and consumers alike, when they look at what the cloud can offer them. A report from The Economist Intelligence Unit and IBM finds that, among 572 business leaders surveyed, almost three-fourths indicate their companies have piloted, adopted or substantially implemented cloud in their organizations, and 90% expect to have done so in three years. Moreover, the number of respondents whose companies have "substantially implemented" cloud is expected to grow from 13% today to 41% in three years [4]. The unique benefits of cloud computing have provided the basis for many critical infrastructure providers to migrate to the cloud computing paradigm. For example, IBM and Cable & Wireless (C&W) have announced plans to collaborate in the development of a cloud-based smart metering system [5], which aims at deploying about 50 million smart meters in the UK by 2020. BT has deployed a new cloud-based supply chain solution to increase operational efficiency, improve customer service and optimize reverse logistics [6]. In April 2013, the National Grid, the UK's gas and electricity network, announced plans to replace its own internal datacenters with a CSC-hosted cloud [7].

Critical infrastructure is an essential asset for the maintenance of vital societal functions, such as power distribution networks and financial systems [8]. In a cloud environment, critical infrastructure providers would require scalable platforms for their large amounts of data and computation; multi-tenant billing and virtualization with very strong isolation; Service Level Agreement (SLA) definitions and automatic enforcement mechanisms; and end-to-end performance and security mechanisms. However, these requirements might not be met by cloud computing service providers, as they suffer from a number of challenges and threats. Our objective is to look at the cloud computing security challenges that hinder migration to the cloud, and at the requirements critical infrastructure providers have for utilizing the cloud.


The rest of this paper is structured as follows. Section 2 explores the security challenges in cloud computing. Requirements for different critical infrastructure areas, such as the health sector, smart grids and the telecommunications field, are illustrated in Section 3. The conclusions from this research and our future work are presented in Section 4.

II. SECURITY CHALLENGES IN CLOUD COMPUTING

In cloud computing, critical aspects of security can be gleaned from the reported experiences of early adopters, and from researchers analyzing and experimenting with available service provider platforms and associated technologies. Security is the greatest inhibitor of adoption and the primary concern in cloud computing. As cloud computing is a modern way to access and use computing resources over the Internet, it inherits security risks and vulnerabilities from the conventional Internet, concerning data confidentiality, integrity, availability and so on. Moreover, cloud computing has brought new concerns that have to be considered, such as data being moved to and stored in the cloud, possibly residing in another country with different regulations. This section highlights security-related issues that are believed to have long-term significance for cloud computing.

A. Data security and privacy

One of the critical aspects of cloud computing security is protecting data integrity, availability and confidentiality. Data will be stored and moved in a shared environment managed by various service providers, and it is likely to be located in a different country with other regulations. It could face various kinds of regulations that might reveal it partially or completely, even if it stayed within national borders. The data could also be passed to a third party for use for other purposes, for instance in advertisements, which could lead to significant security problems. The integrity of data stored in the cloud has to be ensured without downloading it, as downloading would be costly for customers, especially with huge amounts of data. Furthermore, data is always dynamic, in the cloud or anywhere else, so it can be updated, appended, deleted and so on [9].
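As a toy illustration of checking integrity without downloading the data (a simplification of the general idea, not the scheme of [9]): the owner precomputes keyed digests of blocks before upload, then challenges the provider for one digest at audit time and compares, so only a digest crosses the wire.

```python
# Toy integrity spot-check: the owner keeps (nonce, expected digest) pairs
# locally; the provider must read the stored block to answer a challenge.
# Each nonce is single-use. Names and sizes are illustrative assumptions.
import hashlib
import os

def tag(nonce: bytes, block: bytes) -> str:
    return hashlib.sha256(nonce + block).hexdigest()

blocks = [os.urandom(4096) for _ in range(8)]        # the file, split into blocks
challenges = {i: os.urandom(16) for i in range(8)}   # owner keeps nonces locally
expected = {i: tag(n, blocks[i]) for i, n in challenges.items()}

# ... blocks are uploaded; later the owner audits block 3 ...
def provider_respond(i: int, nonce: bytes) -> str:
    return tag(nonce, blocks[i])                     # provider must read the block

assert provider_respond(3, challenges[3]) == expected[3]   # only a digest is sent
```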

As data is stored on different servers located in different places, data availability becomes a big concern due to factors such as bandwidth efficiency or a cloud being partly unavailable. For example, Microsoft's Azure cloud service faced severe degradation for nearly 22 hours due to problems related to a network upgrade [3]. A cloud service provider also has to ensure its computing resources are fully usable and available at all times; computing resources could become inaccessible for many reasons, such as a natural disaster or a denial of service.

Protecting data privacy is another important aspect of cloud computing security. Cloud computing is a shared environment that uses shared infrastructure, so data may face a risk of disclosure or unauthorized access. Sharing cloud computing resources while protecting customers' privacy is a big challenge. To deliver secure multi-tenancy in the cloud, isolation is needed to ensure each customer's data is isolated from others' data. As data may be transferred between countries, it could face different kinds of regulations and legal systems. Data anonymity might be utilized to ensure customers' data privacy and security.

Data sanitization is about making sure any sensitive data has been deleted from storage devices, either when they are removed or when the data has to be cleared. Data provenance aims at supporting data forensics in the cloud, which means recording who has accessed the data or modified it. So, secure provenance is needed to attest ownership and any access or modification [10].
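As a minimal sketch of what tamper-evident provenance can look like (a simplification of the goal of [10], not its construction), each access or modification below is appended to a hash-chained log, so altering any past record breaks the chain:

```python
# Hash-chained access log: each event's digest covers its content and the
# previous digest, so tampering with any record invalidates all later links.
import hashlib
import json
import time

log: list[dict] = []

def append_event(user: str, action: str, obj: str) -> None:
    prev = log[-1]["digest"] if log else "0" * 64
    event = {"user": user, "action": action, "object": obj,
             "time": time.time(), "prev": prev}
    event["digest"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()).hexdigest()
    log.append(event)

def chain_intact() -> bool:
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "digest"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["digest"] != digest:
            return False
        prev = e["digest"]
    return True

append_event("alice", "read", "record-42")
append_event("bob", "modify", "record-42")
assert chain_intact()
```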

B. Security attacks and threats

In cloud computing, the service provider has a big role in dealing with all kinds of threats and attacks that it or its customers could face. Most of the attacks organizations have faced come as a result of vulnerabilities in their systems. In addition, cloud computing inherits a number of security attacks from conventional distributed systems, which could have a huge impact on its services, such as malicious code (viruses, Trojan horses), back doors, man-in-the-middle attacks, replay attacks, spoofing, social engineering, TCP hijacking, password guessing and so on [11]. Beyond these, cloud computing has brought its own unique security threats and concerns.

Cloud malware injection attack - This is high on the list of attacks. It aims at injecting a malicious service, application or virtual machine into the cloud system [12].

Metadata spoofing attack - A web service's server provides metadata documents, which store all information about the web service invocation, such as message format, security requirements and network location, to the service clients. This attack aims at reengineering a web service's metadata descriptions in order to modify the network endpoints and the references to security policies [13].

Account and service hijacking - This threat can occur when an attacker hacks into a web site hosted by a cloud service provider and then secretly installs software and controls the cloud provider's infrastructure [14].

Unknown risk profile - This can arise when an organization cares only about what features and functionalities can be gained from adopting cloud services, without considering how security procedures and technologies will be developed, who has access to the data, and what happens when the data is disclosed for any reason [14].

Malicious insiders - This can be caused by a lack of transparency into provider processes and into how access to virtual assets is granted to employees. The threat is further complicated by the lack of visibility into how employees' roles and responsibilities are updated when their jobs or behaviour change [14].

Shared technology vulnerabilities - Cloud computing uses the same infrastructures used on the Internet, shared among the cloud consumers. So, all the current problems these infrastructures have will migrate to the cloud unready, because most of their components were not designed for sharing resources in the cloud [14].

Abuse and nefarious use of cloud computing - According to the CSA, this is the top threat to cloud computing, as an attacker can use the available computing power of the cloud's infrastructure to attack any target by spreading malware and spam, for example via a botnet [14].

Insecure application programming interfaces - As cloud service providers depend upon APIs to deliver services to their customers, APIs must have secure authentication, encryption, activity monitoring mechanisms and access control [14].

C. Other security challenges

In order to depict the whole picture, we have to consider other challenges, each of which would need another survey.

Access controls and Identity Management (IdM)
This is a big concern; it could cause serious security problems, lead to the disclosure of customers' data and give attackers the ability to infiltrate organizations' assets. Identity management (IdM) is another important aspect of cloud computing security, aiming at performing authentication among heterogeneous clouds to establish a federation, but it suffers from problems related to interoperability between various security technologies [15].

Monitoring
In the cloud, there is a huge demand for monitoring activities, whether of insiders or outsiders [2].

Risk analysis and management
This is a very important aspect of cloud computing security. It is about reducing the load in cloud computing by checking for any risk in the data before delivering it to consumers [16].

Service Level Agreement
Relations between cloud service providers and consumers have to be described in a service level agreement, which is used to define services and the ways these services are delivered to consumers [17].

Accounting
This is one of the crucial aspects that should be considered in evolving and deploying services in the cloud, as it supports network management [18].

Heterogeneity
Cloud computing services are delivered by a large number of service providers using different types of technologies, which might cause heterogeneity problems. Heterogeneity can result from differences at various levels, either software or hardware [19].

Virtualization
Virtualization is one of the many means used in cloud computing to meet consumer necessities, but it brings its own unique vulnerabilities.

Compliance
Cloud computing lacks proper mechanisms for compliance management. These mechanisms have to deal with concerns related to compliance and prevent any serious problems that can be caused to data security and privacy [20].

Trust Management
In the cloud computing environment, there is a huge demand for establishing a reasonable and practical model for managing trust relationships among cloud computing entities [2].

Cross-Organizational Security Management
Achieving and maintaining security requirements and compliance with SLAs are big challenges for service providers in cloud computing. Moreover, ensuring and maintaining security requirements needs the involvement of several organizations to achieve proper security settings that meet security necessities in cloud computing environments, which is called organizational or cross-organizational security management [21].

Policies
In cloud computing, a well-written policy is needed to state the security guidelines and security procedures used to implement technical security solutions [2].

Security in the web browser
Initially, web browsers enabled a number of features, including cookies and encryption, which were accepted at the time. These features are no longer enough to handle consumers' needs for sophisticated shopping and banking systems in shared open environments like the cloud [22].

Extensibility and Shared Responsibilities
Both end users and cloud computing service providers should care about securing the cloud. Up to now, there is no clear indication of how security duties should be assigned in cloud computing and who is responsible for what [23].

Fig. 1. Security requirements in different types of CI services.

III. SECURE CLOUD SERVICES FOR CRITICAL INFRASTRUCTURE PROVIDERS

As the benefits of cloud computing are hard to ignore, many critical infrastructure providers are aiming to utilize its unique benefits and migrate to the cloud computing paradigm. For example, the National Grid, the UK's gas and electricity network, has announced plans to replace its own internal datacenters with a CSC-hosted cloud [7]. However, moving to the cloud without addressing all of the previously mentioned cloud security challenges is not going to happen soon. In this section we investigate the security requirements of different critical infrastructure providers, such as smart grids, telecommunications, transportation and finance.

A. Requirement analysis

Critical infrastructure providers operate using varied kinds of infrastructures and may have different security requirements in their unique environments. A successful migration of various critical infrastructure providers to the cloud would need to meet all of their requirements. We have investigated and analysed the security requirements of various critical infrastructure services (shown in Fig. 1) to find the common security requirements, such as data security, compliance and audit, cryptography and access control. An access control system has been found to be one of the core requirements.

In cloud computing, information comes from multiple sources, and it needs to be secured and controlled accurately. Data should be available only to authorized users, secured against attempts to alter it, and on hand whenever it is accessed. Consumers' privacy should be ensured at every stage, whether the data is being collected, processed or transferred. So, assurance of 100% availability, integrity and confidentiality is crucial for clouds [24]. Furthermore, the situation in cloud computing might be different from other IT fields, as data can be revealed for reasons such as court orders; cloud service providers have to state this in their terms and policies. Privacy issues have to be considered here as well, as data may face different kinds of regulations, and any security or privacy policy should make this clear. Moreover, aggregating data from multiple sources could unintentionally reveal sensitive information about consumers, and moving the aggregated data from one place to another can violate the privacy of the data [25]. Cloud consumers have to know in advance where their data will reside and how it will be segregated, in order to avoid data leakage problems. In addition, a lack of visibility about the way data is stored and secured leads to a number of concerns that have to be considered when moving to the cloud. Data centres have to deal with a huge amount of data collected from everywhere in the cloud, and they are not stand-alone: they have to be connected to other data centres. So, security and latency should be managed in a proper way [24].

Moving any organization to the cloud requires thinking critically about using multiple sources of identity with different attributes, and about the ability to identify all the entities involved in a transaction [24]. Access control mechanisms have to be sufficient and may allow consumers to define access policies for their data and utilities. Furthermore, consumers should be allowed to specify and update access policies on the data they own. It should be known in advance where user credentials are stored, whether on organizations' servers or in the cloud, in order to avoid disclosure problems. Last but not least, strong mutual identification and authentication between users and the network is still an open research area, for cloud computing and for any system that wants to migrate to the cloud [26]. Moreover, there is a huge demand for proper policies that can organize the relations between consumers, utilities and third parties, but applying security and privacy policies should not introduce unacceptable latencies.

Compliance, security-breach audit and forensics are used to ensure no one violates or attacks the security of the system [24]. In addition, cloud computing service providers have to apply the right operating models and services to meet compliance and security regulations.

Virtualization is a key element in cloud computing, bringing well-known benefits to the cloud, yet it has a number of security concerns, such as hypervisor security and performance concerns [26]. Supporting scalable multi-tenant billing and very robust isolation are major requirements for anyone tempted to deploy a system in the cloud. In cloud computing there might be multiple networks running on the same infrastructure, so strong isolation is another requirement, to guarantee that there is no security or performance interference between cloud tenants. Metering and charging for virtual resource consumption are also needed in cloud computing [27].

There are a number of issues that should be considered, such as customisation of applications and services, dealing with latency, eliminating any technical barriers, and sorting out the complexity of integrating cloud services with existing legacy environments. Highly configurable, secure virtual machines that provide granular control and allow easy customization are required as well [26]. Moreover, as cloud computing is an environment with shared platforms, shared storage and a shared network, it has to ensure its components work together to achieve the intended mission regardless of provider, storage, OS, etc. [24].

Web applications used on the Internet have their own vulnerabilities that have not been solved yet, and these applications are being used again in the cloud to deliver services, without a clear indication of how their weaknesses will be sorted out or of their impact on cloud users. Additionally, other challenges might be obstacles to moving quickly to the cloud, such as meeting the security requirements of enterprises, performance, scaling operations cost-effectively, the dynamics and size of the communication environment, the increased size and complexity of operations, changing technology, the complexity of services, and heterogeneity [27].

Risk analysis and management consist of business risk analysis, technical risk analysis and infrastructure risk analysis [28]. They are used to deal with the dynamic and random behaviour of consumers and to mitigate the risks involved when consumers utilize the cloud. Security incidents are one of the major questions for any organization wanting to move to the cloud: what has to be done if the cloud faces a security incident, and which steps should be followed to mitigate it. Security incident management has to be stated in any agreement between consumers and the cloud [29].

Security and privacy issues, latency, audit and monitoring, reliability, network connectivity and third parties have to be negotiated and addressed in the SLA. Cloud computing consumers require SLA definitions and automatic enforcement mechanisms that guarantee sustained and verifiable end-to-end performance. The SLA must also state how isolation, bandwidth on demand and quality of service will be ensured [30].

Encryption is often used to secure data in untrusted storage environments such as cloud computing. However, it can consume time and money if it is not handled in a proper way, and it could cause additional storage and bandwidth usage. Key management is another complicated problem, which needs more attention [24].

Consumers are not adequately informed about what can be gained by moving to cloud computing, or about the risks associated with that move. Consumers should be engaged in the migration process and in any further action, as they have always been considered "the weakest link" [24].

IV. CONCLUSION AND FUTURE WORK

Cloud computing has attracted significant interest in both academia and industry, as it is considered a backbone of future modern societies. It will reduce costs and increase economic efficiency. Critical infrastructure providers, like others, are looking to facilitate their operations and enjoy the cloud computing features. However, without appropriate solutions for a considerable number of security and privacy challenges, cloud computing adoption will not happen soon. In this survey, we have reviewed significant problems in cloud computing security and analysed the security requirements of various critical infrastructure providers.

A reliable access control system is a crucial requirement to secure clouds from unauthorised access. Access control systems in cloud computing can be more complex and sophisticated due to dynamic resources, heterogeneity and the diversity of services.

Our future work will focus on developing a novel access control model for cloud computing to meet the security requirements of critical infrastructure providers. It will look at proposing and implementing a security policy to meet the requirements of critical infrastructure providers, and at proposing an efficient enforcement method to enforce the security policy in the proper layer.

REFERENCES

[1] P. Mell and T. Grance, "The NIST definition of cloud computing," NIST Special Publication, 2011. [Online]. Available: http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf. [Accessed: 15-Oct-2012].

[2] Q. Zhang, L. Cheng, and R. Boutaba, "Cloud computing: state-of-the-art and research challenges," Journal of Internet Services and Applications, vol. 1, no. 1, pp. 7-18, Apr. 2010.

[3] W. A. Jansen, "Cloud Hooks: Security and Privacy Issues in Cloud Computing," in 2011 44th Hawaii International Conference on System Sciences, 2011, pp. 1-10.

[4] S. Berman and L. Kesterson-Townes, "The power of cloud: Driving business model innovation," 2012.

[5] D. du Preez, "IBM and Cable & Wireless to gather smart meter data in the cloud," Computing.co.uk, 2011. [Online]. Available: http://www.computing.co.uk/ctg/news/2035755/ibm-cable-wireless-gather-smart-meter-cloud. [Accessed: 20-Sep-2012].

[6] CBR Staff Writer, "BT adds new cloud-based solution to supply chain solution portfolio," Cloud Platform, 2012. [Online]. Available: http://cloudplatforms.cbronline.com/news/bt-adds-new-cloud-based-solution-to-supply-chain-solution-portfolio-241012. [Accessed: 26-Oct-2012].

[7] P. Danny, "Green light for National Grid's cloud move," Computing.co.uk, 2013. [Online]. Available: http://www.computing.co.uk/ctg/analysis/2257295/green-light-for-national-grid-s-cloud-move. [Accessed: 22-Apr-2013].

[8] M. Merabti, M. Kennedy, and W. Hurst, "Critical infrastructure protection: A 21st century challenge," in International Conference on Communications and Information Technology (ICCIT), 2011, pp. 1-6.

[9] C. Wang, Q. Wang, K. Ren, and W. Lou, "Ensuring data storage security in Cloud Computing," in 2009 17th International Workshop on Quality of Service, 2009, pp. 1-9.

[10] R. Lu, X. Lin, X. Liang, and X. Shen, "Secure provenance: the essential of bread and butter of data forensics in cloud computing," in ASIACCS '10: Proceedings of the 5th ACM Symposium on Information, Computer and Communications Security, 2010, pp. 282-292.

[11] R. Krutz and R. Vines, Cloud Security: A Comprehensive Guide to Secure Cloud Computing. John Wiley & Sons, 2010, p. 384.

[12] M. Jensen, J. Schwenk, N. Gruschka, and L. Lo Iacono, "On Technical Security Issues in Cloud Computing," in 2009 IEEE International Conference on Cloud Computing, 2009, pp. 109-116.

[13] M. Jensen, N. Gruschka, and R. Herkenhöner, "A survey of attacks on web services," Computer Science - Research …, vol. 24, no. 4, 2009.

[14] D. Hubbard and M. Sutton, "Top Threats to Cloud Computing V1.0," Cloud Security Alliance, 2010. [Online]. Available: https://cloudsecurityalliance.org/topthreats/csathreats.v1.0.pdf. [Accessed: 12-Apr-2013].

[15] S. Lar, X. Liao, and S. Abbas, "Cloud computing privacy & security global issues, challenges, & mechanisms," in Communications and Networking in …, 2011, pp. 1240-1245.

[16] M. R. Aswin and M. Kavitha, "Cloud intelligent track - Risk analysis and privacy data management in the cloud computing," in 2012 International Conference on Recent Trends in Information Technology, 2012, pp. 222-227.

[17] T. Chauhan, S. Chaudhary, V. Kumar, and M. Bhise, "Service level agreement parameter matching in cloud computing," in 2011 World Congress on Information and Communication Technologies, 2011, pp. 564-570.

[18] I. Ruiz-Agundez, Y. K. Penya, and P. G. Bringas, "A Flexible Accounting Model for Cloud Computing," in 2011 Annual SRII Global Conference, 2011, pp. 277-284.

[19] S. Crago, K. Dunn, P. Eads, L. Hochstein, D.-I. Kang, M. Kang, D. Modium, K. Singh, J. Suh, and J. P. Walters, "Heterogeneous Cloud Computing," in 2011 IEEE International Conference on Cluster Computing, 2011, pp. 378-385.

[20] D. Schleicher, C. Fehling, S. Grohe, F. Leymann, A. Nowak, P. Schneider, and D. Schumm, "Compliance Domains: A Means to Model Data-Restrictions in Cloud Environments," in 2011 IEEE 15th International Enterprise Distributed Object Computing Conference, 2011, pp. 257-266.

[21] S. Thalmann, D. Bachlechner, L. Demetz, and R. Maier, "Challenges in Cross-Organizational Security

Management,” in 2012 45th Hawaii International

Conference on System Sciences, 2012, pp. 5480–5489.

[22] T. Wadlow and V. Gorelik, “Security in the Browser,”

Communications of the ACM, vol. 7, no. 2, p. 40, Feb.

2009.

[23] C. Aete, “7 areas of shared responsibility for public

cloud security,” hp Cloud Source Blog, 2012.

[Online]. Available:

http://h30507.www3.hp.com/t5/Cloud-Source-Blog/7-

areas-of-shared-responsibility-for-public-cloud-

security/ba-p/117425. [Accessed: 12-Aug-2012].

[24] W. Group, “Guidelines for Smart Grid Cyber Security:

Vol. 1, Smart Grid Cyber Security Strategy,

Architecture, and High-Level Requirements,”

National Institute of Standards and Technology, 2010.

[Online]. Available:

http://csrc.nist.gov/publications/nistir/ir7628/nistir-

7628_vol1.pdf.

[25] S. Rani and A. Gangal, “Security Issues of Banking

Adopting the Application of Cloud Computing,”

International Journal of Information Technology, vol.

5, no. 2, pp. 243–246, 2012.

[26] S. Subashini and V. Kavitha, “A survey on security

issues in service delivery models of cloud computing,”

Journal of Network and Computer Applications, vol.

34, no. 1, pp. 1–11, Jan. 2011.

[27] M. Mujinga and B. Chipangura, “Cloud computing

concerns in developing economies,” in Australian

Information Security Management Conference, 2011.

the 9th Australian Information Security Management

Conference, 2011.

[28] E. Bezerra, “Critical telecommunications

infrastructure protection in Brazil,” in Critical

Infrastructure Protection, First IEEE International

Workshop on, 2005.

[29] A. Sharma, “Data Management and Deployment of

Cloud Applications in Financial Institutions and its

Adoption Challenges,” International Journal of

Scientific & Technology Research, vol. 1, no. 1, pp. 1–

7, 2012.

[30] Andras Vajda, Stephan Baucke, “Cloud Computing

and Telecommunications: Business Opportunities,

Technologies and Experimental Setup,” in World

Telecommunications Congress (WTC), 2012, 2012,

vol. 0091, no. C, pp. 1–6.