

CLOUDS-Pi: A Low-Cost Raspberry-Pi based Testbed for Software-Defined-Networking in Cloud Data Centers

Adel Nadjaran Toosi, Jungmin Son, Rajkumar Buyya

Cloud Computing and Distributed Systems (CLOUDS) Laboratory
School of Computing and Information Systems

The University of Melbourne, Australia
{anadjaran,rbuyya}@unimelb.edu.au

ABSTRACT
Software Defined Networking (SDN) is rapidly transforming the networking ecosystem of cloud computing data centers. However, replicating SDN-enabled cloud infrastructures to conduct practical research in this domain requires a great deal of effort and capital expenditure. In this paper, we present the CLOUDS-Pi platform, a testbed for conducting research on SDN-enabled cloud computing. As part of it, Open vSwitch (OVS) is integrated with Raspberry Pis, low-cost embedded computers, to build up a network of OpenFlow switches. We provide two use cases and perform validation and performance evaluation for our testbed. We also discuss benefits and limitations of CLOUDS-Pi in particular and SDN in general.

Keywords
Software Defined Networking (SDN); Cloud computing; Raspberry Pi; OpenDaylight; OpenStack; Data center networks; Testbed

1. INTRODUCTION
Software-defined networking (SDN) is an emerging approach to computer networking that separates the tightly coupled control and data (forwarding) planes of traditional networking devices. Thanks to this separation, SDN can provide a logically centralized view of the network in a single point of management. This is achieved via open interfaces and abstraction of lower-level functionalities, and it transforms the network into a programmable platform that can dynamically adapt its behavior. Early supporters of SDN were among those who believed that network device vendors were not meeting service providers' needs, especially in terms of innovation and the development of required features. Other supporters were those who aimed to harness the inexpensive processing power of commodity hardware to run their networks. The need for such agile, flexible, and cost-efficient computer networks has consequently formed the nucleus of the global efforts towards SDN [15].

Cloud computing [5] is a successful computing paradigm, which delivers computing resources residing in providers' data centers as a service over the Internet on a pay-as-you-go basis. With the growing adoption of cloud, data centers hosting cloud services are rapidly expanding in size and increasing in number. Therefore, resource management in clouds' large-scale infrastructure becomes a challenging issue. In the meantime, SDN is increasingly being accepted as the new generation of networking in cloud data centers, where there is a need for efficient management of the large multi-tenant networks of such dynamic and ever-changing environments. In fact, SDN not only reduces the complexity seen in today's cloud data center networks but also helps cloud providers manage network services from a central management point.

The convergence of cloud computing and SDN has sparked significant innovation and research activity to fuse the two together [12]. However, evaluation and experimentation of SDN-based applications for cloud environments present many challenges in terms of complexity, scaling, accuracy, and efficiency [6, 3]. Moreover, rapid and affordable prototyping of SDN-enabled clouds is difficult, and the significant capital expenditure required to replicate practical implementations obstructs the evaluation of research in this domain. Even though software emulators such as Mininet [11] expedite rapid prototyping of SDN on a single machine, such tools do not offer enough support for network dynamicity or for the performance of virtualized hosts and Virtual Machines (VMs) [10].

With these issues in mind, this paper proposes the system architecture and design of CLOUDS-Pi, our testbed platform in the CLOUDS laboratory for conducting research on SDN-enabled cloud computing. We focus on the cost-effectiveness of our setup by reusing existing equipment and coping with the budget and space limitations of an academic research laboratory. One of the most important benefits of SDN is that commodity hardware can be used to build networking devices. Therefore, we use Raspberry Pis, low-cost and small single-board computers, to build a small-scale data center network. To make a switch out of a Raspberry Pi, we integrate each Pi with Open vSwitch (OVS) [16], one of the most widely used virtual switches in SDN.

The rest of the paper is organized as follows. Section 2 discusses the system architecture of the CLOUDS-Pi platform along with design choices and lessons learned. Employing two use cases, we validate and evaluate our testbed in Section 3. Following that, in Section 4, we discuss challenges and limitations of SDN along with opportunities and future research directions for SDN in clouds. Finally, we conclude the paper in Section 5.

Figure 1: System Architecture of CLOUDS-Pi. The diagram shows the management network (OpenStack, NetGear FS516), the control network (OpenFlow, TP-Link TL-SG1016D), and the data network (tenants, Raspberry Pi switches) connecting nine compute hosts, the combined OpenStack/SDN controller, the gateway, and the Internet.

2. SYSTEM ARCHITECTURE
We present the CLOUDS-Pi platform along with the physical infrastructure setup and utilized software.

Physical infrastructure: The main aim of our small cloud data center is to provide an economical testbed for conducting research in Software-Defined Clouds (SDCs) [4]. Therefore, we focus on reusing existing infrastructure, equipment, and machines (hosts), connected through a network of OpenFlow [13] switches made out of Raspberry Pis (which have also been used by others as compute resources [20]). Our platform comprises a set of 9 heterogeneous machines with the specifications shown in Table 1. To keep up with common practice, we use three separate networks, namely 1) management, 2) data, and 3) control. The management network is used by OpenStack1, our deployed cloud operating system, for internal communication between its components and for resource management. It consists of a 16-port 10/100Mbps Ethernet switch (NetGear Model FS516) connecting the hosts and the OpenStack controller. The data network is used for data communication among VMs deployed within the cloud environment and provides Internet access to them through the gateway host. This network is a 100Mbps fat-tree [2] network built on top of 10 Raspberry Pis (Pi 3 Model B) with OVS integrated, each playing the role of a 4-port switch with an external port for control. The Raspberry Pi 3 Model B has four USB 2.0 ports and only one Ethernet interface. Therefore, we used TP-Link UE200 USB 2.0 to 100Mbps Ethernet adapters to add four extra Ethernet interfaces to each Raspberry Pi switch. The control network connects the control ports on the Raspberry Pi switches to the SDN controller and transfers OpenFlow packets between the controller and the switches. Figure 1 depicts our data center setup including the network

1OpenStack,

topology for management, data, and control networks, and Figure 2 depicts the CLOUDS-Pi platform.

Software: We installed the CentOS 7.02 Linux distribution as the host operating system on all nodes. Then, using RDO Packstack,3 we installed OpenStack to build our cloud platform. Given that the OpenStack controller resides beside the SDN controller, we used one of our more powerful machines (IBM X3500) as the controller node. All other nodes play the same role of compute hosts in the design. In addition, we created NAT forwarding rules on the controller using Linux iptables to enable all other nodes to connect to the Internet. In this way, the controller node is configured as the default gateway for external network access (Internet access) for all other nodes and OpenStack VMs. We also set up an L2TP/IPsec Virtual Private Network (VPN) server using the OpenSwan4 and xl2tpd Linux packages on the controller node to provide direct access to VMs for our cloud users outside the data center network (see Figure 3).
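The NAT forwarding described above can be reproduced with a few iptables rules. The following is a minimal sketch, assuming eth0 is the Internet-facing interface and eth1 an internal one; both names are illustrative and do not correspond to the interface labels of our controller node.

```shell
# Enable IPv4 forwarding so the controller can route for the other nodes.
sysctl -w net.ipv4.ip_forward=1

# Masquerade traffic leaving through the (assumed) Internet-facing interface.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Allow forwarding between the internal network and the Internet.
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
```

With these rules in place, the other nodes simply set the controller node as their default gateway.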

CLOUDS-Pi uses OpenDaylight (ODL)5, one of the popular open-source SDN controllers, to provide the brains of the network and handle the OpenFlow-capable Raspberry Pi switches. ODL is installed on the same host as the OpenStack controller and manages the OpenFlow switches via the control network (see Figure 3). Every Raspberry Pi in our setup runs Raspbian Version 8 (jessie)6, a Debian-based Linux operating system, and has OVS Version 2.3.0 installed as an SDN-capable virtual switch. We configured OVS as a software switch with all USB-based physical interfaces connected as forwarding ports and used the Raspberry Pi's built-in interface as the OpenFlow control port.

2CentOS, https://www.centos.org/download/
3Packstack - RDO, https://www.rdoproject.org/install/packstack/
4OpenSwan, https://www.openswan.org/
5OpenDaylight, https://www.opendaylight.org/
6Raspbian, https://www.raspberrypi.org/downloads/raspbian/

Table 1: Specifications of machines in CLOUDS-Pi.

Machine                  CPU                                   Cores  Memory                         Storage
3 x IBM X3500 M4         Intel(R) Xeon(R) E5-2620 @ 2.00GHz    12     64GB (4 x 16GB DDR3 1333MHz)   2.9TB
4 x IBM X3200 M3         Intel(R) Xeon(R) X3460 @ 2.80GHz      4      16GB (4 x 4GB DDR3 1333MHz)    199GB
2 x Dell OptiPlex 990    Intel(R) Core(TM) i7-2600 @ 3.40GHz   4      8GB (2 x 4GB DDR3 1333MHz)     399GB

Figure 2: The CLOUDS-Pi Platform
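As a sketch of this switch configuration, turning a Pi into a 4-port OpenFlow switch with OVS might look like the following; the bridge name, interface names (eth1-eth4 standing for the USB Ethernet adapters), and controller address are all illustrative assumptions, not values from our deployment.

```shell
# Create an OVS bridge that acts as the OpenFlow switch.
ovs-vsctl add-br br0

# Attach the four USB Ethernet adapters as forwarding ports.
ovs-vsctl add-port br0 eth1
ovs-vsctl add-port br0 eth2
ovs-vsctl add-port br0 eth3
ovs-vsctl add-port br0 eth4

# Point the bridge at the SDN controller, reachable over the Pi's
# built-in interface (controller IP and port are placeholders).
ovs-vsctl set-controller br0 tcp:10.0.0.1:6633

# Fail secure: do not fall back to standalone L2 switching if the
# controller connection is lost.
ovs-vsctl set-fail-mode br0 secure
```

The built-in Ethernet interface is deliberately left off the bridge so that it carries only OpenFlow control traffic.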

3. USE CASES
We introduce two use cases to illustrate how the CLOUDS-Pi platform and its SDN-enabled features can be utilized to offer solutions and drive research innovations. The first use case shows dynamic flow scheduling for efficient use of network resources in a multi-rooted tree topology. The second use case demonstrates that the communication cost of pairwise VM traffic can be reduced by exploiting collocation and network locality through live VM migration.

Figure 3: Software Stack on the Controller Node (OpenStack and OpenDaylight alongside VM placement, migration, and monitoring; flow scheduling; network management and monitoring; NAT; and the Openswan/xl2tpd VPN, connected to the management, control, and data networks)

3.1 Dynamic Flow Scheduling
Data center network topologies such as fat-tree typically consist of many equal-cost paths between any given pair of hosts. Traditional network forwarding protocols often select a single deterministic path for each pair of source and destination, and sometimes protocols such as Equal-Cost Multi-Path (ECMP) routing [7] are used to evenly distribute load over multiple paths based on a certain hashing function. These static mappings of flows to paths do not take into account network utilization or the duration of flows. We propose and demonstrate the feasibility of building dynamic flow scheduling for a given pair of hosts in our multi-rooted tree testbed using ODL APIs.

Figure 4: Physical Network Topology detected by OpenDaylight

The proposed dynamic flow scheduling algorithm aims to maximize the bandwidth between the given hosts while ensuring that latency is minimized. The algorithm receives the IP addresses of a given pair of hosts as input. It finds multiple paths of equal length between the source and the destination and periodically (e.g., every 15 seconds) checks the statistics on those paths. Next, it finds the path with the least utilization and pushes the essential forwarding rules to the OpenFlow switches to redirect traffic to that path.
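The loop described above can be sketched as follows. This is an illustrative skeleton, not our implementation: the path names and byte counters are made up, and push_rules is a stub standing in for the flow-rule installation done through the ODL REST APIs.

```python
# Minimal sketch of the periodic least-utilized-path scheduling loop.
import time

def least_utilized_path(path_stats):
    """Return the path whose links carry the least traffic.

    path_stats maps a path name to the byte counters of its links;
    a path's utilization is taken as the counter of its busiest link.
    """
    return min(path_stats, key=lambda p: max(path_stats[p]))

def push_rules(path):
    """Placeholder for installing OpenFlow rules via the controller."""
    print(f"redirecting flow to {path}")

def schedule(poll_stats, rounds, interval=15):
    """Periodically re-pin the flow of interest to the best path."""
    current = None
    for _ in range(rounds):
        best = least_utilized_path(poll_stats())
        if best != current:        # avoid churning rules needlessly
            push_rules(best)
            current = best
        time.sleep(interval)
    return current

# Example with two equal-length paths through different core switches:
stats = {"via-core-1": [900, 450], "via-core-2": [300, 200]}
assert least_utilized_path(stats) == "via-core-2"
```

A real scheduler would replace poll_stats with queries against the controller's statistics API and push_rules with flow-mod installation on the switches along the chosen path.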

In order to evaluate the impact of the proposed dynamic flow scheduling on bandwidth, we generated 10 synthetic, random flows between different hosts in the network using iperf3 in TCP mode. We measured the transmission time and available bandwidth for the flow of interest (the given pair of hosts) with and without dynamic flow scheduling enabled. Figure 4 shows a graphical representation of the network topology detected by the ODL User Interface (DLUX), with the given pair of hosts labeled. Table 2 shows the available bandwidth and transmission time for 700MB of data between the given hosts. By enabling dynamic flow scheduling, the transmission time is reduced by approximately 13.8% compared to the static routing method. The average bandwidth also improved from 35.24Mb/s with no flow management to 40.81Mb/s with dynamic flow scheduling.

The performance gain of the proposed dynamic scheduling is heavily dependent on the rates and durations of the flows in the network. The results also demonstrate the feasibility of building a working prototype of dynamic flow scheduling on the CLOUDS-Pi platform.

Table 2: Transmission time and average bandwidth with and without dynamic flow scheduling

Method                            Transmission Time (s)   Bandwidth (Mb/s)
Without Dynamic Flow Scheduling   159                     35.24
With Dynamic Flow Scheduling      137                     40.81

3.2 Virtual Machine Management
Live migration is one of the core concepts in modern data centers, allowing a running VM to be moved between physical hosts with no impact on the VM's availability. While VMs in data centers are often migrated between hosts to reduce energy consumption or for maintenance purposes, live VM migration also provides an opportunity to enhance link utilization and reduce network-wide communication cost. This can be done by relocating communicating VMs to hosts in close vicinity, with fewer connecting links in the higher layers of the data center network topology.
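To make the cost argument concrete, the following sketch counts the links a packet traverses between two hosts in a fat-tree under an assumed labeling of hosts by (pod, access switch); the coordinates and the three placements are illustrative, chosen to mirror the migration steps of the experiment.

```python
# Illustrative communication cost in a fat-tree, counted as the number
# of links a packet traverses between the servers hosting two VMs.

def hop_count(src, dst):
    """src/dst are assumed (pod, edge_switch) coordinates of the servers."""
    if src == dst:
        return 2   # up to the shared access switch and back down
    if src[0] == dst[0]:
        return 4   # via an aggregation switch within the same pod
    return 6       # via a core switch between pods

# Mirroring the experiment: the source VM starts in a different pod,
# is migrated into the destination's pod, then under its access switch.
destination = (0, 0)
placements = [(1, 0), (0, 1), (0, 0)]   # before, after 1st, after 2nd migration
costs = [hop_count(p, destination) for p in placements]
print(costs)   # → [6, 4, 2]
```

Each migration removes the highest-layer switches from the shared path, which is exactly why the pair's traffic competes with fewer background flows afterwards.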

We measure the bandwidth between two communicating VMs running on physical hosts connected through core switches (the labeled hosts in Figure 4). We run experiments moving these two VMs closer to each other in the network topology. First, we perform a VM migration that removes any core switches from the connecting shortest path, and then another migration that removes both core and aggregation switches. Meanwhile, we investigate the impact of the migrations on the available bandwidth.

During the experiment, random, synthetic background traffic is generated between hosts, both periodically and constantly. The graph in Figure 5 shows the available bandwidth from the source to the destination VM measured with the iperf3 tool. Before the first migration (time 0 to 70), the source VM is placed in the host marked as 1 in Figure 4, in a different pod from the destination VM. As shown in the graph, during this period the average bandwidth is roughly 71Mb/s and fluctuates considerably due to the background traffic generated by other hosts. After 70 seconds, a live VM migration is performed to move the source VM to the host marked as 2 in Figure 4, which is connected to a different access switch in the same pod as the destination VM. During the live migration process, the VMs' networking becomes unavailable for a short period. After this migration, the bandwidth fluctuates less and increases to 79Mb/s on average because fewer background flows conflict with the traffic of the source/destination VM pair. The last migration is done after 170 seconds of the experiment. The source VM is migrated to the host marked as 3, which is connected to the same access switch as the destination VM. With this migration, the bandwidth becomes more stable and rises to an average of 86Mb/s, where the least background traffic affects the networking performance of the VMs.

The results demonstrate the feasibility of building a prototype system allowing network-wide communication minimization in a cloud data center and open up research possibilities in joint VM and traffic consolidation.

Figure 5: Impact of source VM live migration on the bandwidth of the communicating VM pair

4. LIMITATIONS AND FUTURE DIRECTIONS

Our results demonstrate the potential of the CLOUDS-Pi platform for enabling investigation into different aspects of SDN-enabled cloud computing. One of the main benefits of CLOUDS-Pi for conducting research on SDN in cloud computing, compared to other prototyping methods or emulators such as Mininet, is the option of VM management. VM management, and the possibility of VM migration in particular, is an important aspect of cloud computing that allows VM consolidation to reduce power consumption and resource overbooking to increase cost efficiency. Thanks to the OpenStack setup in our platform, we can jointly leverage virtualization capabilities and SDN for performing research on VM and traffic consolidation [18].

Another benefit of CLOUDS-Pi is the possibility of conducting innovative research in traffic control and network management with high accuracy and performance fidelity. As shown in the first use case in Section 3, using ODL northbound APIs and its flow scheduling feature, we can investigate dynamic traffic engineering and load balancing for reducing network congestion with a level of confidence that would otherwise be beyond our reach.

Apart from network management, other research directions that can be pursued on our testbed platform include, but are not limited to, security (e.g., network intrusion detection [21]), Quality of Service support [8], Service Function Chaining [14], and application-specific networking [22].

The current setup of CLOUDS-Pi has a limited number of resources. The number of switches and hosts used in building the CLOUDS-Pi platform is far from enough to test the scalability of approaches that need to be deployed in cloud data centers hosting tens of thousands of servers and thousands of network switches. In addition, since we are using USB 2.0 ports on the Raspberry Pis, with a nominal bandwidth of 480Mb/s, and 100Mb/s USB-to-Ethernet adapters as switch ports, our network bandwidth, even though suitable for the scale of our testbed, is much lower than the gigabit or terabit networks used in real-world cloud data centers. Although CLOUDS-Pi offers a suitable environment for carrying out empirical research into SDN and cloud computing without the expenditure of full-size testbeds, we recommend the use of simulators such as CloudSimSDN [19] to evaluate the scalability of policies.

Besides our testbed's limitations, SDN itself currently faces challenges such as scalability, reliability, and security that hinder its performance and application. The logically centralized control in SDN raises scalability concerns, and controller scalability in particular is one of the problems that needs special attention. The scalability issue of SDN becomes more evident in large-scale networks such as cloud data centers than in small networks [9]. Controller distribution and the use of east/west APIs is one way to relieve the computational load on the controller, but it brings consistency and synchronization problems [9]. It also requires standard protocols for interoperable state exchange between controllers of different types.

The centralized control plane of SDN, for example the single-host deployment of the ODL controller in our testbed, carries a critical reliability risk: a single point of failure. To tackle this problem, running backup controllers is a commonly used mechanism. However, determining the optimal number of controllers and the best locations for the primary and backup controllers is challenging. Synchronization and concurrency issues also need to be addressed in this regard [1].

SDN has two basic security issues [17]: 1) the potential for a single point of attack and failure, and 2) the southbound API connecting the controller and data-forwarding devices is vulnerable to interception and attacks on communications. Even though TLS/SSL encryption can be used to secure communications between the controller(s) and OpenFlow switches, the configuration is complex, and many vendors do not provide support for TLS in their OpenFlow switches by default. SDN security is critical since threats can degrade the availability, performance, and integrity of the network. While many efforts are currently being made to address SDN security issues, this topic is expected to draw increasing amounts of attention.

5. CONCLUSIONS
We presented CLOUDS-Pi, a low-cost testbed environment for SDN-enabled cloud computing. All aspects of our platform, from its overall architecture to detailed software choices, are explained. We also demonstrated how economical computers such as Raspberry Pis can be used to mimic, at small scale, a network of OpenFlow switches in an SDN-enabled cloud data center. To evaluate our testbed, two use cases for dynamic flow scheduling and virtual machine management were identified. We discussed benefits and limitations of CLOUDS-Pi as a research platform for different aspects of SDN in clouds and proposed future research directions. Our work demonstrates the potential of the CLOUDS-Pi environment for conducting practical research and prototyping SDN-enabled cloud computing environments.

6. REFERENCES
[1] I. F. Akyildiz, A. Lee, P. Wang, M. Luo, and W. Chou. A roadmap for traffic engineering in SDN-OpenFlow networks. Computer Networks, 71:1-30, 2014.
[2] M. Al-Fares, A. Loukissas, and A. Vahdat. A scalable, commodity data center network architecture. SIGCOMM Comput. Commun. Rev., 38(4):63-74, 2008.
[3] K. Alwasel, Y. Li, P. P. Jayaraman, S. Garg, R. N. Calheiros, and R. Ranjan. Programming SDN-native big data applications: Research gap analysis. IEEE Cloud Computing, 4(5):62-71, September 2017.
[4] R. Buyya, R. N. Calheiros, J. Son, A. V. Dastjerdi, and Y. Yoon. Software-defined cloud computing: Architectural elements and open challenges. In Proceedings of the 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pages 1-12, Sept 2014.
[5] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic. Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility. Future Generation Computer Systems, 25(6):599-616, 2009.
[6] M. Gupta, J. Sommers, and P. Barford. Fast, accurate simulation for SDN prototyping. In Proceedings of the 2nd ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking, pages 31-36. ACM, 2013.
[7] C. E. Hopps. Analysis of an equal-cost multi-path algorithm. RFC 2992, 2000.
[8] M. Karakus and A. Durresi. Quality of Service (QoS) in software defined networking (SDN): A survey. Journal of Network and Computer Applications, 80:200-218, 2017.
[9] M. Karakus and A. Durresi. A survey: Control plane scalability issues and approaches in software-defined networking (SDN). Computer Networks, 112:279-293, 2017.
[10] H. Kim, J. Kim, and Y.-B. Ko. Developing a cost-effective OpenFlow testbed for small-scale software defined networking. In Proceedings of the 16th International Conference on Advanced Communication Technology, pages 758-761, Feb 2014.
[11] B. Lantz, B. Heller, and N. McKeown. A network in a laptop: Rapid prototyping for software-defined networks. In Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks, page 19. ACM, 2010.
[12] D. S. Linthicum. Software-defined networks meet cloud computing. IEEE Cloud Computing, 3(3):8-10, May 2016.
[13] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner. OpenFlow: Enabling innovation in campus networks. SIGCOMM Comput. Commun. Rev., 38(2):69-74, Mar. 2008.
[14] A. M. Medhat, T. Taleb, A. Elmangoush, G. A. Carella, S. Covaci, and T. Magedanz. Service function chaining in next generation networks: State of the art and research challenges. IEEE Communications Magazine, 55(2):216-223, 2017.
[15] T. D. Nadeau and K. Gray. SDN: Software Defined Networks: An Authoritative Review of Network Programmability Technologies. O'Reilly Media, Inc., 2013.
[16] Nicira Networks. Open vSwitch: An open virtual switch. http://openvswitch.org/, Sept. 2017.
[17] S. Scott-Hayward, G. O'Callaghan, and S. Sezer. SDN security: A survey. In Proceedings of the 2013 IEEE SDN for Future Networks and Services (SDN4FNS), pages 1-7, Nov 2013.
[18] J. Son, A. V. Dastjerdi, R. N. Calheiros, and R. Buyya. SLA-aware and energy-efficient dynamic overbooking in SDN-based cloud data centers. IEEE Transactions on Sustainable Computing, 2(2):76-89, 2017.
[19] J. Son, A. V. Dastjerdi, R. N. Calheiros, X. Ji, Y. Yoon, and R. Buyya. CloudSimSDN: Modeling and simulation of software-defined cloud data centers. In Proceedings of the 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, pages 475-484, May 2015.
[20] F. P. Tso, D. R. White, S. Jouet, J. Singer, and D. P. Pezaros. The Glasgow Raspberry Pi cloud: A scale model for cloud computing infrastructures. In Proceedings of the 33rd IEEE International Conference on Distributed Computing Systems Workshops, pages 108-112, July 2013.
[21] Q. Yan, F. R. Yu, Q. Gong, and J. Li. Software-defined networking (SDN) and distributed denial of service (DDoS) attacks in cloud computing environments: A survey, some research issues, and challenges. IEEE Communications Surveys Tutorials, 18(1):602-622, 2016.
[22] S. Zhao and D. Medhi. Application-aware network design for Hadoop MapReduce optimization using software-defined networking. IEEE Transactions on Network and Service Management, PP(99):1-1, 2017.