Page 1

Performance Evaluation of Container-based Virtualization for High Performance Computing Environments

Miguel G. Xavier, Marcelo V. Neves, Fabio D. Rossi, Tiago C. Ferreto, Timoteo Lange, Cesar A. F. De Rose

[email protected]

Faculty of Informatics, PUCRS, Porto Alegre, Brazil

February 27, 2013

Page 2

Outline
• Introduction
• Container-based Virtualization
• Evaluation
• Conclusion

Page 3

Introduction
• Virtualization

  • Hardware independence, availability, isolation and security
  • Better manageability
  • Widely used in datacenters/cloud computing
  • Total cost of ownership is reduced

• HPC and Virtualization
  • Usage scenarios
    • Better resource sharing
    • Custom environments

• However, hypervisor-based technologies have traditionally been avoided in HPC environments

Page 4

Container-based Virtualization
• A lightweight virtualization layer
• Non-virtualized drivers
• Linux-VServer, OpenVZ and LXC

Note: Container-based systems offer a lightweight virtualization layer which promises near-native performance.
Page 5

Evaluation
• Experimental Environment
  • Cluster composed of 4 nodes
  • Two processors with 8 cores each (without hardware threads)
  • 16 GB of memory

• Evaluations
  • Performance analysis
    • Micro-benchmarks (CPU, disk, memory, network) on a single node
    • Macro-benchmarks (HPC applications)
  • Isolation analysis
    • Isolation Benchmark Suite (IBS)

Page 6

CPU Evaluation

• All container-based systems obtained performance results similar to native

• No influence from the different CPU schedulers when a single CPU-intensive process runs on a single processor

• Xen presents an average overhead of 4.3%

LINPACK Benchmark (source: http://www.netlib.org/linpack/)

Page 7

Memory Bandwidth Evaluation

STREAM Benchmark (source: https://www.cs.virginia.edu/stream/)

• Container-based systems have the ability to return unused memory to the host and other containers

• Xen presented a 31% performance overhead compared to the native throughput

Note: The worst results were observed in Xen, which presented an average overhead of approximately 31% compared to the native throughput. This overhead is caused by the hypervisor-based virtualization layer, which performs memory access translation, resulting in a loss of performance.

Note: This behavior enables better use of memory.
Page 8

Disk Evaluation

IOZone Benchmark (source: https://www.iozone.org)

• LXC and Linux-VServer use the "deadline" Linux I/O scheduler

• OpenVZ uses the CFQ scheduler in order to provide per-container disk priority

• Xen uses virtualized drivers, which are not yet able to achieve high performance

Note: The "deadline" scheduler imposes a deadline on all I/O operations to ensure that no request is starved, and aggressively reorders requests to improve I/O performance.
Page 9

Network Evaluation

NETPIPE Benchmark (source: http://www.scl.ameslab.gov/netpipe/)

• Xen obtained the worst performance among the virtualization systems, probably due to its virtualized network driver

Page 10

HPC Evaluation

NAS-MPI Benchmark (source: http://www.nas.nasa.gov/publications/npb.html)

• At this point, it is possible to observe that all container-based systems slightly exceed native performance

• All HPC benchmarks, when run on Xen, suffered even more overhead because of the network penalties

Page 11

Isolation

Isolation Benchmark Suite (source: http://web2.clarkson.edu/class/cs644/isolation/)

• The results represent how much the application's performance is impacted by different stress tests running in another VM/container

• DNR means that the application was not able to run
• All container-based systems had some impact on isolation

                  LXC     OpenVZ   VServer   Xen
CPU               0       0        0         0
Memory Bomb       88.2%   89.3%    20.6%     0.9%
Disk Stress       9%      39%      48.8%     0
Fork Bomb         DNR     DNR      DNR       0
Network Receiver  2.2%    4.5%     13.6%     0.9%
Network Sender    10.3%   35.4%    8.2%      0.3%

Page 12

Conclusions
• All container-based systems have near-native performance for CPU, memory, disk and network

• The only resource that could be successfully isolated was CPU. All three systems showed poor performance isolation for memory, disk and network

• Considering the HPC applications tested so far, LXC appears to be the most suitable of the container-based systems for HPC due to its ease of use and management

Page 13

Thank you!