Performance Evaluation of Container-based Virtualization for High Performance Computing Environments
Miguel G. Xavier, Marcelo V. Neves, Fabio D. Rossi, Tiago C. Ferreto, Timoteo Lange, Cesar A. F. De Rose
Faculty of Informatics, PUCRS, Porto Alegre, Brazil
February 27, 2013
Outline
• Introduction
• Container-based Virtualization
• Evaluation
• Conclusion
Introduction
• Virtualization
  • Hardware independence, availability, isolation and security
  • Better manageability
  • Widely used in datacenters/cloud computing
  • Reduced total cost of ownership
• HPC and Virtualization
  • Usage scenarios
    • Better resource sharing
    • Custom environments
  • However, hypervisor-based technologies have traditionally been avoided in HPC environments
Container-based Virtualization
• A lightweight virtualization layer
• Non-virtualized (native) drivers
• Linux-VServer, OpenVZ and LXC
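Container-based systems rely on kernel namespaces and cgroups rather than a hypervisor, which is why they can use native drivers. As a rough illustration (assuming a Linux host; `/proc/self/ns` does not exist elsewhere), the namespaces that modern runtimes such as LXC build on can be listed directly:

```python
import os

# Each entry below is a kernel namespace (pid, net, mnt, uts, ...) that
# container runtimes use to isolate processes without a hypervisor layer.
namespaces = sorted(os.listdir("/proc/self/ns"))
print(namespaces)
```

Note that Linux-VServer and OpenVZ predate upstream namespaces and historically used their own kernel patches; the sketch above reflects the mainline mechanism.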
Evaluation
• Experimental Environment
  • Cluster composed of 4 nodes
  • Two processors with 8 cores each (no hyper-threading)
  • 16GB of memory
• Evaluations
  • Performance analysis
    • Micro-benchmarks (CPU, disk, memory, network) on a single node
    • Macro-benchmarks (HPC applications)
  • Isolation analysis
    • Isolation Benchmark Suite (IBS)
CPU Evaluation
• All container-based systems achieved performance similar to native
• The different CPU schedulers showed no influence when a single CPU-intensive process runs on a single processor
• Xen presents an average overhead of 4.3%
LINPACK Benchmark (source: http://www.netlib.org/linpack/)
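LINPACK measures floating-point throughput by timing the solution of a dense linear system. A minimal pure-Python sketch of the same idea (not the actual benchmark, which is compiled Fortran/C and orders of magnitude faster):

```python
import random
import time

def solve(a, b):
    """Gaussian elimination with partial pivoting: solves a*x = b in place."""
    n = len(b)
    for k in range(n):
        # pivot: bring the largest entry in column k to the diagonal
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

random.seed(0)
n = 100
a = [[random.random() for _ in range(n)] for _ in range(n)]
b = [sum(row) for row in a]  # b = a * [1, 1, ..., 1], so the solution is all ones
start = time.perf_counter()
x = solve([row[:] for row in a], b[:])
elapsed = time.perf_counter() - start
# LINPACK counts roughly 2/3*n^3 + 2*n^2 floating-point operations
mflops = (2 / 3 * n**3 + 2 * n**2) / elapsed / 1e6
print(f"n={n}: {elapsed:.4f}s, ~{mflops:.1f} MFLOPS")
```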
Memory Bandwidth Evaluation
STREAM Benchmark (source: https://www.cs.virginia.edu/stream/)
• Container-based systems can return unused memory to the host and other containers
• Xen showed a 31% performance overhead compared to native throughput
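STREAM estimates sustainable memory bandwidth with simple vector kernels. A rough Python sketch of the triad kernel, a[i] = b[i] + s*c[i] (the real benchmark is compiled C and reports far higher, interpreter-free numbers):

```python
import time
from array import array

n = 1_000_000                    # 1M doubles per vector (8 MB each)
s = 3.0
b = array("d", [1.0]) * n
c = array("d", [2.0]) * n

start = time.perf_counter()
a = array("d", (bi + s * ci for bi, ci in zip(b, c)))  # triad kernel
elapsed = time.perf_counter() - start

# triad touches three arrays of 8-byte doubles: 24 bytes per element
bandwidth = 24 * n / elapsed / 1e6
print(f"triad: {elapsed:.3f}s, ~{bandwidth:.0f} MB/s (interpreter-bound)")
```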
Disk Evaluation
IOZone Benchmark (source: https://www.iozone.org)
• LXC and Linux-VServer use the "deadline" Linux I/O scheduler
• OpenVZ uses the CFQ scheduler in order to provide per-container disk priorities
• Xen uses virtualized drivers, which are not yet able to achieve high performance
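IOZone measures file I/O throughput across many access patterns. A minimal sequential write/read timing sketch of the same idea (page-cache effects make the read number optimistic; the real benchmark controls for them):

```python
import os
import tempfile
import time

size = 32 * 1024 * 1024          # 32 MB test file
chunk = b"x" * (1024 * 1024)     # 1 MB writes

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    start = time.perf_counter()
    for _ in range(size // len(chunk)):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())         # force data to disk, not just the page cache
    write_time = time.perf_counter() - start

start = time.perf_counter()
read = 0
with open(path, "rb") as f:
    while data := f.read(len(chunk)):
        read += len(data)
read_time = time.perf_counter() - start
os.unlink(path)

print(f"write: {size / write_time / 1e6:.0f} MB/s, "
      f"read: {read / read_time / 1e6:.0f} MB/s")
```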
Network Evaluation
NETPIPE Benchmark (source: http://www.scl.ameslab.gov/netpipe/)
• Xen obtained the worst performance among the virtualization systems, probably due to its virtualized network driver
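NETPIPE measures point-to-point throughput and latency with a ping-pong pattern swept over message sizes. A loopback-socket sketch of that pattern (illustrative only; the real benchmark runs between two hosts over the actual network path):

```python
import socket
import threading
import time

def echo_server(sock):
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(65536):
            conn.sendall(data)              # bounce every message back

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
results = {}
for size in (64, 4096, 65536):              # message sizes, as NetPIPE sweeps
    msg = b"x" * size
    reps = 200
    start = time.perf_counter()
    for _ in range(reps):
        client.sendall(msg)
        got = 0
        while got < size:                   # a recv may return a partial message
            got += len(client.recv(size - got))
    elapsed = time.perf_counter() - start
    results[size] = 2 * size * reps / elapsed / 1e6  # MB/s, both directions
    print(f"{size:6d} B: {results[size]:8.1f} MB/s")
client.close()
```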
HPC Evaluation
NAS-MPI Benchmark (source: http://www.nas.nasa.gov/publications/npb.html)
• At this point it is possible to observe that all container-based systems slightly exceed native performance
• All HPC benchmarks run on Xen suffered even more overhead because of the network penalties
Isolation
Isolation Benchmark Suite (source: http://web2.clarkson.edu/class/cs644/isolation/)
• The results show how much application performance is impacted by different stress tests running in another VM/container
• DNR means the application was not able to run
• All container-based systems showed some isolation impact
                  LXC     OpenVZ   VServer   Xen
CPU               0       0        0         0
Memory Bomb       88.2%   89.3%    20.6%     0.9%
Disk Stress       9%      39%      48.8%     0
Fork Bomb         DNR     DNR      DNR       0
Network Receiver  2.2%    4.5%     13.6%     0.9%
Network Sender    10.3%   35.4%    8.2%      0.3%
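The Isolation Benchmark Suite runs a stress workload in one container while timing an application in another. The idea can be sketched inside a single process with threads (an assumed stand-in: the real suite uses separate VMs/containers and several stress types, and the GIL makes thread contention here only a loose analogy):

```python
import threading
import time

def workload():
    """Fixed amount of CPU work; its runtime is what isolation should protect."""
    s = 0
    for i in range(2_000_000):
        s += i * i
    return s

def stressor(stop):
    """Competing CPU hog, standing in for an IBS stress test."""
    while not stop.is_set():
        sum(range(1000))

t0 = time.perf_counter()
workload()
baseline = time.perf_counter() - t0

stop = threading.Event()
threads = [threading.Thread(target=stressor, args=(stop,)) for _ in range(4)]
for t in threads:
    t.start()
t0 = time.perf_counter()
workload()
stressed = time.perf_counter() - t0
stop.set()
for t in threads:
    t.join()

# impact is the metric reported in the table above: relative slowdown in %
impact = (stressed - baseline) / baseline * 100
print(f"baseline {baseline:.3f}s, under stress {stressed:.3f}s, impact {impact:+.1f}%")
```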
Conclusions
• All container-based systems deliver near-native performance for CPU, memory, disk and network
• The only resource that could be successfully isolated was CPU; all three systems showed poor performance isolation for memory, disk and network
• Based on the HPC applications tested so far, LXC appears to be the most suitable container-based system for HPC due to its ease of use and management
Thank you!