LXC benefit realization, presented at Cloud Expo 2014 East in New York City.
Linux Containers – NextGen Virtualization for Cloud (Benefit Realization)
Cloud Expo, June 10-12, 2014, New York City, NY
Boden Russell ([email protected])
04/10/2023 2
Why LXC: Performance
Provision time: Manual – days; VM – minutes; LXC – seconds / ms
[Chart: linpack performance @ 45000 – GFlops vs. vCPUs]
Why LXC: Industry Uptrend
[Charts: Google Trends – LXC; Google Trends – docker]
Why LXC: Flexible & Lightweight
[Diagram: virtual machines each carry a full guest OS plus bins/libs per app, while Linux containers share the host OS and, where possible, bins/libs – VMs favor flexibility, containers favor density]
Why LXC: Lower TCO
– Supported out of the box by a modern Linux kernel
– Open source toolsets
– Cloudy integration
Definitions
Linux Containers (LXC = LinuX Containers)
– Lightweight virtualization
– Realized using features provided by a modern Linux kernel
– VMs without the hypervisor (kind of)
Containerization of
– (Linux) operating systems
– Single or multiple applications
LXC as a technology ≠ LXC “tools”
Hypervisors vs. Linux Containers
[Diagram: Type 1 hypervisor – hypervisor on hardware; Type 2 hypervisor – hypervisor on a host operating system; Linux Containers – containers on the host operating system. Each VM carries its own guest OS, bins/libs, and apps; containers carry only bins/libs and apps]
Containers share the host's OS kernel and are therefore lightweight. However, every container on a host must use that same kernel.
Containers are isolated, but share the OS and, where appropriate, bins / libs.
Hypervisor VM vs. LXC vs. Docker LXC
LXC Technology Stack
[Diagram, bottom to top: hardware → architecture-dependent kernel code → kernel → system call interface (kernel space) → GLIBC / pseudo filesystems / user-space tools & libs → Linux container tooling (cgroups, namespaces, chroots, LSM, lxc) → Linux container commoditization → orchestration & management (user space)]
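The kernel features at the base of this stack can be inspected directly on any modern Linux host; a minimal sketch (assumes a Linux system with /proc mounted – the entries present depend on the kernel config):

```python
import os

def current_namespaces(pid="self"):
    """List the namespace types attached to a process via /proc/<pid>/ns."""
    ns_dir = "/proc/{}/ns".format(pid)
    return sorted(os.listdir(ns_dir))

# On a modern Linux kernel this includes the namespaces LXC builds on,
# e.g. 'mnt', 'net', 'pid', 'uts', 'ipc', 'user'.
print(current_namespaces())
```

cgroups and namespaces do the resource accounting and isolation; the tooling layers above (lxc, docker) only commoditize them.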
About This Benchmark
Use case perspective
– As an OpenStack Cloud user, I want an Ubuntu-based VM with MySQL… Why would I choose docker LXC vs. a traditional hypervisor?
OpenStack “Cloudy” perspective
– LXC vs. traditional VM from a Cloudy (OpenStack) perspective
– VM operational times (boot, start, stop, snapshot)
– Compute node resource usage (per-VM penalty); density factor
Guest runtime perspective
– CPU, memory, file I/O, MySQL OLTP, etc.
Why KVM?
– Exceptional performance
DISCLAIMERS
The tests herein are semi-active litmus tests – no in-depth tuning, analysis, etc. More active testing is warranted. These results do not necessarily reflect your workload or exact performance, nor are they guaranteed to be statistically sound.
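The usr / sys CPU averages reported on the following slides are derived from dstat samples collected on the compute node; a minimal sketch of that reduction (the two-column CSV layout here is illustrative, not dstat's exact output format):

```python
import csv
import io

def cpu_averages(dstat_csv):
    """Average the usr and sys CPU columns from a dstat-style CSV capture."""
    reader = csv.DictReader(io.StringIO(dstat_csv))
    usr_total, sys_total, samples = 0.0, 0.0, 0
    for row in reader:
        usr_total += float(row["usr"])
        sys_total += float(row["sys"])
        samples += 1
    return round(usr_total / samples, 2), round(sys_total / samples, 2)

# Three synthetic one-second samples.
sample = "usr,sys\n0.5,0.2\n0.6,0.1\n0.5,0.2\n"
print(cpu_averages(sample))  # (0.53, 0.17)
```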
Benchmark Environment Topology @ SoftLayer
[Diagram: two identical OpenStack deployments at SoftLayer – each a controller (glance api / registry, nova api / conductor / etc., keystone, cinder api / scheduler / volume, rally) plus a compute node instrumented with dstat; one compute node runs docker LXC, the other KVM]
STEADY STATE VM PACKING
OpenStack Cloudy Benchmark
Cloudy Performance: Steady State Packing
Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot 15 VMs asynchronously in succession
– Wait for 5 minutes (to achieve steady state on the compute node)
– Delete all 15 VMs asynchronously in succession
Benchmark driver
– cpu_bench.py
High level goals
– Understand compute node characteristics under steady-state conditions with 15 packed / active VMs
[Chart: benchmark visualization – active VMs vs. time]
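The pack-and-delete flow above can be sketched as follows; `boot_vm` and `delete_vm` are stand-ins for the real OpenStack client calls made by the benchmark driver, not the driver's actual code:

```python
from concurrent.futures import ThreadPoolExecutor

def boot_vm(i):
    # Stand-in for a nova boot call; returns a fake server id.
    return "vm-%d" % i

def delete_vm(server_id):
    # Stand-in for a nova delete call.
    return server_id

def pack_and_delete(count=15):
    """Boot `count` VMs asynchronously, then delete them asynchronously."""
    with ThreadPoolExecutor(max_workers=count) as pool:
        servers = list(pool.map(boot_vm, range(count)))
        # ... 5-minute steady-state measurement window goes here ...
        deleted = list(pool.map(delete_vm, servers))
    return servers, deleted

servers, deleted = pack_and_delete()
print(len(servers), len(deleted))  # 15 15
```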
Cloudy Performance: Steady State Packing
[Chart: Docker – compute node CPU, full test duration; averages: usr 0.54%, sys 0.17%]
[Chart: KVM – compute node CPU, full test duration; averages: usr 7.64%, sys 1.4%]
Cloudy Performance: Steady State Packing
[Chart: Docker – compute node steady-state CPU, segment 31s–243s; averages: usr 0.2%, sys 0.03%]
[Chart: KVM – compute node steady-state CPU, segment 95s–307s; averages: usr 1.91%, sys 0.36%]
Cloudy Performance: Steady State Packing
[Chart: Docker / KVM compute node steady-state CPU, segment overlay – docker 31s–243s vs. KVM 95s–307s; Docker averages: usr 0.2%, sys 0.03%; KVM averages: usr 1.91%, sys 0.36%]
Cloudy Performance: Steady State Packing
[Chart: Docker / KVM compute node used memory, overlay – docker delta 734 MB (49 MB per VM); KVM delta 4387 MB (292 MB per VM)]
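The per-VM figures in the chart are simply the compute node's used-memory delta spread across the 15 packed VMs; a quick check of the arithmetic:

```python
def per_vm_penalty_mb(delta_mb, vm_count):
    """Per-VM memory penalty: used-memory delta divided by packed VM count."""
    return round(delta_mb / vm_count)

# Figures from this test run: docker delta 734 MB, KVM delta 4387 MB, 15 VMs.
print(per_vm_penalty_mb(734, 15))   # 49
print(per_vm_penalty_mb(4387, 15))  # 292
```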
Cloudy Performance: Steady State Packing
[Chart: Docker – compute node 1m load average, full test duration; average 0.15]
[Chart: KVM – compute node 1m load average, full test duration; average 35.9]
SERIALLY BOOT 15 VMS
OpenStack Cloudy Benchmark
Cloudy Performance: Serial VM Boot
Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot VM
– Wait for VM to become ACTIVE
– Repeat the above steps for a total of 15 VMs
– Delete all VMs
Benchmark driver
– OpenStack Rally
High level goals
– Understand compute node characteristics under sustained VM boots
[Chart: benchmark visualization – active VMs vs. time]
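Rally drives this scenario; an illustrative task file for a boot-and-delete run is shown below (the scenario name `NovaServers.boot_and_delete_server` is a standard Rally scenario, but the flavor, image name, and counts here are assumptions – the deck does not include its actual task files):

```json
{
  "NovaServers.boot_and_delete_server": [
    {
      "args": {
        "flavor": {"name": "m1.small"},
        "image": {"name": "ubuntu-mysql"}
      },
      "runner": {"type": "serial", "times": 15},
      "context": {"users": {"tenants": 1, "users_per_tenant": 1}}
    }
  ]
}
```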
Cloudy Performance: Serial VM Boot
[Chart: average server boot time – docker 3.53 s, KVM 5.78 s]
Cloudy Performance: Serial VM Boot
[Chart: Docker – compute node CPU; averages: usr 1.39%, sys 0.57%]
[Chart: KVM – compute node CPU; averages: usr 13.45%, sys 2.23%]
Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM compute node CPU, unnormalized overlay]
Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM serial VM boot usr CPU, segment 8s–58s, with linear fits – docker: f(x) = 0.0095x + 1.008; KVM: f(x) = 0.3582x + 1.063]
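The fit lines in the chart are ordinary least-squares fits of usr CPU against time; a minimal sketch of how such a slope and intercept are computed (the sample series below is synthetic, not the benchmark data):

```python
def linear_fit(ys):
    """Least-squares fit y = m*x + b for samples taken at x = 0, 1, 2, ..."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    m = cov / var
    b = mean_y - m * mean_x
    return m, b

# A perfectly linear synthetic series: y = 2x + 1.
print(linear_fit([1, 3, 5, 7]))  # (2.0, 1.0)
```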
Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM compute node memory used, unnormalized overlay – docker delta 677 MB (45 MB per VM); KVM delta 2737 MB (182 MB per VM)]
Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM serial VM boot memory usage, segment 1s–67s, with linear fits – docker: f(x) = 11,773,408x + 1,449,606,116; KVM: f(x) = 29,765,955x + 1,178,597,199]
Cloudy Performance: Serial VM Boot
[Chart: Docker – compute node 1m load average; average 0.25]
[Chart: KVM – compute node 1m load average; average 11.18]
SERIAL VM SOFT REBOOT
OpenStack Cloudy Benchmark
Cloudy Performance: Serial VM Reboot
Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot a VM & wait for it to become ACTIVE
– Soft reboot the VM and wait for it to become ACTIVE
  • Repeat the reboot a total of 5 times
– Delete VM
– Repeat the above for a total of 5 VMs
Benchmark driver
– OpenStack Rally
High level goals
– Understand compute node characteristics under sustained VM reboots
[Chart: benchmark visualization – active VMs vs. time]
Cloudy Performance: Serial VM Reboot
[Chart: average server reboot time – docker 2.58 s, KVM 124.43 s]
Cloudy Performance: Serial VM Reboot
[Chart: average server delete time – docker 3.57 s, KVM 3.48 s]
Cloudy Performance: Serial VM Reboot
[Chart: Docker – compute node CPU; averages: usr 0.69%, sys 0.26%]
[Chart: KVM – compute node CPU; averages: usr 0.84%, sys 0.18%]
Cloudy Performance: Serial VM Reboot
[Chart: Docker – compute node used memory; delta 48 MB]
[Chart: KVM – compute node used memory; delta 486 MB]
Cloudy Performance: Serial VM Reboot
[Chart: Docker – compute node 1m load average; average 0.4]
[Chart: KVM – compute node 1m load average; average 0.33]
SNAPSHOT VM TO IMAGE
OpenStack Cloudy Benchmark
Cloudy Performance: Snapshot VM To Image
Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot a VM
– Wait for it to become ACTIVE
– Snapshot the VM
– Wait for the image to become ACTIVE
– Delete VM
Benchmark driver
– OpenStack Rally
High level goals
– Understand cloudy ops times from a user perspective
Cloudy Performance: Snapshot VM To Image
[Chart: average snapshot server time – docker 36.89 s, KVM 48.02 s]
Cloudy Performance: Snapshot VM To Image
[Chart: Docker – compute node CPU; averages: usr 0.42%, sys 0.15%]
[Chart: KVM – compute node CPU; averages: usr 1.46%, sys 1.0%]
Cloudy Performance: Snapshot VM To Image
[Chart: KVM – compute node used memory; delta 114 MB]
[Chart: Docker – compute node memory used; delta 57 MB]
Cloudy Performance: Snapshot VM To Image
[Chart: Docker – compute node 1m load average; average 0.06]
[Chart: KVM – compute node 1m load average; average 0.47]
GUEST PERFORMANCE BENCHMARKS
Guest VM Benchmark
Guest Ops: Network
[Chart: network throughput – docker 940.26, KVM 940.56 (10^6 bits/second)]
Guest Ops: Near Bare Metal Performance
Typical docker LXC performance is near par with bare metal
[Chart: linpack performance @ 45000 – GFlops vs. vCPUs; bare metal 220.77, 220.5 @ 32 vCPUs, 220.9 @ 31 vCPUs]
[Chart: memory benchmark performance – MEMCPY, DUMB, and MCBLOCK tests in MiB/s for bare metal, docker, and KVM]
Runtime Performance Benefits – Block I/O
Tested with [standard] AUFS
Guest Ops: File I/O Random Read / Write
[Chart: sysbench synchronous file I/O random read/write @ R/W ratio of 1.50 – total transferred in KB/sec vs. threads (1–64), docker vs. KVM]
Guest Ops: MySQL OLTP
[Chart: MySQL OLTP random transactional R/W (60 s) – total transactions vs. threads (1–64), docker vs. KVM]
Guest Ops: MySQL Indexed Insertion
[Chart: MySQL indexed insertion @ 100K intervals – seconds per 100K insertion batch vs. table size in rows (100K–1M), docker vs. KVM]
Cloud Management Impacts on LXC
[Chart: Docker boot container, CLI vs. nova virt – docker cli 0.17 s, nova-docker 3.53 s]
Cloud management often caps true ops performance of LXC
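From the chart's numbers, the overhead added by the management layer can be quantified directly:

```python
def management_overhead(cli_seconds, managed_seconds):
    """Absolute (s) and relative (x) overhead the management layer adds."""
    absolute = round(managed_seconds - cli_seconds, 2)
    relative = round(managed_seconds / cli_seconds, 1)
    return absolute, relative

# Boot times from this test run: docker CLI 0.17 s vs. nova-docker 3.53 s,
# i.e. roughly a 20x slowdown attributable to the management path.
print(management_overhead(0.17, 3.53))  # (3.36, 20.8)
```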
Ubuntu MySQL Image Size
[Chart: Docker / KVM Ubuntu MySQL image size – docker 381.5 MB, KVM 1080 MB]
Out of the box JeOS images for docker are lightweight
LXC In Summary
– Near bare metal performance in the guest
– Fast operations in the Cloud (often capped by the Cloud management framework)
– Reduced resource consumption (CPU, memory) on the compute node – greater density
– Out of the box smaller image footprint
LXC Gaps
There are gaps…
– Lack of industry tooling / support
– Live migration still a WIP
– Full orchestration across resources (compute / storage / networking)
– Fears of security
– Not a well known technology… yet
– Integration with existing virtualization and Cloud tooling
– Few, if any, industry standards
– Missing skill sets
– Slower upstream support due to the kernel dev process
– Memory / CPU proc FS not cgroup aware yet
– Etc.
LXC: Use Cases For Traditional VMs
There are still use cases where traditional VMs are warranted.
Virtualization of non-Linux based OSs
– Windows
– AIX
– Etc.
LXC not supported on the host
VM requires a unique kernel setup which is not applicable to other VMs on the host (i.e. per-VM kernel config)
LXC Recommendations
Private environments (trusted code)
– App packaging / deployment / management / etc., devOps, Cloud, etc… No additional worries about security
Public environments
– Single tenant
  • Same restrictions as private envs; tenant trusted code
– Multi tenant
[Diagram: LXC security triangle – security measures (LSM, capabilities, seccomp, RO bind mounts, GRSEC, etc.) scale with privileges, multitenancy, and untrusted code]
References & Related Links
– http://www.slideshare.net/BodenRussell/realizing-linux-containerslxc
– http://bodenr.blogspot.com/2014/05/kvm-and-docker-lxc-benchmarking-with.html
– https://www.docker.io/
– http://sysbench.sourceforge.net/
– http://dag.wiee.rs/home-made/dstat/
– http://www.openstack.org/
– https://wiki.openstack.org/wiki/Rally
– https://wiki.openstack.org/wiki/Docker
– http://devstack.org/
– http://www.linux-kvm.org/page/Main_Page
– https://github.com/stackforge/nova-docker
– https://github.com/dotcloud/docker-registry
– http://www.netperf.org/netperf/
– http://www.tokutek.com/products/iibench/
– http://www.brendangregg.com/activebenchmarking.html
– http://wiki.openvz.org/Performance
– http://www.slideshare.net/jpetazzo/linux-containers-lxc-docker-and-security