Privacy Protection in Virtualized Multi-tenant Cloud:
Software and Hardware Approaches
Haibo Chen Institute of Parallel and Distributed Systems
Shanghai Jiao Tong University http://ipads.se.sjtu.edu.cn/haibo_chen
A Review of a Keynote Speech by Justin Rattner at ISCA 2008
An Integrated Approach to Computer Architecture (CA) & OS?
This Talk
Identify security issues with current cloud platforms
Describe two approaches to privacy protection of VMs
• Software approach: nested virtualization
• Hardware approach: secure processor
A case of hybrid approach to systems research
Security Issues with Current Cloud Platform
Virtualization: enabling the cloud
VM Image
User Auth. & Payment
VMM
Control VM
User VM
User VM
User VM
Can we simply trust the cloud?
Is this “bubble” trustworthy?
Security Concerns in Cloud
IDC Survey on Cloud
7 Threats of Cloud Security [Gartner’08]
• Privileged operator access
• Regulatory compliance
• Data location
• Data segregation
• Recovery
• Investigative support
• Long-term viability
Why can’t we simply trust the multi-tenant cloud?
Reason #1: curious or malicious operators
..., peeking in on emails, chats and Google Talk call logs for several months …
Reason #2: huge TCB for the cloud
[Chart: Xen code size in KLOCs for versions 2.0, 3.0 and 4.0, broken down into VMM, Dom0 kernel, tools, and total TCB]
[Diagram: the trusted computing base spans the VMM and the control VM (kernel and tools) beneath the guest VMs]
The TCB had grown to 9 million LOCs by 2011
One point of penetration leads to full compromise
37 security issues were found in Xen and 53 in VMware by Oct 2010 [CVE’12]
The virtualization stack should be untrusted
How Can You Break the Virtualization Layer?
[Diagram: two attack paths against the virtualization layer: (1) a cloud operator attacking through the management VM’s kernel, and (2) a malicious guest VM attacking the VMM through its attack surface]
Microsoft Windows® Azure™ Platform Privacy Statement, Mar 2011
Amazon AWS User Agreement, 2010
Result: Limited Security Guarantees in Public Cloud
Outline
Overall Idea
Software-based Protection
CloudVisor: privacy protection of VMs in multi-tenant cloud with nested virtualization (SOSP 2011)
Hardware-based Protection
HyperCoffer: processor-rooted trust for guest VM protection (HPCA 2013)
A Design Principle
”Any problem in computer science can be solved with another level of indirection.”
– David Wheeler, quoted in Butler Lampson’s 1992 ACM Turing Award speech
Functionality of Virtualization-Layer
Major functionality
• Resource management: manage memory, devices and CPU cores
• Multi-tenancy: multiplex hardware resources among tenants, i.e., create/run multiple VMs
• Cloud management: VM save, clone, migration
Minor functionality
• Security protection (e.g., isolation, access control)
Unfortunately, these are intertwined in the same hypervisor layer, forming a large trusted computing base (TCB)
Minimize TCB (Trusted Computing Base)
• “Privileged operator access” is the top threat [Gartner’08]
• The hypervisor layer is getting more complex and vulnerable
Add “another layer of indirection”
Separate security protection from the other main functionalities
• Software-based protection (nested virtualization): add one thin layer, CloudVisor, below the hypervisor
• Hardware-based protection: reduce the TCB to a secure processor that performs security and privacy protection
Main Idea
CloudVisor: Security Protection of Virtual Machines Using a Nested Hypervisor
Goal of CloudVisor
Defend against curious or malicious cloud operators
• Ensure privacy and integrity of a tenant’s VM
Transparent to existing cloud infrastructure
• Little or no changes to the virtualization stack (OS, VMM)
Minimized TCB for the cloud
• Easy to verify correctness (e.g., by formal verification)
Non-goals
• Side-channel attacks
• Exploiting a user VM from the network
• Execution correctness of a VM
Observation and idea
Key observation: the protection logic for VMs is mostly fixed
Idea: separate resource management from security protection
CloudVisor: another layer of indirection
• Responsible for security protection of VMs
(Unmodified) VMM
• VM multiplexing and management
Result
• Minimized TCB
• VMM and CloudVisor can be separately designed and evolved
Architecture (logically) of CloudVisor
Bootstrap
• Intel TXT to late launch CloudVisor
• The hash of CloudVisor is stored in the TPM
CPU states
• Interpose on control switches between VMM and VM (i.e., VMExit)
Memory pages
• Interpose on address translation from guest physical address to host physical address
I/O data
• Transparent whole-VM-image encryption
• Decrypt/encrypt I/O data in CloudVisor
VM protection approach
See our paper for more details
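To make the memory interposition concrete, here is a minimal sketch, assuming a CloudVisor-like layer that tracks page ownership and validates each guest-physical-to-host-physical mapping the VMM proposes. All names (`PageOwner`, `validate_mapping`) are illustrative, not CloudVisor’s real interface.

```python
# Sketch: a CloudVisor-like layer interposing on EPT updates.
# Names and data structures are illustrative, not CloudVisor's real API.

UNOWNED = None

class PageOwner:
    """Tracks which VM owns each host-physical page frame."""
    def __init__(self):
        self.owner = {}              # hpa -> vmid

    def validate_mapping(self, vmid, gpa, hpa):
        """Allow the VMM's proposed gpa->hpa mapping only if the frame
        is unowned (first touch) or already owned by this VM."""
        cur = self.owner.get(hpa, UNOWNED)
        if cur is UNOWNED:
            self.owner[hpa] = vmid   # first use: assign frame to this VM
            return True
        return cur == vmid           # deny cross-VM remapping

tracker = PageOwner()
assert tracker.validate_mapping("vm1", gpa=0x1000, hpa=0x9000)      # first touch
assert not tracker.validate_mapping("vm2", gpa=0x2000, hpa=0x9000)  # denied
```

A denied mapping would cause CloudVisor to refuse the EPT update, so a compromised VMM cannot map one tenant’s memory into another tenant’s VM.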
Implementation
• Xen VMM; runs unmodified Windows and Linux virtual machines
• 1 LOC change to Xen to late launch CloudVisor
• 100-LOC patch to Xen to reduce VMExits (optional)
• Runs on SMP hardware and supports SMP VMs
• 5500 LOCs: a small TCB, possibly suitable for formal verification
Performance
[Chart: slowdown of CloudVisor (CV) normalized to Xen: KBuild 6.0%, Apache 0.2%, SPECjbb 2.6%, memcached 1.9%; average 2.7%]
Average slowdown 2.7%
Remaining Issues
What if an adversary breaks into the hardware?
Physical Threats Can Be Real
Hardware Maintenance
• Thousands of machine failures per year in a datacenter [Schroeder et al., SIGMETRICS’09]
• Replacement of memory and disks has become a daily routine
Data Residue
• Memory bus sniffing
• Non-volatile memory
• Cold-boot attacks [Halderman’09]
Surveillance cameras are NOT enough!
HyperCoffer
HyperCoffer: Processor-Rooted
Transparent Protection of VMs
Goal: Minimize TCB to the Processor
[Diagram: in a traditional system the CPU, hypervisor, Dom-0, memory, bus, disk, NIC and other software/hardware are all trusted; in HyperCoffer only the secure processor (Sec-CPU) is trusted, while the hypervisor, Dom-0 and all off-chip hardware are untrusted]
Reduce TCB to only Secure Processor
Background: Secure Processor
Previous Work on Secure Processors
Data Privacy
• Data is encrypted outside the CPU
• Data is decrypted only in the cache
Data Integrity
• Update the hash tree at every write from cache to memory
• Check the hash tree at every read from memory to cache
Mainly used to protect applications from an untrusted OS
Address-Independent Seed Encryption
[Diagram: AISE in the secure processor: a per-block counter (held in a counter cache) forms an address-independent seed, which is encrypted under the VM key to produce a pad; the pad is XORed with the data, so only ciphertext leaves the chip while the data cache holds plaintext]
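The counter-mode scheme sketched in the diagram can be illustrated as follows. This is a minimal sketch, not the real hardware: a secure processor uses AES in silicon, and here SHA-256 stands in as the pseudorandom function; the names `pad`, `seed_id` and the seed layout are assumptions for illustration.

```python
# Sketch of AISE-style counter-mode encryption (illustrative only; real
# secure processors use hardware AES, here SHA-256 stands in as the PRF).
import hashlib

def pad(vm_key: bytes, seed_id: bytes, counter: int, length: int) -> bytes:
    """Derive the one-time pad from the per-VM key, an address-independent
    seed identifier, and the per-block write counter."""
    out, block = b"", 0
    while len(out) < length:
        out += hashlib.sha256(vm_key + seed_id
                              + counter.to_bytes(8, "little")
                              + block.to_bytes(4, "little")).digest()
        block += 1
    return out[:length]

def encrypt(vm_key, seed_id, counter, data):
    p = pad(vm_key, seed_id, counter, len(data))
    return bytes(a ^ b for a, b in zip(data, p))

decrypt = encrypt   # XORing with the same pad inverts the operation

key = b"per-VM key"
ct = encrypt(key, b"page42", 7, b"secret cache line")
assert ct != b"secret cache line"
assert decrypt(key, b"page42", 7, ct) == b"secret cache line"
# The counter increments on every write-back: reusing a (seed, counter)
# pair with different data would leak the XOR of the two plaintexts.
```

Because the pad depends only on the seed and counter, it can be precomputed while the memory fetch is in flight, hiding most of the decryption latency.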
Secure Processor: Merkle Hash Tree
[Diagram: a Merkle hash tree over the ciphertext data and counters, with the root hash kept inside the secure processor]
Secure Processor: Bonsai Merkle Tree
[Diagram: a Bonsai Merkle tree covering only the counters, with the root hash kept inside the secure processor]
Prior Hardware Approaches
Do not really consider systems issues
HyperWall (ASPLOS’12)
• Requires the OS to specify which pages should be protected
• Ignores complex interactions between the hypervisor and guest VM
• Not compatible with existing VM operations
H-SVM (MICRO’11)
• Uses a microprogram to do memory isolation; no defense against hardware attacks
• Requires the OS to designate which memory is protected
Most others focus on fine-grained protection of applications or app modules (e.g., SecureME, Bastion)
Challenges
1. Interaction between hypervisor and VMs
• Selectively expose fields of the CPU context to the hypervisor
• Auxiliary info for instruction emulation (e.g., guest page table)
• I/O emulation for both disk and NIC
2. Backward compatibility with existing OSes
• Minimize the cost of deployment
3. Supporting VM operations
• Not limited by data structures on the chip
HyperCoffer retains OS transparency with VM-Shim
Design Overview
[Diagram: each VM (app + OS) is paired with a shim; the secure CPU distinguishes guest mode (VM), shim mode (shim) and host mode (hypervisor), and all data exchange between a VM and the hypervisor, including I/O devices and memory, passes through its shim]
Design Overview
1. Complete Isolation
• VM-Table to support multiple VMs
• Tagged cache for different VMs
• Dedicated EPT memory
2. Controlled Interaction
• VM-Shim control interposition: shim mode
• VM-Shim data interaction: 3 types of interactive data
STEP-1: COMPLETE ISOLATION
Leveraging the Secure Processor
Data Privacy: AISE
• All data in a VM is encrypted
• Different VMs have different keys
• VM keys are saved on-chip
• A malicious hypervisor or malicious hardware cannot read VM data
Data Integrity: BMT
• BMT (Bonsai Merkle Tree): the root hash is saved on-chip
• Every memory read checks the hash value
• Every memory write updates the BMT
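The read-check/write-update discipline can be sketched with a toy hash tree. This is illustrative only: a real Bonsai Merkle tree hashes the encryption counters (not raw data) in hardware and caches tree nodes; the helper names `h` and `build_tree` are assumptions.

```python
# Sketch of a Bonsai-Merkle-style integrity tree (illustrative; the real
# hardware hashes the encryption counters and keeps only the root on-chip).
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def build_tree(leaves):
    """Return all levels of a binary hash tree; level[-1][0] is the root."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i], lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

counters = [b"ctr0", b"ctr1", b"ctr2", b"ctr3"]   # per-block write counters
root_on_chip = build_tree(counters)[-1][0]        # only this lives in the CPU

# Memory read: recompute the path and compare against the on-chip root.
assert build_tree(counters)[-1][0] == root_on_chip

# Hardware replay attack: roll one counter back to a stale value.
counters[2] = b"old"
assert build_tree(counters)[-1][0] != root_on_chip   # mismatch detected
```

Because the root never leaves the chip, off-chip tampering with memory or the bus cannot forge a consistent tree.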
VM-Table
VM-Table Contents
• Each running VM has one entry
• Each entry contains a Kvm and a vm_vector
• Kvm is the per-VM key used to encrypt the VM’s data
• vm_vector contains VM info for the secure processor, used to verify a VM image (e.g., the root hash of the BMT)
The VM-Table is saved in a reserved memory region in encrypted form
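A VM-Table entry could be pictured as follows. This is a sketch only: the field names `kvm` and `vm_vector` follow the slides, but the layout and the `VMVector` contents are assumptions, not the real hardware format.

```python
# Sketch of a VM-Table entry (layout illustrative, not the hardware format).
from dataclasses import dataclass

@dataclass
class VMVector:
    bmt_root: bytes          # root hash of the VM's Bonsai Merkle tree

@dataclass
class VMTableEntry:
    vmid: int
    kvm: bytes               # per-VM encryption key, never leaves the chip
    vm_vector: VMVector

vm_table = {
    1: VMTableEntry(1, kvm=b"\x11" * 16, vm_vector=VMVector(b"\x00" * 32)),
    2: VMTableEntry(2, kvm=b"\x22" * 16, vm_vector=VMVector(b"\x00" * 32)),
}
assert vm_table[1].kvm != vm_table[2].kvm   # different VMs, different keys
```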
BMT & AISE Are NOT Enough
1. Inter-VM Remapping Attack
• A malicious hypervisor remaps VM-A’s page to VM-B
• If the page’s data is in the cache, VM-B gets it: data in the cache is plaintext
• Solution: tag each cache line with a VMID
[Diagram: VM-A’s and VM-B’s EPTs translate guest-physical addresses (GPA) to host-physical addresses (HPA); a remapped page may still sit as a plaintext cache line in the CPU]
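The VMID-tag defense can be sketched in a few lines. This is a behavioral model, not the hardware: `TaggedCache` and its methods are invented names for illustration.

```python
# Sketch: a VMID-tagged cache defeating inter-VM remapping (illustrative).
class TaggedCache:
    def __init__(self):
        self.lines = {}              # hpa -> (vmid tag, plaintext data)

    def fill(self, vmid, hpa, plaintext):
        self.lines[hpa] = (vmid, plaintext)

    def read(self, vmid, hpa):
        """A hit only counts if the VMID tag matches; otherwise the access
        misses and goes to memory, where the data is encrypted under the
        owner's key and thus unreadable to this VM."""
        entry = self.lines.get(hpa)
        if entry and entry[0] == vmid:
            return entry[1]
        return None                  # miss: must fetch and decrypt from memory

cache = TaggedCache()
cache.fill("VM-A", 0x9000, b"plaintext secret")
assert cache.read("VM-A", 0x9000) == b"plaintext secret"
# The hypervisor remaps the page into VM-B; the tag mismatch forces a miss,
# and VM-B can only obtain ciphertext from memory.
assert cache.read("VM-B", 0x9000) is None
```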
BMT & AISE Are NOT Enough
2. Intra-VM Remapping Attack
• A malicious hypervisor swaps a VM’s page A with page B
• If the data is in the cache, the two pages are switched
• Solution: dedicated memory for the EPT
• Flush the cache when the EPT is changed
• Optimization: lazy TLB flushing, only at n-TLB misses
[Diagram: guest-physical pages X and Y swapped between host-physical frames A and B by a remapped EPT]
STEP-2: CONTROLLED INTERACTION
VM-Shim Mode
VM-Shim
• A piece of code that runs between the hypervisor & a VM
• Exchanges data between the two
Shim Mode
• Shim mode can access the VM’s data
• The VM cannot access the Shim’s data
[Diagram: on a VMExit, control transfers from the VM to its shim, then to the hypervisor, and returns to the VM through the shim]
VM-Shim: Data Interaction
Specification of Interactive Data
• Describes the format of interactive data and where to store it
• Our implementation: use the shim’s memory
Two New Instructions
• raw_st: store data without encryption
• raw_ld: load data without integrity check
[Diagram: the shim copies interactive data such as the CPU context from the VM’s encrypted memory into the shim’s plaintext memory]
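The role of raw_st/raw_ld can be modeled in software. This is a sketch under stated assumptions: on the real hardware these are processor instructions; here `SecureMemory` and its simple XOR "encryption" merely illustrate which accesses bypass the protection.

```python
# Sketch: modeling raw_st / raw_ld as bypasses of the memory protection
# (illustrative; on the real hardware these are processor instructions).
class SecureMemory:
    def __init__(self, key: bytes):
        self.key = key
        self.cells = {}

    def _xor(self, data):
        return bytes(b ^ self.key[i % len(self.key)] for i, b in enumerate(data))

    def st(self, addr, data):       # normal store: encrypted on the way out
        self.cells[addr] = self._xor(data)

    def ld(self, addr):             # normal load: decrypted on the way in
        return self._xor(self.cells[addr])

    def raw_st(self, addr, data):   # raw_st: plaintext, readable by hypervisor
        self.cells[addr] = data

    def raw_ld(self, addr):         # raw_ld: no decryption, no integrity check
        return self.cells[addr]

mem = SecureMemory(key=b"\x5a")
mem.st(0x100, b"vm secret")
assert mem.raw_ld(0x100) != b"vm secret"   # hypervisor sees only ciphertext
mem.raw_st(0x200, b"%dx=0x3f8")            # shim publishes interactive data
assert mem.raw_ld(0x200) == b"%dx=0x3f8"   # hypervisor can read it
```

The shim thus decides exactly which bytes become visible to the hypervisor; everything else stays encrypted and integrity-protected.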
VM-Shim: Interactive Data
Minimize Interactive Data
• Different for different VMExits
Data Specification
• Communication protocol between the hypervisor and the shim
• CPU context: register values of the guest VM
• Disk I/O: metadata of I/O operations
• Network I/O: both metadata and I/O data
• Auxiliary info: e.g., page table entry, trapped instruction
Example: Trap & Emulate an I/O Instruction: in %dx, %eax
• Read from the I/O port in %dx and put the value into %eax
[Diagram: the trapped `in` instruction: the shim exposes %dx via the shim’s memory, the hypervisor emulates the port read against the I/O device, and the shim writes the result back into the VM’s %eax]
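The flow above can be sketched as a single handler. This is illustrative: `handle_io_vmexit`, the register dictionary, and the port table are invented stand-ins for the shim, the guest CPU context, and the hypervisor’s device model.

```python
# Sketch of the trap-and-emulate flow for `in %dx, %eax` (names illustrative).
def handle_io_vmexit(vm_regs, shim_mem, hypervisor_ports):
    """Shim: expose only %dx (the port number) to the hypervisor, let the
    hypervisor emulate the port read, then write the result into %eax."""
    shim_mem["dx"] = vm_regs["dx"]                       # publish via raw_st
    shim_mem["eax"] = hypervisor_ports[shim_mem["dx"]]   # hypervisor emulates
    vm_regs["eax"] = shim_mem["eax"]                     # shim copies back
    # All other guest registers stayed hidden from the hypervisor.

regs = {"dx": 0x3F8, "eax": 0, "rip": 0xDEADBEEF}
ports = {0x3F8: 0x41}            # emulated serial-port data
handle_io_vmexit(regs, {}, ports)
assert regs["eax"] == 0x41       # value delivered to the guest
```

Only the two registers the instruction actually needs cross the trust boundary; the rest of the CPU context (e.g., %rip) never reaches the hypervisor.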
Example: DMA in Network I/O
• Maintain state by interposing on all I/O operations to the device
• Use the Shim’s own memory as a shadow buffer for DMA
[Diagram: the device DMAs packet data into a shadow buffer in the shim’s memory; the shim then copies the data into the VM’s encrypted memory]
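The shadow (bounce) buffer step can be sketched as follows. All names are illustrative, and the XOR `vm_encrypt` is only a stand-in for the secure processor encrypting VM memory write-backs.

```python
# Sketch of the shadow (bounce) buffer for network DMA (names illustrative).
def vm_encrypt(data, key=0x5A):
    """Stand-in for the secure processor encrypting VM memory write-backs."""
    return bytes(b ^ key for b in data)

def dma_receive(device_payload, shim_mem, vm_memory, vm_buf_addr):
    """The NIC cannot write encrypted VM memory directly, so it DMAs into
    the shim's plaintext shadow buffer; the shim then copies the data into
    the VM's buffer through the normal (encrypting) memory path."""
    shim_mem["shadow"] = device_payload                       # device -> shadow
    vm_memory[vm_buf_addr] = vm_encrypt(shim_mem["shadow"])   # shim -> VM memory

shim, vm_mem = {}, {}
dma_receive(b"incoming packet", shim, vm_mem, 0x8000)
assert vm_mem[0x8000] != b"incoming packet"           # encrypted in VM memory
assert vm_encrypt(vm_mem[0x8000]) == b"incoming packet"
```

Transmit works symmetrically: the shim decrypts the VM’s buffer into the shadow buffer, from which the device DMAs it out.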
Implementation
On Emulator
• Based on QEMU
On Real Machine
• Use the VMExit hook to implement VM-Shim
Components
• User-level agent: 200 LOCs
• VM-Shim: 1100 LOCs
• Xen changes: 180 LOCs
Evaluation on Simulator
Simulator
• Dinero-IV
• LLC: 8 MB, 8-way set-associative
• Counter cache: 64 KB, 8-way set-associative
• Caches use LRU replacement with 64-byte blocks
• Memory: 512 MB, latency 350 cycles
• AES encryption: 80 cycles
Virtualization Software
• Xen 4.0.1, domain-0 with Linux 2.6.31
Virtual Machine
• One or more cores, 1 GB memory
• 20 GB virtual disk, virtual NIC
• Unmodified Debian with kernel 2.6.31 (x64); Windows XP SP2 (x64)
Evaluation on Simulator
[Chart: normalized slowdown over Xen (%) on the simulator, comparing AISE+BMT alone against AISE+BMT+Shim across the benchmarks; per-benchmark overheads range from under 1% to about 14%]
Evaluation on Real-Machine
Software
• Xen 4.0.1, domain-0 with Linux 2.6.31
Hardware
• AMD quad-core CPU, 4 GB memory
• 100 Mb NIC, 320 GB disk
Virtual Machine
• One or more cores, 1 GB memory
• 20 GB virtual disk, virtual NIC
• Unmodified Debian with kernel 2.6.31 (x64); Windows XP SP2 (x64)
Evaluation on Real-Machine
[Chart: performance overhead over Xen (%) on the real machine for kbuild, dbench, netperf, memcached and specjbb-xp, in single-core and quad-core configurations; overheads range from 0.3% to 6.8%]
Summary
• Lack of security guarantees in the multi-tenant cloud
• A case for an integrated approach to computer systems
• Two software/hardware systems to secure the cloud
• CloudVisor: whole-VM protection with a nested hypervisor
• HyperCoffer: hardware-rooted whole-system security
See our papers for more detailed information
Thanks
Institute of Parallel and Distributed Systems http://ipads.se.sjtu.edu.cn
Questions? CloudVisor/HyperCoffer
One (small) ring to Rule them (cloud) all