Apache CloudStack (Incubating) An Introduction
Kevin Kluge Apache CloudStack Committer
• Create VMs, disks, networks, network services
• Self service • Meter usage
Use CloudStack to build IaaS clouds (like EC2)
• Java based • Scalable • Many vendor integrations • Native and EC2 API
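CloudStack's native API is a signed HTTP query interface: parameters are sorted, the query string is lowercased, and an HMAC-SHA1 signature is appended. A minimal sketch of that signing scheme; the endpoint, `my-api-key`, and `my-secret-key` below are placeholders, not real values:

```python
# Sketch of signing a CloudStack native API request.
# The host and keys are hypothetical placeholders.
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params: dict, secret_key: str) -> str:
    """Build a signed CloudStack API query string."""
    # Sort parameters by name and URL-encode each value
    sorted_params = sorted(params.items())
    query = "&".join(f"{k}={urllib.parse.quote(str(v), safe='')}"
                     for k, v in sorted_params)
    # HMAC-SHA1 over the lowercased query string, then base64-encode
    digest = hmac.new(secret_key.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    return query + "&signature=" + urllib.parse.quote(signature, safe="")

# Example: list running VMs (hypothetical credentials)
params = {
    "command": "listVirtualMachines",
    "state": "Running",
    "response": "json",
    "apikey": "my-api-key",
}
url = ("http://mgmt-server:8080/client/api?"
       + sign_request(params, "my-secret-key"))
```

The same signed-query pattern applies to every native API command; the EC2-compatible API accepts AWS-style requests instead.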
How did Amazon build EC2?
Commodity Servers
Commodity Storage Networking
Open Source Xen Hypervisor
Amazon Orchestration Software
AWS API (EC2, S3, …)
Amazon eCommerce Platform
How can you build your cloud?
Servers Storage Networking
Open Source Xen Hypervisor
Amazon Orchestration Software
AWS API (EC2, S3, …)
Amazon eCommerce Platform
ESXi, KVM, XenServer/XCP, OVM
CloudStack Orchestration Software
Your Portal (Optional)
CloudStack or AWS API
Project history
• 2008/2009: closed-source development • First deployments in late 2009
• May 2010: ~98% open source as GPLv3 (open core) • August 2011: 100% open source GPLv3
• April 2012: Switch to Apache License v2 • Submit code to Apache Software Foundation
IaaS Cloud Concepts
Cloud
Traditional: Built for traditional enterprise apps & client-server compute • Scale-up (pool-based resourcing) • IT management-centric • 1 administrator for 100s of servers • Proprietary vendor stack
Cloud-era: Designed around big data, massive scale & next-gen apps • Scale-out (horizontal resourcing) • Autonomic management • 1 administrator for 1,000s of servers • Open, value-added stack
Virtualization alone does not make a cloud
Server Virtualization
Design for failure
Self-service recovery
Multi-site redundancy
Ephemeral resources
Cloud Workload
Think Amazon Web Services
Expect reliability
Back-up everything
HA, Fault tolerance
Admin control recovery
Traditional Workload
Think Server Virtualization
Clouds must reliably run all types of workloads
Object Storage
vSphere
ESXi Cluster
ESXi Cluster
ESXi Cluster
Enterprise Networking (e.g., VLAN)
Enterprise Storage (e.g., SAN)
(Diagram: CloudStack Mgmt Server managing Cloud-era Availability Zones and a Traditional Zone)
Cloud-era Workloads Traditional Workloads
Embrace traditional and extend to Cloud-era
(Diagram: Apache CloudStack Management Server managing multiple Cloud-era and Traditional Availability Zones)
Amazon-Style Cloud
Object store is critical for Cloud-era workloads
CloudStack Mgmt. Server
• Workloads are distributed across availability zones
• No guarantee on zone reliability
• DBs and templates are snapshotted to the object store
• For small failures, recreate the instance in the same zone
• For DR, recreate the instance in a different zone
• Dramatically less expensive
Object Store
Deployment Architecture
Data Center 1: Zone 1
Data Center 2: Zone 2, Zone 3
Data Center 3: Zone 4
Management Servers
• A single Management Server can manage multiple zones
• Zones can be geographically distributed, but low-latency links are expected for better performance
• A single MS node can manage up to 10K hosts
• Multiple MS nodes can be deployed as a cluster for scale or redundancy
Router
L3 Core Switch
Top of Rack Switch
Availability Zone 1
Servers
Primary Mgmt Server Cluster
Object Store
Pod 1 Pod 2 Pod 3 Pod N
Primary MySQL
Load Balancer
Admin Internet
Availability Zone 2
Backup MySQL
Standby Mgmt Server Cluster Cloud-era zone deployment
10Gbps Storage & Mgmt
1Gbps Guest
…
Load Balancer
Core Switch
Aggregation Switch
TOR Switch
Compute Nodes
NFS Primary Storage
Object Store
Pod 1
Pod 2
Pod 200
Internet Traditional zone deployment
Management Server
XenServer
ESX
vCenter
KVM
Agent
OVM
Agent
XAPI HTTP
XenServer/XCP: • XS 5.6, 5.6 FP1, 5.6 SP2, 6.0.2; XCP 1.1 • Incremental snapshots • VHD • NFS, iSCSI, FC & local disk • Storage over-provisioning: NFS
VMware: • ESX 4.1, 5.0 • Full snapshots • VMDK • NFS, iSCSI, FC & local disk • Storage over-provisioning: NFS, iSCSI
KVM: • RHEL 6.0, 6.1, 6.2; Ubuntu 12.04 • Full snapshots (not live) • QCOW2 • NFS, iSCSI & FC • Storage over-provisioning: NFS
OVM: • OVM 2.2 • No snapshots • RAW • NFS & iSCSI • No storage over-provisioning
XCP
(Charts: Mgmt Server CPU utilization at 25,000 to 30,000 VMs; seconds to deploy from 0 to 30,000 VMs)
• Simulator developed to test massive scale
• Four Management Servers can manage 30,000 hosts
• Scale to hundreds of thousands of hosts is possible with multiple management server clusters (regions)
Features
Compute
XCP/XS VMware KVM Oracle VM Bare metal
Hypervisor
Storage
Local Disk iSCSI NFS Fiber Channel
Object Stores
Block & Object
Network
Network Type, Isolation, Load balancer, Firewall, VPN
Network & Network Services
Users
Start
Stop
Restart
Destroy
VM Operations Console Access
• CPU Utilized
• Network Read
• Network Writes
VM Status Change Service Offering
2 CPUs 1 GB RAM 20 GB 20 Mbps
4 CPUs 4 GB RAM 200 GB 100 Mbps
Volume
VM 1 Add / Delete Volumes
Schedule Snapshots
Hourly Daily
Weekly Monthly
Now
Create Templates from Volumes
Volume
Template
View Snapshot History 12/2/2012 7.30 am
…. 2/2/2012 7.30 am
• A Domain is a unit of isolation that represents a customer org, business unit, or a reseller
• A Domain can have arbitrary levels of sub-domains
• A Domain can have one or more Accounts
• An Account represents one or more users and is the basic unit of isolation
• Admins can limit resources at the Account or Domain level
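The hierarchy above can be sketched as a toy model (not CloudStack code; the class names, the single "VM" resource, and all limits are illustrative) showing how a limit is enforced at the Account level and at every enclosing Domain level:

```python
# Toy model of the Domain/Account hierarchy with VM limits.
# Names and the single "vm" resource are illustrative only.
class Domain:
    def __init__(self, name, vm_limit=None, parent=None):
        self.name = name
        self.vm_limit = vm_limit          # None = unlimited
        self.parent = parent
        self.children = []
        self.accounts = []
        if parent is not None:
            parent.children.append(self)

class Account:
    def __init__(self, name, domain, vm_limit=None):
        self.name = name
        self.domain = domain
        self.vm_limit = vm_limit
        self.vm_count = 0
        domain.accounts.append(self)

def domain_vm_count(domain):
    # A domain's usage includes its accounts and all sub-domains
    return (sum(a.vm_count for a in domain.accounts)
            + sum(domain_vm_count(c) for c in domain.children))

def can_deploy_vm(account):
    # Check the account limit first, then walk up the domain chain
    if account.vm_limit is not None and account.vm_count >= account.vm_limit:
        return False
    d = account.domain
    while d is not None:
        if d.vm_limit is not None and domain_vm_count(d) >= d.vm_limit:
            return False
        d = d.parent
    return True
```

For example, a reseller domain capped at 3 VMs blocks a new deployment from any sub-domain account once its tree holds 3 VMs, even if that account's own limit is not yet reached.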
Admin
Org A
Admin
Reseller A
Domain
Domain
Admin
Org C
Sub-Domain
User 1
User 2
Group B
Account
Group A
Account
VMs, IPs, Snapshots…
VMs, IPs, Snapshots…
Resources
Resources
CPU Cores
CPU (MHz)
Memory (MB)
Name
Compute
Specify Resource Levels
Custom Disk Size
Disk Size (GB)
Storage Tag
Storage Tag
Public
Name
Disk
Network Rate
Redundant VR
Public
Name
Network
Firewall
Load balancer
CPU Cap
Host Tag
Enable HA
Configure Properties
Public
Define Scope
• Create networks and attach VMs
• Acquire public IP addresses for NAT & load balancing
• Control traffic to VMs using ingress and egress firewall rules
• Set up rules to load balance traffic between VMs
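As a conceptual sketch (not CloudStack's actual implementation), ingress filtering amounts to matching each inbound packet against the configured rules with a default deny; the rule fields below mirror the CIDR, protocol, and port-range parameters of CloudStack firewall and security-group rules:

```python
# Conceptual sketch of ingress rule matching with default deny.
import ipaddress

def ingress_allows(rules, src_ip, port, proto="tcp"):
    """Allow a packet only if some ingress rule covers its protocol,
    destination port range, and source CIDR (otherwise deny)."""
    for rule in rules:
        if (rule["proto"] == proto
                and rule["start_port"] <= port <= rule["end_port"]
                and ipaddress.ip_address(src_ip)
                    in ipaddress.ip_network(rule["cidr"])):
            return True
    return False

# Illustrative rule set: web open to the world, SSH only from 10.0.0.0/8
rules = [
    {"proto": "tcp", "start_port": 80, "end_port": 80, "cidr": "0.0.0.0/0"},
    {"proto": "tcp", "start_port": 22, "end_port": 22, "cidr": "10.0.0.0/8"},
]
```

Egress rules work the same way in the outbound direction.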
Public Network 65.11.0.0/16
65.11.1.2
Guest VM 1
Guest VM 2
Guest VM 3
Guest VM 4
Public Network/Internet
Physical Load Balancer
Network Services Managed Externally Network Services Managed by CS
65.11.1.3
65.11.1.4
65.11.1.5
DHCP, DNS
CS Virtual Router
Security Group 1
Security Group 2
65.11.1.2
Guest VM 1
Guest VM 2
Guest VM 3
Guest VM 4
65.11.1.3
65.11.1.4
65.11.1.5
DHCP, DNS
CS Virtual Router
Security Group 1
Security Group 2
EIP, ELB
Public Network/Internet Guest Virtual Network 10.0.0.0/8
VLAN 100
Gateway address 10.1.1.1
DHCP, DNS NAT Load Balancing VPN
6.37.1.11
10.1.1.1
Guest VM 1
10.1.1.3
Guest VM 2
10.1.1.4
Guest VM 3
10.1.1.5
Guest VM 4
CS Virtual Router
Public Network/Internet
Guest Virtual Network 10.0.0.0/8 VLAN 100
Private IP 10.1.1.112
DHCP, DNS
Public IP 6.37.1.11
10.1.1.1
Guest VM 1
10.1.1.3
Guest VM 2
10.1.1.4
Guest VM 3
10.1.1.5
Guest VM 4
Physical Load Balancer
Private IP 10.1.1.111
Public IP 6.37.1.12
Juniper SRX
Firewall
CS Virtual Router provides Network Services External Devices provide Network Services
CS Virtual Router
                                        Layer-2      Layer-3
Isolation                               VLAN/SDN     Security Groups
Performance                             Better       Better
Network setup                           Moderate     Easy
Supports broadcast                      Yes          No
Scalability                             Good         Best
Interoperability with physical servers  Good         Poor
Pod 1
Host 2
Cluster 1
Host 1 Primary Storage
L3 switch
Secondary Storage
L2 switch
CloudStack storage

Primary Storage
• Configured at the cluster level, close to hosts for better performance
• Stores all disk volumes for VMs in a cluster
• A cluster can have one or more primary storages
• Local disk, iSCSI, FC or NFS

Secondary Storage
• Configured at the zone level
• Stores all templates, ISOs and snapshots
• A zone can have one or more secondary storages
• NFS, OpenStack Swift, others coming
Futures
Apache CloudStack API
Switches Hypervisor
Apache CloudStack API
Firewall Load Bal Baremetal Security
Apache CloudStack API Apache CloudStack API
Storage
Futures
• Object storage and SDN short term
• Blade orchestration
• Region support
• Additional hypervisors (need some container support)
• Code modularity improvements (OSGi?)
• App-specific integration (Hadoop?)
• Improved CLI
• Additional API support (Google, evolving standards)
Thank You
Project current state
• In incubation within Apache Software Foundation
• Imminent first release!
• Bugs and wiki mostly moved to ASF infra
• Mailing list traffic moved to ASF infra
• Many non-Citrix contributors, committers, and PPMC members
Yes, the ASF is great
Enter ASF
The future needs you! Project web site: http://incubator.apache.org/projects/cloudstack.html Mailing lists: [email protected] [email protected] IRC: #CloudStack on irc.freenode.net
Join your local CloudStack group!