
Storage Model

Contents

1 Introduction
2 Storage types
  2.1 LVM Groups - An Ideal Solution
    2.1.1 LVM Groups with Network Backing
    2.1.2 LVM Groups with Local Backing
  2.2 iSCSI Target
  2.3 NFS Share
  2.4 Use iSCSI LUN directly
  2.5 Directory
    2.5.1 HowTo mount a Windows (SaMBa) share on Proxmox VE via /etc/fstab
    2.5.2 References

Introduction

Proxmox VE uses a very flexible storage model. Virtual machine images can be stored on local storage (and more than one local storage type is supported) as well as on shared storage such as NFS and SAN (e.g. using iSCSI).

All storage definitions are synchronized throughout the Proxmox_VE_2.0_Cluster, so it is just a matter of minutes before a SAN configuration is usable on all Proxmox_VE_2.0_Cluster nodes.

You may configure as many storage definitions as you like!

One major benefit of storing VMs on shared storage is the ability to live-migrate running machines without any downtime, as all nodes in the cluster have direct access to the VM disk images.

Note: OpenVZ containers must be on local storage or NFS.

Storage types

The following storage types can be added via the web interface.

Network storage types supported:
- LVM Group (network backing with iSCSI targets)
- iSCSI target
- NFS Share
- Direct to iSCSI LUN

Local storage types supported:
- LVM Group (local backing devices like block devices, FC devices, DRBD, etc.)
- Directory (storage on an existing filesystem)


LVM Groups - An Ideal Solution

Using LVM groups provides the best manageability. Logical volumes can easily be created, deleted, or moved between physical storage devices. If the base storage for the LVM group is accessible on all Proxmox VE nodes (e.g. an iSCSI LUN) or replicated (with DRBD), then all nodes have access to the VM images and live-migration is possible.
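Day-to-day management then boils down to the standard LVM commands. As a minimal sketch, assuming a volume group named vg0 with a spare physical volume /dev/sdc1 (both names are made up for illustration):

proxmox-ve:~# lvcreate -L 10G -n testvol vg0    # create a 10 GB logical volume
proxmox-ve:~# pvmove /dev/sdc1                  # move all data off this physical volume onto the others in the group
proxmox-ve:~# lvremove /dev/vg0/testvol         # delete the logical volume again

For volumes managed by Proxmox VE you normally let the web interface create and delete them for you.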

LVM Groups with Network Backing

In this configuration, network block devices (iSCSI targets) are used as the physical volumes for LVM logical volume storage. This is a two-step procedure and can be fully configured via the web interface.

1. First, add the iSCSI target. (On some iSCSI targets you need to add the IQN of the Proxmox VE server to allow access.)
   - Click 'Add iSCSI Target' on the Storage list.
   - As storage name use whatever you want, but take care: this name cannot be changed later.
   - Give the 'Portal' IP address or server name and scan for unused targets.
   - Disable 'use LUNs directly'.
   - Click save.

2. Second, add an LVM group on this target.
   - Click 'Add LVM Group' on the Storage list.
   - As storage name use whatever you want, but take care: this name cannot be changed later.
   - For 'Base Storage', use the drop-down menu to select the previously defined iSCSI target.
   - For 'Base Volume', select a LUN.
   - For 'Volume Group Name', give a unique name (this name cannot be changed later).
   - Enable shared use (recommended).
   - Click save.
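For reference, the web interface writes both definitions to /etc/pve/storage.cfg, which is shared across the cluster. The entries end up looking roughly like the following sketch; the storage names, portal address and target IQN are made-up examples, and the 'base' line is filled in automatically when you pick the LUN:

iscsi: san1
        portal 192.168.1.50
        target iqn.2013-05.com.example:storage.lun1
        content none

lvm: vmdata
        vgname vmdata
        base san1:0.0.0.scsi-<LUN identifier>
        shared
        content images

You normally never edit this file by hand; running 'pvesm status' on any node is a quick way to confirm that the new storage is visible everywhere.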

LVM Groups with Local Backing

In this configuration, physical block devices (which can be DRBD devices) are used as the physical volumes for LVM logical volume storage. Before you can store VMs this way, you first need to configure LVM2 using the console. Full management is not possible through the web interface at this time.

This is a three-step procedure (I just plugged in an 8 GB USB stick for demonstration, recognized as /dev/sdb on my box):

1. Physically install all devices you wish to import into a volume group.
2. Define those physical devices as LVM physical volumes (storage that can be used by LVM volume groups).
3. Create a volume group from those physical volumes and add it via the web interface.

First create the physical volume (pv):

proxmox-ve:~# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
proxmox-ve:~#


Second, create a volume group (vg):

proxmox-ve:~# vgcreate usb-stick /dev/sdb1
  Volume group "usb-stick" successfully created
proxmox-ve:~#
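If you want to double-check the result before continuing, the usual LVM reporting commands show the new physical volume and volume group:

proxmox-ve:~# pvs
proxmox-ve:~# vgs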

And finally, add the LVM Group to the storage list via the web interface:

"Storage name: usb", "Base storage: Existing volume groups", "Volume Group Name: usb-stick"

Now you can create the KVM VM:
- Type: Fully virtualized (KVM)
- Disk storage: usb (lvm)
- Disk space (GB): 1
- Image format: raw (the only option you can choose here, so Live_Snapshots are not supported)

After creation (say the new machine is VM 117), you will have an additional logical volume of the size of the VM's disk space:

proxmox-ve:~# lvdisplay
  --- Logical volume ---
  LV Name    /dev/usb-stick/vm-117-disk-1
  VG Name    usb-stick
  LV Size    1.00 GB

Note: after the experiment, remove the test storage as follows:

Remove the usb storage from the web interface, then:

proxmox-ve:~# vgremove usb-stick
  Volume group "usb-stick" successfully removed

then

proxmox-ve:~# pvremove /dev/sdb1
  Labels on physical volume "/dev/sdb1" successfully wiped

and finally unplug the USB stick.

iSCSI Target

iSCSI is a widely employed technology used to connect to storage servers. Almost all vendors support iSCSI. There are also open source solutions available, e.g. Openfiler (http://www.openfiler.com), FreeNAS (http://www.freenas.org), and OpenMediaVault (http://www.openmediavault.org/), which is Debian based.

iSCSI targets can be fully configured via the web interface. For details see Storage_Model#LVM Groups with Network Backing.
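If you would rather check from the console what the web interface is going to discover, a plain iSCSI discovery against the portal lists the available targets (the IP address below is a made-up example):

proxmox-ve:~# iscsiadm -m discovery -t sendtargets -p 192.168.1.50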

NFS Share

NFS is a very simple way to integrate shared storage into Proxmox VE and enables live-migration. Storage on NFS shares is similar to the file-on-disk directory method, with the added benefit of shared storage and live migration.

NFS shares can be fully configured via the web interface.

- Click 'Add NFS Share' on the Storage list.
- As storage name use whatever you want, but take care: this name cannot be changed later.
- Give the 'Server' IP address or server name of your NFS server and scan for 'Exports'.
- Select the 'Export'.
- Content: select what you want to store: virtual disk images, ISO images, OpenVZ templates, backup files or containers.
- Click save.
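If the export scan comes back empty, it can help to query the NFS server from the console first; showmount lists everything the server exports (the server name below is a made-up example):

proxmox-ve:~# showmount -e nfs-server.example.com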

Use iSCSI LUN directly

This is possible but not recommended.

Note: Currently iSCSI LUNs are not protected by the Proxmox VE management tools. This means that if you use an iSCSI LUN directly, it still shows up as available, and if you use the same LUN a second time you will lose all data on the LUN.

Directory

Proxmox VE can use local directories or locally mounted shares for storage (virtual disk images, ISO images, or backup files). This is the least flexible and least efficient storage solution, but it is very similar to the NFS method, where images are stored on an existing filesystem as large files.
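To give an idea of what such a definition looks like, a directory storage ends up in /etc/pve/storage.cfg roughly like this sketch (the name, path and content types are made-up examples):

dir: backup
        path /mnt/backup
        content backup,iso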

HowTo mount a Windows (SaMBa) share on Proxmox VE via /etc/fstab

In this scenario, the VM storage functions identically to the directory method. The SMB/CIFS share is mounted as a local mountpoint and appears to Proxmox VE as local storage. To mount a remote Samba share, just follow these steps (adapt them according to your setup):

First, create a target dir, e.g.:

mkdir /mnt/samba1

Then add an entry for the share to /etc/fstab, e.g. by editing it with:

nano /etc/fstab

//windows-or-samba-server-name/sharename /mnt/samba1 cifs username=yourusername,password=yoursecretpassword,domain=yourdomainname 0 0

Next you can activate it with:

mount //windows-or-samba-server-name/sharename
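If the mount worked, the share behaves like any other local filesystem; a quick check is:

df -h /mnt/samba1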

Then define a 'Directory' based storage on the web interface using the newly created directory '/mnt/samba1', for example to store backups.

References

- Adding Second Hard Disk (http://www.debiantutorials.com/add-a-second-hard-disk/)
- Expanding Disk Capacity (http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch27_:_Expanding_Disk_Capacity)
- /etc/fstab options (http://wiki.debian.org/fstab)


