
How to Resize the Root Volume on LVM From Livecd


IMPORTANT: HOW TO RESIZE A VG

A volume group (VG) cannot be shrunk directly; you have to remove one or more of the physical volumes (PVs) that make it up, with vgreduce VG_name /dev/partition_or_disk. The catch is that if LVM won't let you remove the physical volumes, you first have to shrink the file system, running a check on it beforehand:

fsck.ext3 -f /dev/VG_name/LV_name

resize2fs /dev/VG_name/LV_name 150G   # the final size we want

Then reduce the size of the volume (or volumes) as needed:

lvreduce -L 160G /dev/VG_name/LV_name

Readjust the file system size so it fills the whole LV again:

resize2fs /dev/VG_name/LV_name

Move the PEs (physical extents) off the PV you want to remove, onto another PV of your choosing (or any PV with enough free space to hold them):

pvmove /dev/partition_or_disk

Only then can you remove the now-empty PV, leaving the VG reduced:

vgreduce VG_name /dev/partition_or_disk
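Putting the steps above together, the whole shrink sequence looks roughly like this. This is a sketch only, with hypothetical names (VG vg0, LV lv0, PV /dev/sdb1) and the example sizes from above; it requires root and an unmounted LV, and a recent backup is strongly advised.

```shell
# Sketch of the full VG-shrink sequence; vg0, lv0 and /dev/sdb1 are
# hypothetical names. Run only against an unmounted LV.
fsck.ext3 -f /dev/vg0/lv0        # check the file system first
resize2fs /dev/vg0/lv0 150G      # shrink the FS below the target LV size
lvreduce -L 160G /dev/vg0/lv0    # shrink the LV, kept larger than the FS
resize2fs /dev/vg0/lv0           # grow the FS back to fill the LV exactly
pvmove /dev/sdb1                 # migrate extents off the PV to be removed
vgreduce vg0 /dev/sdb1           # drop the now-empty PV from the VG
```

Keeping the file system slightly smaller than the LV during the shrink is what makes the sequence safe: lvreduce must never cut below the file system's actual size.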

CREATING AN LV IN A VG

This command creates a volume with the default name LogVolnn, where nn is the next number in the sequence of logical volumes existing on the system:

lvcreate -L 10G VG_name

This creates a volume of size 50G with the assigned name:

lvcreate -L 50G -n new_LV_name VG_name

This creates a volume occupying 60% of the total size of the group, with the assigned name:

lvcreate -l 60%VG -n new_LV_name VG_name


This creates a volume occupying 100% of the free space in the group, with the assigned name:

lvcreate -l 100%FREE -n new_LV_name VG_name

Volumes can also be created on a specific physical volume within the group. This command creates a logical volume with the assigned name in the specified group, on the physical volume given at the end of the command:

lvcreate -L 10G -n new_LV_name VG_name /dev/sdxn

After creating our logical volume, we must create the file system it is going to contain; any of the following works:

mke2fs -j /dev/VG_name/LV_name

mke2fs -t ext3 /dev/VG_name/LV_name

mkfs.ext3 /dev/VG_name/LV_name

INCREASING THE SIZE OF AN LV

To increase the size of a logical volume, use the lvextend command. When you extend the logical volume, you can indicate how much you want to extend the volume, or how large you want it to be after you extend it. The following command extends the logical volume /dev/myvg/homevol to 12 gigabytes.

# lvextend -L12G /dev/myvg/homevol

The following command adds another gigabyte to the logical volume /dev/myvg/homevol.

# lvextend -L +1G /dev/myvg/homevol

As with the lvcreate command, you can use the -l argument of the lvextend command to specify the number of extents by which to increase the size of the logical volume. You can also use this argument to specify a percentage of the volume group, or a percentage of the remaining free space in the volume group. The following command extends the logical volume called testlv to fill all of the unallocated space in the volume group myvg.

# lvextend -l +100%FREE /dev/myvg/testlv

After you have extended the logical volume it is necessary to increase the file system size to match the LV size.

resize2fs /dev/VG_name/LV_name
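Combined, the grow path is just two commands, and unlike shrinking it can often be done while the volume is mounted (a sketch using the same myvg/homevol names as above; requires root):

```shell
lvextend -L +1G /dev/myvg/homevol   # grow the LV by one gigabyte
resize2fs /dev/myvg/homevol         # grow the FS to fill the new LV size
```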


How to resize the Root volume on LVM

LVM is a great utility for resizing volumes on the fly. However, resizing the root (/) volume is tricky, since you need to unmount the volume to do file system checks, and that is impossible while you are using it. I wrote this because I could not find a way to do it anywhere else, so I studied and figured it out on my own, and figured others out there may want to do this too.

My reason was that I needed more space in my /home directories, and my root volume was way too large for what I was using it for. I had made it 30GB in size, was only using 6GB, and it was not going to grow any more. I decided to shrink the root file system from 30GB to 15GB and give the other 15GB to /home. This is how I did it.

The best way to do it is with the Ubuntu Alternate Install Live CD. It has the LVM applications ready to go.

1. Reboot the computer with the CD in and boot off of the CD.

2. Select Rescue a Broken System. When it gets to the point of asking you to mount a file system, tell it not to mount any file system and to run from the install root.

3. From the command line, type lvmdiskscan.

4. This will show you all the disks. You will see your root volume. For these purposes, it is called /dev/VolGrp000/root.

5. Run e2fsck -f /dev/VolGrp000/root.

6. After that completes, run resize2fs /dev/VolGrp000/root 3932160. I went to http://www.unitconversion.org/unit_converter/data-storage.html to convert 15Gb to blocks. Make sure you use Gb and not GB. Also, do not shrink your root file system smaller than what is being used. Make sure you know how much is being used and allow some room for growth. This takes a while to run.

7. Once complete, you can mount the volume and verify that it did shrink: run mount /dev/VolGrp000/root /mnt and then df, and you will see the new size. After this, unmount the volume with umount /mnt.

8. Now we can put the rest of the space back into the volume group. Run lvreduce -L 15G /dev/VolGrp000/root. Word of caution: don't shrink it smaller than what you gave resize2fs, otherwise you will really break your system. This puts the space back into a spare status and you can use it as you wish.

9. Now you can move the free 15GB to the /home logical volume: lvextend -L +15G /dev/VolGrp000/home.

10. Resize the file system to match the size of the logical volume: resize2fs /dev/VolGrp000/home.

11. Reboot your PC and you are done.
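As a sanity check on the block count in step 6: resize2fs is counting file-system blocks here, which are typically 4 KiB for a file system of this size, so 15 GiB converts as follows:

```shell
# 15 GiB expressed as a count of 4 KiB blocks:
expr 15 \* 1024 \* 1024 \* 1024 / 4096
# prints 3932160
```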


How to Shrink an LV Safely

Logical Volume Management is a vast improvement over standard partitioning schemes. Among many other things, it allows you to decrease the size of a volume without recreating it completely. Here’s how.

First, as is always the case when you're modifying disk volumes, partitions, or file systems, you should really have a recent backup. A typo in one of the following commands could easily destroy data. You have been warned!

All of the required steps must be performed on an unmounted volume. If you want to reduce the size of a non-root volume, simply unmount it. For a root volume, you'll have to boot from a CD. Any modern live or rescue CD should work fine. I prefer SystemRescueCD. It includes almost any disk management program you might need. After booting from a CD, you may have to issue:

# vgchange -a y

This makes any logical volumes available to Linux. Most boot CDs will do it automatically at some point during the boot process, but repeating the command won't hurt. Next, force a file system check on the volume in question:

# e2fsck -f /dev/polar/root

Device names for LVM volumes follow the convention: /dev/<volume group>/<logical volume>. In this case, my volume group is named polar and the volume I’m going to shrink is named root. This is a critical step; resizing a file system in an inconsistent state could have disastrous consequences. Next, resize the actual file system:

# resize2fs /dev/polar/root 180G

Replace 180G with about 90% of the size you want the final volume to be. For example, in this case I want the final volume to be 200 gigabytes, so I'll reduce the file system to 180 gigabytes. Why is this necessary? When we reduce the size of the actual volume in the next step, it's critical that the new size is greater than or equal to the size of the file system. After reading the documentation for both resize2fs and lvreduce, I still haven't been able to find out whether they're using standard computer gigabytes (1024^3 bytes) or drive manufacturer gigabytes (1000^3 bytes). In this case, the difference is very important. To be on the safe side, we'll just shrink the file system a bit more than necessary and expand it to use the full space available later. Next, reduce the size of the logical volume:

# lvreduce -L 200G /dev/polar/root

In this case, use the actual size you want the volume to be. Finally, grow the file system so that it uses all available space on the logical volume:


# resize2fs /dev/polar/root

That’s it. Enjoy your newly acquired free space.

Basic Concepts and Terminology:

Figure 1

With LVM, physical volume segments are simply called "physical volumes" (or "PVs").  These PVs are usually entire disks but may be disk partitions.  The PVs in turn are combined to create one or more large virtual disks called "volume groups" (or "VGs").  While you can create many VGs, one may be sufficient.  A VG can grow or shrink by adding or removing PVs from it.  VGs appear to be block devices, similar to other disks such as /dev/hda.  In fact each VG can be referred to by the name "/dev/VG_name".

Once you have one or more volume groups you can create one or more virtual partitions called "logical volumes" (or "LVs").  Note each LV must fit entirely within a single VG.  LVs appear to be block devices similar to disk partitions such as /dev/hda1, with entries named "/dev/VG_name/LV_name".  LVs have a number of parameters that can be set (and later most can be changed) that can affect disk I/O performance, including extent size, chunk size, stripe size and stripe set size, and read-ahead.  These are discussed below.

Finally, you can create any type of filesystem you wish on the logical volume, including as swap space.  Note that some filesystems are more useful with LVM than others.  For example not all filesystems support growing and shrinking.  ext2, ext3, xfs, and reiserfs do support such operations and would be good choices.  The relationship between PVs, VGs, and LVs is illustrated in figure 1.  Table 1 summarizes these acronyms.

Extents:

When creating a volume group from one or more physical volumes, you must specify the size of the "extents" of each of the physical volumes that make up the VG.  Each extent is a single contiguous chunk of disk space, typically 4M in size,


but can range from 8K to 16G in powers of 2 only.  (Extents are analogous to disk blocks or clusters.)  The significance of this is that the size of a logical volume is specified as a number of extents.  Logical volumes can thus grow and shrink in increments of the extent size.  A volume group's extent size cannot be changed after it is set.
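Because an LV is always a whole number of extents, sizing is simple integer arithmetic. For instance, with the typical 4M extent size, a 10G logical volume occupies:

```shell
# Extents needed for a 10 GiB LV with 4 MiB extents (10 * 1024 MiB / 4 MiB):
expr 10 \* 1024 / 4
# prints 2560
```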

The system internally numbers the extents for both logical and physical volumes.  These are called logical extents (or LEs) and physical extents (or PEs), respectively.  When a logical volume is created a mapping is defined between logical extents (which are logically numbered sequentially starting at zero) and physical extents (which are also numbered sequentially).

To provide acceptable performance the extent size must be a multiple of the actual disk cluster size (i.e., the size of the smallest chunk of data that can be accessed in a single disk I/O operation).  In addition some applications (such as Oracle database) have performance that is very sensitive to the extent size.  So setting this correctly also depends on what the storage will be used for, and is considered part of the system administrator's job of tuning the system.

Booting Consideration with LVM:

LVM can be used with RAID.  LVM can be used to hold all filesystems.  However special considerations apply when using LVs for the boot and root filesystems.  This is because the BIOS code in the ROM of the motherboard of your computer must be able to locate and load the kernel.  So if the boot partition were an LV, the BIOS would need to know about PVs, VGs, and LVs, and it probably doesn't.  Unless you are using some custom BIOS you must not make the bootable partition an LV.

The root partition (if it isn't also the boot partition) may be a logical volume.  However this means the kernel must access the root partition before it can load any (e.g., LVM) kernel modules.  Thus the modules for LVM must be compiled into the kernel.  This is rarely the case with standard distributions!  (There is a similar issue with SCSI drivers, as most kernels only compile in the IDE drivers.)  For this reason, as well as allowing a filesystem to be accessed by another operating system (yes there are ext2 drivers available for Windows), some system administrators prefer to make the root filesystem on a regular partition rather than on a logical volume.  Note in this case you can make a single root+boot partition.

The solution to using a logical volume for your root filesystem (as it is with SCSI) is either to build a custom kernel with the correct drivers compiled in, or to make sure the system loads a RAM disk initially, known as initrd, which contains all the correct modules.  This RAM disk then loads the system as normal, and goes away.  Creating a ramdisk on Linux is simple using the mkinitrd script.  Just run this command (as root). You need to know the kernel version, and then you must update grub.conf to use the ramdisk:


/root# KERNEL_VERSION=`uname -r`
/root# mkinitrd -v initrd.$KERNEL_VERSION $KERNEL_VERSION
/root# mv initrd.$KERNEL_VERSION /boot
/root# vi /boot/grub/grub.conf   # or /boot/grub/menu.lst

Before logical volumes can be mounted, the LVM driver must be loaded (or compiled in) to the kernel.  Next all physical volumes on all available drives must be found and examined, in order to determine all the volume groups.  Finally the volume groups must be activated, which causes the kernel to recognize the various block devices.  Only then can the filesystems within logical volumes be mounted. So most systems add code similar to the following to the boot up scripts (typically the rc.sysinit script):

# LVM2 initialization
if [ -x /sbin/lvm.static ]   # Check for LVM v2
then
    # make sure device mapper (LVM2) kernel module is loaded:
    if ! grep -q "device-mapper" /proc/devices 2>/dev/null
    then
        modprobe dm-mod >/dev/null 2>&1
    fi
    # Cleanup and then recreate device mapper control file:
    /bin/rm -f /dev/mapper/control
    echo "mkdmnod" | /sbin/nash --quiet >/dev/null 2>&1
    if [ -c /dev/mapper/control ]   # if LVM2 is loaded:
    then
        # Check for any physical volumes:
        if /sbin/lvm.static vgscan > /dev/null 2>&1
        then
            echo "Setting up Logical Volume Management:"
            # Activate volume groups and re-create all /dev entries:
            /sbin/lvm.static vgchange -a y && /sbin/lvm vgmknodes
        fi
    fi
fi

You may want to edit the file /etc/init.d/halt to deactivate the volume groups at shutdown.  However this shouldn't be necessary when using LVM version 2.  To deactivate volume groups, insert the following near the end of this file (just after the filesystems are mounted read-only and before the comment that says "Now halt or reboot"):

# Deactivate LVM:
if [ -x /sbin/lvm.static ]   # Check for LVM v2
then
    echo "Deactivating LVM volume groups:"
    /sbin/lvm.static vgchange -a n
fi

Like all storage devices data may become corrupted over time.  LVM provides a command "vgck" you can use to periodically check the consistency of your volume groups.  It may pay to add this command to the bootup scripts.


Linear and Striped Mapping:

Let's suppose we have a volume group called VG1, and this volume group has a physical extent size of 4M.  Suppose too this volume group is composed of one disk partition /dev/hda1 and one whole disk /dev/hdb.  These will become physical volumes PV1 and PV2 (more meaningful names for a particular scenario can be given if desired).

The PVs are different sizes and we get 99 (4M) extents in PV1 and 248 extents in PV2, for a total of 347 extents in VG1.  Now any number of LVs of any size can be created from the VG, as long as the total number of extents of all LVs sums to no more than 347.  To make the LVs appear the same as regular disk partitions to the filesystem software, the logical extents are numbered sequentially within the LV.  However some of these LEs may be stored in the PEs on PV1 and others on PV2.  For instance LE[1] of some LV in VG1 could map onto PE[51] of PV1, and thus data written to the first 4M of the LV is in fact written to the 51st extent of PV1.

When creating LVs an administrator can choose between two general strategies for mapping logical extents onto physical extents:

1. Linear mapping will assign a range of PE's to an area of an LV in order (e.g., LE 1–99 map to PV1's PEs, and LE 100–347 map onto PV2's PEs).

2. Striped mapping will interleave the disk blocks of the logical extents across a number of physical volumes.  You can decide the number of PVs to stripe across (the stripe set size), as well as the size of each stripe.

When using striped mapping, all PVs in the same stripe set need to be the same size.  So in our example the LV can be no more than 198 (99 + 99) extents in size.  The remaining extents in PV2 can be used for some other LVs, using linear mapping.

The size of the stripes is independent of the extent size, but must be a power of 2 between 4K and 512K.  (This value n is specified as a power of 2 in this formula: (2^n) × 1024 bytes, where 2 ≤ n ≤ 9.)  The stripe size should also be a multiple of the disk sector size, and finally the extent size should be a multiple of this stripe size.  If you don't do this, you will end up with fragmented extents (as the last bit of space in the extent will be unusable).

Tables 2 and 3 below illustrate the differences between linear and striped mapping.  Suppose you use a stripe size of 4K, an extent size of 12K, and a stripe set of 3 PVs (PVa, PVb, and PVc), each of which is 100 extents.  Then the mapping for an LV (whose extents we'll call LV1, LV2, ...) to PVs (whose extents we'll call PVa1, PVa2, ..., PVb1, PVb2, ..., PVc1, PVc2, ...) might look something like the following.  (In these tables the notation volume_name extent_number.stripe_number is used.)


Example of Linear Mapping

Logical Extents   Physical Extents

LV1 → PVa1

LV2 → PVa2

LV3 → PVa3

LV4 → PVa4

... → ...

LV99 → PVa99

LV100 → PVb1

LV101 → PVb2

... → ...

LV199 → PVb99

LV200 → PVc1

LV201 → PVc2

... → ...

Example of Striped Mapping

Logical Extents   Physical Extents

LV1.1 → PVa1.1

LV1.2 → PVb1.1

LV1.3 → PVc1.1

LV2.1 → PVa1.2

LV2.2 → PVb1.2

LV2.3 → PVc1.2

LV3.1 → PVa1.3

LV3.2 → PVb1.3

LV3.3 → PVc1.3

LV4.1 → PVa2.1

LV4.2 → PVb2.1

LV4.3 → PVc2.1

... → ...

Tables 2 and 3: Linear versus Striped Mapping

In certain situations striping can improve the performance of the logical volume but it can be complex to manage.  However note that striped mapping is useless and will in fact hurt performance, unless the PVs used in the stripe set are from different disks, preferably using different controllers.

(In version 1 of LVM, LVs created using striping cannot be extended past the PVs on which they were originally created.  In the current version (LVM 2) striped LVs can be extended by concatenating another set of devices onto the end of the first set.  However this could lead to a situation where (for example) a single LV ends up as a 2 stripe set, concatenated with a linear (non-striped) set, and further concatenated with a 4 stripe set!)


Snapshots:

A wonderful facility provided by LVM is a snapshot.  This allows an administrator to create a new logical volume which is an exact copy of an existing logical volume (called the original), frozen at some point in time.  This copy is read-only.  Typically this would be used when (for instance) a backup needs to be performed on the logical volume but you don't want to halt a live system that is changing the data.  When done with the snapshot the system administrator can just unmount it and then remove it.  This facility does require that the snapshot be made at a time when the data on the logical volume is in a consistent state, but the time the original LV must be off-line is much less than a normal backup would take to complete.

In addition the copy typically only needs about 20% or less of the disk space of the original.  Essentially, when the snapshot is made nothing is copied.  However as the original changes, the updated disk blocks are first copied to the snapshot disk area before being written with the changes.  The more changes are made to the original, the more disk space the snapshot will need.

When creating logical volumes to be used for snapshots, you must specify the chunk size.  This is the size of the data block copied from the original to the snapshot volume.  For good performance this should be set to the size of the data blocks written by the applications using the original volume.  While this chunk size is independent of both the extent size and the stripe size (if striping is used), it is likely that the disk block (or cluster or page) size, the stripe size, and the chunk size should all be the same.  Note the chunk size must be a power of 2 (like the stripe size), between 4K and 1M.  (The extent size should be a multiple of this size.)

You should remove snapshot volumes as soon as you are finished with them, because they take a copy of all data written to the original volume and this can hurt performance.  In addition, if the snapshot volume fills up errors will occur.

LVM Administration — Commands and Procedures:

The lvm command permits the administrator to perform all LVM operations using this one interactive command, which includes built-in help and will remember command line arguments used from previous commands for the current command.  However each LVM command is also available as a stand-alone command (that can be scripted).  These are discussed briefly below, organized by task.  See the man page for the commands (or use the built-in help of lvm) for complete details.


Plan the Disk Layout:

Disk I/O is often the determining factor in overall system performance.  If your system has multiple disks and controllers, the correct strategy is to have them all used in parallel (that is, simultaneously) as much of the time as possible.  In addition you should aim to place files in such a way as to minimize disk head movement (and thus minimize seek time).

While the performance improvements are real they often aren't significant.  Whether or not to worry about these issues depends upon the current performance, file size, type, and access patterns, and which applications are running.  Some ways to maximize performance are:

To support a large number of users, say on a development system, you can use linear mapping of your logical volumes to a large number of physical volumes on different disks, hopefully using different controllers as well.

On a production system running several different services, the files for each service should be placed on different physical disks (so for example the web server I/O won't interfere with the FTP server I/O).

On a database server the files for the following should be placed on different disks: tables, their indexes, log files, and frequently used tables (that don't get cached in RAM).

If possible it pays to keep the operating system files (i.e., boot, root, /var/log, swap) on one disk and the rest of the system on another, so the system's disk I/O doesn't interfere with the service's I/O.

Large files are often accessed sequentially.  Such access is most efficient when the data blocks of the file are contiguous, at the outer edge of the disk (cylinder zero).  In addition striping across different disks is very helpful in minimizing disk head movement and allowing parallel read/write operations.

Large sequentially accessed files, such as those on a web or FTP server can also benefit from a LVM feature called read ahead.  This allows the next block to be read from disk to RAM at the same time as the current block, useful when the data is contiguous.  (Non-LVM disk read-ahead is also settable in Linux, by changing some parameters in /proc/sys/vm.)

On the other hand, randomly accessed files benefit from a more central placement on a disk, as this will tend to minimize disk seek time.

Using striped mapping can speed up access to large, sequentially accessed files, as more disks (and controllers) are used simultaneously.  Smaller, randomly accessed files don't benefit much from striping.

To avoid striping performance problems you should format one PV per whole disk.  LVM can't tell if two PVs are on the same physical disk or not, so if you create multiple PVs per disk and then create striped LVs, the stripes could be on different partitions on the same disk.  This would result in a decrease in performance rather than an increase.  If you do format a disk with multiple PVs, make sure no two of them are added to the same volume group if striped mapping will be used!

For a production system it is recommended that you create one PV per whole disk, for administrative convenience.  It's easier to keep track of the hardware in a system if each real disk only appears once.  This becomes particularly true if a disk fails.


Other factors affect the disk I/O performance.  Often these other factors overshadow any performance gains from careful disk layout.  One example is the disk scheduler in the kernel.  The 2.6 version of the Linux kernel comes with two different schedulers you can select between.  In addition the kernel supports features such as read-ahead independently of any LVM settings.

Another issue is the disk controller settings.  Sometimes the disk schedules disk I/O regardless of the kernel.  Disks also have configurable settings, including DMA, buffering, etc.  These can be changed with "hdparm" and other utilities.

Other factors that affect performance include bus type and speed, and what other devices are attached to that bus.

Format Physical Volumes (PVs)

To initialize a disk or disk partition as a physical volume you just run the "pvcreate" command on the whole disk.  For example:

pvcreate /dev/hdb

This creates a volume group descriptor at the start of the second IDE disk.  You can initialize several disks and/or partitions at once.  Just list all the disks and partitions on the command line you wish to format as PVs.

Sometimes this procedure may not work correctly, depending on how the disk (or partition) was previously formatted.  If you get an error that LVM can't initialize a disk with a partition table on it, first make sure that the disk you are operating on is the correct one!  Once you have confirmed that /dev/hdb is the disk you really want to reformat, run the following dd command to erase the old partition table:

# Warning DANGEROUS!
# The following commands will destroy the partition table on the
# disk being operated on. Be very sure it is the correct disk!
dd if=/dev/zero of=/dev/hdb bs=1k count=1
blockdev --rereadpt /dev/hdb

For partitions run "pvcreate" on the partition:

pvcreate /dev/hdb1

This creates a volume group descriptor at the start of the /dev/hdb1 partition.  (Note that if using LVM version 1 on PCs with DOS partitions, you must first set the partition type to "0x8e" using fdisk or some other similar program.)
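As a side note, on systems with parted available the partition type can also be flagged non-interactively; this is only a sketch, assuming /dev/hdb1 is partition 1 of /dev/hdb:

```shell
# Flag partition 1 of /dev/hdb as LVM (equivalent to setting type 0x8e on a
# DOS label), then initialize it as a PV. Requires root and a real disk.
parted /dev/hdb set 1 lvm on
pvcreate /dev/hdb1
```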


Create Volume Groups (VGs)

Use the "vgcreate" program to group selected PVs into VGs, and to optionally set the extent size (the default is 4MB).  The following command creates a volume group named "VG1" from two disk partitions from different disks:

vgcreate VG1 /dev/hda1 /dev/hdb1

Modern systems may use "devfs" or some similar system, which creates symlinks in /dev for detected disks.  With such systems names like "/dev/hda1" are actually the symlinks to the real names.  You can use either the symlink or the real name in the LVM commands, however the older version of LVM demanded you use the real device names, such as /dev/ide/host0/bus0/target0/lun0/part1 and /dev/ide/host0/bus0/target1/lun0/part1.

You can also specify the extent size with this command using the "-s size" option, if the 4MB default is not what you want.  The size is a value followed by one of k (for kilobytes), m (megabytes), g (gigabytes), or t (terabytes).  In addition you can put some limits on the number of physical or logical volumes the volume group can have.  You may want to change the extent size for performance, administrative convenience, or to support very large logical volumes.  (Note there may be kernel limits and/or application limits on the size of LVs and files on your system.  For example the Linux 2.4 kernel has a max size of 2TB.)

The "vgcreate" command adds some information to the headers of the included PVs.  However the kernel modules needed to use the VGs as disks aren't loaded yet, and thus the kernel doesn't "see" the VGs you created.  To make the VGs visible you must activate them.  Only active volume groups are subject to changes and allow access to their logical volumes.

To activate a single volume group VG1, use the command:

vgchange -a y /dev/VG1

("-a" is the same as "--available".)  To activate all volume groups on the system use:

vgchange -a y

Create Logical Volumes (LVs)

Creating a logical volume in some VG is the most complex part of LVM setup, due to the many options available.  The basic command syntax is:

lvcreate options size VG_name

Where size is either "-l num_extents" or "-L num_bytes", where num_bytes is a number followed by one of k, m, g, or t.  If this second form is used you may not get an LV


of that exact size, as LVs are always a whole number of extents.  You can also use "--extents" for "-l" or "--size" for "-L".

One of the most common options is "-n name" (you can use "--name" for "-n") to specify a name for the logical volume.  If you don't use this option then the LVs are named automatically "lvol1", "lvol2", "lvol3", etc.

Other options include "-C y" (or "--contiguous y") to create an LV with contiguous allocation, "-i num_stripes -I stripe_size" to create an LV with striped mapping (stripe_size is a number between 2 and 9, as described above), and "-r num_sectors" (or "--readahead num_sectors") to set the amount of read ahead to a value between 2 and 120.

Another form of this command is used to create snapshot volumes.  This will be discussed below.

Some examples of creating logical volumes LV1 and LV2 from the volume group VG1:

# To create a 20GB linear LV named "LV1" for some VG named
# "VG1" and its block device special file "/dev/VG1/LV1":
lvcreate -L 20g -n LV1 VG1

# To create a LV of 100 extents with 2 stripes and stripe size 4 KB:
lvcreate -i 2 -I 4 -l 100 -n LV2 VG1

If you want to create an LV that uses the entire VG, use the "vgdisplay" command to find the "Total PE" size, then use that when running lvcreate.
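
One way to script that (a sketch: the vgdisplay line below is sample text, and VG1/LV1 are hypothetical names) is to pull the "Total PE" value out of the vgdisplay output with awk:

```shell
# Sketch: extract "Total PE" from vgdisplay output. Sample text is used here,
# since running this for real requires a live VG and root privileges.
sample="  Total PE              18803"
total_pe=$(echo "$sample" | awk '/Total PE/ {print $3}')
echo "$total_pe"    # prints 18803
# On a real system you would then run (as root):
#   lvcreate -l "$total_pe" -n LV1 VG1
```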

Once the LVs have been created you can format them with filesystems (or as swap space) using standard tools such as "mkfs".  If the new filesystem can be successfully mounted, a final step is to edit the /etc/fstab file and possibly the rc.sysinit file, so that the volumes are mounted automatically at boot time.  It may also be necessary to setup an initial ramdisk for booting (if the "root" filesystem is built on a logical volume).

Create and Use a Snapshot

To create a snapshot of some existing LV, a form of the lvcreate command is used:

root# lvcreate size option -s -n name existing_LV

where size is as discussed previously, "-s" (or "--snapshot") indicates a snapshot LV, and "-n name" (or "--name name") says to call the snapshot LV name.  The only other option allowed is "-c chunk_size" (or "--chunksize chunk_size"), where chunk_size is specified as a power of 2 in this formula: (2^chunk_size) × 1024 bytes, where 2 ≤ chunk_size ≤ 10.
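
The chunk-size formula can be checked with shell arithmetic (the chunk_size values chosen below are arbitrary examples within the allowed range):

```shell
# (2^chunk_size) * 1024 bytes, for 2 <= chunk_size <= 10
for c in 2 6 10; do
  echo "chunk_size=$c -> $(( (1 << c) * 1024 )) bytes"
done
# chunk_size=2 -> 4096 bytes
# chunk_size=6 -> 65536 bytes
# chunk_size=10 -> 1048576 bytes
```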

Suppose you have a volume group VG1 with a logical volume LV1 you wish to back up using a snapshot.  You can estimate the time the backup will take, and the amount of disk writes that will take place during that time (plus a generous fudge factor), say 300MB.  Then you would run the command:

root# lvcreate -L 300M -s -n backup /dev/VG1/LV1

to create a snapshot logical volume named /dev/VG1/backup which has read-only access to the contents of the original logical volume /dev/VG1/LV1 at the point in time the snapshot was created.  Assuming the original logical volume contains a file system, you can now mount the snapshot logical volume on some (empty) directory, then back up the mounted snapshot while the original filesystem continues to get updated.  When finished, unmount the snapshot and delete it (or it will continue to grow as LV1 changes, and eventually run out of space).

Note: If the snapshot is of an XFS filesystem, the xfs_freeze command should be used to quiesce the filesystem before creating the snapshot (if the filesystem is mounted):

/root# xfs_freeze -f /mnt/point
/root# lvcreate -L 300M -s -n backup /dev/VG1/LV1
/root# xfs_freeze -u /mnt/point
Warning: Full snapshots are automatically disabled

Now create a mount-point (an empty directory) and mount the volume:

/root# mkdir /mnt/dbbackup
/root# mount /dev/VG1/backup /mnt/dbbackup
mount: block device /dev/VG1/backup is write-protected, mounting read-only

If you are using XFS as the filesystem you will need to add the "nouuid" option to the mount command as follows:

/root# mount /dev/VG1/backup /mnt/dbbackup -o nouuid,ro

Do the backup, say by using tar to some "DDS4" or "DAT" tape backup device:

/root# tar -cf /dev/rmt0 /mnt/dbbackup
tar: Removing leading `/' from member names

When the backup has finished you unmount the volume and remove it from the system:

root# umount /mnt/dbbackup
root# lvremove /dev/VG1/backup
lvremove -- do you really want to remove "/dev/VG1/backup"? [y/n]: y
lvremove -- doing automatic backup of volume group "VG1"
lvremove -- logical volume "/dev/VG1/backup" successfully removed

Examining LVM Information

To see information about some VG use:

vgdisplay some_volume_group
vgs some_volume_group

To see information about some PV use the command:

pvdisplay some_disk_or_partition   # e.g., /dev/hda1
pvs some_disk_or_partition

To see information about some LV use:

lvdisplay some-logical-volume
lvs some-logical-volume

The man pages for these commands provide further details.

Grow VGs, LVs, and Filesystems

To grow a filesystem, you must install a new hard disk (unless you have free space available), format it as a PV, add that PV to your VG, then add the space to your LV, and finally use the filesystem tools to grow it.  (Not all filesystems allow, or come with tools for, growing and shrinking!)

VGs are resizable (spelled in Linux as "resizeable") by adding or removing PVs from them.  However by default they are created as fixed in size.  To mark a VG as resizable use the command:

root# vgchange -x y VG1   # or: vgchange --resizeable y VG1

Once this is done add a PV (say "hdb2") to some VG (say "VG1") with the command:

root# vgextend VG1 /dev/hdb2

Next, extend an LV with the "lvextend" command.  This command works almost the same as the "lvcreate" command, but with a few different options.  When specifying how much to increase the size of the LV, you can either specify how much to grow the LV with "+size" or you can specify the new (absolute) size (by omitting the plus sign).  So to extend the LV "LV1" on VG "VG1" by 2GB, use:

root# lvextend -L +2G /dev/VG1/LV1

You could also use:

root# lvresize -L +2G /dev/VG1/LV1

It would be a good idea to use the same mapping as the original LV, or you will have strange performance issues!  Also note this command can be used to extend a snapshot volume if necessary.

After you have extended the logical volume the last step is to increase the file system size.  How you do this depends on the file system you are using.  Most filesystem types come with their own utilities to grow/shrink filesystems, if they allow that.  These utilities usually grow to fill the entire partition or LV, so there is no need to specify the filesystem size.

Some common filesystem utilities are (assume we are expanding the /home filesystem in LV1 on VG1):

EXT2 and EXT3: EXT2/3 filesystems must be unmounted before they can be resized.  The commands to use are:

root# umount /home               # /home is the mount point for /dev/VG1/LV1
root# e2fsck -f /dev/VG1/LV1     # required!
root# resize2fs /dev/VG1/LV1     # grow the FS to fill LV1
root# mount /home

Reiserfs: The ReiserFS file system can be safely resized while mounted.  If unmounted resizing is preferred, first umount the filesystem and mount it again afterward.  For online resizing just use:

root# resize_reiserfs -f /dev/VG1/LV1

XFS: The XFS file systems must be mounted to be resized, and the mount-point is specified rather than the device name:

root# xfs_growfs /home

JFS: Like XFS, the JFS file system must be mounted to be resized and the mount-point is specified rather than the device name.  JFS doesn't have a special utility for resizing, but the mount command has an option that can be used:

root# mount -o remount,resize /home

In some cases the exact number of blocks must be specified.  (A kernel bug in some older Linux versions prevents the LV size from being determined automatically.)  For example to resize a JFS file system that has a 4KB block size (the default) to 4GB, you must use 1M 4KB-blocks.  Now "1M" is 2 raised to the power of 20 (=1048576), so use:

root# mount -o remount,resize=1048576 /home
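
The block count in that example can be re-derived with shell arithmetic:

```shell
# 4 GB target filesystem with 4 KB blocks: the number of blocks to pass
# to the resize= mount option.
target_gb=4
block_kb=4
echo $(( target_gb * 1024 * 1024 / block_kb ))   # prints 1048576
```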

Shrink VGs, LVs, and Filesystems

To shrink a filesystem, you perform the same steps for growing one but in reverse order.  You first shrink the filesystem, then remove the space from the LV (and put it back into the VG).  Other LVs in the same VG can now use that space.  To use it in another VG, you must remove the corresponding PV from the one VG and add it to the other VG.

To shrink an LV you must first shrink the filesystem in that LV.  This can be done with resize2fs for EXT2/3 (which must be done off-line), or resize_reiserfs for ReiserFS (doing this off-line is safer but not required).  There are similar tools for other filesystem types.  Here's an example of shrinking /home by 1 GB:

# df
Filesystem             Size  Used Avail Use% Mounted on
/dev/sda1              145M   16M  122M  12% /boot
/dev/mapper/vg01-lv01   49G  3.7G   42G   9% /home
...
# umount /home
# fsck -f /dev/vg01/lv01   # required!
fsck 1.38 (30-Jun-2005)
e2fsck 1.38 (30-Jun-2005)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/home: 32503/6406144 files (0.3% non-contiguous), 1160448/12845056 blocks
# resize2fs -p /dev/vg01/lv01 48G
resize2fs 1.38 (30-Jun-2005)
Resizing the filesystem on /dev/vg01/lv01 to 12799788 (4k) blocks.
Begin pass 3 (max = 9)
Scanning inode table XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/vg01/lv01 is now 12799788 blocks long.

Currently the XFS and JFS filesystem types do not support shrinking.  If a newer version of these filesystems adds support, mount may be updated to support them.  (And if not, a new tool may be released.)  For such filesystems you can resize them the hard way: back up the data using some archive tool (e.g., cpio, tar, star, or copy the data to some other disk), delete the filesystem in the LV, shrink the LV, recreate the new (smaller) filesystem, and finally restore the data.

Once the filesystem has been shrunk it is time to shrink the logical volume.  You can use either the lvreduce command or the lvresize command.  Continuing from the example above:

# lvresize -L -1G /dev/vg01/lv01
  Rounding up size to full physical extent 96.00 MB
  WARNING: Reducing active logical volume to 48 GB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv01? [y/n]: y
  Reducing logical volume lv01 to 48 GB
  Logical volume lv01 successfully resized
# mount /home

To shrink a VG (say "VG1"), a PV (say "hdc") can be removed from it if none of that PV's extents (the PEs) are in use by any LV.  Run the command:

root# vgreduce VG1 /dev/hdc

You might want to do this to upgrade or replace a worn-out disk.  If the PV is in use by some LV, you must first migrate the data to another available PV within the same VG.  To move all the data from a PV (say "hdb2") to any unused, large enough PV within that VG, use the command:

root# pvmove /dev/hdb2

Delete LVs and VGs

A logical volume (say "LV3" on the volume group "VG2") must be unmounted before it can be removed.  The steps for this are simple:

root# umount /dev/VG2/LV3
root# lvremove /dev/VG2/LV3

Before a volume group (say "VG2") is removed you must first deactivate it.  This is done with the command:

root# vgchange -a n VG2

Now the VG can be removed.  This of course will destroy all LVs within it.  The various PVs that made up that VG can then be re-assigned to some other VGs.  Remove (a non-active) volume group with:

root# vgremove VG2

Summary and Examples

In the following examples assume that LVM2 is installed and up to date, and the boot scripts have been modified already if needed.  The first example includes some commentary and some command output; the second is much shorter but uses the long option names just for fun.

Home directory Example

In this example we will create a logical volume to hold the "/home" partition for a multi-media development system.  The system will use a standard EXT3 filesystem of 60 GB, built using 3 25GB SCSI disks (and no hardware RAID).  Since multi-media uses large files it makes sense to use stripe mapping and read-ahead.  We will call the volume group "vg1" and the logical volume "home":

1. Initialize the disks as PVs:

/root# pvcreate /dev/sda /dev/sdb /dev/sdc

2. Create a volume group, then check its size:

/root# vgcreate vg1 /dev/sda /dev/sdb /dev/sdc
/root# vgdisplay
--- Volume Group ---
VG Name               vg1
VG Access             read/write
VG Status             available/resizable
VG #                  1
MAX LV                256
Cur LV                0
Open LV               0
MAX LV Size           255.99 GB
Max PV                256
Cur PV                3
Act PV                3
VG Size               73.45 GB
PE Size               4 MB
Total PE              18803
Alloc PE / Size       0 / 0
Free PE / Size        18803 / 73.45 GB
VG UUID               nP2PY5-5TOS-hLx0-FDu0-2a6N-f37x-0BME0Y

3. Create a 60 GB logical volume, striped across the 3 PVs with a stripe size of 4 (meaning 2^4 KB = 16 KB):

/root# lvcreate -i 3 -I 4 -L 60G -n home vg1
lvcreate -- rounding 62614560 KB to stripe boundary size 62614560 KB / 18803 PE
lvcreate -- doing automatic backup of "vg1"
lvcreate -- logical volume "/dev/vg1/home" successfully created

4. Create an EXT3 filesystem in the new LV:

/root# mkfs -t ext3 /dev/vg1/home

5. Test the new FS:

/root# mount /dev/vg1/home /mnt
/root# df | grep /mnt

/root# umount /dev/vg1/home

6. Update /etc/fstab with the revised entry for /home.
7. Finally, don't forget to update the system journal.

Oracle Database Example

In this example we will create 2 LVs for an Oracle database.  Oracle manages its own striping and read-ahead/caching, so we won't use these LVM features.  However using hardware RAID is useful, so we will use two RAID 10 disks, hdb and hdc.  The tables will use one logical volume called "tables" on one disk and the indexes and control files will be on a second LV called "indexes" on the other disk.  Both LVs will exist in the VG called "db".  Both filesystems will be XFS, for good performance with large database files:

/root# pvcreate /dev/hdb /dev/hdc
/root# vgcreate db /dev/hdb /dev/hdc
/root# lvcreate --size 200G --name tables db
/root# lvcreate --size 200G --name indexes db
/root# mkfs -t xfs /dev/db/tables
/root# mkfs -t xfs /dev/db/indexes
/root# vi /etc/fstab

/root# vi ~/system-journal

15.2. Adding Swap Space on LVM

Sometimes it is necessary to add more swap space after installation. For example, you may upgrade the amount of RAM in your system from 1 GB to 2 GB, but there is only 2 GB of swap space. It might be advantageous to increase the amount of swap space to 4 GB if you perform memory-intense operations or run applications that require a large amount of memory.

You have three options: create a new swap partition, create a new swap file, or extend swap on an existing LVM2 logical volume. It is recommended that you extend an existing logical volume.

15.2.1. Extending Swap on an LVM2 Logical Volume

By default, Red Hat Enterprise Linux 6 uses all available space during installation. If this is the case with your system, then you must first add a new physical volume to the volume group used by the swap space. For instructions on how to do so, refer to Section 14.2.2, “Adding Unallocated Volumes to a Volume Group”.

After adding additional storage to the swap space's volume group, it is now possible to extend it. To do so, perform the following procedure (assuming /dev/VolGroup00/LogVol01 is the volume you want to extend by 2 GB):

Procedure 15.1. Extending Swap on an LVM2 Logical Volume

1. Disable swapping for the associated logical volume:

# swapoff -v /dev/VolGroup00/LogVol01

2. Resize the LVM2 logical volume by 2 GB:

# lvresize /dev/VolGroup00/LogVol01 -L +2G

3. Format the new swap space:

# mkswap /dev/VolGroup00/LogVol01

4. Enable the extended logical volume:

# swapon -v /dev/VolGroup00/LogVol01

To test if the logical volume was successfully extended, use cat /proc/swaps or free to inspect the swap space.
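
The Size column of /proc/swaps is reported in KB, so the 4 GB result can be checked by parsing a sample of that output. The sample text below is illustrative, not captured from a live system; on a real machine you would pipe `cat /proc/swaps` into the same awk command:

```shell
# Parse a sample /proc/swaps to report swap size in GB (Size is in KB).
sample='Filename                          Type       Size     Used  Priority
/dev/mapper/VolGroup00-LogVol01   partition  4194296  0     -1'
echo "$sample" | awk 'NR>1 {printf "%s %.1f GB\n", $1, $3/1024/1024}'
# /dev/mapper/VolGroup00-LogVol01 4.0 GB
```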

Note: If the OS does not boot after renaming the logical volume that holds the root filesystem, edit /etc/fstab and grub.conf; the latter is found at /boot/grub/grub.conf as long as the bootloader was installed in the same partition as root. Otherwise it will be in its own separate /boot partition, and you will have to boot the OS manually, editing the line root=/dev/VolGroup00/root_volume_name.

Check the installed CentOS version:

cat /etc/redhat-release    # CentOS 5 and earlier

cat /etc/centos-release    # CentOS 6 and later

Check the kernel version:

uname -r

Check the OS architecture:

uname -i

Check the processor architecture:

uname -m

Check the swap size and mount point on the logical volume:

cat /proc/swaps

To be able to ping the Linux machine by hostname from Windows machines, enable the Samba service on the Linux machine and edit the configuration file

/etc/samba/smb.conf

changing the following lines:

workgroup = WORKGROUP_NAME
netbios name = LINUX_MACHINE_NAME

Then restart the Samba services with:

service smb restart

If that does not work, reboot the Linux machine and try again.

To make Linux read a new partition table after creating a new partition, without rebooting, use the command:

partprobe

Install the GNOME desktop on a minimal server:

yum -y groupinstall "X Window System" "GNOME Desktop Environment"

Check installed packages on CentOS:

$ rpm -qa | grep package_name

Boot with xserver stopped

Edit the /etc/inittab file and alter the following line:

id:3:initdefault:

The number you probably have instead of 3 is 5, which is graphical mode.

What groups are included in each option at install time?

Based on /usr/lib/anaconda/installclasses/rhel.py from images/install.img the following options are available:

Desktop: base, core, debugging, directory-client, java-platform, network-file-system-client, server-platform, fonts, print-client, basic-desktop, desktop-debugging, desktop-platform, general-desktop, graphical-admin-tools, input-methods, legacy-x, x11, internet-applications, internet-browser, office-suite, remote-desktop-clients

Minimal Desktop: base, core, debugging, directory-client, java-platform, network-file-system-client, server-platform, fonts, print-client, basic-desktop, desktop-debugging, desktop-platform, input-methods, legacy-x, x11, internet-browser, remote-desktop-clients

Minimal: core

Basic Server: base, console-internet, core, debugging, directory-client, hardware-monitoring, java-platform, large-systems, network-file-system-client, performance, perl-runtime, server-platform

Database Server: base, console-internet, core, debugging, directory-client, hardware-monitoring, java-platform, large-systems, network-file-system-client, performance, perl-runtime, server-platform, mysql-client, mysql, postgresql-client, postgresql, system-admin-tools

Web Server: base, console-internet, core, debugging, directory-client, java-platform, network-file-system-client, performance, perl-runtime, server-platform, web-server, web-servlet, php, turbogears, mysql-client, postgresql-client

Virtual Host: base, console-internet, core, debugging, directory-client, hardware-monitoring, java-platform, large-systems, network-file-system-client, performance, perl-runtime, server-platform, virtualization, virtualization-client, virtualization-platform

Software Development Workstation: base, core, debugging, directory-client, java-platform, network-file-system-client, performance, perl-runtime, server-platform, fonts, print-client, basic-desktop, desktop-debugging, desktop-platform, general-desktop, graphical-admin-tools, input-methods, legacy-x, x11, internet-browser, graphics, emacs, tex, remote-desktop-clients, virtualization, virtualization-client, virtualization-platform, desktop-platform-devel, development, eclipse, server-platform-devel, technical-writing, additional-devel

The contents of each group can be seen by

yum groupinfo <groupname>

Yum tips for taking charge of package management in CentOS

An important aspect of administering a CentOS server is to understand its package management system and its various online repositories. Together they ensure you are always running a secure and updated server.

CentOS uses the Yellow Dog Updater, Modified (yum) package manager to install and update software packages in RPM format from online software repositories. With yum, in addition to downloading and installing new packages, you can also check for available updates and fetch information about available packages.

The /etc/yum.conf file comes preconfigured with options that affect how you download and use RPM packages. Here's a snippet from the file:

[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log

The cachedir variable points to the location where the RPM packages are downloaded.

The keepcache=0 option instructs yum to delete the packages after they've been installed. If you change the value of the keepcache variable to 1, CentOS will keep the packages even after installation. Storing downloaded packages comes in handy when you are setting up a local update server. You download updates from the Internet on the update server, from where they can be picked up by other CentOS machines on your network.

The debuglevel variable can take values from 0 to 10. The default level 2 produces just enough information to indicate whether an operation was a success or a failure. Debug messages are logged to the specified log file.

When you invoke the yum command to install a software package, it checks the list of configured repositories under the /etc/yum.conf file and in files under the /etc/yum.repos.d directory. Although you can add information about repositories in yum's main configuration file, a good practice is to list them under /etc/yum.repos.d in separate files named file.repo, such as CentOS-Base.repo, in order to help manage repos that may be pulling in software from lots of different sources.

The CentOS repositories

CentOS has several official repositories. Using these default repos ensures that your CentOS installation is binary-compatible with Red Hat Enterprise Linux (RHEL), with which CentOS shares a common code base. The following are the official CentOS repositories, configured under the /etc/yum.repos.d/CentOS-Base.repo file:

[base] – This repository is enabled by default and contains the RPM packages that are also supplied on the installation medium. The contents of this repository don't change after a release.

[updates] – This repository, which is also enabled by default, holds all the security, bug fix, and other updates for the packages in the [base] repositories. All the other repositories contain their own updates.

[extras] – This repository is also enabled by default and contains packages that add functionality to the distro. The packages in this repo are written by CentOS developers and have undergone reasonable testing. Also, packages in this repo never conflict with packages in the [base] repo.

[contrib] – This repository is not enabled by default and contains additional software packages contributed by the CentOS community. While the packages in this repo should not conflict with packages in the [base] repo, these packages haven't been tested by CentOS developers.

[centosplus] – This repository is also not enabled by default and contains software contributed by both CentOS users and developers. However, the packages in this repo may upgrade packages from the [base] repo and thus break binary compatibility with upstream versions. Make sure you have a valid reason before enabling this repo.

To enable a repo, edit the CentOS-Base.repo file and scroll to the repository you want to enable. Toggle the repo by changing enabled=0 to enabled=1.

If you wish to use the CentOSPlus repository, the CentOS developers advise that you use it only to pick particular packages. You can browse the contents of the repo online, and when you find a package you need, such as a newer kernel, you can instruct yum to update only that particular package and ignore the rest. To do so, add the following line to the CentOS-Base.repo file under the [centosplus] entry:

includepkgs=kernel*

At the same time you also have to tell yum not to fetch the kernel from the base repos by adding the following line under the [base] and [updates] repo sections:

exclude=kernel kernel-devel kernel-PAE*
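
Put together, the two edits look like this. The sketch below writes an abbreviated demo copy to a throwaway file rather than touching the live /etc/yum.repos.d/CentOS-Base.repo, and the repo sections are heavily trimmed for illustration:

```shell
# Write an abbreviated demo copy of the repo file showing both directives.
cat > /tmp/CentOS-Base.repo.demo <<'EOF'
[base]
exclude=kernel kernel-devel kernel-PAE*

[centosplus]
enabled=1
includepkgs=kernel*
EOF
grep -c kernel /tmp/CentOS-Base.repo.demo   # prints 2 (both kernel directives present)
```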

Third-party repositories

If you are using CentOS on the desktop, chances are you'll need a package that's not in one of the official CentOS repositories, such as the Flash plugin or Google's Chrome web browser. In that case you'll need to enable a third-party repository.

You can use lots of third-party repos to flesh out your installation with all kinds of apps. However, these repos contain packages that aren't approved by the CentOS project. You should add only those repos that you need. Adding unnecessary repositories can slow down the performance of yum and may introduce inconsistency in your system.

The three most popular third-party repos are Extra Packages for Enterprise Linux (EPEL), RepoForge, and ATrpms. The EPEL repo hosts Fedora packages that have been rebuilt for RHEL. The RepoForge repo contains packages for servers, desktops, and software development. On ATrpms you'll find the latest bleeding-edge drivers for peripherals such as graphics and wireless cards.

To ease the installation of a third-party repo, most repo developers produce a special package that contains the GPG key of the repository as well as configuration information for the package manager to enable it to pull software from the repo. The following command fetches and installs such a special RepoForge installation package, making the required configuration changes to install the RepoForge repository for 64-bit packages in CentOS 6:

# rpm -ivh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm

Yum plugins

You can extend yum's usefulness via plugins, which are Python scripts that extend yum's feature set. You can find yum plugins under /usr/lib/yum-plugins, and their configuration files are housed under /etc/yum/pluginconf.d/.

Run yum search yum-plugin to search for the available plugins. Before you can use any plugins, they must be enabled in yum's configuration file /etc/yum.conf. Edit the file and make sure the plugins variable is set to "1".

Plugins are already enabled in CentOS 6, which ships with plugins such as the fastestmirror, priorities, refresh-packagekit, and security plugins enabled by default. What do these plugins do?

If you have more than one mirror listed in a repo configuration, the fastestmirror plugin will connect and time the connection with each mirror, then sort them by fastest to slowest for yum.

If you decide to add a third-party repository, there is a chance that package conflicts may compromise the stability of your CentOS installation. To avoid such a situation you should prioritize the repos with the priorities plugin. Then, if a package is available in multiple repos, it will be installed from the repo with the highest priority. More importantly, packages from repos with a lower priority will never be used to upgrade packages that were installed from a repo with a higher priority. To assign priorities, add a line to the end of a repository configuration, such as priority=2. The priority number can range from 1 to 99, the latter of which is the default priority for repos. The repo with the lowest numerical priority number will have the highest priority. Generally speaking, the default CentOS repos should be 1, while third-party repos should have a lower priority unless you're looking to replace official Fedora packages with those from a third party.
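
The selection rule itself is simple enough to sketch in the shell: among repos offering the same package, the one with the lowest priority number wins. The repo names and priority numbers below are invented for illustration:

```shell
# Pick the winning repo: sort "name priority" pairs numerically by priority,
# lowest number (highest priority) first, and take the top entry's name.
printf 'base 1\nepel 10\nrpmforge 20\n' | sort -k2 -n | head -n 1 | awk '{print $1}'
# prints: base
```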

If you manage a bunch of CentOS machines, you can set up a local update server that fetches updates from the Internet. You can then configure yum in the other machines, preferably with the priorities plugin, to fetch updates from the update server on the network rather than the Internet.

You can use the security plugin to search for and query security updates. For example, running yum --security check-update will show only updates that should be installed for security reasons. You can install all such security updates with yum --security update. The man page of the plugin (man yum-security) lists more interesting uses.

In addition, the blacklist and whiteout plugins are installed but disabled – they were tied to the now deprecated PreUpgrade app to upgrade from one CentOS release to another.

As you can see, yum is a powerful and versatile package manager. You can use its plugins system to speed up upgrades and mix and match software from official CentOS and third-party repos without compromising the stability of your installation. Whether you are managing CentOS desktops or servers, you should take time out to learn yum tricks like these.

Using Yum

If you are new to Yum, here are some tricks that you should master to use the tool effectively.

To install a package, use yum install package-name. If you have the package on your disk, yum --nogpgcheck localinstall package-name will install the package and automatically check and install dependencies from the repos.

Use yum list package-name to search the repos for a particular package. If you don't know the name of the package, you can search for a string in the name, description, and summary of all the packages with yum search keyword. You can also use yum provides filename to find the packages that provide a particular file or library.

If you have configured third-party repos, you can use yum list extras to see a list of packages that were installed from repos outside of the main CentOS repository.

Use yum check-update to check for available updates. While you can update a particular package with yum update package-name, yum update will install all available updates. You can even update a particular group of packages. For example, yum groupupdate Virtualization will update all packages in the Virtualization group. Use yum grouplist to list all predefined groups.

Run yum clean packages regularly to ensure the packages are cleared out from under the /var/cache/yum directory. If yum throws a tantrum while you're installing packages, you can refresh the metadata information about the packages with yum clean metadata, or clear the whole cache with yum clean all.

Adding a New Disk Drive to a CentOS System

One of the first problems encountered by users and system administrators these days is that desktop systems and servers tend to run out of disk space to store data. Fortunately disk space is now one of the cheapest IT commodities. In the next two chapters we will look at the steps necessary to configure CentOS to use the space provided via the installation of a new internal disk drive.

Mounted File Systems or Logical Volumes

There are two ways to configure a new disk drive into a CentOS system. One very simple method is to create one or more Linux partitions on the new drive, create Linux file systems on those partitions and then mount them at specific mount points so that they can be accessed. This is the approach that will be covered in this chapter.

Another approach is to add the new space to an existing volume group or create a new volume group. When CentOS is installed using the default disk configuration layout, a volume group is created and called VolGroup00. Within this volume group are two logical volumes named LogVol00 and LogVol01 that are used to store the / file system and swap partition respectively. By configuring the new disk as part of a volume group we are able to increase the disk space available to the existing logical volumes. Using this approach we are able, therefore, to increase the size of the / file system by allocating some or all of the space on the new disk to LogVol00. This topic will be discussed in detail in Adding a New Disk to a CentOS Volume Group and Logical Volume.

Getting Started

This tutorial assumes that the new physical hard drive has been installed in the system and is visible to the operating system. The best way to verify this is to enter the system BIOS during the boot process and ensure that the BIOS sees the disk drive. Sometimes the BIOS will provide a menu option to scan for new drives. If the BIOS does not see the disk drive, double-check the connectors and jumper settings (if any) on the drive.

Finding the New Hard Drive in CentOS

Assuming the drive is visible to the BIOS, it should automatically be detected by the operating system. Typically, the disk drives in a system are assigned device names beginning with hd or sd, followed by a letter to indicate the device number. For example, the first device might be /dev/sda, the second /dev/sdb and so on.

The following is output from a system with only one physical disk drive:

ls /dev/hd*
/dev/hda  /dev/hda1  /dev/hda2  /dev/hdc

This shows that the disk drive represented by /dev/hda is itself divided into 2 partitions, represented by /dev/hda1 and /dev/hda2. In this instance, /dev/hdc is the CDROM. In all likelihood, when a second disk drive is detected by the system it will be assigned to /dev/hdb.

Another option is to install and run the Hardware Browser. If this is already installed it may be launched by selecting Hardware from the System -> Administration menu. If this option is not available it may be installed as follows:

su -
yum install hwbrowser

In the Hardware Browser scroll down the list of devices until the disk drives become visible. Select the new disk device and click on the Advanced tab to identify the device name. In the following figure the new hard disk has been assigned to /dev/hdb:

The following output is from the same system after a second hard disk drive has been installed:

ls /dev/hd*
/dev/hda  /dev/hda1  /dev/hda2  /dev/hdb  /dev/hdc

As shown above, the new hard drive has been assigned to the device file /dev/hdb. At this point the drive has no partitions shown (because we have yet to create any).

At this point we have a choice of creating partitions and file systems on the new drive and mounting them for access or adding the disk as a physical volume as part of a volume group. To perform the former continue with this chapter, otherwise read Adding a New Disk to a CentOS Volume Group and Logical Volume for details on configuring Logical Volumes.

Creating Linux Partitions

The next step is to create one or more Linux partitions on the new disk drive. This is achieved using the fdisk utility which takes as a command-line argument the device to be partitioned:

fdisk /dev/hdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 47536.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help):

In order to view the current partitions on the disk enter the p command:

Command (m for help): p

Disk /dev/hdb: 24.5 GB, 24533532672 bytes
16 heads, 63 sectors/track, 47536 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

Device Boot Start End Blocks Id System

As we can see from the above fdisk output the disk currently has no partitions because it is a previously unused disk. The next step is to create a new partition on the disk, a task which is performed by entering n (for new partition) and p (for primary partition):

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4):

In this example we only plan to create one partition which will be partition 1. Next we need to specify where the partition will begin and end. Since this is the first partition we need it to start at cylinder 1 and since we want to use the entire disk we specify the last cylinder as the end. Note that if you wish to create multiple partitions you can specify the size of each partition by cylinders, bytes, kilobytes or megabytes.

Partition number (1-4): 1
First cylinder (1-47536, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-47536, default 47536):
Using default value 47536

Now that we have specified the partition we need to write it to the disk using the w command:

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
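The interactive session above can also be scripted by piping the same keystrokes to fdisk. A dry-run sketch (device name assumed; the pipeline is printed, not executed, since it rewrites the partition table):

```shell
# Keystrokes: n (new), p (primary), 1 (partition number), two blank lines to
# accept the default first and last cylinder, w (write the table).
KEYS='n\np\n1\n\n\nw\n'
# Dry run: print the pipeline rather than running it against a real disk.
printf '%s\n' "printf '$KEYS' | fdisk /dev/hdb"
```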

If we now look at the devices again we will see that the new partition is visible as /dev/hdb1:

ls /dev/hd*
/dev/hda  /dev/hda1  /dev/hda2  /dev/hdb  /dev/hdb1  /dev/hdc

The new partition will similarly be visible in the Hardware browser. The next step is to create a file system on our new partition.

Creating a File System on a CentOS Disk Partition

We now have a new disk installed, it is visible to CentOS and we have configured a Linux partition on the disk. The next step is to create a Linux file system on the partition so that the operating system can use it to store files and data. The easiest way to create a file system on a partition is to use the mkfs.ext3 utility which takes as arguments the label and the partition device:

/sbin/mkfs.ext3 -L /userdata /dev/hdb1

mke2fs 1.39 (29-May-2006)
Filesystem label=/userdata
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
2998272 inodes, 5989528 blocks
299476 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
183 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

Mounting a File System

Now that we have created a new file system on the Linux partition of our new disk drive we need to mount it so that it is accessible to the CentOS system and its users. In order to do this we need to create a mount point. A mount point is simply a directory or folder into which the file system will be mounted. For the purposes of this example we will create a /userdata directory to match our file system label (although it is not necessary that these values match):

mkdir /userdata

The file system may then be manually mounted using the mount command:

mount /dev/hdb1 /userdata

Running the mount command with no arguments shows us all currently mounted file systems (including our new file system):

mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/hda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/hdc on /media/CentOS_5.4_Final type iso9660 (ro,noexec,nosuid,nodev,uid=500)
/dev/hdb1 on /userdata type ext3 (rw)

Configuring CentOS to Automatically Mount a File System

In order to set up the system so that the new file system is automatically mounted at boot time an entry needs to be added to the /etc/fstab file.

The following example shows an fstab file configured to automount our /userdata partition:

/dev/VolGroup00/LogVol00 /         ext3   defaults        1 1
LABEL=/boot              /boot     ext3   defaults        1 2
tmpfs                    /dev/shm  tmpfs  defaults        0 0
devpts                   /dev/pts  devpts gid=5,mode=620  0 0
sysfs                    /sys      sysfs  defaults        0 0
proc                     /proc     proc   defaults        0 0
/dev/VolGroup00/LogVol01 swap      swap   defaults        0 0
LABEL=/userdata          /userdata ext3   defaults        1 2

With the appropriate configuration line added to the fstab file, the file system will automatically mount on the next system restart. 
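Each fstab entry must supply all six fields; a quick sanity check can catch a malformed line before the next reboot. A minimal sketch, run here against a sample file rather than the live /etc/fstab:

```shell
# Sample fstab (assumed names); point the awk command at /etc/fstab on a real system.
cat > /tmp/fstab.sample <<'EOF'
/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
LABEL=/userdata /userdata ext3 defaults 1 2
EOF
# Flag any non-blank, non-comment line that does not have exactly six fields.
awk 'NF && $1 !~ /^#/ && NF != 6 { bad++; print "line " NR ": " NF " fields" }
     END { exit bad ? 1 : 0 }' /tmp/fstab.sample && echo "fstab OK"
```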

A simple /etc/fstab, using kernel name descriptors:

/etc/fstab

# <file system> <dir>  <type> <options>        <dump> <pass>
/dev/sda1       /      ext4   defaults,noatime 0      1
/dev/sda2       none   swap   defaults         0      0
/dev/sda3       /home  ext4   defaults,noatime 0      2

Field definitions

The /etc/fstab file contains the following fields separated by a space or tab:

<file system> <dir> <type> <options> <dump> <pass>

<file system> - the partition or storage device to be mounted.
<dir> - the mount point where <file system> is mounted.
<type> - the file system type of the partition or storage device to be mounted.

Many different file systems are supported: ext2, ext3, ext4, btrfs, reiserfs, xfs, jfs, smbfs, iso9660, vfat, ntfs, swap and auto. The auto type lets the mount command guess what type of file system is used. This is useful for optical media (CD/DVD).

<options> - mount options of the filesystem to be used. Note that some mount options are filesystem specific. Some of the most common options are:

auto - Mount automatically at boot, or when the command mount -a is issued.
noauto - Mount only when you tell it to.
exec - Allow execution of binaries on the filesystem.
noexec - Disallow execution of binaries on the filesystem.
ro - Mount the filesystem read-only.
rw - Mount the filesystem read-write.
user - Allow any user to mount the filesystem. This automatically implies noexec, nosuid, nodev, unless overridden.
users - Allow any user in the users group to mount the filesystem.
nouser - Allow only root to mount the filesystem.
owner - Allow the owner of the device to mount.
sync - I/O should be done synchronously.
async - I/O should be done asynchronously.
dev - Interpret block special devices on the filesystem.
nodev - Don't interpret block special devices on the filesystem.
suid - Allow the operation of suid and sgid bits. They are mostly used to allow users on a computer system to execute binary executables with temporarily elevated privileges in order to perform a specific task.
nosuid - Block the operation of suid and sgid bits.
noatime - Don't update inode access times on the filesystem. Can help performance (see atime options).
nodiratime - Do not update directory inode access times on the filesystem. Can help performance (see atime options).
relatime - Update inode access times relative to modify or change time. Access time is only updated if the previous access time was earlier than the current modify or change time. (Similar to noatime, but doesn't break mutt or other applications that need to know if a file has been read since the last time it was modified.) Can help performance (see atime options).
discard - Issue TRIM commands to the underlying block device when blocks are freed. Recommended if the filesystem is located on an SSD.
flush - The vfat option to flush data more often, making copy dialogs or progress bars stay up until all data is written.
nofail - Mount the device when present but ignore it if absent. This prevents errors being reported at boot for removable media.
defaults - the default mount options for the filesystem. The default options for ext4 are: rw, suid, dev, exec, auto, nouser, async.

<dump> - used by the dump utility to decide when to make a backup. Dump checks the entry and uses the number to decide if a file system should be backed up. Possible entries are 0 and 1. If 0, dump will ignore the file system; if 1, dump will make a backup. Most users will not have dump installed, so they should put 0 for the <dump> entry.

<pass> - used by fsck to decide which order filesystems are to be checked. Possible entries are 0, 1 and 2. The root file system should have the highest priority 1 (unless its type is btrfs, in which case this field should be 0) - all other file systems you want to have checked should have a 2. File systems with a value 0 will not be checked by the fsck utility.
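The effect of the <pass> column can be illustrated with a sample file: entries with pass 0 are skipped, and the rest are checked in ascending pass order (root first). A sketch using assumed entry names:

```shell
cat > /tmp/fstab.pass <<'EOF'
/dev/VolGroup00/LogVol01 swap swap defaults 0 0
LABEL=/userdata /userdata ext3 defaults 1 2
/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
EOF
# Print mount points in the order fsck would check them (field 6 = pass).
awk '$6 > 0 { print $6, $2 }' /tmp/fstab.pass | sort -n
```

The swap entry (pass 0) is omitted; / (pass 1) is listed before /userdata (pass 2).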

Why does my sendmail process hang on boot up?

The problem is probably with the network. The most common cause is that Linux cannot look up the name of the machine (if you set up networking with a machine name). The machine pauses waiting for the network timeout of DNS lookups, and will eventually bring up the login prompt. Log in as root and check the usual culprits. If you are directly on a network with a DNS server, check that the file /etc/resolv.conf has the correct values for your machine's DNS server; verify the values with your systems administrator. If you are using Linux on a network without a DNS server (or this box is going to be the DNS server), then you will need to edit the /etc/hosts file to contain the hostname and IP address so that the lookups occur correctly. The format of the /etc/hosts file is like the following example, where the machine is called mymachine:

127.0.0.1 localhost localhost.localdomain
192.168.200.1 mymachine mymachine.mynetwork.net

Where commands take VG or LV names as arguments, the full path name is optional. An LV called "lvol0" in a VG called "vg0" can be specified as "vg0/lvol0". Where a list of VGs is required but is left empty, a list of all VGs will be substituted. Where a list of LVs is required but a VG is given, a list of all the LVs in that VG will be substituted. So lvdisplay vg0 will display all the LVs in "vg0". Tags can also be used - see --addtag below.

One advantage of using the built-in shell is that configuration information gets cached internally between commands.

A file containing a simple script with one command per line can also be given on the command line. The script can also be executed directly if the first line is #! followed by the absolute path of lvm.
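A minimal example of such a script: two LVM commands, one per line, made directly executable via the #! line (the /sbin/lvm path is an assumption; adjust to where lvm lives on your system).

```shell
# Create an lvm command script and make it executable.
cat > /tmp/report.lvm <<'EOF'
#!/sbin/lvm
vgs
lvdisplay vg0
EOF
chmod +x /tmp/report.lvm
head -1 /tmp/report.lvm   # confirm the interpreter line
```

Running /tmp/report.lvm then executes both commands inside a single lvm invocation.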

I. Change HostName From Command Line

1. Use hostname command to Change Hostname

In this example, we’ll change the hostname from dev-server to prod-server.

hostname command by default will display the current hostname as shown below:

# hostname
dev-server

The following will change the hostname to prod-server.

# hostname prod-server

Once the hostname is changed, verify that it has changed the hostname successfully. As you see below, it has changed the hostname to prod-server

# hostname
prod-server

2. Modify the /etc/hosts file

If you have entries in the /etc/hosts file with the old hostname, you should modify it.

For example, the 127.0.0.1 line in the /etc/hosts file will still show the old hostname. In this example, it shows dev-server.

$ cat /etc/hosts
127.0.0.1 dev-server localhost.localdomain localhost

Static IP plus hostname.localdomain plus hostname, for example:

127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.2.10 centos-server.info centos_server

Modify this file, and set the new hostname here. For example, change dev-server to prod-server as shown below.

$ cat /etc/hosts

127.0.0.1 prod-server localhost.localdomain localhost

3. Modify the /etc/sysconfig/network file

The /etc/sysconfig/network file also has an entry for HOSTNAME. Change the value here as shown below.

# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=prod-server

4. Restart the Network

Restart the network service so that any other services using the hostname pick up the change.

# service network restart
Shutting down interface eth0:     [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface:   [ OK ]
Bringing up interface eth0:       [ OK ]

If this is not a production system, you can also reboot the system to make sure the hostname is changed properly, and the system is picking it up properly during startup.

INSTALL VMWARE TOOLS ON CENTOS COMMAND LINE

Mount the image:

mount /dev/cdrom /mnt/cdrom

cd /tmp

Extract the installer:

tar zxf /mnt/cdrom/VMwareTools-5.0.0-<xxxx>.tar.gz

Unmount the image:

umount /dev/cdrom

Run the installer (the tarball extracts into a vmware-tools-distrib directory):

cd vmware-tools-distrib
./vmware-install.pl
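The steps above can be collected into one dry-run sketch. The tarball name keeps the <xxxx> placeholder from the original, and the vmware-tools-distrib extraction directory is an assumption; the script prints each command instead of executing it.

```shell
run() { echo "$@"; }   # dry run: print each step instead of executing it

run mount /dev/cdrom /mnt/cdrom
run tar zxf '/mnt/cdrom/VMwareTools-5.0.0-<xxxx>.tar.gz' -C /tmp
run umount /dev/cdrom
run /tmp/vmware-tools-distrib/vmware-install.pl
```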

I have solved the problem. For the benefit of others, I am documenting the steps I took to make it work.

1. Deleted everything and downloaded fresh copies of the two zip files (basic and sqlplus) from Oracle OTN.

2. Unzipped them into c:\oracle\instantclient_11_2

3. SET ORACLE_HOME=c:\oracle\instantclient_11_2

4. SET TNS_ADMIN=c:\oracle\instantclient_11_2

5. SET PATH=c:\oracle\instantclient_11_2;%PATH%

6. Rebooted the machine.

STATIC IP ADDRESS

## Configure eth0
# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
HWADDR=A4:BA:DB:37:F1:04
TYPE=Ethernet
BOOTPROTO=static
NAME="System eth0"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03
IPADDR=192.168.1.44
NETMASK=255.255.255.0

## Configure Default Gateway
# vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=centos6
GATEWAY=192.168.1.1

## Restart Network Interface
# /etc/init.d/network restart

## Configure DNS Server
# vi /etc/resolv.conf

nameserver 8.8.8.8      # Replace with your nameserver ip
nameserver 192.168.1.1  # Replace with your nameserver ip
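A quick check that an ifcfg file is actually configured for a static address: BOOTPROTO must be static and an IPADDR must be present. A sketch against a sample file (point the path at /etc/sysconfig/network-scripts/ifcfg-eth0 on a real system):

```shell
# Sample ifcfg file with the fields that matter for a static address.
cat > /tmp/ifcfg-eth0 <<'EOF'
DEVICE="eth0"
BOOTPROTO=static
IPADDR=192.168.1.44
NETMASK=255.255.255.0
EOF
grep -q '^BOOTPROTO=static' /tmp/ifcfg-eth0 &&
  grep -q '^IPADDR=' /tmp/ifcfg-eth0 &&
  echo "static config OK"
```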
