
Managing Storage Volumes

Solaris Volume Manager (SVM), formerly known as Solstice DiskSuite, comes bundled with the Solaris 10 operating system and uses virtual disks, called volumes, to manage physical disks and their associated data. From an application's point of view, a volume is functionally identical to a physical disk; you may also hear volumes referred to as virtual or pseudo devices. SVM is built around four main types of objects: volumes, state databases (and their replicas), disk sets, and hot spare pools. These are described in Table 32.
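
Because a volume is addressed just like a physical disk, it appears under the block and raw metadevice paths /dev/md/dsk/dN and /dev/md/rdsk/dN. As a minimal sketch, assuming a volume named d10 already exists and /export is an available mount point, a volume can hold a file system like any disk slice:

  # newfs /dev/md/rdsk/d10
  # mount /dev/md/dsk/d10 /export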

Table 32. SVM Objects

Volume: A group of physical slices that appears to the system as a single, logical device. A volume is used to increase storage capacity and increase data availability. The various types of volumes are described next.

State database: A database that stores information about the state of the SVM configuration. Each state database is a collection of multiple, replicated database copies; each copy is referred to as a state database replica. SVM cannot operate until you have created the state database and its replicas.

Disk set: A set of disk drives containing state database replicas, volumes, and hot spares that can be shared exclusively, but not at the same time, by multiple hosts. If one host fails, another host can take over the failed host's disk set. This type of fail-over configuration is referred to as a clustered environment.

Hot spare pool: A collection of slices (hot spares) reserved for automatic substitution in case of slice failure in either a submirror or a RAID 5 metadevice. Hot spares are used to increase data availability.


The types of SVM volumes you can create using the Solaris Management Console or the SVM command-line utilities are concatenations, stripes, concatenated stripes, mirrors, and RAID 5 volumes. Each type is described in the following list, and sample commands for creating them appear after the list:

  • Concatenation Concatenations work much the way the Unix cat command is used to concatenate two or more files into one larger file. When partitions are concatenated, the component blocks are addressed sequentially: data is written to the first component until it is full and then moves to the next component. The file system can use the entire concatenation, even though it spreads across multiple disk drives. This type of volume provides no data redundancy, and the entire volume fails if a single slice fails.

  • Stripe A stripe is similar to a concatenation, except that the component blocks are addressed in an interlaced pattern across the slices rather than sequentially; in other words, all disks are accessed at the same time, in parallel. Striping is used to gain performance: when data is striped across disks, multiple disk heads and possibly multiple controllers can access data simultaneously. The interlace is the size of the logical data chunks written to each slice in turn, and choosing an interlace value suited to the I/O workload can increase performance.

  • Concatenated stripe A concatenated stripe is a stripe that has been expanded by concatenating additional striped slices.

  • Mirror A mirror is composed of one or more stripes or concatenations, which are called submirrors. SVM makes duplicate copies of the data located on multiple physical disks and presents one virtual disk to the application: every write to the mirror (a single logical device) is replicated to all of the submirrors, while read operations are distributed among them. This provides redundancy of data in the event of a disk or hardware failure.

  • RAID 5 A RAID 5 volume stripes data across multiple disks to achieve better performance. In addition to striping, RAID 5 protects the data by using parity information; if data goes missing, it can be regenerated from the remaining data and the parity information. A RAID 5 metadevice is composed of multiple slices, with some space allocated to parity information that is distributed across all slices in the metadevice. A striped metadevice performs better than a RAID 5 metadevice, but it doesn't provide data protection (redundancy).
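
At the command line, these volume types are created with the metainit utility. The following commands are a minimal sketch: the volume names (d10, d20, d30, d31, d32, d45) and the disk slices are placeholders for values from your own configuration, and the state database replicas must already exist (see "The State Database," later in this section) before metainit will run.

To create a concatenation of two slices (two stripes of one slice each):

  # metainit d10 2 1 c0t1d0s2 1 c0t2d0s2

To create a stripe across two slices with a 32-Kbyte interlace:

  # metainit d20 1 2 c0t1d0s2 c0t2d0s2 -i 32k

To create a two-way mirror, build two submirrors, create the mirror from the first, and then attach the second:

  # metainit d31 1 1 c0t0d0s0
  # metainit d32 1 1 c1t0d0s0
  # metainit d30 -m d31
  # metattach d30 d32

To create a RAID 5 volume across three slices:

  # metainit d45 -r c1t1d0s2 c2t1d0s2 c3t1d0s2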

RAID (Redundant Array of Inexpensive Disks)

When describing SVM volumes, it's common to state which level of RAID the volume conforms to. Usually these disks are housed together in a cabinet and referred to as an array. There are several RAID levels, each referring to a method of distributing data across disks (and, at most levels, providing data redundancy). The levels are not ratings, but rather classifications of functionality, and different RAID levels offer dramatic differences in performance, data availability, and data integrity, depending on the specific I/O environment. Table 33 describes the various levels of RAID.

Table 33. RAID Levels

RAID 0: Striped disk array without fault tolerance.

RAID 1: Maintains duplicate sets of all data on separate disk drives. Commonly referred to as mirroring.

RAID 2: Data striping with bit interleave. Data is written across each drive in succession, one bit at a time, and checksum data is recorded on a separate drive. This method is very slow for disk writes and is seldom used today because ECC is embedded in almost all modern disk drives.

RAID 3: Data striping with bit interleave and parity checking. Data is striped across a set of disks one byte at a time, and parity is generated and stored on a dedicated disk. The parity information is used to re-create data in the event of a disk failure.

RAID 4: Same as RAID 3, except that data is striped across the set of disks at the block level. Parity is generated and stored on a dedicated disk.

RAID 5: Unlike RAID 3 and 4, where parity is stored on a dedicated disk, both data and parity are striped across the set of disks.

RAID 6: Similar to RAID 5, but with additional parity information written so that data can be recovered even if two drives fail.

RAID 1+0: A combination of RAID 1 (mirroring) for resilience and RAID 0 (striping) for performance. The benefit of this level is that a failed disk takes down only its own mirror, not the entire stripe.


The State Database

The SVM state database contains vital information on the configuration and status of all volumes, hot spares, and disk sets. There are normally multiple copies of the state database, called replicas, and it is recommended that state database replicas be located on different physical disks, or even on different controllers if possible, to provide added resilience.

The state database, together with its replicas, guarantees the integrity of the state database by using a majority consensus algorithm: a majority (half plus one) of the replicas must be available and in agreement before SVM treats the configuration as valid, which is why creating at least three replicas, spread across disks, is recommended.

The state database is created and managed using the metadb command. Table 34 shows the metadb options.

Table 34. metadb Options

-a: Attaches a new database device (replica).

-c number: Specifies the number of state database replicas to be placed on each device. The default is 1.

-d: Deletes all replicas on the specified disk slice.

-f: Used to create the initial state database; also used to force the deletion of the last replica.

-h: Displays a usage message.

-i: Inquires about the status of the replicas.

-k system-file: Specifies a different file where replica information should be written. The default is /kernel/drv/md.conf.

-l length: Specifies the length of each replica. The default is 8192 blocks.

-p: Specifies that the system file (default /kernel/drv/md.conf) be updated with entries from /etc/lvm/mddb.cf.

-s setname: Specifies the name of the disk set to which the metadb command applies.
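
For example, the following commands create the initial state database with two replicas on each of two slices, check the replica status, and then remove the replicas from one of the slices; the slice names c0t0d0s7 and c0t1d0s7 are placeholders for small dedicated slices on your own disks:

  # metadb -a -f -c 2 c0t0d0s7 c0t1d0s7
  # metadb -i
  # metadb -d c0t1d0s7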


