31 July 2016

3.3 Overview of RAID 0 and its use cases

RAID 0 offers no data redundancy because it uses neither parity nor mirroring. Instead, it relies purely on striping, which means that data is spread across all the drives in the RAID set, yielding parallelism.

Because data is striped across every disk in the RAID set, RAID 0 utilizes the full storage capacity of the set. To read data, the controller gathers the strips from all the drives. As the number of drives in the RAID set increases, performance improves because more data can be read or written simultaneously.
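The striped layout can be sketched as a simple address calculation. The strip size and drive count below are illustrative assumptions, not values fixed by RAID 0:

```python
# Sketch: mapping a logical block to its drive and strip in a RAID 0 set.
# The strip size and drive count are illustrative assumptions.

STRIP_SIZE_BLOCKS = 128   # blocks per strip (e.g. 64 KiB of 512-byte blocks)
NUM_DRIVES = 4            # drives in the RAID 0 set

def raid0_map(lba: int) -> tuple[int, int]:
    """Return (drive index, block offset on that drive) for a logical block."""
    strip = lba // STRIP_SIZE_BLOCKS          # which strip the block falls in
    offset_in_strip = lba % STRIP_SIZE_BLOCKS
    drive = strip % NUM_DRIVES                # strips rotate across the drives
    stripe = strip // NUM_DRIVES              # row of strips across all drives
    return drive, stripe * STRIP_SIZE_BLOCKS + offset_in_strip
```

Consecutive strips land on different drives, which is why a large sequential I/O can be serviced by all the drives in parallel.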


RAID 0 is a good option for applications that need high I/O throughput. However, it provides no data protection or availability: when any drive in a RAID 0 set fails, all data in the set is lost and is unrecoverable unless some other form of protection (backup or replication) is in place.

There is no capacity overhead with RAID 0, as no space is consumed by mirroring or parity. There is also no performance overhead, as there are no parity calculations to perform.

RAID 0 Use Cases

Choose RAID 0 if
  • The data is 100 percent scratch, losing it is not a problem, and you need as much capacity out of your drives as possible.
  • There is some other form of data protection, such as network RAID or a replica copy, that can be used in the event that you lose the RAID 0 set.
  • You want to create a striped logical volume on top of volumes that are already RAID protected, such as striping with LVM across already-protected volumes on Linux servers.


RAID 0 Advantages

  • RAID 0 offers great performance in both read and write operations; there is no overhead from parity calculations.
  • All storage capacity is usable; there is no capacity overhead.
  • The technology is easy to implement.


RAID 0 is not fault-tolerant. If one drive fails, all data in the RAID 0 array is lost. It should not be used for mission-critical systems.

Previous: 3.2 Types of RAID Levels

Next: 3.4 Overview of RAID 1 and its use cases

3.2 Types of RAID Levels

There are different RAID levels that can be used in storage systems. The choice of RAID level depends on parameters such as application performance, data availability requirements, and cost. RAID levels are defined on the basis of striping, mirroring, and parity techniques. Some RAID levels use a single technique, whereas others use a combination of techniques. Commonly used RAID levels are
  • RAID 0 - Striped set with no fault tolerance
  • RAID 1 - Disk Mirroring
  • RAID 1 + 0 - Nested RAID
  • RAID 3 - Striped set with parallel access and dedicated parity
  • RAID 5 - Striped set with independent disk access and distributed parity
  • RAID 6 - Striped set with independent disk access and dual distributed parity

Comparing RAID Levels

When choosing a RAID type, it is imperative to consider its impact on disk performance and application IOPS. In both mirrored and parity RAID configurations, every write operation translates into more I/O overhead for the disks, which is referred to as the write penalty. The commonly used RAID types today are

RAID 1 offers good performance but comes with a 50 percent capacity overhead. It is a great choice for small-block random workloads and does not suffer from the parity write penalty, since each write simply goes to both disks of the mirrored pair.

RAID 5 is block-level interleaved parity, whereby a single parity block per stripe is rotated among all drives in the RAID set. RAID 5 can tolerate a single drive failure and suffers from the write penalty when performing small-block writes.

RAID 6 is block-level interleaved parity, whereby two discrete parity blocks per stripe are rotated among all drives in the RAID set. RAID 6 can tolerate two drive failures and suffers from the write penalty when performing small-block writes.

RAID Comparison

In a RAID 1 implementation, every write operation must be performed on two disks configured as a mirrored pair, whereas in a RAID 5 implementation, a write operation may manifest as four I/O operations. When performing I/Os to a disk configured with RAID 5, the controller has to read, recalculate, and write a parity segment for every data write operation.

For example, consider a single write operation to a RAID 5 set containing five disks. The parity (P) at the controller is calculated as follows

Cp = C1 + C2 + C3 + C4 (where + denotes a bitwise XOR operation)

Whenever the controller performs a small write I/O, parity must be updated by reading the old parity (Cp old) and the old data (C4 old) from disk, which means two read I/Os. Then, the new parity (Cp new) is computed as follows

Cp new = Cp old – C4 old + C4 new (where + and – are both bitwise XOR operations, since XOR is its own inverse)

After computing the new parity, the controller completes the write I/O by writing the new data and the new parity onto the disks, amounting to two write I/Os. Therefore, the controller performs two disk reads and two disk writes for every write operation, and the write penalty is 4.
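The read-modify-write sequence above can be sketched in a few lines of Python. Because XOR is its own inverse, the "subtraction" and "addition" in the parity formula are both plain XORs; the byte values used here are arbitrary examples:

```python
# Sketch of a RAID 5 small-write parity update (read-modify-write).
# "Old parity - old data + new data" is simply
# old_parity ^ old_data ^ new_data, because XOR is its own inverse.

def raid5_small_write(old_parity: int, old_data: int, new_data: int):
    """Return the new parity and the disk I/O count for one small write."""
    ios = 2                                     # read old data, read old parity
    new_parity = old_parity ^ old_data ^ new_data
    ios += 2                                    # write new data, write new parity
    return new_parity, ios

# Example with arbitrary byte values:
c1, c2, c3, c4 = 0x11, 0x22, 0x33, 0x44
cp = c1 ^ c2 ^ c3 ^ c4                          # Cp = C1 ^ C2 ^ C3 ^ C4

cp_new, penalty = raid5_small_write(cp, c4, 0x55)
assert cp_new == c1 ^ c2 ^ c3 ^ 0x55            # parity stays consistent
assert penalty == 4                             # the RAID 5 write penalty
```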

In RAID 6, which maintains dual parity, a small disk write requires three read operations: the two parity blocks and the old data. After calculating both new parities, the controller performs three write operations: the two parity blocks and the new data. Therefore, in a RAID 6 implementation, the controller performs six I/O operations for each write I/O, and the write penalty is 6.

The above RAID levels and their use cases are described in detail in the next posts.

Less Commonly used RAID Levels

Some RAID levels are not commonly used in today's data centres, either due to complexity or cost. The RAID levels that are rarely or never used are

RAID 2 uses bit-level striping with dedicated error-correction (Hamming code) drives, and it is the only RAID level that can recover from single-bit errors.

RAID 3 performs parity at the byte level and uses a dedicated parity drive. RAID 3 stripes data for performance and uses parity for fault tolerance. Parity information is stored on a dedicated parity drive so that the data can be reconstructed if a drive fails in the RAID set.


RAID 4 is similar to RAID 5. It performs block-level striping but uses a dedicated parity drive. The common issue with a dedicated parity drive is that it can become a bottleneck for write performance, because every block update in any row requires updating the parity on that one drive. One advantage of this RAID type is that the set can be expanded easily by adding more disks and rebuilding only the parity drive.

Previous: 3.1 Redundant Array of Independent Disks (RAID) Overview

Next: 3.3 Overview of RAID 0 and its use cases

28 July 2016

3.1 Redundant Array of Independent Disks (RAID) Overview

RAID is one of the main techniques that made storage systems intelligent. Individual disk drives are expensive, represent single points of failure, and deliver limited IOPS, and as capacities grow, most large data centers experience multiple disk drive failures each day. To overcome these limitations, a technique called RAID was introduced about 25 years ago to keep data centers running smoothly and without interruption. A properly configured RAID set protects data from failed disk drives and improves I/O performance by parallelizing I/O across multiple drives.

What is a RAID ?

RAID stands for Redundant Array of Inexpensive (or Independent) Disks. It is a technique in which multiple disk drives are combined into a logical unit called a RAID set, and data is written in blocks across the disks in the set. RAID protects against data loss when a drive fails through the use of redundant drives and parity. RAID also improves storage system performance, as read and write operations are served simultaneously from multiple disk drives.

RAID is typically implemented by using a specialised hardware controller present either on the compute system or on the storage system. The key functions of a RAID controller are management and control of drive aggregations, translation of I/O requests between logical and physical drives, and data regeneration in the event of drive failures.

A RAID array is an enclosure that contains a number of disk drives and supporting hardware to implement RAID. A subset of disks within a RAID array can be grouped to form logical associations called logical arrays, also known as a RAID set or a RAID group.

There are two methods of RAID implementation, hardware and software, each with its own advantages and disadvantages.

Software RAID

Software RAID uses compute system-based software to provide RAID functions and is implemented at the operating-system level. Software RAID implementations offer cost and simplicity benefits when compared with hardware RAID. However, they have the following limitations
  • Performance: Software RAID affects the overall system performance. This is due to additional CPU cycles required to perform RAID calculations.
  • Supported features: Software RAID does not support all RAID levels.
  • Operating system compatibility: Software RAID is tied to the operating system; hence, upgrades to software RAID or to the operating system should be validated for compatibility. This leads to inflexibility in the data-processing environment.
Hardware RAID

In hardware RAID implementations, a specialised hardware controller is implemented either on the server or on the storage system. Controller card RAID is a server-based hardware RAID implementation in which a specialised RAID controller is installed in the server, and disk drives are connected to it. Manufacturers also integrate RAID controllers on motherboards. A server-based RAID controller is not an efficient solution in a data center environment with a large number of servers.

The external RAID controller is a storage system-based hardware RAID. It acts as an interface between the servers and the disks. It presents storage volumes to the servers, and the servers manage these volumes as physical drives. The key functions of the RAID controllers are as follows
  • Management and control of disk aggregations
  • Translation of I/O requests between logical disks and physical disks
  • Data regeneration in the event of disk failures

Hardware RAID can offer increased performance, faster rebuilds, and hot spares, and can protect OS boot volumes. However, software RAID tends to be more flexible and cheaper.

RAID Techniques

The three different RAID techniques that form the basis for defining various RAID levels are striping, mirroring, and parity. These techniques determine the data availability and performance of a RAID set as well as the relative cost of deploying a RAID level.

RAID Techniques

Striping: Striping is a technique of spreading data across multiple drives (more than one) in order to use the drives in parallel. All the read-write heads work simultaneously, allowing more data to be processed in a shorter time and increasing performance, compared to reading and writing from a single disk. 

Mirroring: Mirroring is a technique whereby the same data is stored on two different disk drives, yielding two copies of the data. If one disk drive fails, the data remains intact on the surviving disk drive, and the controller continues to service the compute system’s data requests from the surviving disk of the mirrored pair. When the failed disk is replaced with a new disk, the controller copies the data from the surviving disk of the mirrored pair. This activity is transparent to the server.

In addition to providing complete data redundancy, mirroring enables fast recovery from disk failure. However, disk mirroring provides only data protection and is not a substitute for data backup. Mirroring constantly captures changes in the data, whereas a backup captures point-in-time images of the data. Mirroring involves duplication of data, i.e., the amount of storage capacity needed is twice the amount of data being stored. Therefore, mirroring is considered expensive and is preferred for mission-critical applications that cannot afford the risk of any data loss.

Mirroring improves read performance because read requests can be serviced by both disks. However, write performance is slightly lower than with a single disk because each write request manifests as two writes on the disk drives. Mirroring does not deliver the same levels of write performance as a striped RAID.
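The behaviour of a mirrored pair can be illustrated with a toy model, where in-memory dictionaries stand in for the physical disks (a sketch for illustration only, not an implementation):

```python
# Sketch of RAID 1 mirroring: every write goes to both disks, and a read
# can be served by either member of the pair.

class Raid1Mirror:
    def __init__(self):
        self.disks = [{}, {}]          # two members of the mirrored pair
        self._next_read = 0            # round-robin reads across the pair

    def write(self, block: int, data: bytes) -> int:
        for disk in self.disks:        # one host write -> two disk writes
            disk[block] = data
        return len(self.disks)         # disk I/Os consumed (write penalty of 2)

    def read(self, block: int) -> bytes:
        disk = self.disks[self._next_read]
        self._next_read = (self._next_read + 1) % len(self.disks)
        return disk[block]

    def fail_and_rebuild(self, failed: int) -> None:
        """Replace a failed member by copying from the surviving disk."""
        surviving = self.disks[1 - failed]
        self.disks[failed] = dict(surviving)
```

Note how a rebuild only ever reads the surviving disk, which is why recovery from a mirror is fast compared to parity reconstruction.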

Parity: Parity is a method to protect striped data from disk drive failure without the cost of mirroring. An additional disk drive is added to hold parity, a mathematical construct that allows re-creation of the missing data. Parity is a redundancy technique that ensures protection of data without maintaining a full set of duplicate data. Calculation of parity is a function of the RAID controller. Parity information can be stored on separate, dedicated disk drives, or distributed across all the drives in a RAID set. 

Now, if one of the data disks fails, the missing value can be calculated by subtracting the sum of the remaining elements from the parity value; because parity is computed as a bitwise XOR, this "subtraction" is itself an XOR operation.
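Because parity is a bitwise XOR, rebuilding a failed drive is just XOR-ing the parity with the surviving strips. A minimal sketch, with arbitrary strip contents:

```python
from functools import reduce

# Sketch: XOR parity over a stripe of data strips, and reconstruction of
# one lost strip from the survivors. Strip contents are arbitrary bytes.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def parity(strips: list) -> bytes:
    """Bitwise XOR of all data strips in the stripe."""
    return reduce(xor_bytes, strips)

def reconstruct(surviving: list, p: bytes) -> bytes:
    """Rebuild the missing strip: XOR of the parity and the surviving strips."""
    return reduce(xor_bytes, surviving, p)

strips = [b"\x10\x20", b"\x0f\x0f", b"\xaa\x55"]
p = parity(strips)

# Lose strips[1]; rebuild it from the parity and the other two strips.
rebuilt = reconstruct([strips[0], strips[2]], p)
assert rebuilt == strips[1]
```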

Compared to mirroring, parity implementation considerably reduces the cost associated with data protection. Consider an example of a parity RAID configuration with four disks where three disks hold data, and the fourth holds the parity information. In this example, parity requires only 33 percent extra disk space compared to mirroring, which requires 100 percent extra disk space. However, there are some disadvantages of using parity. Parity information is generated from data on the data disk. Therefore, parity is recalculated every time there is a change in data. This recalculation is time-consuming and affects the performance of the RAID array.

As a best practice, it is highly recommended to build a RAID set from drives of the same type, speed, and capacity to ensure maximum usable capacity, reliability, and consistency in performance. For example, if drives of different capacities are mixed in a RAID set, only the capacity of the smallest drive is used from each drive in the set to make up the RAID set’s overall capacity. The remaining capacity of the larger drives remains unused. Likewise, mixing higher speed drives with lower speed drives lowers the overall performance of the RAID set.
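The capacity effect of mixing drive sizes can be expressed directly: each member contributes only as much as the smallest drive. A small sketch with arbitrary example sizes:

```python
# Usable capacity of a RAID set built from mixed-capacity drives:
# every member contributes only as much as the smallest drive.

def raid_set_capacity(drive_sizes_gb: list) -> int:
    return min(drive_sizes_gb) * len(drive_sizes_gb)

def wasted_capacity(drive_sizes_gb: list) -> int:
    return sum(drive_sizes_gb) - raid_set_capacity(drive_sizes_gb)

# Mixing one 300 GB drive with three 600 GB drives wastes 900 GB:
sizes = [600, 600, 600, 300]
assert raid_set_capacity(sizes) == 1200
assert wasted_capacity(sizes) == 900
```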

Next: 3.2 Types of RAID Levels

2.7 Accessing data from the Intelligent Storage Systems

Data is stored and accessed by applications using the underlying storage infrastructure. The key components of this infrastructure are the OS (or file system), connectivity, and storage. The server controller card accesses the storage devices using predefined protocols, such as IDE/ATA, SCSI, or Fibre Channel (FC) as discussed in earlier posts.

IDE/ATA and SCSI are popularly used in small and personal computing environments for accessing internal storage. FC and iSCSI protocols are used for accessing data from an external storage device (or subsystem). External storage devices can be connected to the servers directly or through a storage network. When storage is connected directly to the servers, it is referred to as Direct-Attached Storage (DAS).

Using the SAN features and protocols above, data stored in storage systems can be accessed in several ways. An overview of these methods is given below; detailed descriptions will follow in the next posts.

Data Access Methods from Storage Systems

Data can be accessed over a storage network in one of the following ways
  • Block Level
  • File Level
  • Object Level
In general, the application requests data from the file system or operating system by specifying the filename and location. The file system has two components
  • User component
  • Storage component
The user component of the file system performs functions such as hierarchy management, naming, and user access control. The storage component maps the files to the physical location on the storage device. 

The file system maps the file attributes to the logical block address of the data and sends the request to the storage device. The storage device converts the logical block address (LBA) to a cylinder-head-sector (CHS) address and fetches the data.
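The LBA-to-CHS translation is simple arithmetic once the drive geometry is known. The geometry below (16 heads, 63 sectors per track) is an illustrative assumption:

```python
# Sketch: translating a logical block address (LBA) to a
# cylinder-head-sector (CHS) address. The geometry used here
# (16 heads, 63 sectors per track) is an illustrative assumption.

HEADS = 16               # heads (surfaces) per cylinder
SECTORS = 63             # sectors per track (CHS sector numbers start at 1)

def lba_to_chs(lba: int) -> tuple:
    cylinder = lba // (HEADS * SECTORS)
    head = (lba // SECTORS) % HEADS
    sector = (lba % SECTORS) + 1
    return cylinder, head, sector

# LBA 0 is the very first sector: cylinder 0, head 0, sector 1.
assert lba_to_chs(0) == (0, 0, 1)
assert lba_to_chs(63) == (0, 1, 1)       # next track, same cylinder
assert lba_to_chs(16 * 63) == (1, 0, 1)  # next cylinder
```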

Depending on the type of data access method used, the controller of a storage system can be classified as block-based, file-based, object-based, or unified. An intelligent storage system can have all hard disk drives, all solid state drives, or a combination of both. The different types of data access methods are shown in the below figure.

Data Access Methods

Block Level Access

In a block-level access, the file system is created on a server, and data is accessed on a network at the block level. In this case, raw disks or logical volumes are assigned to the servers for creating the file system.

File Level Access

In a file-level access, the file system is created on a separate file server or at the storage side, and the file-level request is sent over a network. Because data is accessed at the file level, this method has higher overhead compared to data accessed at the block level.

Object Level Access

Object-level access is an intelligent evolution, whereby data is accessed over a network in terms of self-contained objects with a unique object identifier. In this type of access, the file system’s user component resides on the server and the storage component resides on the storage system. This type of data access method is mainly used by emerging technologies such as cloud and big data.

Previous: 2.6 What are Intelligent Storage Systems ?

Next: 3.1 Redundant Array of Independent Disks (RAID) Overview

26 July 2016

2.5 Storage Array Architecture

The common architecture of a storage array, regardless of vendor, consists of the components below.
  • Front-End Ports
  • Processors (CPU)
  • Cache Memory
  • Backend
  • Storage Disks

Storage Array

Front-End Ports: These ports connect to the storage network and allow hosts to access and use exported storage resources. Front-end ports are usually FC or Ethernet (iSCSI, FCoE, NFS, SMB). Hosts that wish to use shared resources from the storage array must connect to the network via the same protocol as the storage array. So if you want to access block LUNs over Fibre Channel, you need a Fibre Channel host bus adapter (HBA) installed in the server.

Processors/Controllers: These are mostly Intel CPUs; they run the array firmware and control the front-end ports and the I/O that flows in and out over them. They are also referred to as controllers.

Cache Memory: This is used to increase array performance; in mechanical disk-based arrays it is absolutely critical, since without cache it is impossible to get reasonable performance from spinning disks.

Backend: This may contain more CPUs and ports that connect to the drives, which make up the major part of the backend. Sometimes the same CPUs that control the front end also control the backend.

Ports and Connectivity

Servers in a data center communicate with a storage array through the ports on the front end, often referred to as front-end ports. The number and type of front-end ports depend on the type and size of the storage array. Large enterprise arrays can have hundreds of front-end ports, and depending on the type of storage array, these ports can use FC, SAS, FCoE, or iSCSI protocols.

Servers can connect to a storage array in different ways; depending on the type of connection used (DAS, SAN, NAS), it is possible to have multiple paths between a host/server and a storage array.

Direct Attached Storage

In this type of connectivity, hosts connect directly to the storage array without any SAN switch. With direct-attached storage, there can be only a one-to-one mapping of hosts to storage ports. For example, if the storage array has 6 ports, only 6 hosts can be directly attached to it. If multipathing is required, only 3 servers can be connected directly, because each server uses 2 ports for the connection.

SAN Attached Storage

In this type of connection, a SAN switch sits between the server and the storage array, allowing multiple servers to share the same storage port. The SAN switch is responsible for routing data access to and from hosts and storage.

It is essential that each server connecting to storage has at least two ports on the storage network so that if one fails or is otherwise disconnected, the other can still be used to access storage. Ideally, these ports should be on separate PCIe cards rather than on the same card. Host-based multipath I/O software controls how data is routed or load-balanced across these multiple paths and also deals with failed and flapping paths.

We will look at these techniques in more detail in future posts.

Previous: 2.4 What is a Storage Array

Next: 2.6 What are Intelligent Storage Systems ?

25 July 2016

2.4 What is a Storage Array

A storage array is a type of compute system designed to provide storage to externally attached servers through a storage network. These arrays may comprise a number of mechanical hard disk drives (HDDs), solid-state drives (SSDs), or both. In general, a storage array can have petabytes (PB) of storage space that can be allocated to the servers on the network. Storage arrays connect to host computers over a shared network and typically provide advanced reliability and enhanced functionality.

These arrays are designed to provide optimal cooling airflow, vibration dampening, and clean, protected power, along with providing storage capacity to the servers through the storage network. Storage arrays come in three major flavours.
  • Storage Area Network (SAN)
  • Network Attached Storage (NAS)
  • Unified (SAN and NAS)

SAN storage arrays, also known as block storage arrays, provide connectivity through block-based protocols such as Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), Internet Small Computer System Interface (iSCSI), or Serial Attached SCSI (SAS). Block storage arrays send low-level disk drive access commands called SCSI command descriptor blocks (CDBs), such as READ block, WRITE block, and READ CAPACITY, over the SAN. SAN storage arrays are the most commonly used arrays in the IT storage market.
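As an illustration of what such a low-level block command looks like, the following sketch builds a SCSI READ(10) CDB. The field layout follows the standard READ(10) definition; the LBA and block count are arbitrary examples:

```python
import struct

# Sketch: building a SCSI READ(10) command descriptor block (CDB), the
# kind of low-level block command a SAN array services. READ(10) is a
# 10-byte CDB: opcode 0x28, a flags byte, a 4-byte big-endian LBA, a
# group number, a 2-byte big-endian transfer length, and a control byte.

def read10_cdb(lba: int, num_blocks: int) -> bytes:
    return struct.pack(
        ">BBIBHB",
        0x28,        # operation code: READ(10)
        0,           # flags (RDPROTECT/DPO/FUA), zero here
        lba,         # logical block address (4 bytes, big-endian)
        0,           # group number
        num_blocks,  # transfer length in blocks (2 bytes, big-endian)
        0,           # control byte
    )

cdb = read10_cdb(lba=2048, num_blocks=8)
assert len(cdb) == 10
assert cdb[0] == 0x28
```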

NAS storage arrays, also known as filers, provide connectivity over file-based protocols such as Network File System (NFS) and SMB/CIFS. File-based protocols work at a higher level than low-level block commands: they manipulate files and directories with commands that create, rename, and close files. NAS is generally used to consolidate Windows and Linux file servers; hosts mount exports and shares from the NAS exactly as they would mount an NFS or CIFS share from a Linux or Windows file server. Because NAS protocols operate over shared Ethernet networks, they usually suffer from higher network latency than SAN storage and are more prone to network-related issues. Also, because NAS storage arrays work with files and directories, they have to deal with file permissions, user accounts, Active Directory, Network Information Service (NIS), file locking, and other file-related mechanisms.

Unified storage arrays, also known as multiprotocol arrays, provide shared storage over both block and file protocols. A unified array may contain both SSDs and HDDs to combine the advantages of both disk technologies, and it provides both block-based and file-based storage in a single array. Different vendors implement unified arrays in different ways, but the net result is a network storage array that allows storage resources to be accessed by hosts either as block LUNs over block protocols or as network shares over file-sharing protocols.

However, the main purpose of all types of storage arrays (SAN and NAS) is to pool storage resources and make them available to hosts connected over the storage network. Storage arrays also provide the following advanced features and functionalities
  • Replication
  • Snapshots
  • Offloads
  • High Availability and resiliency
  • High Performance
  • Space efficiency

Types of Storage Arrays

Most storage vendors offer two types of storage arrays: enterprise-class arrays and midrange arrays. In general, enterprise-class arrays offer grid-based architectures, and midrange arrays offer dual-controller architectures.

Dual-controller architectures provide many of the advanced features seen in enterprise-class grid architectures, but at a lower cost. They are, however, limited in scalability and do not cope as well with hardware failures. Grid architectures offer scale-out and deal better with hardware failures, but at additional cost.

Enterprise-Class Storage Arrays offer 
  • Multiple controllers
  • Minimal impact when controllers fail
  • Online non-disruptive upgrades (NDUs)
  • Scale to over 1,000 drives
  • High Availability
  • High Performance
  • Scalable
  • Always on
  • Predictable Performance
  • Expensive

Midrange Storage Arrays offer dual-controller architectures and also provide high
  • Performance
  • Scalability
  • Availability

All-Flash Arrays

These arrays contain only flash drives, and their main purpose is to deliver higher performance than traditional storage arrays. They have front-end ports, usually some DRAM cache, internal buses, backend drives, and the like. They take flash drives, pool them together, carve them into volumes, and protect them via RAID or similar protection techniques. Many even offer snapshot and replication services, thin provisioning, deduplication, and compression. All-flash arrays can be dual-controller, single-controller, or scale-out.

Advantages of Storage Arrays

Storage arrays allow storage administrators to pool storage resources, thereby making more efficient use of both capacity and performance. Some of the advantages of storage arrays are
  • Increased capacity by pooling all the disk storage
  • Increased Performance by pooling all the IOPS
  • Easy and simplified management of disk space
  • Advanced functionalities such as replication, snapshots, thin provisioning, deduplication, compression, high availability, and OS/hypervisor offloads
  • Increased reliability with multiple controllers

Next: 2.5 Storage Array Architecture