10 August 2016

5.1 Introduction to IP SAN and their components

As discussed in the previous chapter, Fibre Channel (FC) SAN provides high performance and scalability. These advantages come with the burden of the additional cost of buying FC components, such as FC HBAs and FC switches. To overcome this cost burden, there is another type of storage area network called IP SAN.

IP SAN uses the Internet Protocol (IP) for the transport of storage traffic instead of a dedicated Fibre Channel (FC) network. It transports block I/O over an IP-based network. Two primary protocols that leverage IP as the transport mechanism for block-level data transmission are:
  • Internet SCSI (iSCSI)
  • Fibre Channel over IP (FCIP).
iSCSI is a storage networking technology that allows storage resources to be shared over an IP network, whereas FCIP is an IP-based protocol that enables distributed FC SAN islands to be interconnected over an existing IP network. In FCIP, FC frames are encapsulated into the IP payload and transported over an IP network.

IP is a mature technology, and using IP as a storage networking option provides several advantages:
  • Most organizations have an existing IP-based network infrastructure, which could also be used for storage networking and may be a more economical option than deploying a new FC SAN infrastructure.
  • An IP network has no inherent distance limitation, which makes it possible to extend or connect SANs over long distances. With IP SAN, organizations can extend the geographical reach of their storage infrastructure and transfer data distributed across wide locations.
  • Many long-distance disaster recovery (DR) solutions are already leveraging IP-based networks. In addition, many robust and mature security options are available for IP networks.
Typically, a storage system comes with both FC and iSCSI ports. This enables both the native iSCSI connectivity and the FC connectivity in the same environment.

Let's look at how iSCSI-based SANs and FCIP-based SANs work in this chapter.


iSCSI SAN Overview

As mentioned earlier, iSCSI is a storage networking technology that allows storage resources to be shared over an IP network, and most of the resources shared on an iSCSI SAN are disk resources. Just as SCSI messages are mapped onto Fibre Channel in an FC SAN, iSCSI is a mapping of the SCSI protocol over TCP/IP.

iSCSI is an acronym for Internet Small Computer System Interface. It deals with block storage and maps SCSI over traditional TCP/IP. This protocol is mostly used for sharing primary storage such as disk drives, and in some cases it is used in disk backup environments as well.

SCSI commands are encapsulated at each layer of the network stack for eventual transmission over an IP network. The TCP layer takes care of transmission reliability and in-order delivery, whereas the IP layer provides routing across the network.
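If you want to observe this encapsulation on the wire, iSCSI uses the well-known TCP port 3260, so a packet capture filtered on that port shows the SCSI payloads travelling inside TCP/IP. A minimal sketch on a Linux host (the interface name eth0 is a placeholder):

# tcpdump -i eth0 -nn 'tcp port 3260'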


iSCSI SAN
In an iSCSI SAN, initiators issue read/write data requests to targets over an IP network. Targets respond to initiators over the same IP network. All iSCSI communications follow this request-response mechanism, and all requests and responses are passed over the IP network as iSCSI Protocol Data Units (PDUs). The iSCSI PDU is the fundamental unit of communication in an iSCSI SAN.
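On a Linux host running the open-iscsi initiator, the active sessions carrying this request-response traffic, along with their negotiated parameters, can be inspected as sketched below (output varies by environment):

List the active iSCSI sessions:

# iscsiadm -m session

Show detailed session information, including negotiated parameters and attached disks:

# iscsiadm -m session -P 3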


iSCSI performance is influenced by three main components: the best initiator performance can be achieved with dedicated iSCSI HBAs, the best target performance with purpose-built iSCSI storage arrays, and the best network performance with dedicated network switches.

Multiple layers of security should be implemented on an iSCSI SAN, as security is critical in any IT infrastructure. These include CHAP for authentication, discovery domains to restrict device discovery, network isolation, and IPsec for encryption of in-flight data.
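As an illustration, one-way CHAP can be configured on a Linux open-iscsi initiator as sketched below; the target IQN, username, and password are placeholders:

Set the authentication method for the target to CHAP:

# iscsiadm -m node -T iqn.2016-08.com.example:storage.lun1 -o update -n node.session.auth.authmethod -v CHAP

Set the CHAP username and password (placeholder values):

# iscsiadm -m node -T iqn.2016-08.com.example:storage.lun1 -o update -n node.session.auth.username -v chapuser
# iscsiadm -m node -T iqn.2016-08.com.example:storage.lun1 -o update -n node.session.auth.password -v chapsecret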

iSCSI components

iSCSI is an IP-based protocol that establishes and manages connections between hosts and storage systems over IP. iSCSI is an encapsulation of SCSI I/O over IP. 

iSCSI encapsulates SCSI commands and data into IP packets and transports them using TCP/IP. iSCSI is widely adopted for transferring SCSI data over IP between hosts and storage systems and among storage systems. It is relatively inexpensive and easy to implement, especially in environments in which an FC SAN does not exist.
iSCSI SAN Components

Key components for iSCSI communication are:
  • iSCSI initiators such as an iSCSI HBA
  • iSCSI targets such as a storage system with an iSCSI port
  • IP-based network such as a Gigabit Ethernet LAN

An iSCSI initiator sends commands and associated data to a target, and the target returns data and responses to the initiator.
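To illustrate how these components interact, here is a minimal open-iscsi sequence on a Linux initiator; the portal IP address and target IQN are placeholders:

Ask the iSCSI target portal which targets it offers:

# iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260

Log in to one of the discovered targets; its LUNs then appear as local disks:

# iscsiadm -m node -T iqn.2016-08.com.example:storage.lun1 -p 192.168.1.100:3260 --login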

8 August 2016

4.8 Basic troubleshooting tips for Fibre Channel (FC) SAN issues

There are many areas where errors can be made, and misconfiguration can cause a wide range of issues. A thorough understanding of the SAN configuration is needed to troubleshoot any storage-related issue; small mistakes can lead to significant data loss. To troubleshoot any kind of situation, follow these tips as a starting point before moving on to advanced troubleshooting. There may be other tools for troubleshooting, but these basic first steps can save you time.

1) Always take backups of Switch Configurations

Back up switch configurations at regular intervals, in case you are unable to troubleshoot an issue and need to revert to a previous configuration. Such backup files tend to be human-readable flat files that are extremely useful if you need to compare a broken configuration image to a previously known working configuration. Another option is to create a new zone configuration each time you make a change, and maintain previous versions that can be rolled back to if there are problems after committing the change.
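For example, on a Brocade FOS switch the configuration can be saved to a remote server with the configupload command, which prompts for the protocol, server address, user, and file path:

admin> configupload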

2) Troubleshooting Connectivity Issues

Many of the day-to-day issues that you see are connectivity issues, such as hosts not being able to see a new LUN or not being able to see storage or tape devices on the SAN. Connectivity issues are often due to misconfigured zoning. Each vendor provides different tools to configure and troubleshoot zoning, but the following common CLI commands can prove very helpful.

fcping
fcping is an FC version of the popular IP ping tool. fcping allows you to test the following:
  • Whether a device (N_Port) is alive and responding to FC frames
  • End-to-end connectivity between two N_Ports
  • Latency
  • Zoning between two devices
fcping is available on most switch platforms as well as being a CLI tool for most operating systems and some HBAs. It works by sending Extended Link Service (ELS) echo request frames to a destination, which responds with ELS echo response frames. For example:

# fcping 50:01:43:80:05:6c:22:ae

fctrace
Another tool that is modeled on a popular IP networking tool is fctrace. This tool traces a route/path to an N_Port. The following command shows an fctrace example:

# fctrace fcid 0xef0010 vsan 1

3) Things to check while troubleshooting Zoning
  • Are your aliases correct?
  • If using port zoning, have your switch domain IDs changed?
  • If using WWPN zoning, have any of the HBA/WWPNs been changed?
  • Is your zone in the active zone set?
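As a sketch, on a Brocade switch the defined aliases/zones and the currently active zone configuration can be checked as follows; cfgactvshow displays the zone configuration currently enforced in the fabric:

admin> zoneshow

admin> cfgactvshow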

4) Rescan the SCSI Bus if required

After making zoning changes, LUN masking changes, or any other work that changes a LUN/volume presentation to a host, you may be required to rescan the SCSI bus on that host in order to detect the new device. The following commands show how to rescan the SCSI bus on a Windows server using the diskpart tool:

DISKPART> list disk

DISKPART> rescan
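On a Linux host, the equivalent rescan can be done through sysfs or with the rescan-scsi-bus.sh script from the sg3_utils package; host0 below is a placeholder, and the echo must be repeated for each SCSI host adapter:

# echo "- - -" > /sys/class/scsi_host/host0/scan

# rescan-scsi-bus.sh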

If you know that your LUN masking and zoning are correct but the server still does not see the device, it may be necessary to reboot the host.

5) Understanding Switch Configuration Dumps

Each switch vendor also tends to have a built-in command/script that is used to gather configs and logs to be sent to the vendor for their tech support groups to analyze. The output of these commands/scripts can also be useful to you as a storage administrator. Each vendor has its own version:

Cisco   - show tech-support
Brocade - supportshow or supportsave
QLogic  - create support

6) Use Port Error Counters

Switch-based port error counters are an excellent way to identify physical connectivity issues such as
  • Bad cables (bent, kinked, or otherwise damaged cables)
  • Bad connectors (dust on the connectors, loose connectors)
The following example shows the error counters for a physical switch port on a Brocade switch:

admin> portshow 4/15

These port counters can sometimes be misleading. It is perfectly normal to see high counts against some of the values, and it is common to see values increase when a server is rebooted and when similar changes occur. If you are not sure what to look for, check your switch documentation, but also compare the counters to some of your known good ports.

If some counters are increasing on a given port that you are concerned with, but they are not increasing on some known good ports, then you know that you have a problem on that port.

Other commands show similar error counters as well as port throughput. The following porterrshow command shows some encoding out (enc out) and class 3 discard (disc c3) errors on port 0. This may indicate a bad cable, a bad port, or another hardware problem:

admin> porterrshow



4.7 Introduction to VSAN

Virtualizing FC SAN (VSAN)

Virtual SAN (also called virtual fabric) is a logical fabric on an FC SAN, which enables communication among a group of nodes regardless of their physical location in the fabric.

Each SAN can be partitioned into smaller virtual fabrics, generally called VSANs. VSANs are similar to VLANs in Ethernet networking and allow a physical fabric to be partitioned into multiple smaller logical SANs/fabrics. It is possible to route traffic between virtual fabrics by using vendor-specific technologies.

In a VSAN, a group of node ports communicate with each other using a virtual topology defined on the physical SAN. Multiple VSANs may be created on a single physical SAN. Each VSAN behaves and is managed as an independent fabric. Each VSAN has its own fabric services, configuration, and set of FC addresses. Fabric-related configurations in one VSAN do not affect the traffic in another VSAN. A VSAN may be extended across sites, enabling communication among a group of nodes, in either site with a common set of requirements.

VSAN

VSANs improve SAN security, scalability, availability, and manageability. VSANs provide enhanced security by isolating the sensitive data in a VSAN and by restricting the access to the resources located within that VSAN. 

For example, a cloud provider typically isolates the storage pools for multiple cloud services by creating multiple VSANs on an FC SAN. Further, the same FC address can be assigned to nodes in different VSANs, thus increasing the fabric scalability. 

The events causing traffic disruptions in one VSAN are contained within that VSAN and are not propagated to other VSANs. VSANs facilitate an easy, flexible, and less expensive way to manage networks. Configuring VSANs is easier and quicker compared to building separate physical FC SANs for various node groups. To regroup nodes, an administrator simply changes the VSAN configurations without moving nodes and recabling.

Configuring VSAN

To configure VSANs on a fabric, an administrator first needs to define VSANs on fabric switches. Each VSAN is identified with a specific number called VSAN ID. The next step is to assign a VSAN ID to the F_Ports on the switch. By assigning a VSAN ID to an F_Port, the port is included in the VSAN. In this manner, multiple F_Ports can be grouped into a VSAN. 

For example, an administrator may group switch ports (F_Ports) 1 and 2 into VSAN 10 (ID) and ports 6 to 12 into VSAN 20 (ID). If an N_Port connects to an F_Port that belongs to a VSAN, it becomes a member of that VSAN. The switch transfers FC frames between switch ports that belong to the same VSAN.
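As an illustration, this grouping could be configured on a Cisco MDS switch roughly as follows; the VSAN IDs and interface numbers are placeholders:

switch# configure terminal
switch(config)# vsan database
switch(config-vsan-db)# vsan 10
switch(config-vsan-db)# vsan 20
switch(config-vsan-db)# vsan 10 interface fc1/1
switch(config-vsan-db)# vsan 10 interface fc1/2
switch(config-vsan-db)# vsan 20 interface fc1/6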

VSAN versus Zone

Both VSANs and zones enable node ports within a fabric to be logically segmented into groups. But they are not the same, and their purposes differ. There is a hierarchical relationship between them: an administrator first assigns physical ports to VSANs and then configures independent zones within each VSAN. A VSAN has its own independent fabric services, whereas fabric services are not available on a per-zone basis.

VSAN Trunking

VSAN trunking allows network traffic from multiple VSANs to traverse a single ISL. It allows a single ISL to carry traffic from multiple VSANs along the same path. The ISL through which traffic from multiple VSANs travels is called a trunk link.

VSAN trunking enables a single E_Port to be used for sending or receiving traffic from multiple VSANs over a trunk link. The E_Port capable of transferring multiple VSAN traffic is called a trunk port. The sending and receiving switches must have at least one trunk E_Port configured for all of or a subset of the VSANs defined on the switches. 

VSAN trunking eliminates the need to create dedicated ISL(s) for each VSAN. It reduces the number of ISLs when the switches are configured with multiple VSANs. As the number of ISLs between the switches decreases, the number of E_Ports used for the ISLs also reduces. By eliminating needless ISLs, the utilization of the remaining ISLs increases. The complexity of managing the FC SAN is also minimized with a reduced number of ISLs.
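For example, on a Cisco MDS switch a trunking E_Port could be configured along these lines; the interface number and the allowed VSAN IDs are placeholders:

switch(config)# interface fc1/5
switch(config-if)# switchport mode E
switch(config-if)# switchport trunk mode on
switch(config-if)# switchport trunk allowed vsan 10
switch(config-if)# switchport trunk allowed vsan add 20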

VSAN Tagging

VSAN tagging is the process of adding or removing a tag containing VSAN-specific information to or from FC frames. Associated with VSAN trunking, it helps isolate FC frames from multiple VSANs that travel through and share a trunk link. Whenever an FC frame enters an FC switch, it is tagged with a VSAN header indicating the VSAN ID of the ingress switch port (F_Port) before being sent down a trunk link.



The receiving FC switch reads the tag and forwards the frame to the destination port that corresponds to that VSAN ID. The tag is removed once the frame leaves a trunk link to reach an N_Port.



4.6 Fibre Channel (FC) SAN Topologies Overview

Of the FC SAN interconnect devices (hubs, switches, and directors), FC switches and FC directors are the most widely used in any storage area network. These FC switches and directors may be connected in a number of ways to form different fabric topologies, and each topology provides certain benefits.


Fibre Channel (FC) SAN Topologies

FC SAN offers three types of FC switch topologies. They are:

Single-Switch Topology

In a single-switch topology, the fabric consists of only a single switch. Both the compute systems and the storage systems are connected to the same switch. A key advantage of a single-switch fabric is that it does not need to use any switch port for ISLs. Therefore, every switch port is usable for compute system or storage system connectivity. Further, this topology helps eliminate FC frames travelling over the ISLs and consequently eliminates the ISL delays.

Single FC Switch Topology

A typical implementation of a single-switch fabric would involve the deployment of an FC director. FC directors are high-end switches with a high port count. When additional switch ports are needed over time, new ports can be added via add-on line cards (blades) in spare slots available on the director chassis. To some extent, a bladed solution alleviates the port count scalability problem inherent in a single-switch topology.

Mesh Topology

A mesh topology may be one of the two types: full mesh or partial mesh. In a full mesh, every switch is connected to every other switch in the topology. A full mesh topology may be appropriate when the number of switches involved is small. A typical deployment would involve up to four switches or directors, with each of them servicing highly localised compute-to-storage traffic. In a full mesh topology, a maximum of one ISL or hop is required for compute-to-storage traffic. However, with the increase in the number of switches, the number of switch ports used for ISL also increases. This reduces the available switch ports for node connectivity.

Mesh Topology

In a partial mesh topology, not all the switches are connected to every other switch. In this topology, several hops or ISLs may be required for the traffic to reach its destination. Partial mesh offers more scalability than full mesh topology. However, without proper placement of compute and storage systems, traffic management in a partial mesh fabric might be complicated and ISLs could become overloaded due to excessive traffic aggregation.

Core-edge Topology

The core-edge topology has two types of switch tiers: edge and core.

The edge tier is usually composed of switches and offers an inexpensive approach to adding more compute systems in a fabric. The edge-tier switches are not connected to each other. Each switch at the edge tier is attached to a switch at the core tier through ISLs.

The core tier is usually composed of directors that ensure high fabric availability. In addition, typically all traffic must either traverse this tier or terminate at this tier. In this configuration, all storage systems are connected to the core tier, enabling compute-to-storage traffic to traverse only one ISL. Compute systems that require high performance may be connected directly to the core tier and consequently avoid ISL delays.

Core-edge Topology

The core-edge topology increases connectivity within the FC SAN while conserving the overall port utilization. It eliminates the need to connect edge switches to other edge switches over ISLs. Reduction of ISLs can greatly increase the number of node ports that can be connected to the fabric. If fabric expansion is required, then administrators would need to connect additional edge switches to the core. The core of the fabric is also extended by adding more switches or directors at the core tier. Based on the number of core-tier switches, this topology has different variations, such as single-core topology and dual-core topology. To transform a single-core topology to dual-core, new ISLs are created to connect each edge switch to the new core switch in the fabric.

Link Aggregation

Link aggregation combines two or more parallel ISLs into a single logical ISL, called a port-channel, yielding higher throughput than a single ISL could provide. For example, the aggregation of 10 ISLs into a single port-channel provides up to 160 Gb/s throughput assuming the bandwidth of an ISL is 16 Gb/s. 

Link Aggregation

Link aggregation optimizes fabric performance by distributing network traffic across the shared bandwidth of all the ISLs in a port-channel. This allows the network traffic for a pair of node ports to flow through all the available ISLs in the port-channel rather than restricting the traffic to a specific, potentially congested ISL. The number of ISLs in a port-channel can be scaled depending on the application's performance requirements.
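As a sketch, on a Cisco MDS switch two ISLs could be aggregated into a port-channel as follows; the channel number and interface range are placeholders:

switch(config)# interface fc1/1 - 2
switch(config-if)# channel-group 1 force
switch(config-if)# no shutdown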



4.5 Fibre Channel (FC) Zoning Overview

In general, the initiator performs the discovery of all the devices in a SAN environment. If zoning is not configured, the initiator will probe and discover all devices on the SAN fabric. As part of the discovery, every device will also be queried to discover its properties and capabilities. On a large fabric this could take a very long time and be a massive waste of time and resources. So, in order to speed up and smooth this discovery process, the name server was created. Each time a device joins the fabric, it performs a process referred to as a fabric login (FLOGI). During FLOGI, the fabric assigns a 24-bit N_Port ID (FC address) to the device and specifies the class of service to be used.


After a successful FLOGI process, the device then performs a port login (PLOGI) to the name server to register its capabilities. All devices joining a fabric perform a PLOGI to the name server. As part of the PLOGI, the device asks the name server for a list of devices on the fabric, and this is where zoning is helpful. Instead of returning a list of all devices on the fabric, the name server returns only those devices that are zoned to be accessible from the device performing the PLOGI. This process is quicker and more secure than probing the entire SAN for all devices, and it also allows for greater control and more flexible SAN management.

Whenever a change takes place in the name server database, the fabric controller sends a Registered State Change Notification (RSCN) to all the nodes impacted by the change. If zoning is not configured, the fabric controller sends the RSCN to all the nodes in the fabric.

Involving the nodes that are not impacted by the change increases the amount of fabric-management traffic. For a large fabric, the amount of FC traffic generated due to this process can be significant and might impact the compute-to-storage data traffic. Zoning helps to limit the number of RSCNs in a fabric. In the presence of zoning, a fabric sends the RSCN to only those nodes in a zone where the change has occurred.

What is Zoning ?

Zoning is an FC switch function that enables node ports within the fabric to be logically segmented into groups and communicate with each other within the group.

FC Zoning

Zoning also provides access control, along with other access control mechanisms such as LUN masking. Zoning provides control by allowing only the members in the same zone to establish communication with each other. Multiple zones can be grouped together to form a zone set, and this zone set is applied to the fabric. Any newly configured zone needs to be added to the active zone set in order to be applied to the fabric.

Zone members, zones, and zone sets form the hierarchy defined in the zoning process. A zone set is composed of a group of zones that can be activated or deactivated as a single entity in a fabric. Multiple zone sets may be defined in a fabric, but only one zone set can be active at a time. Members are the nodes within the FC SAN that can be included in a zone.

FC switch ports, FC HBA ports, and storage system ports can be members of a zone. A port or node can be a member of multiple zones. Nodes distributed across multiple switches in a switched fabric may also be grouped into the same zone. Zone sets are also referred to as zone configurations.
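For example, on a Brocade switch a single-initiator zone could be created and activated as sketched below; the aliases, WWPNs, and configuration name are placeholders:

Create aliases for the host HBA port and the storage system port:

admin> alicreate "host1_hba0", "10:00:00:00:c9:2b:5f:01"
admin> alicreate "array1_p0", "50:06:01:60:3b:20:41:0a"

Create a zone containing the two aliases and place it in a new zone configuration:

admin> zonecreate "z_host1_array1", "host1_hba0; array1_p0"
admin> cfgcreate "prod_cfg", "z_host1_array1"

Save and activate the configuration so it becomes the active zone set:

admin> cfgsave
admin> cfgenable "prod_cfg"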

Best Practices for Zoning
  • Keep zones small so that troubleshooting is simpler.
  • Have only a single initiator in each zone; more than one initiator per zone is not recommended.
  • To make troubleshooting easier, also keep the number of targets in a zone small.
  • Give meaningful aliases and names to your zones so that they can be easily identified during troubleshooting.
  • Make zone changes with extreme caution to prevent unwanted access to sensitive data.

Zoning can be categorized into three types:

WWN zoning: It uses World Wide Names to define zones. The zone members are the unique WWN addresses of the FC HBA and its targets (storage systems). A major advantage of WWN zoning is its flexibility. If an administrator moves a node to another switch port in the fabric, the node maintains connectivity to its zone partners without having to modify the zone configuration. This is possible because the WWN is static to the node port. WWN zoning is sometimes also referred to as soft zoning.

FC SAN Zoning

Port zoning: It uses the switch port ID to define zones. In port zoning, access to a node is determined by the physical switch port to which the node is connected. The zone members are the port identifiers (switch domain ID and port number) to which the FC HBA and its targets (storage systems) are connected. If a node is moved to another switch port in the fabric, the port zoning must be modified to allow the node, in its new port, to participate in its original zone. However, if an FC HBA or storage system port fails, an administrator simply has to replace the failed device without changing the zoning configuration. Port zoning is sometimes also referred to as hard zoning.

Mixed zoning: It combines the qualities of both WWN zoning and port zoning. Using mixed zoning enables a specific node port to be tied to the WWN of another node.



4.4 Introduction to Fibre Channel (FC) SAN Architecture and port virtualization

The FC SAN physical components, such as network cables, network adapters, and hubs or switches, can be used to design a Fibre Channel storage area network. The different types of FC architecture that can be designed are:
  • Point-to-point
  • Fibre channel arbitrated loop (FC-AL)
  • Fibre channel switched fabric (FC-SW). 

Fibre Channel (FC) SAN Architectures

The configurations of the above three architectures are explained below.

Point-to-point Architecture:  In this configuration, two nodes are connected directly to each other. This configuration provides a dedicated connection for data transmission between nodes. However, the point-to-point configuration offers limited connectivity and scalability and is used in a DAS environment.
Point-to-Point FC


FC-Arbitrated Loop: In this configuration, the devices are attached to a shared loop. Each device contends with other devices to perform I/O operations. The devices on the loop must “arbitrate” to gain control of the loop. At any given time, only one device can perform I/O operations on the loop. Because each device in a loop must wait for its turn to process an I/O request, the overall performance in FC-AL environments is low. 

FC Arbitrated Loop

Further, adding or removing a device results in loop re-initialization, which can cause a momentary pause in loop traffic. As a loop configuration, FC-AL can be implemented without any interconnecting devices by directly connecting each device to two other devices in a ring through cables. However, FC-AL implementations may also use FC hubs through which the arbitrated loop is physically connected in a star topology.

FC-Switch: It involves a single FC switch or a network of FC switches (including FC directors) to interconnect the nodes. It is also referred to as fabric connect. A fabric is a logical space in which all nodes communicate with one another in a network. In a fabric, the link between any two switches is called an interswitch link (ISL). ISLs enable switches to be connected together to form a single, larger fabric. 

FC Switch

They enable the transfer of both storage traffic and fabric management traffic from one switch to another. In FC-SW, nodes do not share a loop; instead, data is transferred through a dedicated path between the nodes. Unlike a loop configuration, an FC-SW configuration provides high scalability. The addition or removal of a node in a switched fabric is minimally disruptive; it does not affect the ongoing traffic between other nodes.

FC switches operate up to FC-2 layer, and each switch supports and assists in providing a rich set of fabric services such as the FC name server, the zoning database and time synchronization service. When a fabric contains more than one switch, these switches are connected through a link known as an inter-switch link.

Inter-switch links (ISLs) connect multiple switches together, allowing them to merge into a common fabric that can be managed from any switch in the fabric. ISLs can also be bonded into logical ISLs that provide the aggregate bandwidth of each component ISL as well as providing load balancing and high-availability features.

Fibre Channel (FC) Port Types

The ports in a switched fabric can be one of the following types
  • N_Port: It is an end point in the fabric. This port is also known as the node port. Typically, it is a compute system port (FC HBA port) or a storage system port that is connected to a switch in a switched fabric.
  • E_Port: It is a port that forms the connection between two FC switches. This port is also known as the expansion port. The E_Port on an FC switch connects to the E_Port of another FC switch in the fabric through ISLs.
  • F_Port: It is a port on a switch that connects to an N_Port. It is also known as a fabric port.
  • G_Port: It is a generic port on a switch that can operate as an E_Port or an F_Port and determines its functionality automatically during initialization.
FC Ports
Common FC port speeds are 2 Gbps, 4 Gbps, 8 Gbps, and 16 Gbps. FC HBA ports, switch ports, and storage array ports can be configured to autonegotiate their speed; autonegotiation is a protocol that allows two devices to agree on a common speed for the link. It is good practice to hard-code the same speed at both ends of the link.
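For instance, on a Brocade switch a port's speed can be hard-coded rather than autonegotiated; the slot/port and speed below are placeholders, and a speed of 0 restores autonegotiation:

admin> portcfgspeed 4/15 8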

World Wide Name (WWN)

Each device in the FC environment is assigned a 64-bit unique identifier called the World Wide Name (WWN). The FC environment uses two types of WWNs.

WWN
  • World Wide Node Name (WWNN) - WWNN is used to physically identify FC network adapters. Unlike an FC address, which is assigned dynamically, a WWN is a static name for each device on an FC network. WWNs are similar to the Media Access Control (MAC) addresses used in IP networking.
  • World Wide Port Name (WWPN) - WWPN is used to physically identify FC adapter ports or node ports. For example, a dual-port FC HBA has one WWNN and two WWPNs.

WWNs are burned into the hardware or assigned through software. Several configuration definitions in an FC SAN use WWNs for identifying storage systems and FC HBAs. WWNs are critical for FC SAN configuration, as each node port has to be registered by its WWN before the FC SAN recognizes it.
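On a Linux host, the WWPN and WWNN of an installed FC HBA port can be read from sysfs; host0 is a placeholder for the FC host adapter number:

# cat /sys/class/fc_host/host0/port_name

# cat /sys/class/fc_host/host0/node_name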

N_Port Virtualization

The proliferation of compute systems in a data centre causes increased use of edge switches in a fabric. As the edge switch population grows, the number of domain IDs may become a concern because of the limitation on the number of domain IDs in a fabric. N_Port Virtualization (NPV) addresses this concern by reducing the number of domain IDs in a fabric. Edge switches supporting NPV do not require a domain ID. They pass traffic between the core switch and the compute systems. NPV-enabled edge switches do not perform any fabric services, and instead forward all fabric activity, such as login and name server registration, to the core switch.

All ports at the NPV edge switches that connect to the core switch are established as NP_Ports (not E_Ports). The NP_Ports connect to an NPIV-enabled core director or switch. If the core director or switch is not NPIV-capable, the NPV edge switches do not function. When a switch enters or exits NPV mode, its configuration is erased and it reboots. Therefore, administrators should take care when enabling or disabling NPV on a switch. A typical core-edge deployment of this kind comprises two edge switches in NPV mode and one core switch (an FC director).
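On Cisco MDS switches, for instance, NPIV is enabled on the core and NPV on the edge roughly as follows; as noted above, enabling NPV erases the edge switch configuration and reboots the switch:

On the NPIV-capable core switch:

core(config)# feature npiv

On the edge switch:

edge(config)# feature npv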

N_Port ID virtualization (NPIV) 

It enables a single N_Port (such as an FC HBA port) to function as multiple virtual N_Ports. Each virtual N_Port has a unique WWPN identity in the FC SAN. This allows a single physical N_Port to obtain multiple FC addresses. 

NPIV
Hypervisors such as VMware ESXi leverage NPIV to create virtual N_Ports on the FC HBA and then assign the virtual N_Ports to virtual machines (VMs). A virtual N_Port acts as a virtual FC HBA port. This enables a VM to directly access LUNs assigned to it.

NPIV enables an administrator to restrict access to specific LUNs to specific VMs using security techniques such as zoning and LUN masking, just as with the assignment of a LUN to a physical compute system. To enable NPIV, both the FC HBAs and the FC switches must support it. The physical FC HBAs on the compute system, using their own WWNs, must have access to all LUNs that are to be accessed by VMs running on that compute system.



4.3 Overview of Fibre Channel (FC) SAN Protocol

Traditionally, compute operating systems have communicated with peripheral devices over channel connections, such as Enterprise Systems Connection (ESCON) and SCSI. Channel technologies provide high levels of performance with low protocol overheads. Such performance is achievable due to the static nature of channels and the high level of hardware and software integration provided by the channel technologies. However, these technologies suffer from inherent limitations in terms of the number of devices that can be connected and the distance between these devices.

In contrast to channel technology, network technologies are more flexible and provide greater distance capabilities. Network connectivity provides greater scalability and uses shared bandwidth for communication. This flexibility results in greater protocol overhead and reduced performance.

The FC architecture represents true channel and network integration and captures some of the benefits of both channel and network technology. FC protocol provides both the channel speed for data transfer with low protocol overhead and the scalability of network technology. FC provides a serial data transfer interface that operates over copper wire and optical fiber.

Introduction to Fibre Channel (FC) Protocol

FC protocol forms the fundamental construct of the FC SAN infrastructure. FC protocol is predominantly an implementation of SCSI over an FC network, in which SCSI data is encapsulated and transported within FC frames.

SCSI over FC overcomes the distance and the scalability limitations associated with traditional direct-attached storage. Storage devices attached to the FC SAN appear as locally attached devices to the operating system (OS) or hypervisor running on the compute system.

The FC protocol defines communication in five layers; FC-3, which is intended for common services across multiple ports, is largely unused in practice and is not described here:
FC Protocol

FC-4 Layer: It is the uppermost layer in the FCP stack. This layer defines the application interfaces and the way Upper Layer Protocols (ULPs) are mapped to the lower FC layers. The FC standard defines several protocols that can operate on the FC-4 layer. Some of the protocols include SCSI, High Performance Parallel Interface (HIPPI) Framing Protocol, ESCON, Asynchronous Transfer Mode (ATM), and IP.

FC-2 Layer: It provides FC addressing, structure, and organization of data (frames, sequences, and exchanges). It also defines fabric services, classes of service, flow control, and routing.

FC-1 Layer: It defines how data is encoded prior to transmission and decoded upon receipt. At the transmitter node, an 8-bit character is encoded into a 10-bit transmission character. This character is then transmitted to the receiver node. At the receiver node, the 10-bit character is passed to the FC-1 layer, which decodes it into the original 8-bit character. FC links with speeds of 10 Gbps and above use 64b/66b encoding. This layer also defines the transmission words, such as the FC frame delimiters, which identify the start and the end of a frame, and the primitive signals that indicate events at a transmitting port. In addition, the FC-1 layer performs link initialization and error recovery.

FC-0 Layer: It is the lowest layer in the FCP stack. This layer defines the physical interface, media, and transmission of bits. The FC-0 specification includes cables, connectors, and optical and electrical parameters for a variety of data rates. The FC transmission can use both electrical and optical media.

FC Addressing

An FC address is dynamically assigned when a node port logs on to the fabric. The FC address has a distinct format.
FC Addressing

The first field of the FC address contains the domain ID of the switch. A domain ID is a unique number provided to each switch in the fabric. Although this is an 8-bit field, there are only 239 available addresses for domain ID because some addresses are deemed special and reserved for fabric services. For example, FFFFFC is reserved for the name server, and FFFFFE is reserved for the fabric login service. 

The area ID is used to identify a group of switch ports used for connecting nodes. An example of a group of ports with common area ID is a port card on the switch. 

The last field, the port ID, identifies the port within the group. Therefore, the maximum possible number of node ports in a switched fabric is calculated as:

239 domains X 256 areas X 256 ports = 15,663,104 ports.

FC Fabric

A fabric is a collection of connected FC switches that share a common set of services: a common name server, a common zoning database, and a common FSPF routing table. Dual redundant fabrics can also be deployed for resiliency. Each fabric is viewed and managed as a single logical entity, and it is common to update the zoning configuration from any switch in the fabric.


FC Fabric
Every FC switch in a fabric needs a domain ID. The domain ID is a number that uniquely identifies the switch in the fabric. Domain IDs can be administratively set or dynamically assigned by the principal switch in a fabric during reconfiguration. A domain ID must be unique within a fabric and must not be reused for another switch.

The principal switch is the switch in a fabric that is responsible for managing the distribution of domain IDs within the fabric.
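On a Brocade switch, for example, the fabricshow command lists the domain ID of every switch in the fabric and marks the principal switch:

admin> fabricshow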

FC Frame Structure

In an FC network, data transport is analogous to a conversation between two people, whereby a frame represents a word, a sequence represents a sentence, and an exchange represents a conversation.

Exchange: An exchange operation enables two node ports to identify and manage a set of information units. Each upper layer protocol (ULP) has its protocol-specific information that must be sent to another port to perform certain operations. This protocol-specific information is called an information unit. The structure of these information units is defined in the FC-4 layer. This unit maps to a sequence. An exchange is composed of one or more sequences.

Sequence: A sequence refers to a contiguous set of frames that are sent from one port to another. A sequence corresponds to an information unit, as defined by the ULP.

Frame: A frame is the fundamental unit of data transfer at FC-2 layer. An FC frame consists of five parts: start of frame (SOF), frame header, data field, cyclic redundancy check (CRC), and end of frame (EOF). 
FC Frame Structure

The SOF and EOF act as delimiters. The frame header is 24 bytes long and contains addressing information for the frame. The data field in an FC frame contains the data payload, up to 2,112 bytes of actual data, in most cases SCSI data. The CRC checksum facilitates error detection for the content of the frame. This checksum verifies data integrity by checking whether the content of the frame is received correctly. The CRC checksum is calculated by the sender before encoding at the FC-1 layer. Similarly, it is calculated by the receiver after decoding at the FC-1 layer.

FC Services

All FC switches, regardless of the manufacturer, provide a common set of services as defined in the FC standards. These services are available at certain predefined addresses. Some of these services are Fabric Login Server, Fabric Controller, Name Server, and Management Server.

Fabric Login Server: It is located at the predefined address of FFFFFE and is used during the initial part of the node’s fabric login process.

Name Server (formally known as Distributed Name Server): It is located at the predefined address FFFFFC and is responsible for name registration and management of node ports. Each switch exchanges its Name Server information with other switches in the fabric to maintain a synchronized, distributed name service.

Fabric Controller: Each switch has a Fabric Controller located at the predefined address FFFFFD. The Fabric Controller provides services to both node ports and other switches. The Fabric Controller is responsible for managing and distributing Registered State Change Notifications (RSCNs) to the node ports registered with the Fabric Controller. If there is a change in the fabric, RSCNs are sent out by a switch to the attached node ports. The Fabric Controller also generates Switch Registered State Change Notifications (SW-RSCNs) to every other domain (switch) in the fabric. These RSCNs keep the name server up-to-date on all switches in the fabric.

Management Server: FFFFFA is the FC address for the Management Server. The Management Server is distributed to every switch within the fabric. The Management Server enables the FC SAN management software to retrieve information and administer the fabric.

Fabric services define three login types:
  • Fabric login (FLOGI): It is performed between an N_Port and an F_Port. To log on to the fabric, a node sends a FLOGI frame with the WWNN and WWPN parameters to the login service at the predefined FC address FFFFFE (Fabric Login Server). In turn, the switch accepts the login and returns an Accept (ACC) frame with the assigned FC address for the node. Immediately after the FLOGI, the N_Port registers itself with the local Name Server on the switch, indicating its WWNN, WWPN, port type, class of service, assigned FC address, and so on. After the N_Port has logged in, it can query the name server database for information about all other logged in ports.
  • Port login (PLOGI): It is performed between two N_Ports to establish a session. The initiator N_Port sends a PLOGI request frame to the target N_Port, which accepts it. The target N_Port returns an ACC to the initiator N_Port. Next, the N_Ports exchange service parameters relevant to the session.
  • Process login (PRLI): It is also performed between two N_Ports. This login relates to the FC-4 ULPs, such as SCSI. If the ULP is SCSI, N_Ports exchange SCSI-related service parameters.
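On a Cisco MDS switch, the results of these logins can be inspected: show flogi database lists the N_Ports that performed FLOGI on the local switch, and show fcns database lists the name server registrations fabric-wide:

switch# show flogi database

switch# show fcns database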

FC Flow Control

Flow control is the process to regulate the data transmission rate between two devices so that a transmitting device does not overflow a receiving device with data. A fabric uses the buffer-to-buffer credit (BB_Credit) mechanism for flow control. The BB_Credit management may occur between any two FC ports.

In a fabric, an FC frame is received and stored in a receive buffer where it is processed by the receiving FC port. If another frame arrives while the receiving port is processing the first frame, a second receive buffer is needed to hold the new frame. If all the receive buffers are filled up with received frames and the transmitting port sends another frame, then the receiving port would not have a receive buffer available to hold the new frame and the frame would be dropped. BB_Credit mechanism ensures that the FC ports do not run out of buffers and do not drop frames.

With the BB_Credit mechanism, the transmitting and receiving ports agree on the number of available buffers (BB_Credits) during the port login process. The credit value is decremented when a frame is transmitted and incremented upon receiving a response. A receiver ready (R_RDY) is sent from the receiving port to the transmitting port for every buffer freed on the receiving side. The transmitting port increments the credit value for each R_RDY it receives. The transmitting port maintains a count of available credits and continues to send frames while the count is greater than zero. If the available credit count reaches zero, further transmission of frames is suspended until the count becomes nonzero again.
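On a Brocade switch, for instance, the per-port buffer credit allocation and usage can be displayed with the portbuffershow command:

admin> portbuffershow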

