Document revision date: 30 March 2001
A major benefit of OpenVMS is its support of a wide range of interconnects and protocols for network configurations and for OpenVMS Cluster System configurations. This chapter describes OpenVMS Alpha support for Fibre Channel as a storage interconnect for single systems and as a shared storage interconnect for multihost OpenVMS Cluster systems.
The following topics are discussed:
For information about multipath support for Fibre Channel configurations, see Chapter 6.
The Fibre Channel interconnect is shown generically in the figures in this chapter. It is represented as a horizontal line to which the node and storage subsystems are connected. Physically, the Fibre Channel interconnect is always radially wired from a switch, as shown in Figure 7-1.
The representation of multiple SCSI disks and SCSI buses in a storage subsystem is also simplified. The multiple disks and SCSI buses, which one or more HSGx controllers present to a host as a logical unit, are shown in the figures as a single logical unit.
For ease of reference, the term HSG is used throughout this chapter to represent both an HSG60 and an HSG80, except where it is important to note any difference, as in Table 7-2. In those instances, HSG60 or HSG80 is used.
Fibre Channel is an ANSI standard network and storage interconnect that offers many advantages over other interconnects. Its most important features are described in Table 7-1.
High-speed transmission: 1.06 gigabits per second, full-duplex, serial interconnect (can simultaneously transmit and receive 100 megabytes of data per second).
Choice of media: OpenVMS support for fiber-optic media.
Long interconnect distances: OpenVMS support for multimode fiber at 500 meters per link and for single-mode fiber up to 100 kilometers per link.
Multiple protocols: OpenVMS support for SCSI-3. Possible future support for IP, 802.3, HIPPI, ATM, IPI, and others.
Numerous topologies: OpenVMS support for switched FC (highly scalable, multiple concurrent communications) and for multiple switches. Possible future support for mixed arbitrated loop and switches.
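The 100-megabyte-per-second figure above follows from the link's line rate and Fibre Channel's 8b/10b encoding, under which each data byte is carried as 10 bits on the wire. A quick sanity check, assuming the standard 1.0625 Gb/s line rate:

```python
# Payload throughput of a 1 Gb Fibre Channel link.
# Assumptions: standard 1.0625 Gb/s line rate; 8b/10b encoding,
# so each data byte occupies 10 bits on the wire.
LINE_RATE_BPS = 1.0625e9        # line rate in bits per second
WIRE_BITS_PER_DATA_BYTE = 10    # 8b/10b encoding overhead

throughput_mbytes = LINE_RATE_BPS / WIRE_BITS_PER_DATA_BYTE / 1e6
print(f"{throughput_mbytes:.2f} MB/s per direction")
```

Frame headers and protocol overhead reduce the delivered rate from this roughly 106 MB/s ceiling toward the 100 MB/s cited above; full-duplex operation doubles the aggregate.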
Currently, the OpenVMS implementation supports:
Figure 7-1 shows a logical view of a switched topology. The FC nodes are either Alpha hosts, or storage subsystems. Each link from a node to the switch is a dedicated FC connection. The switch provides store-and-forward packet delivery between pairs of nodes. Concurrent communication between disjoint pairs of nodes is supported by the switch.
Figure 7-1 Switched Topology (Logical View)
Figure 7-2 shows a physical view of a Fibre Channel switched topology. The configuration in Figure 7-2 is simplified for clarity. Typical configurations will have multiple Fibre Channel interconnects for high availability, as shown in Section 7.3.4.
Figure 7-2 Switched Topology (Physical View)
OpenVMS Alpha supports the Fibre Channel devices listed in Table 7-2. Note that Fibre Channel hardware names typically use the letter G to designate hardware that is specific to Fibre Channel. Fibre Channel configurations with other Fibre Channel equipment are not supported. To determine the required minimum versions of the operating system and firmware, see the release notes.
Compaq recommends that all OpenVMS Fibre Channel configurations use the latest update kit for the OpenVMS version they are running:
The root name of these kits is FIBRE_SCSI, a change from the earlier naming convention of FIBRECHAN. The kits are available from the following web site:
AlphaServer 800, 1000A, 1200, 4000, 4100, 8200, 8400, DS10, DS20, DS20E, ES40, GS60, GS60E, GS80, GS140, GS160, and GS320: Alpha host.
HSG80: Fibre Channel controller module with two Fibre Channel host ports and support for six SCSI drive buses.
HSG60: Fibre Channel controller module with two Fibre Channel host ports and support for two SCSI buses.
MDR: Fibre Channel Modular Data Router, a bridge to a SCSI tape or a SCSI tape library. The MDR must be connected to a Fibre Channel switch; it cannot be connected directly to an Alpha system.
KGPSA-BC, KGPSA-CA: OpenVMS Alpha PCI to multimode Fibre Channel host adapters.
DSGGA-AA or -AB and DSGGB-AA or -AB: 8-port or 16-port Fibre Channel switch.
VLGBICs: Very long distance Gigabit interface converters (GBICs), used in long-distance configurations with single-mode fiber-optic cables. The order number for a pair of VLGBICs is 169887-B21.
Single-mode fiber-optic cable: Single-mode fiber-optic cable up to 100 kilometers in length can be used.
BNGBX-nn: Multimode fiber-optic cable (nn denotes length in meters).
OpenVMS supports the Fibre Channel SAN configurations described in the latest Compaq StorageWorks Heterogeneous Open SAN Design Reference Guide (order number: AA-RMPNA-TE) and the Data Replication Manager (DRM) user documentation. This includes support for:
The StorageWorks documentation is available from their web site. First locate the product; then you can access the documentation. The WWW address is:
Within the configurations described in the StorageWorks documentation, OpenVMS provides the following capabilities and restrictions:
This configuration support is in effect as of the revision date of this document. OpenVMS plans to increase these limits in future releases.
In addition to the configurations already described, OpenVMS also supports the SANworks Data Replication Manager. This is a remote data vaulting solution that enables the use of Fibre Channel over longer distances. For more information, see the Compaq StorageWorks web site at:
Qualification of new Fibre Channel hardware and larger configurations is ongoing. New hardware and larger configurations may necessitate enhancements to the Fibre Channel support in OpenVMS. Between releases of OpenVMS, enhancements and corrections to Fibre Channel software are made available by means of remedial kits. Compaq recommends that you monitor the Fibre Channel web site ( http://www.openvms.compaq.com/openvms/fibre/ ) and the Compaq support web site ( http://www.compaq.com/support/ ) for updates for the operating system version you are running.
The latest version of each kit is always posted to the Compaq support web site.
7.2.2 Mixed-Version and Mixed-Architecture Cluster Support
Shared Fibre Channel OpenVMS Cluster storage is supported in both mixed-version and mixed-architecture OpenVMS Cluster systems. The following configuration requirements must be observed:
OpenVMS Version 7.2 requires the DEC-AXPVMS-VMS72_HARDWARE-V0100--4.PCSI remedial kit and console revision 5.4 or higher, depending on the AlphaServer model (see the release notes).
7.2.3 Fibre Channel and Volume Shadowing for OpenVMS
Since Fibre Channel support was introduced in OpenVMS Alpha, shadowing of directly connected Fibre Channel storage using Volume Shadowing for OpenVMS has been available. OpenVMS Alpha Version 7.2-1 extended this support to the shadowing of Fibre Channel multipath devices.
7.2.4 Fibre Channel and OpenVMS Galaxy Configurations
Fibre Channel is supported in all OpenVMS Galaxy configurations. For
more information about Galaxy configurations, see the OpenVMS Alpha Partitioning and Galaxy Guide.
7.3 Example Configurations
This section presents example Fibre Channel configurations. The
configurations build on each other, starting with the smallest valid
configuration and adding redundant components for increasing levels of
availability, performance, and scalability.
7.3.1 Single Host with Dual-Ported Storage
Figure 7-3 shows a single system using Fibre Channel as a storage interconnect.
Figure 7-3 Single Host With One Dual-Ported Storage Controller
Note the following about this configuration:
Figure 7-4 shows multiple hosts connected to a dual-ported storage subsystem.
Figure 7-4 Multiple Hosts With One Dual-Ported Storage Controller
Note the following about this configuration:
7.3.3 Multiple Hosts With Storage Controller Redundancy
Figure 7-5 shows multiple hosts connected to two dual-ported storage controllers.
Figure 7-5 Multiple Hosts With Storage Controller Redundancy
This configuration offers the following advantages:
7.3.4 Multiple Hosts With Multiple Independent Switches
Figure 7-6 shows multiple hosts connected to two switches, each of
which is connected to a pair of dual-ported storage controllers.
Figure 7-6 Multiple Hosts With Multiple Independent Switches
This two-switch configuration offers the advantages of the previous configurations plus the following:
7.3.5 Multiple Hosts With Dual Fabrics
Figure 7-7 shows multiple hosts connected to two fabrics; each
fabric consists of two switches.
Figure 7-7 Multiple Hosts With Dual Fabrics
This dual-fabric configuration offers the advantages of the previous configurations plus the following advantages:
7.3.6 Multiple Hosts With Larger Fabrics
The configurations shown in this section offer even higher levels of
performance and scalability.
Figure 7-8 shows multiple hosts connected to two fabrics. Each fabric has four switches.
Figure 7-8 Multiple Hosts With Larger Dual Fabrics
Figure 7-9 shows multiple hosts connected to four fabrics. Each fabric has four switches.
Figure 7-9 Multiple Hosts With Four Fabrics
Fibre Channel devices for disk and tape storage come with factory-assigned worldwide IDs (WWIDs). These WWIDs are used by the system for automatic FC address assignment. The FC WWIDs and addresses also provide the means for the system manager to identify and locate devices in the FC configuration. The FC WWIDs and addresses are displayed, for example, by the Alpha console and by the HSG console. It is necessary, therefore, for the system manager to understand the meaning of these identifiers and how they relate to OpenVMS device names.
7.4.1 Fibre Channel Addresses and WWIDs
In most situations, Fibre Channel devices are configured to have temporary addresses. The device's address is assigned automatically each time the interconnect initializes. The device may receive a new address each time the Fibre Channel interconnect is reconfigured and reinitialized. This scheme means that Fibre Channel devices do not require address jumpers. There is one Fibre Channel address per port, as shown in Figure 7-10.
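A toy model of this behavior (all names here are illustrative, not OpenVMS or switch interfaces): the switch hands out a fresh address to every port on each interconnect initialization, so the same physical port can come back with a different address after a reconfiguration.

```python
import itertools

class ToyFabric:
    """Toy model of automatic FC address assignment: every port gets a
    fresh address each time the interconnect initializes, so no address
    jumpers are needed on the devices themselves."""
    def __init__(self):
        self.addresses = {}    # port name -> current FC address

    def initialize(self, ports):
        counter = itertools.count(0x010000)   # arbitrary starting address
        for port in ports:
            self.addresses[port] = next(counter)

fabric = ToyFabric()
fabric.initialize(["host_port", "hsg_port_1", "hsg_port_2"])
first = fabric.addresses["hsg_port_1"]
# After a reconfiguration, ports may be discovered in a different order:
fabric.initialize(["hsg_port_1", "host_port", "hsg_port_2"])
print(hex(first), hex(fabric.addresses["hsg_port_1"]))  # may differ
```

This is why a stable identifier (the WWID, described next) is needed to recognize a port across initializations.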
Figure 7-10 Fibre Channel Host and Port Addresses
In order to provide more permanent identification, each port on each device has a WWID, which is assigned at the factory. Every Fibre Channel WWID is unique. Fibre Channel also has node WWIDs to identify multiported devices. WWIDs are used by the system to detect and recover from automatic address changes. They are useful to system managers for identifying and locating physical devices.
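For instance, a port WWID is a 64-bit value; in the common IEEE Registered (NAA 5) format it packs a 4-bit NAA code, the vendor's 24-bit IEEE OUI, and a 36-bit vendor-assigned number. A minimal sketch of splitting such a WWID into its fields (the example WWID value below is made up for illustration):

```python
def split_ieee_wwid(wwid: int):
    """Split a 64-bit NAA 5 (IEEE Registered) WWID into its fields."""
    naa = (wwid >> 60) & 0xF            # Name Address Authority code
    oui = (wwid >> 36) & 0xFFFFFF       # vendor's 24-bit IEEE OUI
    vendor_seq = wwid & 0xFFFFFFFFF     # vendor-assigned 36-bit number
    return naa, oui, vendor_seq

# Hypothetical WWID, for illustration only:
naa, oui, seq = split_ieee_wwid(0x5000_1FE1_0000_0AB1)
print(hex(naa), hex(oui), hex(seq))
```

Decoding the OUI this way lets a system manager tell at a glance which vendor manufactured a port that appears in a console display.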
Figure 7-11 shows Fibre Channel components with their factory-assigned WWIDs and their Fibre Channel addresses.
Figure 7-11 Fibre Channel Host and Port WWIDs and Addresses
Note the following about this figure:
There is an OpenVMS name for each Fibre Channel storage adapter, for
each path from the storage adapter to the storage subsystem, and for
each storage device. These sections apply to both disk devices and tape
devices, except for the discussion of disk device naming, which is specific to disk devices.
Tape device names are described in Section 7.5.
7.4.2.1 Fibre Channel Storage Adapter Names
Fibre Channel storage adapter names, which are automatically assigned by OpenVMS, take the form FGx0, where x is a single unit letter.
The naming design places a limit of 26 adapters per system. This naming may be modified in future releases to support a larger number of adapters.
Fibre Channel adapters can run multiple protocols, such as SCSI and LAN. Each protocol is a pseudodevice associated with the adapter. In the initial implementation, only the SCSI protocol is supported. The SCSI pseudodevice name is PGx0, where x represents the same unit letter as the associated FGx0 adapter.
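The FGx0 pattern, with its single unit letter, is what caps the scheme at 26 adapters, and the PGx0 pseudodevice reuses the same letter. A small sketch of the naming rule (the helper functions are illustrative, not OpenVMS interfaces):

```python
import string

def fc_adapter_names(count: int):
    """Generate OpenVMS-style FC adapter names: FGA0, FGB0, ...
    The single unit letter caps the scheme at 26 adapters."""
    if count > 26:
        raise ValueError("naming scheme supports at most 26 adapters")
    return [f"FG{letter}0" for letter in string.ascii_uppercase[:count]]

def scsi_pseudodevice(adapter: str):
    """SCSI protocol pseudodevice for an FGx0 adapter: PGx0,
    with the same unit letter."""
    return "P" + adapter[1:]

names = fc_adapter_names(3)
print(names)                                  # ['FGA0', 'FGB0', 'FGC0']
print([scsi_pseudodevice(n) for n in names])  # ['PGA0', 'PGB0', 'PGC0']
```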
These names are illustrated in Figure 7-12.
Figure 7-12 Fibre Channel Initiator and Target Names