Dell™ PowerEdge™ Cluster FE200 Systems
Platform Guide
www.dell.com | support.dell.com
Notes, Notices, and Cautions

NOTE: A NOTE indicates important information that helps you make better use of your computer.

NOTICE: A NOTICE indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

CAUTION: A CAUTION indicates a potential for property damage, personal injury, or death.

____________________

Information in this document is subject to change without notice.
© 2000–2003 Dell Computer Corporation. All rights reserved.

Reproduction in any manner whatsoever without the written permission of Dell Computer Corporation is strictly forbidden.
Contents

Supported Cluster Configurations . . . . . . . . . . . . . . . . . . . . 1-1

Windows 2000 Advanced Server Cluster Configurations . . . . . . . . . . 1-2
    Windows 2000 Advanced Server Service Pack Support . . . . . . . . . 1-2
    QLogic HBA Support for Cluster FE200 Configurations . . . . . . . . 1-2
    HBA Connectors . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
    Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3

Windows Server 2003, Enterprise Edition Cluster Configurations
Tables

Table 1-1. Supported Cluster Configurations . . . . . . . . . . . . . . 1-1
Table 1-2. Supported HBAs for Cluster FE200 Configurations Running
           Windows 2000 Advanced Server . . . . . . . . . . . . . . . . 1-2
Table 1-3. Supported HBAs for Cluster FE200 Configurations Running
           Windows Server 2003, Enterprise Edition . . . . . . . . . . . 1-4
Table 1-4. PCI Slot Assignments for PowerEdge Cluster Nodes . . . . . . 1-5
Table 1-5. SAN-Attached Clusters Rules and Guidelines
This document provides information for installing and connecting peripheral hardware, storage, and SAN components to your Dell™ PowerEdge™ Cluster FE200 system. The configuration information in this document is specific to the Microsoft® Windows® 2000 Advanced Server and Windows Server 2003, Enterprise Edition operating systems.

This document covers the following topics:

- Configuration information for installing peripheral hardware components, such as HBAs, network adapters, and PCI adapter cards
Obtaining More Information

See the Dell PowerEdge Cluster FE200 Systems Installation and Troubleshooting Guide included with your cluster configuration for a detailed list of related documentation.

Windows 2000 Advanced Server Cluster Configurations

This section provides information about the Windows 2000 Advanced Server service pack and supported QLogic HBAs and HBA drivers for your cluster configuration.

NOTE: HBAs installed in clusters must be identical for redundant paths. Cluster configurations are tested and certified using identical QLogic HBAs installed in all of the cluster nodes. Using dissimilar HBAs in your cluster nodes is not supported.
Table 1-2. Supported HBAs for Cluster FE200 Configurations Running Windows 2000 Advanced Server (continued)

PowerEdge System    QLA-2200 33 MHz    QLA-2200 66 MHz
4600                                   x
6400/6450           x                  x
6600/6650                              x
8450                x                  x

HBA Connectors

Both optical and copper HBA connectors are supported in SAN-attached and SAN appliance-attached configurations. Optical HBA connectors are not supported in a direct-attached configuration.

Guidelines

When configuring your cluster, both cluster nodes must contain identical versions of the following:

- Operating systems and service packs
- Hardware drivers for the network adapters, HBAs, and any other peripheral hardware components
- Management utilities, such as Dell OpenManage systems management software
- Fibre Channel HBA BIOS
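As an illustration only, the following minimal sketch compares a version inventory collected from each node against this guideline. The inventory fields and placeholder values are hypothetical assumptions, not output from any Dell utility.

# Illustrative check of the guideline above: both cluster nodes must report
# identical versions of the operating system and service pack, hardware
# drivers, management utilities, and Fibre Channel HBA BIOS.
# The inventory dictionaries below are hypothetical placeholders.
node1 = {"os": "Windows 2000 Advanced Server SP4", "hba_driver": "x.y.z",
         "hba_bios": "x.y", "openmanage": "x.y"}
node2 = {"os": "Windows 2000 Advanced Server SP4", "hba_driver": "x.y.z",
         "hba_bios": "x.y", "openmanage": "x.y"}

# Collect any fields where the two nodes disagree.
mismatches = {k: (node1[k], node2[k]) for k in node1 if node1[k] != node2[k]}
print(mismatches or "Nodes match: configuration is consistent.")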
Windows Server 2003, Enterprise Edition Cluster Configurations

This section provides information about the supported QLogic HBAs and HBA drivers for cluster configurations running Windows Server 2003, Enterprise Edition.

NOTE: HBAs installed in clusters must be identical for redundant paths. Cluster configurations are tested and certified using identical QLogic HBAs installed in all of the cluster nodes. Using dissimilar HBAs in your cluster nodes is not supported.

QLogic HBA Support for Cluster FE200 Configurations

Table 1-3 lists the systems and the QLogic HBAs that are supported for PowerEdge Cluster FE200 configurations running Windows Server 2003, Enterprise Edition. See "Installing Peripheral Components in Your Cluster Nodes" for the PCI slot assignments for these HBAs.
Guidelines

When configuring your cluster, both cluster nodes must contain identical versions of the following:

- Operating systems and service packs
- Hardware drivers for the network adapters, HBAs, and any other peripheral hardware components
- Management utilities, such as Dell OpenManage systems management software
- Fibre Channel HBA BIOS

Obtaining More Information

See the Dell PowerEdge Cluster FE200 Systems Installation and Troubleshooting Guide included with your cluster configuration for a detailed list of related documentation.
Table 1-4. PCI Slot Assignments for PowerEdge Cluster Nodes (continued)

PowerEdge 1650
  PCI buses:
    Standard riser board:
      PCI bus 2: PCI slot 1 is 64-bit, 66 MHz
      PCI bus 2: PCI slot 2 is 64-bit, 66 MHz
    Optional riser board:
      PCI bus 0: PCI slot 1 is 32-bit, 33 MHz
      PCI bus 2: PCI slot 2 is 64-bit, 66 MHz
  HBA: Install the HBA in any PCI slot.
  DRAC II or III: Install a new or existing DRAC III in PCI slot 1 on the optional riser board.
  RAID controller: Install in any available PCI slot.
PowerEdge 2650
  PCI buses:
    PCI/PCI-X bus 1: PCI slot 1 is 64-bit, 33–100 MHz
    PCI/PCI-X bus 1: PCI slot 2 is 64-bit, 33–133 MHz
    PCI/PCI-X bus 2: PCI slot 3 is 64-bit, 33–133 MHz
  HBA: For dual HBA configurations, install the HBAs on separate PCI buses to balance the load on the system.
  DRAC II or III: N/A
  RAID controller: An integrated RAID controller is available on the system board. NOTE: To activate the integrated RAID controller, see your system documentation.
PowerEdge 6600
  PCI buses:
    PCI bus 0: PCI slot 1 is 32-bit, 33 MHz
    PCI/PCI-X bus 1: PCI slots 2 and 3 are 64-bit, 33–100 MHz
    PCI/PCI-X bus 2: PCI slots 4 and 5 are 64-bit, 33–100 MHz
  HBA: For dual HBA configurations, install the HBAs on separate PCI buses to balance the load on the system.
  DRAC II or III: Install a new or existing DRAC III in PCI slot 1.
  RAID controller: Install the RAID controller in PCI slot 2 or 3.
Attaching Your Cluster Shared Storage Systems to a SAN

This section provides the rules and guidelines for attaching your cluster nodes to the shared storage system(s) using a SAN in a Fibre Channel switch fabric.

The following SAN configurations are supported:

- SAN-attached
- Cluster consolidation
- SAN appliance-attached

NOTE: You can configure a SAN with up to 20 PowerEdge systems and eight storage systems.

SAN-Attached Cluster Configurations

In a SAN-attached cluster configuration, both cluster nodes are attached to the shared storage system(s) through a redundant Fibre Channel switch fabric.
Table 1-5. SAN-Attached Clusters Rules and Guidelines (continued)

Cluster pair support: All homogeneous and heterogeneous cluster configurations supported in direct-attach configurations are supported in SAN-attached configurations. See "Windows 2000 Advanced Server Cluster Configurations" or "Windows Server 2003, Enterprise Edition Cluster Configurations" for more information about supported cluster pairs.
Fibre Channel HBAs supported: QLogic 2200/33 MHz and QLogic 2200/66 MHz, with either optical or copper connectors. NOTE: HBAs within a single cluster must be the same.

Operating system: Each cluster attached to the SAN can run either Windows 2000 Advanced Server or Windows Server 2003, Enterprise Edition.

Service pack: Windows 2000 Advanced Server configurations require Service Pack 4 or later. Windows Server 2003, Enterprise Edition configurations do not require a service pack.
See the Dell PowerVault Fibre Channel Update Version 5.3 CD for the specific version levels of your SAN components.

Table 1-6. Cluster Consolidation Rules and Guidelines

Number of supported PowerEdge systems: Up to 10 two-node clusters attached to a SAN. Combinations of stand-alone systems and cluster pairs must not exceed 20 systems.

Cluster pair support: Any supported homogeneous system pair with the following HBAs: QLogic 2200/33 MHz and QLogic 2200/66 MHz.
Fibre Channel switch zoning: Each cluster must have its own zone, plus one zone for the stand-alone systems. The zone for each cluster should include the following hardware components:

- One cluster with two nodes
- One storage system
- One or more Fibre Channel-to-SCSI bridges (if applicable)

The zone for the stand-alone systems should include the following hardware components:

- All nonclustered systems attached to the SAN
Additional software application programs: Dell OpenManage Array Manager, QLogic QLDirect, and QMSJ.

Obtaining More Information

See the Dell PowerEdge Cluster FE200 Systems Installation and Troubleshooting Guide included with your cluster configuration for more information about cluster consolidation configurations. See the Dell PowerEdge Cluster SAN Revision Compatibility Guide included with your cluster configuration for more information.
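The counting and zoning rules in Table 1-6, together with the 20-system and eight-storage-system SAN limit noted earlier, can be sanity-checked with a short script. The following is a minimal sketch under assumed data shapes; none of the names or values come from this guide.

# Sanity-check a planned SAN consolidation layout against the rules in this
# guide: at most 10 two-node clusters; clusters plus stand-alone systems must
# not exceed 20 PowerEdge systems; at most eight storage systems on the SAN;
# each cluster zone holds exactly two nodes and one storage system.
# (All names and data shapes are illustrative assumptions.)
def check_san_plan(cluster_zones, standalone_systems, storage_systems):
    problems = []
    if len(cluster_zones) > 10:
        problems.append("more than 10 two-node clusters attached to the SAN")
    total = 2 * len(cluster_zones) + len(standalone_systems)
    if total > 20:
        problems.append(f"{total} PowerEdge systems exceed the 20-system limit")
    if len(storage_systems) > 8:
        problems.append("more than eight storage systems on the SAN")
    for name, zone in cluster_zones.items():
        if len(zone["nodes"]) != 2 or len(zone["storage"]) != 1:
            problems.append(f"zone {name} needs exactly two nodes and one storage system")
    return problems

# Example: two clusters plus three stand-alone systems (seven systems total).
plan = {
    "cluster-a": {"nodes": ["node1", "node2"], "storage": ["storage-1"]},
    "cluster-b": {"nodes": ["node3", "node4"], "storage": ["storage-2"]},
}
print(check_san_plan(plan, ["standalone-1", "standalone-2", "standalone-3"],
                     ["storage-1", "storage-2"]) or "plan is within the limits")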
If you have not configured your cluster, apply the QFE (or Service Pack 1 when available) to all of the cluster nodes.

If you have configured your cluster, perform one of the following procedures and then reboot each cluster node, one at a time:

- Manually change the registry TimeOutValue setting to 60 on each cluster node.
- Download the Cluster Disk Timeout Fix utility from the Dell Support website at support.dell.com and run the utility on your cluster. When prompted, type the name of your cluster.
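For the manual method, the following is a minimal sketch of the registry change. This guide names only the TimeOutValue setting; the key path below is the standard Windows location for the disk-class timeout and is therefore an assumption here. Run it as an administrator on each cluster node, then reboot the nodes one at a time.

# Minimal sketch: set the disk TimeOutValue registry setting to 60 (decimal)
# on a cluster node. The key path is the standard Windows disk-class timeout
# location (an assumption; the guide names only the value itself).
# Requires administrative rights; reboot each node afterward, one at a time.
import winreg

DISK_KEY = r"SYSTEM\CurrentControlSet\Services\Disk"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, DISK_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "TimeOutValue", 0, winreg.REG_DWORD, 60)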