Wednesday, February 18, 2009

Microsoft Cluster Service: Step-by-Step Guide to Installing Cluster Service

Step-by-Step Guide to Installing Cluster Service:
This step-by-step guide provides instructions for installing Cluster service on servers running the Windows® 2000 Advanced Server and Windows 2000 Datacenter Server operating systems. The guide describes the process of installing the Cluster service on cluster nodes. It is not intended to explain how to install cluster applications. Rather, it guides you through the process of installing a typical, two-node cluster itself.
On This Page
Introduction
Checklists for Cluster Server Installation
Cluster Installation
Install Cluster Service software
Verify Installation
For Additional Information
Appendix A
Introduction
A server cluster is a group of independent servers running Cluster service and working collectively as a single system. Server clusters provide high-availability, scalability, and manageability for resources and applications by grouping multiple servers running Windows® 2000 Advanced Server or Windows 2000 Datacenter Server.
The purpose of server clusters is to preserve client access to applications and resources during failures and planned outages. If one of the servers in the cluster is unavailable due to failure or maintenance, resources and applications move to another available cluster node.
For clustered systems, the term high availability is used rather than fault-tolerant, as fault tolerant technology offers a higher level of resilience and recovery. Fault-tolerant servers typically use a high degree of hardware redundancy plus specialized software to provide near-instantaneous recovery from any single hardware or software fault. These solutions cost significantly more than a clustering solution because organizations must pay for redundant hardware that waits idly for a fault. Fault-tolerant servers are used for applications that support high-value, high-rate transactions such as check clearinghouses, Automated Teller Machines (ATMs), or stock exchanges.
While Cluster service does not guarantee non-stop operation, it provides availability sufficient for most mission-critical applications. Cluster service can monitor applications and resources, automatically recognizing and recovering from many failure conditions. This provides greater flexibility in managing the workload within a cluster, and improves overall availability of the system.
Cluster service benefits include:
High Availability. With Cluster service, ownership of resources such as disk drives and IP addresses is automatically transferred from a failed server to a surviving server. When a system or application in the cluster fails, the cluster software restarts the failed application on a surviving server, or disperses the work from the failed node to the remaining nodes. As a result, users experience only a momentary pause in service.
Failback. Cluster service automatically re-balances the workload in a cluster when a failed server comes back online.
Manageability. You can use the Cluster Administrator to manage a cluster as a single system and to manage applications as if they were running on a single server. You can move applications to different servers within the cluster by dragging and dropping cluster objects. You can move data to different servers in the same way. This can be used to manually balance server workloads and to unload servers for planned maintenance. You can also monitor the status of the cluster, all nodes and resources from anywhere on the network.
Scalability. Cluster services can grow to meet rising demands. When the overall load for a cluster-aware application exceeds the capabilities of the cluster, additional nodes can be added.
This paper provides instructions for installing Cluster service on servers running Windows 2000 Advanced Server and Windows 2000 Datacenter Server. It describes the process of installing the Cluster service on cluster nodes. It is not intended to explain how to install cluster applications, but rather to guide you through the process of installing a typical, two-node cluster itself.
Checklists for Cluster Server Installation
This checklist assists you in preparing for installation. Step-by-step instructions begin after the checklist.
Software Requirements
Microsoft Windows 2000 Advanced Server or Windows 2000 Datacenter Server installed on all computers in the cluster.
A name resolution method such as Domain Name System (DNS), Windows Internet Name Service (WINS), HOSTS, and so on.
Terminal Services is recommended to allow remote cluster administration.
Hardware Requirements
The hardware for a Cluster service node must meet the hardware requirements for Windows 2000 Advanced Server or Windows 2000 Datacenter Server. These requirements can be found on the Product Compatibility Search page.
Cluster hardware must be on the Cluster Service Hardware Compatibility List (HCL). The latest version of the Cluster Service HCL can be found by going to the Windows Hardware Compatibility List and then searching on Cluster.
Two HCL-approved computers, each with the following:
A boot disk with Windows 2000 Advanced Server or Windows 2000 Datacenter Server installed. The boot disk cannot be on the shared storage bus described below.
A separate PCI storage host adapter (SCSI or Fibre Channel) for the shared disks. This is in addition to the boot disk adapter.
Two PCI network adapters on each machine in the cluster.
An HCL-approved external disk storage unit that connects to all computers. This will be used as the clustered disk. A redundant array of independent disks (RAID) is recommended.
Storage cables to attach the shared storage device to all computers. Refer to the manufacturers' instructions for configuring storage devices. If a SCSI bus is used, see Appendix A for additional information.
All hardware should be identical, slot for slot, card for card, for all nodes. This will make configuration easier and eliminate potential compatibility problems.
Network Requirements
A unique NetBIOS cluster name.
Five unique, static IP addresses: two for the network adapters on the private network, two for the network adapters on the public network, and one for the cluster itself.
A domain user account for Cluster service (all nodes must be members of the same domain).
Each node should have two network adapters—one for connection to the public network and the other for the node-to-node private cluster network. If you use only one network adapter for both connections, your configuration is unsupported. A separate private network adapter is required for HCL certification.
Shared Disk Requirements
All shared disks, including the quorum disk, must be physically attached to a shared bus.
Verify that disks attached to the shared bus can be seen from all nodes. This can be checked at the host adapter setup level. Please refer to the manufacturer's documentation for adapter-specific instructions.
SCSI devices must be assigned unique SCSI identification numbers and properly terminated, as per manufacturer's instructions.
All shared disks must be configured as basic (not dynamic).
All partitions on the disks must be formatted as NTFS.
While not required, the use of fault-tolerant RAID configurations is strongly recommended for all disks. The key concept here is fault-tolerant RAID configurations, not stripe sets without parity.
Cluster Installation
Installation Overview
During the installation process, some nodes will be shut down and some nodes will be rebooted. These steps are necessary to guarantee that the data on disks that are attached to the shared storage bus is not lost or corrupted. This can happen when multiple nodes try to simultaneously write to the same disk that is not yet protected by the cluster software.
Use Table 1 below to determine which nodes and storage devices should be powered on during each step.
The steps in this guide are for a two-node cluster. However, if you are installing a cluster with more than two nodes, you can use the Node 2 column to determine the required state of other nodes.
Table 1 Power Sequencing Table for Cluster Installation
Step | Node 1 | Node 2 | Storage | Comments
Setting Up Networks | On | On | Off | Verify that all storage devices on the shared bus are powered off. Power on all nodes.
Setting Up Shared Disks | On | Off | On | Shut down all nodes. Power on the shared storage, then power on the first node.
Verifying Disk Configuration | Off | On | On | Shut down the first node, power on the second node. Repeat for nodes 3 and 4 if necessary.
Configuring the First Node | On | Off | On | Shut down all nodes; power on the first node.
Configuring the Second Node | On | On | On | Power on the second node after the first node has been successfully configured. Repeat for nodes 3 and 4 if necessary.
Post-installation | On | On | On | At this point all nodes should be on.
Several steps must be taken prior to the installation of the Cluster service software. These steps are:
Installing Windows 2000 Advanced Server or Windows 2000 Datacenter Server on each node.
Setting up networks.
Setting up disks.
Perform these steps on every cluster node before proceeding with the installation of Cluster service on the first node.
To configure the Cluster service on a Windows 2000-based server, your account must have administrative permissions on each node. All nodes must be member servers, or all nodes must be domain controllers within the same domain. It is not acceptable to have a mix of domain controllers and member servers in a cluster.
Installing the Windows 2000 Operating System
Please refer to the documentation you received with the Windows 2000 operating system packages to install the system on each node in the cluster.
This step-by-step guide uses the naming structure from the "Step-by-Step Guide to a Common Infrastructure for Windows 2000 Server Deployment" http://www.microsoft.com/windows2000/techinfo/planning/server/serversteps.asp. However, you can use any names.
You must be logged on as an administrator prior to installation of Cluster service.
Setting up Networks
Note: For this section, power down all shared storage devices and then power up all nodes. Do not let both nodes access the shared storage devices at the same time until the Cluster service is installed on at least one node and that node is online.
Each cluster node requires at least two network adapters—one to connect to a public network, and one to connect to a private network consisting of cluster nodes only.
The private network adapter establishes node-to-node communication, cluster status signals, and cluster management. Each node's public network adapter connects the cluster to the public network where clients reside.
Verify that all network connections are correct, with private network adapters connected to other private network adapters only, and public network adapters connected to the public network. The connections are illustrated in Figure 1 below. Run these steps on each cluster node before proceeding with shared disk setup.

Figure 1: Example of two-node cluster (clusterpic.vsd)
Configuring the Private Network Adapter
Perform these steps on the first node in your cluster.
Right-click My Network Places and then click Properties.
Right-click the Local Area Connection 2 icon.
Note: Which network adapter is private and which is public depends upon your wiring. For the purposes of this document, the first network adapter (Local Area Connection) is connected to the public network, and the second network adapter (Local Area Connection 2) is connected to the private cluster network. This may not be the case in your network.
Click Status. The Local Area Connection 2 Status window shows the connection status, as well as the speed of connection. If the window shows that the network is disconnected, examine cables and connections to resolve the problem before proceeding. Click Close.
Right-click Local Area Connection 2 again, click Properties, and click Configure.
Click Advanced. The window in Figure 2 should appear.
Network adapters on the private network should be set to the actual speed of the network, rather than the default automated speed selection. Select your network speed from the drop-down list. Do not use an Auto-select setting for speed. Some adapters may drop packets while determining the speed. To set the network adapter speed, click the appropriate option such as Media Type or Speed.

Figure 2: Advanced Adapter Configuration (advanced.bmp)
All network adapters in the cluster that are attached to the same network must be identically configured to use the same Duplex Mode, Flow Control, Media Type, and so on. These settings should remain the same even if the hardware is different.
Note: We highly recommend that you use identical network adapters throughout the cluster network.
Click Transmission Control Protocol/Internet Protocol (TCP/IP).
Click Properties.
Click the radio-button for Use the following IP address and type in the following address: 10.1.1.1. (Use 10.1.1.2 for the second node.)
Type in a subnet mask of 255.0.0.0.
Click the Advanced button and select the WINS tab. Select Disable NetBIOS over TCP/IP. Click OK to return to the previous menu. Do this step for the private network adapter only.
The window should now look like Figure 3 below.

Figure 3: Private Connector IP Address (ip10111.bmp)
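If you prefer to set the private address from a command prompt, the following is a minimal sketch using the netsh interface ip context included with Windows 2000. The connection name and addresses are the example values from this guide, the exact netsh syntax may vary by build, and the NetBIOS over TCP/IP setting from the previous step still has to be changed in the GUI:

netsh interface ip set address name="Local Area Connection 2" source=static addr=10.1.1.1 mask=255.0.0.0
ipconfig /all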
Configuring the Public Network Adapter
Note: While the public network adapter's IP address can be automatically obtained if a DHCP server is available, this is not recommended for cluster nodes. We strongly recommend setting static IP addresses for all network adapters in the cluster, both private and public. If IP addresses are obtained via DHCP, access to cluster nodes could become unavailable if the DHCP server goes down. If you must use DHCP for your public network adapter, use long lease periods to assure that the dynamically assigned lease address remains valid even if the DHCP service is temporarily lost. In all cases, set static IP addresses for the private network connector. Keep in mind that Cluster service will recognize only one network interface per subnet. If you need assistance with TCP/IP addressing in Windows 2000, please see Windows 2000 Online Help.
Rename the Local Area Network Icons
We recommend changing the names of the network connections for clarity. For example, you might want to change the name of Local Area Connection 2 to something like Private Cluster Connection. The naming will help you identify a network and correctly assign its role.
Right-click the Local Area Connection 2 icon.
Click Rename.
Type Private Cluster Connection into the textbox and press Enter.
Repeat steps 1-3 and rename the public network adapter as Public Cluster Connection.

Figure 4: Renamed connections (connames.bmp)
The renamed icons should look like those in Figure 4 above. Close the Networking and Dial-up Connections window. The new connection names automatically replicate to other cluster servers as they are brought online.
Verifying Connectivity and Name Resolution
To verify that the private and public networks are communicating properly, perform the following steps for each network adapter in each node. You need to know the IP address for each network adapter in the cluster. If you do not already have this information, you can retrieve it using the ipconfig command on each node:
Click Start, click Run and type cmd in the text box. Click OK.
Type ipconfig /all and press Enter. IP information should display for all network adapters in the machine.
If you do not already have the command prompt on your screen, click Start, click Run and type cmd in the text box. Click OK.
Type ping ipaddress where ipaddress is the IP address for the corresponding network adapter in the other node. For example, assume that the IP addresses are set as follows:
Node | Network Name | Network Adapter IP Address
1 | Public Cluster Connection | 172.16.12.12
1 | Private Cluster Connection | 10.1.1.1
2 | Public Cluster Connection | 172.16.12.14
2 | Private Cluster Connection | 10.1.1.2
In this example, you would type ping 172.16.12.14 and ping 10.1.1.2 from Node 1, and you would type ping 172.16.12.12 and ping 10.1.1.1 from Node 2.
To verify name resolution, ping each node from a client using the node's machine name instead of its IP number. For example, to verify name resolution for the first cluster node, type ping hq-res-dc01 from any client.
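Putting these checks together, the commands run from Node 1 look like the following at a command prompt (hq-res-dc02 is an assumed name for the second node, following the guide's example naming; substitute your own machine name):

ipconfig /all
ping -n 4 172.16.12.14
ping -n 4 10.1.1.2
ping -n 4 hq-res-dc02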
Verifying Domain Membership
All nodes in the cluster must be members of the same domain and able to access a domain controller and a DNS Server. They can be configured as member servers or domain controllers. If you decide to configure one node as a domain controller, you should configure all other nodes as domain controllers in the same domain as well. In this document, all nodes are configured as domain controllers.
Note: See the For Additional Information section at the end of this document for links to additional Windows 2000 documentation that will help you understand and configure domain controllers, DNS, and DHCP.
Right-click My Computer, and click Properties.
Click Network Identification. The System Properties dialog box displays the full computer name and domain. In our example, the domain name is reskit.com.
If you are using member servers and need to join a domain, you can do so at this time. Click Properties and follow the on-screen instructions for joining a domain.
Close the System Properties and My Computer windows.
Setting Up a Cluster User Account
The Cluster service requires a domain user account under which the Cluster service can run. This user account must be created before installing Cluster service, because setup requires a user name and password. This account should be dedicated to the Cluster service and should not belong to an individual user on the domain.
Click Start, point to Programs, point to Administrative Tools, and click Active Directory Users and Computers
Click the + to expand Reskit.com (if it is not already expanded).
Click Users.
Right-click Users, point to New, and click User.
Type in the name of the cluster service account (cluster, in our example) as shown in Figure 5 below and click Next.

Figure 5: Add Cluster User (clusteruser.bmp)
Set the password settings to User Cannot Change Password and Password Never Expires. Click Next and then click Finish to create this user.
Note: If your administrative security policy does not allow the use of passwords that never expire, you must renew the password and update the cluster service configuration on each node before password expiration.
Right-click Cluster in the left pane of the Active Directory Users and Computers snap-in. Select Properties from the context menu.
Click Add Members to a Group.
Click Administrators and click OK. This gives the new user account administrative privileges on this computer.
Close the Active Directory Users and Computers snap-in.
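The account can also be created and granted administrative rights from a command prompt. The following is a rough, hedged equivalent of the steps above: the first command, run against the domain, prompts for a password; the Password Never Expires setting still has to be applied in Active Directory Users and Computers; RESKIT is the example domain name from this guide:

net user cluster * /add /domain /passwordchg:no
net localgroup Administrators RESKIT\cluster /add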
Setting Up Shared Disks
Warning: Make sure that Windows 2000 Advanced Server or Windows 2000 Datacenter Server and the Cluster service are installed and running on one node before starting an operating system on another node. If the operating system is started on other nodes before the Cluster service is installed, configured and running on at least one node, the cluster disks will probably be corrupted.
To proceed, power off all nodes. Power up the shared storage devices and then power up node one.
About the Quorum Disk
The quorum disk is used to store cluster configuration database checkpoints and log files that help manage the cluster. We make the following quorum disk recommendations:
Create a small partition to be used as the quorum disk. A minimum of 50 MB is required; we generally recommend a quorum disk of 500 MB.
Dedicate a separate disk for a quorum resource. As the failure of the quorum disk would cause the entire cluster to fail, we strongly recommend you use a volume on a RAID disk array.
During the Cluster service installation, you must provide the drive letter for the quorum disk. In our example, we use the letter Q.
Configuring Shared Disks
Right-click My Computer, click Manage, and click Storage.
Double-click Disk Management
Verify that all shared disks are formatted as NTFS and are designated as Basic. If you connect a new drive, the Write Signature and Upgrade Disk Wizard starts automatically. If this happens, click Next to go through the wizard. The wizard sets the disk to dynamic. To reset the disk to Basic, right-click Disk # (where # specifies the disk you are working with) and click Revert to Basic Disk.
Right-click unallocated disk space
Click Create Partition…
The Create Partition Wizard begins. Click Next twice.
Enter the desired partition size in MB and click Next.
Accept the default drive letter assignment by clicking Next.
Click Next to format and create partition.
Assigning Drive Letters
After the bus, disks, and partitions have been configured, drive letters must be assigned to each partition on each clustered disk.
Note: Mount points are a feature of the file system that allows you to mount a file system on an existing directory without assigning a drive letter. Mount points are not supported on clusters. Any external disk used as a cluster resource must be partitioned using NTFS partitions and must have a drive letter assigned to it.
Right-click the desired partition and select Change Drive Letter and Path.
Select a new drive letter.
Repeat steps 1 and 2 for each shared disk.

Figure 6: Disks with Drive Letters Assigned (drives.bmp)
When finished, the Computer Management window should look like Figure 6 above. Now close the Computer Management window.
Verifying Disk Access and Functionality
Click Start, click Programs, click Accessories, and click Notepad.
Type some words into Notepad and use the File/Save As command to save it as a test file called test.txt. Close Notepad.
Double-click the My Documents icon.
Right-click test.txt and click Copy
Close the window.
Double-click My Computer.
Double-click a shared drive partition.
Click Edit and click Paste.
A copy of the file should now reside on the shared disk.
Double-click test.txt to open it on the shared disk. Close the file.
Highlight the file and press the Del key to delete it from the clustered disk.
Repeat the process for all clustered disks to verify they can be accessed from the first node.
At this time, shut down the first node, power on the second node and repeat the Verifying Disk Access and Functionality steps above. Repeat again for any additional nodes. When you have verified that all nodes can read and write from the disks, turn off all nodes except the first, and continue with this guide.
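The same read/write check can be performed from a command prompt on any node. In this sketch, X: is a placeholder for one of the shared drive letters assigned earlier; repeat it for each clustered disk:

echo cluster disk test > X:\test.txt
type X:\test.txt
del X:\test.txt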
Install Cluster Service software
Configuring the First Node
Note: During installation of Cluster service on the first node, all other nodes must either be turned off or stopped before Windows 2000 boots. All shared storage devices should be powered up.
In the first phase of installation, all initial cluster configuration information must be supplied so that the cluster can be created. This is accomplished using the Cluster Service Configuration Wizard.
Click Start, click Settings, and click Control Panel.
Double-click Add/Remove Programs.
Double-click Add/Remove Windows Components .
Select Cluster Service. Click Next.
Cluster service files are located on the Windows 2000 Advanced Server or Windows 2000 Datacenter Server CD-ROM. Enter x:\i386 (where x is the drive letter of your CD-ROM). If Windows 2000 was installed from a network, enter the appropriate network path instead. (If the Windows 2000 Setup splash screen displays, close it.) Click OK.
Click Next.
The window shown in Figure 7 below appears. Click I Understand to accept the condition that Cluster service is supported on hardware from the Hardware Compatibility List only.

Figure 7: Hardware Configuration Certification Screen (hcl.bmp)
Because this is the first node in the cluster, you must create the cluster itself. Select The first node in the cluster, as shown in Figure 8 below and then click Next.

Figure 8: Create New Cluster (clustcreate.bmp)
Enter a name for the cluster (up to 15 characters), and click Next. (In our example, we name the cluster MyCluster.)
Type the user name of the cluster service account that was created during the pre-installation. (In our example, this user name is cluster.) Leave the password blank. Type the domain name, and click Next.
Note: You would normally provide a secure password for this user account.
At this point the Cluster Service Configuration Wizard validates the user account and password.
Click Next.
Configuring Cluster Disks
Note: By default, all SCSI disks not residing on the same bus as the system disk will appear in the Managed Disks list. Therefore, if the node has multiple SCSI buses, some disks may be listed that are not to be used as shared storage (for example, an internal SCSI drive.) Such disks should be removed from the Managed Disks list.
The Add or Remove Managed Disks dialog box shown in Figure 9 specifies which disks on the shared SCSI bus will be used by Cluster service. Add or remove disks as necessary and then click Next.

Figure 9: Add or Remove Managed Disks (manageddisks.bmp)
Note that because logical drives F: and G: exist on a single hard disk, they are seen by Cluster service as a single resource. The first partition of the first disk is selected as the quorum resource by default. Change this to denote the small partition that was created as the quorum disk (in our example, drive Q). Click Next.
Note: In production clustering scenarios you must use more than one private network for cluster communication to avoid having a single point of failure. Cluster service can use private networks for cluster status signals and cluster management. This provides more security than using a public network for these roles. You can also use a public network for cluster management, or you can use a mixed network for both private and public communications. In any case, make sure at least two networks are used for cluster communication, as using a single network for node-to-node communication represents a potential single point of failure. We recommend that multiple networks be used, with at least one network configured as a private link between nodes and other connections through a public network. If you have more than one private network, make sure that each uses a different subnet, as Cluster service recognizes only one network interface per subnet.
This document is built on the assumption that only two networks are in use. It shows you how to configure these networks as one mixed and one private network.
The order in which the Cluster Service Configuration Wizard presents these networks may vary. In this example, the public network is presented first.
Click Next in the Configuring Cluster Networks dialog box.
Make sure that the network name and IP address correspond to the network interface for the public network.
Check the box Enable this network for cluster use.
Select the option All communications (mixed network) as shown in Figure 10 below.
Click Next.

Figure 10: Public Network Connection (pubclustnet.bmp)
The next dialog box shown in Figure 11 configures the private network. Make sure that the network name and IP address correspond to the network interface used for the private network.
Check the box Enable this network for cluster use.
Select the option Internal cluster communications only.

Figure 11: Private Network Connection (privclustnet.bmp)
Click Next.
In this example, both networks are configured in such a way that both can be used for internal cluster communication. The next dialog window offers an option to modify the order in which the networks are used. Because Private Cluster Connection represents a direct connection between nodes, it is left at the top of the list. In normal operation this connection will be used for cluster communication. If the Private Cluster Connection fails, the Cluster service will automatically switch to the next network on the list, in this case Public Cluster Connection. Make sure the first connection in the list is the Private Cluster Connection and click Next.
Important: Always set the order of the connections so that the Private Cluster Connection is first in the list.
Enter the unique cluster IP address (172.16.12.20) and Subnet mask (255.255.252.0), and click Next.

Figure 12: Cluster IP Address (clusterip.bmp)
The Cluster Service Configuration Wizard shown in Figure 12 automatically associates the cluster IP address with one of the public or mixed networks. It uses the subnet mask to select the correct network.
Click Finish to complete the cluster configuration on the first node.
The Cluster Service Setup Wizard completes the setup process for the first node by copying the files needed to complete the installation of Cluster service. After the files are copied, the Cluster service registry entries are created, the log files on the quorum resource are created, and the Cluster service is started on the first node.
A dialog box appears telling you that Cluster service has started successfully.
Click OK.
Close the Add/Remove Programs window.
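As a quick command-line confirmation, the Cluster service (service name ClusSvc) should now appear among the started services; a minimal sketch:

net start | find "Cluster"
REM To stop and restart the service manually if ever needed:
net stop clussvc
net start clussvc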
Validating the Cluster Installation
Use the Cluster Administrator snap-in to validate the Cluster service installation on the first node.
Click Start, click Programs, click Administrative Tools, and click Cluster Administrator.

Figure 13: Cluster Administrator (1nodeadmin.bmp)
If your snap-in window is similar to that shown above in Figure 13, your Cluster service was successfully installed on the first node. You are now ready to install Cluster service on the second node.
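Cluster service setup also installs the cluster.exe command-line tool, which offers another way to check status. A hedged sketch, assuming the example cluster name MyCluster (the exact output format may differ):

cluster MyCluster node
cluster MyCluster group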
Configuring the Second Node
Note: For this section, leave node one and all shared disks powered on. Power up the second node.
Installing Cluster service on the second node requires less time than on the first node. Setup configures the Cluster service network settings on the second node based on the configuration of the first node.
Installation of Cluster service on the second node begins exactly as for the first node. During installation of the second node, the first node must be running.
Follow the same procedures used for installing Cluster service on the first node, with the following differences:
In the Create or Join a Cluster dialog box, select The second or next node in the cluster, and click Next.
Enter the cluster name that was previously created (in this example, MyCluster), and click Next.
Leave Connect to cluster as unchecked. The Cluster Service Configuration Wizard will automatically supply the name of the user account selected during the installation of the first node. Always use the same account used when setting up the first cluster node.
Enter the password for the account (if there is one) and click Next.
At the next dialog box, click Finish to complete configuration.
The Cluster service will start. Click OK.
Close Add/Remove Programs.
If you are installing additional nodes, repeat these steps to install Cluster service on all other nodes.
Verify Installation
There are several ways to verify a successful installation of Cluster service. Here is a simple one:
Click Start, click Programs, click Administrative Tools, and click Cluster Administrator.

Figure 14: Cluster Resources (clustadmin.bmp)
The presence of two nodes (HQ-RES-DC01 and HQ-RES-DC02 in Figure 14 above) shows that a cluster exists and is in operation.
Right-click the group Disk Group 1 and select the option Move. The group and all its resources will be moved to another node. After a short period of time, the disk containing drives F: and G: will be brought online on the second node. If you watch the screen, you will see this shift. Close the Cluster Administrator snap-in.
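The same failover test can be driven with cluster.exe; a hedged sketch using the example names in this guide (the /move option moves the group to another node, and the second command lists group ownership afterward):

cluster MyCluster group "Disk Group 1" /move
cluster MyCluster group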
Congratulations. You have completed the installation of Cluster service on all nodes. The server cluster is fully operational. You are now ready to install cluster resources such as file shares and print spoolers, cluster-aware services such as IIS, Message Queuing, Distributed Transaction Coordinator, DHCP, and WINS, or cluster-aware applications such as Exchange or SQL Server.
For Additional Information
This guide covers a simple installation of Cluster service. For more articles and papers on Windows 2000 Server, Windows 2000 Advanced Server, and Windows 2000 Cluster service, see the Windows 2000 Web site. For information on installing DHCP, Active Directory, and other services, see Windows 2000 Online Help, the Windows 2000 Planning and Deployment Guide, and the Windows 2000 Resource Kit.
Appendix A
This appendix is provided as a generic instruction set for SCSI drive installations. If the SCSI hard disk vendor's instructions conflict with the instructions here, always use the instructions supplied by the vendor.
The SCSI bus listed in the hardware requirements must be configured prior to installation of Cluster service. This includes:
Configuring the SCSI devices.
Configuring the SCSI controllers and hard disks to work properly on a shared SCSI bus.
Properly terminating the bus. The shared SCSI bus must have a terminator at each end of the bus. It is possible to have multiple shared SCSI buses between the nodes of a cluster.
In addition to the information on the following pages, refer to the documentation from the manufacturer of the SCSI device or the SCSI specifications, which can be ordered from the American National Standards Institute (ANSI). The ANSI web site contains a catalog that can be searched for the SCSI specifications.
Configuring the SCSI Devices
Each device on the shared SCSI bus must have a unique SCSI ID. Since most SCSI controllers default to SCSI ID 7, part of configuring the shared SCSI bus will be to change the SCSI ID on one controller to a different SCSI ID, such as SCSI ID 6. If there is more than one disk that will be on the shared SCSI bus, each disk must also have a unique SCSI ID.
Some SCSI controllers reset the SCSI bus when they initialize at boot time. If this occurs, the bus reset can interrupt any data transfers between the other node and disks on the shared SCSI bus. Therefore, SCSI bus resets should be disabled if possible.
Terminating the Shared SCSI Bus
Y cables can be connected to devices if the device is at the end of the SCSI bus. A terminator can then be attached to one branch of the Y cable to terminate the SCSI bus. This method of termination requires either disabling or removing any internal terminators the device may have.
Trilink connectors can be connected to certain devices. If the device is at the end of the bus, a trilink connector can be used to terminate the bus. This method of termination requires either disabling or removing any internal terminators the device may have.
Y cables and trilink connectors are the recommended termination methods, because they provide termination even when one node is not online.
Note: Any devices that are not at the end of the shared bus must have their internal termination disabled.
microsoft Cluster Service Step-by-Step Guide to Installing

Step-by-Step Guide to Installing Cluster Service :
This step-by-step guide provides instructions for installing Cluster service on servers running the Windows® 2000 Advanced Server and Windows 2000 Datacenter Server operating systems. The guide describes the process of installing the Cluster service on cluster nodes. It is not intended to explain how to install cluster applications. Rather, it guides you through the process of installing a typical, two-node cluster itself.
On This Page
Introduction Checklists for Cluster Server Installation Cluster Installation Install Cluster Service software Verify Installation For Additional Information Appendix A
Introduction
A server cluster is a group of independent servers running Cluster service and working collectively as a single system. Server clusters provide high-availability, scalability, and manageability for resources and applications by grouping multiple servers running Windows® 2000 Advanced Server or Windows 2000 Datacenter Server.
The purpose of server clusters is to preserve client access to applications and resources during failures and planned outages. If one of the servers in the cluster is unavailable due to failure or maintenance, resources and applications move to another available cluster node.
For clustered systems, the term high availability is used rather than fault-tolerant, as fault tolerant technology offers a higher level of resilience and recovery. Fault-tolerant servers typically use a high degree of hardware redundancy plus specialized software to provide near-instantaneous recovery from any single hardware or software fault. These solutions cost significantly more than a clustering solution because organizations must pay for redundant hardware that waits idly for a fault. Fault-tolerant servers are used for applications that support high-value, high-rate transactions such as check clearinghouses, Automated Teller Machines (ATMs), or stock exchanges.
While Cluster service does not guarantee non-stop operation, it provides availability sufficient for most mission-critical applications. Cluster service can monitor applications and resources, automatically recognizing and recovering from many failure conditions. This provides greater flexibility in managing the workload within a cluster, and improves overall availability of the system.
Cluster service benefits include:
High Availability. With Cluster service, ownership of resources such as disk drives and IP addresses is automatically transferred from a failed server to a surviving server. When a system or application in the cluster fails, the cluster software restarts the failed application on a surviving server, or disperses the work from the failed node to the remaining nodes. As a result, users experience only a momentary pause in service.
Failback. Cluster service automatically re-balances the workload in a cluster when a failed server comes back online.
Manageability. You can use the Cluster Administrator to manage a cluster as a single system and to manage applications as if they were running on a single server. You can move applications to different servers within the cluster by dragging and dropping cluster objects. You can move data to different servers in the same way. This can be used to manually balance server workloads and to unload servers for planned maintenance. You can also monitor the status of the cluster, all nodes and resources from anywhere on the network.
Scalability. Cluster services can grow to meet rising demands. When the overall load for a cluster-aware application exceeds the capabilities of the cluster, additional nodes can be added.
This paper provides instructions for installing Cluster service on servers running Windows 2000 Advanced Server and Windows 2000 Datacenter Server. It describes the process of installing the Cluster service on cluster nodes. It is not intended to explain how to install cluster applications, but rather to guide you through the process of installing a typical, two-node cluster itself.
Top of page
Checklists for Cluster Server Installation
This checklist assists you in preparing for installation. Step-by-step instructions begin after the checklist.
Software Requirements
Microsoft Windows 2000 Advanced Server or Windows 2000 Datacenter Server installed on all computers in the cluster.
A name resolution method such as Domain Naming System (DNS), Windows Internet Naming System (WINS), HOSTS, etc.
Terminal Server to allow remote cluster administration is recommended.
Hardware Requirements
The hardware for a Cluster service node must meet the hardware requirements for Windows 2000 Advanced Server or Windows 2000 Datacenter Server. These requirements can be found at The Product Compatibility Search page
Cluster hardware must be on the Cluster Service Hardware Compatibility List (HCL). The latest version of the Cluster Service HCL can be found by going to the Windows Hardware Compatibility List and then searching on Cluster.
Two HCL-approved computers, each with the following:
A boot disk with Windows 2000 Advanced Server or Windows 2000 Datacenter Server installed. The boot disk cannot be on the shared storage bus described below.
A separate PCI storage host adapter (SCSI or Fibre Channel) for the shared disks. This is in addition to the boot disk adapter.
Two PCI network adapters on each machine in the cluster.
An HCL-approved external disk storage unit that connects to all computers. This will be used as the clustered disk. A redundant array of independent disks (RAID) is recommended.
Storage cables to attach the shared storage device to all computers. Refer to the manufacturers' instructions for configuring storage devices. If an SCSI bus is used, see Appendix A for additional information.
All hardware should be identical, slot for slot, card for card, for all nodes. This will make configuration easier and eliminate potential compatibility problems.
Network Requirements
A unique NetBIOS cluster name.
Five unique, static IP addresses: two for the network adapters on the private network, two for the network adapters on the public network, and one for the cluster itself.
A domain user account for Cluster service (all nodes must be members of the same domain).
Each node should have two network adapters—one for connection to the public network and the other for the node-to-node private cluster network. If you use only one network adapter for both connections, your configuration is unsupported. A separate private network adapter is required for HCL certification.
Shared Disk Requirements:
All shared disks, including the quorum disk, must be physically attached to a shared bus.
Verify that disks attached to the shared bus can be seen from all nodes. This can be checked at the host adapter setup level. Please refer to the manufacturer's documentation for adapter-specific instructions.
SCSI devices must be assigned unique SCSI identification numbers and properly terminated, as per manufacturer's instructions.
All shared disks must be configured as basic (not dynamic).
All partitions on the disks must be formatted as NTFS.
While not required, the use of fault-tolerant RAID configurations is strongly recommended for all disks. The key concept here is fault-tolerant raid configurations—not stripe sets without parity.
Top of page
Cluster Installation
Installation Overview
During the installation process, some nodes will be shut down and some nodes will be rebooted. These steps are necessary to guarantee that the data on disks that are attached to the shared storage bus is not lost or corrupted. This can happen when multiple nodes try to simultaneously write to the same disk that is not yet protected by the cluster software.
Use Table 1 below to determine which nodes and storage devices should be powered on during each step.
The steps in this guide are for a two-node cluster. However, if you are installing a cluster with more than two nodes, you can use the Node 2 column to determine the required state of other nodes.
Table 1 Power Sequencing Table for Cluster Installation
Step
Node 1
Node 2
Storage
Comments
Setting Up Networks
On
On
Off
Verify that all storage devices on the shared bus are powered off. Power on all nodes.
Setting up Shared Disks
On
Off
On
Shutdown all nodes. Power on the shared storage, then power on the first node.
Verifying Disk Configuration
Off
On
On
Shut down first node, power on second node. Repeat for nodes 3 and 4 if necessary.
Configuring the First Node
On
Off
On
Shutdown all nodes; power on the first node.
Configuring the Second Node
On
On
On
Power on the second node after the first node was successfully configured. Repeat for nodes 3 and 4 if necessary.
Post-installation
On
On
On
At this point all nodes should be on.
Several steps must be taken prior to the installation of the Cluster service software. These steps are:
Installing Windows 2000 Advanced Server or Windows 2000 Datacenter Server on each node.
Setting up networks.
Setting up disks.
Perform these steps on every cluster node before proceeding with the installation of Cluster service on the first node.
To configure the Cluster service on a Windows 2000-based server, your account must have administrative permissions on each node. All nodes must be member servers, or all nodes must be domain controllers within the same domain. It is not acceptable to have a mix of domain controllers and member servers in a cluster.
Installing the Windows 2000 Operating System
Please refer to the documentation you received with the Windows 2000 operating system packages to install the system on each node in the cluster.
This step-by-step guide uses the naming structure from the "Step-by-Step Guide to a Common Infrastructure for Windows 2000 Server Deployment" http://www.microsoft.com/windows2000/techinfo/planning/server/serversteps.asp. However, you can use any names.
You must be logged on as an administrator prior to installation of Cluster service.
Setting up Networks
Note: For this section, power down all shared storage devices and then power up all nodes. Do not let both nodes access the shared storage devices at the same time until the Cluster service is installed on at least one node and that node is online.
Each cluster node requires at least two network adapters—one to connect to a public network, and one to connect to a private network consisting of cluster nodes only.
The private network adapter establishes node-to-node communication, cluster status signals, and cluster management. Each node's public network adapter connects the cluster to the public network where clients reside.
Verify that all network connections are correct, with private network adapters connected to other private network adapters only, and public network adapters connected to the public network. The connections are illustrated in Figure 1 below. Run these steps on each cluster node before proceeding with shared disk setup.

Figure 1: Example of two-node cluster (clusterpic.vsd)
Configuring the Private Network Adapter
Perform these steps on the first node in your cluster.
Right-click My Network Places and then click Properties.
Right-click the Local Area Connection 2 icon.
Note: Which network adapter is private and which is public depends upon your wiring. For the purposes of this document, the first network adapter (Local Area Connection) is connected to the public network, and the second network adapter (Local Area Connection 2) is connected to the private cluster network. This may not be the case in your network.
Click Status. The Local Area Connection 2 Status window shows the connection status, as well as the speed of connection. If the window shows that the network is disconnected, examine cables and connections to resolve the problem before proceeding. Click Close.
Right-click Local Area Connection 2 again, click Properties, and click Configure.
Click Advanced. The window in Figure 2 should appear.
Network adapters on the private network should be set to the actual speed of the network, rather than the default automated speed selection. Select your network speed from the drop-down list. Do not use an Auto-select setting for speed. Some adapters may drop packets while determining the speed. To set the network adapter speed, click the appropriate option such as Media Type or Speed.

Figure 2: Advanced Adapter Configuration (advanced.bmp)
All network adapters in the cluster that are attached to the same network must be identically configured to use the same Duplex Mode, Flow Control, Media Type, and so on. These settings should remain the same even if the hardware is different.
Note: We highly recommend that you use identical network adapters throughout the cluster network.
Click Transmission Control Protocol/Internet Protocol (TCP/IP).
Click Properties.
Click the radio-button for Use the following IP address and type in the following address: 10.1.1.1. (Use 10.1.1.2 for the second node.)
Type in a subnet mask of 255.0.0.0.
Click the Advanced radio button and select the WINS tab. Select Disable NetBIOS over TCP/IP. Click OK to return to the previous menu. Do this step for the private network adapter only.
The window should now look like Figure 3 below.

Figure 3: Private Connector IP Address (ip10111.bmp)
Configuring the Public Network Adapter
Note: While the public network adapter's IP address can be automatically obtained if a DHCP server is available, this is not recommended for cluster nodes. We strongly recommend setting static IP addresses for all network adapters in the cluster, both private and public. If IP addresses are obtained via DHCP, access to cluster nodes could become unavailable if the DHCP server goes down. If you must use DHCP for your public network adapter, use long lease periods to assure that the dynamically assigned lease address remains valid even if the DHCP service is temporarily lost. In all cases, set static IP addresses for the private network connector. Keep in mind that Cluster service will recognize only one network interface per subnet. If you need assistance with TCP/IP addressing in Windows 2000, please see Windows 2000 Online Help.
Rename the Local Area Network Icons
We recommend changing the names of the network connections for clarity. For example, you might want to change the name of Local Area Connection (2) to something like Private Cluster Connection. The naming will help you identify a network and correctly assign its role.
Right-click the Local Area Connection 2 icon.
Click Rename.
Type Private Cluster Connection into the textbox and press Enter.
Repeat steps 1-3 and rename the public network adapter as Public Cluster Connection.

Figure 4: Renamed connections (connames.bmp)
The renamed icons should look like those in Figure 4 above. Close the Networking and Dial-up Connections window. The new connection names automatically replicate to other cluster servers as they are brought online.
Verifying Connectivity and Name Resolution
To verify that the private and public networks are communicating properly, perform the following steps for each network adapter in each node. You need to know the IP address for each network adapter in the cluster. If you do not already have this information, you can retrieve it using the ipconfig command on each node:
Click Start, click Run and type cmd in the text box. Click OK.
Type ipconfig /all and press Enter. IP information should display for all network adapters in the machine.
If you do not already have the command prompt on your screen, click Start, click Run and typing cmd in the text box. Click OK.
Type ping ipaddress where ipaddress is the IP address for the corresponding network adapter in the other node. For example, assume that the IP addresses are set as follows:
Node
Network Name
Network Adapter IP Address
1
Public Cluster Connection
172.16.12.12.
1
Private Cluster Connection
10.1.1.1
2
Public Cluster Connection
172.16.12.14
2
Private Cluster Connection
10.1.1.2
In this example, you would type ping 172.16.12.14 and ping 10.1.1.2 from Node 1, and you would type ping 172.16.12.12 and 10.1.1.1 from Node 2.
To verify name resolution, ping each node from a client using the node's machine name instead of its IP number. For example, to verify name resolution for the first cluster node, type ping hq-res-dc01 from any client.
Verifying Domain Membership
All nodes in the cluster must be members of the same domain and able to access a domain controller and a DNS Server. They can be configured as member servers or domain controllers. If you decide to configure one node as a domain controller, you should configure all other nodes as domain controllers in the same domain as well. In this document, all nodes are configured as domain controllers.
Note: See More Information at the end of this document for links to additional Windows 2000 documentation that will help you understand and configure domain controllers, DNS, and DHCP.
Right-click My Computer, and click Properties.
Click Network Identification. The System Properties dialog box displays the full computer name and domain. In our example, the domain name is reskit.com.
If you are using member servers and need to join a domain, you can do so at this time. Click Properties and following the on-screen instructions for joining a domain.
Close the System Properties and My Computer windows.
Setting Up a Cluster User Account
The Cluster service requires a domain user account under which the Cluster service can run. This user account must be created before installing Cluster service, because setup requires a user name and password. This user account should not belong to a user on the domain.
Click Start, point to Programs, point to Administrative Tools, and click Active Directory Users and Computers
Click the + to expand Reskit.com (if it is not already expanded).
Click Users.
Right-click Users, point to New, and click User.
Type in the cluster name as shown in Figure 5 below and click Next.

Figure 5: Add Cluster User (clusteruser.bmp)
Set the password settings to User Cannot Change Password and Password Never Expires. Click Next and then click Finish to create this user.
Note: If your administrative security policy does not allow the use of passwords that never expire, you must renew the password and update the cluster service configuration on each node before password expiration.
Right-click Cluster in the left pane of the Active Directory Users and Computers snap-in. Select Properties from the context menu.
Click Add Members to a Group.
Click Administrators and click OK. This gives the new user account administrative privileges on this computer.
Close the Active Directory Users and Computers snap-in.
Setting Up Shared Disks
Warning: Make sure that Windows 2000 Advanced Server or Windows 2000 Datacenter Server and the Cluster service are installed and running on one node before starting an operating system on another node. If the operating system is started on other nodes before the Cluster service is installed, configured and running on at least one node, the cluster disks will probably be corrupted.
To proceed, power off all nodes. Power up the shared storage devices and then power up node one.
About the Quorum Disk
The quorum disk is used to store cluster configuration database checkpoints and log files that help manage the cluster. We make the following quorum disk recommendations:
Create a small partition (min 50MB) to be used as a quorum disk. We generally recommend a quorum disk to be 500MB.)
Dedicate a separate disk for a quorum resource. As the failure of the quorum disk would cause the entire cluster to fail, we strongly recommend you use a volume on a RAID disk array.
During the Cluster service installation, you must provide the drive letter for the quorum disk. In our example, we use the letter Q.
Configuring Shared Disks
Right click My Computer, click Manage, and click Storage.
Double-click Disk Management
Verify that all shared disks are formatted as NTFS and are designated as Basic. If you connect a new drive, the Write Signature and Upgrade Disk Wizard starts automatically. If this happens, click Next to go through the wizard. The wizard sets the disk to dynamic. To reset the disk to Basic, right-click Disk # (where # specifies the disk you are working with) and click Revert to Basic Disk.
Right-click unallocated disk space
Click Create Partition…
The Create Partition Wizard begins. Click Next twice.
Enter the desired partition size in MB and click Next.
Accept the default drive letter assignment by clicking Next.
Click Next to format and create partition.
Assigning Drive Letters
After the bus, disks, and partitions have been configured, drive letters must be assigned to each partition on each clustered disk.
Note: Mountpoints is a feature of the file system that allows you to mount a file system using an existing directory without assigning a drive letter. Mountpoints is not supported on clusters. Any external disk used as a cluster resource must be partitioned using NTFS partitions and must have a drive letter assigned to it.
Right-click the desired partition and select Change Drive Letter and Path.
Select a new drive letter.
Repeat steps 1 and 2 for each shared disk.

Figure 6: Disks with Drive Letters Assigned (drives.bmp)
When finished, the Computer Management window should look like Figure 6 above. Now close the Computer Management window.
Verifying Disk Access and Functionality
Click Start, click Programs, click Accessories, and click Notepad.
Type some words into Notepad and use the File/Save As command to save it as a test file called test.txt. Close Notepad.
Double-click the My Documents icon.
Right-click test.txt and click Copy
Close the window.
Double-click My Computer.
Double-click a shared drive partition.
Click Edit and click Paste.
A copy of the file should now reside on the shared disk.
Double-click test.txt to open it on the shared disk. Close the file.
Highlight the file and press the Del key to delete it from the clustered disk.
Repeat the process for all clustered disks to verify they can be accessed from the first node.
At this time, shut down the first node, power on the second node, and repeat the Verifying Disk Access and Functionality steps above. Repeat again for any additional nodes. When you have verified that all nodes can read from and write to the disks, turn off all nodes except the first and continue with this guide. (A small script such as the sketch below can automate the write/read/delete check on each shared drive.)
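As a convenience, the manual write/read/delete test above can be scripted. The following is a minimal sketch in Python; the drive letters Q:, F:, and G: are the examples used in this guide and should be replaced with your own shared drives.
# Minimal sketch: write, read back, and delete a small test file on each shared
# drive to confirm this node can access the clustered disks.
# The drive letters are examples from this guide; replace them with your own.
import os

SHARED_DRIVES = ["Q:\\", "F:\\", "G:\\"]
TEST_NAME = "test.txt"
TEST_TEXT = "cluster disk access test\n"

for drive in SHARED_DRIVES:
    path = os.path.join(drive, TEST_NAME)
    try:
        with open(path, "w") as f:       # write the test file
            f.write(TEST_TEXT)
        with open(path, "r") as f:       # read it back
            ok = (f.read() == TEST_TEXT)
        os.remove(path)                  # clean up, as in the manual steps
        print(f"{drive} read/write check {'passed' if ok else 'FAILED'}")
    except OSError as err:
        print(f"{drive} is not accessible from this node: {err}")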
Install Cluster Service software
Configuring the First Node
Note: During installation of Cluster service on the first node, all other nodes must either be turned off or stopped before Windows 2000 boots. All shared storage devices should be powered up.
In the first phase of installation, all initial cluster configuration information must be supplied so that the cluster can be created. This is accomplished using the Cluster Service Configuration Wizard.
Click Start, click Settings, and click Control Panel.
Double-click Add/Remove Programs.
Double-click Add/Remove Windows Components.
Select Cluster Service. Click Next.
Cluster service files are located on the Windows 2000 Advanced Server or Windows 2000 Datacenter Server CD-ROM. Enter x:\i386 (where x is the drive letter of your CD-ROM). If Windows 2000 was installed from a network, enter the appropriate network path instead. (If the Windows 2000 Setup splash screen displays, close it.) Click OK.
Click Next.
The window shown in Figure 7 below appears. Click I Understand to accept the condition that Cluster service is supported on hardware from the Hardware Compatibility List only.

Figure 7: Hardware Configuration Certification Screen (hcl.bmp)
Because this is the first node in the cluster, you must create the cluster itself. Select The first node in the cluster, as shown in Figure 8 below and then click Next.

Figure 8: Create New Cluster (clustcreate.bmp)
Enter a name for the cluster (up to 15 characters), and click Next. (In our example, we name the cluster MyCluster.)
Type the user name of the cluster service account that was created during the pre-installation. (In our example, this user name is cluster.) Leave the password blank. Type the domain name, and click Next.
Note: You would normally provide a secure password for this user account.
At this point the Cluster Service Configuration Wizard validates the user account and password.
Click Next.
Configuring Cluster Disks
Note: By default, all SCSI disks not residing on the same bus as the system disk will appear in the Managed Disks list. Therefore, if the node has multiple SCSI buses, some disks may be listed that are not to be used as shared storage (for example, an internal SCSI drive). Such disks should be removed from the Managed Disks list.
The Add or Remove Managed Disks dialog box shown in Figure 9 specifies which disks on the shared SCSI bus will be used by Cluster service. Add or remove disks as necessary and then click Next.

Figure 9: Add or Remove Managed Disks (manageddisks.bmp)
Note that because logical drives F: and G: exist on a single hard disk, they are seen by Cluster service as a single resource. The first partition of the first disk is selected as the quorum resource by default. Change this to denote the small partition that was created as the quorum disk (in our example, drive Q). Click Next.
Note: In production clustering scenarios you must use more than one private network for cluster communication to avoid having a single point of failure. Cluster service can use private networks for cluster status signals and cluster management. This provides more security than using a public network for these roles. You can also use a public network for cluster management, or you can use a mixed network for both private and public communications. In any case, make sure at least two networks are used for cluster communication, as using a single network for node-to-node communication represents a potential single point of failure. We recommend that multiple networks be used, with at least one network configured as a private link between nodes and other connections through a public network. If you have more than one private network, make sure that each uses a different subnet, as Cluster service recognizes only one network interface per subnet.
This document is built on the assumption that only two networks are in use. It shows you how to configure these networks as one mixed and one private network.
The order in which the Cluster Service Configuration Wizard presents these networks may vary. In this example, the public network is presented first.
Click Next in the Configuring Cluster Networks dialog box.
Make sure that the network name and IP address correspond to the network interface for the public network.
Check the box Enable this network for cluster use.
Select the option All communications (mixed network) as shown in Figure 10 below.
Click Next.

Figure 10: Public Network Connection (pubclustnet.bmp)
The next dialog box shown in Figure 11 configures the private network. Make sure that the network name and IP address correspond to the network interface used for the private network.
Check the box Enable this network for cluster use.
Select the option Internal cluster communications only.

Figure 11: Private Network Connection (privclustnet.bmp)
Click Next.
In this example, both networks are configured so that both can be used for internal cluster communication. The next dialog window offers an option to modify the order in which the networks are used. Because Private Cluster Connection represents a direct connection between nodes, it is left at the top of the list. In normal operation this connection will be used for cluster communication. If the Private Cluster Connection fails, the Cluster service will automatically switch to the next network on the list, in this case Public Cluster Connection. Make sure the first connection in the list is Private Cluster Connection and click Next.
Important: Always set the order of the connections so that the Private Cluster Connection is first in the list.
Enter the unique cluster IP address (172.16.12.20) and Subnet mask (255.255.252.0), and click Next.

Figure 12: Cluster IP Address (clusterip.bmp)
The Cluster Service Configuration Wizard shown in Figure 12 automatically associates the cluster IP address with one of the public or mixed networks. It uses the subnet mask to select the correct network.
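To see how the subnet mask determines that association, the following minimal sketch uses Python's ipaddress module with the cluster IP address and mask from this example. The private network address shown is an assumed illustration, not a value from this guide.
# Minimal sketch: reproduce the subnet test used to associate the cluster IP
# address with a network. 172.16.12.20/255.255.252.0 comes from this example;
# the private network address below is an assumption for illustration.
import ipaddress

cluster_ip = ipaddress.ip_address("172.16.12.20")
public_net = ipaddress.ip_network("172.16.12.0/255.255.252.0")   # public (mixed) network
private_net = ipaddress.ip_network("10.1.1.0/255.255.255.0")     # assumed private link

for name, net in [("Public Cluster Connection", public_net),
                  ("Private Cluster Connection", private_net)]:
    print(f"{cluster_ip} in {name} ({net}): {cluster_ip in net}")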
Click Finish to complete the cluster configuration on the first node.
The Cluster Service Setup Wizard completes the setup process for the first node by copying the files needed to complete the installation of Cluster service. After the files are copied, the Cluster service registry entries are created, the log files on the quorum resource are created, and the Cluster service is started on the first node.
A dialog box appears telling you that Cluster service has started successfully.
Click OK.
Close the Add/Remove Programs window.
Validating the Cluster Installation
Use the Cluster Administrator snap-in to validate the Cluster service installation on the first node.
Click Start, click Programs, click Administrative Tools, and click Cluster Administrator.

Figure 13: Cluster Administrator (1nodeadmin.bmp)
If your snap-in window is similar to that shown above in Figure 13, your Cluster service was successfully installed on the first node. You are now ready to install Cluster service on the second node.
Configuring the Second Node
Note: For this section, leave node one and all shared disks powered on. Power up the second node.
Installing Cluster service on the second node requires less time than on the first node. Setup configures the Cluster service network settings on the second node based on the configuration of the first node.
Installation of Cluster service on the second node begins exactly as for the first node. During installation of the second node, the first node must be running.
Follow the same procedures used for installing Cluster service on the first node, with the following differences:
In the Create or Join a Cluster dialog box, select The second or next node in the cluster, and click Next.
Enter the cluster name that was previously created (in this example, MyCluster), and click Next.
Leave Connect to cluster unchecked. The Cluster Service Configuration Wizard automatically supplies the name of the user account selected during the installation of the first node. Always use the same account used when setting up the first cluster node.
Enter the password for the account (if there is one) and click Next.
At the next dialog box, click Finish to complete configuration.
The Cluster service will start. Click OK.
Close Add/Remove Programs.
If you are installing additional nodes, repeat these steps to install Cluster service on all other nodes.
Verify Installation
There are several ways to verify a successful installation of Cluster service. Here is a simple one:
Click Start, click Programs, click Administrative Tools, and click Cluster Administrator.

Figure 14: Cluster Resources (clustadmin.bmp)
The presence of two nodes (HQ-RES-DC01 and HQ-RES-DC02 in Figure 14 above) shows that a cluster exists and is in operation.
Right-click the group Disk Group 1 and select Move. The group and all of its resources will be moved to another node. After a short period of time, disk F: G: will be brought online on the second node. If you watch the screen, you will see this shift. Close the Cluster Administrator snap-in.
Congratulations. You have completed the installation of Cluster service on all nodes. The server cluster is fully operational. You are now ready to install cluster resources such as file shares and print spoolers, cluster-aware services such as IIS, Message Queuing, Distributed Transaction Coordinator, DHCP, and WINS, or cluster-aware applications such as Exchange or SQL Server.
For Additional Information
This guide covers a simple installation of Cluster service. For more articles and papers on Windows 2000 Server, Windows 2000 Advanced Server, and Windows 2000 Cluster service, see: The Windows 2000 Web site. For information on installing DHCP, Active Directory, and other services, see Windows 2000 Online Help, the Windows 2000 Planning and Deployment Guide, and the Windows 2000 Resource Kit.
Appendix A
This appendix is provided as a generic instruction set for SCSI drive installations. If the SCSI hard disk vendor's instructions conflict with the instructions here, always use the instructions supplied by the vendor.
The SCSI bus listed in the hardware requirements must be configured prior to installation of Cluster service. This includes:
Configuring the SCSI devices.
Configuring the SCSI controllers and hard disks to work properly on a shared SCSI bus.
Properly terminating the bus. The shared SCSI bus must have a terminator at each end of the bus. It is possible to have multiple shared SCSI buses between the nodes of a cluster.
In addition to the information on the following pages, refer to the documentation from the manufacturer of the SCSI device or the SCSI specifications, which can be ordered from the American National Standards Institute (ANSI). The ANSI web site contains a catalog that can be searched for the SCSI specifications.
Configuring the SCSI Devices
Each device on the shared SCSI bus must have a unique SCSI ID. Since most SCSI controllers default to SCSI ID 7, part of configuring the shared SCSI bus will be to change the SCSI ID on one controller to a different SCSI ID, such as SCSI ID 6. If there is more than one disk that will be on the shared SCSI bus, each disk must also have a unique SCSI ID.
Some SCSI controllers reset the SCSI bus when they initialize at boot time. If this occurs, the bus reset can interrupt any data transfers between the other node and disks on the shared SCSI bus. Therefore, SCSI bus resets should be disabled if possible.
Terminating the Shared SCSI Bus
Y cables can be connected to devices if the device is at the end of the SCSI bus. A terminator can then be attached to one branch of the Y cable to terminate the SCSI bus. This method of termination requires either disabling or removing any internal terminators the device may have.
Trilink connectors can be connected to certain devices. If the device is at the end of the bus, a trilink connector can be used to terminate the bus. This method of termination requires either disabling or removing any internal terminators the device may have.
Y cables and trilink connectors are the recommended termination methods, because they provide termination even when one node is not online.
Note: Any devices that are not at the end of the shared bus must have their internal termination disabled.
1. See Appendix A for information about installing and terminating SCSI devices.

Exchange Database Size Limit Configuration and Management

Prior to Microsoft Exchange Server 2003 Service Pack 2 (SP2), there was no method to configure database size limits for Exchange Server 2003. Exchange Server 2003 SP2 introduces the following new features:
For the Standard Edition, the default configured database size limit will now be 18 GB, a 2 GB addition to the previous limit, with a new maximum size of 75 GB.
For the Enterprise Edition, there is no default configured database size limit, and no software set maximum size.
Both versions of Exchange Server 2003 with SP2 have the ability to configure a limit, a warning threshold, and a warning interval set through registry keys.
The size check done against the database now uses the logical database size. Empty or white space in the database does not count against the configured database size limit; therefore, no offline defragmentation is required to recover from exceeding the configured or licensed database limits.
Limit checks, done at regular intervals, are now controlled by the store process instead of JET. The default time interval is 24 hours and this interval is configurable through the registry.
Registry Settings

The database size limit registry keys are read when the database mounts (not when the service starts up), and when each limit check task runs.
You must set registry parameters for each database targeted for size limit modification. The registry entries should be located under each database entry in the local server registry. Accordingly, you must reset the registry keys manually if the server has to be rebuilt using the /disasterrecovery setup switch.
Note:
Incorrectly editing the registry can cause serious problems that may require you to reinstall your operating system. Problems resulting from editing the registry incorrectly may not be resolvable. Before editing the registry, back up any valuable data.
All registry settings discussed in this topic are created in the following registry location:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeIS\\Private-013e2e46-2cd7-4a8e-bfec-0e4652b94b00
The GUID in this key (Private-013e2e46-2cd7-4a8e-bfec-0e4652b94b00) is an example and should match the value of the objectGUID attribute on the database’s Active Directory object.
Note:
By default, registry entries mentioned in this article are not present; when you create the entry, you override the default value set in code.
Note:
All registry values mentioned in this article are in decimal, not hexadecimal.
The following new registry settings are available with SP2:
Database Size Limit in GB
Database Size Buffer in Percentage
Database Size Check Start Time in Hours from Midnight
Database Size Limit in GB

The Database Size Limit in GB setting is the configurable maximum size of a database; it cannot exceed the maximum licensed size of your database. For Standard Edition, you can set the database size limit between 1 and 75 GB. By default, the limit is 18 GB. For Enterprise Edition, you can set the database size limit between 1 and 8,000 GB. By default, there is no limit.
The following registry value controls the configurable database size limit (a scripted example follows):
Data type: REG_DWORD
Name: Database Size Limit in GB
Value (in GB): Standard: 1-75; Enterprise: 1-8000
Default (in GB): Standard: 18; Enterprise: unlimited (8,000)
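As an illustration only, the following minimal sketch uses Python's winreg module to create this DWORD value for one database. The subkey placeholder and the 30 GB value are hypothetical; the GUID is the example quoted earlier in this article, and the full subkey must be adjusted to the actual database entry under MSExchangeIS on your server.
# Minimal sketch: create the Database Size Limit in GB value (decimal REG_DWORD)
# for one database. The subkey placeholder below is hypothetical; adjust it to
# the actual database entry under MSExchangeIS on your server before running.
import winreg

def set_db_size_limit(db_subkey, limit_gb, standard_edition=True):
    upper = 75 if standard_edition else 8000   # licensed ranges from this article
    if not 1 <= limit_gb <= upper:
        raise ValueError(f"limit must be between 1 and {upper} GB")
    key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, db_subkey, 0,
                             winreg.KEY_SET_VALUE)
    try:
        winreg.SetValueEx(key, "Database Size Limit in GB", 0,
                          winreg.REG_DWORD, limit_gb)
    finally:
        winreg.CloseKey(key)

# Example call (commented out), raising a Standard Edition database to 30 GB:
# set_db_size_limit(r"SYSTEM\CurrentControlSet\Services\MSExchangeIS"
#                   r"\<your database entry>"
#                   r"\Private-013e2e46-2cd7-4a8e-bfec-0e4652b94b00", 30)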
Database Size Buffer in Percentage

The Database Size Buffer in Percentage setting is a configurable error threshold that warns you with an event log entry when your database is at or near capacity and will shut down within 24 hours of the event being logged. By default, Exchange Server 2003 SP2 logs events when the database has grown to within 10 percent of the configured database size limit. This threshold is configurable. The smallest buffer is 1 percent of the configured size limit.
The following registry value controls the Database Size Buffer (a short worked example follows the table):
Data type: REG_DWORD
Name: Database Size Buffer in Percentage
Value (in %): 1-100
Default (in %): 10
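The following short worked example, using the default values from this article (18 GB limit, 10 percent buffer, warning event 9688), shows the logical size at which warnings begin.
# Worked example with the defaults from this article: 18 GB limit, 10 percent buffer.
limit_gb = 18
buffer_percent = 10

buffer_gb = limit_gb * buffer_percent / 100      # 1.8 GB
warning_threshold_gb = limit_gb - buffer_gb      # 16.2 GB

print(f"Warning events (ID 9688) begin at about {warning_threshold_gb:.1f} GB of "
      f"logical size, {buffer_gb:.1f} GB below the {limit_gb} GB limit.")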
Database Size Check Start Time in Hours from Midnight

The Database Size Check Start Time in Hours from Midnight setting allows you to configure when the system checks whether your database is over the currently configured Database Size Limit. By default, the database size check happens at 05:00 (5:00 A.M.) every day. This time can be changed. If it is modified, the next check task is scheduled at the new offset hour, and checks at the Database Size Check Interval are skipped until the new start time. (A small sketch after the table shows how the offset maps to the next scheduled check.)
The first database size check will not take the database offline even if the size limit has been exceeded. Because the database does not go offline, you are ensured at least 24 hours of availability after the limit is exceeded with the default settings.
Data type: REG_DWORD
Name: Database Size Check Start Time in Hours from Midnight
Values: 1-24
Default: 5
Description: Determines the hour at which the first database size check occurs after a database is mounted.
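The following minimal sketch shows how the offset hour and the 24-hour interval described above map to the next scheduled check. The simple daily model is an assumption based on this description, not the store's actual scheduler, and an offset of 24 is not handled.
# Minimal sketch: compute the next size-check time from the start-time offset.
# Assumes the simple model described above (a check at the offset hour, then
# every 24 hours); this is an illustration, not the store's actual scheduler.
from datetime import datetime, timedelta

def next_size_check(now, start_hour=5, interval_hours=24):
    candidate = now.replace(hour=start_hour, minute=0, second=0, microsecond=0)
    while candidate <= now:
        candidate += timedelta(hours=interval_hours)
    return candidate

print("Next database size check:", next_size_check(datetime.now()))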
Behavior When the Configured Database Size Limit or Licensed Database Size Limit Is Reached

When a database mounts, the store process compares the physical database size against the Configured Database Size Limit in GB. If the physical size is within or exceeds the configured Database Size Warning Buffer in Percentage, the store performs a logical calculation of the database size. If it is below this warning buffer, there is no need to calculate the free space because the logical size will never exceed the physical size. Generally, the physical size is less than the warning threshold, so the size check should take under a millisecond to complete. If the free space calculation must be performed, the size check may require a few seconds to parse through the database to generate the logical size calculation.
If the Database Size Warning Buffer in Percentage is reached or exceeded, an error event, event ID 9688, is logged in the Application event log.
With Exchange Server 2003 SP2 or later, the server performs the following tasks when the configurable (or default configured) database size limit is reached:
If the first check after a database mount finds the database size above the limit, the database will not be taken offline but an error event (ID 9689) will be logged in the Application event log.
If the database is still over the limit at the second check, an error event will be logged in the Application event log and the database will be taken offline.
After the administrator remounts the database, there are 24 hours (until the next database size check, which runs at 05:00 if the default is set) to take corrective action. The sketch below walks through this decision flow.
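The following minimal sketch restates the decision flow described in this section in code. The numeric inputs are hypothetical, the two-check behavior is modeled with a simple flag, and the event logging is simplified; it is an illustration of the description above, not the store's implementation.
# Minimal sketch of the size-check decision flow described above.
# Sizes are hypothetical values in GB; event IDs 9688 and 9689 are from this article.
def size_check(physical_gb, logical_gb, limit_gb=18, buffer_percent=10,
               previous_check_over_limit=False):
    warn_threshold = limit_gb * (1 - buffer_percent / 100)
    if physical_gb < warn_threshold:
        return "no action: physical size is below the warning buffer"
    # Only now is the slower logical-size calculation needed.
    if logical_gb < warn_threshold:
        return "no action: logical size is below the warning buffer"
    if logical_gb < limit_gb:
        return "log warning event 9688: database is near the configured limit"
    if not previous_check_over_limit:
        return "log error event 9689: over the limit, database stays online"
    return "take the database offline: second consecutive check over the limit"

print(size_check(physical_gb=17.5, logical_gb=16.5))   # warning range
print(size_check(physical_gb=19.0, logical_gb=18.4))   # first check over the limit
print(size_check(physical_gb=19.0, logical_gb=18.4,
                 previous_check_over_limit=True))      # second check, database offline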
Licensed Database Size Limit

Exchange Server 2003 Standard Edition is limited to a single storage group with a single private information store database and a single public folder database. Prior to SP2, each database was limited to 16 GB of total physical size. SP2 increases the licensed database size limit for Exchange Server 2003 Standard Edition from 16 GB to 75 GB; the default configured database size limit will be 18 GB. Exchange Server 2003 Enterprise Edition storage group and Exchange store options do not change with the application of SP2. However, a configurable Exchange store size limit is added to the Enterprise Edition.
Exchange Server 2003 version: licensed limit / default configured limit
Standard Edition before SP2: 16 GB / not applicable
Standard Edition with SP2: 75 GB / 18 GB
Enterprise Edition before SP2: 8,000 GB (unlimited) / not applicable
Enterprise Edition with SP2: 8,000 GB (unlimited) / 8,000 GB
Note:
The current hard coded limit of the JET database is 8,192 GB, or 8 terabytes (TB).
Disaster Recovery Planning Considerations

If you change the size limit of your Exchange databases, you may want to re-evaluate your Exchange database backup and restore plan. Specifically, if you increase the size limit of the Exchange databases, be sure to test your backup and recovery operations using the new database size limits to make sure that you can still meet your service level agreements. For example, if the previous size of a mailbox store was 15 GB and you were able to meet your service level agreement by recovering the data in less than 8 hours, you may no longer be able to recover the database that quickly if you increase the size of a mailbox store to 20 GB or larger.
For information about service level agreements, see "Establishing a Service Level Agreement" in "Setting Availability Goals" in the Exchange 2003 High Availability Guide.
For information about how to configure database size limit options, see "Configure Database Size Limits" in the Exchange Server 2003 SP2 online Help.
