With the NTFS volume mount points feature, you can surpass
the 26-drive-letter limitation. By using volume mount points, you can graft, or
mount, a target partition into a folder on another physical disk. Volume mount points are transparent to programs. This article discusses how to create
volume mount points on a server cluster, and the considerations associated with doing so.
Adding a mount point to a shared disk is the same as adding a mount
point to a non-shared disk. Mount points are added by using the Win32
SetVolumeMountPoint API and are deleted by using DeleteVolumeMountPoint. This has
nothing to do with the disk resource dynamic link library (DLL). The resource
DLL is only concerned with the volume globally unique identifiers
(GUIDs), not the actual mount points.
There are three ways to
add mount points to a system (the procedure is the same for clustered and non-clustered systems):
Logical Disk Manager (Diskmgmt.msc)
Mountvol.exe from the command prompt
Your own program that uses the Win32
SetVolumeMountPoint and DeleteVolumeMountPoint APIs
When you create a volume mount point on a server cluster,
take the following key items into consideration:
Mount points cannot go between clustered and non-clustered disks.
You cannot create mount points on the quorum disk.
If you have a mount point from one shared disk to another,
you must make sure that both disks are in the same group, and that the mounted disk
is dependent on the root disk.
How to set up volume mount points on a clustered server
Log on locally with administrative rights to the node that
owns the root disk into which you are going to graft the directory. This
is the disk that will contain the mount point.
Open Cluster Administrator (CluAdmin.exe), and then pause all other
nodes in the cluster.
Partition the disk, and then create the mount point. To do
so, follow these steps:
To open Disk Management, click Start, click Run, type diskmgmt.msc, and then click OK.
Select the disk that you would like to graft into the directory.
Right-click the free space on the disk, and then click New Partition.
Create a Primary Partition, and then click Next.
Set the size of the partition.
Select Mount in the following empty NTFS
folder, click Browse to browse to the directory in which you would like the mount
point to be created, and then click New Folder (this will be the root into which the volume is mounted). Click
the newly created folder, click OK, and then click Next.
Format the partition by using the NTFS file system.
This is a requirement of both Microsoft Cluster Server
(MSCS) and the volume mount points feature.
Create the new Disk resource, and then set dependencies. To
do so, follow these steps:
Open Cluster Administrator.
Right-click the group that owns the Shared Disk
resource for the disk on which you just created the volume mount point. Click New, and then click Resource.
For the Resource type, click Physical Disk. Verify that it is in the same group as the root disk, and then click Next.
Make sure all nodes are possible owners, and then click Next.
Double-click the root disk to make this volume mount
point disk dependent on the root disk, and then click Next.
In the Disk Parameters window, you should see your disk
listed. It is listed by the disk number and partition number; this is
different from standard MSCS disks, which are listed by drive letter. Click Finish.
Right-click the new Disk resource, and then click Bring Online.
Unpause all other nodes, and then test that you can fail the
group over to every node and access the newly created mount point.
Important The new volume mount point functions on all nodes in the cluster group. However, when you open Windows Explorer or double-click My Computer on any node other than the node where the volume mount point was created, the new volume mount point may be displayed with a folder icon instead of a drive icon. When you right-click the folder icon and then click Properties, the File System value is set to RAW instead of NTFS.
To configure the volume mount point to display correctly on all nodes in the cluster group, follow these steps.
Note These steps must be performed on all the nodes that will own the volume mount point.
As soon as the volume mount point has been created on node1, manually fail over to node2, and then pause all other nodes in the cluster except node2.
On node2, open Disk Management. To do this, follow these steps:
Click Start, click Administrative Tools, and then click Computer Management.
In the Computer Management MMC snap-in, click Disk Management.
In Disk Management, right-click the mounted volume, and then click Change Drive Letter and Paths.
Select the mount point, click Remove, click Add, and then reassign the same drive letter to the mount point.
Unpause all other nodes.
Repeat steps 1 through 5 until the volume mount point is successfully created on all nodes in the cluster group.
Note After you follow these steps, the following conditions may continue to exist:
The volume mount point is still displayed as a folder and not as a drive.
The File System value is still set to RAW and not to NTFS.
However, the mount point continues to function correctly. This is a purely cosmetic issue, not a functional one.
Some best practices for when you are using volume mount points are as follows:
Try to use the root (host) volume exclusively for mount points. The root volume is the volume that is hosting the mount points. This greatly reduces the time that it takes to restore access to the mounted volumes if you have to run a chkdsk. This also reduces the time that it takes to restore from backup on the host volume.
If you use the root (host) volume exclusively for mount points, the size of the host volume only has to be a few megabytes. This reduces the probability that the volume is used for anything other than the mount points.
In a cluster, where high availability is important, you can create redundant mount points on separate host volumes. This helps guarantee that if one root (host) volume is inaccessible, you can still access the data that is on the mounted volume through the other mount point. For example, suppose HOST_VOL1 (D:) and HOST_VOL2 (E:) each contain a Mountpoint1 folder that mounts the user data on LUN3. Customers can then access LUN3 through either D:\Mountpoint1 or E:\Mountpoint1.
Note Because the user data that is on LUN3 depends on both the D: and E: volumes, you have to temporarily remove the dependency on any failed host volume until the volume is back in service. Otherwise, the user data that is on LUN3 remains in a failed state.