ORIGINALLY PUBLISHED 21st June 2017
In 7.7.1 we added a long-awaited feature that we have called host clusters. This feature is not exactly rocket science to understand, but here’s a breakdown of a couple of useful scenarios that you can use it for, and some information about how to configure it. Note: host cluster support was CLI-only in 7.7.1; to use the GUI you’ll have to be running 7.8.1 or later.
Major Use Cases
1/ You have a cluster of hosts (e.g. a set of SVC nodes in front of a V7000), you want to make sure that all the hosts have the same SCSI host mappings, but you want the physical machines in separate host objects so you know which WWPN belongs to which server.
Once you’ve made your host cluster, you simply map your new volumes to the host cluster rather than to the individual hosts (a CLI sketch follows after the use cases).
2/ Sometimes you have a cluster where you need a number of shared mappings and some non-shared mappings. The most common example of this is a VMware cluster where the ESXi servers are SAN booted. In this case, you need the datastore volumes shared between every ESXi server, and each ESXi server needs its own SAN boot volume.
Therefore you will need shared mappings, which are used by all hosts in the host cluster, and private mappings, which are only mapped to a single host (or a subset of the hosts) in the host cluster.
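For use case 1, the mapping step itself is a single command per volume. I believe the command for mapping a volume to a host cluster is mkvolumehostclustermap (treat the name and syntax here as an assumption and check the command reference for your code level); with invented names it would look something like:
mkvolumehostclustermap -hostcluster myfirsthostcluster datastore_vol01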
Tip: 7.8.1 allows you to apply a throttle across the entire host cluster (or a host) rather than on a single volume
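As a quick sketch of that tip, a host cluster throttle created on the CLI would look something like the line below. Treat the parameter names as assumptions based on my recollection of mkthrottle and check the command reference for your code level; I believe the bandwidth limit is specified in MBps.
mkthrottle -type hostcluster -hostcluster myfirsthostcluster -iops 20000 -bandwidth 800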
Caveats in 7.7.1 and 7.8.0
Basically, this boils down to the fact that you will not be able to manage host clusters in the GUI, and more importantly you will not be able to map volumes to a host cluster in the GUI. Unless you have a lot of scripts/automation in your environment and you never provision volumes using the GUI, it is probably better to wait until you have upgraded to 7.8.1.
Creating new Host Clusters when running 7.8.1 or later
This use case is fairly well documented and has full GUI support, so I’m not going to go into it in detail here. The CLI command is mkhostcluster and the GUI panel is called Host Clusters. A minimal CLI sketch follows below.
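For reference, here is a minimal sketch of creating a brand new host cluster and adding hosts to it. The cluster and host names are invented, and I am assuming that mkhostcluster without -seedfromhost simply creates an empty host cluster:
mkhostcluster -name myfirsthostcluster
addhostclustermember -host esx1 -hostcluster myfirsthostcluster
addhostclustermember -host esx2 -hostcluster myfirsthostcluster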
Converting your existing configuration into host clusters
Converting multiple hosts into a single host cluster
Many of you reading this already have hosts with existing mappings that you will want to convert to host clusters. The mechanism to do this in the GUI is fairly good (it basically uses the all-in-one approach below, but also allows you to manually specify some private mappings). Even if you use the GUI method, you still need to at least work through the “Before you start” section below.
Once you’ve made the conversion, and you are running 7.8.1 or newer, you will be able to add more hosts and map volumes to the host cluster using the GUI.
Before you start –
- Make sure that no hosts have any throttles associated with them
- You need to convince yourself that the SCSI host mappings for shared volumes are identical on all hosts. If they aren’t, you will not be able to convert to a host cluster without fixing them.
How do you check this?
- Using the CLI – Run
lshostvdiskmap <host ID or name>
for each host in your cluster.
- Using the GUI – The Host Mappings view will provide the data about which volumes are mapped to which host on which SCSI ID. You can also use the filter and sorting features to make the work a little easier.
- Once you have the data, all you need to do is validate that vdisk X has the same SCSI ID on all of the hosts in your host cluster. Repeat this for each shared vdisk (a scripted way to do this check is sketched below).
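If you have a lot of hosts, the CLI check can be scripted from a workstation. This is only a sketch: it assumes passwordless ssh to the system as superuser, invented host names, and that vdisk_name and SCSI_id are the fifth and third columns of lshostvdiskmap output respectively (verify the column order on your code level first):
for host in esx1 esx2 esx3; do
  ssh superuser@cluster_ip "lshostvdiskmap -delim : -nohdr $host" | awk -F: '{print $5, $3}'
done | sort -u | awk '{count[$1]++} END {for (v in count) if (count[v] > 1) print "SCSI ID mismatch for volume:", v}'
Any volume name printed by the final awk appears with more than one SCSI ID somewhere in the cluster and will need fixing before conversion.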
What do you do if they don’t match?
This is a bit harder and I can’t give you a definitive answer. The simplest solution is probably to shut down one physical host from the cluster, undo and redo all its mappings, then power it back on again. When you redo the mappings you should manually specify the SCSI ID to ensure that it matches your target, for example as sketched below. Remember that if you change the SCSI ID of the SAN boot volume then you may need to fix the BIOS to boot from the correct volume. Other solutions may exist.
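A minimal sketch of re-doing one mapping at an explicit SCSI ID, assuming the host has already been shut down (the host name, volume name and SCSI ID are invented):
rmvdiskhostmap -host esx2 datastore_vol01
mkvdiskhostmap -host esx2 -scsi 4 datastore_vol01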
Alternatively – just don’t use host clusters.
Warning: If the host is SVC – you must make all of your hosts match the SCSI ID configuration as recorded in the lsmdisk output of the SVC.
Warning: VMware are saying that they require SCSI IDs to match on all ESXi hosts for the latest VMware 6.5, so it is probably better to fix this sooner rather than later. Here is the VMware document which makes that statement: https://recommender.vmware.com/solution/SOL-12531
I’ll document the two main approaches to converting existing hosts into host clusters here, and the pros/cons of each. For either method it will probably be best to test it on a test cluster first.
Both methods should complete the conversion with no interruption to the hosts.
The one-at-a-time approach
- Pick a master host.
If there are any differences between the hosts then you will make everyone else match the master host.
- Decide whether the master host has any “private mappings”. A private mapping could be a SAN boot LUN for that server, or a volume to store paging space on. The point is that a private LUN is not expected to be mapped to all hosts in the cluster, just one (or maybe a few).
- Create the host cluster using the master host as the template:
mkhostcluster -name myfirsthostcluster -seedfromhost <master host ID or name> -ignoreseedvolume <list of volume IDs or names for the private mappings for the master host - separated by colons>
If all volumes are shared (i.e. there are no private mappings) then simply omit the -ignoreseedvolume option.
- Validate that the corresponding host cluster has the correct private and shared mappings using
lshostvdiskmap <master host ID>
The output will list all of the volumes mapped to that host, as well as telling you whether each mapping is shared (one that’s really mapped to the cluster) or private (only mapped to this host).
- Now that your host cluster exists, you simply need to add the other hosts to the cluster. So for each additional host in the cluster, run:
addhostclustermember -host <additional host ID or name> -hostcluster <host cluster ID or name>
This command will fail if there are any conflicts in the SCSI host mappings, but hopefully you already checked that in the “Before you start” section and everything will go swimmingly. Additional volumes that are mapped to this host and are not already in the list of shared cluster mappings will be preserved as private mappings. A worked example of the whole sequence is sketched after the note below.
Note: a private mapping on host 1 can share the same SCSI ID as a private mapping on a different host, but it cannot use the same SCSI ID as one of the shared mappings (for hopefully obvious reasons).
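To make that concrete, here is a sketch of the whole one-at-a-time sequence for a hypothetical three-host ESXi cluster, where esx1 is the master host and boots from a private volume called esx1_boot (all names are invented for illustration):
mkhostcluster -name vmware_cluster -seedfromhost esx1 -ignoreseedvolume esx1_boot
lshostvdiskmap esx1
addhostclustermember -host esx2 -hostcluster vmware_cluster
addhostclustermember -host esx3 -hostcluster vmware_cluster
lshostvdiskmap esx2
lshostvdiskmap esx3
The first lshostvdiskmap confirms which of the master host’s mappings became shared and which stayed private before any other host is touched; the later ones confirm the same for each host you add.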
Pros:
- You are only ever operating on one host from the host cluster at a time, so if something goes wrong the host cluster can hopefully fail over.
- You have more control over which volumes become shared versus private mappings
Cons:
- This approach requires more CLI commands to complete.
The all-in-one approach
- Collect your list of hosts
- Create the host cluster :
mkhostcluster -name myfirsthostcluster -seedfromhost <complete list of Host IDs or Names, separated with colons>
This command will fail if there are any conflicts in the SCSI host mappings, but hopefully you already checked that in the “Before you start” section and everything will go swimmingly. Volumes that are only mapped to a single host will be preserved as private mappings; the others will be converted to shared mappings. A worked example is sketched after the notes below.
Notes:
- A private mapping on host 1 can share the same SCSI ID as a private mapping on a different host, but it cannot use the same SCSI ID as one of the shared mappings (for hopefully obvious reasons).
- You can use -ignoreseedvolume if you want to make extra sure that the private volumes are kept private.
- When specifying the list of hosts or volumes, ideally make sure that they are in ascending ID order and that there are no duplicates.
- Validate that the corresponding host cluster has the correct private and shared mappings for each host using
lshostvdiskmap <host ID>
The output will list all of the volumes mapped to that host, as well as telling you whether the mapping is shared (one that’s really mapped to the cluster) or private (only mapped to this host).
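As a sketch, using the same hypothetical three-host ESXi cluster as before (all names are invented), the whole all-in-one conversion is just:
mkhostcluster -name vmware_cluster -seedfromhost esx1:esx2:esx3
lshostvdiskmap esx1
lshostvdiskmap esx2
lshostvdiskmap esx3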
Pros:
- Simpler solution, fewer commands
Cons:
- Potential to affect the entire host cluster
- If you accidentally create a cluster without all of the host IDs (especially if you do it with only one host ID) then the system may not be able to correctly work out which mappings are private, and you may end up with a mapping configuration that you don’t want. The only way to fix this will be to unmap/remap, which may require downtime.
Converting one host with many WWPNs into a host cluster
Warning: This procedure will cause the hosts to see paths to volumes going away and coming back again. This will put stress on the multipathing drivers, so I don’t recommend doing it online if you are in any doubt about your multipathing driver stability.
I will assume for this scenario that there are no private mappings and all mappings are shared mappings. I’m also going to use FC WWPNs in all the example commands; however, if you are doing the same thing with iSCSI the commands are basically identical.
This procedure can also be done in the GUI in the host panels.
Note: converting one host into 10 hosts (for example) will increase the number of vdiskhostmaps that you need for that collection of WWPNs by a factor of 10. In a large configuration, please check that you won’t run out of these vdiskhostmaps before you start. At the time of writing the limit is 20,000.
- Create a new host cluster from the megahost (the single host object with all of the WWPNs)
mkhostcluster -name myfirsthostcluster -seedfromhost <host ID or name>
- For each of the sub-hosts (the entities that you want to break out of the megahost into their own host object), repeat the following steps. A complete worked example is sketched after this list.
- Remove one WWPN/iSCSI name from the original host ID
rmhostport -fcwwpn <wwpn> <megahost ID or name>
- Create a new host object using that WWPN/iSCSI name
mkhost -name mysecondhost -fcwwpn <WWPN> -hostcluster <host cluster ID or name>
- Remove each of the additional WWPNs from the megahost one at a time and add them to the new sub-host
rmhostport -fcwwpn <wwpn> <megahost ID or name>
addhostport -fcwwpn <wwpn> <sub-host ID or name>
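Putting it together, here is a rough end-to-end sketch for a hypothetical megahost called esx_all that contains the WWPNs of a second server we want to split out as esx2 (all names and WWPNs are invented for illustration):
mkhostcluster -name vmware_cluster -seedfromhost esx_all
rmhostport -fcwwpn 2100000E1E123456 esx_all
mkhost -name esx2 -fcwwpn 2100000E1E123456 -hostcluster vmware_cluster
rmhostport -fcwwpn 2100000E1E123457 esx_all
addhostport -fcwwpn 2100000E1E123457 esx2
Repeat the rmhostport / mkhost / addhostport pattern for each remaining sub-host you want to break out of the megahost.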