I’m currently at a client who is moving their SRM-protected VMs onto Hitachi VSP arrays. It’s been close to six years since I’ve done any work with Hitachi replication, so I assumed it would have gotten better since then. Unfortunately it hasn’t; it’s exactly the same as it was.

The storage admin, thankfully, is pretty sharp and had replication (or, as Hitachi calls it, pairs) set up already. We installed HORCM (Hitachi Online Remote Copy Manager), which provides the commands used by the SRA, on the SRM servers. This requires far more manual editing of configuration files than should be necessary. Then we installed the Hitachi SRA on both servers.

The goal of this SRM implementation is to enable failover from site A to site B and to use the reprotect option to reverse the replication, enabling failback to site A; currently site B is only a recovery site. Testing the recovery plans using Hitachi’s Copy-on-Write Snapshots is also required.

After creating the horcm configuration files we threw a test VM onto one of the replicated datastores, started testing, and immediately ran into errors.

Looking through the SRM logs, it was obvious the horcm config files were our issue. After a couple of hours of troubleshooting we finally got everything sorted out and working for all of our requirements. Surprisingly (to me at least), there are really no solid “Here’s how to configure SRM with Hitachi Storage” guides or blog posts out there, and don’t even get me started on how bad Hitachi’s documentation is. So here’s the quick and dirty of what you need.

I’m assuming you already have vCenter and SRM installed in both sites, with the SRM servers paired, and that the storage admin has used the paircreate command to create the required pairs.
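(For reference, and purely as an illustration: a TrueCopy pair for the device group used later in this post could be created from CCI with something like the command below, run from the protected-site instance. The group name, fence level, copy pace, and instance number are all placeholders; your storage admin’s actual command will depend on the replication type in use.)

  :: Create pairs for group srm1 with the local volumes as P-VOLs
  paircreate -g srm1 -vl -f never -c 15 -IH10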

  • Install HORCM to the default location (C:\HORCM).
  • Add C:\HORCM\etc to the system PATH.
  • Install the Hitachi SRA; there are no options, just press Next a few times.
  • According to Hitachi’s documentation, two system (not user) environment variables need to be set on both SRM servers: SplitReplication=True and RMSRATMU=1. We found that setting RMSRATMU=1 was one of our issues, so we removed it (see the command sketch after this list).
  • Reboot the SRM servers.
  • The SRM server needs a command device. If the SRM servers are VMs (which I would hope they are), you’ll need to create an RDM in physical compatibility mode and map it to the VM; let Windows write a signature to the device but don’t format it. I found conflicting information on the size it should be, ranging from 30 to 40 MB, so use 50 MB to be safe.
  • To issue commands, HORCM needs to be running as a service. This is done by copying horcm_run.txt from C:\HORCM\tool to horcmX_run.txt, where X is the number for this instance; edit the copied file to set the HORCMINST variable, save and close the file, and run C:\HORCM\tool\svcexe /S=HORCMX /A=C:\HORCM\tool\svcexe.exe, again where X is the instance number. This needs to be done on both SRM servers (see the example commands after this list).
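To make those last few steps concrete, here’s roughly what they look like from an elevated command prompt on each SRM server. The instance number 10 is just an example, and the svcexe command is the one from the step above; setx and inqraid are standard Windows and CCI commands, so verify the output against your own environment.

  :: Set the system-wide variable from Hitachi's documentation
  :: (we removed RMSRATMU=1, so only SplitReplication is set here)
  setx SplitReplication True /M

  :: Confirm Windows sees the RDM as a command device - it should
  :: show an attribute of CM in the inqraid output
  inqraid $Phys -CLI

  :: Register HORCM instance 10 as a Windows service
  :: (after copying horcm_run.txt to horcm10_run.txt and setting HORCMINST)
  C:\HORCM\tool\svcexe /S=HORCM10 /A=C:\HORCM\tool\svcexe.exe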

A note about HORCM instances: if you want to use Hitachi’s Copy-on-Write Snapshots, you need two HORCM instances on the recovery site. The instance ID for the snapshots must be +1 of the instance for the replicated LUNs (LDEVs, as Hitachi calls them), so if you used HORCM10 for the replicated LUNs you must use HORCM11 for the snapshots, or running test failovers will not work.
Once the services have been installed you must create the horcmX.conf files and place them in C:\HORCM\etc; again, X is the HORCM instance ID.

Creating these files correctly is the hardest part of this config. Below are samples that should work for most setups that meet the same requirements we had.
Assuming I used HORCM instance ID 10 on the protected site (I’m going to reserve 11 in case the requirements for using Copy-on-Write in this site change later) and 12 and 13 in the recovery site, I’d have these three conf files in their respective C:\HORCM\etc directories. Be sure to change the IP addresses, CMD device, Serial#, and LDEV#s to match your environment.

horcm10.conf (protected site)

#/************************* For HORCM_MON *************************************/
HORCM_MON
#ip_address     service         poll(10ms)     timeout(30ms)
Local IP        horcm10          1000           3000

#/************************** For HORCM_CMD ************************************/
HORCM_CMD
#dev_name                dev_name                dev_name
\\.\CMD-11111 

#/************************** For HORCM_LDEV ***********************************/
HORCM_LDEV
#dev_group          dev_name    Serial#   CU:LDEV(LDEV#)   MU#
srm1                vm1         11111     0x111a             
srm1                vm2         11111     0x1110             
srm1                vm3         11111     0x1111             

#/************************* For HORCM_INST ************************************/
HORCM_INST
#dev_group  ip_address  service
srm1        Remote IP   horcm12

horcm12.conf (recovery site)

#/************************* For HORCM_MON *************************************/
HORCM_MON
#ip_address     service         poll(10ms)     timeout(30ms)
Local IP        horcm12          1000           3000

#/************************** For HORCM_CMD ************************************/
HORCM_CMD
#dev_name                dev_name                dev_name
\\.\CMD-11112

#/************************** For HORCM_LDEV ***********************************/
HORCM_LDEV
#dev_group      dev_name        Serial#   CU:LDEV(LDEV#)   MU#
srm1                vm1         11112     0x1003             
srm1                vm2         11112     0x1004             
srm1                vm3         11112     0x1005             
srm1-snap       vm1-snap        11112     0x1003            0 
srm1-snap       vm2-snap        11112     0x1004            0 
srm1-snap       vm3-snap        11112     0x1005            0 

#/************************* For HORCM_INST ************************************/
HORCM_INST
#dev_group      ip_address      service
srm1            Remote IP       horcm10
srm1-snap       Local IP        horcm13

horcm13.conf (recovery site)

#/************************* For HORCM_MON *************************************/
HORCM_MON
#ip_address     service         poll(10ms)     timeout(30ms)
Local IP        horcm13          1000           3000

#/************************** For HORCM_CMD ************************************/
HORCM_CMD
#dev_name                dev_name                dev_name
\\.\CMD-11112

#/************************** For HORCM_LDEV ***********************************/
HORCM_LDEV
#dev_group      dev_name        Serial#   CU:LDEV(LDEV#)   MU#
srm1-snap       vm1-snap        11112     0xa000             
srm1-snap       vm2-snap        11112     0xa001             
srm1-snap       vm3-snap        11112     0xa002             

#/************************* For HORCM_INST ************************************/
HORCM_INST
#dev_group          ip_address      service
srm1-snap           Local IP        horcm12

Notice that only horcm12 has any data in the MU (Mirror Unit) column, and only for the snapshot devices. If the MU# is blank for the snapshot devices, running Recovery Plans in Test mode will fail; if there’s a value for every device, everything (including adding the Array Manager) will fail.

Once these files are created and saved to C:\HORCM\etc you can start the HORCMX service(s) on each SRM server.
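For example, on the protected site from the sample files above (the service name follows whatever you passed to svcexe, and pairdisplay’s exact output varies by replication type, but a status of PAIR means replication is healthy):

  net start HORCM10

  :: Sanity-check that the instance can talk to the array and its partner
  pairdisplay -g srm1 -fcx -IH10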

The configuration of the SRA is pretty straightforward: add an Array Manager, give it a name for the site you’re working on, and, since HORCM is local to each server, enter HORCMINST=X in the first field, where X is the local HORCM instance for the replicated LUNs; then enter the username and password that’s been set up for SRM to use.
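As an illustration, using the sample configuration above, the protected site’s Array Manager entries would look something like this (the name and credentials are placeholders for whatever you’ve chosen, and the field labels are approximate):

  Name:         Site-A-VSP
  First field:  HORCMINST=10
  Username:     srm_user
  Password:     ********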

If your HORCM config files are correct, the Array Manager will be added. Repeat for the other site and run your normal SRM tests.
