ViPR 2.1 - Install data fabric on Commodity nodes

Overview

Use this step-by-step procedure to install the ViPR data fabric on Commodity nodes.

This procedure applies to Commodity nodes where the OEL operating system is installed.

Everything you need to install the data fabric is provided in an OVA file that you obtain from the EMC help desk. The OVA file is named ecs-fabric-installer-<version>.ova.

Prerequisites:

There are two special nodes during the data fabric install.


Prepare the nodes for data fabric installation

Before you begin

The Commodity nodes must be powered on.

The yum commands listed in the following procedure require an internet connection. If you do not have an internet connection when running these commands, the packages might also be on the installation media for the operating system running on the nodes.

Procedure

  1. Verify that each node has nsenter installed by running the following command:
    # "type nsenter"
    If the command returns /usr/bin/nsenter, it is installed on the node. If it returns "nsenter: not found", you must install it.
  2. Determine if gdisk and xfsprogs are installed on the nodes, by running the following command:
    # rpm -qa | egrep 'gdisk|xfsprogs'
    The output should list the installed RPM for each package. If either is missing, install it by using yum, as in the following example (a combined per-node check is sketched after this procedure):
    # yum --noplugins install gdisk
    # yum --noplugins install xfsprogs
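
If you prefer to run these checks as a single step, the following is a minimal per-node sketch that combines them; it assumes the script is run as root on each node and that yum can reach a repository (or the operating system installation media) that provides gdisk and xfsprogs.
#!/bin/bash
# Sketch: verify data fabric prerequisites on a single Commodity node.
# Assumes root access and a reachable yum repository for gdisk and xfsprogs.
if type nsenter >/dev/null 2>&1; then
    echo "nsenter: installed ($(type -p nsenter))"
else
    echo "nsenter: not found -- install it before continuing"
fi

for pkg in gdisk xfsprogs; do
    if rpm -q "$pkg" >/dev/null 2>&1; then
        echo "$pkg: $(rpm -q "$pkg")"
    else
        echo "$pkg: missing -- installing"
        yum --noplugins install -y "$pkg"
    fi
done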

Add node IP addresses to Extra Node IP addresses

Procedure

  1. Log in to the ViPR Controller as an administrator.
  2. Navigate to Settings > Configuration.
  3. Type the IP addresses of the nodes as a comma-separated list into the Extra Node IP Addresses field.
  4. Click Save.

Deploy the installer host OVA

Procedure

  1. Download the ViPR data fabric OVA file, ecs-fabric-installer-<version>-<build>.ova, to a temporary directory.
  2. Start the vSphere client.
  3. Log in to the vCenter Server where you are deploying the installer host.
  4. Select File > Deploy OVF Template.
  5. Browse to and select the ViPR installer host OVA file that you copied to the temporary directory.
  6. Accept the End User License Agreement.
  7. Specify a name for the installer host.
  8. Select the host or cluster on which to run the installer host.
  9. If resource pools are configured (not required for the installer host), select one.
  10. If more than one datastore is attached to the ESX Server, select the datastore for the installer host.
  11. Select Thin Provision.
  12. On the Network Mapping page, map the source network to a destination network as appropriate.
  13. Power on the VM.
    You can log in to the VM using the root/ChangeMe credentials. The data fabric software is located in the /opt/emc/ecs-fabric-installer folder.
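
If you prefer a command-line deployment instead of the vSphere client wizard described above, VMware's ovftool utility can deploy the same OVA. The following is an illustrative sketch only; the exact options, the vCenter inventory path, and the datastore and network names are assumptions that you must adapt to your environment.
# Sketch: command-line OVA deployment with ovftool (illustrative, not the documented path).
ovftool --acceptAllEulas \
        --name=ecs-fabric-installer \
        --diskMode=thin \
        --datastore=<datastore_name> \
        --network=<destination_network> \
        --powerOn \
        ecs-fabric-installer-<version>-<build>.ova \
        'vi://<vcenter_user>@<vcenter_host>/<datacenter>/host/<cluster_or_host>'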

Create a script that defines the IP address for nodes in the fabric

As a convenience, create a script that defines environment variables for the IP addresses of the nodes in the data fabric, the Controller VIP, and the ZooKeeper ensemble.

Procedure

  1. On the installer host, navigate to the /opt/emc/ecs-fabric-installer directory.
  2. Create a script that contains a set of environment variables that represent the IP addresses you need to supply during the data fabric installation, and save the script in /opt/emc/ecs-fabric-installer:
    #!/bin/bash
    echo set node IP addresses
    N1=<Node1 IP>
    N2=<Node2 IP>
    N3=<Node3 IP>
    N4=<Node4 IP>
    export N1 N2 N3 N4
    echo $N1
    echo $N2
    echo $N3
    echo $N4
    
    echo set nodes that require comma-separated and space-separated lists and the ZooKeeper ensemble
    NC="$N1,$N2,$N3,$N4"
    NS="$N1 $N2 $N3 $N4"
    NZK="$N1 $N2 $N3"
    export NC NS NZK
    echo $NC
    echo $NS
    echo $NZK
    
    echo set Controller VIP
    CVIP=<Controller VIP>
    export CVIP
    echo $CVIP
  3. Change the access permissions on the script you created by executing chmod.
    # chmod 755 <script_name.sh>
  4. Execute the environment variable script.
    # source <script_name.sh>
  5. Verify the environment variable script set the values correctly.
    # echo $NC 
    # echo $NS
  6. Set up password-less SSH login between the installer node and the target nodes by using ssh-keygen. Accept the defaults, and do not set a passphrase.
    1. Create the authentication SSH key.
      # ssh-keygen
    2. Copy the generated key file from the installer node to each of the target nodes by running the following command:
      # ssh-copy-id <target-node-IP-address>
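
Once the environment variable script has been sourced, the key copy in step 6 can be repeated for all target nodes with a short loop. This is a convenience sketch; it assumes $NS holds the space-separated node list and that you can enter each node's password when prompted.
# Sketch: copy the installer node's public key to every target node.
# Assumes the environment variable script has been sourced ($NS is set).
for n in $NS; do
    echo == $n
    ssh-copy-id $n
done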

Install HAL and the hardware manager

Procedure

  1. From the installer host, copy the viprhal and nile-hwmgr RPMs to each node (for example, by using scp over SSH).
  2. Log in to each node, and use the rpm command to install the hardware abstraction layer.
    # rpm -ivh viprhal*
    
    Preparing...                ########################################### [100%]
       1:viprhal                ########################################### [100%]
  3. On each node, run the following command to install the hardware manager:
    # rpm -ivh nile-hwmgr-<version>-<build>.x86_64.rpm
    Preparing...                ########################################### [100%]
       1:nile-hwmgr             ########################################### [100%]
  4. On each node, start the hardware manager service, by running the following command:
    # service nileHardwareManager start
  5. On each node, verify the hardware manager service started successfully, by running the following command:
    # service nileHardwareManager status
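
Once password-less SSH is in place, the copy, install, and service start steps above can also be driven from the installer host in a single loop. This is a sketch only; the RPM file names below are placeholders for the actual viprhal and nile-hwmgr packages, and the loop assumes it is run from the directory that contains them with $NS already set.
# Sketch: push and install HAL and the hardware manager from the installer host.
# HAL_RPM and HWMGR_RPM are placeholders; replace them with the actual file names.
HAL_RPM=viprhal-<version>.rpm
HWMGR_RPM=nile-hwmgr-<version>-<build>.x86_64.rpm
for n in $NS; do
    echo == $n
    scp $HAL_RPM $HWMGR_RPM $n:/tmp/
    ssh $n "rpm -ivh /tmp/$HAL_RPM /tmp/$HWMGR_RPM && service nileHardwareManager start && service nileHardwareManager status"
done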

Prepare, check, install, and configure data fabric

If you want to capture the command output of the system checks and other installation commands, run a typescript session by using the Linux script command. For example:
#  script <script_file_name>.log
fab vipr.check -H ....
exit
Script done
After the installation is complete, you can review the output in <script_file_name>.log.

Procedure

  1. On the installer node, go to the /opt/emc/ecs-fabric-installer directory.
  2. Run the fab vipr.prepare command to prepare the nodes so they can support the data fabric. The command makes sure that the correct version of Docker is installed and it opens the necessary ports if a firewall is enabled.
    For example:
    # fab vipr.prepare -H $NC
  3. Run the fab vipr.check command to verify the nodes meet the minimum requirements.
    In this example, the CONTROLLER_VIP and registration_key parameters are added to ensure that the vipr.check also checks that the node IPs were added to the Controller's Extra Node IP addresses.
    # fab vipr.check -H $NC --set CONTROLLER_VIP=$CVIP,registration_key=ChangeMe
  4. Run the fab vipr.install command to copy the installation images from the installer host to the Docker registry node.
    For example:
    # fab vipr.install -H $NC --set ZK_SERVER_LIST="$NZK",REGISTRY="$N1"
  5. Use the Docker commands docker images and docker ps to verify that the installation images are present and deployed on the registry node, and that the correct containers are running on the nodes.
    1. Verify the containers are installed and the fabric container is running on the commodity nodes by running the following:
      # for n in $NS; do echo == $n; ssh $n docker images; done
      # for n in $NS; do echo == $n; ssh $n docker ps; done
    2. Verify the registry container is running on the node defined in the REGISTRY variable.
      # ssh $N1 docker ps
    3. Verify the fabric is running by running the following command for each node ($N1, $N2, and so on).
      # ssh $N1 service vipr-fabric status
  6. Run the fab vipr.config command to configure the ZooKeeper ensemble on the selected nodes, and to register the nodes with the ViPR Controller. The registration_key is the root password for ViPR.
    # fab vipr.config -H $NC --set ZK_SERVER_LIST="$NZK",CONTROLLER_VIP=$CVIP,registration_key=ChangeMe,REGISTRY="$N1"
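
If you want to run the whole sequence unattended and stop at the first failure, you can wrap the commands from this procedure in a small script. This is a convenience sketch that repeats the exact fab invocations shown above; it assumes the environment variable script has been sourced so that $NC, $NZK, $N1, and $CVIP are set.
#!/bin/bash
# Sketch: run the prepare, check, install, and config steps in order,
# stopping at the first failure (set -e). Uses the same fab commands as above.
set -e
fab vipr.prepare -H $NC
fab vipr.check -H $NC --set CONTROLLER_VIP=$CVIP,registration_key=ChangeMe
fab vipr.install -H $NC --set ZK_SERVER_LIST="$NZK",REGISTRY="$N1"
fab vipr.config -H $NC --set ZK_SERVER_LIST="$NZK",CONTROLLER_VIP=$CVIP,registration_key=ChangeMe,REGISTRY="$N1"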

Verify data fabric installation

Procedure

  1. Log in to the ViPR Controller using root credentials.
  2. Change to Administration mode.
  3. Under System Health, verify that the Data Fabric Status is reported as Good.
  4. Under Physical Assets, click Commodity Nodes.
  5. Verify the following:
    1. The number of nodes reported matches the number of nodes installed.
    2. The status of each installed Commodity node is reported as Good.
  6. Open each node and verify the following:
    1. The disk count reported for the node matches the expected number of disks.
    2. The disk status is reported as Good for all disks.
  7. You can continue provisioning your Commodity system.

Provision the object service

Procedure

  1. Provision the nodes to run the object service by running fab vipr.provision.pull.
    # fab vipr.provision.pull -H $NC --set CONTROLLER_VIP=$CVIP,registration_key=ChangeMe,OBJECT="$NS"
  2. Verify that the object container is running by using the docker ps command.
    # for n in $NS; do echo == $n; ssh $n docker ps; done
  3. Wait 10 minutes after the object containers start to allow them to initialize.
  4. Create the varray and datastore by running fab vipr.provision.varray.
    # fab vipr.provision.varray -H $NC --set CONTROLLER_VIP=$CVIP,registration_key=ChangeMe,OBJECT="$NS"
  5. Verify that the varray and datastore were created successfully in one of the following ways:
    1. From the Controller, go to Virtual Assets > Virtual Arrays, and verify that CommodityvPool is present.
    2. From the installer host, execute the following commands:
      Authenticate the user:
      ./viprcli authenticate -u root -d /opt/storageos/cli
      Password :
      root : Authenticated Successfully
      /opt/storageos/cli/rootcookie32127 : Cookie saved successfully
      
      Get the list of datastores.
      ./viprcli datastore list
    The datastores are named by using the IP addresses of the commodity nodes.
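
Rather than waiting a fixed 10 minutes in step 3 before checking, you can poll the nodes until an object container appears in the docker ps output. The sketch below is assumption-based: it matches on the string "object", which you may need to adjust to the container name actually reported on your nodes, and it still allows time for initialization afterwards.
# Sketch: poll each node until a container whose name or image contains "object"
# appears in docker ps. The "object" match string is an assumption; adjust it to
# the container name reported on your nodes.
for n in $NS; do
    echo == waiting for the object container on $n
    until ssh $n docker ps | grep -i object >/dev/null; do
        sleep 30
    done
done
echo "Object containers are running; allow them several minutes to finish initializing."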

Configure ConnectEMC

Before you begin

Obtain the serial number.

Failure to add a System Master Serial Number prevents the system from properly transmitting alerts. Run this procedure each time the ViPR controller's ConnectEMC configuration changes.

Procedure

  1. Run the fab vipr.call_home command, passing in the following parameters:
    fab vipr.call_home -H $NC --set ZK_SERVER_LIST="$NZK",CONTROLLER_VIP=$CVIP,registration_key=ChangeMe,REGISTRY="$N1",PSNT="<PSNT#/hardwareTLA>",CUSTOMER_NAME="<customer_Name>",SITE_ID="<site_id>"
    
    For example:
    fab vipr.call_home -H $NC --set ZK_SERVER_LIST="$NZK",CONTROLLER_VIP=$CVIP,registration_key=ChangeMe,REGISTRY="$N1",PSNT="PSNT123456",CUSTOMER_NAME="Customer1",SITE_ID="SITE1234"
  2. Verify the configuration was successful and is operating properly by checking the following:
    1. Check that all nodes have the run_sysconf_collect.sh executable in the /etc/cron.daily directory.
    2. Check that the run_sysconf_collect.sh file contains the command /opt/emc/nile/bin/sysconf_collector -m <registry_node_name>.
    3. On the registry node, open /etc/crontab and verify that the following line is present, with the minute, hour, and day fields set to a time within 30 days of the current date:
      12 17 25 * * root source /opt/emc/nile/bin/start_syr_collect.sh
    4. Check that there has been a callout done by looking in /opt/connectemc/archive. You should see both a ZIP file and an XML file with the same name, but the ZIP file has ECS_G4_Config as a suffix.
    5. Check the email account to verify you received the notification.
    6. You should see IP addresses for data and management, and you should see internal and DAE/disk information values populated.
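
The per-node checks in step 2 can be scripted from the installer host. The following sketch assumes the environment variable script has been sourced ($NS and $N1 are set, with $N1 being the registry node) and that password-less SSH is in place.
# Sketch: check the ConnectEMC collection hooks on every node.
for n in $NS; do
    echo == $n
    ssh $n "ls -l /etc/cron.daily/run_sysconf_collect.sh && cat /etc/cron.daily/run_sysconf_collect.sh"
done
# On the registry node, confirm the scheduled SYR collection entry in /etc/crontab.
ssh $N1 "grep start_syr_collect /etc/crontab"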

Set data and command endpoints

To configure ViPR object storage, inter-VDC communication endpoints must be specified for the VDC. These values are used by data nodes to control the transfer of data between VDCs and must be entered for a VDC even if you do not intend to link ViPR VDCs.

Before you begin

Procedure

  1. Select Virtual Assets > Virtual Data Centers.
  2. If you have already generated a secret key and have it stored ready for use, you can skip this step. If you want to use the same secret key used by the ViPR Controller, select Secret Key at the Virtual Data Centers page, and copy the secret key so that you can paste it in a later step.
  3. At the Virtual Data Centers page, select the Inter-VDC Endpoints control which is located in the Edit field of the Virtual Data Centers table.
  4. At the Inter-VDC Communications Endpoints page, enter the IP addresses that you want to make available for command and data communications with this VDC into the Inter-VDC Command Endpoint(s) and Inter-VDC Data Endpoint(s) fields.
    If you have more than one data services node and the nodes are accessed through a load balancer, enter the IP address and port of the load balancer into the data and command fields. If you do not have a load balancer, enter the IP addresses of all data services nodes into the data and command fields as comma-separated lists. This allows the source VDC to load-balance the sending of WAN traffic across all data nodes in the destination VDC.
  5. In the Secret Key field, paste the value of the secret key.
  6. Click Save.

Create an object virtual pool

Procedure

  1. Log in to the Controller as a System Administrator.
  2. Select Admin > Virtual Assets > Object Virtual Pools.
  3. Click Add.
  4. Enter the name of the Object Virtual Pool.
  5. Enter a description.
  6. Select the virtual array from the drop-down list.
  7. Click Save.

Troubleshooting: Retrying a failed data fabric installation

Use this procedure to uninstall then reinstall the fabric components on all of the nodes except for the registry node.

Before you begin

Procedure

  1. After you have resolved the issues that caused the installation failure, run the following command to uninstall the data fabric components from each node.
    # fab vipr.remove.containers_not_registry -H $NC
    Wait until the command reports Done before continuing with the procedure.
  2. To reinstall the containers on the nodes in the fabric, except for the registry node, run the vipr.install command passing in the SKIP_REGISTRY keyword and the IP address of the registry node.
    # fab vipr.install -H $NC --set SKIP_REGISTRY,REGISTRY="$N1",ZK_SERVER_LIST="$NZK"
    
    Running the command with the SKIP_REGISTRY keyword generates the following warning message that you can safely ignore: Warning: Registry already installed and SKIP_REGISTRY set. Will skip installing registry and loading images.
    Once vipr.install completes, you can provision the object service on the nodes as described in the installation procedure.
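
The two commands in this procedure can also be chained so that the reinstall starts only after the removal reports success. This is a minimal sketch and assumes the environment variable script has been sourced.
# Sketch: chain the uninstall and reinstall; the second command runs only if
# the first one succeeds. Assumes $NC, $N1, and $NZK are set.
fab vipr.remove.containers_not_registry -H $NC && \
fab vipr.install -H $NC --set SKIP_REGISTRY,REGISTRY="$N1",ZK_SERVER_LIST="$NZK"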

Fabric install commands reference

To install and provision the ViPR data fabric, you run a set of Python fab scripts from the command line of the installer node. Use the following summary as a reference when supplying parameters for the fabric commands.

The fab tool syntax is:
fab <task> <options>
where <task> is one of the tasks used in this guide, such as vipr.prepare, vipr.check, vipr.install, vipr.config, vipr.provision.pull, vipr.provision.varray, vipr.call_home, or vipr.remove.containers_not_registry.

Use -H to pass in a comma-separated list of the public IP addresses of the nodes.

Use --set to pass in key-value pairs such as CONTROLLER_VIP, registration_key, ZK_SERVER_LIST, REGISTRY, OBJECT, SKIP_REGISTRY, PSNT, CUSTOMER_NAME, and SITE_ID.
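
For example, the check and config invocations used earlier in this guide follow this pattern (assuming the environment variable script has been sourced):
# General pattern: fab <task> -H <comma-separated node IPs> --set <key=value,...>
fab vipr.check -H $NC --set CONTROLLER_VIP=$CVIP,registration_key=ChangeMe
fab vipr.config -H $NC --set ZK_SERVER_LIST="$NZK",CONTROLLER_VIP=$CVIP,registration_key=ChangeMe,REGISTRY="$N1"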
