Install data fabric on Commodity nodes


Overview

Use this step-by-step procedure to install the ViPR data fabric on Commodity nodes.

This article applies to EMC ViPR 2.0.

This procedure applies to Commodity nodes where the OEL operating system is installed.

Install the data fabric after installing the ViPR Controller.

Everything you need to install the data fabric is provided in an OVA file that you obtain from the EMC help desk. The OVA file is named ECS-fabric-installer-<version>.ova.

There are two special nodes during the data fabric install: the installer node, which is the VM you deploy from the OVA and from which you run the fab commands, and the Docker registry node, which is the Commodity node that holds the installation images used by the other nodes.


Deploy the installer host OVA

Procedure

  1. Download the ViPR data fabric OVA file, ECS-fabric-installer-<version>.ova, to a temporary directory.
  2. Start the vSphere client.
  3. Log in to the vCenter Server where you are deploying the installer host.
  4. Select File > Deploy OVF Template.
  5. Browse to and select the ViPR installer host OVA file that you copied to the temporary directory.
  6. Accept the End User License Agreement.
  7. Specify a name for the installer host.
  8. Select the host or cluster on which to run the installer host.
  9. If resource pools are configured (not required for the installer host), select one.
  10. If more than one datastore is attached to the ESX Server, select the datastore for the installer host.
  11. Select Thin Provision.
  12. On the Network Mapping page, map the source network to a destination network as appropriate.
  13. Power on the VM.
  14. Log in to the VM using root/ChangeMe as the credentials.
  15. Navigate to /opt/emc/ECS-fabric-installer.
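
Steps 14 and 15 can also be done over SSH from your workstation, assuming the installer VM's IP address is reachable (the address below is a placeholder):

    # ssh root@<installer-host-IP>
    # cd /opt/emc/ECS-fabric-installer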

Prepare the nodes for data fabric installation

Before you begin

The Commodity nodes must be powered on.

Procedure

  1. Verify that each node has nsenter installed by running the following command:

    # type nsenter

    If the command returns /usr/bin/nsenter, it is installed on the node. If it returns "nsenter: not found", you must install it.
  2. Determine if gdisk and xfsprogs are installed on the nodes, by running the following command:

    # rpm -qa | egrep 'gdisk|xfsprogs'

    The output should return the RPM version of each package. If either is missing, install it using yum, for example (a loop that runs these checks on all nodes is sketched after this procedure):

    # yum --noplugins install gdisk

    # yum --noplugins install xfsprogs
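
You can also run both checks from the installer node against every target node in one pass. The following is a minimal sketch; it assumes root SSH access to the nodes (you are prompted for passwords unless key-based login is already configured) and reuses the example node IP addresses from this article:

    for node in 10.5.116.244 10.5.116.245 10.5.116.246 10.5.116.247; do
        echo "--- $node ---"
        ssh root@$node 'type nsenter'                       # expect /usr/bin/nsenter
        ssh root@$node "rpm -qa | egrep 'gdisk|xfsprogs'"   # expect both packages listed
    done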


Install HAL and the hardware manager

Procedure

  1. Set up SSH password-less login between the installer node and the target nodes by using SSH keygen, for example:
    1. Create the authentication SSH key.

      # ssh-keygen

    2. Copy the generated key file from the installer node to each of the target nodes by running the following command:

      # ssh-copy-id <target-node-IP-address>

  2. Using SSH (scp), copy the viprhal and nile-hwmgr RPMs to each Commodity node; a loop that does this for all nodes is sketched after this procedure.
  3. On each node, use the rpm command to install the hardware abstraction layer.

    # rpm -ivh viprhal*
    Preparing...                ########################################### [100%]
       1:viprhal                ########################################### [100%]

  4. On each node, run the following command to install the hardware manager:

    # rpm -ivh nile-hwmgr-1.0.0.0-186.18a37be.x86_64.rpm
    Preparing...                ########################################### [100%]
       1:nile-hwmgr             ########################################### [100%]

  5. Start the hardware manager service, by running the following command:

    # service nileHardwareManager start

  6. Verify the hardware manager service started successfully, by running the following command:

    # service nileHardwareManager status
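
The copy and install steps in this procedure can also be scripted from the installer node once password-less SSH is in place. The following is a minimal sketch; it assumes the two RPMs are in the current directory on the installer node and reuses the example node IP addresses from this article:

    for node in 10.5.116.244 10.5.116.245 10.5.116.246 10.5.116.247; do
        scp viprhal*.rpm nile-hwmgr*.rpm root@$node:/tmp/
        ssh root@$node 'rpm -ivh /tmp/viprhal*.rpm /tmp/nile-hwmgr*.rpm &&
                        service nileHardwareManager start &&
                        service nileHardwareManager status'
    done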


Prepare, check, install, and configure data fabric

Before you begin

  • Upload the proper Commodity license to ViPR Controller.
  • Add the IP addresses of the Commodity nodes to the ViPR Controller by navigating to Settings > Configuration > Extra Node IP addresses.

Procedure

  1. On the installer node, go to the nile-fabric-installer directory.
  2. Run the fab vipr.prepare command to prepare the nodes to support the data fabric; it installs Docker and, if a firewall is enabled, opens the necessary ports.
    For example:

    # fab vipr.prepare -H "10.5.116.244,10.5.116.245,10.5.116.246,10.5.116.247"

  3. Run the fab vipr.check command to verify the nodes meet the minimum requirements.
    For example:

    # fab vipr.check -H "10.5.116.244,10.5.116.245,10.5.116.246,10.5.116.247"

  4. Run the fab vipr.install command to copy the following installation images from the installer host to the Docker registry node: Docker, data fabric, Docker registry, ZooKeeper, and object service.
    For example:

    # fab vipr.install -H "10.5.116.244,10.5.116.245,10.5.116.246,10.5.116.247" --set ZK_SERVER_LIST="192.168.13.11 192.168.13.12 192.168.13.13",REGISTRY="10.5.116.244"

  5. Use the Docker commands docker images and docker ps to verify that the images are present on the registry node and that the correct containers are running on the nodes.
    1. Run docker images on the registry node to verify that the following images are present: registry, object, scale-io (used for ECS installations), fabric, and ZooKeeper. Also verify that the ZooKeeper image is deployed on each of the nodes listed in ZK_SERVER_LIST.
      For example:

      # ssh 10.5.116.244 docker images
      REPOSITORY                            TAG                      IMAGE ID       CREATED       VIRTUAL SIZE
      10.5.116.244:5000/emcvipr/upgrade     latest                   8e0d47c97d2d   4 hours ago   1.207 GB
      10.5.116.244:5000/emcvipr/fabric      latest                   1087305bbe53   6 hours ago   1.414 GB
      emcvipr/fabric                        1.0.0.0-533.9704fdb      1087305bbe53   6 hours ago   1.414 GB
      emcvipr/fabric                        latest                   1087305bbe53   6 hours ago   1.414 GB
      emcvipr/object                        1.0.0.0-31230.14e5c14    b44804cd693a   7 hours ago   1.869 GB
      emcvipr/object                        latest                   b44804cd693a   7 hours ago   1.869 GB
      10.5.116.244:5000/emcvipr/object      latest                   b44804cd693a   7 hours ago   1.869 GB
      emcvipr/zookeeper                     1.0.0.0-28.8fdcbdd       6999841173c3   6 days ago    1.063 GB
      emcvipr/zookeeper                     latest                   6999841173c3   6 days ago    1.063 GB
      10.5.116.244:5000/emcvipr/zookeeper   latest                   6999841173c3   6 days ago    1.063 GB
      10.5.116.244:5000/emcvipr/scaleio     latest                   8c6e7d6da37c   6 days ago    1.306 GB
      emcvipr/scaleio                       1.0.0.0-35.c7d22bf       8c6e7d6da37c   6 days ago    1.306 GB
      emcvipr/scaleio                       latest                   8c6e7d6da37c   6 days ago    1.306 GB
      emcvipr/nile-registry                 1.0.0.0-19.b9c6e83       541bf3a98b83   7 days ago    1.286 GB
      emcvipr/nile-registry                 latest                   541bf3a98b83   7 days ago    1.286 GB

    2. Run docker ps to verify that the registry and fabric containers are running on the registry node and that the fabric container is running on all of the other nodes; a loop that checks every node is sketched after this procedure.

      # ssh 10.5.116.244 docker ps

      CONTAINER ID   IMAGE                                      COMMAND                CREATED          STATUS          PORTS                    NAMES
      767f6df47064   10.5.116.244:5000/emcvipr/fabric:latest    /opt/vipr/boot/boot.   5 minutes ago    Up 3 minutes                             emcvipr-fabric
      5db6d7bb90f6   emcvipr/nile-registry:1.0.0.0-19.b9c6e83   ./boot.sh              46 minutes ago   Up 45 minutes   0.0.0.0:5000->5000/tcp   emcvipr-registry

    Verify the fabric is running by executing the following command:

    # service vipr-fabric status

  6. Run the fab vipr.config command to configure the ZooKeeper ensemble on the selected nodes and to register the nodes with the ViPR Controller. The registration_key is the root password for ViPR.
    For example:

    # fab vipr.config -H "10.5.116.244,10.5.116.245,10.5.116.246,10.5.116.247" --set ZK_SERVER_LIST="192.168.13.11 192.168.13.12 192.168.13.13",CONTROLLER_VIP=10.247.200.30,registration_key=ChangeMe,REGISTRY="10.5.116.244"
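
To run the docker ps check from step 5 on every node in one pass, you can loop over the node list from the installer node. The following is a minimal sketch that reuses the example IP addresses from this article; the registry node additionally shows the emcvipr-registry container:

    for node in 10.5.116.244 10.5.116.245 10.5.116.246 10.5.116.247; do
        echo "--- $node ---"
        ssh root@$node docker ps     # expect the emcvipr-fabric container on every node
    done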


Verify data fabric installation

Procedure

  1. Log in to the ViPR Controller using root credentials.
  2. Change to Administration mode.
  3. Under System Health, verify that the Data Fabric Status is reported as Good.
  4. Under Physical Assets, click Commodity Nodes.
  5. Verify that:
    1. The number of nodes reported matches the number of nodes installed.
    2. The status of each installed Commodity node is reported as Good.
  6. Open each node and verify that:
    1. The disk count reported for the node matches the expected number of disks.
    2. The disk status is reported as Good for all disks.
  7. You can continue provisioning your Commodity system.

Provision the object service

Procedure

  1. Provision the nodes to run the object service by running fab vipr.provision.pull.

    fab vipr.provision.pull -H "10.5.116.244,10.5.116.245,10.5.116.246,10.5.116.247,10.249.231.213,10.249.97.48,10.247.97.49,10.247.97.50" --set CONTROLLER_VIP=10.247.200.30,registration_key=ChangeMe,OBJECT="10.5.116.244 10.5.116.245 10.5.116.246 10.5.116.247"

    [127.0.0.1] Executing task 'vipr.provision.pull'
    [127.0.0.1] [INFO] 'BLOCK' not specified, use Object as default service
    ...
    Done.

  2. Verify that the object container is running on the Commodity nodes. SSH to each node and run the docker ps command (a loop across all nodes is sketched after this procedure), for example:

    # docker ps

  3. Wait 5 minutes after the object containers start to allow them to initialize.
  4. Create the varray and datastore by running fab vipr.provision.varray.

    fab vipr.provision.varray -H "10.5.116.244,10.5.116.245,10.5.116.246,10.5.116.247,10.249.231.213,10.249.97.48,10.247.97.49,10.247.97.50" --set CONTROLLER_VIP=10.247.200.30,registration_key=ChangeMe,OBJECT="10.5.116.244 10.5.116.245 10.5.116.246 10.5.116.247"

    [127.0.0.1] Executing task 'vipr.provision.varray'
    [127.0.0.1] [INFO] Attempting to create varray CommodityvPool
    Done.
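
As noted in step 2, the same check can be run on all nodes from the installer node. A minimal sketch that filters the docker ps output for the object container, reusing the example IP addresses from this article:

    for node in 10.5.116.244 10.5.116.245 10.5.116.246 10.5.116.247; do
        echo "--- $node ---"
        ssh root@$node 'docker ps | grep object'   # expect a container based on the emcvipr/object image
    done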


Fabric install commands reference

To install and provision the ViPR data fabric, you run a set of Python scripts from the command line of the installer node. Use the following reference when supplying parameters for the fabric commands.

The fab tool syntax is:

fab task options

task can be one of the tasks used in this article:

  • vipr.prepare: prepares the nodes to support the data fabric (installs Docker and opens the necessary firewall ports).
  • vipr.check: verifies that the nodes meet the minimum requirements.
  • vipr.install: copies the installation images from the installer host to the Docker registry node.
  • vipr.config: configures the ZooKeeper ensemble and registers the nodes with the ViPR Controller.
  • vipr.provision.pull: provisions the nodes to run the object service.
  • vipr.provision.varray: creates the varray and datastore.

Use -H to pass in a comma-separated list of the public IP addresses of the Commodity nodes.

Use --set to pass in a comma-separated list of key-value pairs. The keys used in this article are:

  • ZK_SERVER_LIST: space-separated list of the IP addresses of the nodes that form the ZooKeeper ensemble.
  • REGISTRY: IP address of the Docker registry node.
  • CONTROLLER_VIP: virtual IP address of the ViPR Controller.
  • registration_key: the root password for ViPR.
  • OBJECT: space-separated list of the IP addresses of the nodes that run the object service.
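
Note that the key=value pairs passed to --set are separated by commas with no spaces between them, while list values such as ZK_SERVER_LIST and OBJECT use spaces inside their quotes. For example, a shortened sketch that reuses example values from this article:

    # fab vipr.config -H "10.5.116.244,10.5.116.245" --set ZK_SERVER_LIST="192.168.13.11 192.168.13.12",CONTROLLER_VIP=10.247.200.30,registration_key=ChangeMe,REGISTRY="10.5.116.244"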