ViPR 2.1 - Add Commodity capacity

Add capacity

Describes the process for adding a new node and disks to a Commodity fabric.

Note:
In this release, a new node can be an object node or a ScaleIO SDS node. New nodes cannot be ZooKeeper, ScaleIO MDM, or ScaleIO TB nodes.

The process for adding Commodity capacity is:

  1. Obtain and prepare approved node and disk hardware. See the ViPR on Commodity installation readiness checklist for specific model information and configuration requirements.
  2. Use the step-by-step procedures in this article to install the ViPR data fabric on the new Commodity node. The procedure varies only slightly from the install procedure for a new fabric: the main command is fab vipr.extend instead of fab vipr.install (the two commands are contrasted in the sketch after this list), and you follow the same pre-install steps and many of the same post-install verification steps.
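
A minimal illustration of the difference (the IP addresses are placeholders, and the full option sets for each command are shown in the procedures below):

  New fabric install (run against all nodes):
  # fab vipr.install -H "<node1-IP>,<node2-IP>,<node3-IP>" --set <options>

  Adding capacity (run against only the new node):
  # fab vipr.extend -H "<new-node-IP>" --set REGISTRY="<registry-node-IP>"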

There are two special nodes during the data fabric install: the installer node, from which you run the fab commands, and the docker registry node, which hosts the installation images.


Deploy the installer host OVA

Procedure

  1. Download the ViPR data fabric OVA file, ecs-fabric-installer-<version-build>.ova, to a temporary directory.
  2. Start the vSphere client.
  3. Log in to the vCenter Server where you are deploying the installer host.
  4. Select File > Deploy OVF Template.
  5. Browse to and select the ViPR installer host OVA file that you copied to the temporary directory.
  6. Accept the End User License Agreement.
  7. Specify a name for the installer host.
  8. Select the host or cluster on which to run the installer host.
  9. If resource pools are configured (not required for the installer host), select one.
  10. If more than one datastore is attached to the ESX Server, select the datastore for the installer host.
  11. Select Thin Provision.
  12. On the Network Mapping page, map the source network to a destination network as appropriate.
  13. Power on the VM.
    You can log in to the VM using the root/ChangeMe credentials. The data fabric software is located in the /opt/emc/ecs-fabric-installer folder.
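
Alternatively, the same OVA can be deployed from the command line with VMware's ovftool utility. This is a sketch only; the name, datastore, network, and vCenter inventory path are placeholders for your environment:

  # ovftool --acceptAllEulas --name=vipr-installer --datastore=<datastore> \
      --diskMode=thin --network=<destination-network> --powerOn \
      ecs-fabric-installer-<version-build>.ova \
      'vi://<user>@<vcenter-host>/<datacenter>/host/<cluster>'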

Prepare the nodes for data fabric installation

Before you begin

The Commodity nodes must be powered on.

The yum commands listed in the following procedure require an internet connection. If you do not have an internet connection when running these commands, the packages might also be on the installation media for the operating system running on the nodes.
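
For example, a minimal sketch of an offline install from mounted media (the device and package paths are placeholders):

  # mount /dev/sr0 /mnt
  # yum --noplugins localinstall /mnt/Packages/gdisk-*.rpm /mnt/Packages/xfsprogs-*.rpm
  # umount /mnt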

Procedure

  1. Verify that each node has nsenter installed by running the following command:
    # "type nsenter"
    If the command returns /usr/bin/nsenter, it is installed on the node. If it returns "nsenter: not found", you must install it.
  2. Determine whether gdisk and xfsprogs are installed on the nodes by running the following command:
    # rpm -qa | egrep 'gdisk|xfsprogs'
    The output should list the rpm version of each package. If either is missing, install it using yum. For example:
    # yum --noplugins install gdisk
    # yum --noplugins install xfsprogs
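
To run the same checks on all nodes in one pass, you can loop over them from the installer host. This is a convenience sketch only; the node IP addresses are placeholders:

  # for node in 10.5.116.244 10.5.116.245 10.5.116.246; do
  >   echo "--- $node ---"
  >   ssh root@$node 'type nsenter; rpm -qa | egrep "gdisk|xfsprogs"'
  > done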

Install HAL and the hardware manager

Procedure

  1. From the installer host, use scp to copy the viprhal and nile-hwmgr rpms to each node.
  2. Log in to each node, and use the rpm command to install the hardware abstraction layer.
    # rpm -ivh viprhal*
    
    Preparing...                ########################################### [100%]
       1:viprhal                ########################################### [100%]
  3. On each node, run the following command to install the hardware manager:
    # rpm -ivh nile-hwmgr-<version-build>.x86_64.rpm
    Preparing...                ########################################### [100%]
       1:nile-hwmgr             ########################################### [100%]
  4. On each node, start the hardware manager service by running the following command:
    # service nileHardwareManager start
  5. On each node, verify that the hardware manager service started successfully by running the following command:
    # service nileHardwareManager status
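
The copy, install, and service steps above can also be scripted from the installer host. A sketch only, assuming the rpms are in the current directory; the node IP addresses are placeholders:

  # for node in 10.5.116.244 10.5.116.245 10.5.116.246; do
  >   scp viprhal*.rpm nile-hwmgr-*.rpm root@$node:/tmp/
  >   ssh root@$node 'rpm -ivh /tmp/viprhal*.rpm /tmp/nile-hwmgr-*.rpm &&
  >     service nileHardwareManager start &&
  >     service nileHardwareManager status'
  > done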

Prepare, check, extend, configure, and provision the object service

Configure the new node using several fab tool commands.

Procedure

  1. On the installer node, go to the nile-fabric-installer directory.
  2. Run the fab vipr.prepare command to prepare the nodes so they can support the data fabric. This command installs docker and opens the necessary ports if a firewall is enabled.
    For example:
    # fab vipr.prepare -H "10.5.116.244" 
  3. Run the fab vipr.check command to verify that the node meets the minimum requirements.
    For example:
    # fab vipr.check -H "10.5.116.244" 
  4. Run the fab vipr.extend command to copy the installation images (data fabric, docker registry, and object service) from the installer host to the docker registry node. For an extend, the -H option contains only the IP address of the new node.
    For example:
    # fab vipr.extend -H 10.247.201.240 --set REGISTRY="10.247.201.236"
  5. Use the docker images and docker ps commands to verify that the set of images is present on the registry node and that the correct containers are running on the nodes.
    1. Run docker images on the registry node to verify that the following images are present: registry, object, scaleio, upgrade, fabric, and zookeeper, and that the zookeeper image is deployed on all the nodes listed in ZK_SERVER_LIST.
      For example:
      # ssh 10.5.116.244 docker images
      REPOSITORY                              TAG                     IMAGE ID            CREATED             VIRTUAL SIZE
      emcvipr/upgrade                         1.0.0.0-509.91bd1c6     8e0d47c97d2d        4 hours ago         1.207 GB
      emcvipr/upgrade                         latest                  8e0d47c97d2d        4 hours ago         1.207 GB
      10.5.116.244:5000/emcvipr/upgrade       latest                  8e0d47c97d2d        4 hours ago         1.207 GB
      10.5.116.244:5000/emcvipr/fabric        latest                  1087305bbe53        6 hours ago         1.414 GB
      emcvipr/fabric                          1.0.0.0-533.9704fdb     1087305bbe53        6 hours ago         1.414 GB
      emcvipr/fabric                          latest                  1087305bbe53        6 hours ago         1.414 GB
      emcvipr/object                          1.0.0.0-31230.14e5c14   b44804cd693a        7 hours ago         1.869 GB
      emcvipr/object                          latest                  b44804cd693a        7 hours ago         1.869 GB
      10.5.116.244:5000/emcvipr/object        latest                  b44804cd693a        7 hours ago         1.869 GB
      emcvipr/zookeeper                       1.0.0.0-28.8fdcbdd      6999841173c3        6 days ago          1.063 GB
      emcvipr/zookeeper                       latest                  6999841173c3        6 days ago          1.063 GB
      10.5.116.244:5000/emcvipr/zookeeper     latest                  6999841173c3        6 days ago          1.063 GB
      10.5.116.244:5000/emcvipr/scaleio       latest                  8c6e7d6da37c        6 days ago          1.306 GB
      emcvipr/scaleio                         1.0.0.0-35.c7d22bf      8c6e7d6da37c        6 days ago          1.306 GB
      emcvipr/scaleio                         latest                  8c6e7d6da37c        6 days ago          1.306 GB
      emcvipr/nile-registry                   1.0.0.0-19.b9c6e83      541bf3a98b83        7 days ago          1.286 GB
      emcvipr/nile-registry                   latest                  541bf3a98b83        7 days ago          1.286 GB
    2. Run docker ps to verify that the registry and fabric containers are running on the registry node and that the fabric container is running on all of the other nodes.
      # ssh 10.5.116.244 docker ps
      CONTAINER ID        IMAGE                                        COMMAND                CREATED             STATUS              PORTS                    NAMES
      767f6df47064        10.5.116.244:5000/emcvipr/fabric:latest      /opt/vipr/boot/boot.   5 minutes ago       Up 3 minutes                                 emcvipr-fabric
      5db6d7bb90f6        emcvipr/nile-registry:1.0.0.0-19.b9c6e83     ./boot.sh              46 minutes ago      Up 45 minutes       0.0.0.0:5000->5000/tcp   emcvipr-registry
      
    3. Verify that the fabric is running by executing the following command (to check all nodes at once, see the sketch after this procedure):
      # service vipr-fabric status
  6. Run the fab vipr.config command to configure the ZooKeeper ensemble on the selected nodes and to register the Commodity nodes with the ViPR Controller. The registration_key is the root password for ViPR. The values for ZK_SERVER_LIST are the private IP addresses of the nodes hosting a ZooKeeper server. A new node cannot host a ZooKeeper server in this release.
    For example:
    # fab vipr.config -H 10.5.116.244 --set ZK_SERVER_LIST="10.5.116.244 10.5.116.246",CONTROLLER_VIP=10.247.200.30,registration_key=ChangeMe,REGISTRY="10.5.116.244"
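
To spot-check the container and service state on every node at once (referenced in step 5 above), you can loop over the nodes from the installer host. A sketch only; the node list is a placeholder:

  # for node in 10.5.116.244 10.5.116.246 10.247.201.240; do
  >   echo "--- $node ---"
  >   ssh root@$node 'docker ps | grep emcvipr; service vipr-fabric status'
  > done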

Verify data fabric installation

Procedure

  1. Log in to the ViPR Controller using root credentials.
  2. Change to Administration mode.
  3. Under System Health, verify that the Data Fabric Status is reported as Good.
  4. Under Physical Assets, click Commodity Nodes.
  5. Verify the following:
    1. The number of nodes reported matches the number of nodes installed.
    2. The status of each installed Commodity node is reported as Good.
  6. Open each node and verify the following:
    1. The disk count reported for the node matches the expected number of disks.
    2. The disk status is reported as Good for all disks.
  7. You can continue provisioning your Commodity system.

Fabric install commands reference

To install and provision the ViPR data fabric, you run a set of Python scripts (the fab tool) from the command line of the installer node. Use the following as a reference when supplying parameters for the fabric commands.

The fab tool syntax is:

fab <task> <options>

where <task> is one of the tasks used in this article: vipr.install, vipr.prepare, vipr.check, vipr.extend, or vipr.config.

Use -H to pass in a comma-separated list of the public IP addresses of the nodes.

Use --set to pass in a comma-separated set of key=value pairs, such as REGISTRY, ZK_SERVER_LIST, CONTROLLER_VIP, and registration_key.
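
For example, a composite invocation showing both option types together (the values are placeholders; the keys each task accepts are shown in the procedures above):

  # fab vipr.config -H "10.5.116.244,10.5.116.245,10.5.116.246" --set ZK_SERVER_LIST="10.5.116.244 10.5.116.246",CONTROLLER_VIP=10.247.200.30,registration_key=ChangeMe,REGISTRY="10.5.116.244"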
