Add Commodity capacity

Add capacity

Describes the process for adding a new node and disks to a Commodity fabric.

This article applies to EMC ViPR 2.0.

Note: In this release, a new node can be an object node or a ScaleIO SDS node. New nodes cannot be ZooKeeper, ScaleIO MDM, or ScaleIO TB nodes.

The process for adding Commodity capacity is:

  1. Obtain and prepare approved node and disk hardware. See the ViPR on Commodity installation readiness checklist for specific model information and configuration requirements.
  2. Use the step-by-step procedures in this article to install the ViPR data fabric on the new Commodity node. The procedure varies only slightly from the install procedure for a new fabric: the main command is fab vipr.extend instead of fab vipr.install. You follow the same pre-install steps and many of the post-install verification steps.
There are two special nodes during the data fabric install.
  • The installer host node is a VM that is created by deploying the ECS-fabric-installer-<version>.ova. It has all of the data fabric software and tools that you need to perform a successful install.
  • The registry node (or docker registry node) can be any of the Commodity nodes. You define the node by passing its IP address as the value for the REGISTRY key on the --set parameter of the fab command used to install, configure, and provision the data fabric. The registry node contains all of the images for all of the software used by the commodity nodes. It is used during install and upgrade and when adding new or replacing failed nodes.
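For example, the extend command used later in this article designates the registry node by passing its IP address through the REGISTRY key (the IP addresses shown are illustrative):

    # fab vipr.extend -H 10.247.201.240 --set REGISTRY="10.247.201.236"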

Deploy the installer host OVA

Before you begin

Obtain credentials to the vCenter Server where you are deploying the installer host.

Procedure

  1. Download the ViPR data fabric OVA file to a temporary directory.
  2. Start the vSphere client and log in to the vCenter Server where you are deploying the installer host.
  3. Select File > Deploy OVF Template.
  4. Browse to and select the ViPR installer host OVA file that you copied to the temporary directory.
  5. Accept the End User License Agreement.
  6. Specify a name for the installer host.
  7. Select the host or cluster on which to run the installer host.
  8. If resource pools are configured (not required for the installer host), select one.
  9. If more than one datastore is attached to the ESX Server, select the datastore for the installer host.
  10. Select Thin Provision.
  11. On the Network Mapping page, map the source network to a destination network as appropriate.
  12. Power on the VM.
  13. Log in to the VM using root/ChangeMe as the credentials.
  14. Navigate to /opt/emc/ECS-fabric-installer.
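For example, assuming the installer host VM received the illustrative address 10.5.116.200 and is reachable over SSH, steps 13 and 14 could look like this from a workstation:

    # ssh root@10.5.116.200
    # cd /opt/emc/ECS-fabric-installer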

Prepare the nodes for data fabric installation

Before you begin

The Commodity nodes must be powered on.

Procedure

  1. Verify that each node has nsenter installed by running the following command:

    # "type nsenter"

    If the command returns /usr/bin/nsenter, nsenter is installed on the node. If it returns "nsenter: not found", you must install it (see the example after this procedure).
  2. Determine whether gdisk and xfsprogs are installed on the nodes by running the following command:

    # rpm -qa | egrep 'gdisk|xfsprogs'

    The output should return the rpm versions of each, but if either is missing, install it using yum. For example:

    # yum --noplugins install gdisk

    # yum --noplugins install xfsprogs
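
If step 1 showed that nsenter is missing, it is typically provided by the util-linux package on recent distributions; a hedged example of installing it with yum (the package name is an assumption and may vary by distribution) is:

    # yum --noplugins install util-linux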


Install HAL and the hardware manager

Procedure

  1. Set up SSH password-less login between the installer node and the target nodes by using ssh-keygen. For example:
    1. Create the authentication SSH key.

      # ssh-keygen

    2. Copy the generated key file from the installer node to each of the target nodes by running the following command:

      # ssh-copy-id <target-node-IP-address>

  2. Using SSH, copy the viprhal and nile-hwmgr rpms to each commodity node.
  3. On each node, use the rpm command to install the hardware abstraction layer.

    # rpm -ivh viprhal*
    Preparing...                ########################################### [100%]
       1:viprhal                ########################################### [100%]

  4. On each node, run the following command to install the hardware manager:

    # rpm -ivh nile-hwmgr-1.0.0.0-186.18a37be.x86_64.rpm
    Preparing...                ########################################### [100%]
       1:nile-hwmgr             ########################################### [100%]

  5. Start the hardware manager service by running the following command:

    # service nileHardwareManager start

  6. Verify that the hardware manager service started successfully by running the following command:

    # service nileHardwareManager status
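
As an optional check before moving on, you can confirm that both rpms are installed on a node with a query along these lines (reusing the rpm -qa pattern from the node-preparation steps):

    # rpm -qa | egrep 'viprhal|nile-hwmgr'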


Prepare, check, extend, configure, and provision the object service

Configure the new node using several fab tool commands.

Before you begin

  • Upload the proper Commodity license to ViPR Controller.
  • Add the IP addresses of the Commodity nodes to the ViPR Controller by navigating to Settings > Configuration > Extra Node IP addresses.

Procedure

  1. On the installer node, go to the nile-fabric-installer directory.
  2. Run the fab vipr.prepare command to prepare the nodes so they can support the data fabric. Among other tasks, this command installs docker and opens the necessary ports if a firewall is enabled.
    For example:

    # fab vipr.prepare -H "10.5.116.244"

  3. Run the fab vipr.check command to verify the node meets the minimum requirements.
    For example:

    # fab vipr.check -H "10.5.116.244"

  4. Run the fab vipr.extend command to copy the installation images (data fabric, docker registry, and object service) from the installer host to the docker registry node. The -H option contains only the IP address of the new node.
    For example:

    # fab vipr.extend -H 10.247.201.240 --set REGISTRY="10.247.201.236"

  5. In this step, you use the docker images and docker ps commands to verify that the expected images are present on the registry node and that the correct containers are running on the nodes.
    1. Run docker images on the registry node to verify that the following images are present: registry, object, scaleio, upgrade, fabric, and zookeeper, and that the zookeeper image is deployed on all of the nodes listed in ZK_SERVER_LIST.
      For example:

      # ssh 10.5.116.244 docker images
      REPOSITORY                            TAG                     IMAGE ID       CREATED       VIRTUAL SIZE
      emcvipr/upgrade                       1.0.0.0-509.91bd1c6     8e0d47c97d2d   4 hours ago   1.207 GB
      emcvipr/upgrade                       latest                  8e0d47c97d2d   4 hours ago   1.207 GB
      10.5.116.244:5000/emcvipr/upgrade     latest                  8e0d47c97d2d   4 hours ago   1.207 GB
      10.5.116.244:5000/emcvipr/fabric      latest                  1087305bbe53   6 hours ago   1.414 GB
      emcvipr/fabric                        1.0.0.0-533.9704fdb     1087305bbe53   6 hours ago   1.414 GB
      emcvipr/fabric                        latest                  1087305bbe53   6 hours ago   1.414 GB
      emcvipr/object                        1.0.0.0-31230.14e5c14   b44804cd693a   7 hours ago   1.869 GB
      emcvipr/object                        latest                  b44804cd693a   7 hours ago   1.869 GB
      10.5.116.244:5000/emcvipr/object      latest                  b44804cd693a   7 hours ago   1.869 GB
      emcvipr/zookeeper                     1.0.0.0-28.8fdcbdd      6999841173c3   6 days ago    1.063 GB
      emcvipr/zookeeper                     latest                  6999841173c3   6 days ago    1.063 GB
      10.5.116.244:5000/emcvipr/zookeeper   latest                  6999841173c3   6 days ago    1.063 GB
      10.5.116.244:5000/emcvipr/scaleio     latest                  8c6e7d6da37c   6 days ago    1.306 GB
      emcvipr/scaleio                       1.0.0.0-35.c7d22bf      8c6e7d6da37c   6 days ago    1.306 GB
      emcvipr/scaleio                       latest                  8c6e7d6da37c   6 days ago    1.306 GB
      emcvipr/nile-registry                 1.0.0.0-19.b9c6e83      541bf3a98b83   7 days ago    1.286 GB
      emcvipr/nile-registry                 latest                  541bf3a98b83   7 days ago    1.286 GB

    2. Run docker ps to verify the registry and fabric containers are running on the registry node and to verify the fabric container is running on all of the other nodes.

      # ssh 10.5.116.244 docker ps

      CONTAINER ID   IMAGE                                      COMMAND                CREATED          STATUS          PORTS                    NAMES
      767f6df47064   10.5.116.244:5000/emcvipr/fabric:latest    /opt/vipr/boot/boot.   5 minutes ago    Up 3 minutes                             emcvipr-fabric
      5db6d7bb90f6   emcvipr/nile-registry:1.0.0.0-19.b9c6e83   ./boot.sh              46 minutes ago   Up 45 minutes   0.0.0.0:5000->5000/tcp   emcvipr-registry

    Verify the fabric is running by executing the following command:

    # service vipr-fabric status

  6. Run the fab vipr.config command to configure the ZooKeeper ensemble on the selected nodes and to register the commodity nodes with the ViPR Controller. The registration_key is the root password for ViPR. The values for the vipr.config ZK_SERVER_LIST key are the private IP addresses of the nodes hosting a ZooKeeper server. A new node cannot host a ZooKeeper server in this release.
    For example:

    # fab vipr.config -H 10.5.116.244 --set ZK_SERVER_LIST="10.5.116.244 10.5.116.246",CONTROLLER_VIP=10.247.200.30,registration_key=ChangeMe,REGISTRY="10.5.116.244"


Verify data fabric installation

Procedure

  1. Log in to the ViPR Controller using root credentials.
  2. Change to Administration mode.
  3. Under System Health, verify that the Data Fabric Status is reported as Good.
  4. Under Physical Assets, click Commodity Nodes.
  5. Verify that the:
    1. Number of nodes reported matches the number of nodes installed.
    2. Status of each installed commodity node is reported as Good.
  6. Open each node and verify that the:
    1. Disk count reported for each node matches the expected number of disks.
    2. Disk status is reported as Good for all disks.
  7. You can continue provisioning your Commodity system.

Fabric install commands reference

To install and provision the ViPR data fabric, you run a set of python scripts from the command line of the installer node. Use the following table as a reference when supplying parameters for the fabric commands.

The fab tool syntax is:

fab task options

task is one of the fab tasks used in this article: vipr.prepare, vipr.check, vipr.extend (or vipr.install for a new fabric), or vipr.config.

Use -H to pass in a comma-separated list of the public IP addresses of the commodity nodes.

Use --set to pass in key-value pairs such as REGISTRY, ZK_SERVER_LIST, CONTROLLER_VIP, and registration_key, as shown in the command examples earlier in this article.
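For illustration, the following invocations (which reuse the example IP addresses and values from earlier in this article and are not specific to any real deployment) show a comma-separated -H list and a set of --set key-value pairs:

# fab vipr.prepare -H "10.5.116.244,10.5.116.246"
# fab vipr.config -H 10.5.116.244 --set ZK_SERVER_LIST="10.5.116.244 10.5.116.246",CONTROLLER_VIP=10.247.200.30,registration_key=ChangeMe,REGISTRY="10.5.116.244"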