ViPR 2.2 - Configuration Requirements for Storage Systems


Storage systems

ViPR Controller System Administrators can review the ViPR Controller configuration requirements for a specific storage system before adding the storage system to ViPR Controller.

User credentials

When you add a storage system to ViPR Controller, you enter the user credentials that ViPR Controller uses to access the storage system. The credentials entered when you add a storage system are independent of the currently logged-in ViPR Controller user. All ViPR Controller operations that you perform on a storage system are executed as the user entered when the storage system was added to ViPR Controller.

ViPR Controller operations require that this user has administrative privileges. If there are additional credential requirements for a specific type of storage system, they are described in more detail in the section for that storage type.


Requirements for ViPR to collect port metrics

Allocating new ports based on performance metrics, computed metrics, and user-defined maximum limits is supported on VMAX, VNX, and HDS storage systems.

Refer to Set up Metrics-Based Port Selection using the ViPR UI for the storage system configuration requirements that enable ViPR to collect the metrics.


Hitachi Data Systems

Before you add Hitachi Data Systems (HDS) storage to ViPR, configure the storage as follows.

Gather the required information

Hitachi HiCommand Device Manager (HiCommand) is required to use HDS storage with ViPR. You need the following information to configure the HiCommand Device Manager and add it to ViPR.

General configuration requirements

  • HiCommand Device Manager software must be installed and licensed.
  • Create a HiCommand Device Manager user for ViPR to access the HDS storage. This user must have administrator privileges to the storage system to perform all ViPR operations.
  • HiCommand Device Manager must discover all Hitachi storage systems (that ViPR will discover) before you can add them to ViPR.
  • When you add the HiCommand Device Manager as a ViPR storage provider, all the storage systems that the storage provider manages will be added to ViPR. If you do not want ViPR to manage all the storage systems, before you add the HiCommand Device Manager, configure the HiCommand Device Manager to manage only the storage systems that will be added to ViPR.
    Note: After you add the storage provider to ViPR, you can deregister or delete storage systems that you will not use in ViPR.

Configuration requirements for auto-tiering

ViPR provides auto-tiering for the six standard HDS auto-tiering policies for Hitachi Dynamic Tiering (HDT) storage pools.

HDS auto-tiering requires the Hitachi Tiered Storage Manager license on the HDS storage system.

Configuration requirements for data protection features

HDS protection requires:

  • Hitachi Thin Image Snapshot software for snapshot protection
  • Hitachi Shadow Image Replication software for clone and mirror protection.

ViPR requires the following configuration to use the Thin Image and ShadowImage features:

  • Hitachi Replication Manager must be installed on a separate server.
  • The HiCommand Device Manager agent must be installed and running on a pair management server.
  • To enable ViPR to use ShadowImage pair operations, create a ReplicationGroup named ViPR-Replication-Group on the HDS storage system, using either the HiCommand Device Manager or Hitachi Storage Navigator.
  • To enable ViPR to use Thin Image pair operations, create a SnapshotGroup named ViPR-Snapshot-Group on the HDS storage system, using either the HiCommand Device Manager or Hitachi Storage Navigator.


EMC VMAX

ViPR management of VMAX systems is performed through the EMC SMI-S provider. Your SMI-S provider and VMAX storage system must be configured as follows before the storage system is added to ViPR.

SMI-S provider configuration requirements for VMAX

You will need the following information to validate that the SMI-S provider is configured as required for ViPR, and to add the storage systems to ViPR:

Gather the required information

  • SMI-S provider host address
  • SMI-S provider credentials (default is admin/#1Password)
  • SMI-S provider port (default is 5989)

Before adding VMAX storage to ViPR, log in to your SMI-S provider to ensure that it meets the following configuration requirements:

  • The host server running Solutions Enabler (SYMAPI Server) and SMI-S provider (ECOM) differs from the server where the VMAX service processors are running.
  • The storage system is discovered in the SMI-S provider.
  • When the storage provider is added to ViPR, all the storage systems managed by the storage provider will be added to ViPR. If you do not want all the storage systems on an SMI-S provider to be managed by ViPR, configure the SMI-S provider to only manage the storage systems that will be added to ViPR, before adding the SMI-S provider to ViPR.
    Note: Storage systems that will not be used in ViPR can also be deregistered or deleted after the storage provider is added to ViPR. For more details, see Configure Registered Storage Systems Using the ViPR UI.

  • The remote host and the SMI-S provider components (Solutions Enabler (SYMAPI Server) and EMC CIM Server (ECOM)) are configured to accept SSL connections.
  • The EMC storsrvd daemon is installed and running.
  • The SYMAPI Server and the ViPR server hosts are configured in the local DNS server so that their names are resolvable by each other, for proper communication between the two. If DNS is not used in the environment, use the hosts files for name resolution (/etc/hosts or c:/Windows/System32/drivers/etc/hosts), as shown in the example after this list.
  • The EMC CIM Server (ECOM) default user login, password expiration option is set to "Password never expires."
  • The SMI-S provider host is able to see the gatekeepers (six minimum).
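The following is a minimal sketch of how some of these checks might look on a Linux SYMAPI/ECOM host. The host names and addresses are placeholders, and the commands assume Solutions Enabler and OpenSSL are installed; adapt them to your environment.

# /etc/hosts entries on the ViPR Controller VMs and the SYMAPI Server host
# (only needed when DNS is not used; names and addresses are examples)
192.0.2.10   symapi-host.example.com   symapi-host
192.0.2.20   vipr-vm1.example.com      vipr-vm1

# Confirm that the EMC storsrvd daemon is running on the SYMAPI Server host
stordaemon list

# Confirm that ECOM accepts SSL connections on the SMI-S provider port (default 5989)
openssl s_client -connect symapi-host.example.com:5989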


VMAX storage system

You prepare the VMAX storage system before adding it to ViPR as follows.

  • Create a sufficient number of storage pools for storage provisioning with ViPR (for example, SSD, SAS, NL-SAS).
  • Define FAST policies.

    Storage Tier and FAST Policy names must be consistent across all VMAX storage systems.

  • It is not required to create any LUNs, storage groups, port groups, initiator groups, or masking views.
  • After a VMAX storage system has been added and discovered in ViPR, the storage system must be rediscovered if administrative changes are made on the storage system using the storage system element manager.
  • For configuration requirements when working with meta volumes see EMC ViPR Support for Meta Volumes on VMAX and VNX Arrays.


EMC VNX for Block

ViPR management of VNX for Block systems is performed through the EMC SMI-S provider. Your SMI-S provider and VNX for Block storage system must be configured as follows before the storage system is added to ViPR.


SMI-S provider configuration requirements for VNX for Block

You will need the following information to validate that the SMI-S provider is configured as required for ViPR, and to add the storage systems to ViPR:

Gather the required information

  • SMI-S provider host address
  • SMI-S provider credentials (default is admin/#1Password)
  • SMI-S provider port (default is 5989)

Before adding VNX for Block storage to ViPR, log in to your SMI-S provider to ensure that it meets the following configuration requirements:

  • The host server running Solutions Enabler (SYMAPI Server) and SMI-S provider (ECOM) differs from the server where the VNX for Block storage processors are running.
  • The storage system is discovered in the SMI-S provider.
  • When the storage provider is added to ViPR, all the storage systems managed by the storage provider will be added to ViPR. If you do not want all the storage systems on an SMI-S provider to be managed by ViPR, configure the SMI-S provider to only manage the storage systems that will be added to ViPR, before adding the SMI-S provider to ViPR.
    Note: Storage systems that will not be used in ViPR can also be deregistered or deleted after the storage provider is added to ViPR. For more details, see Configure Registered Storage Systems Using the ViPR UI.

  • The remote host and the SMI-S provider components (Solutions Enabler (SYMAPI Server) and EMC CIM Server (ECOM)) are configured to accept SSL connections.
  • The EMC storsrvd daemon is installed and running.
  • The SYMAPI Server and the ViPR server hosts are configured in the local DNS server so that their names are resolvable by each other, for proper communication between the two. If DNS is not used in the environment, use the hosts files for name resolution (/etc/hosts or c:/Windows/System32/drivers/etc/hosts).
  • The EMC CIM Server (ECOM) default user login, password expiration option is set to "Password never expires."
  • The SMI-S provider host needs IP connectivity over the IP network with connections to both VNX for Block storage processors.
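For instance, assuming the storage processor management addresses below are placeholders for your environment, the last requirement can be confirmed from the SMI-S provider host with simple reachability checks:

# Verify IP connectivity from the SMI-S provider host to both VNX storage processors
ping -c 3 192.0.2.31   # SP-A management address (example)
ping -c 3 192.0.2.32   # SP-B management address (example)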


VNX for Block storage system

You prepare the VNX for Block storage system before adding it to ViPR as follows.

  • Create a sufficient number of storage pools or RAID groups for storage provisioning with ViPR.
  • If volume full copies are required, install SAN Copy enabler software on the storage system.
  • If volume continuous-native copies are required, create clone private LUNs on the array.
  • Fibre Channel networks for VNX for Block storage systems require an SP-A and SP-B port pair in each network, otherwise virtual pools cannot be created for the VNX for Block storage system.
  • For configuration requirements when working with meta volumes see EMC ViPR Support for Meta Volumes on VMAX and VNX Arrays.


VPLEX

Before adding VPLEX storage systems to ViPR, validate that the VPLEX environment is configured as follows:

  • ViPR supports VPLEX in a Local or Metro configuration. VPLEX Geo configurations are not supported.
  • Configure VPLEX metadata back-end storage.
  • Create VPLEX logging back-end storage.
  • Verify that the:
    • Storage systems to be used are connected to the networks containing the VPLEX back-end ports.
    • Hosts to be used have initiators in the networks containing the VPLEX front-end ports.
  • Verify that logging volumes are configured to support distributed volumes in a VPLEX Metro configuration.
  • It is not necessary to preconfigure zones between the VPLEX and storage systems, or between hosts and the VPLEX, except for those necessary to make the metadata backing storage and logging backing storage available.


Third-party block storage (OpenStack)

ViPR uses the OpenStack Block Storage (Cinder) service to manage OpenStack-supported block storage systems. Your OpenStack block storage systems must meet the following installation and configuration requirements before the storage systems in OpenStack, and their resources, can be managed by ViPR.


Third-party block storage provider installation requirements

ViPR uses the OpenStack Block Storage (Cinder) Service to add third-party block storage systems to ViPR.

Supported OpenStack installation platforms

OpenStack installation is supported on the following platforms:

  • Red Hat Enterprise Linux
  • SUSE Enterprise Linux
  • Ubuntu Linux

For a list of the supported platform versions, see the OpenStack documentation at: http://docs.openstack.org.

The following two components must be installed. Both components can be installed on the same server, or on separate servers.

  • OpenStack Identity Service (Keystone)

    Required for authentication

  • OpenStack Block Storage (Cinder)

    The core service that provides all storage information.

For complete installation and configuration details, refer to the OpenStack documentation at: http://docs.openstack.org.
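For illustration only, on a Red Hat-based controller the two services might be installed roughly as follows. The package names shown come from the RDO repositories and vary by distribution and OpenStack release; follow the OpenStack installation guide for your platform.

# Example only: install the Identity (Keystone) and Block Storage (Cinder) services
# on a Red Hat-based host using the RDO packages (names vary by release)
yum install openstack-keystone openstack-cinder python-cinderclient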


Third-party block storage system support

Third-party storage systems must be configured on the OpenStack Block Storage Controller node (Cinder service).

Supported third-party block storage systems

ViPR operations are supported on any third-party block storage system tested by OpenStack that uses Fibre Channel or iSCSI protocols.

Non-supported storage systems

ViPR does not support third-party storage systems using:
  • Proprietary protocols such as Ceph.
  • Drivers for block over NFS.
  • Local drivers such as LVM.
OpenStack supported storage systems and drivers not supported by ViPR are:
  • LVM
  • NetAppNFS
  • NexentaNFS
  • RBD (Ceph)
  • RemoteFS
  • Scality
  • Sheepdog
  • XenAPINFS

Refer to www.openstack.org for information about OpenStack third-party storage systems.

ViPR native driver support and recommendations

ViPR provides limited support for third-party block storage. For full support of all ViPR operations, it is recommended that you add and manage the following storage systems with the ViPR native drivers rather than through the OpenStack third-party block storage provider:
  • EMC VMAX
  • EMC VNX for Block
  • EMC VPLEX
  • Hitachi Data Systems (with Fibre Channel only)
Add these storage systems directly in ViPR, using the storage system host address, or the host address of the proprietary storage provider.


Supported ViPR operations

ViPR discovery and the following service operations can be performed on third-party block storage systems:

  • Create Block Volume
  • Export Volume to Host
  • Create a Block Volume for a Host
  • Expand block volume
  • Remove Volume by Host
  • Remove Block Volume
  • Create Full Copy
  • Create Block Snapshot
  • Create volume from snapshot
  • Remove Block Snapshot
Note: Using the ViPR Create VMware Datastore service to create a datastore from a block volume created by a third-party storage system is not supported. However, datastores can be created manually from third-party block volumes through VMware vCenter.


OpenStack configuration

After the successful installation of Keystone and Cinder services, the Cinder configuration file needs to be modified for the storage systems it will manage. After modifying the configuration file, the volume types need to be created to map to the backend drivers. These volume types will be discovered as storage pools of a specific storage system in ViPR.

OpenStack third-party storage configuration recommendations

It is recommended that, before adding the storage provider to ViPR, you configure the storage provider with only the storage systems that you want ViPR to manage through the third-party block storage provider.

When the third-party block storage provider is added to the ViPR Physical Assets, all of the storage systems managed by the OpenStack block storage service, which are supported by ViPR, will be added to ViPR.

Note: If you do not configure the storage provider with only the storage systems you want managed through ViPR, you can still deregister or delete storage systems through ViPR after they are added. However, it is better practice to configure the storage provider for ViPR integration before adding the storage provider to ViPR.


Cinder service configuration requirements

The Cinder configuration file is located at /etc/cinder/cinder.conf. An entry must be added to the configuration file for each storage system that will be managed.

Cinder does not define specific standards for backend driver attribute definitions. Refer to the vendor-specific recommendations for configuring the Cinder driver, which may involve installing a vendor-specific plugin or CLI.


Storage system (backend) configuration settings

To manage storage systems, Cinder defines backend configurations in individual sections of the cinder.conf file, which are specific to the storage system type, as follows.

Procedure

  1. Uncomment the enabled_backends option, which is commented out by default, and add the backend names. In the following example, NetApp and IBM SVC are added as backend configurations.
    enabled_backends=netapp-iscsi,ibm-svc-fc
  2. Near the end of the file, add the storage-system-specific entries as follows:
    [netapp-iscsi]
    #NetApp array configuration goes here
     
    [ibm-svc-fc]
    #IBM SVC array configuration goes here
  3. Restart the Cinder service.
    #service openstack-cinder-volume restart
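The comment placeholders in step 2 are normally filled in with the vendor's driver options. The following sketch shows what the two sections might contain for a NetApp iSCSI backend and an IBM SVC Fibre Channel backend; the driver class paths and option names are typical examples only and must be confirmed against the vendor documentation for your OpenStack release. The volume_backend_name values are the names referenced later when creating volume types.

[netapp-iscsi]
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name=NetAppISCSI
netapp_storage_protocol=iscsi
netapp_server_hostname=<array management address>
netapp_login=<user>
netapp_password=<password>

[ibm-svc-fc]
volume_driver=cinder.volume.drivers.ibm.storwize_svc.StorwizeSVCDriver
volume_backend_name=IBMSVC-FC
san_ip=<cluster management address>
san_login=<user>
san_password=<password>
storwize_svc_volpool_name=<pool name>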

Create volume types in OpenStack

ViPR requires that properties are defined for the volume types in OpenStack:

  • To map the volumes to the backend driver.
  • For ViPR to distinguish whether the volume is set for thin or thick provisioning during discovery of the volume.

ViPR-specific properties

Volume types can be created either through Cinder CLI commands or through the Dashboard (OpenStack UI). The properties required by ViPR in the Cinder CLI for the volume type are:

  • volume_backend_name
  • vipr:is_thick_pool=true

volume_backend_name

The following example demonstrates the Cinder CLI commands to create volume types (NetApp, IBM SVC), and map them to the backend driver.

cinder --os-username admin --os-password <password> --os-tenant-name admin type-create "NetAPP-iSCSI"
cinder --os-username admin --os-password <password> --os-tenant-name admin type-key "NetAPP-iSCSI" set volume_backend_name=NetAppISCSI
 
 
cinder --os-username admin --os-password <password> --os-tenant-name admin type-create "IBM-SVC-FC"
cinder --os-username admin --os-password <password> --os-tenant-name admin type-key "IBM-SVC-FC" set volume_backend_name=IBMSVC-FC
 
cinder --os-username admin --os-password <password> --os-tenant-name admin extra-specs-list

vipr:is_thick_pool=true

By default, during discovery, ViPR sets the provisioning type of OpenStack volumes to thin. If the provisioning type is thick, you must set the ViPR-specific property for the thick provisioning to true for the volume type. If the provisioning type of the volume is thin, you do not need to set the provisioning type for the volume in OpenStack.

The following example demonstrates the Cinder CLI commands to create a volume type (NetApp), and define the provisioning type of the volume as thick.

cinder --os-username admin --os-password <password> --os-tenant-name admin --os-auth-url=http://<hostname>:35357/v2.0 type-create "NetAPP-iSCSI"
cinder --os-username admin --os-password <password> --os-tenant-name admin --os-auth-url=http://<hostname>:35357/v2.0 type-key "NetAPP-iSCSI" set volume_backend_name=NetAppISCSI
cinder --os-username admin --os-password <password> --os-tenant-name admin --os-auth-url=http://<hostname>:35357/v2.0 type-key "NetAPP-iSCSI" set vipr:is_thick_pool=true
  
cinder --os-username admin --os-password <password> --os-tenant-name admin --os-auth-url=http://<hostname>:35357/v2.0 extra-specs-list

Validate setup

Validate that OpenStack has been configured correctly to create volumes for each of the added storage systems.
  1. Create a volume for each volume type created in OpenStack.
    Volumes can be created in the OpenStack UI or the Cinder CLI. The Cinder CLI command to create a volume is:
    cinder --os-username admin --os-tenant-name admin create --display-name <volume-name> --volume-type <volume-type-id> <size>
  2. Check that the volumes are getting created on the associated storage system. For example, NetApp-iSCSI type volumes should be created only on the NetApp storage system.
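One way to perform the check in step 2 from the OpenStack side, assuming the same admin credentials as in the earlier examples, is to list the volumes and confirm that each new volume reaches the available status before verifying it on the backend array:

cinder --os-username admin --os-password <password> --os-tenant-name admin list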


Stand-alone EMC ScaleIO configuration requirements

Your stand-alone ScaleIO should meet the following system requirements and be configured as follows before adding the storage to ViPR.

  • Protection domains are defined.
  • All storage pools are defined.


EMC XtremIO

Before adding XtremIO storage to ViPR, ensure that there is physical connectivity between hosts, fabrics and the array.


EMC VNXe

Before you add VNXe storage systems to ViPR, review the following information.

Create a sufficient number of storage pools for storage provisioning with ViPR.


EMC® Data Domain®

Before adding Data Domain storage to ViPR, configure the storage as follows.

  • The Data Domain file system (DDFS) is configured on the Data Domain system.
  • Data Domain Management Center (DDMC) is installed and configured.
  • Network connectivity is configured between the Data Domain system and DDMC.

While adding Data Domain storage to ViPR, it is helpful to know that:

  • A Data Domain Mtree is represented as a file system in ViPR.
  • Storage pools are not a feature of Data Domain. However, ViPR uses storage pools to model storage system capacity, so ViPR creates one storage pool for each Data Domain storage system registered to ViPR. For example, if three Data Domain storage systems were registered to ViPR, there would be three separate Data Domain storage pools, one for each registered storage system.


EMC Isilon

Before adding EMC Isilon storage to ViPR, configure the storage as follows.

Gather the required information

The following information is needed to configure the storage and add it to ViPR.

Configuration requirements

  • SmartConnect is licensed and configured as described in Isilon documentation. Be sure to verify that:
    • The names for SmartConnect zones are set to the appropriate delegated domain.
    • The DNS servers used by ViPR and by provisioned hosts delegate requests for the SmartConnect zones to the SmartConnect service IP (see the example after this list).
  • SmartQuota must be licensed and enabled.
  • There is a minimum of three nodes configured in the Isilon cluster.
  • Isilon clusters and zones must be reachable from the ViPR Controller VMs.
  • When adding an Isilon storage system to ViPR, use either the root user credentials or an account created for ViPR that has administrative privileges on the Isilon storage system.
  • The Isilon user is independent of the currently logged-in ViPR user. All ViPR operations performed on the Isilon storage system are executed as the Isilon user entered when the Isilon storage system is added to ViPR.
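As an illustration of the DNS delegation requirement above, the parent zone on the site DNS server typically delegates the SmartConnect zone name to the SmartConnect service IP. The zone and host names below are hypothetical, and the records are shown in BIND-style zone file syntax:

; Delegate the SmartConnect zone to the Isilon SmartConnect service IP (example names)
isilon.example.com.       IN NS   sc-service.example.com.
sc-service.example.com.   IN A    192.0.2.50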


NetApp

Before adding NetApp storage to ViPR, configure the storage as follows.

  • ONTAP is in 7-mode configuration.
  • MultiStore is enabled (ViPR supports only 7-mode with MultiStore).
  • Aggregates are created.
  • NetApp licenses for NFS, CIFS, and snapshots are installed and configured.
  • vFilers are created, and the necessary interfaces/ports are associated with them.
  • NFS/CIFS is set up on the vFilers.


EMC VNX for File

Before adding VNX for File storage to ViPR, ensure your system meets the version support requirements, and configure the storage as follows.

  • Storage pools for VNX for File have been created.
  • Control Stations are operational and will be reachable from ViPR Controller VMs.
  • VNX SnapSure is installed, configured, and licensed.


Add a storage system to ViPR

Once you have confirmed that the storage system meets the ViPR configuration requirements, you can add the storage to ViPR using the ViPR UI or the ViPR REST API.

ViPR UI

For steps to add a storage system to ViPR using the ViPR UI, refer to Add Storage Systems Using the ViPR UI.

ViPR REST API

For steps to add a specific type of storage system to ViPR using the ViPR REST API, click:
