ViPR 2.1 - Add Third-party Block Storage to ViPR
This article further describes:
- How ViPR discovers and registers the storage system after you add it to ViPR.
- The optional configuration steps you can perform on the storage system after it is added and discovered in ViPR.
This article is part of a series
You can add storage to ViPR at any time. If, however, you are setting up the virtual data center for the first time, complete the steps described in Step by Step: Setup a ViPR Virtual Data Center.
OpenStack version support
For supported versions, see the EMC ViPR Support Matrix available on the EMC Community Network (community.emc.com).
OpenStack installation is supported on the following platforms:
- Red Hat Enterprise Linux
- SUSE Enterprise Linux
- Ubuntu Linux
For a list of the supported platform versions, see the OpenStack documentation at: http://docs.openstack.org.
The following two components must be installed. Both components can be installed on the same server, or on separate servers.
- OpenStack Identity Service (Keystone)
Required for authentication
- OpenStack Block Storage (Cinder)
The core service that provides all storage information.
For complete installation and configuration details, refer to the OpenStack documentation at: http://docs.openstack.org.
Supported third-party block storage systems
ViPR operations are supported on any third-party block storage system that has been tested by OpenStack and uses the Fibre Channel or iSCSI protocol.
Non-supported storage systems
ViPR does not support third-party storage systems that use:
- Proprietary protocols, such as Ceph RBD.
- Drivers for block over NFS.
- Local drivers such as LVM.
Refer to www.openstack.org for information about OpenStack third-party storage systems.
ViPR native driver support and recommendations
ViPR provides native driver support for the following storage systems. It is recommended that you add these systems to ViPR directly, rather than through the third-party block storage provider:
- EMC VMAX
- EMC VNX for Block
- EMC VPLEX
- Hitachi Data Systems (with Fibre Channel only)
The following ViPR service operations are supported on third-party block storage systems:
- Create Block Volume
- Export Volume to Host
- Create a Block Volume for a Host
- Expand Block Volume
- Remove Volume by Host
- Remove Block Volume
- Create Full Copy
- Create Block Snapshot
- Create Volume from Snapshot
- Remove Block Snapshot
OpenStack third-party storage configuration recommendations
Before adding the storage provider to ViPR, it is recommended that you configure it with only the storage systems that you want ViPR to manage through the third-party block storage provider.
When the third-party block storage provider is added to the ViPR Physical Assets, all of the storage systems managed by the OpenStack block storage service, which are supported by ViPR, will be added to ViPR.
Cinder does not enforce any specific standards for back-end driver attribute definitions. Refer to the vendor-specific recommendations on how to configure the Cinder driver, which may involve installing a vendor-specific plugin or CLI.
- In the cinder.conf file, uncomment the enabled_backends option, which is commented out by default, and add the multiple back-end names. In the following example, NetApp and IBM SVC are added as back-end configurations.
- Near the end of the file, add the storage-system-specific entries as follows:
[netapp-iscsi]
#NetApp array configuration goes here
[ibm-svc-fc]
#IBM SVC array configuration goes here
- Restart the Cinder service.
#service openstack-cinder-volume restart
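Putting the steps above together, a minimal multi-backend cinder.conf could look like the following sketch. The section names match the example above, but the volume_backend_name values are only illustrative assumptions tying this file to the volume types created later; consult your vendor's driver documentation for the actual attributes each driver requires.

```ini
# /etc/cinder/cinder.conf -- illustrative multi-backend sketch, not a
# vendor-verified configuration.
[DEFAULT]
# Uncommented and populated with the back-end section names:
enabled_backends = netapp-iscsi, ibm-svc-fc

[netapp-iscsi]
# NetApp array configuration goes here. volume_backend_name must match
# the value set on the volume type with the Cinder type-key command.
volume_backend_name = NetAppISCSI

[ibm-svc-fc]
# IBM SVC array configuration goes here.
volume_backend_name = IBMSVC-FC
```

After editing the file, restart the Cinder volume service so the new back ends are loaded.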
- Create volume types for each back end. Volume types are required:
- To map the volumes to the backend driver.
- For ViPR to distinguish whether the volume is set for thin or thick provisioning during discovery of the volume.
Volume types can be created either through Cinder CLI commands or through the Dashboard (OpenStack UI). The property that ViPR requires for the volume type in the Cinder CLI is volume_backend_name, which maps the volume type to its back-end driver.
The following example demonstrates the Cinder CLI commands to create volume types (NetApp, IBM SVC), and map them to the backend driver.
cinder --os-username admin --os-password <password> --os-tenant-name admin type-create "NetAPP-iSCSI"
cinder --os-username admin --os-password <password> --os-tenant-name admin type-key "NetAPP-iSCSI" set volume_backend_name=NetAppISCSI
cinder --os-username admin --os-password <password> --os-tenant-name admin type-create "IBM-SVC-FC"
cinder --os-username admin --os-password <password> --os-tenant-name admin type-key "IBM-SVC-FC" set volume_backend_name=IBMSVC-FC
cinder --os-username admin --os-password <password> --os-tenant-name admin extra-specs-list
By default, during discovery, ViPR sets the provisioning type of OpenStack volumes to thin. If the provisioning type is thick, you must set the ViPR-specific property for the thick provisioning to true for the volume type. If the provisioning type of the volume is thin, you do not need to set the provisioning type for the volume in OpenStack.
The following example demonstrates the Cinder CLI commands to create a volume type (NetApp), and define the provisioning type of the volume as thick.
cinder --os-username admin --os-password <password> --os-tenant-name admin --os-auth-url=http://<hostname>:35357/v2.0 type-create "NetAPP-iSCSI"
cinder --os-username admin --os-password <password> --os-tenant-name admin --os-auth-url=http://<hostname>:35357/v2.0 type-key "NetAPP-iSCSI" set volume_backend_name=NetAppISCSI
cinder --os-username admin --os-password <password> --os-tenant-name admin --os-auth-url=http://<hostname>:35357/v2.0 type-key "NetAPP-iSCSI" set vipr:is_thick_pool=true
cinder --os-username admin --os-password <password> --os-tenant-name admin --os-auth-url=http://<hostname>:35357/v2.0 extra-specs-list
- Create volumes for each of the volume types created in the previous step.
Volumes can be created in the OpenStack UI or the Cinder CLI. The Cinder CLI command to create a volume is:
cinder --os-username admin --os-password <password> --os-tenant-name admin create --display-name <volume-name> --volume-type <volume-type-id> <size>
- Check that the volumes are created on the associated storage system. For example, NetApp-iSCSI type volumes should be created only on the NetApp storage system.
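One way to confirm which back end a volume landed on is the os-vol-host-attr:host attribute shown by `cinder show <volume-id>` for admin users; its value has the form host@backend#pool. The sketch below parses a sample line of that output rather than calling a live Cinder service, and the host and pool names in it are hypothetical.

```shell
# Parse the back-end name out of a sample `cinder show` output line.
# The os-vol-host-attr:host value has the form host@backend#pool;
# the host ("cinder-node") and pool ("pool1") below are made up.
sample_line='| os-vol-host-attr:host | cinder-node@netapp-iscsi#pool1 |'
host_field=$(printf '%s' "$sample_line" | awk -F'|' '{gsub(/ /, "", $3); print $3}')
backend=${host_field#*@}    # strip the leading "host@"
backend=${backend%%#*}      # strip the trailing "#pool"
echo "$backend"             # prints: netapp-iscsi
```

A volume of type NetAPP-iSCSI should report the back end configured with volume_backend_name=NetAppISCSI; anything else suggests the type-to-backend mapping is wrong.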
Before you begin
Only System Administrators can add storage to ViPR.
The following steps describe how to add the storage system to ViPR using the ViPR UI. To use the REST API to add the storage system to ViPR, see the Add Third-Party Block Storage to ViPR Using the REST API article.
- Select .
- Click Add.
- Enter a storage provider name.
- Select Third-party block for the type of storage provider.
- Enter the host IP Address.
- Specify whether the storage provider uses SSL.
- Leave the default port or enter a different one.
- Enter user credentials with storage system administrator privileges.
If the OpenStack Block Storage System nodes are installed on separate servers, enter the OpenStack Block Storage (Cinder) Controller node credentials.
- Click Save.
After ViPR discovers a third-party block storage system, a default storage port is created for the storage system and appears on the Storage Port page with the name Default and the storage port identifier Openstack+<storagesystemserialnumber>+Port+Default.
Fibre Channel configured storage ports
ViPR export operations cannot be performed on an FC-connected storage system that has been added to ViPR without any WWPNs assigned to the storage port. Therefore, ViPR system administrators must manually add at least one WWPN to the default storage port before performing any export operations on the storage system. WWPNs can only be added to ViPR through the ViPR CLI. For the steps to configure the storage port through the ViPR CLI, refer to Add WWPNs to FC connected third-party block storage ports.
After the WWPN is added to the storage port, you can perform export operations on the storage system from ViPR. At the time of the export, ViPR reads the export response from the Cinder service. The export response includes the WWPN that the system administrator manually added through the ViPR CLI, along with any additional WWPNs. ViPR then creates a storage port for each WWPN listed in the export response.
After a successful export operation is performed, the Storage Port page displays any newly created ports, in addition to the Default storage port.
Each time another export operation is performed on the same storage system, ViPR reads the Cinder export response. If the export response contains WWPNs that are not yet present in ViPR, ViPR creates a new storage port for each new WWPN.
iSCSI configured storage ports
The default storage port is used to support the storage system configuration until an export is performed on the storage system. At the time of the export, ViPR reads the export response from the Cinder service, which includes the iSCSI IQN. ViPR then modifies the default storage port's identifier with the IQN received from the Cinder export response.
Each time another export operation is performed on the same storage system, ViPR reads the Cinder export response. If the export response contains an IQN that is not yet present in ViPR, ViPR creates a new storage port.
You will need to get at least one valid WWPN for the storage port before continuing.
Use the following CLI commands to add a WWPN to the storage port.
- Get the last three digits of the storage system serial number from the list of storage systems.
C:\Users\<username>>viprcli storagesystem list
NAME                                     PROVIDER_NAME   SYSTEM_TYPE  SERIAL_NUMBER
IBMSVC-FC_StorwizeSVCDriver+11111111234  myProviderName  openstack    11111111234
- Get the port network ID for the Default storage port. The storage port network ID (PORT_NETWORK_ID) will be an invalid value.
C:\Users\<username>>viprcli storageport list -t openstack -sn 234
PORT_NAME  TRANSPORT_TYPE  NETWORK_NAME        PORT_NETWORK_ID       REGISTRATION_STATUS
DEFAULT    FC              FABRIC_name-fabric  <some invalid value>  REGISTERED
- Add the WWPN (50:01:02:34:05:06:FE:07 in this example) to the storage port.
C:\Users\<username>>viprcli storageport update -t openstack -sn 234 -pn DEFAULT -tt FC -pnwid "50:01:02:34:05:06:FE:07"
- Repeat step 2 to validate that the value was added to the storage port (PORT_NETWORK_ID).
C:\Users\<username>>viprcli storageport list -t openstack -sn 234
PORT_NAME  TRANSPORT_TYPE  NETWORK_NAME        PORT_NETWORK_ID          REGISTRATION_STATUS
DEFAULT    FC              FABRIC_name-fabric  50:01:02:34:05:06:FE:07  REGISTERED
Refer to the EMC ViPR CLI Reference guide for more information.
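Because the default port's PORT_NETWORK_ID starts out invalid, it can help to sanity-check a WWPN's format before handing it to viprcli. The helper below is a hypothetical convenience script, not part of the ViPR CLI; it checks for the eight colon-separated hex octets shown in the example above.

```shell
# Hypothetical helper: verify a WWPN is eight colon-separated hex octets
# (e.g. 50:01:02:34:05:06:FE:07) before calling viprcli.
is_valid_wwpn() {
  printf '%s' "$1" | grep -Eq '^([0-9A-Fa-f]{2}:){7}[0-9A-Fa-f]{2}$'
}

wwpn="50:01:02:34:05:06:FE:07"
if is_valid_wwpn "$wwpn"; then
  # Only now hand the value to the ViPR CLI (command from the steps above):
  # viprcli storageport update -t openstack -sn 234 -pn DEFAULT -tt FC -pnwid "$wwpn"
  echo "WWPN format OK"
else
  echo "WWPN format invalid" >&2
fi
```

Rejecting malformed values up front avoids registering a storage port with an unusable PORT_NETWORK_ID.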
Once the storage system is discovered you can:
- Deregister or delete the storage systems that ViPR will not manage
- Define the storage system resource allocation limit
- Deregister storage pools
- Set the storage pool resource allocation limit
If you want ViPR to manage only some of the storage systems discovered with the storage provider, you can deregister or delete the storage system from ViPR.
Deregister or delete storage systems from ViPR
- In the Admin Mode, select .
- Select the box in the storage system row.
- Click Deregister to keep the storage system in ViPR but make it unavailable to use as a ViPR resource. Or, click Delete to remove the storage system from ViPR.
Define the storage system resource allocation limit
- Select .
- Click the storage system name in the Storage System table.
- In the Edit Storage System page, disable the Unlimited Resource Allocation setting.
- Specify the maximum number of volumes ViPR can provision on this storage system. The value must be 0 or higher.
The Resource Limit value is a count of the number of volumes allowed to be provisioned on the storage system.
- Click Save.
Deregister storage pools
If a storage pool becomes unavailable on the storage system, the storage pool remains in the list of available ViPR storage pools. You must deregister the storage pool manually in ViPR to ensure ViPR does not use it as a resource when a service operation is executed.
- Select .
- Locate the row for the storage system in which the pools reside.
- In the Edit row, click Pools.
- Check the row for each pool that you want to make unavailable to ViPR for provisioning.
- Click Deregister.
Set the storage pool resource allocation limit
- Select .
- Locate the row for the storage system where the pools reside.
- In the Edit row, click Pools.
- Click the pool name.
- Change the maximum utilization percentage.
The default is 75%.
- For thin pool provisioning, set a maximum snapshot percentage.
The default is 300%.
- Enter a numeric value for the volume limit available to ViPR to provision from this storage pool.
By default, there is no limit on the amount of a storage pool that can be used by ViPR. The Resource Limit value is a count of the number of volumes allowed to be provisioned using the selected storage pool.
- Click Save.
Add the corresponding SAN switch from the ViPR page. When a SAN switch is added to ViPR, the Fibre Channel networks (Brocade fabrics or Cisco VSANs) are automatically discovered and registered in ViPR. Additionally, through discovery of the SAN switch topology, ViPR discovers and registers the host initiators for hosts on the network, and identifies which storage systems are associated with the SAN switch.
Refer to Add Fabric Managers and SAN Networks to EMC ViPR for more information.
For storage systems that use ViPR services with the iSCSI protocol, the iSCSI host ports must be logged in to the correct target array ports before they can be used in the service.