ViPR 2.2 - Ingest Unmanaged Block Volumes into ViPR

Introduction

Existing block volumes can be brought under ViPR management by ingesting them using services provided in the Service Catalog. The operation is a two-stage process that requires two services: a discovery service followed by an ingestion service. The services are for use by System Administrators only and are not visible to normal catalog users. This article describes how to use these services and provides additional information to support some ingestion use cases.

Once under ViPR management, ingested storage resources can be managed by provisioning users in the same way as if they had been created using ViPR, allowing them to be exported to hosts, expanded, and protected using snapshot and copy techniques. All functions (snapshots, mirroring, FAST policies, clones, and so on) are supported on ingested volumes.

Any volume that is brought under ViPR management must be assigned to a block virtual pool. So, before you can ingest volumes, you must ensure that a virtual pool exists whose criteria match those of the volumes to be ingested. For example, if you want to ingest volumes located on solid state devices (SSDs), you must ensure that virtual pools exist that allow SSDs. Similarly, if you want to ingest VMAX volumes that have associated SRDF volumes, you must ensure that there is a block virtual pool that can have VMAX volumes that are SRDF protected.

The discovery process finds volumes on a specified array and matches them with block virtual pools. When you use the ingest service, only volumes that were previously discovered are ingested.

Back to Top

Ingesting block volumes into ViPR: notes

Before you ingest a volume, you should be aware of the following notes and limitations.

  • ViPR can only ingest volumes on VMAX and VNX arrays. ViPR cannot ingest volumes from other block storage arrays, such as VMAX3 or XtremIO.
  • ViPR can ingest unexported VPLEX Local and VPLEX distributed virtual volumes. Ingest of exported VPLEX volumes is not supported in ViPR.
  • In a VPLEX environment, back-end storage is not ingested when you ingest the VPLEX virtual volume.

  • Datastore information built from an ESX cluster is not ingested into ViPR when the volume underlying the datastore is ingested. Only the volume presented to ESX is ingested.
  • ViPR can ingest a volume that is exported to a host over either Fibre Channel or iSCSI. ViPR cannot ingest a volume when the host communicates with it over both Fibre Channel and iSCSI.
Back to Top

Ingestion constraints

ViPR applies a set of tests on the unmanaged volume before it ingests the volume.

  • Both the virtual pool and virtual array specified in the service order must be active and accessible to the requesting tenant and project.
  • You cannot ingest a volume or metavolume that is in a consistency group. If you request three volumes to be ingested, and two of them belong to consistency groups on the array, one volume will be successfully ingested and the two volumes in the consistency groups will be skipped.
    Note: VPLEX virtual volumes that are part of a consistency group can be ingested.

  • Since ViPR can manage asynchronous SRDF volumes only when they are part of a consistency group, ViPR cannot ingest an SRDF volume if it is in Asynchronous mode.
  • If the volume is exported to a host or cluster, it can be ingested if you run the service Ingest Exported Unmanaged Volumes.
    Note: Ingest of exported VPLEX virtual volumes is not supported in ViPR.

  • If the volume has full copies (clones), snapshots, or mirrors, the volume can be ingested. However, only the parent volume is ingested. The replica is not ingested. Once a volume is ingested this way, all operations on the volume are allowed except deletion. You cannot delete an ingested volume if it has pre-existing snapshots, mirrors or full copies.
  • If the volume is RecoverPoint-protected, it cannot be ingested.
  • If the volume's auto-tiering policy does not match the auto-tiering policy specified in the virtual pool, it cannot be ingested.
  • If ingesting the requested volumes would exceed the capacity quota set on the virtual pool, the ingestion will fail.
  • The SAN Multipath parameters in the virtual pool must satisfy the constraints established in the volume's masking view and zoning. See SAN Multipath settings for VNX volume ingestion for more information.
  • The physical array on which the volume resides cannot have its storage ports divided between two or more virtual arrays. All storage ports on the physical array must be assigned to a single virtual array within ViPR.
  • If you are ingesting a VNX volume that has been exported to a host, check to be sure that the host has been registered with ViPR using its Fully Qualified Domain Name (FQDN). If the host has been registered with ViPR using its IP address, you should use the ViPR User Interface to change the Host IP address to an FQDN.
  • Ingestion of an exported volume will fail if the ports in its masking view/storage group are not part of the virtual array into which it will be ingested.
  • Ingestion will fail unless there are at least two existing host initiators in the ViPR array that match the initiators in the ingested masking view/storage group.
  • If you ingest a volume, the Host LUN ID for the volume is not ingested from the export group on the array. To synchronize the Host LUN ID for a volume, you can unexport and then re-export the volume. The Host LUN ID will then display properly in the ViPR user interface.
  • If an unmanaged volume is present in one or more storage groups on the array, but has not been exported to any host, ViPR will skip this volume during ingestion. To successfully ingest this volume, do one of the following:
    • Mount the volume on a host, then call the service Ingest Exported Unmanaged Volumes.
    • Delete the volume from all storage groups and call Ingest Unmanaged Volumes.
Back to Top

Ingestion constraint: masking view with no zones

Follow the procedure below to ingest a volume that is masked, but has no zones.

If you are trying to ingest a volume that has been exported to a host, you could encounter this error:
Checking numpath for host rmhostc22.lss.emc.com
Initiator 20:00:00:25:B5:16:C1:20 of host rmhostc22.lss.emc.com is not assigned to any ports.
Host rmhostc22.lss.emc.com (urn:storageos:Host:a380810f-0962-4a69-952a-bbe2a498cf1a:vdc1) 
has fewer ports assigned 0 than min_paths 4
This error indicates that there are no zones established for the masking view. To fix this problem, follow these steps:
  1. Using your switch software, create zones for the masking view.
  2. From the ViPR user interface, choose Physical Assets > Fabric Managers and rediscover the switch.
  3. Run Block Storage Services > Discover Unmanaged Volumes to discover the unmanaged volumes on the array.
  4. Rerun Block Storage Services > Ingest Exported Unmanaged Volumes.
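
Steps 2 through 4 can also be scripted against the ViPR REST API. The following Python sketch assumes the endpoints POST /vdc/network-systems/{id}/discover and POST /vdc/storage-systems/{id}/discover?namespace=UNMANAGED_VOLUMES, a hypothetical hostname, and placeholder URNs; verify the endpoints against the ViPR REST API reference for your release.

    import requests

    VIPR = "https://vipr.example.com:4443"               # hypothetical ViPR API endpoint
    HEADERS = {"X-SDS-AUTH-TOKEN": "<token>",            # token from GET /login (assumed)
               "Accept": "application/json"}
    FABRIC_MGR = "urn:storageos:NetworkSystem:...:vdc1"  # placeholder URN
    ARRAY = "urn:storageos:StorageSystem:...:vdc1"       # placeholder URN

    # Step 2: rediscover the switch so ViPR picks up the newly created zones.
    requests.post(f"{VIPR}/vdc/network-systems/{FABRIC_MGR}/discover",
                  headers=HEADERS, verify=False).raise_for_status()

    # Step 3: rediscover unmanaged volumes on the array. The ingest service
    # (step 4) can then be rerun from the catalog.
    requests.post(f"{VIPR}/vdc/storage-systems/{ARRAY}/discover",
                  params={"namespace": "UNMANAGED_VOLUMES"},
                  headers=HEADERS, verify=False).raise_for_status()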
Back to Top

Before Discovery - VPLEX virtual volumes

ViPR enables you to ingest unexported, unmanaged VPLEX virtual volumes.

Before you ingest an unexported, unmanaged VPLEX virtual volume, complete the following procedures.

  • Build a virtual array that includes network access to the VPLEX device and the VPLEX cluster on which the volume is configured.

  • Build a virtual pool that has the same High Availability setting as the volume (VPLEX Local or VPLEX Distributed).

  • If the VPLEX virtual volume is in a consistency group, the virtual pool must have Multi-Volume Consistency enabled.

Backing storage for the VPLEX virtual volume is not ingested - only the VPLEX virtual volume.

RecoverPoint-protected volumes are not ingested.

Back to Top

Ingest unexported unmanaged volumes

To ingest a volume that has not been exported to a host, you must run two services in this order:

  1. Discover Unmanaged Volumes
  2. Ingest Unmanaged Volumes
Back to Top

Before discovery - unexported volumes and metavolumes

Before you run the Discover Unmanaged Volumes service on an unmanaged volume or metavolume, follow this procedure.

Before you begin

From ViPR, discover the storage array where the volume resides. This brings the physical storage pools on the array under ViPR management.

Procedure

  1. Examine the volume in Unisphere to determine the storage pool in which the volume resides.
  2. From ViPR, build a virtual array that includes connectivity to the physical storage array on which your volume resides.
  3. From ViPR, build a virtual pool that matches the physical storage pool where the volume resides.
  4. If the volume resides in a thin pool on the array, be sure that the virtual pool that matches the volume's physical storage pool has Provisioning Type: Thin.
Back to Top

Discover unmanaged volumes

The ViPR service catalog provides a Discover Unmanaged Volumes service that finds block volumes which are not under ViPR management and matches them to a ViPR virtual pool. The operation is also supported from the ViPR API and CLI.

Before you begin

The following prerequisites are applicable:
  • This operation requires the System Administrator role in ViPR.

  • The virtual array and virtual pool into which you want to ingest the volumes must exist when the discovery is performed. There must be at least one virtual pool in ViPR that matches the physical storage pool that contains the volume.

Procedure

  1. Select Service Catalog > View Catalog > Block Storage Services > Discover Unmanaged Volumes.
  2. Select the physical block storage system on which you want to discover unmanaged volumes. You can select more than one storage system.
  3. Select Order.
    The orders page is displayed and shows the progress of the request.
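
The same discovery can be driven from the REST API. The following Python sketch assumes the endpoint POST /vdc/storage-systems/{id}/discover?namespace=UNMANAGED_VOLUMES, a follow-up listing endpoint, and the response layout shown in the comments; the hostname and all URNs are placeholders to verify against the ViPR REST API reference for your release.

    import requests

    VIPR = "https://vipr.example.com:4443"          # hypothetical ViPR API endpoint
    HEADERS = {"X-SDS-AUTH-TOKEN": "<token>",       # token from GET /login (assumed)
               "Accept": "application/json"}
    ARRAY = "urn:storageos:StorageSystem:...:vdc1"  # placeholder URN

    # Kick off unmanaged volume discovery on one storage system.
    r = requests.post(f"{VIPR}/vdc/storage-systems/{ARRAY}/discover",
                      params={"namespace": "UNMANAGED_VOLUMES"},
                      headers=HEADERS, verify=False)
    r.raise_for_status()

    # After the discovery task completes, list the unmanaged volumes that
    # were found (assumed endpoint and response layout).
    r = requests.get(f"{VIPR}/vdc/storage-systems/{ARRAY}/unmanaged/volumes",
                     headers=HEADERS, verify=False)
    r.raise_for_status()
    for vol in r.json().get("unmanaged_volume", []):
        print(vol.get("id"))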
Back to Top

Was the discovery successful?

This topic describes how to determine whether the discovery service was successful, and how to respond to a discovery that does not succeed.

There are two ways to determine if the discovery is successful:
  • Check the ViPR log ({vipr home}/logs/controllersvc.log) for the string SUPPORTED_VPOOL_LIST after unmanaged volume discovery. The matching virtual pools are listed there for each unmanaged volume. If an unmanaged volume matched no virtual pools, the SUPPORTED_VPOOL_LIST entry is absent for that volume.
  • Call the following ViPR REST API:
    GET
    /vdc/unmanaged/volumes/{id}
    The SUPPORTED_VPOOL_LIST section of the unmanaged volume feed should contain the names of the matching virtual pools. If this list is empty, the discovery was unsuccessful.
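
The check can be scripted with any HTTP client. The following Python sketch assumes a JSON response in which volume attributes arrive as name/value entries under volume_information (an assumed layout to verify against your release) and uses placeholder values throughout.

    import requests

    VIPR = "https://vipr.example.com:4443"          # hypothetical ViPR API endpoint
    HEADERS = {"X-SDS-AUTH-TOKEN": "<token>",       # token from GET /login (assumed)
               "Accept": "application/json"}
    UMV = "urn:storageos:UnManagedVolume:...:vdc1"  # placeholder URN

    r = requests.get(f"{VIPR}/vdc/unmanaged/volumes/{UMV}",
                     headers=HEADERS, verify=False)
    r.raise_for_status()
    info = r.json().get("volume_information", [])

    # Assumed layout: a list of {"name": ..., "value": [...]} attribute entries.
    vpools = next((a["value"] for a in info
                   if a.get("name") == "SUPPORTED_VPOOL_LIST"), [])
    if vpools:
        print("Matching virtual pools:", ", ".join(vpools))
    else:
        print("No matching virtual pools - discovery did not match this volume.")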
Back to Top

Ingest unmanaged volumes

The ViPR service catalog provides an Ingest Unmanaged Volumes service that brings previously discovered unmanaged block volumes under ViPR management. The operation is also supported from the ViPR API and CLI.

Before you begin

  • This operation requires the System Administrator role in ViPR.
  • You must run Discover Unmanaged Volumes on the array from which the block volumes will be ingested.

  • Ingested volumes will be assigned to a project. You must belong to the selected project and have write permission on the project.

Note: If the virtual array or virtual pool has been modified since the last time the unmanaged volumes were discovered, rerun Discover Unmanaged Volumes prior to running the ingest operation to ensure volumes are assigned to the correct virtual array and virtual pool.

Procedure

  1. At the ViPR UI, select Service Catalog > View Catalog > Block Storage Services > Ingest Unmanaged Volumes.
  2. Select the storage system from which you want to ingest block volumes.
  3. Select a virtual array that contains the physical array storage pools that you want to import. The storage system might contribute physical storage pools to a number of virtual pools. If you want to ingest volumes that match other virtual pools, you will need to run the service again for each of those virtual pools.
  4. From the array physical storage pools that form part of the virtual array, select a virtual pool that matches the physical pool where the unmanaged volumes reside.
  5. Select a project. ViPR assigns the unmanaged volumes to the project you choose.
  6. Select Order.
    The orders page shows the progress of the request. If the order is successfully fulfilled, you can look at the Resources page to see the imported volumes.
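
The catalog order corresponds to a single API request. The following Python sketch assumes the endpoint POST /vdc/unmanaged/volumes/ingest and the payload field names shown; the hostname and all URNs are placeholders to verify against the ViPR REST API reference for your release.

    import requests

    VIPR = "https://vipr.example.com:4443"     # hypothetical ViPR API endpoint
    HEADERS = {"X-SDS-AUTH-TOKEN": "<token>",  # token from GET /login (assumed)
               "Accept": "application/json",
               "Content-Type": "application/json"}

    payload = {                                # assumed field names - verify
        "project": "urn:storageos:Project:...:vdc1",
        "varray": "urn:storageos:VirtualArray:...:vdc1",
        "vpool": "urn:storageos:VirtualPool:...:vdc1",
        "unmanaged_volume_list": ["urn:storageos:UnManagedVolume:...:vdc1"],
    }
    r = requests.post(f"{VIPR}/vdc/unmanaged/volumes/ingest",
                      json=payload, headers=HEADERS, verify=False)
    r.raise_for_status()
    print(r.json())   # task list for the ingest operation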

After you finish

Once the unmanaged volumes have been ingested into ViPR, they can be exported to a host and then mounted, or used for other functions, such as SRDF mirror volumes.

To export a volume to either a Windows or Linux host, use:
  • Service Catalog > View Catalog > Block Storage Services > Export Volume to a Host.

To mount the volumes on a host:
  • For Linux hosts use: Service Catalog > View Catalog > Block Service for Linux > Mount Existing Volume on Linux.
  • For Windows hosts use: Service Catalog > View Catalog > Block Service for Windows > Mount Volume on Windows.

Back to Top

Ingest exported unmanaged volumes

To ingest a volume that has been exported to a host, you must run two services in this order:

  1. Discover Unmanaged Volumes
  2. Ingest Exported Unmanaged Volumes
Back to Top

Before discovery - exported volumes and metavolumes

Before you run the Discover Unmanaged Volumes service on an unmanaged volume or metavolume that has been exported to a host or cluster, follow this procedure.

Before you begin

From ViPR, discover the storage array where the volume resides.

From ViPR, discover the host or cluster to which the volume has been exported. Use the FQDN of the host for discovery. If you discover the host using its IP address, you may encounter problems ingesting exported VNX volumes.

Procedure

  1. Examine the volume in Unisphere to determine the storage pool in which the volume resides.
  2. Build a project or choose an existing project to which you have write access.
  3. From ViPR, build a virtual array that includes connectivity to the physical storage array on which your volume resides, and any hosts to which your target volume has been exported.
  4. From ViPR, build a virtual pool that matches the physical storage pool where the volume resides.
  5. If the volume resides in a thin pool on the array, be sure that the virtual pool that matches the volume's physical storage pool has Provisioning Type: Thin.
Back to Top

Before Discovery - Checks for exported VMAX volumes

Before you ingest an unmanaged VMAX block volume that has been exported to a host, you must collect some information from the masking views on the array, and the fabric on the switch.

From the masking view, collect this information:

  • Determine if the Masking View has a storage group associated with a specific FAST policy. If so, the FAST policy must be specified in the virtual pool you use for the ingestion.
  • If the host or cluster has multiple masking views, and those masking views have different FAST policies assigned, you will have to build multiple virtual pools - one for each FAST policy.
  • If the storage group in the masking view specifies Host IO Limit settings, the virtual pool must specify Host IO Limit Settings that match the Storage Group.
Use your switch software to collect information about the fabric that enables communication between your host and your array.
  • If your host initiator is zoned to 2 front end adapter ports, the virtual pool you use for ingest must have Minimum Paths set to 2.
  • Set the Maximum Paths value in the virtual pool to the number of paths established for all of the host's initiators in all zones on the switch (or a greater value).
  • If a host has multiple initiators, all of those initiators must be zoned to the same number of array ports. The number of paths per initiator is set in the virtual pool you use for ingest.
Back to Top

SAN Multipath settings for VNX volume ingestion

If you are ingesting an exported VNX volume, set SAN Multipath fields according to the guidelines below.

  • Check the host in Unisphere (or ViPR or another tool) for the initiators associated with the host.
  • Use the switch software (such as CMCNE or an equivalent CLI) to count the number of paths from each of the host's initiator ports to the array. This value is the number of array ports to which all the host's initiators have been zoned. For a VNX, the number of paths is determined based on the existing zones between the host initiators and the array ports that are in the VNX storage group. For example, if a host has two initiators, and each initiator is zoned to two array ports, the Maximum Paths field in the ingest virtual pool is set to 4.
  • Check to be sure that each of the host's initiators is zoned to the same number of array ports. If the host's initiators are zoned to different numbers of array ports, the ingest service will fail.
You will require this information when you build the virtual pool for the ingest.
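
These guidelines reduce to a simple calculation. The following pure-Python sketch, using hypothetical zoning data, derives the Maximum Paths value and flags the uneven zoning that would make the ingest service fail.

    # Hypothetical zoning data: initiator WWN -> set of array ports it is zoned to.
    zoned_ports = {
        "10:00:00:00:c9:aa:aa:01": {"50:00:09:72:08:01:a1:00", "50:00:09:72:08:01:a1:04"},
        "10:00:00:00:c9:aa:aa:02": {"50:00:09:72:08:01:a5:00", "50:00:09:72:08:01:a5:04"},
    }

    per_initiator = {i: len(p) for i, p in zoned_ports.items()}
    if len(set(per_initiator.values())) > 1:
        raise SystemExit(f"Uneven zoning per initiator: {per_initiator} - ingest will fail")

    max_paths = sum(per_initiator.values())            # 2 initiators x 2 ports = 4
    paths_per_initiator = next(iter(per_initiator.values()))
    print(f"Set Maximum Paths to at least {max_paths}; "
          f"Paths Per Initiator is {paths_per_initiator}")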
Back to Top

Setting paths in zones and on the array before ViPR ingest

When computing the number of paths a host has to a VNX storage group or VMAX masking view, ViPR uses the zoning paths between the host initiators and the storage ports that are in that VNX storage group or VMAX masking view.

For some arrays, such as a VNX, the same paths defined in the zones must also be defined on the storage array for the paths to become effective. It is possible for the two path sets (the switch-defined zone paths and the array-defined paths) to not match. This is considered a misconfiguration that should be corrected.

For example, assume the following:
  • A storage group has 2 initiators: I1 and I2
  • The storage group has two storage ports P1 and P2
  • VNX paths are defined as I1 --> P1 and I2 --> P2
  • All four members are in a single zone: I1, I2, P1, and P2.
When ViPR generates a path count, it will count 4 paths:
  • I1 --> P1
  • I1 --> P2
  • I2 --> P1
  • I2 --> P2

If you assume that only 2 paths exist (and enter 2 in the Maximum Paths field of the virtual pool you use for the ingest operation) but ViPR counts 4, your ingest may fail. Remove unused zoning paths before you try to ingest the volume.
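
The discrepancy is easy to reproduce. The following Python sketch enumerates the paths ViPR counts from the example zone above: every initiator/port pairing visible through the zone, not just the two paths defined on the VNX.

    from itertools import product

    initiators = ["I1", "I2"]        # both initiators are in the zone
    storage_ports = ["P1", "P2"]     # both storage ports are in the zone

    # ViPR counts every initiator/port pairing visible through the zone.
    counted = list(product(initiators, storage_ports))
    for i, p in counted:
        print(f"{i} --> {p}")
    print(f"ViPR path count: {len(counted)}")   # 4, not 2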

Back to Top

FAST policy settings for ingested volumes

If your volume has a FAST policy assigned to it on the array, you must set the Auto-tiering Policy field to that policy name in the virtual pool you use for ingestion.

Back to Top

ViPR Best Practice: Host with multiple masking views

When you are ingesting a volume that has been exported to a host, and that host has multiple masking views, you should ingest at least one volume from each masking view.

Consider the following figure:

[Figure: Host with multiple masking views]

Best practice in this situation is to ingest a volume from each masking view, and use ViPR for all management of these volumes. If you ingest one volume and not the other, problems can occur in situations where the uningested masking view is modified outside of ViPR. ViPR has no method of synchronizing with changes to an uningested masking view.

Back to Top

Ingest exported unmanaged volumes

The ViPR service catalog provides an Ingest Exported Unmanaged Volumes service that brings under ViPR management previously discovered unmanaged block volumes that have already been exported to hosts. The operation is also supported from the ViPR API and CLI.

Before you begin

This operation requires the System Administrator role in ViPR.

Procedure

  1. From the ViPR user interface, select Service Catalog > Block Storage Services > Ingest Exported Unmanaged Volumes.
    Enter the information in the following table into the service order form.
  2. Click Order.
    The orders page shows the progress of the request. If the order is successfully fulfilled, you can look at the Resources page to see the imported volumes.
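
As with the unexported case, the order maps to a single API request. The following Python sketch assumes the endpoint POST /vdc/unmanaged/volumes/ingest-exported; the payload field names, including whether a host or cluster URN is supplied, are assumptions to verify against the ViPR REST API reference for your release.

    import requests

    VIPR = "https://vipr.example.com:4443"     # hypothetical ViPR API endpoint
    HEADERS = {"X-SDS-AUTH-TOKEN": "<token>",  # token from GET /login (assumed)
               "Accept": "application/json",
               "Content-Type": "application/json"}

    payload = {                                # assumed field names - verify
        "project": "urn:storageos:Project:...:vdc1",
        "varray": "urn:storageos:VirtualArray:...:vdc1",
        "vpool": "urn:storageos:VirtualPool:...:vdc1",
        "host": "urn:storageos:Host:...:vdc1",   # or a cluster URN for cluster exports
        "unmanaged_volume_list": ["urn:storageos:UnManagedVolume:...:vdc1"],
    }
    r = requests.post(f"{VIPR}/vdc/unmanaged/volumes/ingest-exported",
                      json=payload, headers=HEADERS, verify=False)
    r.raise_for_status()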
Back to Top

Block volume ingestion use cases

For some ingestion use cases, the procedure requires additional steps or additional configuration information. The following topics describe these use cases.

Back to Top

Ingest SRDF protected volumes

To ingest VMAX volumes that are protected by SRDF, you must ingest both the source volume and the target volume, and you must ingest them into different virtual pools. The source virtual pool must contain physical storage pools located on the source VMAX system. The target virtual pool must contain physical storage pools located on the target VMAX system. As the source and target volumes will be ingested into different virtual pools, you will need to perform two ingest operations.

It is assumed that you have two VMAX arrays and that SRDF protection between the arrays is configured. In addition, the two VMAX systems should be added to different ViPR virtual arrays. If you have not yet configured VMAX with SRDF you should follow the instructions provided in: Protect Data Using VMAX SRDF Remote Replication with ViPR.

Note: It is possible for both source and target virtual pools to belong to the same virtual array; however, it is recommended that they be configured into different virtual arrays to make the relationship clearer.

To ingest an SRDF-protected volume, do the following:
  1. Check the name of the RDF group for the SRDF pair you are trying to ingest. The ViPR project you create for the ingest operation must have exactly the same name as the RDF group; a scripted sketch of creating such a project follows this procedure. To check the RDF group name, you can use an element manager such as SMC, or use the following SYMCLI command:
    # symcfg -sid <id> list -rdfg all
  2. Ensure that you have set up a virtual array with a virtual pool that has SRDF protection configured. This virtual pool will be used to ingest the source volume. Specify SRDF protection in the Data Protection settings for the virtual pool at the Virtual Assets > Block Virtual Pools page.

    The virtual pool Data Protection panel with the appropriate selections is shown below.

    [Figure: SRDF protection on source virtual pool]

    When Add Copy is selected, the target virtual array and virtual pool can be specified. Only virtual arrays, and their constituent pools, that can act as the target for the source VMAX are offered by this dialog.

    [Figure: SRDF protection: specify target virtual pool]

  3. Ensure that you have set up a virtual array with a virtual pool for ingesting the target volume. The virtual pool should not be SRDF protected.
  4. Run the discovery process and specify both the source and target VMAX arrays (Discover unmanaged volumes).
  5. After successful discovery on the source and target arrays, run the ingestion process on the source array followed by the target array (Ingest unmanaged volumes).
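
Because the project name must match the RDF group name exactly (step 1), creating the project programmatically can avoid typos. The following Python sketch assumes the endpoint POST /tenants/{id}/projects and a placeholder tenant URN; verify the endpoint against the ViPR REST API reference for your release.

    import requests

    VIPR = "https://vipr.example.com:4443"       # hypothetical ViPR API endpoint
    HEADERS = {"X-SDS-AUTH-TOKEN": "<token>",    # token from GET /login (assumed)
               "Accept": "application/json",
               "Content-Type": "application/json"}

    TENANT = "urn:storageos:TenantOrg:...:vdc1"  # placeholder URN
    RDF_GROUP = "<rdf group name from symcfg>"   # must match the RDF group exactly

    r = requests.post(f"{VIPR}/tenants/{TENANT}/projects",
                      json={"name": RDF_GROUP}, headers=HEADERS, verify=False)
    r.raise_for_status()
    print("Created project:", r.json().get("id"))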
Back to Top

Ingest unexported, unmanaged, VPLEX volumes

Ingestion of unexported, unmanaged VPLEX volumes can be achieved using the discover and ingest services from the ViPR service catalog, or using the ViPR API or CLI. Because ViPR does not support all of the back-end array types supported by VPLEX, it ingests only the VPLEX virtual volume and performs operations that do not depend on having control of the back-end array.

The following notes are applicable:
  • The following operations are supported on ingested VPLEX volumes: Export, Unexport, Mirror, and Migrate.
  • A virtual volume can be ingested only if it is not already exported. ViPR does not support ingest of exported VPLEX volumes.
  • Operations that require control of the back-end storage, such as Expand, Snap, and Clone, are not supported.
Ingested volumes can also be moved to a different virtual pool. This is useful where you want to migrate the back-end storage of an ingested VPLEX volume from an array that is not supported by ViPR to an array that is supported. When performing this virtual pool migration, the following apply:
  • A virtual pool change to migrate the back-end storage can be performed on all ingested VPLEX Local volumes. For VPLEX Distributed volumes, the ingested volume must consist of a single extent on a single storage volume.
  • A virtual pool change of an ingested VPLEX Local volume will result in a change to the name of the volume on the VPLEX. The VPLEX automatically renames the local virtual volume to the name of the local device on which it is built, with the "_vol" suffix appended. Because the volume is migrated to a new local device created by ViPR, the new name follows the same convention as a ViPR-created volume. This does not happen for a distributed volume, which uses VPLEX extent migration, because VPLEX device migration is not supported for distributed volumes.

A typical procedure for ingesting and moving an ingested volume is provided in: Move data from an unsupported array to supported array using VPLEX.

Back to Top

Move data from an unsupported array to supported array using VPLEX

You can use a VPLEX storage system managed by ViPR to move data from an array that is not supported by ViPR to a supported array. The procedure requires you to connect the unsupported array to VPLEX and ingest the source volume.

Before you begin

  • Ensure that a VPLEX system has been added to ViPR and has been successfully discovered.
  • Ensure that a VPLEX virtual pool exists. This will be used to create a target volume to which data will be moved. The virtual pool must specify High Availability as VPLEX Local.
  • You must have the Tenant Administrator role in ViPR in order to access the Change Volume Virtual Pool Service.
  • You must have the System Administrator role to perform any physical or virtual asset operations and to run the discovery and ingestion services.

Procedure

  1. Connect VPLEX to the origin array.
  2. Encapsulate origin array volumes into VPLEX using the VPLEX UI or Unisphere.
  3. At the ViPR UI, create a virtual pool that includes the VPLEX physical storage pool that hosts the volume you want to ingest and migrate.
  4. From the ViPR UI, run the discovery service Service Catalog > View Catalog > Block Storage Services > Discover Unmanaged Volumes on the VPLEX array.
    Details of running this service are provided in (Discover unmanaged volumes).
  5. After successful discovery, run the ingestion process (Ingest unmanaged volumes).
    You will need to specify the virtual pool that you created, which contains the origin volume.
  6. If you have not already configured VPLEX to use the destination array, and configured a virtual pool that includes suitable storage pools from the destination array, you will need to:
    1. Bring the destination array online and connect it to the VPLEX array.
    2. Add the destination array to ViPR and make sure that it has been discovered by ViPR.
    3. Create a virtual pool (in an existing or new virtual array) that includes the VPLEX storage pool to which you want to migrate the data. This must be a storage pool located on the destination backing array.
  7. Run the Change Volume Virtual Pool service to migrate the data. The service is located in the service catalog at Service Catalog > View Catalog > Block Storage Services > Change Volume Virtual Pool. Specify the Operation as VPLEX Data Migration.
    More information on using the Change Volume Virtual Pool service can be found in: Change the ViPR Virtual Pool used in a VPLEX Environment.
    This step creates volumes on both the VPLEX and the destination array, and ViPR converts the underlying volume on the destination array to a ViPR-managed volume.
  8. At the ViPR UI, delete the origin volume using the Service Catalog > View Catalog > Block Storage Services > Remove Block Volumes service or using the Delete operation from the Resources > Volumes page. Specify the Deletion Type as Inventory Only; a scripted sketch of this call follows this procedure.
    This deletes the volumes from ViPR only; the volumes remain on the physical devices.
  9. The VPLEX and origin array can now be removed, if desired, since data is on the new array.
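
The inventory-only delete in step 8 can also be issued through the REST API. The following Python sketch assumes the endpoint POST /block/volumes/{id}/deactivate with a type=VIPR_ONLY query parameter and a placeholder volume URN; verify against the ViPR REST API reference for your release.

    import requests

    VIPR = "https://vipr.example.com:4443"     # hypothetical ViPR API endpoint
    HEADERS = {"X-SDS-AUTH-TOKEN": "<token>",  # token from GET /login (assumed)
               "Accept": "application/json"}
    VOLUME = "urn:storageos:Volume:...:vdc1"   # placeholder URN of the origin volume

    # type=VIPR_ONLY removes the volume from the ViPR inventory only;
    # type=FULL would also delete it from the array.
    r = requests.post(f"{VIPR}/block/volumes/{VOLUME}/deactivate",
                      params={"type": "VIPR_ONLY"}, headers=HEADERS, verify=False)
    r.raise_for_status()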

Results

Data from the unsupported array is now successfully migrated to the supported storage array.

Back to Top