ViPR 2.1 - EMC ViPR with VPLEX: Benefits and Examples

Executive Summary

EMC VPLEX federates data located on heterogeneous storage arrays to create dynamic, distributed, highly available data centers allowing for data mobility, collaboration over distance, and data protection.

In ever-scaling environments, with customers and end users asking for more capacity, protection, and flexibility, it is difficult for storage administrators to manage infrastructure efficiently and deliver storage quickly. VPLEX transforms the delivery of IT into a flexible, efficient, reliable, and resilient service. EMC ViPR is a lightweight, software-only product that transforms existing storage into a simple, extensible, and open platform that can deliver fully automated storage services, helping realize the full potential of the software-defined data center. Together, VPLEX and ViPR give storage administrators the power to reduce the time needed to deliver complex storage environments that scale with their end users.


Audience

This document is intended for storage professionals who are familiar with EMC products and are looking for information on why, when, and how to use ViPR and VPLEX together.


What is VPLEX?

VPLEX, with its GeoSynchrony operating system, addresses three primary IT needs:

  • Data Mobility: Move data non-disruptively between EMC and third-party storage arrays without host downtime. VPLEX moves data transparently and the virtual volumes retain the same identities and the same access points to the host. The host does not need to be reconfigured. VPLEX moves applications and data between different storage installations:
    • Within the same data center or across a campus (VPLEX Local)
    • Within a geographical region (VPLEX Metro)
    • Across even greater distances (VPLEX Geo)
  • Availability: VPLEX creates highly available storage infrastructure across these same varied geographies with unmatched resiliency. Protect data in the event of disasters or component failures in your data centers. With VPLEX, you can withstand failures of storage arrays or cluster components, an entire site failure, or loss of communication between sites (when two clusters are deployed).
  • Collaboration: VPLEX provides efficient, real-time data collaboration over distance for Big Data applications. AccessAnywhere provides cache-consistent active-active access to data across VPLEX clusters. Multiple users at different sites can work on the same data while maintaining consistency of the dataset.

What is ViPR?

EMC® ViPR™ is a software-defined platform that abstracts, pools, and automates a data center's underlying physical storage infrastructure.

ViPR provides data center administrators a single control plane for heterogeneous storage systems. ViPR enables software-defined data centers by providing:
  • Storage automation capabilities for heterogeneous block and file storage (control path).
  • Object data management and analytic capabilities through Data Services that create a unified pool (bucket) of data across file shares (data path).
  • Integration with VMware and Microsoft compute stacks to enable higher levels of compute and network orchestration.
  • A comprehensive RESTful interface for integrating ViPR with management and reporting applications (see the example following this list).
  • A web-based User Interface (UI) for configuring and monitoring ViPR, as well as self-service storage provisioning by enterprise users.
  • Comprehensive and customizable platform reporting capabilities, including capacity metering, chargeback, and performance monitoring through the ViPR SolutionPack.
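
As an illustration of the RESTful interface, the following minimal Python sketch authenticates against a ViPR virtual appliance and captures the session token. The API port (4443), the /login endpoint, and the X-SDS-AUTH-TOKEN header follow ViPR REST API conventions, but treat the host name and credentials as placeholders and verify the details against the ViPR 2.1 REST API reference.

    # Minimal sketch: authenticate to the ViPR REST API
    # (assumes the standard ViPR API port 4443 and /login endpoint).
    import requests

    VIPR = "https://vipr.example.com:4443"  # placeholder appliance address

    # ViPR returns an auth token in the X-SDS-AUTH-TOKEN response header.
    # verify=False because appliances commonly use self-signed certificates.
    resp = requests.get(VIPR + "/login", auth=("root", "password"), verify=False)
    resp.raise_for_status()
    token = resp.headers["X-SDS-AUTH-TOKEN"]

    # Subsequent calls pass the token and request JSON responses.
    headers = {"X-SDS-AUTH-TOKEN": token, "Accept": "application/json"}
    # Example: list the storage systems currently known to ViPR.
    print(requests.get(VIPR + "/vdc/storage-systems", headers=headers, verify=False).json())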

Installation and integration: supported deployment models

ViPR supports discovery and management of storage provided by VPLEX Local and VPLEX Metro configurations.

VPLEX Geo is not supported. The diagram below shows an example of a VPLEX Metro configuration across two data centers. Host 1 and host 2 can access volumes 1, 2, 3, and 4 through the locally connected VPLEX. If the hosts are clustered (not shown in Figure 1), they can leverage a distributed virtual volume that spans sites 1 and 2. The environment in Figure 1 can withstand multiple component failures and continue to operate without a disruption in service.

Figure 1: VPLEX Metro example


Discovering VPLEX

To use ViPR with VPLEX you need to discover the back-end VMAX/VNX arrays, the VPLEX clusters, the hosts that you intend to provision storage to, and the attached fabrics.

For discovery and management activities, ViPR uses the VPLEX Element Manager API. ViPR treats a VPLEX system as a storage system physical asset and automatically rediscovers it every 60 minutes by default.

To discover a VPLEX system, select Physical Assets > Storage Systems and specify the following information:
  • IP address of the VPLEX system's management server
  • Port number for the API (443 by default)
  • Credentials to access the system
Alternatively, you can discover the VPLEX by entering information about its SMI-S provider. Choose Physical Assets > Storage Providers and enter this information:
  • FQDN of the VPLEX system's management server
  • Port number: 5989 for an SSL connection or 5988 for standard sockets
  • Credentials to access the system

After saving this information, ViPR automatically performs an initial discovery of the VPLEX cluster(s).
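
For scripted environments, the same discovery can be driven through the REST interface. The sketch below registers a VPLEX system as a storage system; the /vdc/storage-systems endpoint and the payload field names are assumptions based on the ViPR REST API, so verify them against the ViPR 2.1 REST API reference, and treat all addresses and credentials as placeholders.

    # Hedged sketch: register a VPLEX for discovery via REST
    # (payload field names assumed from the ViPR REST API).
    import requests

    VIPR = "https://vipr.example.com:4443"
    token = requests.get(VIPR + "/login", auth=("root", "password"),
                         verify=False).headers["X-SDS-AUTH-TOKEN"]
    headers = {"X-SDS-AUTH-TOKEN": token, "Accept": "application/json"}

    payload = {
        "name": "vplex-metro-1",
        "system_type": "vplex",                   # assumed type identifier
        "ip_address": "vplex-mgmt.example.com",   # management server address
        "port_number": 443,                       # Element Manager API port
        "user_name": "service",
        "password": "password",
    }
    resp = requests.post(VIPR + "/vdc/storage-systems", json=payload,
                         headers=headers, verify=False)
    print(resp.json())  # returns a task that tracks the initial discovery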

To discover a VPLEX Metro configuration from ViPR, you need only discover one of the two VPLEX clusters.

It is possible to discover both management servers of the VPLEX system. Discovering both clusters enables ViPR to continue to discover and manage the VPLEX in the event that one of the management servers is unavailable.

For more information on ViPR array discoveries, refer to the EMC ViPR 2.1 Product Documentation Index.

Figure 2: Discovering a VPLEX storage system


Create a VPLEX-based virtual array

Once ViPR discovers the physical assets, the next step is to create virtual arrays and virtual pools. The virtual array you create should contain both the VPLEX cluster and the block arrays to which it is physically connected and zoned.

There must be one virtual array for each VPLEX cluster. By configuring the virtual array this way, ViPR knows where to get the back-end storage and which VPLEX cluster to use when block storage with VPLEX is requested. Plan and perform this step carefully: once resources have been provisioned, the configuration cannot be changed without first disruptively removing the provisioned volumes.

Figure 3: Edit Virtual Array
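
If you script this step, creating the virtual array itself is a single REST call; the network or port assignments described in the following sections are then made against the new virtual array's ID. The /vdc/varrays endpoint is an assumption based on the ViPR REST API; verify it against the ViPR 2.1 REST API reference.

    # Hedged sketch: create an empty virtual array for one VPLEX cluster.
    import requests

    VIPR = "https://vipr.example.com:4443"
    token = requests.get(VIPR + "/login", auth=("root", "password"),
                         verify=False).headers["X-SDS-AUTH-TOKEN"]
    headers = {"X-SDS-AUTH-TOKEN": token, "Accept": "application/json"}

    varray = requests.post(VIPR + "/vdc/varrays", json={"name": "vArray-Site1"},
                           headers=headers, verify=False).json()
    print(varray["id"])  # use this ID when assigning networks or ports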


Local fabric virtual arrays

The easiest way to add the back-end storage and the VPLEX cluster to the virtual array is to select the appropriate fabric from the network selection dialog.

This approach applies when the VPLEX clusters and back-end storage are isolated on separate local fabrics, as opposed to being stretched fabrics that cross sites. Once you select networks, ViPR identifies all of the storage systems on the fabric, including the VPLEX, and adds them to the virtual array.

Figure 4: Add Networks

Figure 5: Adding networks to virtual arrays


Stretched fabric virtual arrays

When you create a virtual array, rather than choosing a network, you can select individual array and VPLEX ports. This more granular and configurable method allows you to select individual ports from the back-end array that have been specifically designated for use with VPLEX-protected storage.

If a fabric across two sites contains both VPLEX clusters in a VPLEX Metro configuration, you must use the port selection method to add the VPLEX and backing arrays to the virtual array.

You should add both the back-end and front-end ports from the VPLEX as well as ports from the appropriate back-end array to the virtual array. The same virtual array must not contain ports from both VPLEX clusters. This limitation differentiates each cluster at each site and ensures the correct back-end storage array is used in conjunction with the VPLEX cluster in the same geographical location.

Figure 6: Storage Ports

You can confirm that the two clusters in a VPLEX Metro configuration are in two different virtual arrays by looking at the physical array in ViPR and reviewing the displayed ports, as shown in Figure 6. For VPLEX systems with one engine, ports whose group column entry starts with "director-1" belong to the first VPLEX cluster, and ports whose group column entry starts with "director-2" belong to the second cluster (for example, director-1-1-A and director-1-1-B versus director-2-1-A and director-2-1-B). The following figure shows the Add Storage Ports dialog box.

Figure 7: Add Storage Ports

In the example VPLEX Metro configuration in Figure 1, you can create two virtual arrays for the most flexibility. With one virtual array, you can only create local VPLEX virtual volumes. When creating one virtual array per site, the first virtual array, vArray1 at site 1, would contain the following:
  • Back-end array ports from both arrays connected to the local VPLEX.
  • VPLEX back-end ports connected to local storage.
  • VPLEX front-end ports connected or capable of being connected to hosts using VPLEX storage.

The second virtual array should contain similar components; however, all components must be located at the second site (site 2 in the example), as shown in Figure 8.

Figure 8: Physical assets in virtual arrays


Adding VPLEX high availability to virtual pools

Virtual pools for block storage offer two VPLEX high availability options: VPLEX Local and VPLEX Distributed.

When you specify local high availability for a virtual pool, the ViPR storage provisioning services create VPLEX local virtual volumes. If you specify VPLEX distributed high availability for a virtual pool, the ViPR storage provisioning services create VPLEX distributed virtual volumes. Because ViPR understands the networking between all the components, you could add both virtual arrays in Figure 8 to the same virtual pool if desired. When creating a virtual pool with VPLEX local high availability:

  1. Select the virtual array or arrays for which the virtual pool will be used to create local virtual volumes.
  2. Specify the desired characteristics for the back-end storage volumes that ViPR creates to back the VPLEX local virtual volumes. Note that if multiple virtual arrays are selected, each virtual array must contain storage that satisfies the selected storage characteristics.
  3. Select "VPLEX Local" for the Remote Protection/High Availability setting and save the virtual pool. You now have a virtual pool you can use to provision VPLEX local virtual volumes from the selected virtual arrays, where the back-end volumes will have the storage characteristics specified in the virtual pool.
Figure 9: Adding a virtual pool with VPLEX local availability

Figure 10: VPLEX local provisioning with a single virtual pool

To enable distributed high availability on block storage created from a virtual pool, select "VPLEX Distributed," select the virtual array and, optionally, select the virtual pool to use at the destination site. When the high availability virtual pool is not specified, the settings in the current virtual pool are used for the back-end storage on the high availability side. In that configuration, both virtual arrays from both sites must be selected for use by the virtual pool.

To create a virtual pool with VPLEX distributed high availability:

  1. Select the virtual array or arrays for which the virtual pool will be used to create distributed virtual volumes.
  2. Specify the storage characteristics desired for the back-end storage volumes. Note that if multiple virtual arrays are selected, each virtual array must contain storage that satisfies the selected storage characteristics.
  3. Select "VPLEX Distributed" for the Remote Protection/High Availability setting and save the virtual pool.

You can now use the virtual pool to provision VPLEX distributed virtual volumes from the selected virtual arrays, where the back-end volumes have the storage characteristics specified in the virtual pool.

If you use multiple virtual pools, create the remote virtual pool first, and then edit the settings of the local virtual pool to specify a "Highly Available Virtual Array" and the "Highly Available Virtual Pool."

With the VPLEX Metro example, when provisioning from the "vPool-VPLEXDistributed" pool, ViPR uses "vArray-Site1" as the primary virtual array and "vArray-Site2" and "vPool-VPLEXLocal" as the high availability virtual array and virtual pool, respectively, for storing the second copy of the distributed volume.
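
Continuing that example, a REST sketch of creating the distributed virtual pool follows. The nested high_availability fields that point at the high availability virtual array and virtual pool are assumptions based on the ViPR REST API, so check the ViPR 2.1 REST API reference for the exact shape in your release.

    # Hedged sketch: a VPLEX Distributed virtual pool whose HA side uses
    # vArray-Site2 and vPool-VPLEXLocal (field names assumed).
    import requests

    VIPR = "https://vipr.example.com:4443"
    token = requests.get(VIPR + "/login", auth=("root", "password"),
                         verify=False).headers["X-SDS-AUTH-TOKEN"]
    headers = {"X-SDS-AUTH-TOKEN": token, "Accept": "application/json"}

    payload = {
        "name": "vPool-VPLEXDistributed",
        "protocols": ["FC"],
        "varrays": ["<vArray-Site1-id>"],  # primary virtual array
        "high_availability": {
            "type": "vplex_distributed",
            "ha_varray_vpool": {           # assumed nesting for the HA side
                "varray": "<vArray-Site2-id>",
                "vpool": "<vPool-VPLEXLocal-id>",
            },
        },
    }
    print(requests.post(VIPR + "/block/vpools", json=payload,
                        headers=headers, verify=False).json())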

Figure 11: VPLEX Distributed provisioning


Creating VPLEX virtual volumes

Creating and exporting virtual volumes is the basis for all solutions enabled by ViPR and VPLEX.

ViPR's ability to manage VPLEX, the back-end block storage arrays, the SAN fabric, and the hosts/clusters allows you to create virtual volumes and export them using the Block Storage Services > Create Block Volume for a Host service.

When provisioning orders are executed against a virtual array containing a VPLEX and a virtual pool configured with VPLEX high availability, ViPR automatically performs the following configuration tasks. These tasks allow the host to use the new storage without further manual intervention:
  1. Creates a volume on the back-end storage array.
  2. If necessary, creates the masking constructs on the back-end arrays to export the back-end volumes to the VPLEX.
  3. If necessary, creates the zoning constructs that establish connectivity between the VPLEX and the back-end storage arrays, making the back-end volumes visible to the VPLEX.
  4. Discovers and claims the new volume on the VPLEX.
  5. Creates a new VPLEX extent, local device, and virtual volume using the full capacity of the back-end array volume.
  6. If necessary, registers the host's initiator on the VPLEX.
  7. Performs masking and mapping to a host by adding the virtual volume to a storage view on the VPLEX using automated port selection.
  8. If necessary, creates zones from the VPLEX cluster to the host/cluster.
  9. Rescans the host/cluster to pick up the new devices.

If you request distributed virtual volumes, steps 1-5 are also performed on the remote site's arrays and fabrics.
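
The catalog service drives the same operations that are available through the block API. The sketch below creates a volume from the distributed pool; the /block/volumes endpoint and payload fields are assumptions based on the ViPR REST API, and the project and resource IDs are placeholders.

    # Hedged sketch: provision a VPLEX-protected volume via REST.
    import requests

    VIPR = "https://vipr.example.com:4443"
    token = requests.get(VIPR + "/login", auth=("root", "password"),
                         verify=False).headers["X-SDS-AUTH-TOKEN"]
    headers = {"X-SDS-AUTH-TOKEN": token, "Accept": "application/json"}

    payload = {
        "name": "app01_data",
        "size": "100GB",                 # requested virtual volume size
        "count": 1,
        "project": "<project-id>",       # placeholder project
        "varray": "<vArray-Site1-id>",
        "vpool": "<vPool-VPLEXDistributed-id>",
    }
    # Returns a list of tasks; steps 1-5 above (and their remote-site
    # equivalents for distributed volumes) run asynchronously.
    print(requests.post(VIPR + "/block/volumes", json=payload,
                        headers=headers, verify=False).json())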

ViPR intelligently executes these steps according to performance and redundancy best practices, using its ability to monitor and understand the available paths in the environment. For instance, because it uses the EMC SMI-S Provider to discover the back-end array, ViPR can monitor the ports used in the port groups of the masking views. Automatic discoveries occur every hour, and if one of those ports is offline, ViPR does not attempt to allocate storage through that unusable port.

For subsequent provisioning tasks, ViPR can leverage its awareness of the properties and topology of the environment and reuse constructs on all managed systems. For example, ViPR reuses storage groups, initiator groups, port groups, masking views on the array and on VPLEX, and back-end and front-end zones.


ViPR naming conventions on VPLEX

When you use ViPR to create new virtual volumes on a VPLEX system, ViPR applies consistent naming patterns to the backing storage and the virtualized constructs within the VPLEX.

Additionally, the name you provide in ViPR when placing an order is used as an alias on the back-end VMAX or VNX array. This volume alias can be viewed using the appropriate CLI or element manager for the array.

When provisioning storage from the VPLEX to a host for the first time, ViPR performs the appropriate zoning, then discovers and registers the host initiators. The initiators first appear with the "UNREGISTERED_" prefix, which changes to "REGISTERED_" once ViPR completes the registration.


Adding VPLEX to an existing VMAX/VNX/ViPR environment

It is simple to add VPLEX systems to an environment managed by ViPR. Once the physical connectivity and initial configuration of the VPLEX (including provisioning of the metadata and logging volumes) are complete and the systems have been discovered, all that remains is to create new virtual arrays and virtual pools.

The new virtual arrays should contain the physical arrays to be used as backing storage for the virtual volumes, and new virtual pools should be created with the VPLEX Local or VPLEX Distributed setting for remote protection and availability.

When the first order for VPLEX-based storage is requested, ViPR zones the specified host using the minimum and maximum path settings from the specified virtual pool. For zoning from the back-end array to the VPLEX, ViPR follows VPLEX best practices and ensures that every director has at least two paths to all storage, and that no director is connected by more than four paths to any storage. With more than four paths, timeouts can take too long before I/O switches to alternate directors, which can cause connectivity loss.

For more information, see VPLEX-VMAX Multiple Masking Views Support.

Using ViPR, you can then easily convert existing non-exported volumes that reside on VMAX or VNX arrays to VPLEX virtualized volumes using the Change Virtual Pool service. This operation is detailed in the VPLEX data mobility: change virtual array and change virtual pool section of this document.


ViPR in pre-provisioned data centers

ViPR provides maximum benefit when it manages new storage from the beginning of its service time. With the new environment under ViPR control, ViPR is aware of, and can manage, all the elements of your storage area network. In environments configured by other tools, ViPR provides slightly less functionality.

In established data centers, where ViPR is introduced after a VPLEX and its back-end storage have already been configured and in use for some time, ViPR cannot manage any of the existing virtual volumes that have been exported to hosts. However, it is possible to ingest virtual volumes that have not been exported and do not have a local mirror attached on the VPLEX. This operation can be performed on a virtual volume backed by any array, including third-party arrays that are not supported in ViPR.

For more details on ingestion, refer to VIPR 2.1 - Ingest Unmanaged Block Volumes into VIPR.

When exporting virtual volumes to a host or a RecoverPoint system, you can reuse existing storage views. For a storage view to be reused, it must contain the same combination of VPLEX front-end ports and initiator ports that ViPR has selected for use in the export group.


Provisioning to VPLEX-enabled stretched clusters

ViPR takes the work out of making system-wide (long-distance) fault-tolerant virtual storage available to hosts. It does this through its support for VPLEX Metro HA configurations, including complex configurations in which host clusters are cross-connected to fabrics across sites.

For instance, configurations using VMware vSphere software, VPLEX, and cross-connected fabrics can tolerate physical host, VPLEX cluster, inter-cluster link, storage array, and VPLEX Witness failures, with only the physical host failure requiring any downtime, as the VMware HA software automatically restarts the affected VMs.

For more information on these configurations, see VIPR 2.1 - EMC VIPR Support For Stretched Host Clusters.


Expansion of VPLEX virtual volumes

You can expand existing virtual volumes through the Expand Block Volume service in the ViPR user interface.

This service increases the size of the back-end array volume (through creation or expansion of a meta volume), rediscovers the array from the VPLEX side, and then executes a virtual volume expansion on the VPLEX to increase the size of the virtual volume to the new size of the back-end array volume.
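
Scripted, the expansion is a single call against the volume. The /block/volumes/{id}/expand endpoint and the new_size field are assumptions based on the ViPR REST API and should be confirmed against the ViPR 2.1 REST API reference.

    # Hedged sketch: expand an existing VPLEX virtual volume.
    import requests

    VIPR = "https://vipr.example.com:4443"
    token = requests.get(VIPR + "/login", auth=("root", "password"),
                         verify=False).headers["X-SDS-AUTH-TOKEN"]
    headers = {"X-SDS-AUTH-TOKEN": token, "Accept": "application/json"}

    volume_id = "<virtual-volume-id>"  # placeholder ViPR volume ID
    resp = requests.post(VIPR + "/block/volumes/" + volume_id + "/expand",
                         json={"new_size": "200GB"},  # target size
                         headers=headers, verify=False)
    print(resp.json())  # task covering back-end expansion and VPLEX rediscovery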

For more information on the Expand Block Volume service, see the following articles:

Creating back-end clones of VPLEX virtual volumes

To provide additional protection, or to make the information stored on the back-end array volume available for other uses, ViPR can create a full clone of that volume. The clone can be used for backup and restore purposes or to enable business continuity operations such as end-of-month reporting.

This advanced configuration is available through the Block Protection Services > Create Full Copy service in the Service Catalog. When executing this service, you simply select the virtual volume to clone, a base name for the new clones, and the number of clones required; ViPR handles the rest. Using this service does not require the end user to understand the relationship between the virtual volume and the back-end array volume. ViPR traverses this relationship and creates the back-end clone without any additional user input. The resulting clone can then be exported to a host if necessary. Although not available now, a future version of ViPR will allow users to create snapshots of back-end volumes.
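
A REST sketch of the same order follows. The full-copies endpoint shown here is an assumption based on the ViPR block protection REST API; verify it against the ViPR 2.1 REST API reference.

    # Hedged sketch: create a full copy (clone) of the back-end volume
    # behind a VPLEX virtual volume.
    import requests

    VIPR = "https://vipr.example.com:4443"
    token = requests.get(VIPR + "/login", auth=("root", "password"),
                         verify=False).headers["X-SDS-AUTH-TOKEN"]
    headers = {"X-SDS-AUTH-TOKEN": token, "Accept": "application/json"}

    volume_id = "<virtual-volume-id>"                   # placeholder
    payload = {"name": "eom_report_clone", "count": 1}  # base name, clone count
    resp = requests.post(VIPR + "/block/volumes/" + volume_id +
                         "/protection/full-copies",
                         json=payload, headers=headers, verify=False)
    print(resp.json())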


VPLEX data mobility: change virtual array and change virtual pool

Two of the more advanced services available with VPLEX and ViPR are Change Virtual Array and Change Virtual Pool.

These services allow for the migration of virtual volumes within clusters and across clusters using the device migration functionality available on the VPLEX.

The Change Virtual Array service enables you to:
  • Move a VPLEX virtual volume from one VPLEX cluster to another.
  • Reassign the VPLEX virtual volume's ViPR virtual array to a different virtual array.
  • Change the back-end physical storage volume on which the VPLEX virtual volume is based to another physical storage volume assigned to the new virtual array.
  • Move the data on the original physical storage volume to the new storage volume.

The Change Virtual Pool service allows you to:

  • Change the virtual pool to create a new VPLEX virtual volume
  • Change the VPLEX virtual volume's remote protection
  • Change the back-end storage volume for a VPLEX virtual volume

See the following articles for more information:

VIPR 2.1 - Data Mobility: Change the VIPR Virtual Pool in a VPLEX Environment

VIPR 2.1 - Data Mobility: Change the VIPR Virtual Array in a VPLEX Environment
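
For script-driven migrations, a hedged sketch of driving the Change Virtual Pool operation through REST follows. The PUT /block/volumes/{id}/vpool call and its payload shape are assumptions based on the ViPR REST API, so verify the endpoint and fields against the ViPR 2.1 REST API reference before use.

    # Hedged sketch: move a volume to a different virtual pool, which can
    # convert it to (or re-protect it as) a VPLEX virtual volume.
    import requests

    VIPR = "https://vipr.example.com:4443"
    token = requests.get(VIPR + "/login", auth=("root", "password"),
                         verify=False).headers["X-SDS-AUTH-TOKEN"]
    headers = {"X-SDS-AUTH-TOKEN": token, "Accept": "application/json"}

    volume_id = "<volume-id>"              # placeholder
    payload = {"vpool": "<new-vpool-id>"}  # assumed payload shape
    resp = requests.put(VIPR + "/block/volumes/" + volume_id + "/vpool",
                        json=payload, headers=headers, verify=False)
    print(resp.json())  # task for the VPLEX device migration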
