ViPR 2.1 - EMC ViPR with VPLEX: Benefits and Examples
In ever-scaling environments, with customers and end users asking for more capacity, protection, and flexibility for their storage, it is difficult for storage administrators to efficiently manage infrastructure and deliver storage quickly. VPLEX transforms the delivery of IT into a flexible, efficient, reliable, and resilient service. EMC ViPR is a lightweight, software-only product that transforms existing storage into a simple, extensible, and open platform that can deliver fully automated storage services, helping realize the full potential of the software-defined data center. Together, VPLEX and ViPR give storage administrators the power to reduce the time needed to deliver complex environments that scale with their end users.
- Data Mobility: Move data non-disruptively between EMC and third-party storage arrays without host downtime. VPLEX moves data transparently and the virtual volumes retain the same identities and the same access points to the host. The host does not need to be reconfigured. VPLEX moves applications and data between different storage installations:
- Within the same data center or across a campus (VPLEX Local)
- Within a geographical region (VPLEX Metro)
- Across even greater distances (VPLEX Geo)
- Availability: VPLEX creates high-availability storage infrastructure across these same varied geographies with unmatched resiliency, protecting data in the event of disasters or failure of components in your data centers. With VPLEX, you can withstand failures of storage arrays, cluster components, an entire site, or loss of communication between sites (when two clusters are deployed).
- Collaboration: VPLEX provides efficient, real-time data collaboration over distance for Big Data applications. AccessAnywhere provides cache-consistent active-active access to data across VPLEX clusters. Multiple users at different sites can work on the same data while maintaining consistency of the dataset.
- Storage automation capabilities for heterogeneous block and file storage (control path).
- Object data management and analytic capabilities through Data Services that create a unified pool (bucket) of data across file shares (data path).
- Integration with VMware and Microsoft compute stacks to enable higher levels of compute and network orchestration.
- A comprehensive RESTful interface for integrating ViPR with management and reporting applications.
- A web-based User Interface (UI) that provides the ability to configure and monitor ViPR, as well as perform self-service storage provisioning by enterprise users.
- Comprehensive and customizable platform reporting capabilities, including capacity metering, chargeback, and performance monitoring through the ViPR SolutionPack.
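Because ViPR exposes its functionality through a RESTful interface, management and reporting tools can drive it programmatically. The sketch below shows how a client might assemble headers for an authenticated ViPR REST call; the `X-SDS-AUTH-TOKEN` header name is an assumption based on common ViPR Controller REST conventions, and is not confirmed by this article.

```python
# Hypothetical sketch: headers for an authenticated ViPR REST request.
# The token header name is an assumption; check the ViPR REST API
# reference for the authoritative authentication scheme.

def build_vipr_headers(auth_token: str) -> dict:
    """Return HTTP headers for a ViPR REST call (illustrative only)."""
    return {
        "X-SDS-AUTH-TOKEN": auth_token,   # assumed auth-token header name
        "Accept": "application/json",
        "Content-Type": "application/json",
    }

headers = build_vipr_headers("example-token")
```

These headers would accompany every request an integration makes against the ViPR API endpoints.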
ViPR supports VPLEX Local and VPLEX Metro configurations; VPLEX Geo is not supported. The diagram below shows an example of a VPLEX Metro configuration across two data centers. Host 1 and host 2 can access volumes 1, 2, 3, and 4 through the locally connected VPLEX. If the hosts are clustered (not shown in Figure 1), they can leverage a distributed virtual volume that spans sites 1 and 2. The environment in Figure 1 can withstand multiple component failures and continue to operate without a disruption in service.
For discovery and management activities, ViPR uses the VPLEX Element Manager API. ViPR treats VPLEX systems as a storage system physical asset, and automatically rediscovers them every 60 minutes by default.
To add a VPLEX system as a ViPR physical asset, provide:
- The IP address of the VPLEX system's management server
- The port number for the VPLEX Element Manager API (443 by default)
- Credentials to access the system
- The FQDN of the VPLEX system's management server
- The port number: 5989 for an SSL connection, or 5988 for standard sockets
- Credentials to access the system
After saving this information, ViPR automatically performs an initial discovery of the VPLEX cluster(s).
To discover a VPLEX Metro configuration from ViPR, you need only discover one of the two VPLEX clusters.
It is possible to discover both management servers of the VPLEX system. Discovering both clusters enables ViPR to continue to discover and manage the VPLEX in the event that one of the management servers is unavailable.
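The discovery inputs above can be pictured as a request payload. The field names below are illustrative assumptions, not the actual ViPR REST schema; consult the ViPR REST API reference for the real resource definitions.

```python
# Hypothetical sketch of the information ViPR needs to discover a
# VPLEX system. Field names are made up for illustration.

def vplex_discovery_payload(mgmt_server: str, user: str,
                            password: str, port: int = 443) -> dict:
    """Assemble a (hypothetical) VPLEX discovery request body."""
    return {
        "ip_address": mgmt_server,  # VPLEX management server IP or FQDN
        "port_number": port,        # Element Manager API port, 443 by default
        "user_name": user,          # credentials to access the system
        "password": password,
    }

payload = vplex_discovery_payload("vplex-mgmt.example.com", "admin", "secret")
```

For a Metro configuration, the same call could be repeated for the second management server so that discovery survives the loss of either one.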
For more information on ViPR array discoveries, refer to the EMC ViPR 2.1 Product Documentation Index.
There must be one virtual array for each VPLEX cluster. Configuring the virtual arrays this way tells ViPR where to get the back-end storage and which VPLEX cluster to use when block storage with VPLEX is requested. Plan and perform this step carefully: once resources have been provisioned, the configuration cannot be changed without first disruptively removing the provisioned volumes.
If a fabric across two sites contains both VPLEX clusters in a VPLEX Metro configuration, you must use the port selection method to add the VPLEX and backing arrays to the virtual array.
You should add both the back-end and front-end ports from the VPLEX as well as ports from the appropriate back-end array to the virtual array. The same virtual array must not contain ports from both VPLEX clusters. This limitation differentiates each cluster at each site and ensures the correct back-end storage array is used in conjunction with the VPLEX cluster in the same geographical location.
You can confirm the two clusters in a VPLEX Metro configuration are in two different virtual arrays by looking at the physical array in ViPR and reviewing the ports displayed as in Figure 6. For VPLEX systems with one engine, the ports that appear in ViPR in the group column that start with "director-1" are from the first VPLEX cluster and ones that start with "director-2" are from the second cluster. The following figure shows the Add Storage Ports dialog box.
- Back-end array ports from both arrays connected to the local VPLEX.
- VPLEX back-end ports connected to local storage.
- VPLEX front-end ports connected or capable of being connected to hosts using VPLEX storage.
The second virtual array should contain similar components; however, all components must be located at the second site (site 2 in the example), as shown in Figure 8.
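The port-selection rule above can be sketched as a simple partition: for a single-engine VPLEX, ports whose group name starts with "director-1" belong to the first cluster's virtual array, and "director-2" to the second. The port names and group strings below are made up for illustration.

```python
# A minimal sketch of assigning VPLEX ports to per-site virtual arrays
# by their director group prefix, as described for single-engine systems.

def split_ports_by_cluster(ports):
    """ports: iterable of (port_name, group) tuples.

    Returns (site1_ports, site2_ports); a virtual array must never
    mix ports from both clusters.
    """
    varray_site1, varray_site2 = [], []
    for name, group in ports:
        if group.startswith("director-1"):
            varray_site1.append(name)
        elif group.startswith("director-2"):
            varray_site2.append(name)
    return varray_site1, varray_site2

site1, site2 = split_ports_by_cluster([
    ("A0-FC00", "director-1-1-A"),   # cluster 1 port (illustrative)
    ("B0-FC01", "director-2-1-B"),   # cluster 2 port (illustrative)
])
```

Keeping the two port sets in separate virtual arrays is what lets ViPR pair each cluster with back-end storage at the same site.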
When you specify local high availability for a virtual pool, the ViPR storage provisioning services create VPLEX local virtual volumes. If you specify VPLEX distributed high availability for a virtual pool, the ViPR storage provisioning services create VPLEX distributed virtual volumes. Because ViPR understands the networking between all the components, you could add both virtual arrays in Figure 8 to the same virtual pool if desired. When creating a virtual pool with VPLEX local high availability:
- Select the virtual array or arrays for which the virtual pool will be used to create local virtual volumes.
- Specify the desired characteristics for the back-end storage volumes that ViPR creates to back the VPLEX local virtual volumes. Note that if multiple virtual arrays are selected, each virtual array must contain storage that satisfies the selected storage characteristics.
- Select "VPLEX Local" for the Remote Protection/High Availability setting and save the virtual pool. You now have a virtual pool you can use to provision VPLEX local virtual volumes from the selected virtual arrays, where the back-end volumes will have the storage characteristics specified in the virtual pool.
To create a virtual pool with VPLEX distributed high availability:
- Select the virtual array or arrays for which the virtual pool will be used to create distributed virtual volumes.
- Specify the storage characteristics desired for the back-end storage volumes. Note that if multiple virtual arrays are selected, each virtual array must contain storage that satisfies the selected storage characteristics.
- Select "VPLEX Distributed" for the Remote Protection/High Availability setting and save the virtual pool.
You can now use the virtual pool to provision VPLEX distributed virtual volumes from the selected virtual arrays, where the back-end volumes have the storage characteristics specified in the virtual pool.
If you use multiple virtual pools, create the remote virtual pool first, and then edit the settings of the local virtual pool to specify a "Highly Available Virtual Array" and the "Highly Available Virtual Pool."
With the VPLEX Metro example, when provisioning from the "vPool-VPLEXDistributed" pool, ViPR uses "vArray-Site1" as the primary virtual array and "vArray-Site2" and "vPool-VPLEXLocal" as the high availability virtual array and virtual pool, respectively, for storing the second copy of the distributed volume.
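The relationships in the Metro example can be summarized as plain data. The names ("vArray-Site1", "vPool-VPLEXLocal", and so on) come from the article's example; the dictionary structure itself is an illustrative assumption, not the ViPR virtual pool schema.

```python
# Illustrative summary of the two virtual pools in the Metro example.
# Create the remote (local HA) pool first, then reference it from the
# distributed pool, as the text above describes.

vpool_vplex_local = {
    "name": "vPool-VPLEXLocal",
    "virtual_arrays": ["vArray-Site2"],
    "high_availability": "VPLEX Local",
}

vpool_vplex_distributed = {
    "name": "vPool-VPLEXDistributed",
    "virtual_arrays": ["vArray-Site1"],       # primary virtual array
    "high_availability": "VPLEX Distributed",
    "ha_virtual_array": "vArray-Site2",       # holds the second copy
    "ha_virtual_pool": "vPool-VPLEXLocal",    # must already exist
}
```

Provisioning from "vPool-VPLEXDistributed" then places one leg of each distributed volume at each site.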
ViPR's ability to manage VPLEX, the back-end block storage arrays, the SAN fabric, and the hosts/clusters allows you to create virtual volumes and export them in a single service order. When fulfilling the order, ViPR:
1. Creates a volume on the back-end storage array.
2. If necessary, creates the masking constructs on the back-end arrays needed to export the back-end volumes to the VPLEX.
3. If necessary, creates all required zoning constructs to establish connectivity between the VPLEX and the back-end storage arrays, making the back-end volumes visible to the VPLEX.
4. Discovers and claims the new volume on the VPLEX.
5. Creates a new VPLEX extent, local device, and virtual volume using the full capacity of the back-end array volume.
6. If necessary, registers the host's initiators on the VPLEX.
7. Performs masking and mapping to a host by adding the virtual volume to a storage view on the VPLEX using automated port selection.
8. If necessary, creates zones from the VPLEX cluster to the host/cluster.
9. Rescans the host/cluster to pick up the new host devices.
If you request distributed virtual volumes, steps 1-5 are also performed on the remote site's arrays and fabrics.
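The workflow above can be sketched as an ordered list of operations; for a distributed virtual volume, the back-end steps (1-5) are repeated for the remote site. The step descriptions are paraphrased from the list above, and appending the remote steps at the end is a simplification of the actual orchestration order.

```python
# A sketch of the provisioning workflow order described above.

LOCAL_STEPS = [
    "create back-end volume",
    "mask back-end volume to VPLEX",
    "zone back-end array to VPLEX",
    "claim volume on VPLEX",
    "create extent, local device, and virtual volume",
    "register host initiators",
    "add virtual volume to storage view",
    "zone VPLEX to host/cluster",
    "rescan host/cluster",
]

def provisioning_steps(distributed: bool = False) -> list:
    """Return the ordered workflow; distributed volumes repeat steps 1-5
    against the remote site's arrays and fabrics (simplified ordering)."""
    steps = list(LOCAL_STEPS)
    if distributed:
        steps += [f"remote site: {s}" for s in LOCAL_STEPS[:5]]
    return steps
```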
ViPR intelligently executes these steps according to performance and redundancy best practices and its ability to monitor and understand the available paths in the environment. For instance, through its use of the EMC SMI-S Provider to discover the back-end array, ViPR can monitor the ports used in the port groups of the masking views. Automatic discoveries occur every hour, and if one of those ports is offline, ViPR does not attempt to allocate storage through the unusable port.
For subsequent provisioning tasks, ViPR can leverage its awareness of the properties and topology of the environment and reuse constructs on all managed systems. For example, ViPR reuses storage groups, initiator groups, port groups, masking views on the array and on VPLEX, and back-end and front-end zones.
Additionally, the name you provide in ViPR when placing an order is used as an alias on the back-end VMAX or VNX array. This volume alias can be viewed using the appropriate CLI or element manager for the array.
When provisioning storage from the VPLEX to a host for the first time, ViPR performs the appropriate zoning, then discovers and registers the appropriate initiators. The initiators first appear with the "UNREGISTERED_" prefix, which changes to "REGISTERED_" when ViPR completes the registration.
The new virtual arrays should contain the physical arrays that should be used as the backing array for the virtual volumes and new virtual pools should be created with the VPLEX Local or VPLEX Distributed settings for remote protection and availability.
When the first order for VPLEX-based storage is requested, ViPR zones the specified host using the minimum and maximum path settings from the specified virtual pool. For the zoning from the back-end array to the VPLEX, ViPR follows VPLEX best practices, ensuring that every director has at least two paths to all storage and that no director is connected by more than four paths to any storage. More than four paths causes timeouts to take too long before switching to alternate directors, which can cause connectivity loss.
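The back-end pathing rule above (at least two, at most four paths per director) can be expressed as a small validation check. The data structure is illustrative, not anything ViPR exposes.

```python
# Sketch of the VPLEX back-end pathing best practice: every director
# needs >= 2 paths to each back-end array, and no more than 4.

MIN_PATHS, MAX_PATHS = 2, 4

def validate_director_paths(paths_per_director: dict) -> list:
    """Return a list of violations; an empty list means compliant.

    paths_per_director maps a director name to its path count for a
    given back-end array (illustrative structure).
    """
    problems = []
    for director, count in paths_per_director.items():
        if count < MIN_PATHS:
            problems.append(f"{director}: only {count} path(s), need >= {MIN_PATHS}")
        elif count > MAX_PATHS:
            problems.append(f"{director}: {count} paths, max is {MAX_PATHS}")
    return problems
```

A layout with two to four paths per director passes; anything outside that band is flagged.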
For more information, see VPLEX-VMAX Multiple Masking Views Support.
Using ViPR, you can then easily convert existing non-exported volumes that reside on VMAX or VNX arrays to VPLEX virtualized volumes using the Change Virtual Pool service. This operation is detailed in the VPLEX Data Mobility by Changing Virtual Arrays/Pools section of this document.
In established data centers, where ViPR is introduced after a VPLEX and its back-end storage have already been configured and used for some time, ViPR is unable to manage any of the existing virtual volumes that have been exported to hosts. However, it is possible to ingest virtual volumes that have not been exported and do not have a local mirror attached on the VPLEX. This operation can be performed on a virtual volume backed by any array, including third-party arrays that are not supported in ViPR.
For more details on ingestion refer to VIPR 2.1 - Ingest Unmanaged Block Volumes into VIPR.
When you export virtual volumes to a host or a RecoverPoint system, ViPR can reuse existing storage views. For a storage view to be reused, it must contain the same combination of VPLEX front-end ports and initiator ports that ViPR has selected for use in the export group.
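The reuse condition above amounts to a set comparison, sketched below. Treating "the same combination" as exact set equality of ports and initiators is my reading of the text, and the structures are illustrative.

```python
# Sketch of the storage-view reuse check: an existing view is reusable
# only if its front-end ports and initiators match exactly the ones
# ViPR selected for the export group (interpretation of the text above).

def can_reuse_storage_view(view_ports, view_initiators,
                           selected_ports, selected_initiators) -> bool:
    """Compare an existing storage view against ViPR's selection."""
    return (set(view_ports) == set(selected_ports)
            and set(view_initiators) == set(selected_initiators))
```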
For instance, configurations using VMware vSphere software, VPLEX, and cross-connected fabrics can tolerate physical host, VPLEX cluster, inter-cluster link, storage array, and VPLEX Witness failures, with only the physical host failure requiring any downtime, as the VMware HA software automatically restarts the affected VMs.
For more information on these configurations, see VIPR 2.1 - EMC VIPR Support For Stretched Host Clusters.
This service increases the size of the back-end array volume through creation or expansion of a meta-volume, rediscovers the array from the VPLEX side, and then executes an expand virtual-volume command on the VPLEX to increase the size of the virtual volume to the new size of the back-end array volume.
This advanced functionality is available through the "Block Protection Services > Create Full Copy" service in the Service Catalog. When executing this service, you simply select the virtual volume to clone, a base name for the new clones, and the number of clones required; ViPR handles the rest. Using this service does not require the end user to understand the relationship between the virtual volume and the back-end array volume: ViPR traverses this relationship and creates the back-end clone without any additional user input. The resulting clone can then be exported to a host if necessary. Although not available now, a future version of ViPR will allow users to create snapshots of back-end volumes.
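The Create Full Copy inputs above (a base name and a clone count) imply a family of derived clone names. The numeric-suffix format below is an illustrative assumption; ViPR may name clones differently.

```python
# Sketch of deriving clone names from the Create Full Copy inputs:
# a base name plus the requested number of clones. The "-N" suffix
# format is assumed for illustration.

def clone_names(base_name: str, count: int) -> list:
    """Return hypothetical names for `count` clones of a virtual volume."""
    return [f"{base_name}-{i}" for i in range(1, count + 1)]
```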
These services allow for the migration of virtual volumes within and across clusters using the device migration functionality available on the VPLEX. The Change Virtual Array service allows you to:
- Move a VPLEX virtual volume from one VPLEX cluster to another.
- Reassign the VPLEX virtual volume's ViPR virtual array to a different virtual array.
- Change the back-end physical storage volume on which the VPLEX virtual volume is based to another physical storage volume assigned to the new virtual array.
- Move the data on the original physical storage volume to the new storage volume.
The Change Virtual Pool service allows you to:
- Change the virtual pool to create a new VPLEX virtual volume
- Change the VPLEX virtual volume's remote protection
- Change the back-end storage volume for a VPLEX virtual volume
See the following articles for more information: