New Features and Changes for ViPR Controller 3.0

New Feature Description

This article lists and describes the new features and changes introduced in ViPR Controller 3.0.

Unless otherwise noted, all of the ViPR Controller operations to support the enhancements provided in this version of ViPR Controller can be performed from the ViPR Controller UI, REST API, and CLI.

The complete set of ViPR Controller 3.0 documentation is provided in the ViPR Controller Product Documentation Index.

Back to Top

New storage platform version support

The following storage platform versions are supported with ViPR Controller 3.0.

For more details about features and platforms supported by ViPR Controller 3.0, see the ViPR Controller Support Matrix.

Back to Top

Restructure of the ViPR Controller UI menus and options

The ViPR Controller UI menu structure has been restructured as follows.

ViPR Controller UI menus

The menus displayed in the UI depend on the user role assigned to the logged-in user. See the User Role Requirements in the ViPR Controller UI online help for more details.

Changes to Download System Logs page

The default values for the Log Level and Orders options on the ViPR Controller UI System Logs > Download > Download System Logs page have changed.

Back to Top

Enhancements to Isilon Storage System support

ViPR Controller can now be used to perform the following operations on Isilon storage systems:

Back to Top

Replication Copy and Disaster Recovery of Isilon File Systems

ViPR Controller can be used to create replication copies of critical file system data, which are available for disaster recovery at any given point in time. File replication and disaster recovery are supported only on Isilon file systems enabled with a SyncIQ license.

When replication is enabled in the file virtual pool data protection attribute, ViPR Controller replicates the data on a source file system by creating a replication copy (target) of that file system. A replication copy can be created with local mirror protection or remote mirror protection. With local mirror protection, the local replication copy is created from the same virtual pool and the same storage system from which the source file system was provisioned. With remote mirror protection, the remote replication copy is created from a different target virtual pool and a different storage system than where the source file system was provisioned. Once replication copies are created, you can use ViPR Controller to fail over from the source to the target file system.

To use ViPR Controller to create replication copies of Isilon file systems and fail over to target devices, perform the following operations:

  1. Discover the Isilon storage systems in ViPR Controller.
  2. Create a file virtual pool, and set the Data Protection, Replication attributes.
  3. Use the file provisioning services to create the source and target file systems from the replication enabled virtual pool.
  4. Use the File Replication Copy service in the file protection catalog services to create replication copies for existing file systems.

Information and requirements to enable replication copies of Isilon file systems

Be aware of the following before enabling replication on Isilon file systems:

  • Isilon storage systems must be licensed and enabled with SyncIQ.
  • Only asynchronous replication is supported.
  • Local or remote replication is supported for file systems.
  • Replication is supported only between storage devices of the same type.
  • Full copies (clones) of Isilon file systems are not supported.
  • Synchronizing from a lower version of Isilon file system to a higher version is supported; however, synchronizing from a higher version of Isilon file system to a lower version is not supported.
  • ViPR Controller can only be used to create one target copy of a source file system. Creating multiple targets for one source file system, and cascading replication, is not supported.
  • You can only move file systems from an unprotected virtual pool to a protected virtual pool. All other options must be configured the same in both the virtual pool from which the file system is being moved and the virtual pool to which the file system is being moved.
  • When the target file system is created in ViPR Controller, it is write enabled until the file replication copy operation is run using it as a target file system. Once it is established as a target file system by the file replication copy order, any data that was previously written to the target file system will be lost.
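
The target lifecycle described in the last bullet can be illustrated with a small model. This is a hedged sketch in Python, not ViPR Controller code; the class and method names are hypothetical and simply mirror the establish, synchronize, and failover behavior described above.

```python
from dataclasses import dataclass, field

@dataclass
class FileSystem:
    # Illustrative model only; ViPR Controller performs these steps via SyncIQ.
    name: str
    data: list = field(default_factory=list)
    write_enabled: bool = True

class ReplicationPair:
    """Hypothetical model of a one-to-one source/target replication copy."""
    def __init__(self, source: FileSystem, target: FileSystem):
        self.source = source
        self.target = target
        self.established = False

    def establish(self):
        # Per the last bullet: once the target is claimed by the file
        # replication copy order, data previously written to it is lost
        # and the target is no longer write enabled.
        self.target.data.clear()
        self.target.write_enabled = False
        self.established = True

    def sync(self):
        if not self.established:
            raise RuntimeError("replication copy not yet established")
        self.target.data = list(self.source.data)

    def failover(self):
        # After failover, the target becomes writable so hosts can use it.
        self.target.write_enabled = True

src = FileSystem("fs-src", data=["a", "b"])
tgt = FileSystem("fs-tgt", data=["stale"])
pair = ReplicationPair(src, tgt)
pair.establish()          # wipes "stale"; target is now read-only
pair.sync()
print(tgt.data)           # ['a', 'b']
print(tgt.write_enabled)  # False
```

The point of the model is the ordering constraint: anything written to the target before `establish()` is discarded, which is why the target should not hold production data before the file replication copy order runs.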

ViPR Controller UI features

Continuous copy and failover are managed from the following areas of the ViPR Controller UI:

Back to Top

Schedule snapshots of Isilon file systems

You can use ViPR Controller to create a file snapshot schedule policy, which defines:

  • Regularly scheduled intervals when ViPR Controller will create snapshots of an Isilon file system.
  • The retention period that defines how long a snapshot is kept before it is deleted.

The steps to create and assign schedule policies are:

  1. Discover storage systems.
  2. Create a file virtual pool with the schedule snapshot option enabled.
  3. Create one or more snapshot schedule policies in ViPR Controller.
  4. Create file systems from file virtual pools with snapshot scheduling enabled.
  5. Assign one or more snapshot policies to the file system.
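
The interval and retention behavior of a schedule policy can be sketched as follows. This is illustrative Python, not ViPR Controller code; the function names are hypothetical and only model the two attributes a policy defines: the snapshot interval and the retention period.

```python
from datetime import datetime, timedelta

def snapshot_times(start: datetime, interval: timedelta, count: int):
    """Regularly scheduled snapshot times, as a schedule policy would produce."""
    return [start + i * interval for i in range(count)]

def expiry(taken: datetime, retention: timedelta) -> datetime:
    """When a snapshot becomes eligible for deletion."""
    return taken + retention

start = datetime(2016, 1, 1, 0, 0)
times = snapshot_times(start, timedelta(hours=6), 4)
print(times[-1])                            # 2016-01-01 18:00:00
print(expiry(times[0], timedelta(days=7)))  # 2016-01-08 00:00:00
```

Because a file system can be assigned more than one policy, snapshots with different intervals and retention periods can coexist on the same file system.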

Information and requirements to schedule snapshots of Isilon file systems

Be aware of the following before scheduling snapshots on Isilon file systems:

  • Only Tenant Administrators can configure schedule policies.
  • Schedule policies are only supported for local snapshots on Isilon storage systems with SnapshotIQ enabled.
  • Snapshot scheduling must be enabled on the virtual pool.
  • Schedule policies cannot be created on ingested file systems, or file systems created in ViPR Controller prior to this release.
  • The snapshot policy can be reused for different file systems.
  • One file system can be assigned one or more schedule policies.

ViPR Controller UI features

Snapshot scheduling is managed from the following areas of the ViPR Controller UI:

Back to Top

Set the smart quota on an Isilon file system

You can use ViPR Controller to set smart quota limits at the file system and quota directory levels of Isilon storage systems managed by ViPR Controller.

Smart quota limits are set from ViPR Controller and sent to the storage system. ViPR Controller displays a warning when limits are exceeded; the limits themselves are enforced by, and notifications are sent from, the storage system.

Information and requirements to set smart quota limits

  • Smart Quota limits can only be set on Isilon storage systems which are configured with a SmartQuota license.
  • ViPR Controller detects whether the storage system is configured with a SmartQuota license at the time of provisioning, and provisioning will fail if you have entered smart quota values on a service for:
    • Isilon storage systems which are not enabled with a SmartQuota license.
    • Any storage system other than an Isilon storage system enabled with a SmartQuota license.
  • When SmartQuota limits are set on the file system, the QuotaDirectories under the file system inherit the limits set on the file system, unless different SmartQuota limits are set on the QuotaDirectories when they are created.
  • Once you have set the SmartQuota limits from ViPR Controller, you cannot change the SmartQuota values on the file system, or QuotaDirectories from the ViPR Controller UI. You must use the ViPR Controller CLI or ViPR Controller REST API to change the SmartQuota limits.
  • ViPR Controller will only enforce smart quota limits set on the file system by ViPR Controller.
  • For troubleshooting, refer to the apisvc.log, and the controllersvc.log log files.
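
The inheritance rule for QuotaDirectories can be expressed as a simple lookup: a directory uses its own limit when one was supplied at creation time, and the file system's limit otherwise. This is a hedged sketch, not ViPR Controller code; the mapping of per-directory overrides is an assumption made for illustration.

```python
def effective_quota(fs_limit_gb: int, directory_overrides: dict, directory: str) -> int:
    """QuotaDirectories inherit the file-system limit unless a different
    SmartQuota limit was set when the directory was created."""
    return directory_overrides.get(directory, fs_limit_gb)

overrides = {"qd2": 50}                        # limit set at creation time
print(effective_quota(100, overrides, "qd1"))  # 100 (inherited from file system)
print(effective_quota(100, overrides, "qd2"))  # 50  (its own limit)
```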

ViPR Controller UI features

Smart quota values are set and managed from the following areas of the ViPR Controller UI:

Back to Top

Ingestion of Isilon file system Access Control Lists (ACLs)

When discovering and ingesting Isilon unmanaged file systems with the NFSv4 protocol, ViPR Controller also discovers and ingests any access controls set on the file system.

Additionally, ViPR Controller discovers and ingests access controls set on the sub-directories of Isilon file systems enabled with NFSv4. Once ingested, ViPR Controller allows you to edit the permissions of ingested access controls, add more access controls to the Access Control List (ACL), and delete access controls from the list.

ViPR Controller UI features

To view the ingested access control lists:

  1. Go to Resources > File Systems, and click the name of the file system.
  2. Expand the NFS Access Control area to see the access control lists for the file system, and file system sub-directories.
  3. Click Access Control to view the access controls set in the access control list for the file system, or sub-directory.

Back to Top

A single Virtual NAS (vNAS) can be associated with multiple projects

A single vNAS can be associated with multiple projects in ViPR Controller.

Steps to configure ViPR Controller to share a vNAS with multiple projects are:

  1. Discover the storage system.
  2. Set the Controller Configuration to allow a vNAS to be shared with multiple projects.
  3. Map a vNAS to multiple projects.

Information and requirements to associate a vNAS with multiple projects

ViPR Controller UI features

vNAS association with multiple projects is managed through the following areas of the ViPR Controller UI.

Back to Top

Application services

An application is a logical grouping of volumes determined by the customer. With application services, you can create, restore, resynchronize, detach, or delete full copies or snapshots of the volumes that are grouped by application.

A single ViPR block consistency group represents consistency groups on all related storage and protection systems including RecoverPoint, VPLEX, and block storage arrays (such as VMAX and VNX). In previous releases, a single consistency group was limited, at most, to one consistency group on any one storage system. This prevented the creation of full copies or snapshots of subsets of RecoverPoint or VPLEX consistency groups. Now you can use Application services to create and manage sub groups of volumes in order to overcome this limitation.

ViPR Controller UI features

Application services are managed through the following areas of the ViPR Controller UI.

Back to Top

Discovery and management of HP-UX hosts

HP-UX hosts and host initiators can now be discovered and managed by ViPR Controller.

For the HP-UX versions supported by ViPR Controller see the ViPR Controller Support Matrix.

ViPR Controller UI options for HP-UX

HP-UX hosts and host initiators are added, discovered, registered, and provisioned with storage from the following areas of the ViPR Controller UI.

Back to Top

Service Catalog improvements

In addition to the Service Catalog updates that support the ViPR Controller 3.0 features described in this document, the following improvements have been made to the service catalog.

Back to Top

New features in Block Storage services

You can unexport multiple volumes or remove a volume snapshot from an export. The Remove Block Volumes feature now supports an Inventory Only deletion type.

ViPR Controller UI features

Block Storage services are managed through the following areas of the ViPR Controller UI.

Back to Top

Enhancements to EMC Elastic Cloud Storage support

ViPR Controller now supports the following EMC Elastic Cloud Storage (ECS) operations.

Back to Top

Replication as a quality of service in Object Virtual Pools

When creating an object virtual pool in ViPR Controller, you can now set the Replication value required to include a replication group in an object virtual pool.

ViPR Controller UI features

The Replication entry field has been added to the ViPR Controller UI, Virtual > Object Virtual Pool > Data Protection area.

You enter the minimum number of data centers that the replication group must be a part of to be included in this virtual pool.

The minimum value is one.
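
The data-center rule can be sketched as a simple filter over candidate replication groups. This is illustrative Python, not ViPR Controller code; the function name and the dictionary shape are assumptions made for the example.

```python
def eligible_groups(replication_groups: dict, minimum_data_centers: int) -> list:
    """A replication group qualifies for the object virtual pool only if it
    spans at least the minimum number of data centers."""
    return [name for name, dcs in replication_groups.items()
            if len(dcs) >= minimum_data_centers]

groups = {
    "rg1": ["dc1"],
    "rg2": ["dc1", "dc2"],
    "rg3": ["dc1", "dc2", "dc3"],
}
print(eligible_groups(groups, 2))  # ['rg2', 'rg3']
```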

Back to Top

Assign User Access Control to a bucket

ViPR Controller can be used to assign access control to buckets. Access can be given to an individual user, a group of users, or a custom group of users.

Information and requirements to assign access control to buckets

The user, group, or custom group must have been configured for the ECS prior to assigning them to buckets from ViPR Controller.

When adding access control to buckets, you cannot use spaces in the user, group, or custom group names.
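
The no-spaces restriction can be enforced with a simple check before submitting an ACL assignment. This is a hedged sketch, not ViPR Controller code; the function name is hypothetical.

```python
def validate_acl_name(name: str) -> str:
    """Reject user, group, or custom group names containing spaces,
    per the bucket ACL restriction above."""
    if " " in name:
        raise ValueError(f"ACL principal name may not contain spaces: {name!r}")
    return name

print(validate_acl_name("dev_team"))  # dev_team
# validate_acl_name("dev team")       # raises ValueError
```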

ViPR Controller UI features

Bucket access control lists (ACLs) are managed from the following areas of the ViPR Controller UI.

Back to Top

Namespace discovered with EMC Elastic Cloud Storage (ECS) systems

ViPR Controller discovers all namespaces configured on an ECS system while discovering the ECS.

Discovering the ECS namespaces with the ECS makes for a more user-friendly experience when mapping ECS namespaces to ViPR Controller tenants: users can select from a list of namespaces rather than having to type them in manually, as in previous versions of ViPR Controller.

Back to Top

Support for ECS User Secret Key

You can use ViPR Controller to generate ECS user secret keys using the ViPR Controller REST API or CLI.

This option is not available in the ViPR Controller UI.

Back to Top

Inventory delete only from ViPR Controller for Buckets

ViPR Controller provides the Inventory Only option to delete a bucket from the ViPR Controller database.

If a bucket needs to be deleted from both the ViPR Controller database and the ECS, a full delete should be used. If the full delete fails because ViPR Controller did not detect the bucket on the ECS, you can use the Inventory Only delete to remove the bucket from the ViPR Controller database.

Back to Top

Mobility group migration

Mobility groups enable the migration of multiple VPLEX volumes with one order. Group volumes by host, cluster, or by an explicit list of volumes.

ViPR Controller UI features

Mobility group migration services are managed through the following areas of the ViPR Controller UI.

Back to Top

Ingestion of RecoverPoint consistency groups

Block Storage Services have been updated to support ingestion of RecoverPoint consistency groups.

Use cases for ingestion of RecoverPoint consistency groups

ViPR Controller UI features

Back to Top

Additional RecoverPoint support

The following RecoverPoint support has been added to ViPR Controller.

ViPR Controller UI features

Back to Top

Resynchronize Block Snapshot for VMAX2 and XIO

The Resynchronize Block Snapshot service has been added to the ViPR Controller service catalog. It allows you to resynchronize a snapshot of a block volume or consistency group for VMAX2 and XIO systems.

This operation refreshes one or more snapshots of the selected volume or consistency group to a new point-in-time copy.

The Resynchronize Block Snapshot service is only available for VMAX2 and XIO storage systems.

Back to Top

Support for SnapVX for VMAX3 storage systems

ViPR Controller provides support for SnapVX functionality and ingestion of SnapVX devices for VMAX3 storage systems.

ViPR Controller support for SnapVX also extends to the following storage systems and configurations when VMAX3 is used for the backend:

Refer to VMAX3 documentation for more information about TimeFinder SnapVX functionality.

SnapVX operations supported by ViPR Controller

ViPR Controller uses Snapshot Sessions to manage SnapVX sessions and devices. The following TimeFinder SnapVX operations are supported by ViPR Controller with VMAX3 storage systems. All operations support volumes in a consistency group.

ViPR Controller UI features

SnapVX sessions are managed from the following areas of the ViPR Controller UI.

Back to Top

Additional SRDF support

The following SRDF support has been added to ViPR Controller.

Back to Top

SRDF Metro support for VMAX3 storage systems

ViPR Controller now supports SRDF Metro for VMAX3 storage systems. SRDF Metro is established on the block virtual pool by setting the SRDF copy mode to Active. The Active mode is then used by ViPR Controller to select only the storage pools that are SRDF Metro enabled to add to the block virtual pool.

ViPR Controller supports export functionality for SRDF Metro, as well as SRDF Metro replication functionality. Replication is supported with the following ViPR Controller functionality:

  • Snapshots
  • Full Copies
  • Continuous Copies

Information and requirements to support SRDF Metro

For SMI-S version requirements refer to the ViPR Controller Support Matrix.

  • ViPR Controller only supports SRDF Metro between two VMAX3 storage systems.
  • VMAX3 storage systems must be enabled with an SRDF Metro license.
  • ViPR Controller does not support ingestion of SRDF Metro devices.
  • You do not need to discover the "witness" storage system in ViPR Controller with the SRDF Metro-enabled storage systems.
  • SRDF Metro operations are supported with volumes in a consistency group.
  • ViPR Controller does not support Swap and Failover operations. When a new pair needs to be created or added to the same ViPR Controller project (SRDF Group), ViPR Controller will suspend the existing pairs, and add the new pairs.

Back to Top

Support for VMAX meta and VMAX3 devices

ViPR Controller now supports the following SRDF configurations between VMAX and VMAX3 devices.

  • Creation and management of SRDF relationships where a VMAX meta volume is the source, and a VMAX3 thin device is the target.
  • Creation and management of SRDF relationships where a VMAX meta volume is the target, and a VMAX3 thin device is the source.
  • Using the Change Virtual Pool > Add SRDF protection option on existing SRDF devices to swap the target to a:
    • VMAX3 thin device, where a VMAX meta volume is the source.
    • VMAX meta volume, where a VMAX3 thin device is the source.
      Note: Swap may not work if the VMAX meta volume has two or more times as many cylinders as the VMAX3 device.
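
The cylinder-count caveat in the note can be expressed as a pre-check before attempting the swap. This is an interpretation of the note, not a documented formula; the "two or more times as many cylinders" threshold is an assumption, so treat this sketch as a heuristic only.

```python
def swap_may_fail(meta_cylinders: int, vmax3_cylinders: int) -> bool:
    """Heuristic from the note above: the swap is at risk when the VMAX
    meta volume has at least twice as many cylinders as the VMAX3 device.
    The exact threshold is an assumption, not a documented rule."""
    return meta_cylinders >= 2 * vmax3_cylinders

print(swap_may_fail(40000, 30000))  # False
print(swap_may_fail(60000, 30000))  # True
```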

Back to Top

Storage Orchestration for OpenStack (SOFO) and OVF for OpenStack deployment

ViPR Controller support for OpenStack has been enhanced with storage orchestration for OpenStack, and now includes the OpenStack Cinder OVF for ViPR Controller for deployment of southbound integration with OpenStack Liberty.

Storage Orchestration for OpenStack (SOFO)

Use ViPR Controller to manage block storage in an OpenStack ecosystem. An orchestration is a series of functions performed in a specific order that accomplishes a requested task.

OpenStack is open source software designed to build public and private clouds. In OpenStack, a project called "Cinder" is used to provision and manage block storage. It has a defined set of standard REST APIs for enterprise storage management.

ViPR Controller is a software-defined storage (SDS) controller designed to bring cloud-like benefits to enterprise storage management. Benefits include automatic provisioning, a self-service portal, policy-based storage profile definitions, and a single pane of glass for the management of multi-vendor storage systems. The SOFO implementation uses a Java implementation of the REST APIs to enable ViPR Controller to manage block storage in OpenStack deployments.
Note: See the CoprHD User Guide: Storage Orchestration for OpenStack at https://coprhd.atlassian.net/wiki/display/COP/User+Guide+%3A+Storage+Orchestration+For+OpenStack

OpenStack Cinder OVF for ViPR Controller

The OpenStack Cinder OVF for ViPR Controller is available for deployment of a ViPR Controller southbound integration with OpenStack. You can still deploy OpenStack through the OpenStack downloads, but you now also have the option of using the OpenStack Cinder OVF for ViPR Controller to deploy ViPR Controller southbound integration with OpenStack.

For details refer to the EMC ViPR Controller 3.0 Release Notes.

Note: If you are already running ViPR Controller with OpenStack, you do not need to re-install OpenStack.

Back to Top

Use ViPR Controller to create gatekeeper volumes on VMAX storage systems

You can use ViPR Controller to create gatekeeper volumes (volumes smaller than 1 GB) on VMAX and VMAX3 storage systems.

You cannot use ViPR Controller to expand volumes that are smaller than 1 GB.
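
The 1 GB expansion rule can be captured in a small validation helper. This is a hedged sketch, not ViPR Controller code; interpreting "1 GB" as 2^30 bytes is an assumption made for the example.

```python
GIB = 1024 ** 3  # assumption: "1 GB" here means 2^30 bytes

def can_expand(volume_size_bytes: int) -> bool:
    """Volumes smaller than 1 GB (typically gatekeepers) cannot be
    expanded through ViPR Controller."""
    return volume_size_bytes >= GIB

print(can_expand(6 * 1024 ** 2))  # False (a small gatekeeper-sized volume)
print(can_expand(2 * GIB))        # True
```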

Back to Top

Additional XtremIO support

The following XtremIO support has been added to ViPR Controller.

Back to Top

Support for restore operation directly from the UI

You can now perform a restore operation directly from the ViPR Controller UI.

The Data Backup and Restore page can be accessed from the System > Data Backup and Restore menu.

Back to Top

Option to change backup interval and number of backup copies from UI

You can now use the ViPR Controller UI to change the backup interval and the maximum number of backup copies to save.

Select System > General Configuration > Backup.

  • Backup Time: The time (hh:mm) that the scheduled backup starts, based on the local time zone.
  • Number of Backups per Day: Choose 1 or 2 backups per day at the scheduled Backup Time.
  • Backup Max Copies (scheduled): The maximum number of scheduled backup copies (0-5) to save on the ViPR nodes. Once this number is reached, older scheduled backup copies are deleted from the nodes so that newer ones can be saved.
  • Backup Max Copies (manual): The maximum number of manually-created backup copies (0-5) to save on the ViPR nodes. Once this number is reached, no additional copies can be created until you manually delete the older manually-created copies from the nodes.

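
The different retention semantics for scheduled and manual backups described above can be modeled as follows. This is illustrative Python, not ViPR Controller code; the function names are hypothetical.

```python
from collections import deque

def prune_scheduled(copies: list, max_copies: int) -> list:
    """Scheduled backups: once the cap is reached, the oldest copies are
    deleted so that newer ones can be saved. `copies` is oldest-first."""
    kept = deque(copies)
    while len(kept) > max_copies:
        kept.popleft()  # delete the oldest scheduled copy
    return list(kept)

def can_create_manual(copies: list, max_copies: int) -> bool:
    """Manual backups: nothing is deleted automatically; creation is
    simply blocked once the cap is reached."""
    return len(copies) < max_copies

print(prune_scheduled(["b1", "b2", "b3", "b4"], 3))  # ['b2', 'b3', 'b4']
print(can_create_manual(["m1", "m2"], 2))            # False
```

The asymmetry is the key point: scheduled copies rotate automatically, while manual copies must be deleted by hand before new ones can be created.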
Back to Top

Support for System Disaster Recovery

ViPR Controller 3.0 now provides support for System Disaster Recovery.

Note: Contact your EMC Customer Support Representative for assistance in planning the end-to-end System Disaster Recovery configuration for use cases based on your data center's physical assets and environment.

The IT infrastructure managed by ViPR Controller can span multiple data centers, and multiple ViPR instances can be deployed in different data centers to tolerate center-wide failures. ViPR System Disaster Recovery uses an Active/Standby model: only one active ViPR instance serves provisioning operations, while the other ViPR instances are configured as standby sites. Failover and switchover operations are supported to cope with disasters.
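
The Active/Standby model and the two role-change operations can be sketched as a small state model. This is a hedged illustration, not ViPR Controller code; the class and method names are hypothetical, and the distinction drawn here (switchover as a planned role exchange, failover as promotion after losing the active site) is the conventional disaster recovery interpretation.

```python
class DisasterRecoveryConfig:
    """Hypothetical model: one active site serves provisioning; the rest
    are standbys. Switchover and failover promote a standby to active."""
    def __init__(self, active: str, standbys: list):
        self.active = active
        self.standbys = list(standbys)

    def switchover(self, site: str):
        # Planned role exchange: the old active site becomes a standby.
        if site not in self.standbys:
            raise ValueError(f"{site} is not a standby site")
        self.standbys.remove(site)
        self.standbys.append(self.active)
        self.active = site

    def failover(self, site: str):
        # Unplanned: the old active site is lost; a standby takes over.
        if site not in self.standbys:
            raise ValueError(f"{site} is not a standby site")
        self.standbys.remove(site)
        self.active = site

dr = DisasterRecoveryConfig("boston", ["chicago", "dallas"])
dr.switchover("chicago")
print(dr.active, sorted(dr.standbys))  # chicago ['boston', 'dallas']
```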

System Disaster Recovery is managed from the ViPR Controller UI. System Disaster Recovery status is displayed on the Dashboards > Overview page, and Disaster Recovery status and operations are managed from the System > System Disaster Recovery page.

For additional detailed information on System Disaster Recovery, refer to the EMC ViPR Controller Disaster Recovery, Backup and Restore Guide, which is provided in the ViPR Controller Product Documentation Index.

Dashboards > Overview: System Disaster Recovery

The Dashboards > Overview page provides summary Status and Network Health information for the Active and Standby Disaster Recovery sites.

System > System Disaster Recovery

Standby sites can be added, edited, or deleted, and disaster recovery operations managed from the System > System Disaster Recovery page.

The following table provides details on the information displayed on the System Disaster Recovery page and the actions that can be performed during the Disaster Recovery process.

Click anywhere on the site row to display the site status, including Site UUID, Controller Status, Site Creation Time, Latency, and Synchronization Status.

Back to Top

Support for node-to-node IPsec encryption

Internet Protocol Security (IPsec) is a protocol suite that secures Internet Protocol (IP) communications by authenticating and encrypting each IP packet of a communication session. IPsec enables secure communication between nodes in a ViPR cluster and between ViPR clusters, and uses an IPsec Key to secure the communication.

IPsec is managed from the Security > IPsec page.

The IPsec page provides current information on IPsec Status and IPsec Configuration. The page is refreshed once per minute.

Back to Top

New licensing model

Starting with ViPR Controller 3.0, a new licensing model has been implemented. The new model supports a new-format managed capacity license and a raw, usable, frame-based capacity license. With the raw capacity single license file, each license file can include multiple increments, both array-type and tiered.

The new licensing model is not compatible with the old-format managed capacity license used with older versions of ViPR Controller.

ViPR Controller 3.0 new installation

  • For a fresh ViPR 3.0 installation with a new license, you should encounter no problem and may proceed normally.
  • If you try to do a fresh ViPR 3.0 installation with an old license, you will receive an error message "Error 1013: License is not valid" and will not be able to proceed with the installation. You must open a Service Request (SR) ticket to obtain a new license file.

ViPR Controller 3.0 upgrade installation

  • For an upgrade ViPR 3.0 installation with an old license, ViPR 3.0 will continue to use the old-format license, but the license will say "Legacy" when viewing the Version and License section of the Dashboards in the ViPR GUI. There is no automatic conversion to the new-format license. To convert to the new-format license, you must open a Service Request (SR) ticket to obtain a new license file. After you upload the new-format license, the GUI display will show "Licensed".

Pre-3.0 versions of ViPR Controller

  • Pre-3.0 versions of ViPR Controller will accept the new-format license file. However, they will only recognize the last increment in the new file.

Back to Top

Improved query filter for Audit Log

The Audit Log query filter has been improved.

The System > Audit Log page displays the recorded activities performed by administrative users for a defined period of time.

The Audit Log table displays the Time at which the activity occurred, the Service Type (for example, vdc or tenant), the User who performed the activity, the Result of the operation, and a Description of the operation.

Filtering the Audit Log Display

  1. Select System > Audit Log. The Audit Log table defaults to displaying activities from the current hour on the current day and with a Result Status of ALL STATUS (both SUCCESS and FAILURE).
  2. To filter the Audit Log table, click Filter.
  3. In the Filter System Logs dialog box, you can specify the following filters:
    • Result Status: Specify ALL STATUS (the default), SUCCESS, or FAILURE.
    • Start Time: To display the audit log for a longer time span, use the calendar control to select the Date from which you want to see the logs, and use the Hour control to select the hour of day from which you want to display the audit log.
    • Service Type: Specify a Service Type (for example, vdc or tenant).
    • User: Specify the user who performed the activity.
    • Keyword: Specify a keyword term to filter the Audit Log even further.
  4. Select Update to display the filtered Audit Log.
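
The filter dialog's semantics can be modeled as a simple predicate chain over log entries, where an unspecified filter matches everything. This is illustrative Python, not ViPR Controller code; the entry fields and function name are assumptions made for the example.

```python
from datetime import datetime

def filter_audit_log(entries, status="ALL STATUS", start=None,
                     service_type=None, user=None, keyword=None):
    """Apply the same filters the dialog offers; None means 'any'."""
    out = []
    for e in entries:
        if status != "ALL STATUS" and e["result"] != status:
            continue
        if start and e["time"] < start:
            continue
        if service_type and e["service"] != service_type:
            continue
        if user and e["user"] != user:
            continue
        if keyword and keyword not in e["description"]:
            continue
        out.append(e)
    return out

log = [
    {"time": datetime(2016, 3, 1, 9), "service": "tenant", "user": "root",
     "result": "SUCCESS", "description": "create tenant"},
    {"time": datetime(2016, 3, 1, 10), "service": "vdc", "user": "root",
     "result": "FAILURE", "description": "update vdc"},
]
print(len(filter_audit_log(log, status="FAILURE")))       # 1
print(len(filter_audit_log(log, service_type="tenant")))  # 1
```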

Downloading Audit Logs

  1. Select System > Audit Log. The Audit Log table defaults to displaying activities from the current hour on the current day and with a Result Status of ALL STATUS (both SUCCESS and FAILURE).
  2. To download audit logs, click Download.
  3. In the Download System Logs dialog box, you can specify the following filters:
    • Result Status: Specify ALL STATUS (the default), SUCCESS, or FAILURE.
    • Start Time: Use the calendar control to select the Date from which you want to see the logs, and use the Hour control to select the hour of day from which you want to display the audit log.
    • End Time: Use the calendar control to select the Date to which you want to see the logs, and use the Hour control to select the hour of day to which you want to display the audit log. Check Current Time to use the current time of day.
    • Service Type: Specify a Service Type (for example, vdc or tenant).
    • User: Specify the user who performed the activity.
    • Keyword: Specify a keyword term to filter the downloaded system logs even further.
  4. Select Download to download the system logs to your system as a zip file.

Back to Top