ViPR 2.2 - New Features in EMC ViPR

ViPR Controller 2.2.0 new features and changes

New features and changes introduced in ViPR 2.2.0.x are described in the following sections.

Changes to ViPR Data Services Support

The next release of ViPR Data Services, Elastic Cloud Storage (ECS) software, and ECS Appliance will not require ViPR Controller; EMC is separating the Element Manager functionality from the overall management and automation that is provided by ViPR Controller for these components. Therefore, starting with version 2.2, ViPR Controller only supports storage automation on EMC and non-EMC file and block arrays. ViPR Controller 2.2 does not support ViPR Data Services components (Object and HDFS Services) on file arrays, commodity, or ECS Appliance.

Note: ViPR Data Services (Object and HDFS) are available today for EMC and third-party file arrays. To use ViPR Data Services on file arrays, you must be running ViPR Controller version 2.1.x with a ViPR Data Services package. Future versions of ViPR Data Services, ECS software, and ECS Appliance will include element and system management software, thus removing the dependency on a separate, external ViPR Controller.

ViPR Controller Microsoft Hyper-V Installation

ViPR Controller versions 2.2.0.1 and higher can now be installed in Hyper-V environments. Previous versions of the ViPR Controller could only be installed in VMware environments.

For more information, see the Install EMC ViPR Controller on Hyper-V article.

Support for MetroPoint

ViPR now supports MetroPoint configurations that include RecoverPoint protection for both legs of a VPLEX distributed virtual volume.

For more information, see the ViPR Support for MetroPoint article.

SRDF Enhancements

ViPR SRDF support has been enhanced to include support for the SRDF Split function.

For more information, see the ViPR Support for VMAX SRDF Remote Replication article.

Support for VPLEX Distributed Virtual Volume Mirrors

ViPR now supports continuous copies (mirrors) for both legs of a VPLEX distributed virtual volume.

For more information, see the ViPR Support for VPLEX Distributed Volume Mirrors article.

New Storage System Support

Support for the following storage systems has been added to ViPR:
  • EMC Data Domain as Filer
  • EMC VNXe 3200
  • IBM XIV

For more information, see the following ViPR articles:

Support for the discovery and management of the following block storage arrays has been added to ViPR:
  • VMAX3
  • XtremIO with VPLEX
  • XtremIO with VPLEX and RecoverPoint

For more information on block storage array support, see the following ViPR articles:

Third-Party Block Enhancements

The following third-party block storage management capabilities have been added to ViPR:
  • Storage port creation through the UI and CLI
  • Multipathing solution support for the Fibre Channel protocol

For more information, see the following ViPR articles:

AIX and AIX VIO Host Support

ViPR operations can be used to:
  • Discover AIX hosts and AIX VIO servers.
  • Create block and file storage for AIX hosts and AIX VIO servers.
  • Mount block and file storage to AIX LPAR clients and AIX stand-alone hosts.
  • Unmount block and file storage that was mounted to AIX hosts by ViPR.
  • Delete block and file storage that was created for AIX and AIX VIO hosts in ViPR.

For more information, see the following ViPR articles:

HDS Feature Support

The following feature support has been added to ViPR for HDS storage systems:
  • Auto-tiering for standard Hitachi Unified Storage (HUS) VM policies

    New auto-tiering options are available for block storage pools when the system type is set to HDS.
    Note: The ViPR Controller supports auto-tiering on HUS VM storage systems. Auto-tiering is not supported for HDS VSP storage systems.

  • Collection of port usage metrics
  • Set the Host Mode and Host Mode Option

    By default, ViPR now sets the Host Mode and Host Mode Options when you use ViPR to export an HDS volume to a host or cluster.

    For more information, see the Set the Host Mode and Host Mode Option on Host Groups for HDS Storage article.

  • Block protection services

VCE Vblock System Support

ViPR can be used to provision and manage the compute and storage components of a Vblock system.

New ViPR features allow you to:
  • Add Vblock compute systems (Cisco Unified Computing Systems (UCS)) and compute images (ESX operating system installation files) to the ViPR Physical Assets.
  • Manage Vblock Compute System resources (UCS Blades) through the ViPR Compute Virtual Pools in the ViPR Virtual Assets.
  • Use Vblock System Services to provision and manage Vblock system compute resources.

Once the compute resources have been provisioned, you can use the ViPR Block and File Storage services to provision and manage the storage resources in your Vblock system.

For more information, see the following ViPR articles:

Backup and Restore Enhancements

Administrators can now schedule ViPR backups and download them to an external server using the ViPR user interface.

For more information, see the EMC ViPR Native Backup and Restore Service article.

EMC RecoverPoint Minimum Supported Versions

ViPR 2.2 requires a minimum version of EMC RecoverPoint 4.1 Service Pack 1 or RecoverPoint 4.1 Patch 1. Older versions of RecoverPoint will not work with ViPR 2.2. If you are upgrading to ViPR Controller 2.2 from a previous ViPR Controller-only version, you must upgrade to RecoverPoint 4.1 Service Pack 1 or Patch 1 before performing the upgrade to ViPR 2.2.

Customize Naming Conventions for ViPR Resources

By default, as you add physical and virtual assets, ViPR creates a number of resources, such as masking views and zones, using a hard-coded naming convention for each type of resource. However, you can override the defaults and provide your own naming conventions for several types of resources using Physical Assets > Controller Config in the ViPR UI. The naming conventions can also be modified using the /config/controller REST API.

The following resource naming conventions can be modified:
  • SAN Zoning
    • Zoning - scope can be set globally or by system type
  • VMAX Masking
    • Host Masking View Name
    • Cluster Masking View Name
    • Host Storage Group Name
    • Cluster Storage Group Name
    • Host Cascaded IG Name
    • Cluster Cascaded IG Name
    • Host Cascaded SG Name
    • Cluster Cascaded SG Name
    • Host Initiator Group Name
    • Cluster Initiator Group Name
    • Host Port Group Name
    • Cluster Port Group Name
  • VNX Storage Groups
    • Host Storage Group Name
  • VPLEX
    • Storage View Name
  • XtremIO
    • Volume Folder Name
    • Initiator Group Name
    • Host Initiator Group Folder Name
    • Cluster Initiator Group Folder Name
  • HDS
    • Host Group Name
    • Host Group Nick Name

For more information, see the Customize the Names of Resources Created by ViPR on Physical Systems article.
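
As a rough illustration of the REST route named above, the following Python sketch reads the controller configuration and submits a modified naming value. Only the /config/controller path comes from this article; the authentication header, the JSON payload structure, and the key names shown are illustrative assumptions, so consult the ViPR REST API reference for the actual request and response format.

    import requests

    VIPR = "https://vipr.example.com:4443"      # hypothetical ViPR endpoint
    HEADERS = {
        "X-SDS-AUTH-TOKEN": "<auth-token>",     # header name assumed; obtain a token via your normal login flow
        "Content-Type": "application/json",
        "Accept": "application/json",
    }

    # Read the current controller configuration (endpoint named in this article).
    resp = requests.get(f"{VIPR}/config/controller", headers=HEADERS, verify=False)
    resp.raise_for_status()
    config = resp.json()

    # Hypothetical edit: override the SAN zoning name pattern. The real key names
    # and value format must be taken from the ViPR REST API reference.
    config["san_zoning"] = {"zone_name_mask": "{host_name}_{array_serial}_{port}"}

    # Push the modified configuration back.
    resp = requests.put(f"{VIPR}/config/controller", json=config, headers=HEADERS, verify=False)
    resp.raise_for_status()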

ScaleIO 1.31 Support

ScaleIO 1.31 is now supported by ViPR.

For more information on ScaleIO, see the following ViPR articles:

Performance-based Array/Pool/Port Selection

Users can set maximum limits (ceilings) above which a port is no longer available for allocation. Ports that have one or more metrics over a ceiling will not be used for any allocations until all of their metrics return to values under the user-set limits, or until the limits are removed.

The following limits can be set:
  • Port Utilization Ceiling
  • Initiator Ceiling
  • Volume Ceiling
  • CPU Utilization Ceiling
  • Days to Average Utilization
  • Weight for Exponential Moving Average

You can also enable or disable the metrics.
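
To make the averaging parameters above more concrete, the following Python sketch shows how a weighted exponential moving average of a port metric can be compared against a ceiling. It is an illustration of the general technique only, not ViPR code, and the sample values and weight semantics are assumptions.

    def ema(samples, weight):
        """Smooth a series of daily utilization samples (percent) with a weight in (0, 1]."""
        average = samples[0]
        for sample in samples[1:]:
            average = weight * sample + (1 - weight) * average
        return average

    port_utilization = [62, 71, 88, 93, 79, 84, 90]   # hypothetical daily samples over the averaging window
    PORT_UTILIZATION_CEILING = 80                     # user-set maximum (percent)

    if ema(port_utilization, weight=0.6) > PORT_UTILIZATION_CEILING:
        print("Port is excluded from new allocations until its metrics drop below the ceiling")
    else:
        print("Port remains available for allocation")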

To set the limits:
  • UI - Physical Assets > Controller Config > Port Allocation
  • REST API - PUT /config/controller

For more information, see the following ViPR articles:

Task Notifications

A number of ViPR operations and services are processed asynchronously. Asynchronous operations return a task or list of tasks.

Each task represents a block of work performed by the controller engine. You can follow a task, using either the UI or the REST API, to check whether the operation has succeeded, failed, or is still in progress.

There are two types of tasks:
  • Tenant tasks, such as adding a host. Any user that is a member of the tenant can view the tasks and task details that are related to that tenant.
  • System tasks that are not associated with any tenant, such as adding a storage array. Only System Administrators can view system tasks. Only System Administrators and Security Administrators can view the details of a system task.

By default, tasks are kept for seven days from the date of completion, and the time interval between task cleanup operations is 60 minutes. These values can be changed using:
  • UI - Settings > Configuration > Other
  • REST - changing task_ttl and task_task_clean_interval using PUT /config/properties
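
As a minimal sketch of the REST option above (assuming a JSON properties payload and a token header, both of which are illustrative rather than documented here), changing the task retention values might look like this; check the ViPR REST API reference for the exact payload format.

    import requests

    VIPR = "https://vipr.example.com:4443"            # hypothetical ViPR endpoint
    HEADERS = {
        "X-SDS-AUTH-TOKEN": "<auth-token>",           # header name assumed for illustration
        "Content-Type": "application/json",
    }

    # Property names come from this article; the payload wrapper is an assumption.
    payload = {"properties": {"task_ttl": "14",                    # keep tasks for 14 days
                              "task_task_clean_interval": "120"}}  # clean up every 120 minutes

    requests.put(f"{VIPR}/config/properties", json=payload, headers=HEADERS, verify=False).raise_for_status()
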
Two screens have been added to the UI to display the tasks. Both screens display a red percentage complete bar if the task completed with an error, and a green percentage complete bar if the task completed successfully.
  • Resources > Tasks displays a screen showing a count of the number of tenant and system tasks in the pending, completed with an error, and completed successfully states. The last 1000 tasks are displayed. System administrators and security administrators can select one of the tasks to view its details.
  • Task popup displays the last 5 tasks.

In previous versions of the ViPR REST API, tasks were recorded against a resource, such as /block/volumes/{id}/tasks/{task_id}. If the volume was deleted, all of the tasks associated with the volume were also deleted. The REST API now includes a new API, /vdc/tasks/, which separates a task from its resource and allows you to get task details, search for tasks, assign tags to tasks, delete a task, and get all tasks in a tenant or system. It also includes a method to get a count of how many tasks are in the pending, completed with error, and completed successfully states. In addition, when you delete a resource, the tasks associated with it remain available for viewing.
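
The new /vdc/tasks API could, for example, be queried as in the Python sketch below. The endpoint path comes from this article, but the authentication header and the response field names are illustrative assumptions; the ViPR REST API reference defines the actual contract.

    import requests

    VIPR = "https://vipr.example.com:4443"      # hypothetical ViPR endpoint
    HEADERS = {
        "X-SDS-AUTH-TOKEN": "<auth-token>",     # header name assumed for illustration
        "Accept": "application/json",
    }

    # List recent tasks visible to the current user.
    tasks = requests.get(f"{VIPR}/vdc/tasks", headers=HEADERS, verify=False).json()

    # Print each task's name and state (field names assumed for illustration).
    for task in tasks.get("task", []):
        print(task.get("name"), task.get("state"))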

For more information on ViPR tasks, see the following ViPR articles:

ViPR Controller 2.2 Service Pack 1 new features and changes

New features and changes introduced in ViPR Controller 2.2 Service Pack 1 are described in the following sections.

Install ViPR Controller on VMware with no vApp

You can use the installer script to deploy ViPR Controller as 3 or 5 separate VMs, rather than as a single multi-VM vApp, using one of the following:

For more information, see the Install ViPR Controller on VMware without vApp article.

ViPR support for VPLEX configurations
