New Features and Changes for ViPR Controller 3.5

New features and changes overview

This article lists and describes the new features and changes provided with the ViPR Controller 3.5 release.

ViPR Controller REST API updates are documented in the ViPR Controller REST API Reference.

ViPR Controller UI and CLI updates for a feature are highlighted with the new feature description in this document.

For more details, all documents can be accessed from the ViPR Controller Product Documentation Index.

Back to Top

Guided Licensing, Initial Setup, and Deployment of your Virtual Data Center

The ViPR Controller UI Getting Started Guide quickly and easily walks you through licensing, initial setup, and deployment of your virtual data center.

Configuration requirements

Review the following before using the Getting Started Guide to configure your VDC:

ViPR Controller UI

The Getting Started Guide opens the first time you log in to the ViPR Controller UI and automatically walks you through the licensing and initial setup steps.

If you are provisioning with VMAX All Flash, Unity All Flash, or XtremIO storage systems, the Getting Started Guide takes you through the necessary steps to build your VDC, and provision storage.

Additionally, you can close the ViPR Controller Getting Started Guide at any time. ViPR Controller checks off each step that you have completed, allowing you to return to the guide and begin where you left off. Simply click the Guide option in the upper right menu of the ViPR Controller UI to re-enter the guide at the point from which you exited it.

Back to Top

Support for Host/Array Affinity for VMAX, VNX for Block, Unity, and XtremIO storage systems

Support for host/array affinity allows you to do the following:

Back to Top

Host/array affinity discovery for unmanaged volumes

Host/array affinity discovery allows ViPR Controller to discover host/array affinity when VMAX, VNX for Block, Unity, or XtremIO storage has been provisioned to a given host but the storage volumes are not under ViPR Controller management (unmanaged volumes).

Host/array affinity discovery identifies the storage and host connectivity through the host initiator. Once the host initiator is added to ViPR Controller, host/array affinity discovery includes discovery of the storage volumes and the masking views used by the host initiator to connect the storage and the host.

When host/array affinity discovery finds an unmanaged volume on a storage system provisioned to a host that has been discovered by ViPR Controller, that storage system is identified as the preferred storage system. When a storage system is identified as the preferred storage system, and host/array affinity resource placement is enabled, ViPR Controller provisions from the preferred storage system to the same host or cluster. For details, see Host/array affinity resource placement.

ViPR Controller allows you to perform host/array affinity discovery on demand or at scheduled intervals. Once scheduled host/array affinity discovery is enabled (it is disabled by default):

  • For hosts already discovered in ViPR Controller, host/array affinity discovery occurs 90 seconds after the ViPR Controller nodes are restarted.
  • When a new host is added to ViPR Controller, host/array affinity discovery occurs for the newly added host at the next scheduled interval.
  • When a vCenter is added to ViPR Controller, host/array affinity discovery is performed on the hosts brought in with the vCenter; for a newly added vCenter, this occurs at the next scheduled interval.

Configuration requirements

When using ViPR Controller to discover host/array affinity, be aware of the following:

  • This feature is supported with VMAX, VNX for Block, Unity, or XtremIO storage systems.
  • The host initiator for which you want to discover host/array affinity must have been added to ViPR Controller.
  • While discovery of host/array affinity makes ViPR Controller aware of unmanaged VMAX, VNX for Block, Unity, or XtremIO storage volumes provisioned to a host, it does not discover and ingest the unmanaged storage volumes.
  • You must rediscover host/array affinity after an unmanaged volume has been exported to, or unexported from, a host, or after the unmanaged volume has been ingested. You can perform host/array affinity discovery on demand or at a scheduled interval.
  • On-demand host/array affinity discovery can be initiated for a host from the ViPR Controller UI, CLI, or REST API.
  • You can perform host/array affinity discovery on the storage system using the ViPR Controller CLI or REST API. If you perform host/array affinity discovery on a storage system managed by a storage provider, host/array affinity discovery will occur on all the storage systems managed by the storage provider.
  • Host/array affinity discovery can be scheduled from the ViPR Controller UI.

ViPR Controller UI

The following options on the ViPR Controller UI provide the functionality to use the feature:

ViPR Controller UI Pages and Options Description
Physical > Hosts > Discover Array Affinity Select the hosts for which you want to run host/array affinity discovery, and click Discover Array Affinity to immediately run discovery or rediscovery of host/array affinity for the selected hosts.
System > General Configuration > Discovery The following options are used to schedule ViPR Controller to automatically perform host/array affinity discovery:
  • Enable Array Affinity Discovery — Set to true to enable ViPR Controller for scheduled host/array affinity discovery. When set to false, host/array affinity discovery must be performed on demand.
  • Array Affinity Discovery — When Array Affinity Discovery is enabled, this is the number of seconds between host/array affinity rediscoveries.
  • Array Affinity Refresh Interval — When Array Affinity Discovery is enabled, this is the minimum number of seconds that must elapse after the last host/array affinity discovery before a new discovery operation is allowed.

ViPR Controller CLI

The following commands are used to perform on-demand host/array affinity discovery from the ViPR Controller CLI.

ViPR Controller CLI Commands and Options Description
viprcli storagesystem discover_arrayaffinity Performs host/array affinity discovery for the given storage system.
viprcli host discover-array-affinity Performs host/array affinity discovery for the given hosts.
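
For example, a minimal sketch of checking the required arguments before running on-demand discovery; the identifying options each command takes are not listed in the table above, so list them with the built-in help first:

viprcli storagesystem discover_arrayaffinity -h
viprcli host discover-array-affinity -h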

Back to Top

Host/array affinity resource placement

ViPR Controller host/array affinity resource placement allows you to use the preferred storage during provisioning for a given host. You can further define whether, if the preferred storage becomes unavailable, ViPR Controller can continue to provision to that host from non-preferred storage.

Discovery of preferred storage

When host/array affinity resource placement is enabled, ViPR Controller identifies the preferred storage for:

  • Unmanaged volumes through host/array affinity discovery. For details see Host/array affinity discovery for unmanaged volumes.
  • Managed volumes, as the storage and masking views that have already been provisioned to the hosts or cluster using ViPR Controller.

Be aware of the following when using this feature:

  • This feature is supported with VMAX, VNX for Block, Unity, or XtremIO storage systems.
  • Host/array affinity is identified by the connectivity through the host initiator. Therefore, the host initiator, for which you want to manage host/array affinity, must have been added to ViPR Controller.
  • Host/array affinity provisioning is only applied when you use the following block storage services: Create a Block Volume for a Host, or Create and Mount Block Volume from any one of the host-specific block storage services. If you use the Create Block Volume service to first create the volume, and then later export the volume to a host, host/array affinity will not be applied.

ViPR Controller UI

The following ViPR Controller UI pages and options provide the functionality to use the feature:

ViPR Controller UI Pages Description
Virtual > Block Virtual Pools > Create or Edit Block Virtual Pools > Resource Placement Policy Options are:
  • Default - Storage Array selection based on performance metrics and capacity — allows ViPR Controller to use the default method of storage selection during provisioning.
  • Host/Array Affinity - Storage Arrays/Pools selection based on Host/Cluster array affinity first, then performance metrics and capacity — enables the virtual pool to be used for host/array affinity provisioning. During provisioning ViPR Controller will only provision from the preferred storage. If there are no preferred storage pools in the virtual pool or if preferred storage is unavailable, then ViPR Controller will continue to provision from non-preferred storage only if the value set in the Physical > Controller Config > Host/Array Affinity Resource Placement tab is greater than the number of preferred storage systems.
    Note:
    You can define the Host/Array Affinity Resource Placement value. The default value is 4096. Decrease the value to enforce stricter host/array affinity resource placement.

Physical > Controller Config > Host/Array Affinity Resource Placement tab When the virtual pool from which the storage is being provisioned is enabled for host/array affinity, use this tab to set the maximum number of storage systems from which storage can be provisioned to a host.
  • Scope Type: Global — this setting is applied to all hosts when host/array affinity resource placement is enabled.
  • Scope Value — not applicable at the time of this release.
  • Value — the maximum number of storage systems from which ViPR Controller can provision. By default, the value is set to 4096. To use non-preferred systems, the value must be greater than the number of preferred storage systems. If the value is:
    • Less than or equal to the number of preferred storage systems, then the provisioning order will fail when the preferred storage becomes unavailable.
    • Greater than the number of preferred storage systems, then ViPR Controller will attempt the provisioning order on the non-preferred storage systems if the preferred storage becomes unavailable. Once storage from a non-preferred storage system is provisioned to the host, that storage system becomes preferred storage for that host, and the number of preferred storage systems is increased by one.

ViPR Controller CLI

The viprcli vpool create and viprcli vpool update commands include the [-placementpolicy | pp] option to set the resource placement type from the CLI. Enter either of the following:

  • default_policy — to have ViPR Controller use the default method of storage selection during provisioning.
  • array_affinity — to enable the virtual pool to be used for host/array affinity provisioning.
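
For example, a hedged sketch of enabling host/array affinity placement when creating a block virtual pool; only -placementpolicy and its values come from this release, while the -name and -type options are assumed placeholders and any other required arguments are omitted (check viprcli vpool create -h):

viprcli vpool create -name <virtual_pool_name> -type block -placementpolicy array_affinity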

Back to Top

Add aliases to VMAX storage system initiator names

You can use the ViPR Controller CLI or REST API to add aliases to the initiator world wide port names (WWPNs) in masking views for VMAX storage systems.

Configuration requirements and information

ViPR Controller commands

The following ViPR Controller CLI commands have been added to list and add aliases to VMAX storage system initiator names:

Back to Top

Support for compression on VMAX3 All Flash Arrays

You can use ViPR Controller to discover and manage VMAX3 All Flash arrays that support compression, and to enable and disable compression on the storage group.

Configuration requirements and information

Be aware of the following when working with compression-enabled storage:

ViPR Controller UI

The following ViPR Controller UI pages and options provide the functionality to use the feature:

ViPR Controller UI Pages Description
Virtual > Block Virtual Pools > Create or Edit Block Virtual Pools > Hardware > Enable Compression When enabled, only the VMAX3 All Flash storage groups that support compression will be available to add to the virtual pool. Compression does not need to be enabled on the VMAX3 storage groups; it only needs to be supported. When storage from this virtual pool is provisioned to a host, the compression settings defined on the storage system are applied.
Catalog > Block Storage Services > Change Volume Virtual Pool > Change Auto-tiering Policy, Host IO Limits, or Compression Allows you to move volumes to a virtual pool where compression is enabled, or where compression is enabled and the compression ratio set on the storage pools in the virtual pool matches the ratio set on the volume being moved.
Storage Systems > <storage system name> > Storage Pools > Compression Enabled column Identifies whether compression is enabled on the VMAX3 storage pool.
Virtual Array > <virtual array name> > Storage Pools > Compression Enabled column Identifies whether compression is enabled on the VMAX3 storage pool.
Resources > Volumes > <volume name> > More Details > Compression Ratio Displays the compression ratio set on the volume.

ViPR Controller CLI

The -enablecompression option with the ViPR Controller CLI viprcli vpool create and viprcli vpool update commands is provided for this feature.

viprcli vpool create [-enablecompression <enable_compression>]
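
For example, a hedged sketch of enabling compression on an existing block virtual pool; only -enablecompression comes from this release, while the -name and -type options and the value true are assumptions (check viprcli vpool update -h for the exact arguments):

viprcli vpool update -name <virtual_pool_name> -type block -enablecompression true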

Back to Top

Support for Dell SC Series

Dell SC Series (formerly Compellent) arrays may be discovered using ViPR Controller UI or CLI commands. Discover the Dell SC arrays by connecting to a Dell Storage Manager (DSM, formerly known as Dell Enterprise Manager). DSM manages one or more Dell SC arrays and can be configured to allow access per array based on the credentials that are used. The DSM IP address and credentials are obtained from the System Administrator at the time of discovery.

ViPR Controller provides these capabilities for managing Dell SC Series arrays:
  • Provisioning
  • Exports
  • Snaps and clones
  • Support for Consistency Groups
  • Ingestion
  • Discovery of ports and pools
  • Detection of volume WWN
  • Ability to specify ports for volume exports

Discovery

Provisioning and export

Snapshot and clone operations

Ingestion

Communication with the array

The ViPR Controller Dell SC driver uses the REST API available with Dell Storage Manager 2015 R3 or above. All API communication uses HTTPS over port 3033.

ViPR Controller UI and CLI updates

The Dell SC option has been added to the ViPR Controller UI, and the array type, dellsc, has been added to the ViPR Controller CLI. Use these options to select Dell SC storage to add, virtualize, or manage, such as when adding Dell SC storage systems to ViPR Controller.

Exclusions and limitations

These features are not supported using ViPR Controller at this time. The storage administrator may still enable them using Dell SC management tools if desired.
  • Remote replication
  • QoS (Quality of Service)
  • File storage on Dell Storage FluidFS
  • Deduplication
  • Compression
  • Live volume auto-failover

Back to Top

Support for IBM XIV

ViPR Controller manages XIV arrays using an SMI-S provider and (optionally) Hyper Scale Manager.

VPLEX with IBM XIV backing volumes

In earlier releases, IBM XIV could be used as a VPLEX backend storage system only if configured using OpenStack Cinder. Cinder is no longer a requirement. All VPLEX operations supported by ViPR Controller can be used when VPLEX is configured natively with IBM XIV backing volumes.

ViPR Controller supports the following configurations for VPLEX with IBM XIV backing volumes:

Discovery, provisioning, and export

Use either of these pages to configure the XIV storage system from the ViPR Controller user interface. When configuring the storage system provider, choose one of two methods:
  1. SMI-S only

    SMI-S is required for storage provider or storage system discovery. If ViPR Controller is using only SMI-S to provision host clusters, then ViPR Controller creates stand-alone host objects for each cluster member on the IBM XIV storage system. ViPR Controller ensures a consistent HLU is used for cluster export.

  2. SMI-S plus Hyper Scale Manager

    When ViPR Controller is using SMI-S plus Hyper Scale Manager, ViPR Controller can create cluster objects on IBM XIV. This ensures a consistent HLU is used for cluster export.

When you configure the storage system in ViPR Controller to use IBM Hyper Scale Manager, the REST API is used, allowing volumes to be exported to an IBM XIV cluster host. Use of IBM Hyper Scale Manager is optional; however, you cannot delete Hyper Scale Manager from the storage system configuration after adding it.
Note:
SMI-S does not identify clusters. If only SMI-S is used to export volumes to a cluster, then standalone hosts are created for each member of the cluster on IBM XIV.

ViPR Controller CLI updates

The viprcli storageprovider create and viprcli storageprovider update commands have new options to support the Hyper Scale Manager. For example:
viprcli storageprovider create -name xiv -provip <IP address or FQDN> -provport 5989 -user admin -if ibmxiv -hyperScaleHost <IP address or FQDN> -hyperScalePort 8443 -secondary_username admin
Enter password of the storage provider:
Retype password:
Enter password of the secondary password:
Retype password:
viprcli storageprovider update -n xiv -provip <IP address or FQDN> -provport 5989 -user admin -if ibmxiv -hyperScaleHost <IP address or FQDN> -hyperScalePort 8443 -secondary_username admin -newname xiv
Enter password of the storage provider:
Retype password:
Enter password of the secondary password:
Retype password:

Back to Top

Enhanced support for EMC Unity

In addition to the EMC Unity feature support delivered with ViPR Controller version 3.0.0.1, the following support for EMC Unity has been added to ViPR Controller.

VPLEX with Unity backing volumes

All VPLEX operations supported by ViPR Controller can be used when VPLEX is configured with Unity backing volumes.

ViPR Controller supports the following configurations for VPLEX with Unity backing volumes:

RecoverPoint with Unity backed volumes

All RecoverPoint operations supported by ViPR Controller are also supported when RecoverPoint is backed with Unity volumes.

ViPR Controller supports the following RecoverPoint configurations with Unity backed volumes:

Limitations to the ViPR Controller support of RecoverPoint with Unity backed volumes:

ViPR Controller Application support for Unity volumes

All application operations supported by ViPR Controller are supported for Unity storage systems, including:

Limitations of support for Unity volumes in ViPR Controller applications:

Enhanced ingestion support

ViPR Controller can be used to ingest the following types of Unity volumes:

ViPR Controller UI and CLI updates

The EMC Unity option has been added to the ViPR Controller UI, and the unity option has been added to the ViPR Controller CLI for selecting Unity storage to add, virtualize, or manage in ViPR Controller, such as when adding Unity storage systems to ViPR Controller.

Back to Top

Automate cloning of Cisco zonesets

Cisco zonesets are created, modified, or deleted when ViPR Controller operations that export or unexport block volumes to and from hosts are performed.

When using ViPR Controller to perform operations on Cisco zonesets, you can enable ViPR Controller to automatically create a clone of the zoneset prior to committing the change on the zoneset. The clone can then be used as a backup of the zoneset prior to the change.

Additionally, you can control whether a ViPR Controller operation can continue in the event that the creation of the zoneset clone fails.

Automation of cloning Cisco zonesets can be performed from the ViPR Controller UI, CLI or REST API.

Configuration requirements

When defining automation of Cisco zonesets through ViPR Controller, note the following:

ViPR Controller UI

The following options in the ViPR Controller UI General Configuration > Controllers tab are provided for this feature:

Option Description
Clone Cisco Zoneset Set to:
  • True, to enable automatic creation of zoneset clones.
  • False, to disable automatic creation of zoneset clones.

This option is set to False by default.

Allow Cisco Zoneset Commit Set to:
  • True, to allow the operation to continue even when creation of the zoneset backup fails.
  • False, to pause the operation when the creation of the zoneset backup fails.

This option is set to False by default.

ViPR Controller CLI

The following options in the ViPR Controller CLI viprcli system set-properties -pn command are provided for this feature.

Option Description
controller_mds_clone_zoneset -pvf property_value.txt Sets the value to true, to enable automatic creation of zoneset clones.

When this value is not added to the viprcli system set-properties -pn option, automatic creation of zoneset clones is disabled by default.

controller_mds_allow_zoneset_commit -pvf property_value.txt Sets the value to true, to allow the operation to continue even when creation of the zoneset backup failed.

When this value is not added to the viprcli system set-properties -pn option, the ViPR Controller operation will not proceed if the creation of the clone fails.
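
For example, a hedged sketch of enabling both behaviors from the CLI; the assumption here is that the file passed with -pvf simply contains the value to set (true):

echo "true" > property_value.txt
viprcli system set-properties -pn controller_mds_clone_zoneset -pvf property_value.txt
viprcli system set-properties -pn controller_mds_allow_zoneset_commit -pvf property_value.txt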

Back to Top

New feature support for VPLEX

The following new features have been added to ViPR Controller to enhance VPLEX support:

Back to Top

Support for VPLEX with SRDF backing volumes

ViPR Controller provides support for VPLEX with SRDF backing volumes in the following two types of deployments:

  • SRDF connectivity between two VPLEX local systems.
  • VPLEX distributed with VMAX3 SRDF backing volumes.

For details, refer to the ViPR Controller Support for VPLEX and VPLEX with EMC Data Protection User and Administration Guide, which is available from the ViPR Controller Product Documentation Index.

Back to Top

Thin virtual volume provisioning on VPLEX with XtremIO backing volumes

ViPR Controller will perform thin provisioning on VPLEX virtual volumes with XtremIO backing volumes.

When creating virtual pools, you can select the thin provisioning type to have thin-capable volumes added to the virtual pool. At provisioning time, ViPR Controller performs thin provisioning when the volumes are created from the virtual pool.

You can move the VPLEX volume from a non-thin virtual pool to a thin enabled virtual pool to change the virtual volume to be thin-enabled.

If you run the Move into VPLEX service order on a thin-capable non-VPLEX volume, the virtual volume (local or distributed) becomes thin-enabled.

Ingestion of VPLEX thin-enabled virtual volumes with XtremIO backing volumes is also supported.

Configuration requirements

  • ViPR Controller only supports this feature for VPLEX with XtremIO backing volumes.
  • The VPLEX version must support thin provisioning. Refer to the ViPR Controller Support Matrix to determine which versions of VPLEX support this functionality.

Back to Top

Support for moving exported VPLEX volumes to a different virtual array

ViPR Controller migration allows you to move exported VPLEX virtual volumes from one virtual array to another.

Configuration requirements

You can only move exported VPLEX virtual volumes from one virtual array to another when:

  • The target virtual array is part of the same VPLEX cluster as the source volumes.
  • The host initiators are present on both the source and target virtual arrays.

Back to Top

Allow MetroPoint when adding RecoverPoint protection in the Change Virtual Pool or Change Volume Virtual Pool service

ViPR Controller migration allows you to change the RecoverPoint protection to include MetroPoint.

Configuration requirements

RecoverPoint protected VPLEX volumes or MetroPoint (VPLEX Metro only) volumes are also eligible for VPLEX Data Migration. For these volumes, the original virtual pool is compared to the target virtual pool, and migrations are based on changes in:
  • Source virtual pool
  • Source journal virtual pool
  • Target virtual pools
  • Target journal virtual pools

Targets and Journals can be implicitly migrated if there are changes in the new virtual pool when compared to other virtual pools. (The other virtual pools must be eligible for migration.)

The same rules apply to all virtual pools when determining whether or not a migration will be triggered.

RecoverPoint protected VPLEX volumes or MetroPoint (VPLEX Metro only) volumes that are in consistency groups with array consistency enabled OR are in Applications will be grouped together for migration.

RecoverPoint or MetroPoint (VPLEX Metro only) Target volumes that are in Applications will be grouped together for migration.

Back to Top

Suspend, resume, or roll back orders during VPLEX data migration

You can suspend migration so you can check data integrity before the operation is committed on the migration appliance. Then resume or roll back the order.

This feature allows you to check the integrity of an application, such as the virtual machine, database, or file system, before the original source volumes associated with migration are deleted.

Roll back the migration if you find problems that need to be addressed when the migration is suspended.

Checking the Suspend box in the Service Catalog > Migration Services > VPLEX Data Migration page suspends the migration operation before the commit and delete workflow of the original source volumes occurs.

When the commit() operation is suspended (before commit), you will see the Order and Task in a suspended state in the UI and will be offered rollback and resume buttons. When using CLI commands to list the tasks, you will see the task is in a suspended_no_error state and the description field of the task will explain which step it is suspended on.

When the delete original source step is suspended (before delete), you will see the Order and Task in a suspended state in the UI and will be allowed to only resume.

ViPR Controller UI and CLI updates

Suspend, Resume, and Rollback options have been added to the VPLEX Data Migration service.

The viprcli task command lists task objects for the respective domain objects. For example, if you issue a CLI command to run a migration, you can run viprcli volume tasks to obtain a list of tasks associated with the volume(s) being migrated. Then you can use the viprcli task command with the {resume|rollback} arguments on any task that is in the suspended_no_error state.

Two arguments, {rollback,resume}, have been added to the viprcli task command.

usage: viprcli task -h -hostname <hostname> -port <port_number> -portui <ui_port_number> -cf <cookiefile> -tid <task_id> 
{rollback,resume} 
 
Identify the task with the task_id option when issuing the resume or rollback arguments.
 resume Resume a suspended task
 rollback Rollback a suspended task
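
For example, once a suspended migration task has been identified, it can be resumed or rolled back by its task id (the id value below is a placeholder taken from the task listing):

viprcli task -tid <task_id> resume
viprcli task -tid <task_id> rollback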

Back to Top

Custom names for VPLEX volumes

You can set up a custom volume naming convention so the volume names will match between VPLEX and ViPR Controller. You can also customize the volume name to include other identifiers, such as the project name, host name, and so forth.

Access the Volume Naming feature from the Physical > Controller Config page. Three options are available for use in customizing volume names:
Custom Volume Naming Enabled
This option is disabled by default. When set to Yes, the values in the next two choices in the drop-down list are used to name the volumes.
Volume Custom Name
When provisioning volumes using Catalog > Block Storage Services, the custom configuration settings specified in the Volume Custom Name values are used to name the volumes. This allows the user-supplied volume label to display in both VPLEX and ViPR Controller. You can customize any of these variables:
  • volume_label
  • volume_wwn
  • project_name
  • tenant_name
Export Custom Volume Name
When you want the volume name to include the name of the compute resource that the volume will be exported to, edit the Export Custom Volume Name values. For example, when you use Catalog > Block Storage Services > Create Block Volume for Host, you specify a Host in the service. If you have enabled the Export Custom Volume Name option, ViPR Controller will name the volume with the user-supplied label plus the export_name of the Host, for example, Demo1_lglw1024, where lglw1024 is the Host name. You can customize any of these variables:
  • volume_label
  • volume_wwn
  • project_name
  • tenant_name
  • export_name
You may provision volumes using other host catalogs too, such as Block Services for Windows, Linux, and VMware.
Note:
The user-supplied volume label cannot start with a numeric character. The label can only begin with the underscore character (_) or an alpha character.

Other services that use the custom volume naming conventions include:
  • VPLEX volume clones
  • VPLEX volume snapshots exposed as VPLEX volumes
  • A vPool change to import a non-VPLEX volume
  • VPLEX volume with mirror where the mirror is detached and promoted to become a new VPLEX volume
  • Change virtual array
  • VPLEX data migration
Note:
At this time, you cannot rename a volume when you unexport and export to a different host.

Back to Top

Port selection for VPLEX exports is now based on performance metrics

When exporting VPLEX volumes to a host or cluster, the storage port selection is based on port/director performance metrics.

Collection of VPLEX performance metrics allows port selection algorithms to be applied. The metrics are collected in files with .csv format on the VPLEX management server. The .csv files:
  • Contain director/port metrics for a director
  • Are located on the VPLEX management server: /var/log/VPlex/cli
  • Have a file name pattern of *PERPETUAL_vplex_sys_perf*log, for example, director-2-1-A_PERPETUAL_vplex_sys_perf_mon.log
  • Are updated approximately every 30 seconds
  • Have a default rollover size of 10 MB
  • Require you to log in with SSH on the management server to read the file
Additional information:
  • Performance metrics for a director (and its ports) are only available from the management server on the cluster containing the director.
    Note:
    This differs from discovery of physical and logical configurations which can be obtained for both clusters from a single management server.

  • Both management servers for a VPLEX metro configuration must be managed to collect metrics for all directors and ports.

File header

Time,Time (UTC),cache.dirty (KB),cache.miss (counts/s),cache.rhit (counts/s),cache.subpg (counts/s),fc-com-port.bytes-active A2-FC00 (counts),fc-com-port.bytes-active A2-FC01 (counts),fc-com-port.bytes-active A2-FC02 (counts),fc-com-port.bytes-active A2-FC03 (counts),fc-com-port.bytes-queued A2-FC00 (counts),fc-com-port.bytes-queued A2-FC01 (counts),fc-com-port.bytes-queued A2-FC02 (counts),fc-com-port.bytes-queued A2-FC03 (counts),fc-com-port.bad-crc A2-FC00 (counts/s),fc-com-port.bad-crc A2-FC01 (counts/s),fc-com-port.bad-crc A2-FC02 (counts/s),fc-com-port.bad-crc A2-FC03 (counts/s),fc-com-port.discarded-frames A2-FC00 (counts/s),fc-com-port.discarded-frames A2-FC01 (counts/s),fc-com-port.discarded-frames A2-FC02 (counts/s),fc-com-port.discarded-frames A2-FC03 (counts/s),fc-com-port.protocol-error A2-FC00 (counts/s),fc-com-port.protocol-error A2-FC01 (counts/s),fc-com-port.protocol-error A2-FC02 (counts/s),fc-com-port.protocol-error A2-FC03 (counts/s),fc-com-port.ops-active A2-FC00 (counts),fc-com-port.ops-active A2-FC01 (counts),fc-com-port.ops-active A2-FC02 (counts),fc-com-port.ops-active A2-FC03 (counts),fc-com-port.ops-queued A2-FC00 (counts),fc-com-port.ops-queued A2-FC01…

Data sample

2014-10-07 22:09:55,1412719795902,0,0,0,0,0,0,no data,0,0,0,no data,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,no data,0,0,0,no data,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,419,319,797,406,2,152,0,2,91,0,0,0,0,0.0,0,0.0,0.0,0.0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,10,0,0,0,0,0,0,0,0,0,56,0,0,0.258,0,3.89,0.258,0,3.89,0,0,0,0,0,0,0,0,0,0.0,0,0.0,0.0,0.0,14,10,4,11,1,11,1,0
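
As a sketch of where to find these files, assuming SSH access to the VPLEX management server (the account shown is a placeholder for your management server credentials):

ssh <user>@<vplex-management-server> "ls -l /var/log/VPlex/cli/*PERPETUAL_vplex_sys_perf*log"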

Back to Top

VPLEX performance metrics

VPLEX metrics collection is contingent on having metering turned on and configured. Set Enable Metering to true in System > General Configuration > Controller.

Each management server in a VPLEX Metro configuration is a storage provider for VPLEX. Add the provider details for each of the VPLEX management servers using the Physical > Storage Providers > Add page in the ViPR Controller UI. This adds both cluster management server IP addresses to ViPR Controller and enables collection of VPLEX front-end port performance metrics on both clusters.

The table describes the metrics collected from VPLEX that ViPR Controller uses to allocate ports.

These metrics are used to calculate:
  • Percent busy for the port (FEPort) which is computed from kbytesTransferred over the time period since the last valid sample.

Back to Top

Block volume serviceability improvements

This release of ViPR Controller includes improvements in management of volumes, masks, and initiators.

ViPR Controller has been enhanced to adapt to external changes to array masks, including initiators (hosts/clusters) and volumes that are not under ViPR Controller management. The improvements in export management allow users to perform add and remove operations outside of ViPR Controller with confidence that operations performed in ViPR Controller on the same array masks consider such unmanaged volumes, hosts, and clusters. Additionally, ViPR Controller adds a new layer of protection and verbose validation reporting for array masks that would have been adversely impacted by a ViPR Controller user request.

Back to Top

New Remove Block Volume service added

A new service, Remove Block Volume, removes only volumes with no dependencies. The original Remove Block Volume service is renamed to Unexport and Remove Block Volume.

The new Catalog > Block Storage Services > Remove Block Volume service will only remove volumes without dependencies such as the following:
  • Export
  • Snapshot
  • Snapshot session
  • Full copy
  • Continuous copy
Note:
The service order will fail if any dependencies are found on the volume.

If you wish to have ViPR Controller orchestrate the removal of Block Volumes or Consistency Groups and their related exports, use the Unexport and Remove Block Volume service instead of the Remove Block Volume service.

Back to Top

New and changed ViPR Controller services

The following services have been introduced or changed in this release of ViPR Controller.

ViPR Controller UI Catalog ViPR Controller CLI command Service description
All Flash Services Not applicable This category groups together the services that can be used to provision and manage storage on all flash arrays.
Block Protection Services > Export Continuous Copies
viprcli exportgroup add_vol

with the following option:

-blockmirror|bmr
Used to export a continuous copy to a host.
Block Protection Services > Unexport Continuous Copies
viprcli exportgroup remove_vol

with the following option:

-blockmirror|bmr
Used to unexport a continuous copy from a host.
Block Protection Services > Failover Block Volume
viprcli volume continuous_copies update-access-mode

with the following option:

[-accessmode|am <accessmode> {DIRECT_ACCESS}]
viprcli consistencygroup update-access-mode

with the following option:

[-accessmode|am <accessmode> {DIRECT_ACCESS}]
Updates the access mode for RecoverPoint consistency groups only. Currently, the only supported value is DIRECT_ACCESS.
Block Services > Change Volume Virtual Pool and Block Services > Change Virtual Pool Not applicable The Operation, Add RecoverPoint Protection, accepts VPLEX Metro volumes. The virtual pool can define MetroPoint (CDP or CRR).
Block Services > Unexport and Remove Block Volume Not applicable This was formerly named Remove Block Volumes. This service deletes the block volume and orchestrates other changes (such as removing snapshots or full copies) if needed to successfully complete the deletion of the volume.
Block Services > Remove Block Volume Not applicable This is a new service that removes unexported block volumes or consistency groups. If there are snapshots or other objects associated with the volumes or consistency groups, you must manually remove them before this service can succeed.
File Services for Linux > Mount a NFS Export Updated:
viprcli filesystem mount
Mount a previously created NFS Export of a file system to a Linux Host.
File Services for Linux > Unmount a NFS Export New:
viprcli filesystem unmount
Unmount a previously mounted NFS Export on a Linux Host.
File Services for Linux > Create a file system, NFS export and mount it Not applicable. Create a new file system, create an NFS export and mount it to a Linux Host.
Not applicable. New:
viprcli filesystem mountlist
Return a list of mounted NFS exports for a specified project
File Protection Services > Failover File System Updated:
viprcli filesystem failover-replication
Perform a Disaster Recovery Failover operation using a File System.

Isilon configurations consist of NFS Exports, Export Rules, CIFS shares, and ACLs.

Replicates the source file system NFS exports, export rules, CIFS shares, and ACLs to the target file system. During the first failover, the entire configuration is replicated to the target file system; during subsequent failovers, only the delta is replicated.

File Protection Services > Failback File System Updated:
viprcli filesystem failback-replication
Perform a Disaster Recovery Failback operation using a File System.

Isilon configurations consist of NFS Exports, Export Rules, CIFS shares, and ACLs.

Replicates the target file system NFS exports, export rules, CIFS shares, and ACLs to the source file system; only the delta is replicated.

File Storage Services > Create File System and NFS Export Modified:
viprcli filesystem export
viprcli filesystem export-rule
viprcli filesystem show-exports

Ability to define multiple export rules for a single Isilon export

Support an NFS export on a file system directory with a security flavor to NFS hosts with different access permissions (read-only, read-write and root)

Support an NFS export with multiple security flavors to same set of hosts with different access permissions.

Supports multiple exports on a file system directory, each export with different security flavor to set of NFS hosts with different access permissions.

Ingestion of export rules with one or multiple security flavors

File Storage Services > Create File System and NFS Export; File Storage Services > Create File System and CIFS Share; File Storage Services > Remove File System Not applicable For VNX, supports concurrent file system provisioning operations, including one or multiple arrays
Migration Services > VPLEX Data Migration
viprcli task -h
usage: viprcli task -h -hostname <hostname> -port <port_number>
-portui <ui_port_number> -cf <cookiefile> -tid <task_id> {rollback,resume} ...
Support for suspend, resume, or rollback during the migration process.

Back to Top

New and changed ViPR Controller resources

These resources have been introduced or changed in this release of ViPR Controller.

ViPR Controller UI ViPR Controller CLI command Description
Resources > Actionable Events page is added to the UI.

An icon shaped like a horn or loud speaker is added to this page and to the top banner in the UI.

viprcli event {list,show,delete,approve,decline,details}
...
with the following options:
[-h][-hostname <hostname>][-portui <ui_port_number>] [-cf
<cookiefile>]
If changes occur during vCenter or Host discovery, ViPR Controller may need to trigger export group updates in order to maintain the correct state among hosts, clusters, and their export groups. See Actionable events for more information. The number on the top of the icon indicates the number of actionable events that need to be approved or declined. You must approve or decline an actionable event in order to continue processing orders for the devices affected by the event.
Resources > File Systems Modified:
viprcli filesystem export
viprcli filesystem export-rule
viprcli filesystem show-exports

Ability to define multiple export rules for a single Isilon export

Support an NFS export on a file system directory with a security flavor to NFS hosts with different access permissions (read-only, read-write and root)

Support an NFS export with multiple security flavors to same set of hosts with different access permissions.

Supports multiple exports on a file system directory, each export with different security flavor to set of NFS hosts with different access permissions.

Ingestion of export rules with one or multiple security flavors

Back to Top

Scheduling service orders

The Scheduler allows you to schedule a protection service order to run at a later time or on a recurring basis, to view scheduled orders, and to edit or cancel a scheduled order.

The Scheduler is available for the following services:

When an order is scheduled, a scheduled event is created in ViPR Controller which is used to manage and schedule all the associated future orders.

When a recurring order is scheduled, only the next order is scheduled; not all future orders are immediately scheduled. Once a scheduled order is run, the next order defined by the recurrence is then scheduled.

Configuration requirements and information

Be aware of the following when scheduling orders:

ViPR Controller UI

The following pages and options are available in the ViPR Controller UI to use this feature:

ViPR Controller pages and options Description
Catalog > Block Protection Services
  • Create Block Snapshot
  • Create Block Full Copy

Catalog > File Protection Services > Create File System Snapshot

The following options have been added to the service orders for scheduling services:
  • Enable Scheduler — Select to schedule the order.
  • Start Date/Time — Enter the date and time for the first time the order will be run.
  • Frequency — Select how often to run the service order.
  • Recur every — If you have scheduled the service order to run indefinitely or with a recurrence, select how often within the frequency to run the order; for example, 1 Day runs the order daily, and 2 Days runs it every other day.
  • Number of recurrences — If you selected End after recurrences for the Frequency, enter the number of recurrences to schedule here. Once the given number of recurrences is reached, no more copies are taken for this order.
  • Automatic Expiration — Maximum number of snapshots or copies to keep. Once the retention is met, the oldest snapshot or copy is removed before the new one is created.

    It is recommended that you do not manually remove any of the snapshots or copies being managed by the ViPR Controller scheduler.

    Note:
    When working with block full copy orders, if you have set the order to create multiple copies, Automatic Expiration applies to all the copies created for the order. For example, if you set the order to create 3 full copies and set automatic expiration to keep 5 full copies, ViPR Controller maintains 15 (3x5) full copies and deletes the three oldest copies after the retention is met.

Catalog > Scheduled Orders Displays the list of scheduled orders. Tenant Administrators can cancel scheduled orders or edit recurring scheduled orders, or edit the expiration time set on the scheduler.

The Scheduled Time column displays the time the order is scheduled to run.

ViPR Controller CLI

The following commands are provided to use this feature.

Option Description
viprcli scheduled_event create Schedule a new event.
viprcli scheduled_event update Update an existing event schedule.
viprcli scheduled_event cancel Cancel a scheduled order.
viprcli scheduled_event get Get a list of scheduled events.
viprcli scheduled_event delete Delete a scheduled event.
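
A hedged sketch of inspecting scheduled events from the CLI; the required identifying options are not listed in the table above, so confirm them with the built-in help for each subcommand:

viprcli scheduled_event get -h
viprcli scheduled_event cancel -h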

Back to Top

Options to add or remove hosts, clusters, or initiators from export groups have been removed from the Resources > Export Groups page

The Add and Remove options have been removed from the ViPR Controller UI Resources > Export Groups page.

ViPR Controller automatically adds the export to the export group during the following operations:

ViPR Controller automatically removes the export from the export group during the following operations:

Back to Top

Actionable events

If changes occur during vCenter or Host discovery, ViPR Controller may need to update export groups in order to maintain the correct state among hosts, clusters, and their export groups. Instead of performing these updates automatically, a list of actionable events is generated. Only the Tenant Administrator can approve or decline the event.

In many cases changes in vCenter discovery are temporary and are due to maintenance activities. Usually, the environment returns to the previous state after maintenance. ViPR Controller no longer performs updates automatically when it detects post-discovery changes. Instead, the tenant administrator is given a chance to approve or decline the update based upon knowledge of the data center activities.

Use the Resources > Actionable Events page to review the list of Pending, Approved, Failed, or Declined actionable events. (There is also a loud speaker icon at the top of the screen showing the number of events that need to be reviewed. Click the icon to open the actionable events page.) In addition to viewing actionable events, you can click the event and delete associated tasks.
Note:
The Auto-Export option has been removed from the Physical > Clusters > Add Cluster page. Default behavior for automatic exports varies depending upon the type of host or cluster:
  • For vCenter, Windows, Linux, AIX, or HP-UX-discovered hosts, you must use the Resources > Actionable Events page to manage export group updates. Automatic export is turned off.
  • Automatic export is on by default for manual or user-created clusters when moving hosts between clusters in the UI.
  • Automatic export is off by default when using the CLI commands.
  • There is no automatic export for NFS exports. Actionable events are created only if the host is in a shared block export group and is being removed/added to a cluster. An actionable event is created if the host moved to a different datacenter or if it was added/removed from vCenter.
  • If a host is removed from vCenter (not discoverable at all through vCenter), then an actionable event is created even if it doesn't have any block exports. Approving the event will unassign the host from vCenter but not perform any block export updates.

ViPR Controller UI ViPR Controller CLI command Description
Resources > Actionable Events
viprcli event {list,show,delete,approve,decline,details}
...
with the following options:
[-h][-hostname <hostname>][-portui <ui_port_number>] [-cf
<cookiefile>]
If changes occur during vCenter or Host discovery, ViPR Controller may need to update export groups in order to maintain the correct state among hosts, clusters, and their export groups. Actionable events provides more information.
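
A minimal sketch of reviewing events from the CLI; only the subcommand names come from this release, and viprcli event list may require additional identifying options in your environment (check with -h):

viprcli event list
viprcli event approve -h
viprcli event decline -h
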
Back to Top

Option to Enable Direct Access has been added to the Failover Block Volume page for RecoverPoint-protected volumes

The Enable Direct Access option has been added to the ViPR Controller UI Catalog > Block Protection Services > Failover Block Volume page.

Select Enable Direct Access for RecoverPoint volume or consistency group failover if you are concerned about the journal volume running out of space during the failover. By default, ViPR failover (using the RecoverPoint test copy) enables image access for a target copy in logged access mode. In this mode, all new writes are written to the replica volume and undo information is stored in the image access log, which is located within the journal. With direct access, the journal is not kept and a full sweep is done after direct access mode is complete. This is an option for long-term tests where the journal may not have enough space for long-term image access mode.

Back to Top

New Southbound SDK features

Support for the following Southbound SDK (software development kit) features has been added:

Note:
Documentation is maintained in this web location: https://coprhd.atlassian.net/wiki/display/COP/Storage+Driver+SDK+for+Array+Integration+to+CoprHD. The Southbound SDK jar files may be downloaded from the documentation page.

Back to Top

New feature support for Isilon storage systems

The following new features have been added to ViPR Controller to enhance Isilon storage system support:

Back to Top

Allow multiple export rules with different security flavors to be set on an export

Isilon exports have been enhanced to allow multiple export rules with different security flavors. Customers can now create and ingest exports with multiple export rules set on them.

  • Enhanced Isilon exports:
    • Ability to define multiple export rules for a single Isilon export
    • Supports an NFS export on a file system directory with a security flavor to NFS hosts with different access permissions (read-only, read-write and root)
    • Supports an NFS export with multiple security flavors to the same set of hosts with different access permissions
    • Supports multiple exports on a file system directory, each export with a different security flavor to a set of NFS hosts with different access permissions
  • Ingestion of export rules with one or multiple security flavors
  • File Storage Services > Create File System and NFS Export
  • Resources > File Systems
  • Modified CLIs:
    • viprcli filesystem export
    • viprcli filesystem export-rule
    • viprcli filesystem show-exports
Back to Top

File system custom path and modify namespace

Change the path location and "ViPR" namespace.

In previous releases, ViPR Controller created a file system using the hard-coded pattern /ifs/vipr/{virtual pool}/{tenant}/{project}/{file system name}. ViPR Controller stored everything under /ifs/vipr and ingested file systems only from /ifs/vipr and /ifs/sos directory structures.

  • New ViPR Controller configuration:
    • To specify a custom path for provisioning file system resources created by ViPR for Isilon
    • To use a customized namespace for the system access zone of Isilon
    • To ingest a Brownfield environment using a custom-defined path
  • Possible variables (at least one of them should be present in the path):
    • vpool_name
    • project_name
    • tenant_name
    • isilon_cluster_name
  • File system placement with a new customized path

Customers can now configure user-defined directory structures and maintain their company specific directory policy requirements. They can ingest existing file systems from any directory.

ViPR Controller UI Pages and Options Description
Physical > Controller Config > Isilon To specify a custom path for provisioning file system resources, select File System Directory Path. The default path is shown in the first row and is grayed out. Add another line and specify a new value to override the default.
Physical > Controller Config > Isilon To use a customized namespace for the system access zone of Isilon, select System Access Zone Directory. The default namespace is shown in the first row and is grayed out. Add another line and specify a new value to override the default.
Physical > Controller Config > Isilon To ingest a Brownfield environment using a custom path, select Unmanaged File System Locations. The default path is shown in the first row and is grayed out. Add another line and specify a new value to override the default.
Back to Top

Replicate Isilon file system configurations during Failover and Failback operations

Replicate Isilon file system configurations during Failover and Failback using SyncIQ and support the Failback operation through the ViPR Controller UI.

In previous releases, ViPR Controller replicated only the data (using SyncIQ), not the metadata of file systems. In order to cover Disaster Recovery scenarios effectively, the configuration data should also be replicated. Otherwise, the source configuration needs to be manually replicated to the target array.
Note:
This feature is enabled by default; if it is not required, the user can disable it in the ViPR Controller UI.

  • Isilon file system configurations:
    • Isilon file system configurations consist of NFS Exports, Export Rules, CIFS shares and ACLs
    • During a Failover operation:
      • Replicate the source file system NFS export, export rules, CIFS shares and ACLs to the target file system. During the first Failover, the entire configuration is replicated to the target file system, but during subsequent Failovers only the delta is replicated.
    • During a Failback Operation:
      • Replicate the target file system NFS export, export rules, CIFS shares and ACLs to the source file system. Only the delta is replicated.
  • Failback operation through the ViPR Controller UI:
    • In previous releases, the Failback operation was supported only through the ViPR Controller CLI. With this release, a new service has been added: File Protection Services > Failback File System. By default, the target configuration is replicated back to the source, but this can be turned off if required.
  • File Protection Services > Failover File System
  • File Protection Services > Failback File System
  • Modified CLIs:
    • viprcli filesystem failover-replication
    • viprcli filesystem failback-replication
Back to Top

New feature support for file systems

The following new features have been added to ViPR Controller to enhance file system support:

Back to Top

Support for quota directory ingestion

Additional support for quota directory ingestion.

For ingestion completeness on the file side, quota directory ingestion is required.

Quota Directory Ingestion (Brownfield):

  • Added support for VNX and Unity filers

This feature is now available for VNX and Unity.

Back to Top

File system concurrency support for VNX

Support concurrent file system provisioning operations for VNX.

  • Support concurrent file system provisioning operations, including one or multiple arrays (VNX):
    • Create File System and CIFS Share
    • Create File System and NFS Export
    • Remove File System (with CIFS/NFS)
  • File Storage Services > Create File System and CIFS Share
  • File Storage Services > Create File System and NFS Export
  • File Storage Services > Remove File System
Back to Top

File exports - support for host-side operations on a Linux host

Provide host-side operations support for NFS exports -- for example, mount, unmount.

In previous releases, ViPR Controller did not perform any host-side operations (for example, mounting a file share) for the file exports. The customer had to do this manually.

With this release, the admin does not need to do anything manually. ViPR Controller completes the file operation, from creating a file system through exporting it to a host to finally mounting it on the host.

Back to Top

Custom login banner

You can now specify a custom login banner that displays whenever you log in to ViPR Controller.

Back to Top

External CIFS servers for backups

Configure a CIFS/SMB server as an external backup server.

In previous releases, ViPR Controller supported uploading backups through FTP/FTPS only. With this release, customers can upload backups to external CIFS/SMB servers as well.

Back to Top

Syslog Forwarding

This feature allows the forwarding and consolidation of all real-time log events to one or more common, configured, remote Syslog servers, which helps the user analyze the log events. All logs from all ViPR services except Nginx (for example, syssvc, apisvc, dbsvc, and so on) are forwarded in real time to the remote Syslog server after successful configuration. Audit logs are also forwarded.

In previous releases, all ViPR Controller logs were persisted on the ViPR Controller nodes.

Back to Top

Tenant Administrator role for multiple tenants

A non-root user can now have a Tenant Administrator role for multiple tenants.

It is now possible to configure a user or a group of users that can have a Tenant Administrator role for multiple tenants:
  • This user/group of users must match or belong to the provider tenant. However, they do not have to have a Tenant Administrator role in the provider tenant.
  • Users can use this functionality in multi-tenant environments where they need a group of users to perform provisioning operations for multiple tenants and do not want to use the root user for these operations.
  • Tenant Administrator is the only role that has this functionality.
  • When a user or group with rights to multiple tenants logs in to the ViPR Controller UI, they can select the tenant they need in the Tenant drop-down.

Back to Top

EMC® ViPR Controller Plug-in for VMware® vRealize Orchestrator

The EMC ViPR Controller Plug-in for VMware vRealize Orchestrator has been updated to version 3.5.

Back to Top

Service Catalog workflows

Service Catalog workflows call the Service Catalog API in ViPR Controller, as opposed to the REST APIs, which are called in all other workflows. These workflows have been added to ensure that rollback is supported: the Service Catalog API calls support rollback, whereas REST API calls do not trigger rollback. There are three Service Catalog workflows, and each calls the corresponding workflow that is also exposed in the ViPR Controller Service Catalog Portal.

Back to Top

Deprecated workflows

The workflows listed below are deprecated and not recommended for use. These workflows have not been updated since Plugin version 2.1. Instead, you should use the corresponding workflows (with the same name) from the Multiple category (EMC ViPR > Multiple), since those workflows have the enhancements made after Plugin version 2.1.

Note:
The deprecated workflows are marked with a red downward arrow with an exclamation point in the Plugin icon. While these workflows can still be run, they are not recommended because no updates have been made after Plugin version 2.1. For example, instead of using the "Create EMC ViPR Volume" workflow, use the "Create EMC ViPR Volume - Multiple" workflow from the EMC ViPR > Multiple category.

Deprecated Workflows list:

EMC ViPR > General:
  • Create EMC ViPR Volume
  • Create Raw Disk (RDM)
  • Delete EMC ViPR File System
  • Delete EMC ViPR Volume
  • Delete Raw Disk (RDM)
  • Export EMC ViPR Volume
  • Provision EMC ViPR Volume for Hosts
EMC ViPR > vCenter > Hosts/Clusters:
  • Delete Raw Disk (RDM) and EMC ViPR Storage
  • Provision Raw Disk (RDM) with EMC ViPR Storage
  • Provision VMFS Datastore for Cluster with EMC ViPR Storage

Back to Top