New Features and Changes for ViPR Controller 2.4 and 2.4 Service Packs


ViPR Controller 2.4 Service Pack 1 new features and changes

New features and changes introduced in ViPR Controller 2.4 Service Pack 1 are described in the following sections.

Back to Top

Platform support

The following platform support has been added in ViPR Controller 2.4 Service Pack 1.

  • For Isilon storage systems
    • Access Zones are discovered as vNAS in ViPR Controller
    • NFSv4 protocol
    • ACL management for NFSv4
    • Tolerance for OneFS 8.0
  • Tolerance for ONTAP 8.3 for NetApp storage systems
  • Support for eNAS on VMAX storage systems
  • Support for Hitachi Data Systems (HDS) Virtual Storage Platform (VSP) G1000 systems

For complete details about ViPR Controller supported platforms, and the features supported with each platform, refer to the ViPR Controller Support Matrix.

Back to Top

Managing backups from the ViPR Controller UI

The following functionality has been added to ViPR Controller to allow better management of backups that are used for native backup and restore operations.

The pages described in the following sections can be used to manage backups from the ViPR Controller UI.

For details about the native backup and restore functionality, see the EMC ViPR Controller Backup and Disaster Recovery Guide at the ViPR Controller Product Documentation Index. Please note that the guide is based on ViPR Controller 2.4 functionality, and the new functionality for ViPR Controller 2.4 Service Pack 1 is not included in the document.

Back to Top

Backup management from the ViPR Controller UI

The following pages have been added or updated to manage and create point-in-time backups from the ViPR Controller UI.

System > Data Backup page

Use this page to:

  • View the list of both automatically and manually created backups.
  • Upload manually created backups to the FTP site configured in ViPR Controller for ViPR Controller backups.
  • Delete backups from the ViPR Controller.
    Note: This operation does not delete the backup from the FTP location.

  • Access the Add Backup page to create a point-in-time backup of the ViPR Controller nodes.

While working with backups it is helpful to know:

  • Only System Administrators can view and perform the operations on this page.
  • Only backup sets that were successfully created are listed on the Data Backup page. A backup set whose creation failed is not listed on the Data Backup page.
  • By default, ViPR Controller will not generate a backup set when one or more nodes are down or unavailable in a 3-node deployment, or when two or more nodes are down or unavailable in a 5-node deployment. You can, however, choose to override the default and force the creation of the backup set.
  • All backups are automatically uploaded to the FTP site at the time defined in the ViPR Controller backup scheduler. Alternatively, you can upload manually created backups to the FTP site, on demand, from the Data Backup page.
    Note: Automatic creation and upload of backups can be scheduled from the ViPR Controller REST API, CLI, or the UI Settings > General Configuration > Backup page.

  • A backup set can only be uploaded to the FTP site once. Once it has been uploaded, the Upload button is disabled for that backup set. Alternatively, you can manually download or copy backup sets to secondary storage using the ViPR Controller REST API or CLI.
  • When manual backups are zipped and added to the FTP site, the file name is: <backupname>-<total number of nodes in installation>-<number of nodes backed up>.zip (see the parsing sketch following this example).

    For example:

    backupname-3-2.zip
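
A tiny Python sketch, for illustration only and based on the naming convention above, that splits an uploaded backup file name into its parts:

def parse_backup_name(filename):
    # "<backupname>-<total nodes in installation>-<nodes backed up>.zip"
    stem = filename[:-4] if filename.endswith(".zip") else filename
    name, total_nodes, backed_up = stem.rsplit("-", 2)
    return name, int(total_nodes), int(backed_up)

print(parse_backup_name("backupname-3-2.zip"))  # ('backupname', 3, 2)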

Settings > Upgrade page

You can choose to create a point-in-time backup set prior to upgrading ViPR Controller from the Upgrade page.

After upgrading, the backup file you generated prior to upgrade is visible from the System > Data Backup page.

Back to Top

ViPR Controller backup triggers

In addition to creating and uploading automatic backups at the time defined in the ViPR Controller scheduler, the following events also trigger ViPR Controller backup operations.

When the following occurs, it triggers ViPR Controller to check to see if there are backups on the ViPR Controller that haven't been uploaded to the FTP site:

  • Changes to any of the ViPR Controller backup schedule options, which include the schedule enabler, backup time, and external server settings.
  • Changes to the node status such as: node reboot, cluster reboot, or upgrade.

Additionally, if the scheduler detects that the previously scheduled backup failed when one of these triggers occurs, the backup is run again.

Back to Top

Database Consistency Check

The database consistency check can be run to validate that the database is in a consistent state across all ViPR Controller nodes.

This operation can be run at any time; however, it is very resource intensive. It is therefore recommended that you run the database consistency check only prior to upgrading ViPR Controller, or when EMC Customer Support has requested that you run it.

The database consistency check can be run from the ViPR Controller UI System > Upgrade page.

Once the database consistency check is run, the Check DB Consistency Progress page opens. You can monitor the progress, see the results, or cancel the database check from this page.

Back to Top

Ability to create multiple volumes using one Service Catalog order

You no longer need to process multiple orders to create multiple volumes. Using the Block Storage Services > Create Block Volume and the Block Storage Services > Create Block Volume for a Host services, you can create numerous volumes, each with a unique name and size, in one order.
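
As an illustration only, such an order could also be placed programmatically through the Service Catalog REST API. The /catalog/orders endpoint and the X-SDS-AUTH-TOKEN header exist in the ViPR Controller REST API, but the parameter labels below (including the per-volume name/size list) are hypothetical and shown only to sketch the idea of a single multi-volume order.

import json, urllib.request

VIPR = "https://vipr.example.com:4443"   # placeholder virtual IP
TOKEN = "<auth-token>"                   # obtained from a prior login

# Hypothetical per-volume entries: each volume gets its own name and size.
volume_entries = [{"name": "db_vol1", "size": "10GB"},
                  {"name": "db_vol2", "size": "25GB"}]

order = {
    "tenantId": "<tenant urn>",
    "catalog_service": "<urn of the Create Block Volume service>",
    "parameters": [
        {"label": "project", "value": "<project urn>"},
        {"label": "volumes", "value": json.dumps(volume_entries)},  # hypothetical label
    ],
}
req = urllib.request.Request(
    VIPR + "/catalog/orders",
    data=json.dumps(order).encode(),
    headers={"Content-Type": "application/json", "X-SDS-AUTH-TOKEN": TOKEN},
    method="POST")
# urllib.request.urlopen(req)  # uncomment to actually submit the order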

Back to Top

Ability to override the virtual pool parameters during a volume and snapshot export

You have the option to override the Minimum Paths, Maximum Paths, and Paths per Initiator values set on a volume's virtual pool when using a ViPR Controller export operation on a volume or snapshot.

ViPR Controller UI

Options to override the Minimum Paths, Maximum Paths, and Paths per Initiator values set on a volume's virtual pool have been added to these Service Catalog services:

  • Block Storage Services > Create Block Volume for a Host
  • Block Storage Services > Export Volume to a Host
  • Block Storage Services > Export VPLEX Volume
  • Block Protection Services > Export Snapshot to a Host

ViPR Controller REST API

The Create and Update Export APIs are updated to reflect these changes. For details refer to the ViPR Controller REST API Reference which is available from the ViPR Controller Product Documentation Index.

ViPR Controller CLI

The following three optional override arguments have been added to the viprcli exportgroup add_host command:

  • -maxpaths <MaxPaths>, -mxp <MaxPaths>: The maximum number of paths that can be used between a host and a storage volume.
  • -minpaths <MinPaths>, -mnp <MinPaths>: The minimum number of paths that can be used between a host and a storage volume.
  • -pathsperinitiator <PathsPerInitiator>, -ppi <PathsPerInitiator>: The number of paths per initiator.

Command examples:

To create the export group:

viprcli exportgroup create -name host1_eg -pr testproject -varray ciscovarray -type Host -exportdestination host1

To export a volume to a host using overrides:

viprcli exportgroup add_vol -n host3_eg -volume testvolume-1 -pr project1 -mxp 1 -mnp 1 -ppi 1

To export a snapshot to the same host:

viprcli exportgroup add_vol -n host3_eg -snapshot testvolume-1-snap1 -v testvolume-1 -pr project1 -mxp 4 -mnp 2 -ppi 2

For details about using the viprcli exportgroup add_host command, refer to the ViPR Controller CLI Reference Guide, which is available from the ViPR Controller Product Documentation Index.

Back to Top

Support for choosing consistency groups in the Failover Block Volume and Swap Continuous Copies services

In the previous release, you selected a source volume in the Failover Block Volume and Swap Continuous Copies services, and ViPR Controller would derive the corresponding consistency group. You can now select the consistency group or a volume.

The following UI changes have been made in the service dialogs:

  • A new Storage Type field in which you select either Volume or Consistency Group
  • The Volume field has been renamed to Volume / Consistency Group, and will contain the following for your selection:
    • If you selected Volume for the Storage Type, a list of volumes is displayed.
    • If you selected Consistency Group for the Storage Type, a list of consistency groups is displayed.

The figure shows the changes in the Block Protection Services > Failover Block Volume screen. The same fields are contained in the Block Protection Services > Swap Continuous Copies service screen.

Figure: Changes in the Failover Block Volume service screen

The following commands have been added to the ViPR Controller CLI:

viprcli consistencygroup failover
Provides access to the latest image at the remote site. This command is silent on success.

Example:

viprcli consistencygroup failover -name mpSanity-10247103-196-cg -project sanity -tenant standalone -copyvarray varray3 -type rp

viprcli consistencygroup failover_cancel
Cancels the operation started with the viprcli consistencygroup failover command. This command is silent on success.

Example:

viprcli consistencygroup failover_cancel -name mpSanity-10247103-196-cg -project sanity -tenant standalone -copyvarray varray3 -type rp

viprcli consistencygroup swap
Swaps the personalities of the source and target. The source becomes the target and the target becomes the source. This command is silent on success.

Example:

viprcli consistencygroup swap -name mpSanity-10247103-196-cg -project sanity -tenant standalone -copyvarray varray3 -type rp
Back to Top

ViPR Controller support for removing RecoverPoint protection from volumes

The Remove RecoverPoint Protection operation has been added to the Block Storage Services > Change Virtual Pool and Block Storage Services > Change Volume Virtual Pool services.

The Remove RecoverPoint Protection operation removes RecoverPoint protection from volumes. The source volumes remain intact, but the target volumes are deleted. In addition, if these volumes are the last volumes in the consistency group, the journal volumes are also deleted.

The target virtual pool must be identical to the original virtual pool, with the exception that the target virtual pool does not include RecoverPoint protection.

You can remove RecoverPoint protection from VPLEX source volumes only if those volumes do not have any snapshots.

Note: If you have run the Block Protection Services > Swap Continuous Copies service to make the RecoverPoint target become the source, you cannot remove the RecoverPoint protection until you run the Swap Continuous Copies service again to reverse the personalities of the source and target virtual pools.

Configuration requirements to use ViPR Controller to remove RecoverPoint protection from volumes

The following volume configurations have these requirements for removing protection from volumes:

RecoverPoint protection (CDP, CRR, CLR)
  • Cannot be in a swapped state.
  • Target virtual pool must be identical to the original virtual pool.

RecoverPoint protection with VPLEX (CDP, CRR, CLR)
  • Cannot be in a swapped state.
  • All snapshots must be deleted.
  • Target virtual pool must be identical to the original virtual pool.

MetroPoint protection (CDP, CRR)
  • Cannot be in a swapped state.
  • All snapshots must be deleted.
  • Target virtual pool must be identical to the original virtual pool except for RecoverPoint protection.

Back to Top

ViPR Controller support for RecoverPoint with VPLEX and OpenStack

ViPR Controller can now be used in environments where RecoverPoint is running with VPLEX and OpenStack when VPLEX supported storage systems are being managed by OpenStack.

This is supported for both CRR and MetroPoint volumes.

Back to Top

ViPR Controller support for Vblock Systems

New features have been added to the ViPR Controller UI and ViPR Controller CLI to improve Vblock System support.

ViPR Controller UI updates

The following operations can be configured from the ViPR Controller UI:

  • Compute image servers can be viewed, added, edited, and deleted from the Physical Assets > Compute Image Servers page.
  • Compute image servers can be associated with a compute system from the Physical Assets > Compute systems > Add Compute Systems page or the Physical Assets > Compute systems > Edit Compute Systems page.

ViPR Controller CLI updates

The -name option has been added to the following two CLI commands to allow you to add or edit the compute image server name using the ViPR Controller CLI:

  • viprcli computeimageserver create
    viprcli computeimageserver create -name lgly5185 -imageserverip 10.247.85.185 -imageserversecondip 12.0.51.7 -imageimporttimeout 1500 -tftpbootdir /opt/tftpboot/ -user root -itm 3600 -sshtimeout 20
  • viprcli computeimageserver update
    viprcli computeimageserver update -name lgly5185 -imageserverip 10.247.85.185 -imageserversecondip 12.0.51.7 -imageimporttimeout 1500 -tftpbootdir /opt/tftpboot/ -user root -itm 3600 -sshtimeout 25

Back to Top

Ability to create a VPLEX volume from a block snapshot

ViPR Controller allows a user to create a block snapshot of a VPLEX virtual volume using the Create Block Snapshot service.

This operation results in the creation of a native snapshot of the source-side backend storage volume used by the VPLEX virtual volume. The user may then subsequently export that block snapshot using the Export Block Snapshot service. However, this requires that the host/cluster to which it is to be exported have direct connectivity to this backend storage system.

In this release, ViPR Controller has added a new catalog service and API that allow a user to expose the block snapshot as a VPLEX virtual volume. The Create VPLEX Volume from Block Snapshot service allows the user to create a VPLEX virtual volume from a block snapshot. The user may then export this VPLEX virtual volume using the Export Block Volume service. In this way, the user can export the snapshot to the host/cluster through the VPLEX rather than through the backend storage array. The Create VPLEX Volume from Block Snapshot service is located in the ViPR Controller UI Service Catalog > Block Protection Services.

Be aware of the following while using ViPR Controller to perform this operation:

  • The user can only create a VPLEX volume from a block snapshot when the source volume for the block snapshot is a VPLEX backend volume. In other words, the block snapshot must be a snapshot of a VPLEX volume.
  • The VPLEX virtual volume so created can only be exported, unexported, and deleted. No other ViPR Controller services are supported on the VPLEX virtual volume.
  • If a VPLEX volume is created from a block snapshot, the VPLEX volume must be deleted prior to deleting the block snapshot.
  • If the block snapshot is of a VPLEX local virtual volume, then using the new service to create a VPLEX virtual volume from this snapshot will result in a local virtual volume. If the block snapshot is of a VPLEX distributed virtual volume, then using this new feature to create a VPLEX volume from the snapshot will result in a VPLEX distributed virtual volume.
  • Note: When a snapshot is created of a VPLEX distributed volume, ViPR Controller will only create a native block snapshot of the source-side backend volume. The source-side backend volume is the backend volume in the same virtual array as the VPLEX virtual volume, that is, the virtual array specified when the VPLEX volume was created.

The following commands have been added to the ViPR Controller REST API and CLI to allow you to export a VPLEX block snapshot through the VPLEX as a VPLEX virtual volume:

Back to Top

viprcli snapshot import-to-vplex

Import a VPLEX snapshot into VPLEX as a VPLEX volume.

viprcli snapshot import-to-vplex

The viprcli snapshot import-to-vplex command imports a VPLEX snapshot into VPLEX as a VPLEX volume. The command is silent on completion.

  • -name | -n: The name of a valid ViPR Controller snapshot. This is a mandatory parameter.
  • -project | -pr: The name of a valid ViPR Controller project. This is a mandatory parameter.
  • -tenant | -tn: The name of a tenant. If a tenant name is not specified, the default parent tenant is used. This is an optional parameter.
  • -volume | -vol: The name of a valid volume in ViPR Controller. This is an optional parameter.
  • Common arguments: This operation also takes the standard viprcli common arguments.

# viprcli snapshot import-to-vplex -name "KrisVPLEXSnap-1" -project "KrisViPRCG" -volume "KrisVPLEX-2"

Back to Top

ViPR Controller REST APIs and CLI commands for managing VPLEX data migration

New APIs have been implemented in the ViPR Controller REST API, as well as new CLI commands, that allow you to pause, resume, cancel, delete, show details, and list VPLEX data migrations.

These REST API requests and CLI commands only apply to the VPLEX Data Migration operation of the following catalog services:

  • Block Storage Services > Change Volume Virtual Pool
  • Block Storage Services > Change Virtual Pool
  • Migration Services > Data migration service using VPLEX
Example: Cancel a migration using the CLI
viprcli volume migration-cancel -id urn:storageos:Migration:03416849-daf8-4ff1-bee2-5fbbc4a102b1:vdc1
{
   "associated_resources": [],
   "creation_time": 1449175021173,
   "description": "cancel migration",
   "global": false,
   "id": "urn:storageos:Task:380eaab8-109b-4bd0-81b5-c7137766bc9b:vdc1",
   "inactive": false,
   "internal": false,
   "link": {
      "href": "/vdc/tasks/urn:storageos:Task:380eaab8-109b-4bd0-81b5-c7137766bc9b:vdc1",
      "rel": "self"
   },
   "name": "CANCEL MIGRATION",
   "op_id": "62fe896b-829b-43ed-b14d-870325f42adb",
   "progress": 0,
   "remote": false,
   "resource": {
      "id": "urn:storageos:Volume:8768ec55-8fa9-45e8-a334-a8a01f601026:vdc1",
      "link": {
         "href": "/block/volumes/urn:storageos:Volume:8768ec55-8fa9-45e8-a334-a8a01f601026:vdc1",
         "rel": "self"
      },
      "name": "KRISFS"
   },
   "start_time": 1449175021172,
   "state": "pending",
   "tags": [],
   "tenant": {
      "id": "urn:storageos:TenantOrg:6acd8ff4-09ed-40ba-9ad2-0cff4e6824f7:global",
      "link": {
         "href": "/tenants/urn:storageos:TenantOrg:6acd8ff4-09ed-40ba-9ad2-0cff4e6824f7:global",
         "rel": "self"
      }
   },
   "vdc": {
      "id": "urn:storageos:VirtualDataCenter:18f249eb-c695-4780-807d-1afdf1820a6f:vdc1",
      "link": {
         "href": "/vdc/urn:storageos:VirtualDataCenter:18f249eb-c695-4780-807d-1afdf1820a6f:vdc1",
         "rel": "self"
      }
   }
}
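Each of these migration operations returns a ViPR Controller task like the one above. The following Python sketch polls the /vdc/tasks/{id} link shown in the task output until the task leaves the pending state; the auth token handling and the terminal state names are assumptions.

import json, time, urllib.request

VIPR = "https://vipr.example.com:4443"
TOKEN = "<auth-token>"

def get_task(task_id):
    req = urllib.request.Request(
        VIPR + "/vdc/tasks/" + task_id,
        headers={"Accept": "application/json", "X-SDS-AUTH-TOKEN": TOKEN})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def wait_for_task(task_id, interval=10):
    while True:
        task = get_task(task_id)
        if task.get("state") != "pending":
            return task   # a success or error state, depending on the outcome
        time.sleep(interval)

# Example, using the task id from the cancel-migration response above:
# wait_for_task("urn:storageos:Task:380eaab8-109b-4bd0-81b5-c7137766bc9b:vdc1")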
Example: Pause a migration using the CLI
viprcli volume migration-pause -id "urn:storageos:Migration:df0e8ff2-92d4-4cfc-adb8-db229a337d6c:vdc1"
{
   "associated_resources": [],
   "creation_time": 1447472705345,
   "description": "puase migration",
   "global": false,
   "id": "urn:storageos:Task:2cb914c9-9ec1-4686-ae62-ec8ac9a74015:vdc1",
   "inactive": false,
   "internal": false,
   "link": {
      "href": "/vdc/tasks/urn:storageos:Task:2cb914c9-9ec1-4686-ae62-ec8ac9a74015:vdc1",
      "rel": "self"
   },
   "name": "PAUSE MIGRATION",
   "op_id": "d6f5d107-a427-4ea1-b9f0-c7928b01699d",
   "progress": 0,
   "remote": false,
   "resource": {
      "id": "urn:storageos:Volume:2b5cb82d-725c-4786-8c89-a679df099c38:vdc1",
      "link": {
         "href": "/block/volumes/urn:storageos:Volume:2b5cb82d-725c-4786-8c89-a679df099c38:vdc1",
         "rel": "self"
      },
      "name": "XIOBLK"
   },
   "start_time": 1447472705344,
   "state": "pending",
   "tags": [],
   "tenant": {
      "id": "urn:storageos:TenantOrg:6acd8ff4-09ed-40ba-9ad2-0cff4e6824f7:global",
      "link": {
         "href": "/tenants/urn:storageos:TenantOrg:6acd8ff4-09ed-40ba-9ad2-0cff4e6824f7:global",
         "rel": "self"
      }
   },
   "vdc": {
      "id": "urn:storageos:VirtualDataCenter:18f249eb-c695-4780-807d-1afdf1820a6f:vdc1",
      "link": {
         "href": "/vdc/urn:storageos:VirtualDataCenter:18f249eb-c695-4780-807d-1afdf1820a6f:vdc1",
         "rel": "self"
      }
   }
}
Example: Resume a paused migration using the CLI
viprcli volume migration-resume -id urn:storageos:Migration:03416849-daf8-4ff1-bee2-5fbbc4a102b1:vdc1
{
   "associated_resources": [],
   "creation_time": 1449174913436,
   "description": "resume migration",
   "global": false,
   "id": "urn:storageos:Task:1b33c4ed-a276-46d9-9662-3f6a3e91b6c6:vdc1",
   "inactive": false,
   "internal": false,
   "link": {
      "href": "/vdc/tasks/urn:storageos:Task:1b33c4ed-a276-46d9-9662-3f6a3e91b6c6:vdc1",
      "rel": "self"
   },
   "name": "RESUME MIGRATION",
   "op_id": "d692afac-879d-4e45-a187-51bc8d36facc",
   "progress": 0,
   "remote": false,
   "resource": {
      "id": "urn:storageos:Volume:8768ec55-8fa9-45e8-a334-a8a01f601026:vdc1",
      "link": {
         "href": "/block/volumes/urn:storageos:Volume:8768ec55-8fa9-45e8-a334-a8a01f601026:vdc1",
         "rel": "self"
      },
      "name": "KRISFS"
   },
   "start_time": 1449174913435,
   "state": "pending",
   "tags": [],
   "tenant": {
      "id": "urn:storageos:TenantOrg:6acd8ff4-09ed-40ba-9ad2-0cff4e6824f7:global",
      "link": {
         "href": "/tenants/urn:storageos:TenantOrg:6acd8ff4-09ed-40ba-9ad2-0cff4e6824f7:global",
         "rel": "self"
      }
   },
   "vdc": {
      "id": "urn:storageos:VirtualDataCenter:18f249eb-c695-4780-807d-1afdf1820a6f:vdc1",
      "link": {
         "href": "/vdc/urn:storageos:VirtualDataCenter:18f249eb-c695-4780-807d-1afdf1820a6f:vdc1",
         "rel": "self"
      }
   }
}
Example: Delete a completed or cancelled migration using the CLI
viprcli volume migration-deactivate -id urn:storageos:Migration:7a9e794b-1eb2-497d-818a-c76a244a4288:vdc1
{
   "associated_resources": [],
   "creation_time": 1449152326802,
   "description": "delete migration",
   "global": false,
   "id": "urn:storageos:Task:f57f737d-bdb9-4f71-924d-0d02541c2646:vdc1",
   "inactive": false,
   "internal": false,
   "link": {
      "href": "/vdc/tasks/urn:storageos:Task:f57f737d-bdb9-4f71-924d-0d02541c2646:vdc1",
      "rel": "self"
   },
   "name": "DELETE MIGRATION",
   "op_id": "27ae6831-b3a6-4e0c-ad0a-b63b4d8ee6ef",
   "progress": 0,
   "remote": false,
   "resource": {
      "id": "urn:storageos:Volume:8768ec55-8fa9-45e8-a334-a8a01f601026:vdc1",
      "link": {
         "href": "/block/volumes/urn:storageos:Volume:8768ec55-8fa9-45e8-a334-a8a01f601026:vdc1",
         "rel": "self"
      },
      "name": "KRISFS"
   },
   "start_time": 1449152326800,
   "state": "pending",
   "tags": [],
   "tenant": {
      "id": "urn:storageos:TenantOrg:6acd8ff4-09ed-40ba-9ad2-0cff4e6824f7:global",
      "link": {
         "href": "/tenants/urn:storageos:TenantOrg:6acd8ff4-09ed-40ba-9ad2-0cff4e6824f7:global",
         "rel": "self"
      }
   },
   "vdc": {
      "id": "urn:storageos:VirtualDataCenter:18f249eb-c695-4780-807d-1afdf1820a6f:vdc1",
      "link": {
         "href": "/vdc/urn:storageos:VirtualDataCenter:18f249eb-c695-4780-807d-1afdf1820a6f:vdc1",
         "rel": "self"
      }
   }
}
Example: List all migrations using the CLI
viprcli volume migration-list
{
   "migration": [
      {
         "id": "urn:storageos:Migration:73f64940-21bd-4e4d-9824-fdfc2b8cebb1:vdc1",
         "link": {
            "href": "/block/migrations/urn:storageos:Migration:73f64940-21bd-4e4d-9824-fdfc2b8cebb1:vdc1",
            "rel": "self"
         },
         "name": "M_151114-032854-120"
      },
      {
         "id": "urn:storageos:Migration:9a7c4271-d59c-40be-aab9-9af546f00348:vdc1",
         "link": {
            "href": "/block/migrations/urn:storageos:Migration:9a7c4271-d59c-40be-aab9-9af546f00348:vdc1",
            "rel": "self"
         },
         "name": "M_151114-030339-859"
      },
      {
         "id": "urn:storageos:Migration:f92eb130-abdf-4df4-86cc-f21478ca7e75:vdc1",
         "link": {
            "href": "/block/migrations/urn:storageos:Migration:f92eb130-abdf-4df4-86cc-f21478ca7e75:vdc1",
            "rel": "self"
         },
         "name": "M_151114-031418-805"
      }
   ]
}
Example: Show the details of a migration using the CLI
viprcli volume migration-show -id "urn:storageos:Migration:f92eb130-abdf-4df4-86cc-f21478ca7e75:vdc1"
{
   "global": null,
   "percent_done": "100",
   "remote": null,
   "source": {
      "id": "urn:storageos:Volume:32410086-b791-4cd9-83be-0d5f5050ec07:vdc1",
      "link": {
         "href": "/block/volumes/urn:storageos:Volume:32410086-b791-4cd9-83be-0d5f5050ec07:vdc1",
         "rel": "self"
      }
   },
   "start_time": "Sat Nov 14 02:57:55 UTC 2015",
   "status": "committed",
   "tags": [],
   "target": {
      "id": "urn:storageos:Volume:925a5096-a010-4c4b-ab84-08cac5d4e94e:vdc1",
      "link": {
         "href": "/block/volumes/urn:storageos:Volume:925a5096-a010-4c4b-ab84-08cac5d4e94e:vdc1",
         "rel": "self"
      }
   },
   "vdc": null,
   "volume": {
      "id": "urn:storageos:Volume:2b5cb82d-725c-4786-8c89-a679df099c38:vdc1",
      "link": {
         "href": "/block/volumes/urn:storageos:Volume:2b5cb82d-725c-4786-8c89-a679df099c38:vdc1",
         "rel": "self"
      }
   }
}
Back to Top

VMAX3 volume expansion

ViPR Controller now supports the native expansion of VMAX3 volumes, including those that are backend volumes for VPLEX.

Previously, ViPR Controller could only expand VMAX3 backend volumes for VPLEX through migration, that is, by creating a new volume. Now you can use the ViPR Controller Expand Block Volume service to expand VMAX3 volumes.

ViPR Controller cannot expand VMAX3 volumes that are in a replication relationship, including mirrors, clones, and snapshots.

Although ViPR Controller does not support the native expansion of volumes in an SRDF relationship, it does support this workflow to expand VMAX3 volumes:

SRDF links broken -> volume natively expanded -> SRDF links re-established

VMAX3 expansion requires SMI-S version 8.1.0.7. For complete details see the ViPR Controller Support Matrix.

Back to Top

Isilon access zones: discovery and file system placement

Isilon access zones (AZ) are virtual containers partitioned from an Isilon cluster that enable you to segregate data into separate, self-contained units, each with its own set of authentication providers, user mapping rules, and SMB shares/NFS exports. ViPR Controller can now discover access zones and ingest them as vNAS servers, and ingest their SmartConnect zones as storage ports. You can assign these vNAS servers to a project. Users of that project can then provision file systems using these assigned vNAS servers.

File system placement

ViPR Controller uses performance metrics and calculations when evaluating vNAS servers for file system placement. For access zones, this pertains to vNAS servers with static workloads. ViPR Controller collects the number of storage objects, such as file systems and snapshots, and their capacity. The performance statistics of a vNAS server are then calculated as the aggregate performance of its network interfaces. When placing a file system, ViPR Controller performs the following steps (a simplified sketch in code follows the list):

  1. Uses FileShareScheduler>getRecommendationForPools to retrieve a list of storage pools from the virtual pool recommendation. If there are no recommended storage pools, a placement error occurs.
  2. If a project in the file system placement request has associated vNAS servers, retrieves all vNAS servers for that project in the virtual array.
  3. Filters out the vNAS servers that have reached maximum resources or capacity.
  4. If step 3 results in an empty vNAS list, or the project in the request does not have any assigned vNAS servers, retrieves the unassigned vNAS servers and the System access zone.
  5. Filters out the vNAS servers that have reached maximum resources or capacity. If this results in an empty list, generates an error stating that the vNAS servers and System access zone have reached their maximum limits.
  6. Chooses the vNAS servers that overlap with the storage pools recommended in step 1. If no such vNAS servers exist, fails with a placement error.
  7. Based on least load and performance factors, places the file system on a qualified vNAS server.
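
The following is a simplified Python sketch of this placement flow. The data structures and helper functions (is_overloaded, load_of, the "pools" set on each vNAS entry) are illustrative assumptions, not ViPR Controller internals.

def place_file_system(recommended_pools, project_vnas, unassigned_vnas,
                      is_overloaded, load_of):
    # recommended_pools: pool names from the virtual pool recommendation (step 1)
    # project_vnas / unassigned_vnas: lists of dicts, each with a "pools" set
    # is_overloaded(v): True when a vNAS has hit its resource or capacity limits
    # load_of(v): combined static/dynamic load metric used for the final selection
    if not recommended_pools:
        raise RuntimeError("Placement error: no recommended storage pools")

    # Steps 2-3: prefer vNAS servers assigned to the project, minus overloaded ones.
    candidates = [v for v in project_vnas if not is_overloaded(v)]

    # Steps 4-5: fall back to unassigned vNAS servers and the System access zone.
    if not candidates:
        candidates = [v for v in unassigned_vnas if not is_overloaded(v)]
        if not candidates:
            raise RuntimeError(
                "vNAS and System access zone have reached the maximum limits")

    # Step 6: keep only vNAS servers whose pools overlap the recommended pools.
    candidates = [v for v in candidates if v["pools"] & set(recommended_pools)]
    if not candidates:
        raise RuntimeError("Placement error: no vNAS overlaps the recommended pools")

    # Step 7: place on the least-loaded qualified vNAS server.
    return min(candidates, key=load_of)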

Discover vNAS servers

Before running discovery on Isilon arrays, verify the following:

  • Authentication providers are configured.
  • Valid SmartConnect zones are associated with access zones.

When you add a storage system of type Isilon on the Physical Assets > Storage Systems page, ViPR Controller discovers and registers its vNAS servers and their attributes, such as SmartConnect zones and base directory.

Associate a vNAS server to a project

Before associating a vNAS server to a project, verify the following:

  • The vNAS server and project are in the same domain.
  • The vNAS server is not tagged or associated with another project.
  • The vNAS server does not have file systems that belong to a different project.

To associate a vNAS server to a project, go to Physical Assets, select the name of the Isilon storage system, and click the vNAS button. Select the vNAS server and complete the page.

View performance metrics

You can view details about these vNAS servers, including performance metrics, on the Resources > vNAS Servers page or by clicking the vNAS button next to any Isilon array on the Storage Systems page.

Disable provisioning of file systems through physical NAS

By default, the ViPR Controller file system placement algorithm provisions file systems through physical NAS, which is the System access zone on Isilon systems and the physical data mover on VNX file systems, if no vNAS reservation is found or if an existing reservation is overloaded according to its static or dynamic load.

To disable provisioning of file systems through physical NAS, go to the Physical Assets > Controller Config > NAS tab and deselect Use Physical NAS For Provisioning from the drop-down menu.

Troubleshoot

To investigate issues related to discovery, project association, and file system placement of access zones, view the Controllersvc.log and APIsvc.log files. These are examples of errors in these log files:

  • Removing vNAS <vNas> as it is overloaded
  • Removing <vNAS> as it does not support vpool protocols

Back to Top

Support for Isilon Storage Systems with NFSv4 protocol and NFSv4 ACLs

ViPR Controller enhancements to support operations on Isilon storage systems include:

  • Discovery of Isilon storage systems that have NFSv4 protocol support.
  • Creation of file virtual pools with NFSv4 enabled.
  • Provisioning of NFS file systems with NFSv4 as a supported protocol.
  • Management of Access Control Lists (ACLs) on NFS file systems.

Back to Top

Manage Access Control Entries with NFSv4 file systems

When Isilon storage is configured with the NFSv4 protocol, you can use ViPR Controller to perform the following actions on a file system, or on a subdirectory beneath the file system, using the ViPR Controller UI, REST API, or CLI.

  • Add Access Control Entries (ACEs) to NFSv4 file systems
  • View, edit, or delete ACEs
Note: These operations can only be performed when the ACE was added to an NFSv4 file system using ViPR Controller.

Refer to the following sections for the steps, calls, or commands to perform these actions:

Requirements when using ViPR Controller to manage ACEs with NFSv4 file systems

Be aware of the following before adding access controls to a file system or sub directory configured with NFSv4 protocol:

  • The ViPR Controller user must be able to access the mount path on which the access control will be added.
  • The user or group can be a domain or local user that has been configured on the Isilon storage system.
  • If multiple Access Control Entries (ACEs) are entered on a file system for the same user or group, the lowest assigned permission is used.
  • If ACEs with the same user or group are assigned to both a file system and a file system subdirectory, the lowest assigned permission is used.
  • ViPR Controller does not create subdirectories. You must know the name of a subdirectory that already exists within the file system on the Isilon storage system, and enter it exactly as it appears on the Isilon storage system.

Back to Top

Use ViPR Controller UI to add Access Controls to NFSv4 file systems

Use the following steps to add access control entries (ACEs) to NFSv4 file systems using the ViPR Controller UI.

See the ViPR Controller UI Online Help for more details.

Procedure
  1. Go to Resources > File Systems page and select the file system on which you will be adding the ACE.
  2. Expand NFS Access Controls, and click Add Access Controls.
  3. Enter the information for the ACE, click Add, and repeat for each ACE you want to add to the list.
  4. Click Save.
Back to Top

Use ViPR Controller UI to view, edit, or delete ACEs for NFSv4 file systems

Use ViPR Controller to view, edit, and delete Access Control Entries (ACEs) when the ACE was added to a file system, or to a subdirectory, from ViPR Controller.

See the ViPR Controller UI Online Help for more details.

Procedure
  1. Go to the Resources > File Systems page.
  2. Expand NFS Access Controls to see if any ACEs have been configured for the file system.
    Note: If the area is empty, no ACEs have been configured for the file system or file system subdirectories.

  3. Click Access Controls to see the list of ACEs configured for the selected file system, or file system subdirectory.
  4. Select the check box next to the ACE, and click Delete to delete the ACE, or click on an ACE to edit the type of permission set (Allow, Deny), or to edit which permissions were set on the file system or sub directory.
Back to Top

ViPR Controller REST API Calls to manage ACLs for NFSv4 file systems

The following calls have been added to the ViPR Controller REST API to manage Access Control Lists (ACLs) on NFSv4 file systems when the ACL was added from the ViPR Controller.

New ViPR Controller REST API calls

  • GET /file/filesystems/{id}/acl: Gets the ACLs for the NFS file system.
  • PUT /file/filesystems/{id}/acl: Updates the ACLs for the NFS file system.
  • DELETE /file/filesystems/{id}/acl: Deletes all ACLs for the NFS file system.

For details refer to the EMC ViPR Controller REST API Reference which is available from the ViPR Controller Product Documentation Index.
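
For illustration, the GET call can be issued with any HTTP client. This minimal Python sketch assumes a valid X-SDS-AUTH-TOKEN and a file system URN; the endpoint path is the one documented above.

import urllib.request

VIPR = "https://vipr.example.com:4443"
TOKEN = "<auth-token>"
FS_ID = "urn:storageos:FileShare:<id>:vdc1"   # file system URN (placeholder)

req = urllib.request.Request(
    VIPR + "/file/filesystems/" + FS_ID + "/acl",
    headers={"Accept": "application/json", "X-SDS-AUTH-TOKEN": TOKEN})
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())  # JSON list of the ACEs set through ViPR Controller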

Back to Top

ViPR Controller CLI commands to manage ACLs for NFSv4 file systems

The following new commands were added to the ViPR Controller CLI to support management of Access Control Lists that were added to NFSv4 file systems from the ViPR Controller.

For command details refer to the ViPR Controller CLI Reference Guide which is available from the ViPR Controller Product Documentation Index.

NFSv4 protocol type

The NFSv4 protocol type was added to the following commands:

  • viprcli vpool create: Allows you to set the protocol type to NFSv4 on a file virtual pool.
  • viprcli filesystem export: Allows you to set the protocol type to NFSv4 when creating file system exports.

Commands added to viprcli filesystem nfs-acl and viprcli snapshot share-acl

The following operations have been added to the viprcli filesystem nfs-acl and viprcli snapshot share-acl commands:

  • List ACL: Lists the ACLs for the NFS file system or file system snapshot.
  • Add ACL: Adds an ACL and permissions to an NFSv4 file system or file system snapshot.
  • Update ACL: Edits the ACL and permissions assigned to an NFSv4 file system or file system snapshot.
  • Delete: Deletes all ACLs for the NFS file system.

Example: List the configured NFS ACLs on all subdirectories of a file system, or on a snapshot share:

viprcli filesystem nfs-list-acl -name CLI-FS-NFSv4-1 -project cli-nfsv4 -alldir
DOMAIN USER PERMISSIONS PERMISSION_TYPE TYPE
provisioning.bourne.local jai Write,Execute allow user
viprcli snapshot list-acl -fsname netcproshare1 -sname snap-netcproshare1 -project netpro -share snapcifsnetpro1
ERRORTYPE SNAPSHOT_ID PERMISSION SHARE_NAME USER GROUP
None urn:storageos:Snapshot:857fa072-adfa-4a5f-b2b5-511639723a96:vdc1 Read snapcifsnetpro1 testfile

Example: Add and set up an NFSv4 ACL on a subdirectory of a file system, or on a snapshot share:

viprcli filesystem nfs-acl -name CLI-FS-NFSv4-1 -operation add -permissions Write,Execute -pr cli-nfsv4 -user jai -domain provisioning.bourne.local -type user -permissiontype allow -subdirectory W-E
viprcli snapshot share-acl -fsname netcproshare1 -sname snap-netcproshare1 -share snapcifsnetpro1 -operation add -project netpro -user testfile -permission Read

Example: Delete all NFSv4 ACLs from a subdirectory of a file system, or from a snapshot share:

viprcli filesystem nfs-delete-acl -name CLI-FS-NFSv4-1 -project cli-nfsv4 -subdirectory W-E
viprcli snapshot delete-acl -fsname netcproshare1 -sname snap-netcproshare1 -share snapcifsnetpro1 -project netpro

Back to Top

VMAX3 eNAS support

ViPR Controller can now detect, discover, and provision file solutions on VMAX3 arrays configured with eNAS (embedded network attached storage).

Discover VMAX3 eNAS

As part of detecting eNAS on VMAX3, you follow steps similar to those for discovering EMC VNX File storage in ViPR Controller. When you add a storage system of type EMC VNX File, ViPR Controller discovers eNAS file storage using the IP address of the eNAS software control station host.

To do this, go to Physical Assets > Storage Systems and click Add. When completing the Add Storage System page, make sure you do the following:
  • In Type, select EMC VNX File.
  • In Control Station IP, type the address of the eNAS software control station host.
  • In Storage Provider Host in the Onboard Storage Provider area, type the IP address of the eNAS software SMI-S Provider host, which is the same IP address as the eNAS software control station host.

After discovery, you must do the following to enable file provisioning for eNAS in ViPR Controller:

  • Create associated virtual arrays.
  • Add IP networks for the virtual arrays.
  • Create virtual pools.

These procedures are explained in ViPR Controller User Interface Virtual Data Center Configuration Guide.

Note: ViPR Controller does not list thin pools on eNAS storage even when a thin LUN is set on the virtual pool. ViPR Controller only lists these thin pools as thick.

Troubleshoot connectivity

If eNAS discovery fails showing an "Unable to connect" error, verify that you typed the correct IP address of the eNAS software control station host on the Add Storage System page.

Back to Top

New VM sizing guidelines for arrays accessed by ViPR Controller through Cinder

Guidelines have been developed for VM sizing of Cinder southbound arrays.

These guidelines assume:

  • The Cinder node is deployed as a separate Virtual Machine (VM).
  • The Cinder node is available exclusively to ViPR Controller.

The number of storage arrays and the volume size do not impact the Cinder VM resource configuration.

Back to Top

ViPR Controller support for creating a volume from a snapshot in VPLEX + Cinder configurations

Enhancements have been made to the POST /block/snapshots/{id}/protection/full-copies API.

In previous releases, a snapshot full copy was created as a volume for non-VPLEX storage systems, using the POST /block/snapshots/{id}/protection/full-copies REST API method. This method has been enhanced to include creating snapshot full copies as volumes for VPLEX + Cinder configurations where the VPLEX backend storage system supports snapshot full copies, such as VNX and NetApp.

The POST /block/snapshots/{id}/protection/full-copies request payload and response have not changed.
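
For illustration, a full copy could be requested directly against this API as sketched below. The endpoint is the one documented above; the auth token handling and the request body fields (name, count) are assumptions about the unchanged payload.

import json, urllib.request

VIPR = "https://vipr.example.com:4443"
TOKEN = "<auth-token>"
SNAP_ID = "urn:storageos:BlockSnapshot:<id>:vdc1"   # placeholder snapshot URN

body = json.dumps({"name": "snap-copy-1", "count": 1}).encode()  # assumed fields
req = urllib.request.Request(
    VIPR + "/block/snapshots/" + SNAP_ID + "/protection/full-copies",
    data=body,
    headers={"Content-Type": "application/json", "X-SDS-AUTH-TOKEN": TOKEN},
    method="POST")
# urllib.request.urlopen(req)  # returns tasks that can be polled for completion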

Back to Top

Support for changing the SLO on an existing backend volume for VPLEX

Previously, executing the VPLEX Data Migration operation in either the Change Volume Virtual Pool or the Change Virtual Pool catalog service created a new backend VMAX volume. This was true even when the intent was only to change the service level objective (SLO) on the backend VMAX volume involved in the VPLEX migration.

Now ViPR Controller enables the existing VMAX volumes to inherit the auto-tiering policy and Host IO Limits settings of the target virtual pool without creating a new device. This is accomplished using the Change auto-tiering Policy or Host IO Limits operation, selected from the drop-down, in the Change Volume Virtual Pool and Change Virtual Pool catalog services.

  • If you are changing SLOs for all VMAX3 backend volumes for VPLEX volumes within the masking view, the volumes need not be in a parent/child relationship; a flat storage group is acceptable.
  • For VMAX2, it does not matter whether it is a cascaded storage group (SG) or a child SG; the policy change is requested for all volumes in an SG. If there are phantom SGs (SGs that are non-FAST and non-cascaded), this restriction is not applicable.

Restrictions:

  • If you are changing the SLOs of only a subset of the VMAX3 backend volumes for VPLEX volumes within the masking view, then the VMAX3 backend volumes for VPLEX must all be contained in a child storage group under a parent SG (cascaded). The parent SG should be associated with a Masking View (MV).
Note: If the above restrictions are not followed, an error similar to the following is encountered:
Error 12000: An error occurred while executing the job,
Op: updateStorageGroupPolicyAndLimits with message None of
the Storage Groups on ExportMask BE_Vplex242vmax3_1035_MV1
is updated with new FAST policy or Host IO Limits. Because
the given Volume list is not same as the one in Storage
Group (or) any of the criteria for 'moveMembers' didn't
meet in case of VMAX3 volumes. Please check log for more details.
	 

In this example, the VPLEX Backend (BE) masking view is a shared masking view:

  1. User1 creates 2 Bronze SLO VPLEX virtual volumes, which are added to the existing BE Bronze Cascade MV1/Cascade SG1.
  2. User2 creates 3 Bronze SLO VPLEX virtual volumes, which are added to the existing BE Bronze Cascade MV1/Cascade SG1.
  3. User1 tries to change the SLO from Bronze to Gold. The order fails and generates Error 12000, because the Bronze BE SG1 is shared among all users and User1 is trying to change the SLO of only a subset of the VPLEX virtual volumes in the Bronze SG, instead of all of the volumes in the SG.
Back to Top

FAST.X Support for VMAX3

ViPR Controller supports FAST.X, which allows you to connect an EMC XtremIO array to the backend of a VMAX3.

When ViPR Controller discovers the VMAX3, the XtremIO is displayed as an SRP or an SLO tier. All provisioning operations are done through the VMAX3.

Back to Top

Service pack release of ViPR Controller Plug-in for vRealize Orchestrator

ViPR Controller Plug-in 2.4 Service Pack 1 for vRealize Orchestrator is available with the ViPR Controller 2.4 Service Pack 1 release.

For details refer to the ViPR Controller Plug-in 2.4.1.0 for VMware vRealize Orchestrator-Readme which is available from the ViPR Controller Product Documentation Index.

Back to Top

ViPR Controller 2.4.0 new features and changes

New features and changes introduced in ViPR Controller 2.4.0 are described in the following sections.

Refer to the ViPR Controller documentation for requirements and detailed information about using the ViPR Controller features. All ViPR Controller documentation can be accessed from the ViPR Controller Product Documentation Index.

Back to Top

ViPR Controller now manages ScaleIO using the ScaleIO REST API

In previous releases, ViPR Controller used the ScaleIO CLI to add and manage ScaleIO storage. In this release, ViPR Controller now uses the ScaleIO REST API to add and manage ScaleIO storage. ViPR Controller communicates with the ScaleIO REST API service through one of the ports on the ScaleIO Gateway.

If you are upgrading your ViPR Controller to version 2.4, you must also upgrade your ScaleIO to a supported version, as described in the ViPR Controller Support Matrix which is available from the ViPR Controller Product Documentation Index. As part of your ScaleIO upgrade, you must install the ScaleIO Gateway.

If you already discovered ScaleIO storage in a previous ViPR Controller release, then you need to update the storage provider that ViPR Controller created when you discovered the ScaleIO storage.

Note: You can update the previously added ScaleIO storage provider using the ViPR Controller UI, REST API, or CLI.

You need to update the:
  • storage provider type:
    • UI - change Type to ScaleIO Gateway
    • REST API - change interface_type to scaleioapi.
    • CLI - change interface to scaleioapi.
  • host IP address to the IP address of the ScaleIO Gateway host
    • UI - change Host
    • REST API - change ip_address
    • CLI - change provip
  • port number to the port number used to communicate with the ScaleIO REST API service
    • UI - change Port
    • REST API - change port_number
    • CLI - change provport
After the upgrade and modification of the ViPR Controller storage provider created for the ScaleIO storage, rediscover the associated ScaleIO storage systems. In the ViPR Controller UI, navigate to Physical Assets > Storage Systems, select a storage system and select Rediscover. You can also use the REST API and CLI to rediscover the individual ScaleIO storage systems.
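
For illustration, the storage provider update can also be scripted against the REST API. The field names (interface_type, ip_address, port_number) are the ones listed above; the /vdc/storage-providers/{id} path, the port value, and the auth handling are assumptions.

import json, urllib.request

VIPR = "https://vipr.example.com:4443"
TOKEN = "<auth-token>"
PROVIDER_ID = "urn:storageos:StorageProvider:<id>:vdc1"   # existing ScaleIO provider

body = json.dumps({
    "interface_type": "scaleioapi",
    "ip_address": "<ScaleIO Gateway host IP>",
    "port_number": 443,   # port used to reach the ScaleIO REST API service (assumption)
}).encode()
req = urllib.request.Request(
    VIPR + "/vdc/storage-providers/" + PROVIDER_ID,
    data=body,
    headers={"Content-Type": "application/json", "X-SDS-AUTH-TOKEN": TOKEN},
    method="PUT")
# urllib.request.urlopen(req)  # then rediscover the associated ScaleIO storage systems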

For specific post-upgrade steps, see the ViPR Controller Installation, Upgrade, and Maintenance Guide which is available from the ViPR Controller Product Documentation Index.

Back to Top

Changing the ViPR Controller node names

After deploying ViPR Controller on VMware with a vApp, VMware without a vApp, or on Hyper-V, you can change the ViPR Controller node names. Changing the ViPR Controller node names in ViPR Controller allows you to easily identify the nodes in the ViPR Controller UI, REST API, and ViPR Controller logs. The custom names can also be used to SSH between the ViPR Controller nodes.

By default ViPR Controller is installed with the following node IDs, which are also the default node names:

  • 3 nodes: vipr1, vipr2, vipr3
  • 5 nodes: vipr1, vipr2, vipr3, vipr4, vipr5

During initial deployment, the default names are assigned to the nodes in ViPR Controller, vSphere for VMware installations, and SCVMM for Hyper-V installations.

Note: The node IDs cannot be changed; only the node names can be changed.

You can change the ViPR Controller node names in ViPR Controller, in VMware vSphere (when ViPR Controller is deployed on VMware), and in Microsoft SCVMM (when ViPR Controller is deployed on Hyper-V).

When the ViPR Controller node names are changed from the ViPR Controller UI, REST API, or CLI, the node names are not changed in vSphere or SCVMM. If you want the ViPR Controller node names to be the same in ViPR Controller and in vSphere or SCVMM, you must go into vSphere (for VMware installations) or SCVMM (for Hyper-V installations) and manually change the node name to match the name you provided in ViPR Controller.

Similarly, when the ViPR Controller node names are changed from vSphere (for VMware installations) or SCVMM (for Hyper-V installations), the node names are not automatically updated in ViPR Controller. If you want the ViPR Controller node names to match the vSphere or SCVMM node names, you must manually update them in ViPR Controller.

For further details, and the steps to change the node names in ViPR Controller see the ViPR Controller Installation, Upgrade, and Maintenance Guide.

Back to Top

New Security system settings

ViPR Controller Security Administrators can now set the following options in ViPR Controller.

All of the following settings can be configured from the ViPR Controller UI, General Configuration > Security tab.

The security settings can also be set from the ViPR Controller REST API or CLI. Refer to the ViPR Controller REST API Reference, or the ViPR Controller CLI Reference Guide for details.

Login Attempts

Allows you to modify the maximum number of login attempts, and the length of time before you can attempt to log in again from a client IP address after that client has been locked out due to failed login attempts. If either the number of Login Attempts or the Lockout Time in Minutes is set to zero, an unlimited number of login attempts is allowed and the client lockout never occurs.

Once a client has been locked out of ViPR Controller, it can be unlocked using the ViPR Controller REST API or CLI. For the ViPR Controller REST API, use:
GET /config/login-failed-ips
DELETE /config/login-failed-ips/

For CLI use:

viprcli loginfailedip list
viprcli loginfailedip delete -loginfailedip <blocked_IP_address>
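
For illustration, the same unlock can be scripted with the REST calls above. The auth handling and the convention of appending the blocked IP address to the DELETE path (mirroring the CLI argument) are assumptions.

import urllib.request

VIPR = "https://vipr.example.com:4443"
TOKEN = "<auth-token>"
HEADERS = {"Accept": "application/json", "X-SDS-AUTH-TOKEN": TOKEN}

list_req = urllib.request.Request(VIPR + "/config/login-failed-ips", headers=HEADERS)
unlock_req = urllib.request.Request(
    VIPR + "/config/login-failed-ips/" + "<blocked_IP_address>",
    headers=HEADERS, method="DELETE")
# urllib.request.urlopen(list_req)    # list the locked-out client IPs
# urllib.request.urlopen(unlock_req)  # unlock a specific client IP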

Set the ViPR Controller timeout values

The Token Life Time allows you to define the maximum length of time a ViPR Controller session lasts before you have to start a new session and log in again. For example, the default value is 8 hours. If you start a ViPR Controller session at 9 AM, your session expires at 5 PM (assuming during that time you do not exceed the Token Idle Time) and you will need to restart your ViPR Controller session. If you change the Token Life Time value, the new value does not take effect until the current session ends. Using the above example, if at 10 AM you change the Token Life Time to 2 hours during the current session, you still have 7 hours remaining in that session. The next time you start a ViPR Controller session, the 2-hour Life Time limit applies.

The Token Idle Time allows you to define the maximum amount of time a ViPR Controller session can remain idle before it ends. Changes made to the Token Idle Time take effect immediately after the change is saved. When working in the ViPR Controller UI, the session idle time restarts when a page is refreshed, either automatically or manually.

LDAP connection timeout period

The amount of time, in seconds, before ViPR Controller times out when connecting to an LDAP/AD server. A value of 0 means ViPR Controller never times out of the operation.

Back to Top

Database Consistency Check replaced with Database Housekeeping Status

The Database Consistency Check page has been replaced with the Database Housekeeping Status page, which can be accessed from the ViPR Controller UI System > Database Housekeeping Status menu.

The Database Consistency Status has also been removed from the Dashboard.

Back to Top

Shared vCenters across multiple tenants

vCenters added to ViPR Controller can be shared with multiple ViPR Controller tenants.

If a vCenter is shared across multiple tenants, the datacenters within that shared vCenter can be assigned to different tenants; however, a datacenter cannot be shared between different tenants. Additionally, the hosts and clusters within a datacenter are only added to the ViPR Controller tenant to which the datacenter has been assigned. For example:

MyvCenter contains two datacenters: Datacenter A and Datacenter B.

MyvCenter is shared with ViPRTenant1 and ViPRTenant2.

  • Datacenter A is assigned to ViPRTenant1
    • All of the Datacenter A hosts and clusters are part of the ViPRTenant1 physical assets
  • Datacenter B is assigned to ViPRTenant2
    • All of the Datacenter B hosts and clusters are part of the ViPRTenant2 physical assets

For the user role requirements and configuration requirements to share a vCenter across multiple tenants, refer to the ViPR Controller Virtual Data Center Requirements and Information Guide which is available from the ViPR Controller Product Documentation Index.

Back to Top

System Administrator permissions

ViPR Controller System Administrators can now add, edit, and delete vCenters in the ViPR Controller physical assets, and assign a vCenter to be shared across multiple tenants.

For a detailed list of ViPR Controller user roles and functionality, refer to the ViPR Controller Virtual Data Center Requirements and Information Guide.

Back to Top

Support for multiple compute image servers for Vblock Systems

You can deploy a single compute image server, or multiple compute image servers, for each Vblock system you are adding to ViPR Controller. When Vblock compute systems are located in geographically distant sites, it is useful to deploy a compute image server in a location local to each compute system for better performance of the OS installation during provisioning.

The steps to deploy and configure the compute image servers are as follows. Prior to performing these operations, review the Vblock system requirements and information provided in the ViPR Controller Virtual Data Center Requirements and Information Guide, which is available from the ViPR Controller Product Documentation Index.

  1. Load your compute images, which contain the OS installation files, onto an FTP site.
  2. Deploy each compute image server you wish to use.
  3. Configure the compute image server networks for each compute image server you are deploying.

    Details to complete steps 2 and 3 are provided in the ViPR Controller Installation, Upgrade, and Maintenance Guide.

  4. Using the ViPR Controller REST API, or CLI, add the compute image server to the ViPR Controller physical assets, and repeat this step for each compute image server you deployed in step 2.
  5. Using the ViPR Controller UI, REST API, or CLI, add the compute image to the ViPR Controller physical assets, and repeat this step for each compute image you are using.
  6. Using the ViPR Controller REST API, or CLI, associate each Vblock compute system with a compute image server.
Note: Steps 4 and 6 can only be performed from the ViPR Controller REST API or CLI. You cannot use the ViPR Controller UI to perform these operations.

If you are upgrading to ViPR Controller version 2.4, the compute image server settings you configured remain after upgrading; however, you will no longer be able to view or configure the compute image server from the ViPR Controller UI. Any changes you want to make to the compute image server configuration must be performed from the ViPR Controller REST API or CLI.

Changes to ViPR Controller UI to support multiple compute image servers

The Compute Image Server page has been removed from the Settings > General Configuration page. The compute image server must be added and configured through the ViPR Controller REST API or CLI.

Changes to ViPR Controller REST API to support multiple compute image servers

  • The /compute/imageservers calls have been added to the ViPR Controller REST API to add, edit, delete, and view the compute image servers in your ViPR Controller environment.
  • The compute_image_server element has been added to the computesystems calls to associate the compute image server with the Vblock compute system.
  • When you use the GET /compute/images/{ID} call to view the list of compute images you have added to ViPR Controller, you can see whether the compute images were successfully uploaded to the compute image servers. Successfully uploaded compute images are listed under available_image_servers. Compute images that were not successfully uploaded to the compute image servers are listed under failed_image_servers.

For details to use the ViPR Controller REST API to support multiple compute image servers, refer to the ViPR Controller REST API Virtual Data Center Configuration Guide, which is available from the ViPR Controller Product Documentation Index.
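
For illustration, the upload status of a compute image can be checked with the GET /compute/images/{id} call and the available_image_servers / failed_image_servers fields described above; the URN placeholder and auth handling below are assumptions.

import urllib.request

VIPR = "https://vipr.example.com:4443"
TOKEN = "<auth-token>"
IMAGE_ID = "urn:storageos:ComputeImage:<id>:vdc1"   # placeholder compute image URN

req = urllib.request.Request(
    VIPR + "/compute/images/" + IMAGE_ID,
    headers={"Accept": "application/json", "X-SDS-AUTH-TOKEN": TOKEN})
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())  # inspect available_image_servers / failed_image_servers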

Back to Top

New virtual NAS server discovery and file system placement

You can now group file systems into different projects by associating a dedicated virtual NAS (vNAS) server with a project. Users of the project can then use the vNAS server for storage provisioning. This enables environments without multi-tenancy enabled at the organizational level to group file systems into different projects.

ViPR Controller uses performance metrics and calculations when evaluating vNAS servers for file system placement. This includes vNAS servers with dynamic and static workloads. For static loads, ViPR Controller collects the number of storage objects, such as file systems and snapshots, and their capacity. For dynamic loads, ViPR Controller collects performance metrics, such as the input and output IOPS of the network interfaces of vNAS servers. The performance statistics of a vNAS server are then calculated as the aggregate performance of its network interfaces.

Collection of performance metrics for dynamic loads is not enabled by default. You enable this functionality on the Physical Assets > Controller Config page.

You can view details about vNAS servers, including performance metrics, on the Resources > vNAS Servers page or by clicking the vNAS button next to any VNX File array on the Storage Systems page.

Back to Top

New object storage services

The Object Storage services enable you to create and manage Elastic Cloud Storage (ECS) buckets in ViPR Controller. Buckets are containers for object data. A bucket belongs to an ECS namespace, and object users are also assigned to an ECS namespace. Each object user can create buckets only in the namespace to which they belong.

You can do the following using the object storage services:

  • Create buckets and assign a valid ECS namespace to each bucket.
  • Modify the quota and retention period values of a bucket.
  • Remove buckets.

You access these services from Service Catalog > Object Storage services.

You can view details about buckets on the Service Catalog > Resources > Buckets page.
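
A sketch of what a bucket-creation request could look like against the ViPR Controller REST API is shown below, in Python with the requests library. The endpoint path, payload field names, and values are assumptions for illustration only; consult the ViPR Controller REST API documentation for the actual object storage calls.

    import requests

    VIPR = "https://vipr.example.com:4443"   # assumed ViPR Controller API endpoint
    session = requests.Session()
    session.verify = False
    session.headers.update({"X-SDS-AUTH-TOKEN": "<auth-token>",
                            "Accept": "application/json"})

    # Hypothetical bucket definition: the bucket is assigned to an ECS namespace,
    # with quota and retention values that can later be modified.
    bucket = {
        "name": "analytics-archive",
        "namespace": "engineering",   # ECS namespace the object user belongs to
        "hard_quota": "100GB",
        "retention": "30",            # retention period, in days
    }
    resp = session.post(f"{VIPR}/object/buckets", json=bucket)  # path is an assumption
    resp.raise_for_status()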

Documentation resources

All documents listed below are available from the ViPR Controller Product Documentation Index.

  • For the information required to configure your ECS environment for ViPR Controller management, see the ViPR Controller Virtual Data Center Requirements and Information Guide.
  • For the steps to add the ECS to the ViPR Controller physical assets, and to configure the ECS storage in virtual arrays and object virtual pools, see the ViPR Controller User Interface Virtual Data Center Configuration Guide.
  • For more details about the Object Storage services in the Service Catalog, see the ViPR Controller Service Catalog Reference Guide.

Back to Top

Ingest improvements

The following improvements have been made to the ingest functionality.

Back to Top

New Ingest VMAX Block Volumes into Consistency Groups service

The Service Catalog > Block Storage Services > Ingest VMAX Block Volumes into Consistency Groups service enables you to add VMAX source volumes to a consistency group.

SRDF Ingestion

During ingestion of SRDF volumes in Device Groups, ViPR Controller ingests the SRDF R1 and R2 volumes, but does not group them into consistency groups.

After ingestion, use the Ingest VMAX Block Volumes into Consistency Groups service to add an R1 volume to a consistency group. ViPR Controller then automatically creates a consistency group for the R1 volumes and also creates a new consistency group for the target R2 volumes.

This step is mandatory to complete ingestion of SRDF synchronous and asynchronous volumes in Device Groups on the storage system.

Back to Top

Ingestion method options for VPLEX virtual volumes

When ingesting a VPLEX virtual volume, you have the option to ingest both the virtual volume and its backend storage (Full Ingestion including Backend) or only the virtual volume (Ingest Only Virtual Volume). The Ingest Only Virtual Volume option is useful when you want to ingest virtual volumes that encapsulate storage from an array that is not supported by ViPR Controller. You can then migrate these volumes to an array that is supported by ViPR Controller. Any snapshots or clones on the backend storage are not ingested.

You can use these options on the Service Catalog > Block Storage Services > Ingest Exported Unmanaged Volumes page and the Service Catalog > Block Storage Services > Ingest Unexported Unmanaged Volumes page. You can also use these options in the viprcli volume unmanaged ingest command.
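
The two ingestion methods can be thought of as a single parameter on the ingest request. The Python sketch below is purely illustrative and is not ViPR Controller code or CLI syntax; the enum values, data structures, and helper function are assumptions that mirror the behavior described above.

    from dataclasses import dataclass, field
    from enum import Enum

    class IngestionMethod(Enum):
        FULL = "Full Ingestion including Backend"
        VIRTUAL_VOLUME_ONLY = "Ingest Only Virtual Volume"

    @dataclass
    class VplexVirtualVolume:
        name: str
        backend_volumes: list = field(default_factory=list)

    def bring_under_management(resource: str) -> None:
        # Placeholder for the actual ingest step performed by ViPR Controller.
        print(f"ingesting {resource}")

    def ingest_vplex_volume(volume: VplexVirtualVolume, method: IngestionMethod) -> None:
        # The virtual volume itself is always brought under management.
        bring_under_management(volume.name)
        if method is IngestionMethod.FULL:
            # Full ingestion also picks up the backend storage volumes.
            for backend in volume.backend_volumes:
                bring_under_management(backend)
        # With VIRTUAL_VOLUME_ONLY the backend array is skipped, which is useful
        # when that array is not supported by ViPR Controller. Snapshots and
        # clones on the backend storage are never ingested by this service.

    ingest_vplex_volume(VplexVirtualVolume("vv_01", ["backend_01"]),
                        IngestionMethod.VIRTUAL_VOLUME_ONLY)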

Back to Top

Ingestion of CIFS sub-directories and shares

When ViPR Controller ingests file systems, it also ingests the sub-directories and shares of the file systems.

Back to Top

Improvement of XtremIO support

The following features have been added or enhanced for improved support of XtremIO storage.

Back to Top

Support for EMC XtremIO 4.0 and 4.0.1

ViPR Controller now supports EMC XtremIO 4.0 and 4.0.1, in addition to earlier releases of EMC XtremIO.

See the ViPR Controller Support Matrix, which can be found on the ViPR Controller Product Documentation Index, for the list of supported EMC XtremIO releases.

With EMC XtremIO 4.0 and 4.0.1, a single XtremIO Management Server can manage multiple clusters. XtremIO storage is added to ViPR Controller as a storage provider, using the IP, port, and credentials to access the XtremIO Management Server. During discovery, a storage system is created and registered for each cluster.
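
A minimal sketch of registering an XtremIO Management Server as a storage provider through the ViPR Controller REST API, in Python with the requests library. The inputs (provider IP, port, and credentials) come from this section; the path and field names shown are assumptions and should be checked against the ViPR Controller REST API documentation.

    import requests

    VIPR = "https://vipr.example.com:4443"   # assumed ViPR Controller API endpoint
    session = requests.Session()
    session.verify = False
    session.headers.update({"X-SDS-AUTH-TOKEN": "<auth-token>",
                            "Accept": "application/json"})

    # Register the XtremIO Management Server; discovery then creates and
    # registers one storage system per cluster that the server manages.
    provider = {
        "name": "xms-01",
        "ip_address": "192.0.2.10",    # XtremIO Management Server address
        "port_number": 443,
        "user_name": "admin",
        "password": "<password>",
        "interface_type": "xtremio",   # field name and value are assumptions
    }
    resp = session.post(f"{VIPR}/vdc/storage-providers", json=provider)  # path is an assumption
    resp.raise_for_status()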

The following functionality is available with ViPR Controller managed EMC XtremIO 4.0 and 4.0.1 storage:

Note Image
This new functionality is not available for ViPR Controller managed EMC XtremIO storage that is not running EMC XtremIO 4.0 or 4.0.1.

  • Support for Consistency Groups
    • Create a consistency group
    • Delete a consistency group
    • Remove volumes from a consistency group
    • Take a snapshot of a consistency group
    • Delete a snapshot of a consistency group
  • Read only snapshots
  • Snapshot restore and refresh

In previous releases of ViPR Controller, XtremIO storage was added as a storage system. If you already have ViPR Controller managed EMC XtremIO storage, and you upgrade to ViPR Controller 2.4, then a storage provider is automatically created for each XtremIO storage system, using the same name as the related storage system.

Back to Top

viprcli snapshot resync command for XtremIO

The new viprcli snapshot resync command updates an existing snapshot with all changes made to a volume or consistency group for XtremIO arrays.

Back to Top

Service Catalog improvements

The following features have been added or enhanced in the Service Catalog.

Back to Top

Option to delete a file system from the ViPR Controller database

When you delete a file system on the Service Catalog > File Storage Services > Remove File System page, you have the option to delete the file system from the ViPR Controller database only (Inventory Only) or from both the ViPR Controller database and its backend storage system (Full). A Full delete removes the file system and all objects referencing the file system, such as CIFS shares, snapshots, and quota directories, from the ViPR Controller database and its backend storage system. An Inventory Only delete removes the file system and all objects referencing the file system from the ViPR Controller database only.

You can also use this option on the Service Catalog > Resources > File Systems page.
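
The difference between the two deletion types can be pictured with the Python sketch below. It is illustrative only and not ViPR Controller code; the function names and flag values are assumptions that simply mirror the two behaviors described above.

    def remove_file_system(file_system: str, deletion_type: str = "FULL") -> None:
        """Illustration of the Full and Inventory Only deletion types.

        deletion_type is an assumed flag:
          "FULL"           - delete from ViPR Controller and from the backend array
          "INVENTORY_ONLY" - delete only from the ViPR Controller database
        """
        # Both types remove the file system and everything referencing it
        # (CIFS shares, snapshots, quota directories) from the ViPR database.
        remove_from_vipr_database(file_system)
        if deletion_type == "FULL":
            # Only a Full delete also removes the file system from the backend
            # storage system.
            delete_on_storage_system(file_system)

    def remove_from_vipr_database(fs: str) -> None:
        print(f"removing {fs} and its referencing objects from the ViPR Controller database")

    def delete_on_storage_system(fs: str) -> None:
        print(f"deleting {fs} on the backend storage system")

    remove_file_system("fs_01", "INVENTORY_ONLY")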

Back to Top

Improved block protection services

On the Service Catalog > Block Protection Services page, you can now perform tasks on consistency groups using these services:

  • Create Block Snapshot
  • Restore Block Snapshot
  • Remove Block Snapshot
  • Create Full Copy
  • Restore From Full Copies
  • Resynchronize Full Copies
  • Detach Full Copies
Back to Top

Adding additional RecoverPoint journal capacity to a consistency group

The Add Journal Capacity service has been added as a new Block Protection Service in the Service Catalog. This service allows you to increase the journal capacity by adding new volumes to an existing RecoverPoint consistency group. In addition, you can add these new volumes to a different virtual array and virtual pool than those used for the original copy creation.

For further details, and the steps to add RecoverPoint journal capacity to a consistency group, see the ViPR Controller Service Catalog Reference Guide which can be found on the ViPR Controller Product Documentation Index.

Back to Top

New informational fields for volume resources

You can view the consistency groups and export groups of each volume when you access the Service Catalog > Resources > Volumes page.

When you click on a specific volume on this page, you can view its access state to determine if the volume is ready for read/write operations.

Back to Top

Migration Services added to the Service Catalog

A new service category, Migration Services, has been added to the ViPR Controller Service Catalog.

There is one service, VPLEX Data Migration, in the Migration Services category. This service is used to move a volume from one virtual pool to another. The target virtual pool can be used to:

  • Change VPLEX local to VPLEX distributed.
  • Migrate data.
    Note Image
    You can configure the speed of the data migration using Physical Assets > Controller Config > VPLEX and then adding a new configuration for Data Migration Speed.

Back to Top

Option to disable network checks when adding volumes

On the Physical Assets > Controller Config > SAN Zoning page, you can use the Disable Zoning on Export Add Volume option to disable the network check that occurs each time a volume is added. Network checks ensure that all zones created by ViPR Controller for an export continue to exist and that any removed zones are re-created. These network checks can degrade performance.

Back to Top

Port allocation based on existing SAN zones

When a block volume is exported to a host via a SAN network, SAN zones are created between the host initiators and the storage array ports allocated to the export. By default, ViPR Controller ignores existing SAN zones and uses its own intelligence to select ports to assign to a host or a cluster export. You have the option to configure ViPR Controller to consider using existing zoned ports when assigning ports to a host or cluster export. For example, you can use existing alias-based zones instead of zones created by ViPR Controller.

ViPR Controller provides two SAN zoning options, automatic and manual, which are set on the block virtual array. If no network systems are discovered in ViPR Controller, zoning is treated as manual for all virtual arrays, regardless of this SAN zoning setting. The behavior in each case is summarized below and illustrated in the sketch after the lists.

When automatic zoning is on, ViPR Controller does the following when using existing zoned ports:
  • Gives zoned ports a higher priority for assignment than non-zoned ports.
  • If more ports are zoned than needed, ViPR Controller applies the port selection criteria and selects a subset of ports for the export.
  • If fewer ports are zoned than needed, ViPR Controller assigns additional ports and zones accordingly.
When automatic zoning is off, ViPR Controller does the following when using existing zoned ports:
  • If more ports are zoned than needed, ViPR Controller applies the port selection criteria and selects a subset of ports for the export.
  • If fewer ports are zoned than needed, ViPR Controller fails the operation because it cannot ensure that a sufficient number of paths exist.
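
The Python sketch below condenses the selection logic just described. It is illustrative only, not the ViPR Controller allocator; the function name and the simplified "port selection criteria" (reduced here to a stable sort) are assumptions.

    def allocate_ports(zoned_ports, other_ports, ports_needed, automatic_zoning):
        """Pick array ports for an export, preferring ports that are already zoned."""
        # Zoned ports get a higher priority than non-zoned ports.
        if len(zoned_ports) >= ports_needed:
            # More ports are zoned than needed: apply the port selection
            # criteria (simplified to a sort here) and take a subset.
            return sorted(zoned_ports)[:ports_needed]
        if automatic_zoning:
            # Fewer ports are zoned than needed: assign additional ports and
            # let ViPR Controller create the missing zones.
            extra = ports_needed - len(zoned_ports)
            return list(zoned_ports) + sorted(other_ports)[:extra]
        # Manual zoning with too few zoned ports: fail the operation because a
        # sufficient number of paths cannot be guaranteed.
        raise RuntimeError("not enough existing zoned ports for the requested path count")

    # Example: two existing zoned ports, four paths requested, automatic zoning on.
    print(allocate_ports(["FA-1A:0", "FA-2A:0"], ["FA-3A:0", "FA-4A:0", "FA-5A:0"], 4, True))
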
Back to Top

Option to stop updating an export group when making changes to a cluster

When you set the Auto-Export option to off on the Physical Assets > Clusters > Add Cluster page, changes to a discovered cluster (such as the removal of a host or an HBA change), whether made in ViPR Controller or externally to ViPR Controller, do not cause an update to the export group information in ViPR Controller. When the option is set to on (the default), such changes do cause the export group information in ViPR Controller to be updated. For automatically discovered clusters (such as ESX and Windows clusters), changes made externally to the hosts are found when discovery runs.

You can also use the Auto-Export option on the Physical Assets > Clusters > Edit Cluster page. It is available in the viprcli cluster create command and the viprcli cluster update command.
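
The Python sketch below mirrors the behavior just described. It is illustrative only; the flag and function names are assumptions rather than ViPR Controller internals or actual viprcli arguments.

    def handle_cluster_change(cluster: str, change: str, auto_export: bool = True) -> None:
        """Apply a cluster change (for example, host removed or HBA changed)."""
        record_cluster_change(cluster, change)   # the cluster itself is always updated
        if auto_export:
            # Default behavior: keep the export groups in step with the cluster.
            update_export_groups(cluster)
        # With Auto-Export set to off, export group information is left untouched.

    def record_cluster_change(cluster: str, change: str) -> None:
        print(f"{cluster}: recording change '{change}'")

    def update_export_groups(cluster: str) -> None:
        print(f"{cluster}: updating export group information")

    handle_cluster_change("esx-cluster-01", "host removed", auto_export=False)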

Back to Top

Set VPLEX data migration transfer speed

ViPR Controller system administrators can set the data migration transfer speed when using the VPLEX Data Migration option in the VPLEX Data Migration, Change Virtual Pool, and Change Volume Virtual Pool services. This transfer speed also applies to the Change Virtual Array service.

You set the VPLEX data migration transfer speed by navigating to Physical Assets > Controller Config and selecting the VPLEX tab. Select Data Migration Speed from the drop-down. The default setting is Lowest. You can change the Data Migration Speed by clicking the Add button and defining a new speed. The setting is global and is picked up by subsequent orders.

There are five valid transfer speed settings, as described in the following table:

Back to Top

Support for RecoverPoint EX and SE configurations

ViPR Controller now supports the following RecoverPoint license offerings: RecoverPoint/CL, RecoverPoint/EX, and RecoverPoint/SE.

Back to Top

ViPR Controller 2.4 Plug-in for VMware vRealize Orchestrator

The ViPR Controller 2.4 Plug-in for VMware vRealize Orchestrator (vRO) is being released with ViPR Controller 2.4.

The Configure EMC ViPR and Tenant workflow has been added to the ViPR Controller Plug-in for vRO. This workflow defines the following settings, which are used by all EMC ViPR Controller workflows that run after the Configure EMC ViPR and Tenant workflow has been configured:

  • The ViPR Controller instance (hostname/IP address), username, and password to use.
  • The ViPR Controller tenant, and project.
  • The default virtual array.
  • The workflow timeout period.
Note Image
In previous releases, these options were configured through the vRealize Orchestrator Configuration interface, and there were no options to select the tenant or to select projects based on the tenant.

The Configure EMC ViPR and Tenant workflow options can be reconfigured at any time by re-running the workflow and entering new parameters. Workflows run after the options are changed inherit the new settings.

If upgrading to 2.4:

  • You will need to load the new plugin file into vRealize Orchestrator, which overwrites the old plugin version.
  • The previous configuration settings are still supported; however, it is recommended that you run the new Configure EMC ViPR and Tenant workflow so that the tenant parameters are set.

ViPR Controller Plug-in for VMware vRealize Orchestrator workflows being deprecated

The following EMC ViPR\General workflows will be removed in a future release of the ViPR Controller Plug-in for VMware vRealize Orchestrator. In preparation for their removal, use the corresponding EMC ViPR\Multiple workflows instead. Deprecated workflows are still available in the 2.4 version of the ViPR Controller Plug-in for VMware vRealize Orchestrator to allow for a smooth transition to the newer version.

Back to Top

Support for 2-site VPLEX MetroPoint CDP

ViPR Controller supports a 2-site VPLEX MetroPoint CDP configuration.

For additional information, see the ViPR Controller Integration with RecoverPoint and VPLEX User and Administration Guide, which can be found on the ViPR Controller Product Documentation Index.

Back to Top

Support for third party arrays as VPLEX backend arrays

ViPR Controller supports third party arrays discovered through Cinder as VPLEX backend storage systems.

For detailed information, see User Guide: Manage Third Party Block Storage Behind VPLEX.

Back to Top

Support for OpenStack Kilo release

ViPR Controller supports the Kilo release of OpenStack for both the northbound Cinder driver and the southbound Cinder arrays.
