
ViPR 2.2 - Configuration Considerations While Virtualizing Your Storage in ViPR

Table of Contents

Overview

ViPR System Administrators can review the information to consider when configuring specific types of storage systems in ViPR virtual arrays and virtual pools, and can learn how ViPR works with the storage system element managers once the volumes or file systems are under ViPR management.

For details about how to create virtual arrays and virtual pools, see the following articles:

Back to Top

Block storage configuration considerations

Before you create virtual arrays and virtual pools for block storage in ViPR, review the following sections for storage system specific configuration requirements and recommendations:

Back to Top

Hitachi Data Systems

Review the following configuration requirements and recommendations before virtualizing your Hitachi Data Systems (HDS) in ViPR.

Virtual pool considerations

ViPR provides auto-tiering for the six standard HDS auto-tiering policies for Hitachi Dynamic Tiering (HDT) storage pools.

The HDS auto-tiering policy options in ViPR are:

Policy name in ViPR   Policy number   HDS level   Description
All                   0               All         Places the data on all tiers.
T1                    1               Level 1     Places the data preferentially in Tier 1.
T1/T2                 2               Level 2     Places the data in both tiers when there are two tiers, and preferentially in Tiers 1 and 2 when there are three tiers.
T2                    3               Level 3     Places the data in both tiers when there are two tiers, and preferentially in Tier 2 when there are three tiers.
T2/T3                 4               Level 4     Places the data in both tiers when there are two tiers, and preferentially in Tiers 2 and 3 when there are three tiers.
T3                    5               Level 5     Places the data preferentially in Tier 2 when there are two tiers, and preferentially in Tier 3 when there are three tiers.
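The name-to-policy mapping in the table above can be captured in a small lookup, which is convenient when scripting virtual pool creation. This is an illustrative sketch only; the dictionary and function names are not part of any ViPR API, but the policy names, numbers, and levels come directly from the table.

```python
# Mapping of ViPR HDS auto-tiering policy names to their HDS policy
# numbers and levels, as listed in the table above.
HDS_AUTO_TIERING_POLICIES = {
    "All":   {"number": 0, "level": "All"},
    "T1":    {"number": 1, "level": "Level 1"},
    "T1/T2": {"number": 2, "level": "Level 2"},
    "T2":    {"number": 3, "level": "Level 3"},
    "T2/T3": {"number": 4, "level": "Level 4"},
    "T3":    {"number": 5, "level": "Level 5"},
}

def hds_policy_number(vipr_policy_name):
    """Return the HDS policy number for a ViPR policy name."""
    return HDS_AUTO_TIERING_POLICIES[vipr_policy_name]["number"]
```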

Back to Top

VMAX

Review the following configuration requirements and recommendations before virtualizing your VMAX system in ViPR.

Virtual Pool configuration requirements and recommendations

When VMAX is configured with Storage Tiers and FAST Policies:

  • Storage Tier and FAST Policy names must be consistent across all VMAX storage systems.
  • For more details about FAST policies see ViPR Support for FAST Policies.
  • Set these options when you build your virtual pool:
    • RAID Level: Select the RAID levels of which the volumes in the virtual pool will consist.
    • Unique Auto-tiering Policy Names: VMAX only. When you build auto-tiering policies on a VMAX through Unisphere, you can assign names to the policies you build. These names are visible when you enable Unique Auto-tiering Policy Names. If you do not enable this option, the auto-tiering policy names displayed in the Auto-tiering Policy field are those built by ViPR.
    • Auto-tiering Policy: The Fully Automated Storage Tiering (FAST) policy for this virtual pool. FAST policies are supported on VMAX, VNX for Block, and VNXe. ViPR chooses physical storage pools to which the selected auto-tiering policy has been applied. If you create a volume in this virtual pool, the auto-tiering policy specified in this field is applied to that volume.
    • Fast Expansion: VMAX or VNX for Block only. If you enable Fast Expansion, ViPR creates concatenated meta volumes in this virtual pool. If Fast Expansion is disabled, ViPR creates striped meta volumes.
    • Host Front End Bandwidth Limit: Set this value to 0 (unlimited). This field limits the bandwidth that applications can consume on the VMAX volume, measured in MB/s.
    • Host Front End I/O Limit: Set this value to 0 (unlimited). This field limits the I/O operations per second that applications can drive against the VMAX volume, measured in IOPS.
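The "0 means unlimited" convention for the two host front-end limits can be made explicit with a small helper. This is an illustrative sketch only; the function and field names below are not ViPR API names, but the semantics (0 = unlimited, bandwidth in MB/s, I/O in IOPS) follow the option descriptions above.

```python
def describe_front_end_limits(bandwidth_mb_s=0, io_limit_iops=0):
    """Interpret VMAX host front-end limits for a virtual pool.

    A value of 0 means unlimited, per the recommendation above.
    (Illustrative helper only; these are not ViPR API names.)
    """
    if bandwidth_mb_s < 0 or io_limit_iops < 0:
        raise ValueError("limits must be 0 (unlimited) or a positive value")
    return {
        "bandwidth": "unlimited" if bandwidth_mb_s == 0 else f"{bandwidth_mb_s} MB/s",
        "io": "unlimited" if io_limit_iops == 0 else f"{io_limit_iops} IOPS",
    }
```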

Back to Top

VMAX3

Review the following configuration requirements and recommendations before virtualizing your VMAX3 system in ViPR.

Set these options when you build your virtual pool:

Back to Top

EMC VNX for Block

Review the following configuration considerations before adding VNX for Block storage to ViPR virtual pools.

Virtual pool configuration considerations

  • Fibre Channel networks for VNX for Block storage systems require an SP-A and SP-B port pair in each network, otherwise virtual pools cannot be created for the VNX for Block storage system.
  • Prior to ViPR version 2.2, if no auto-tiering policy was set on a virtual pool created from VNX for Block storage, ViPR created volumes from that virtual pool with auto-tiering enabled. Starting with ViPR version 2.2, if no policy is set on the virtual pool, ViPR creates volumes with the "start high then auto-tier" policy enabled.
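The version-dependent default described above can be expressed as a small decision helper. This is a sketch only; ViPR does not expose a function like this, and the version-string handling is an assumption for illustration.

```python
def default_vnx_block_tiering(vipr_version, vpool_policy=None):
    """Return the auto-tiering behavior applied to new VNX for Block
    volumes when the virtual pool has no explicit policy.

    Per the note above: before ViPR 2.2 the default was plain
    auto-tiering; from 2.2 onward it is "start high then auto-tier".
    (Sketch only; ViPR does not expose a function like this.)
    """
    if vpool_policy is not None:
        return vpool_policy
    major, minor = (int(x) for x in vipr_version.split(".")[:2])
    if (major, minor) >= (2, 2):
        return "start high then auto-tier"
    return "auto-tier"
```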

Back to Top

EMC VNXe for Block

When exporting a VNXe for Block volume to a host using ViPR, it is recommended that the host be configured with either Fibre Channel only or iSCSI only connectivity to the storage.

Back to Top

EMC VPLEX

Review the following configuration requirements and recommendations before virtualizing your VPLEX system in ViPR.

Virtual array configuration requirements and recommendations

While creating virtual arrays, manually assign the VPLEX front-end and back-end ports of the cluster (1 or 2) to a virtual array, so that each VPLEX cluster is in its own ViPR virtual array.

Virtual pool configuration requirements and recommendations

When running VPLEX with VMAX, the Storage Tier and FAST Policy names must be consistent across all VMAX storage systems.

Back to Top

Third-party block (OpenStack) storage systems

Review the following configuration requirements and recommendations before virtualizing your third-party block storage in ViPR.

Virtual pool recommendations and requirements

If the discovered storage system is configured for multipathing, the values set in the virtual pool can be increased once the target ports are detected by ViPR.

Back to Top

Block storage systems under ViPR management

Once a volume is under ViPR management, and has been provisioned or exported to a host through a ViPR service, you should no longer use the storage system element manager to provision or export the volume to hosts. Using only ViPR to manage the volume will prevent conflicts between the storage system database and the ViPR database, as well as avoid concurrent lock operations being sent to the storage system. Some examples of failures that could occur when the element manager and ViPR database are not synchronized are:
  • If you use the element manager to create a volume, and at the same time another user tries to run the "Create a Volume" service from ViPR on the same storage system, the storage system may be locked by the operation run from the element manager, causing the ViPR “Create a Volume” operation to fail.
  • After a volume is exported to a host through ViPR, the masking view that ViPR used during the export might be changed on the storage system through the element manager. When ViPR attempts to use the masking view again, the operation fails because the masking view in the ViPR database no longer matches the actual masking view reconfigured on the storage system.
You can, however, continue to use the storage system element manager to manage storage pools, add capacity, and troubleshoot ViPR issues.
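The masking-view failure described above is essentially a stale-cache problem: ViPR's database copy of the view no longer matches the array. The check can be sketched as follows; the data model here is hypothetical, not ViPR's internal representation.

```python
def masking_view_in_sync(vipr_record, array_view):
    """Return True if ViPR's stored masking view matches the array.

    Models the failure described above: if the view is changed on
    the array through the element manager, ViPR's database copy is
    stale and a subsequent export using it fails.
    (Hypothetical data model; not ViPR's internal representation.)
    """
    keys = ("initiators", "ports", "volumes")
    return all(set(vipr_record.get(k, [])) == set(array_view.get(k, []))
               for k in keys)
```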

Back to Top

File storage configuration considerations

Review the following information before you add file storage systems to ViPR virtual arrays and virtual pools, and before you use the file systems in a ViPR service.

Virtual pool configuration settings for all file storage systems

File systems are only thinly provisioned. You must set the virtual pool to Thin when adding file storage to the virtual pool.

File storage systems under ViPR management

Once a file system is under ViPR management, and has been provisioned or exported to a host through a ViPR service, you should no longer use the storage system element manager to provision or export the file system to hosts. Using only ViPR to manage the file system prevents conflicts between the storage system database and the ViPR database, and avoids concurrent lock operations being sent to the storage system. You can, however, continue to use the storage system element manager to manage storage pools, add capacity, and troubleshoot ViPR issues.

Specific storage system configuration requirements

Before you create virtual arrays and virtual pools for File storage in ViPR, review the following sections for storage system specific configuration requirements and recommendations:

Back to Top

EMC® Data Domain®

Review the following information before virtualizing the Data Domain storage in the ViPR virtual arrays and virtual pools.

Virtual pool configuration requirement and considerations

When creating the file virtual pool for Data Domain storage, the Long Term Retention attribute must be enabled.

While configuring the file virtual pools for Data Domain storage systems it is helpful to know that:

  • A Data Domain MTree is represented as a file system in ViPR.
  • Storage pools are not a feature of Data Domain. However, ViPR uses storage pools to model storage system capacity, so ViPR creates one storage pool for each Data Domain storage system registered to ViPR. For example, if three Data Domain storage systems are registered to ViPR, there are three separate Data Domain storage pools, one for each registered storage system.
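The one-pool-per-system rule above can be illustrated with a toy model. The class and names here are purely illustrative, not ViPR internals.

```python
class ViPRDataDomainModel:
    """Toy model of how ViPR represents Data Domain capacity:
    one storage pool per registered Data Domain system, with each
    MTree surfaced as a file system. (Illustrative only.)"""

    def __init__(self):
        self.storage_pools = {}

    def register_system(self, name):
        # ViPR creates exactly one storage pool per registered system.
        self.storage_pools[name] = f"{name}-pool"

model = ViPRDataDomainModel()
for system in ("dd01", "dd02", "dd03"):
    model.register_system(system)
```

Registering three systems yields three pools, matching the example in the bullet above.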

Back to Top

EMC VNX for File

When configuring a VNX file virtual pool that uses the CIFS protocol, there must be at least one CIFS server on any one of the physical Data Movers.

Back to Top