ViPR 2.2 - Create ViPR Virtual Pools for Block Storage

Overview

ViPR System Administrators can learn to create and configure ViPR virtual pools for block storage systems.

This article is part of a series

Before you can create virtual pools in ViPR, complete the steps in Step by Step: Setup a ViPR Virtual Data Center.

Create ViPR virtual pools for block storage

Create a virtual pool for block storage by specifying the criteria that physical storage pools must match.

Procedure

  1. Go to the Virtual Asset > Block Virtual Pools page.
  2. Click Add or select an existing virtual pool name to edit.
  3. Enter a name and a description for the virtual pool.
    The virtual pool is used for provisioning operations, so its name should convey some information about the type of storage it provides (its performance and protection levels) or how it should be used. For example, "gold", "tier1", or "backup".
  4. Select the virtual arrays for which the virtual pool will be created.
  5. Check or uncheck Enable Quota. If enabled, enter the maximum amount of storage, in GB, that can be allocated to this virtual pool.
    While defining the virtual pool criteria, it is recommended to change one criterion at a time and then expand Storage Pools to check which storage pools matching the criteria are available.
    The pool matching algorithm runs shortly after a criterion is selected, and the matching pools come from all systems that can provide pools supporting the selected protocols.
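The matching behavior described above can be sketched as a simple filter. This is an illustration of the idea only, not ViPR's actual algorithm; the pool attribute names are hypothetical:

```python
# Hypothetical sketch of criteria-based pool matching; ViPR's real
# matching algorithm is internal and more involved.
def matching_pools(storage_pools, criteria):
    """Return pools whose attributes satisfy every selected criterion.

    A criterion value of None (the UI's NONE choice) matches any pool.
    """
    def matches(pool):
        return all(
            value is None or pool.get(key) == value
            for key, value in criteria.items()
        )
    return [p for p in storage_pools if matches(p)]

pools = [
    {"name": "Pool_A", "protocol": "FC", "drive_type": "SSD"},
    {"name": "Pool_B", "protocol": "iSCSI", "drive_type": "SAS"},
]
# Selecting protocol FC and leaving Drive Type as NONE:
print(matching_pools(pools, {"protocol": "FC", "drive_type": None}))
```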
  6. Expand Hardware to define the following criteria:
    • Provisioning Type: Thick or Thin.
    • Protocols: The block protocols supported by the physical storage pools that will comprise the virtual pool. Possible protocols are FC and iSCSI. Only the protocols supported by the virtual array networks are listed.
    • Drive Type: The drive type that any storage pools in the virtual pool must support. NONE allows storage pools of any drive type that support the other selected criteria.
    • System Type: The type of storage system that you want to provide the storage pools. NONE allows storage pools to be contributed by any array that supports the other selected criteria. Only the system types supported by the networks configured in the virtual array are selectable.
    • Thin Volume Preallocation: Available when the Thin provisioning type is selected.
    • Multi-Volume Consistency: When enabled, resources provisioned from the pool support the use of consistency groups. If disabled, a resource cannot be assigned to a consistency group when running ViPR block provisioning services.
    • Expandable: When enabled, volumes can be expanded non-disruptively (note that in some cases this may decrease performance), and native continuous copies are not supported. When disabled, the underlying storage selected for volume creation favors performance over expandability.

    If the storage system type is VMAX, the following options are also presented:
    • RAID Level: Select the RAID levels that the volumes in the virtual pool can use.
    • Unique Auto-tiering Policy Names: When you build auto-tiering policies on a VMAX through Unisphere, you can assign names to the policies you build. These names are visible when you enable Unique Auto-tiering Policy Names. If you do not enable this option, the auto-tiering policy names displayed in the Auto-tiering Policy field are those built by ViPR.
    • Auto-tiering Policy: The Fully Automated Storage Tiering (FAST) policy for this virtual pool. FAST policies are supported on VMAX, VNX for Block, and VNXe. ViPR chooses physical storage pools to which the selected auto-tiering policy has been applied. If you create a volume in this virtual pool, the auto-tiering policy specified in this field is applied to that volume.
    • Fast Expansion: VMAX or VNX for Block only. If you enable Fast Expansion, ViPR creates concatenated meta volumes in this virtual pool. If Fast Expansion is disabled, ViPR creates striped meta volumes.
    • Host Front End Bandwidth Limit: Limits the amount of data that applications can consume on the VMAX volume, measured in MB/s. Set this value to 0 for unlimited.
    • Host Front End I/O Limit: Limits the I/O that applications can consume on the VMAX volume, measured in IOPS. Set this value to 0 for unlimited.
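For scripted setups, the hardware criteria above can be captured as a request fragment. This is a hedged sketch: the field names are assumptions modeled on the UI labels, not the verified ViPR REST schema.

```python
# Hypothetical hardware-criteria fragment for a block virtual pool.
# Field names mirror the UI labels and are assumptions, not the
# verified ViPR REST schema.
hardware_criteria = {
    "provisioning_type": "Thin",       # Thick or Thin
    "protocols": ["FC"],               # FC and/or iSCSI
    "drive_type": "NONE",              # NONE = any drive type
    "system_type": "vmax",             # NONE = any system type
    "multi_volume_consistency": True,  # allow consistency groups
    "expandable": False,               # favor performance over expansion
    # VMAX-only options:
    "raid_levels": ["RAID5"],
    "fast_expansion": False,           # False = striped meta volumes
    "host_io_limit_bandwidth": 0,      # MB/s; 0 = unlimited
    "host_io_limit_iops": 0,           # IOPS; 0 = unlimited
}
print(hardware_criteria["provisioning_type"])
```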
  7. Expand SAN Multi Path to define the following criteria:
  8. If in a VPLEX environment, expand High Availability and select:
    • VPLEX local to only use VPLEX local volumes in the virtual pool.
    • VPLEX distributed to only use VPLEX distributed volumes in the virtual pool.
    • None to use both VPLEX local and VPLEX distributed volumes, which match the other virtual pool settings, in the virtual pool.
  9. Expand Protection to define the following data protection criteria:
    • Maximum Snapshots: The maximum number of local snapshots allowed for resources from this virtual pool. To use the ViPR Create Snapshot services, specify a value of at least 1.
    • Maximum Continuous Copies: The maximum number of native continuous copies allowed for resources from this virtual pool. To use the ViPR Create Continuous Copy services, specify a value of at least 1.
    • Continuous Copies Virtual Pool: Enables a different virtual pool to be specified for native continuous copies. Native continuous copies are not supported for virtual pools with the Expandable attribute enabled.
    • Protection System: Enables volumes created in the virtual pool to be protected by a supported protection system. The possible values are:
      • None
      • EMC RecoverPoint
        • RecoverPoint protection requires a virtual array to act as the RecoverPoint target and, optionally, an existing target virtual pool.
        • Set the source journal size as needed. The RecoverPoint default is 2.5 times the protected storage; alternatively, select one of the following:
          • A fixed value (in MB, GB, or TB).
          • A multiplier of the protected storage.
          • The minimum allowed by RecoverPoint (10 GB).
        • Select Add Copy to add one or two RecoverPoint copies, specifying the destination virtual array and, optionally:
          • A virtual pool to specify the characteristics of the RecoverPoint target and journal volume.
          • The RecoverPoint target journal size. The RecoverPoint default is 2.5 times the protected storage.
      • VMAX SRDF
        • VMAX SRDF protection requires a virtual array to act as the SRDF target and, optionally, an existing target virtual pool.
        • Select an SRDF Copy Mode, either Synchronous or Asynchronous.
        • Select Add Copy to add an SRDF copy, specifying the destination virtual array and, optionally, a virtual pool.
      • VPLEX Local
      • VPLEX Distributed
        • Select the ViPR virtual array that will provide the destination for the distributed volume.
        • Select the ViPR virtual pool that will be used when creating the distributed volume.
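The RecoverPoint journal sizing above (a 2.5x default and a 10 GB RecoverPoint minimum) can be worked through numerically. This sketch assumes the minimum applies as a floor on the computed size:

```python
RP_MIN_JOURNAL_GB = 10    # minimum allowed by RecoverPoint
DEFAULT_MULTIPLIER = 2.5  # RecoverPoint default multiplier

def journal_size_gb(protected_gb, multiplier=DEFAULT_MULTIPLIER):
    """Journal size in GB, assuming the minimum acts as a floor."""
    return max(protected_gb * multiplier, RP_MIN_JOURNAL_GB)

print(journal_size_gb(100))  # 100 GB protected -> 250.0 GB journal
print(journal_size_gb(2))    # 2 GB protected -> the 10 GB minimum
```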
  10. Expand Access Control to restrict access in a multiple tenant environment.
    1. Enable Restrict Tenant Access.
    2. Select the Tenants that will have access to this virtual pool.
  11. Expand Storage Pools to view the discovered storage pools, and to choose how the Pool Assignment will be performed:
    • Automatic — the storage pools that make up the virtual pool are updated automatically as pools that meet the criteria are added to or removed from the virtual array. This can occur when pools are added to or removed from the system, or when their registration or discovery status changes.
    • Manual — provides a checkbox against each pool to enable it to be selected. Only the selected storage pools will be included in the virtual pool.
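The difference between the two assignment modes can be illustrated with a small sketch (a hypothetical helper, not ViPR code):

```python
def assigned_pools(matching, mode, manual_selection=None):
    """Pools included in the virtual pool under each assignment mode.

    Automatic: always the current set of matching pools.
    Manual: only the checked pools that still match the criteria.
    """
    if mode == "Automatic":
        return list(matching)
    return [p for p in matching if p in (manual_selection or [])]

matching = ["Pool_A", "Pool_B", "Pool_C"]
print(assigned_pools(matching, "Automatic"))           # all three pools
print(assigned_pools(matching, "Manual", ["Pool_B"]))  # only Pool_B
```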
  12. Select Save.
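The same procedure can be driven through the ViPR REST API instead of the UI. The sketch below builds (but does not send) a creation request; the /block/vpools endpoint and the X-SDS-AUTH-TOKEN header exist in ViPR, but the payload field names are assumptions modeled on the UI labels, so check the ViPR REST API reference for your release before use.

```python
# Hypothetical sketch of creating a block virtual pool via REST.
import json
import urllib.request

VIPR_ENDPOINT = "https://vipr.example.com:4443"  # hypothetical host
AUTH_TOKEN = "token-from-login"                  # from a prior login

payload = {
    "name": "gold",                       # step 3: name and description
    "description": "Tier-1 FC block storage",
    "varrays": ["urn:of:virtual:array"],  # step 4: virtual arrays
    "protocols": ["FC"],                  # step 6: hardware criteria
    "provisioning_type": "Thin",
    "max_snapshots": 1,                   # step 9: protection criteria
    "use_matched_pools": True,            # step 11: Automatic assignment
}

request = urllib.request.Request(
    f"{VIPR_ENDPOINT}/block/vpools",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "X-SDS-AUTH-TOKEN": AUTH_TOKEN,   # ViPR authentication header
    },
    method="POST",
)
# urllib.request.urlopen(request) would submit the request; it is not
# executed here because it needs a live ViPR instance.
print(request.get_method(), request.full_url)
```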