ViPR Controller 2.3 - Support for VCE Vblock™ Systems


Overview

ViPR Controller and VCE Vblock system administrators and users can learn about the Vblock system operations that can be automated using ViPR Controller services.



ViPR Controller services to manage Vblock systems

ViPR Controller services automate the following operations on Vblock systems that have been virtualized in ViPR Controller:

ViPR block and file storage services

Additionally, ViPR Controller Block and File Storage services can be used to manage Vblock storage systems. For more information about services, see the ViPR Controller Service Catalog Reference Guide, which is available from the ViPR Controller Product Documentation Index.

ViPR Controller operations not supported for Vblock systems

ViPR Controller does not support ingestion of Vblock compute system blades that are being used outside of ViPR Controller management. These blades are discovered as unavailable to ViPR Controller and are not used by ViPR Controller for Vblock system provisioning or decommissioning services. However, you can add those hosts to the ViPR Controller physical assets; ViPR Controller then discovers the hosts from those compute systems through the operating system layer, and can export storage to them.

ViPR Controller does not automate layer 2 network configuration. Layer 2 network configuration is managed by the UCS service profile templates assigned to the compute virtual pool.


How Vblock systems are discovered by ViPR Controller

A Vblock system is a converged hardware system from VCE (VMware®, Cisco®, and EMC®) that is sold as a single unit consisting of the following components:

Vblock system

See the VCE Vblock System Release Certification Matrix for a list of Vblock systems and supported system components.

See the ViPR Controller Support Matrix on the EMC Community Network (community.emc.com) for the Vblock system component versions supported by ViPR Controller.


Add Vblock system components to ViPR Controller physical assets

For ViPR Controller to discover the Vblock system, you must add each Vblock system component to the ViPR Controller physical assets as follows:
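As a rough illustration of this step, the following Python sketch adds a UCS compute system to the ViPR Controller physical assets through the REST API. The endpoint path, payload field names, addresses, and credentials are illustrative assumptions, not the documented interface; see the ViPR Controller REST API documentation for the exact calls.

    import requests

    VIPR = "https://vipr.example.com:4443"   # assumed ViPR Controller endpoint (hypothetical host)

    # ViPR Controller authentication: GET /login with basic auth returns a token
    # in the X-SDS-AUTH-TOKEN response header (credentials here are placeholders).
    login = requests.get(VIPR + "/login", auth=("root", "ChangeMe"), verify=False)
    token = login.headers["X-SDS-AUTH-TOKEN"]

    # Illustrative payload for adding a UCS compute system as a physical asset;
    # the field names and the endpoint path are assumptions, not the documented API.
    ucs = {
        "name": "vblock-ucs-1",
        "ip_address": "192.0.2.10",
        "port_number": 443,
        "user_name": "ucs-admin",
        "password": "ucs-password",
        "use_ssl": True,
    }
    resp = requests.post(VIPR + "/vdc/compute-systems",
                         json=ucs,
                         headers={"X-SDS-AUTH-TOKEN": token},
                         verify=False)
    print(resp.status_code, resp.text)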


ViPR Controller discovery of Vblock system components

Once the Vblock system components are added to ViPR Controller, ViPR Controller automatically discovers the components and the component resources as follows:

ViPR Controller discovers each Vblock system component as an individual ViPR Controller physical asset. The connectivity between the Vblock system components is determined within the context of the ViPR Controller virtual array. When virtual arrays are created, ViPR Controller determines which compute systems have storage connectivity through the virtual array definition. The virtual arrays are used when defining compute virtual pools and during provisioning to understand connectivity of the Vblock system components.

For more information, refer to How Vblock system components are virtualized in ViPR Controller.

Ingestion of compute elements

Upon discovery of an ESX host or cluster, ViPR Controller discovers the compute element UUID, which allows ViPR Controller to identify the linkage between the host or cluster and the compute elements (blades). When a host or cluster is then decommissioned through ViPR Controller, ViPR Controller identifies the compute element as available and makes it available for use in other service operations.
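A minimal sketch of that linkage, with hypothetical identifiers: each discovered host or cluster is tied to a blade by its compute element UUID, and decommissioning the host releases the blade back to the available state.

    # Hypothetical view of the host-to-blade linkage ViPR Controller tracks.
    compute_elements = {
        "20000025-b500-0a01": {"host": "esx-host-01", "available": False},
        "20000025-b500-0a02": {"host": None, "available": True},   # free blade
    }

    def decommission(uuid: str) -> None:
        """When a host is decommissioned, its blade becomes available again."""
        compute_elements[uuid]["host"] = None
        compute_elements[uuid]["available"] = True

    decommission("20000025-b500-0a01")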


ViPR Controller registration of added and discovered physical assets

After a physical asset is successfully added and discovered in ViPR Controller, ViPR Controller automatically registers the physical asset and its resources that are not in use. Physical assets that are registered in ViPR Controller are available for use as resources by ViPR Controller services.

Compute elements, or blades, that are found to be in use are automatically set to unregistered to prevent ViPR Controller from disturbing them.

Optionally, you can deregister physical assets that you want visible in ViPR Controller but do not want ViPR Controller to use as resources, or you can delete some resources from ViPR Controller entirely.
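As a sketch of the deregister operation, assuming a REST call of the following form (the resource path and URN are illustrative, not the documented API):

    import requests

    VIPR = "https://vipr.example.com:4443"   # assumed ViPR Controller endpoint
    TOKEN = "<auth-token>"                    # obtained from a prior /login call

    # Hypothetical compute element URN; real URNs are returned by discovery.
    element_id = "urn:storageos:ComputeElement:example"

    # Assumed deregister action; the blade stays visible but is not used for provisioning.
    resp = requests.post(VIPR + "/vdc/compute-elements/" + element_id + "/deregister",
                         headers={"X-SDS-AUTH-TOKEN": TOKEN},
                         verify=False)
    print(resp.status_code)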

Vblock virtualization component

Vblock systems are provided with VMware vSphere ESX installation files. These files are added to the ViPR Controller physical assets as Compute Images, and are not required to be discovered or registered by ViPR Controller. Once added, the Compute Images can be used by ViPR Controller for ESX installation, or they can be deleted from ViPR Controller.


How Vblock system components are virtualized in ViPR Controller

Once the Vblock system components have been added to the ViPR Controller physical assets, the user can begin to virtualize the components into virtual arrays and virtual pools.


Vblock compute systems

The Vblock compute system is virtualized in ViPR Controller in both the compute virtual pools and the virtual array networks.

Compute virtual pools

A compute virtual pool is a group of compute elements (UCS blades). ViPR Controller system administrators can manually assign specific blades to a pool, or define qualifiers that allow ViPR Controller to automatically assign blades to a pool based on the criteria of the qualifier.
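A minimal sketch of creating a qualifier-based compute virtual pool through the REST API, assuming illustrative endpoint and field names (they are not the documented interface):

    import requests

    VIPR = "https://vipr.example.com:4443"   # assumed ViPR Controller endpoint
    TOKEN = "<auth-token>"                    # obtained from a prior /login call

    # Illustrative compute virtual pool that qualifies blades automatically.
    pool = {
        "name": "gold-compute-pool",
        "description": "Blades with at least 16 cores and 128 GB RAM",
        "use_matched_elements": True,     # let ViPR Controller qualify blades by the criteria
        "min_cpu_cores": 16,
        "min_memory": 131072,             # MB
        "varrays": ["urn:storageos:VirtualArray:example"],
    }
    resp = requests.post(VIPR + "/compute/vpools",
                         json=pool,
                         headers={"X-SDS-AUTH-TOKEN": TOKEN},
                         verify=False)
    print(resp.status_code, resp.text)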

A service profile template is also assigned to each compute virtual pool. Service profiles are associated with blades to apply the required settings. Additionally, UCS has the concepts of service profile templates (SPTs) and updating service profile templates (uSPTs), which must be set up by UCS administrators. These service profile templates can be used by non-admin users to create the service profiles that turn a blade into a host.

ViPR Controller does not perform the functions of the UCS administrator; rather, ViPR Controller uses service profile templates to assign the required properties to blades. A UCS administrator must create the service profile templates that ViPR Controller can use to provision servers and hosts.

When a Vblock system provisioning service is run, ViPR Controller pulls the resources from the compute virtual pool selected in the service, creates a cluster from the blades in the virtual pool, and applies the same service profile template settings to each of those blades.

For more details about compute virtual pools, see ViPR Controller Concepts, which is available from the ViPR Controller Product Documentation Index.


Vblock storage systems

Vblock storage systems are virtualized in the ViPR Controller block or file virtual pools, and in the virtual array.

Block or File virtual pools

Block and file virtual pools are storage pools grouped together according to criteria defined by the ViPR Controller system administrator. A block or file virtual pool can consist of storage pools from a single storage system, or storage pools from different storage systems, as long as the storage pools meet the criteria defined for the virtual pool. A block or file virtual pool can also be shared across different virtual arrays.

Vblock systems require storage from block virtual pools for the boot LUN when ViPR Controller is used to install an operating system on a Vblock compute system. Once the hosts are operational, ViPR Controller can use storage from any connected Vblock storage pools and export that storage to the hosts.
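A minimal sketch of a block virtual pool suitable for boot LUNs, created through the REST API; the endpoint path and field names are illustrative assumptions, not the documented API:

    import requests

    VIPR = "https://vipr.example.com:4443"   # assumed ViPR Controller endpoint
    TOKEN = "<auth-token>"                    # obtained from a prior /login call

    # Illustrative block virtual pool intended to back ESX boot LUNs.
    vpool = {
        "name": "vblock-boot-pool",
        "description": "Boot LUNs for Vblock compute provisioning",
        "protocols": ["FC"],                  # boot-from-SAN over Fibre Channel
        "provisioning_type": "Thin",
        "varrays": ["urn:storageos:VirtualArray:example"],
    }
    resp = requests.post(VIPR + "/block/vpools",
                         json=vpool,
                         headers={"X-SDS-AUTH-TOKEN": TOKEN},
                         verify=False)
    print(resp.status_code)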


Vblock networks in the virtual array

Connectivity between the Vblock storage system and the Vblock compute system is defined by the networks in the virtual array. The storage system and the Vblock compute systems that will be managed together must be on the same VSAN in the ViPR Controller virtual array.
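The following sketch illustrates the idea of placing a VSAN-backed network into a virtual array so that the compute and storage systems on that VSAN share connectivity; the endpoints, payloads, and URNs are illustrative assumptions, not the documented API:

    import requests

    VIPR = "https://vipr.example.com:4443"   # assumed ViPR Controller endpoint
    TOKEN = "<auth-token>"                    # obtained from a prior /login call
    headers = {"X-SDS-AUTH-TOKEN": TOKEN}

    # Create a virtual array, then place the VSAN-backed network into it.
    varray = requests.post(VIPR + "/vdc/varrays",
                           json={"name": "vblock-varray-a"},
                           headers=headers, verify=False).json()

    # Hypothetical network URN for VSAN 20; real URNs come from SAN switch discovery.
    network_id = "urn:storageos:Network:vsan20-example"
    resp = requests.put(VIPR + "/vdc/networks/" + network_id,
                        json={"varrays": {"add": [varray.get("id")]}},
                        headers=headers, verify=False)
    print(resp.status_code)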


Examples of virtual arrays for Vblock systems

ViPR Controller provides flexibility in how you manage Vblock system resources, depending on how you configure the Vblock systems in ViPR Controller virtual arrays and how you create virtual pools with Vblock system resources:


Traditional Vblock system

In this example, two Vblock systems are defined in two different virtual arrays.

Two Vblock systems configured in two virtual arrays

The two virtual arrays are isolated from each other by the physical connectivity.

  • Virtual Array A is defined by VSAN 20 from SAN Switch A and VSAN 21 from SAN Switch B.
  • Virtual Array B is defined by VSAN 20 from SAN Switch X and VSAN 21 from SAN Switch Y.
Note: While the UCS is not included in the virtual array, the networks that are defined in the virtual array will determine the UCS visibility to the ViPR Controller storage.
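Expressed as a simple mapping, the isolation in this example looks like the following (names and VSAN numbers are taken from the bullets above):

    # Connectivity from the example above, expressed as a simple mapping
    # of virtual array -> VSAN -> SAN switch.
    virtual_arrays = {
        "Virtual Array A": {"VSAN 20": "SAN Switch A", "VSAN 21": "SAN Switch B"},
        "Virtual Array B": {"VSAN 20": "SAN Switch X", "VSAN 21": "SAN Switch Y"},
    }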


Multiple Vblock systems configured in a single ViPR Controller virtual array

In this example, two Vblock systems are configured in a single virtual array, and all of the compute systems and storage systems communicate across the same VSANs: VSAN 20 and VSAN 21.

With this architecture, you could allow ViPR Controller to automate resource placement during provisioning using:

  • A single compute virtual pool to allow ViPR Controller to determine compute placement.
  • A single block virtual pool to allow ViPR Controller to determine storage placement.

Virtual array configured for automatic resource placement by ViPR Controller


Manual resource management with ViPR Controller virtual pools

You can also more granularly manage resource placement during provisioning by:

  • Creating multiple compute virtual pools, and manually assigning compute elements to each compute virtual pool.
  • Creating multiple storage virtual pools, and manually assigning storage pools to each storage virtual pool.

During provisioning, the desired targets can be specified to ensure that only the resources you need are used for your provisioning operation.

Manual resource management with virtual pools


Tenant isolation through virtual array networks

You can allocate Vblock system resources by assigning the virtual array VSANs to different tenants. In the following example:

  • Tenant A will only have visibility to the resources on VSANs 20 and 21.
  • Tenant B will only have visibility to the resources on VSANs 30 and 31.
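Summarized as a simple mapping, the tenant-to-VSAN assignment in this example is:

    # Tenant isolation by VSAN, as described in the example above.
    tenant_networks = {
        "Tenant A": ["VSAN 20", "VSAN 21"],
        "Tenant B": ["VSAN 30", "VSAN 31"],
    }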

When using tenant isolation of networks, separate service profile templates would need to be defined for the compute pools used for each network.

Resource management with tenant isolation of VSANs

For details about ViPR Controller tenant functionality, see Understanding ViPR Controller Multi-Tenant Configuration, which is available from the ViPR Controller Product Documentation Index.


Tenant isolation through compute virtual pools

Tenant isolation using compute pools is achieved by creating compute virtual pools with the compute resources (blades) dedicated to a specific tenant such as HR or Finance.

Tenant ACLs allow ViPR Controller to restrict access to, and visibility of, compute virtual pools outside of a user's tenancy.
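A minimal sketch of assigning a tenant ACL to a compute virtual pool through the REST API, assuming illustrative URNs, payload shape, and endpoint path (they are not the documented interface):

    import requests

    VIPR = "https://vipr.example.com:4443"   # assumed ViPR Controller endpoint
    TOKEN = "<auth-token>"                    # obtained from a prior /login call

    pool_id = "urn:storageos:ComputeVirtualPool:hr-example"   # hypothetical URNs
    tenant_id = "urn:storageos:TenantOrg:hr-example"

    # Assumed ACL assignment restricting the compute virtual pool to one tenant.
    acl = {"add": [{"privilege": ["USE"], "tenant": tenant_id}]}
    resp = requests.put(VIPR + "/compute/vpools/" + pool_id + "/acl",
                        json=acl,
                        headers={"X-SDS-AUTH-TOKEN": TOKEN},
                        verify=False)
    print(resp.status_code)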

When defining tenant isolation through compute virtual pools, the service profile templates can still be shared since network isolation is not an issue.

Resource management through tenant isolation of compute virtual pools
