ViPR 2.2 - Support for VCE Vblock™ Systems

Overview

ViPR and VCE Vblock system administrators and users can use this article to:

  • Learn about the Vblock system operations that can be automated using ViPR services.
  • Learn how Vblock system components are added to ViPR, and then discovered and registered by ViPR.
  • Learn how Vblock system components are virtualized in ViPR.
  • Review examples of Vblock systems virtualized in ViPR.

ViPR services to manage Vblock systems

ViPR services automate the following operations on Vblock systems that have been virtualized in ViPR:

ViPR block and file storage services

Additionally, ViPR Block and File Storage services can be used to manage Vblock storage systems. Refer to the following articles for more information:

ViPR operations not supported for Vblock systems

ViPR does not support ingestion of Vblock compute system blades that are being used outside of ViPR management. Such blades are discovered as unavailable to ViPR, and will not be used by ViPR Vblock system provisioning or decommissioning services. However, you can add those hosts to the ViPR physical assets; ViPR will discover the hosts through the operating system layer, and can then export storage to them.
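
If you take this route, the host is added like any other host in the ViPR physical assets. The following Python sketch illustrates the idea against the ViPR REST API; the endpoint path, payload fields, and authentication header usage are assumptions for illustration, not a confirmed API reference.

    import requests

    VIPR = "https://vipr.example.com:4443"       # hypothetical ViPR endpoint
    HDRS = {"X-SDS-AUTH-TOKEN": "<auth-token>"}  # token from a prior login call

    # Register an externally managed ESX host as a ViPR physical asset so
    # ViPR can discover it through the operating system layer.
    host = {
        "name": "esx-host-01",
        "host_name": "esx-host-01.example.com",
        "type": "Esx",
        "user_name": "root",
        "password": "<password>",
    }
    resp = requests.post(VIPR + "/compute/hosts", json=host, headers=HDRS,
                         verify=False)  # lab only; verify certificates in production
    resp.raise_for_status()
    print(resp.json())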

ViPR does not automate layer 2 network configuration. Layer 2 network configuration is managed by the UCS service profile templates assigned to the compute virtual pool.

How Vblock systems are discovered by ViPR

A Vblock system is a converged hardware system from VCE (VMware®, Cisco®, and EMC®) that is sold as a single unit consisting of the following components:

Vblock system

See the VCE Vblock System Release Certification Matrix for a list of Vblock systems and system component support.

See the ViPR Support Matrix on the EMC Community Network (community.emc.com) for the Vblock system component versions supported by ViPR.

Add Vblock system components to ViPR physical assets

For ViPR to discover the Vblock system, you must add each Vblock system component to the ViPR physical assets:

  • The Vblock compute system (UCS).
  • The Vblock storage systems.
  • The SAN switches that provide the Vblock networks.
  • The compute images (VMware vSphere ESX installation files).
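
As a sketch of what this looks like programmatically, the loop below registers one component of each type through the ViPR REST API. Every endpoint path and payload field here is an assumption for illustration; consult the ViPR REST API reference for the actual calls.

    import requests

    VIPR = "https://vipr.example.com:4443"
    HDRS = {"X-SDS-AUTH-TOKEN": "<auth-token>"}

    # One entry per Vblock component type; paths and fields are hypothetical.
    components = [
        ("/vdc/compute-systems",   {"name": "ucs-1",  "ip_address": "10.0.0.10",
                                    "user_name": "admin", "password": "<pw>"}),
        ("/vdc/network-systems",   {"name": "mds-a",  "ip_address": "10.0.0.20",
                                    "user_name": "admin", "password": "<pw>"}),
        ("/vdc/storage-providers", {"name": "smis-1", "ip_address": "10.0.0.30",
                                    "user_name": "admin", "password": "<pw>"}),
    ]
    for path, payload in components:
        r = requests.post(VIPR + path, json=payload, headers=HDRS, verify=False)
        r.raise_for_status()
        print(path, "->", r.json().get("id"))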

ViPR discovery of Vblock system components

Once the Vblock system components are added to ViPR, ViPR automatically discovers the components and the component resources as follows:

ViPR discovers each Vblock system component as an individual ViPR physical asset. The connectivity between the Vblock system components is determined within the context of the ViPR virtual array. When virtual arrays are created, ViPR determines which compute systems have storage connectivity through the virtual array definition. The virtual arrays are used when defining compute virtual pools and during provisioning to understand connectivity of the Vblock system components.
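
Discovery runs automatically once a component is added, but it is useful to wait for it to finish before building virtual arrays. A minimal polling sketch, assuming a hypothetical asset endpoint and a job_discovery_status field:

    import time
    import requests

    VIPR = "https://vipr.example.com:4443"
    HDRS = {"X-SDS-AUTH-TOKEN": "<auth-token>"}

    def wait_for_discovery(asset_path, interval=10):
        # Poll the asset until its discovery status leaves the in-progress
        # state; both the path and the field name are assumptions.
        while True:
            asset = requests.get(VIPR + asset_path, headers=HDRS,
                                 verify=False).json()
            status = asset.get("job_discovery_status")
            if status != "IN_PROGRESS":
                return status
            time.sleep(interval)

    print(wait_for_discovery("/vdc/compute-systems/<id>"))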

Refer to How Vblock system components are virtualized in ViPR for more information.

ViPR registration of added and discovered physical assets

After a physical asset is successfully added and discovered, ViPR automatically registers the asset and all of its resources. Physical assets that are registered in ViPR are available for use as ViPR resources by ViPR services.

Optionally, you can deregister physical assets that you want visible in ViPR but do not want ViPR to use as resources, or you can delete resources from ViPR entirely.

Vblock virtualization component

Vblock systems are provided with VMware vSphere ESX installation files. These files are added to the ViPR physical assets as Compute Images, and do not need to be discovered by ViPR. Once added, the Compute Images are automatically registered by ViPR.

How Vblock system components are virtualized in ViPR

Once the Vblock system components have been added to the ViPR physical assets, you can begin to virtualize the components into virtual arrays and virtual pools.

Vblock compute systems

The Vblock compute system is virtualized in ViPR in both the compute virtual pools and the virtual array networks.

Compute virtual pools

Compute virtual pools are groups of compute system elements (UCS blades). ViPR system administrators can manually assign specific blades to a pool, or define qualifiers that allow ViPR to automatically assign blades to a pool based on the qualifier criteria.
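
For example, a qualifier-based compute virtual pool might be created as sketched below. The endpoint and every payload field are assumptions for illustration; the point is the contrast between qualifier-driven and manual blade assignment.

    import requests

    VIPR = "https://vipr.example.com:4443"
    HDRS = {"X-SDS-AUTH-TOKEN": "<auth-token>"}

    # Qualifier-based pool: ViPR auto-assigns any UCS blade matching the
    # criteria. Setting use_matched_elements to False (hypothetical field)
    # would instead require manually picking blades.
    pool = {
        "name": "gold-compute",
        "min_cpu_cores": 16,
        "min_memory_mb": 131072,
        "use_matched_elements": True,
        "service_profile_template": "<template-id>",
    }
    r = requests.post(VIPR + "/compute/vpools", json=pool, headers=HDRS,
                      verify=False)
    r.raise_for_status()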

Service profile templates are also assigned to a compute virtual pool. Service profiles are associated with blades to apply the required settings. The UCS has the concept of service profile templates, which are set up by UCS administrators; non-admin users can then use these templates to create the service profiles that turn a blade into a bare-metal server.

ViPR does not perform the functions of the UCS administrator; rather, ViPR uses service profile templates to assign the required properties to blades. A UCS administrator must create the service profile templates that ViPR uses to provision servers and hosts.

When a Vblock system provisioning service is run, ViPR pulls resources from the compute virtual pool selected in the service, creates a cluster from the blades in the pool, and applies the same service profile template settings to each blade in the pool.
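
This flow is typically triggered from the ViPR service catalog. As a hedged sketch, an order for a bare-metal cluster provisioning service might look like the following; the catalog path, service identifier, and parameter labels are all assumptions.

    import requests

    VIPR = "https://vipr.example.com:4443"
    HDRS = {"X-SDS-AUTH-TOKEN": "<auth-token>"}

    # Place a catalog order that provisions a cluster from a compute
    # virtual pool; names and parameters are illustrative only.
    order = {
        "catalog_service": "<provision-cluster-service-id>",
        "parameters": [
            {"label": "cluster_name",         "value": "finance-cluster"},
            {"label": "compute_virtual_pool", "value": "<compute-vpool-id>"},
            {"label": "virtual_array",        "value": "<varray-id>"},
            {"label": "number_of_hosts",      "value": "4"},
        ],
    }
    r = requests.post(VIPR + "/catalog/orders", json=order, headers=HDRS,
                      verify=False)
    r.raise_for_status()
    print("order id:", r.json().get("id"))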

For more details about compute virtual pools, see the What is a ViPR Compute Virtual Pool? article.

Vblock storage systems

Vblock storage systems are virtualized in the ViPR block or file virtual pools, and in the virtual array.

Block or File virtual pools

Block and file virtual pools are storage pools grouped together according to criteria defined by the ViPR system administrator. A block or file virtual pool can consist of storage pools from a single storage system, or from different storage systems, as long as each storage pool meets the criteria defined for the virtual pool. A block or file virtual pool can also be shared across different virtual arrays.

When ViPR will be used to install an operating system on a Vblock compute system, the boot LUN must be provisioned from storage in a block virtual pool. Once the hosts are operational, ViPR can use storage from any connected Vblock storage pools and export that storage to the hosts.
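
A block virtual pool intended for boot LUNs might be defined as in the sketch below; the endpoint and field names are assumptions, chosen to show that the pool is Fibre Channel and tied to a virtual array the compute systems can reach.

    import requests

    VIPR = "https://vipr.example.com:4443"
    HDRS = {"X-SDS-AUTH-TOKEN": "<auth-token>"}

    # Hypothetical definition of a block virtual pool for boot LUNs.
    vpool = {
        "name": "vblock-boot",
        "protocols": ["FC"],          # boot-from-SAN over Fibre Channel
        "provisioning_type": "Thin",
        "varrays": ["<varray-id>"],   # must be reachable from the UCS blades
    }
    r = requests.post(VIPR + "/block/vpools", json=vpool, headers=HDRS,
                      verify=False)
    r.raise_for_status()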

Vblock networks in the virtual array

Connectivity between the Vblock storage system and the Vblock compute system is defined by the networks in the virtual array. Storage systems and Vblock compute systems that will be managed together must be on the same VSAN in the ViPR virtual array.
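
The sketch below creates a virtual array and attaches the two VSANs that carry both compute and storage traffic. The paths and fields are hypothetical.

    import requests

    VIPR = "https://vipr.example.com:4443"
    HDRS = {"X-SDS-AUTH-TOKEN": "<auth-token>"}

    # Create the virtual array, then attach VSAN 20 and VSAN 21 to it so
    # the compute and storage systems share connectivity.
    varray = requests.post(VIPR + "/vdc/varrays", json={"name": "varray-a"},
                           headers=HDRS, verify=False).json()

    for vsan in ("VSAN_20", "VSAN_21"):
        r = requests.post(VIPR + "/vdc/networks",
                          json={"name": vsan, "transport_type": "FC",
                                "varrays": [varray["id"]]},
                          headers=HDRS, verify=False)
        r.raise_for_status()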

Examples of virtual arrays for Vblock systems

ViPR provides flexibility in how you manage Vblock system resources, depending on how you configure the Vblock systems in ViPR virtual arrays and how you create virtual pools from Vblock system resources:

Traditional Vblock system

In this example, two Vblock systems are defined in two different virtual arrays.

Two Vblock systems configured in two virtual arrays

The two virtual arrays are isolated from each other by the physical connectivity.

  • Virtual Array A is defined by VSAN 20 from SAN Switch A and VSAN 21 from SAN Switch B.
  • Virtual Array B is defined by VSAN 20 from SAN Switch X and VSAN 21 from SAN Switch Y.
Note: While the UCS is not included in the virtual array, the networks defined in the virtual array determine the UCS's visibility to ViPR storage.
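
Written out as data, the isolation in this example looks like the following sketch; this is not a ViPR file format, just the example above made explicit.

    # Each virtual array is defined by VSANs that live on physically
    # separate switches, so the arrays cannot see each other's storage.
    virtual_arrays = {
        "Virtual Array A": {"VSAN 20": "SAN Switch A",
                            "VSAN 21": "SAN Switch B"},
        "Virtual Array B": {"VSAN 20": "SAN Switch X",
                            "VSAN 21": "SAN Switch Y"},
    }
    # No switch is shared between the arrays, so provisioning against
    # Virtual Array A can never place resources on Virtual Array B.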

Multiple Vblock systems configured in a single ViPR virtual array

In this example, two Vblock systems are configured in a single virtual array, and all of the compute systems and storage systems communicate across the same VSANs: VSAN 20 and VSAN 21.

With this architecture, you could allow ViPR to automate resource placement during provisioning using:

  • A single compute virtual pool to allow ViPR to determine compute placement.
  • A single block virtual pool to allow ViPR to determine storage placement.

Virtual array configured for automatic resource placement by ViPR

Manual resource management with ViPR virtual pools

You can also manage resource placement during provisioning more granularly by:

  • Creating multiple compute virtual pools, and manually assigning compute elements to each compute virtual pool.
  • Creating multiple storage virtual pools, and manually assigning storage pools to each storage virtual pool.

During provisioning, you can specify the desired targets to ensure that only the resources you need are used for your provisioning operation.

Manual resource management with virtual pools

Tenant isolation through virtual array networks

You can allocate Vblock system resources to different tenants by assigning the virtual array VSANs to those tenants. In the following example:

  • Tenant A will only have visibility to the resources on VSANs 20 and 21.
  • Tenant B will only have visibility to the resources on VSANs 30 and 31.

When using tenant isolation of networks, separate service profile templates must be defined for the compute virtual pools used for each network.

Resource management with tenant isolation of VSANs

See Understanding ViPR Multi-Tenant Configuration for details about ViPR tenant functionality.

Tenant isolation through compute virtual pools

Tenant isolation using compute pools is achieved by creating compute virtual pools with the compute resources (blades) dedicated to a specific tenant such as HR or Finance.

Tenant ACLs allow ViPR to restrict access to, and visibility of, a compute virtual pool from users outside the tenant.
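
As a sketch, a tenant ACL on a compute virtual pool might be applied as follows; the endpoint, HTTP verb, and payload shape are assumptions for illustration.

    import requests

    VIPR = "https://vipr.example.com:4443"
    HDRS = {"X-SDS-AUTH-TOKEN": "<auth-token>"}

    # Grant the HR tenant exclusive use of a compute virtual pool;
    # users outside that tenant can no longer see or use the pool.
    acl = {"add": [{"privilege": ["USE"], "tenant": "<hr-tenant-id>"}]}
    r = requests.put(VIPR + "/compute/vpools/<vpool-id>/acl",
                     json=acl, headers=HDRS, verify=False)
    r.raise_for_status()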

When defining tenant isolation through compute virtual pools, the service profile templates can still be shared since network isolation is not an issue.

Resource management through tenant isolation of compute virtual pools
