ViPR Controller 2.3 - Support for VCE Vblock™ Systems
You can also:
- Learn how Vblock system components are added to ViPR Controller, and then discovered and registered by ViPR Controller.
- Learn how Vblock system components are virtualized in ViPR Controller, and review examples of Vblock systems virtualized in ViPR Controller.
ViPR block and file storage services
Additionally, ViPR Controller Block and File Storage services can be used to manage Vblock storage systems. For more information about services, see the ViPR Controller Service Catalog Reference Guide, which is available from the ViPR Controller Product Documentation Index.
ViPR Controller operations not supported for Vblock systems
ViPR Controller does not support ingestion of Vblock compute system blades that are in use outside of ViPR Controller management. Those blades are discovered as unavailable to ViPR Controller and are not used for Vblock system provisioning or decommissioning services. However, you can add those hosts to the ViPR Controller physical assets; ViPR Controller then discovers the hosts through the operating system layer and can export storage to them.
ViPR Controller does not automate layer 2 network configuration. Layer 2 network configuration is managed by the UCS service profile templates assigned to the compute virtual pool.
- Compute: Cisco Unified Computing System™ (UCS)
- Storage: EMC Storage System
- A pair of Cisco SAN switches
- A pair of LAN switches, when the UCS is not plugged directly into a customer network
- Virtualization: VMware vSphere®
See the VCE Vblock System Release Certification Matrix for a list of supported Vblock systems and system components.
See the ViPR Controller Support Matrix on the EMC Community Network (community.emc.com) for the Vblock system component versions supported by ViPR Controller.
ViPR Controller discovers each Vblock system component as an individual ViPR Controller physical asset. The connectivity between the Vblock system components is determined within the context of the ViPR Controller virtual array. When virtual arrays are created, ViPR Controller determines which compute systems have storage connectivity through the virtual array definition. The virtual arrays are used when defining compute virtual pools and during provisioning to understand connectivity of the Vblock system components.
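The connectivity determination described above can be pictured as a set intersection over the networks (VSANs) a virtual array contains. The following Python sketch is illustrative only — the three-way intersection is a simplifying assumption, not ViPR Controller's actual algorithm, which also accounts for switch ports, fabrics, and registration state:

```python
# Illustrative sketch only: models virtual-array connectivity as VSAN overlap.
def has_connectivity(varray_vsans, compute_vsans, storage_vsans):
    """A compute system and a storage system are usable together through a
    virtual array only if both reach at least one VSAN in that array."""
    shared = set(varray_vsans) & set(compute_vsans) & set(storage_vsans)
    return len(shared) > 0

# A virtual array containing VSANs 20 and 21 (example numbers):
print(has_connectivity({20, 21}, {20}, {20, 30}))  # True
print(has_connectivity({20, 21}, {30}, {20}))      # False
```

A compute virtual pool built against this virtual array would then only offer blades for which this check succeeds against at least one storage system.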
For more information, refer to How Vblock system components are virtualized in ViPR Controller.
Ingestion of compute elements
Upon discovery of an ESX host or cluster, ViPR Controller discovers the compute element UUID, which allows ViPR Controller to identify the linkage between the host or cluster and its compute elements (blades). When a host or cluster is then decommissioned through ViPR Controller, ViPR Controller marks the compute elements as available for use in other service operations.
Compute elements (blades) that are found to be in use are automatically set to unregistered to prevent ViPR Controller from disturbing them.
Optionally, you can deregister physical assets that you want to remain visible in ViPR Controller but not be used as resources, or you can delete resources from ViPR Controller entirely.
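The discovery and registration behavior described above can be sketched as a small state model. This is a hypothetical illustration — the class and attribute names are invented for this sketch and mirror the UI terminology, not product code:

```python
# Hypothetical sketch of the discovery and registration states described above.
class ComputeElement:
    def __init__(self, name, in_use=False):
        self.name = name
        # A blade discovered already in use is automatically unregistered
        # so ViPR Controller does not disturb it.
        self.registered = not in_use

    def deregister(self):
        """Keep the blade visible in ViPR Controller, but make it
        unusable as a provisioning resource."""
        self.registered = False

blade = ComputeElement("blade-1", in_use=True)
print(blade.registered)  # False: found in use, so left alone
```

Deletion, by contrast, removes the asset from ViPR Controller entirely rather than just flagging it unusable.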
Vblock virtualization component
Vblock systems are provided with VMware vSphere® ESX installation files. These files are added to the ViPR Controller physical assets as Compute Images, and do not need to be discovered or registered by ViPR Controller. Once added, the Compute Images can be used by ViPR Controller for ESX installation, or they can be deleted from ViPR Controller.
Compute virtual pools
A compute virtual pool is a group of compute elements (UCS blades). ViPR Controller system administrators can manually assign specific blades to a pool, or define qualifiers that allow ViPR Controller to automatically assign blades to a pool based on the qualifier's criteria.
Service profile templates are also assigned to a compute virtual pool. Service profiles are associated with blades to assign the required settings. Additionally, UCS has the concepts of service profile templates (SPTs) and updating service profile templates (uSPTs), which must be set up by UCS administrators. These service profile templates can be used by non-admin users to create the service profiles that turn a blade into a host.
ViPR Controller does not perform the functions of the UCS administrator; rather, it uses service profile templates to assign the required properties to blades. A UCS administrator must create the service profile templates that ViPR Controller uses to provision servers and hosts.
When a Vblock system provisioning service runs, ViPR Controller pulls resources from the compute virtual pool selected in the service, creates a cluster from the blades in that pool, and applies the same service profile template settings to each blade.
For more details about compute virtual pools, see ViPR Controller Concepts, which is available from the ViPR Controller Product Documentation Index.
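Qualifier-based auto-assignment can be sketched as a simple filter over the discovered blades. The criteria shown (core count, memory) are assumed example qualifiers for illustration; the actual ViPR Controller qualifiers and their schema may differ:

```python
# Illustrative only: auto-assignment of blades to a compute virtual pool
# by qualifier. Criteria names are example assumptions, not the product schema.
def build_pool(blades, min_cores, min_memory_gb):
    """Return the blades that satisfy the pool's qualifier criteria."""
    return [b for b in blades
            if b["cores"] >= min_cores and b["memory_gb"] >= min_memory_gb]

blades = [
    {"name": "blade-1", "cores": 16, "memory_gb": 256},
    {"name": "blade-2", "cores": 8,  "memory_gb": 64},
]
pool = build_pool(blades, min_cores=8, min_memory_gb=128)
print([b["name"] for b in pool])  # ['blade-1']
```

Manual assignment corresponds to naming specific blades instead of filtering; either way, the provisioning service draws only from the resulting pool.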
Block or File virtual pools
Block and file virtual pools are storage pools grouped together according to criteria defined by the ViPR Controller system administrator. A block or file virtual pool can consist of storage pools from a single storage system, or from different storage systems, as long as each storage pool meets the criteria defined for the virtual pool. The block or file virtual pool can also be shared across different virtual arrays.
Vblock systems require storage from block virtual pools for the boot LUN when ViPR Controller is used to install an operating system on a Vblock compute system. Once the hosts are operational, ViPR Controller can use storage from any connected Vblock storage pools and export storage to those hosts.
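The matching rule described above — storage pools from different storage systems can join one block virtual pool as long as each meets the virtual pool's criteria — can be sketched as follows. The criteria keys (drive type, RAID level) and system names are illustrative assumptions, not the exact ViPR Controller attribute set:

```python
# Sketch, not product code: pools from different storage systems qualify for
# the same block virtual pool when each meets every criterion.
def matches(storage_pool, criteria):
    """True when the storage pool meets all criteria of the virtual pool."""
    return all(storage_pool.get(k) == v for k, v in criteria.items())

pools = [
    {"system": "VMAX-1", "drive_type": "FC",   "raid": "RAID5"},
    {"system": "VNX-1",  "drive_type": "FC",   "raid": "RAID5"},
    {"system": "VNX-1",  "drive_type": "SATA", "raid": "RAID6"},
]
virtual_pool = [p for p in pools
                if matches(p, {"drive_type": "FC", "raid": "RAID5"})]
print(len(virtual_pool))  # 2: one qualifying pool from each storage system
```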
- Traditional Vblock system
- Multiple Vblock systems configured in a single virtual array for:
- Tenant isolation in the virtual array
The two virtual arrays are isolated from each other by the physical connectivity.
- Virtual Array A is defined by VSAN 20 from SAN Switch A and VSAN 21 from SAN Switch B.
- Virtual Array B is defined by VSAN 20 from SAN Switch X and VSAN 21 from SAN Switch Y.
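The physical isolation of the two virtual arrays can be expressed as an empty intersection of their (switch, VSAN) pairs — a minimal sketch using the example definitions above:

```python
# Example data only: two virtual arrays, each defined by one VSAN on each
# of its two SAN switches, as in the configuration described above.
virtual_arrays = {
    "Virtual Array A": {("SAN Switch A", 20), ("SAN Switch B", 21)},
    "Virtual Array B": {("SAN Switch X", 20), ("SAN Switch Y", 21)},
}

def isolated(a, b):
    """Two virtual arrays are physically isolated when they share no
    (switch, VSAN) pair."""
    return not (virtual_arrays[a] & virtual_arrays[b])

print(isolated("Virtual Array A", "Virtual Array B"))  # True
```

Note that the VSAN numbers alone may repeat across arrays; it is the switch-and-VSAN pairing that establishes the isolation.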
With this architecture, you can allow ViPR Controller to automate resource placement during provisioning by using:
- A single compute virtual pool, letting ViPR Controller determine compute placement.
- A single block virtual pool, letting ViPR Controller determine storage placement.
Alternatively, you can control placement yourself by:
- Creating multiple compute virtual pools, and manually assigning compute elements to each compute virtual pool.
- Creating multiple storage virtual pools, and manually assigning storage pools to each storage virtual pool.
During provisioning, you can specify the desired targets to ensure that only the resources you need are used for your provisioning operation.
- Tenant A only has visibility to the resources on VSANs 20 and 21.
- Tenant B only has visibility to the resources on VSANs 30 and 31.
When using tenant isolation of networks, separate service profile templates must be defined for the compute virtual pools used for each network.
For details about ViPR Controller tenant functionality, see Understanding ViPR Controller Multi-Tenant Configuration, which is available from the ViPR Controller Product Documentation Index.
Tenant ACLs allow ViPR Controller to restrict access to, and visibility of, compute virtual pools outside of a user's tenancy.
When defining tenant isolation through compute virtual pools, the service profile templates can still be shared, since network isolation is not an issue.