ViPR 2.2 - Support for VCE Vblock™ Systems
ViPR and VCE Vblock system administrators and users can:
- Learn about the Vblock system operations that can be automated using ViPR services.
- Learn how Vblock system components are added to ViPR, and then discovered and registered by ViPR.
- Learn how Vblock system components are virtualized in ViPR.
- Review examples of Vblock systems virtualized in ViPR.
ViPR block and file storage services
ViPR Block and File Storage services can also be used to manage Vblock storage systems. Refer to the following articles for more information:
- What are the Service Catalog Block Storage Provisioning Services?
- What are the ViPR File Provisioning and Protection Services?
ViPR operations not supported for Vblock systems
ViPR does not support ingestion of Vblock compute system blades that are being used outside of ViPR management. Those blades are discovered as unavailable to ViPR, and will not be used by ViPR for Vblock system provisioning or decommissioning services. However, you can add those hosts to the ViPR physical assets; ViPR will discover the hosts through the operating system layer, and can then export storage to them.
ViPR does not automate layer 2 network configuration. Layer 2 network configuration is managed by the UCS service profile templates assigned to the compute virtual pool.
A Vblock system consists of the following components:
- Compute: Cisco Unified Computing System™ (UCS)
- Storage: EMC Storage System
- Pair of Cisco SAN switches
- A pair of LAN switches when the UCS will not be plugged directly into a customer network
- Virtualization: VMware vSphere®
See the VCE Vblock System Release Certification Matrix for a list of supported Vblock systems and system components.
See the ViPR Support Matrix on the EMC Community Network (community.emc.com) for the Vblock system component versions supported by ViPR.
ViPR discovers each Vblock system component as an individual ViPR physical asset. The connectivity between the Vblock system components is determined within the context of the ViPR virtual array. When virtual arrays are created, ViPR determines which compute systems have storage connectivity through the virtual array definition. The virtual arrays are used when defining compute virtual pools and during provisioning to understand connectivity of the Vblock system components.
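The connectivity check can be pictured with a simplified model (the function and data shapes below are illustrative only, not the actual ViPR data model): a virtual array carries a set of networks, and a compute system has storage connectivity through the array when at least one network appears in both definitions.

```python
# Illustrative model of virtual-array connectivity (not the real ViPR schema).
def has_connectivity(virtual_array_networks, compute_system_networks):
    """A compute system is reachable through a virtual array when at
    least one network (for example, a VSAN) appears in both definitions."""
    return bool(set(virtual_array_networks) & set(compute_system_networks))

# Virtual Array A carries VSANs 20 and 21; a UCS uplinked into VSAN 20
# has connectivity, while one on VSAN 30 does not.
print(has_connectivity({20, 21}, {20}))  # True
print(has_connectivity({20, 21}, {30}))  # False
```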
Refer to How Vblock system components are virtualized in ViPR for more information.
Optionally, you can deregister physical assets that you want visible in ViPR but do not want ViPR to use as resources, or delete them from ViPR entirely.
Vblock virtualization component
Vblock systems are provided with VMware vSphere ESX installation files. These files are added to the ViPR physical assets as Compute Images, and do not need to be discovered by ViPR. Once added, the Compute Images are automatically registered by ViPR.
Compute virtual pools
Compute virtual pools are groups of compute system elements (UCS blades). ViPR system administrators can manually assign specific blades to a pool, or define qualifiers that allow ViPR to automatically assign blades to a pool based on the qualifier's criteria.
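Qualifier-based assignment can be sketched as a simple filter. This is a hypothetical illustration; the attribute names (`cpu_cores`, `memory_gb`) are stand-ins, not ViPR's actual blade attributes.

```python
# Hypothetical sketch of qualifier-based blade assignment to a compute
# virtual pool; field names are illustrative, not ViPR's real attributes.
def assign_blades(blades, qualifier):
    """Return the blades that satisfy every criterion in the qualifier."""
    return [b for b in blades
            if b["cpu_cores"] >= qualifier["min_cpu_cores"]
            and b["memory_gb"] >= qualifier["min_memory_gb"]]

blades = [
    {"name": "chassis1/blade1", "cpu_cores": 16, "memory_gb": 256},
    {"name": "chassis1/blade2", "cpu_cores": 8,  "memory_gb": 128},
]
pool = assign_blades(blades, {"min_cpu_cores": 12, "min_memory_gb": 192})
print([b["name"] for b in pool])  # ['chassis1/blade1']
```

Blades that later stop meeting the criteria, or new blades that start meeting them, would simply fall out of or into the filtered result on the next discovery cycle.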
Service profile templates are also assigned to compute virtual pools. In UCS, a service profile is associated with a blade to apply the required settings; UCS administrators can set up service profile templates, which non-admin users can then use to create the service profiles that turn a blade into a bare-metal server.
ViPR does not perform the functions of the UCS administrator; rather, ViPR uses these service profile templates to assign the required properties to blades. A UCS administrator must create the service profile templates that ViPR can use to provision servers and hosts.
When a Vblock system provisioning service is run, ViPR pulls resources from the compute virtual pool selected in the service, creates a cluster from blades in that pool, and applies the same service profile template settings to each blade.
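The provisioning sequence described above can be sketched as follows. All names here are illustrative stand-ins for the workflow, not ViPR API calls.

```python
# Rough sketch of the Vblock provisioning sequence; illustrative only.
def provision_cluster(pool_blades, service_profile_template, count):
    """Pull `count` blades from the compute virtual pool and build a
    cluster, applying the same service profile template to each blade."""
    if len(pool_blades) < count:
        raise RuntimeError("compute virtual pool has too few free blades")
    cluster = []
    for blade in pool_blades[:count]:
        # Each blade gets a service profile derived from the same template,
        # so every host in the cluster receives identical settings.
        cluster.append({"blade": blade, "profile": service_profile_template})
    return cluster

hosts = provision_cluster(["blade1", "blade2", "blade3"], "esx-template", 2)
print(len(hosts))  # 2
```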
For more details about compute virtual pools, see the What is a ViPR Compute Virtual Pool? article.
Block or File virtual pools
Block and file virtual pools are storage pools grouped together according to the criteria defined by the ViPR system administrator. Block and file virtual pools can consist of storage pools from a single storage system, or storage pools from different storage systems, as long as each pool meets the criteria defined for the virtual pool. A block or file virtual pool can also be shared across different virtual arrays.
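The grouping rule can be illustrated with a small filter: pools from different storage systems land in the same virtual pool whenever each one meets the virtual pool's criteria. The criteria fields below (`drive_type`, `raid_level`) are assumed for the example, not ViPR's exact attribute names.

```python
# Illustrative only: pools from different arrays can land in one virtual
# pool when each meets the virtual pool's criteria.
def build_virtual_pool(storage_pools, criteria):
    return [p for p in storage_pools
            if p["drive_type"] == criteria["drive_type"]
            and p["raid_level"] in criteria["raid_levels"]]

pools = [
    {"system": "VMAX-1", "drive_type": "SSD", "raid_level": "RAID5"},
    {"system": "VNX-2",  "drive_type": "SSD", "raid_level": "RAID5"},
    {"system": "VNX-2",  "drive_type": "SAS", "raid_level": "RAID6"},
]
vpool = build_virtual_pool(pools, {"drive_type": "SSD", "raid_levels": {"RAID5"}})
print(sorted({p["system"] for p in vpool}))  # ['VMAX-1', 'VNX-2']
```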
Vblock systems require storage from a block virtual pool for the boot LUN when ViPR will be used to install an operating system on a Vblock compute system. Once the hosts are operational, ViPR can use storage from any connected Vblock storage pool to export storage to those hosts.
- Traditional Vblock system
- Multiple Vblock systems configured in a single virtual array for:
- Tenant isolation in a virtual array through networks or compute virtual pools
The two virtual arrays are isolated from each other by the physical connectivity.
- Virtual Array A is defined by VSAN 20 from SAN Switch A and VSAN 21 from SAN Switch B.
- Virtual Array B is defined by VSAN 20 from SAN Switch X and VSAN 21 from SAN Switch Y.
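Note that the two arrays can reuse the same VSAN IDs because a network is identified by the switch it lives on as well as its VSAN ID. A minimal sketch of that identity rule (illustrative, not the ViPR data model):

```python
# Sketch: a network is identified by (switch, VSAN id), so VSAN 20 on
# SAN Switch A and VSAN 20 on SAN Switch X are distinct networks, and
# the two virtual arrays share no connectivity.
array_a = {("SAN Switch A", 20), ("SAN Switch B", 21)}
array_b = {("SAN Switch X", 20), ("SAN Switch Y", 21)}
print(array_a & array_b)  # set() -- physically isolated
```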
With this architecture, you could allow ViPR to automate resource placement during provisioning using:
- A single compute virtual pool to allow ViPR to determine compute placement.
- A single block virtual pool to allow ViPR to determine storage placement.
Alternatively, you can control resource placement yourself by:
- Creating multiple compute virtual pools, and manually assigning compute elements to each compute virtual pool.
- Creating multiple storage virtual pools, and manually assigning storage pools to each storage virtual pool.
During provisioning, the desired targets can be specified to ensure that only the resources you need are used for your provisioning operation.
- Tenant A will only have visibility to the resources on VSANs 20 and 21.
- Tenant B will only have visibility to the resources on VSANs 30 and 31.
When using tenant isolation of networks, separate service profile templates must be defined for the compute virtual pools used for each network.
See Understanding ViPR Multi-Tenant Configuration for details about ViPR tenant functionality.
Tenant ACLs allow ViPR to restrict access and visibility to a compute virtual pool for users outside of the pool's tenancy.
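The effect of a tenant ACL can be illustrated with a simple visibility filter. This is a conceptual sketch, not ViPR's implementation; the `acl` field and pool names are invented for the example.

```python
# Illustrative ACL check: a compute virtual pool is visible only to
# tenants listed in its ACL. Not the actual ViPR implementation.
def visible_pools(pools, tenant):
    return [p["name"] for p in pools if tenant in p["acl"]]

pools = [
    {"name": "gold-pool",   "acl": {"Tenant A"}},
    {"name": "silver-pool", "acl": {"Tenant A", "Tenant B"}},
]
print(visible_pools(pools, "Tenant B"))  # ['silver-pool']
```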
When defining tenant isolation through compute virtual pools, the service profile templates can still be shared, since network isolation is not an issue.