ViPR 2.2 - Install EMC ViPR Controller on Hyper-V

Install EMC ViPR Controller on Hyper-V

Follow this step-by-step procedure to install ViPR Controller on Hyper-V and perform the initial setup.

Prerequisites and procedure for deploying ViPR Controller on Hyper-V

This section describes the prerequisites and the step-by-step procedure for installing the ViPR Controller virtual machine in a Hyper-V environment.

Before you begin

Procedure

  1. Log in to the SCVMM server using the Administrator account, and copy the ViPR Controller deployment ZIP file to the SCVMM server node.
  2. Extract the ZIP file.
  3. Open a PowerShell window and change to the directory where you extracted the files.
  4. Run the following command to import the ViPR Controller installation into the SCVMM library.
    .\vipr-release_version-uploadVirtualDisks.ps1 -librarypath librarypath
    Example: .\vipr-2.2.1.0.100-uploadVirtualDisks.ps1 -librarypath \\myVMMserver\MSSCVMMLibrary
  5. Run the following command once for each ViPR Controller virtual machine you are creating: 3 times for a 2+1 deployment, or 5 times for a 3+2 deployment. Enter a different value for -vmname each time (vipr1, vipr2, vipr3, and so on).
    .\vipr-release_version-createVirtualMachine.ps1 -vmhostname hyperv-server -vmpath vm-destination-folder -vmname viprn -VirtualSwitchName vswitch-name -VmNetworkName vm-network-name -vlanid id -disktype [dynamic|fixed]
    Option              Description
    -vmhostname         Name of the backend Hyper-V server.
    -vmpath             Path on the host machine where the VM files will be stored. Note: Make sure this path exists.
    -vmname             ViPR node name, such as vipr1, vipr2, vipr3, and so on.
    -VirtualSwitchName  Name of the virtual switch.
    -VmNetworkName      Name of the VM network.
    -vlanid             VLAN ID. Required if the VM network is configured with one or more VLANs; otherwise optional.
    -cpucount           Number of CPUs per virtual machine. Minimum and default is 2. Optional. Refer to the EMC ViPR Controller Support Matrix.
    -memory             Memory in MB per virtual machine. Minimum and default is 8192. Optional. Refer to the EMC ViPR Controller Support Matrix.
    -disktype           Type of virtual hard disk: dynamic or fixed. Use fixed for deployment in a production environment.
    Example: .\vipr-2.2.0.1.999-createVirtualMachine.ps1 -vmhostname myHyperVserver.xyz.com -vmpath D:\HyperV -vmname vipr1 -VirtualSwitchName vSwitch2 -VmNetworkName vSwitch2_VMNetwork -vlanid 96 -disktype fixed
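    Because the creation script must be run once per node with only -vmname changing, you can optionally wrap it in a loop. The sketch below assumes a 3+2 deployment; the host name, path, switch, network, and VLAN values are placeholders for your environment, and release_version must be replaced with your actual package version:

    ```powershell
    # Illustrative loop for a 3+2 (5-node) deployment; use 1..3 for a 2+1 deployment.
    # All parameter values below are examples; substitute your own environment's values.
    1..5 | ForEach-Object {
        .\vipr-release_version-createVirtualMachine.ps1 `
            -vmhostname myHyperVserver.xyz.com `
            -vmpath D:\HyperV `
            -vmname "vipr$_" `
            -VirtualSwitchName vSwitch2 `
            -VmNetworkName vSwitch2_VMNetwork `
            -vlanid 96 `
            -disktype fixed
    }
    ```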
  6. After you create the virtual machines, go to the SCVMM UI and power on the first ViPR Controller virtual machine.
  7. From the SCVMM UI, right-click the ViPR VM and select Connect or View > Connect via Console to start the ViPR installer.
  8. On the ViPR installer Cluster Configuration screen, select the configuration type (2+1 or 3+2) and the ViPR node id.
  9. On the ViPR installer Network Configuration screen, enter the network settings:
    Node n address
    One IPv4 address for the public network. Each Controller VM requires either a unique, static IPv4 address in the subnet defined by the netmask, or a unique static IPv6 address, or both.
    Note that an address conflict across different ViPR installations can result in ViPR database corruption that would need to be restored from a previous good backup.
    VIP
    IPv4 address used for UI and REST client access. Also known as the public virtual IP address.
    Netmask
    IPv4 netmask for the public network interface.
    Gateway
    IPv4 address for the public network gateway.
    Server n IPv6 address
    One IPv6 address for the public network. Each Controller VM requires either a unique, static IPv6 address in the subnet defined by the prefix length, or a unique static IPv4 address, or both.
    Note that an address conflict across different ViPR installations can result in ViPR database corruption that would need to be restored from a previous good backup.
    Public virtual IPv6 address
    IPv6 address used for UI and REST client access.
    IPv6 prefix length
    IPv6 prefix length. Default is 64.
    IPv6 default gateway
    IPv6 address for the public network gateway.
  10. On the ViPR installer Deployment Confirmation screen, confirm the settings. Do not reboot until all virtual machines are configured.
    Note: If deployment is interrupted at this point (for example, by a power outage, or if you rebooted too early), the previously deployed node or nodes will be left in a state that prevents a successful, full deployment. In this case, you must take some manual steps to correct the state. Refer to the workaround section below.
  11. Power on the next virtual machine. Select the Cluster VIP that you entered when you configured the previous virtual machine, then select Next.
  12. On the Cluster Configuration screen, select the appropriate node id (vipr2, or vipr3, etc.) then select Next.
  13. On the Deployment Confirmation screen, select Config. Wait for "Multicasting configuration" to appear on the screen.
  14. Repeat steps 11, 12, and 13 until all nodes have been configured.
  15. On each node, select Reboot. Perform this step only after all nodes have been configured.
  16. Wait a few minutes after powering on the nodes before you follow the next steps. This will give the ViPR Controller services time to start up.
  17. Open https://ViPR_virtual_ip with a supported browser and log in as root.
    Initial password is ChangeMe.
    The ViPR_virtual_IP is the ViPR public virtual IP address, also known as the network.vip (the IPv4 address) or the network.vip6 (IPv6). Either value, or the corresponding FQDN, can be used for the URL.
  18. Browse to and select the license file that was downloaded from the EMC license management web site, then Upload License.
  19. Enter new passwords for the root and system accounts.
    The passwords must meet these requirements:
    • at least 8 characters
    • at least 1 lowercase
    • at least 1 uppercase
    • at least 1 numeric
    • at least 1 special character
    • no more than 3 consecutive repeating characters
    • at least 2 characters changed from the previous password (settable)
    • must not match any of the last 3 passwords (settable)
    The ViPR root account has all privileges that are needed for initial configuration; it is also the same as the root user on the Controller VMs. The system accounts (sysmonitor, svcuser, and proxyuser) are used internally by ViPR.
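    As an optional local sanity check before submitting the form, the fixed rules above can be sketched in PowerShell. This is an illustrative helper, not part of the ViPR installer; the function name is hypothetical, and the two "settable" history rules are skipped because they require the previous passwords:

    ```powershell
    # Illustrative pre-check of a candidate password against the fixed rules above.
    # The two "settable" history rules are enforced by ViPR itself and are not checked here.
    function Test-ViprPasswordRules {
        param([string]$Password)
        ($Password.Length -ge 8) -and
        ($Password -cmatch '[a-z]') -and
        ($Password -cmatch '[A-Z]') -and
        ($Password -match '[0-9]') -and
        ($Password -match '[^A-Za-z0-9]') -and
        ($Password -notmatch '(.)\1{3}')   # rejects 4 or more identical characters in a row
    }
    ```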
  20. For DNS servers, enter two or three IPv4 or IPv6 addresses (not FQDNs), separated by commas.
  21. For NTP servers, enter two or three IPv4 or IPv6 addresses (not FQDNs), separated by commas.
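    Both the DNS and NTP fields take two or three comma-separated IP addresses, not FQDNs. If you want to sanity-check a list before entering it, a minimal sketch (the function name is hypothetical, and note that .NET's IPAddress parser is lenient with shorthand numeric forms) might look like:

    ```powershell
    # Check that a comma-separated server list contains two or three literal
    # IPv4 or IPv6 addresses (ViPR requires IP addresses here, not FQDNs).
    function Test-ViprServerList {
        param([string]$List)
        $entries = $List -split ',' | ForEach-Object { $_.Trim() }
        if ($entries.Count -lt 2 -or $entries.Count -gt 3) { return $false }
        foreach ($e in $entries) {
            $parsed = $null
            if (-not [System.Net.IPAddress]::TryParse($e, [ref]$parsed)) { return $false }
        }
        return $true
    }
    ```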
  22. Select a transport option for ConnectEMC (FTPS (default), SMTP, or none) and enter an email address (user@domain) for the ConnectEMC Service notifications.
    If you select the SMTP transport option, you must specify an SMTP server under SMTP settings in the next step. "None" disables ConnectEMC on the ViPR virtual appliance.
    In an IPv6-only environment, use SMTP for the transport protocol. (The ConnectEMC FTPS server is IPv4-only.)
  23. (Optional) Specify an SMTP server and port for notification emails (such as ConnectEMC alerts, ViPR approval emails), the encryption type (TLS/SSL or not), a From address, and authentication type (login, plain, CRAM-MD5, or none).
    Optionally test the settings and supply a valid addressee. The test email will be from the From Address you specified and will have a subject of "Mail Settings Test".
    If TLS/SSL encryption is used, the SMTP server must have a valid CA certificate.
  24. Select Finish.
    At this point ViPR Controller services restart. This can take several minutes.

After you finish

After installation, the ViPR UI opens to the dashboard. Refer to Step by step: Set up an EMC ViPR virtual data center to configure ViPR.

Workaround for interrupted Hyper-V deployment of ViPR Controller

If deployment is interrupted before all nodes are deployed, the previously deployed node or nodes will be left in a state that prevents a successful, full deployment. In this case, you must perform some manual steps to correct the state of the previous nodes, before completing deployment.

This workaround is required if deployment was interrupted, for example, because of a power outage, or if the Reboot option was selected too early in the deployment procedure. If the message window in the Deployment Configuration screen for a previously deployed node does not say "Multicasting", then follow these steps.

Procedure

  1. From the SCVMM UI, power off all nodes.
  2. From the SCVMM UI, right-click a previously deployed ViPR VM (start with vipr1) and select Connect or View > Connect via Console to start the ViPR installer.
  3. From the GNU GRUB screen, select Configuration of a single node.
  4. On the Cluster Selection screen, accept the default and click Next.
  5. On the Cluster Configuration screen, accept the defaults and click Next.
  6. On the Deployment Configuration screen, accept the defaults and click Config.
    This step puts the ViPR node into the required Multicasting state.
  7. Repeat these steps for all previously deployed nodes, until all nodes are in the Multicasting state. Note that the state value is seen only on the Deployment Configuration screen of the ViPR node installer.

Results

After all previously deployed nodes are in the Multicasting state, return to the main deployment procedure above and continue deploying the remaining nodes, starting from the "Power on the next virtual machine" step.
