EMC ViPR native backup and restore service


EMC ViPR has a native backup and restore service that creates a backup set of the ViPR controller nodes. The backup set can be created through REST API calls or on demand using the bkutils CLI or viprcli.

This article applies to EMC ViPR 2.0.

The native backup and restore service is the only supported method from ViPR 2.0 forward. If you used other backup methods in ViPR 1.x (VM snapshots or third-party solutions), transition to the native method.

The ViPR backup set is a near point-in-time copy of the persistent data (the Cassandra and ZooKeeper data files) on all the controller nodes. Volatile data such as logs and binaries are not part of the backup set.

The backup set is generated as a set of files on local storage (/data/backup/). For protection, you should copy backup sets to secondary storage with the REST call GET /backupset/download or with viprcli system download-backup.

In a configuration where ViPR data services are present, native service restore is not supported.

Backup and restore must be between the same major ViPR version.

The restore target must have the same number of nodes as the backup (for example, a backup of a 2+1 configuration can restore only to 2+1; it cannot restore to 3+2).

Restore of geodb among multisite ViPR VDCs is supported. At least one VDC must be up, from which the geodb can be copied and restored among the other sites in a multisite configuration.


Summary of bkutils options for EMC ViPR backup and restore service

You can run the backup and restore operations from a controller node using the bkutils command.

Location: /opt/storageos/bin/bkutils

Permissions: Run as root on the controller node.

Usage: bkutils
-c,--create <[backup]>
Create a backup. The default name is a timestamp (for example, 20140531193000). The [backup] name has a maximum length of 200 characters; underscore (_) is not supported; alphabetic characters must be lower case. Otherwise, any character supported in a Linux filename can be used.
-d,--delete <arg>
Delete specific backup
-f,--force
Ignore errors and force creation of the backup
-l,--list
List all backups
-p,--purge <[ismultivdc]>
Purge the existing ViPR data. The [ismultivdc] argument is yes or no (default is no)
-q,--quota <[quota]>
Get or set the backup quota, in GB. The argument should be empty (get) or a number (set)
-r,--restore <args>
Purge ViPR data and restore specific backup with args: <backupDir> <backupName>
Example: bkutils -r /data/backup/20140728195631 20140728195631
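Because bkutils rejects names that break the rules above only at run time, it can be convenient to pre-check a name in a wrapper script. The following is a minimal sketch of the documented naming rules (maximum 200 characters, no underscore, lower-case alpha only); the valid_backup_name helper is hypothetical and not part of bkutils:

```shell
# Sketch: pre-validate a backup name against the documented bkutils rules.
# valid_backup_name is a hypothetical helper, not part of bkutils itself.
valid_backup_name() {
    name="$1"
    # reject empty or overlong names (maximum 200 characters)
    [ -n "$name" ] && [ "${#name}" -le 200 ] || return 1
    # reject underscores
    case "$name" in *_*) return 1 ;; esac
    # reject upper-case alphabetic characters
    case "$name" in *[A-Z]*) return 1 ;; esac
    return 0
}

valid_backup_name "20140531193000" && echo "ok"        # timestamp-style name passes
valid_backup_name "nightly_backup" || echo "rejected"  # underscore is not allowed
```

A wrapper could run this check and only then call /opt/storageos/bin/bkutils -c with the validated name.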

Summary of REST API for EMC ViPR backup and restore service

This is a summary of the REST API for the EMC ViPR backup and restore service.

Details are in EMC ViPR REST API Reference.

GET /backupset/
Lists all backups.
POST /backupset/backup/
Creates a new backup. Note the following restrictions on the backupsetname, which might not be covered in EMC ViPR REST API Reference:
  • The backupsetname maximum length is 200 characters.
  • Underscore (_) is not supported.
  • Otherwise, any character supported in a Linux filename can be used.
DELETE /backupset/backup/
Deletes a backup.
GET /backupset/download?tag=backupsetname
Downloads a specific backup.
Below is an example using curl to download a backup.
curl -ik -X GET -H "X-SDS-AUTH-TOKEN: token_value" "https://vipr_ip:4443/backupset/download?tag=backupsetname" > backupsetname.zip
To obtain the token value, refer to Authenticate with the ViPR REST API.
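When scripting the download, it can help to build the URL separately from the curl invocation. A minimal sketch using the documented GET /backupset/download endpoint; the host name, token variable, and backup_download_url helper are placeholders, not part of the ViPR API:

```shell
# Sketch: construct the download URL for GET /backupset/download.
# backup_download_url is a hypothetical helper; host and token are placeholders.
backup_download_url() {
    # $1 = ViPR virtual IP or host name, $2 = backup set name
    echo "https://$1:4443/backupset/download?tag=$2"
}

url=$(backup_download_url "vipr.example.com" "20140728155625")
echo "$url"

# The actual download (requires a live ViPR instance and an auth token):
# curl -ik -X GET -H "X-SDS-AUTH-TOKEN: $TOKEN" "$url" > 20140728155625.zip
```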


Summary of viprcli options for native backup

Beginning in ViPR 2.0 patch 1, you can create, delete, list, and download a backup using viprcli.

Restore is not currently available through viprcli.

The EMC ViPR CLI Reference guide describes how to install and use viprcli.

Create backup
viprcli system create-backup -n backupname
Delete backup
viprcli system delete-backup -n backupname
List all backups
viprcli system list-backup
Download backup
viprcli system download-backup -n backupname -fp filepath
Example: viprcli system download-backup -n 20140728155625 -fp C:\20140728155625.zip
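A scheduled job could generate a timestamp-style name (the same form bkutils uses by default) and pass it to the viprcli commands above. A minimal sketch; the viprcli calls are commented out because they need a configured ViPR instance and login:

```shell
# Sketch: generate a timestamp-style backup name and use it with viprcli.
backup_name=$(date +%Y%m%d%H%M%S)
echo "$backup_name"

# With a configured viprcli session (hypothetical paths):
# viprcli system create-backup -n "$backup_name"
# viprcli system download-backup -n "$backup_name" -fp "/backups/$backup_name.zip"
```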


Back up EMC ViPR internal databases

Use POST /backupset/backup/ or the bkutils CLI to back up the ViPR internal databases.

Before you begin

  • This task requires the System Administrator (SYSTEM_ADMIN) role in ViPR.
  • If you are using REST API or viprcli for backup in a multi-VDC environment, an authenticated (non-local) user is required; if the configuration is standalone (that is, not multi-VDC), root user can be used.
  • ViPR controller status must be Stable on at least two nodes in a 2+1 deployment, or three nodes in a 3+2 deployment.
  • The dbsvc, coordinatorsvc, and geodbsvc services must be running on at least two nodes in a 2+1 deployment, or three nodes in a 3+2 deployment.
  • Although not required, it is best to back up when no database repair is in progress. If the backup is created during a database repair, the backup data of each node will not be consistent, and database node repair after restore will take a long time, resulting in a longer overall time to recovery. Check the dbsvc log for "Starting background repair" (repair is in progress) and "Ending repair" (repair is complete).
  • It is recommended that the load on the system be light during the time of backup, especially on operations related to volume, fileshare, export, and snapshots.

Procedure

  1. On a ViPR controller node, initiate a backup using one of these methods. Each method creates the backup in /data/backup/ on all controller nodes; it is not necessary to run the command on each node:
    Method Command
    REST API POST /backupset/backup
    bkutils /opt/storageos/bin/bkutils -c [backupsetname]
    viprcli viprcli system create-backup -n backupname
  2. Use one of these methods to generate a file containing the backup set, which you can copy to secondary media:
    Method Command
    REST API GET /backupset/download?tag=backupsetname
    bkutils (download function not available with bkutils command)
    viprcli viprcli system download-backup -n backupname -fp filepath

Restore a backup using the EMC ViPR bkutils command

Use the bkutils CLI to restore a backup created by the EMC ViPR backup service. There is no REST API for restoring a backup.

Before you begin

  • This procedure assumes that each ViPR node's backup is available. If you are restoring when the ViPR minority node backups are not available (a minority node is one node in a 2+1 deployment, or two nodes in a 3+2), follow the steps in Restore ViPR backup when minority node backups are not available.
  • This task requires the System Administrator (SYSTEM_ADMIN) role in ViPR. In a multi-VDC environment, an authenticated (non-local) user is required; if the configuration is standalone (that is, not multi-VDC), root user can be used.
  • The target system should be a fresh deployment and should meet these requirements:
    • The target system must be at the same ViPR version as the backed up system.
    • You need a target system that contains the same number of controller nodes as the system that was backed up. In other words, you should restore a 3+2 backup to a 3+2 deployment, and a 2+1 backup to a 2+1 deployment.
    • The target system must have the same IP addresses as the backed up system. Therefore, the original system should be shut down before restore; after the restore is successful, delete the original system.
  • Note that a backup set created on a standalone VDC should not be used to restore after the VDC is added to a multi-VDC configuration.
  • Total time required might be several hours.

Procedure

  1. If the VDC that you are restoring is part of a multi-VDC configuration, you must first disconnect the VDC. Follow the procedure in Disconnect failed VDC from multi-VDC before restoring.
  2. If you are restoring the backup to a new ViPR instance, deploy the target ViPR system through the initial setup steps. The dbsvc, geosvc, and controllersvc services must have started at least once.
  3. Shut down the old ViPR vApp.
  4. On each controller node, perform the following:
    1. Identify a location where you can copy the directory containing the backup files. For this procedure, we will create a directory called restore under /data/backup/.
    2. Copy the downloaded backup zip file to the controller node and unzip it to the restore directory.
    3. Validate the md5 checksum files for each backup file.
    4. Stop all ViPR services:
      service storageos stop
    5. Run the following command to restore the backup.
      /opt/storageos/bin/bkutils -r dir backupname
      Example: /opt/storageos/bin/bkutils -r /data/backup/restore 20150804190501
  5. Start all ViPR services:
    service storageos start
  6. Verify that the health of the system, and of all services, is good (in the ViPR UI under Admin > Dashboard > Health look for the green check mark).
  7. Check the node repair progress in dbsvc.log and geodbsvc.log. Look for "Current progress 100%". The restore is complete when dbsvc and geodbsvc repair progress is 100%. This might require several hours.

    2014-04-30 08:21:56,031 [DBRepairPool_223] INFO DbServiceImpl.java (line 477) Starting background repair run - trying to get repair lock
    2014-04-30 08:32:04,566 [DBRepairPool_223] INFO DbServiceImpl.java (line 482) Got lock: triggering repair
    2014-04-30 08:32:04,576 [DBRepairPool_223] INFO RepairJobRunner.java (line 132) Run repair job for StorageOS. Total # local ranges 256
    2014-04-30 08:32:41,840 [DBRepairPool_223] INFO RepairJobRunner.java (line 269) 1 repair sessions finished. Current progress 0%
    …
    2014-04-30 14:00:06,812 [DBRepairPool_247] INFO RepairJobRunner.java (line 269) 256 repair sessions finished. Current progress 100%
    2014-04-30 14:00:06,812 [DBRepairPool_247] INFO RepairJobRunner.java (line 175) Stopped repair job monitor
    2014-04-30 14:00:06,813 [DBRepairPool_247] INFO RepairJobRunner.java (line 182) Db repair consumes 133 minutes
    2014-04-30 14:00:06,813 [DBRepairPool_247] INFO DbServiceImpl.java (line 499) Ending repair

  8. When you have verified the health of the new system, delete the old vApp. (Do not power on the old vApp; the old and new vApps use the same IP addresses, and IP conflict issues will result.)
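The checksum validation in step 4c can be sketched as follows, using md5sum in check mode against each .md5 file in the restore directory. The file names here are invented for illustration; on a real node you would run the loop inside your /data/backup/restore directory against the actual extracted backup files:

```shell
# Sketch of step 4c: verify each backup file against its .md5 checksum file
# before restoring. Demonstrated on a scratch directory with a stand-in file;
# the real backup file names will differ.
dir=$(mktemp -d)
cd "$dir"
echo "backup payload" > vipr1_db.zip          # stand-in for an extracted backup file
md5sum vipr1_db.zip > vipr1_db.zip.md5        # stand-in for its shipped checksum file

status=ok
for sum in *.md5; do
    # md5sum -c re-reads the named file and compares its checksum
    md5sum -c "$sum" >/dev/null 2>&1 || status=corrupt
done
echo "$status"
```

If any file reports corrupt, re-copy the backup zip to the node and extract it again before proceeding to step 4d.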

Restore ViPR backup when minority node backups are not available

This procedure describes how to restore backups in the case where the minority node backups (one node in a 2+1 configuration, or two nodes in a 3+2 configuration) are not available.

Before you begin

  • This task requires the System Administrator (SYSTEM_ADMIN) role in ViPR. In a multi-VDC environment, an authenticated (non-local) user is required; if the configuration is standalone (that is, not multi-VDC), root user can be used.
  • The target system should be a fresh deployment and should meet these requirements:
    • The target system must be at the same ViPR version as the backed up system.
    • You need a target system that contains the same number of controller nodes as the system that was backed up. In other words, you should restore a 3+2 backup to a 3+2 deployment, and a 2+1 backup to a 2+1 deployment.
    • The target system must have the same IP addresses as the backed up system. Therefore, the original system should be shut down before restore; after the restore is successful, delete the original system.
  • Total time required might be several hours.

Procedure

  1. If the VDC that you are restoring is part of a multi-VDC configuration, you must first disconnect the VDC. Follow the procedure in Disconnect failed VDC from multi-VDC before restoring.
  2. Deploy the target ViPR system through the initial setup steps. The dbsvc, geosvc, and controllersvc services must have started at least once.
  3. Shut down the old ViPR vApp.
  4. On each controller node, perform the following:
    1. Identify a location where you can copy the directory containing the backup files. For this procedure, we will create a directory called restore under /data/backup/.
    2. Copy the downloaded backup zip file to the controller node and unzip it to the restore directory.
    3. Validate the md5 checksum files for each backup file.
    4. Stop all ViPR services:
      service storageos stop
    5. Run the following command to restore the backup.
      /opt/storageos/bin/bkutils -r dir backupname
      Example: /opt/storageos/bin/bkutils -r /data/backup/restore 20150804190501
  5. After restore is complete: If the controller node has no db or geodb backup files, run one of the following purge commands:
    Single VDC configuration:
    /opt/storageos/bin/bkutils -p
    Multi-VDC configuration:
    /opt/storageos/bin/bkutils -p yes
  6. After restore and purge are complete on all nodes, start all ViPR services:
    service storageos start
  7. Verify that the health of the system and all services is good (in the ViPR UI under Admin > Dashboard > Health look for the green check mark).
  8. Check the node repair progress in dbsvc.log and geodbsvc.log. Look for "Current progress 100%". The restore is complete when dbsvc and geodbsvc repair progress is 100%. Total time required might be several hours.
  9. When you have verified the health of the new system, delete the old vApp. (Do not power on the old vApp; the old and new vApps are using the same IP addresses, and IP conflict issues will result.)