ECS 2.0 – New Features

New features

Describes the new features and additions for ECS 2.0.


In previous releases, ECS used the ViPR controller to provide a number of services, such as authorization, authentication, licensing, and logging. These capabilities are now part of ECS and ViPR is no longer required.

In addition, the need for a virtual machine to install ECS has been removed, as the first node in an ECS appliance now acts as the installer for all nodes.


ECS was previously configured from the ViPR UI. With ECS 2.0, a new ECS Portal offers configuration, management, and monitoring capabilities for ECS and its tenants.

Monitoring and Diagnostics

ECS now provides comprehensive monitoring and diagnostics capabilities through the ECS Portal and the ECS Management REST API.

Geo Enhancements

The following enhancements improve the use of the ECS geo-replication mechanism:

Geo Enhancement: Temporary Site Failover

ECS 2.0 handles temporary site failures and failbacks automatically. With this functionality, applications retain access to data even when connectivity between federated VDCs is unavailable. Some operations, such as creating new buckets, are restricted while a site is down. ECS automatically resynchronizes the sites and reconciles the data once all sites are operational and connected to each other again.
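The restriction described above can be sketched as follows. This is an illustrative model only, not the ECS implementation; the class and exception names are hypothetical.

```python
# Sketch of the behaviour described above: during a temporary site outage,
# operations such as creating a new bucket are rejected until all federated
# VDCs are reachable again. All names here are illustrative.

class FederatedNamespace:
    def __init__(self, vdcs):
        self.vdcs = vdcs          # site name -> reachable? (bool)
        self.buckets = set()

    def all_sites_connected(self):
        return all(self.vdcs.values())

    def create_bucket(self, name):
        # Bucket creation is one of the restricted operations while a
        # federated site is unreachable.
        if not self.all_sites_connected():
            raise RuntimeError("TemporarySiteOutage: bucket creation unavailable")
        self.buckets.add(name)

ns = FederatedNamespace({"vdc1": True, "vdc2": False})
try:
    ns.create_bucket("reports")
except RuntimeError as e:
    print(e)                   # TemporarySiteOutage: bucket creation unavailable

ns.vdcs["vdc2"] = True         # connectivity restored; ECS reconciles data
ns.create_bucket("reports")    # now succeeds
```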

Geo Enhancement: Improved Recovery Point Objective (RPO)

Prior to ECS 2.0, ECS object chunks were of a fixed size (128 MB) and were sealed and replicated to a remote site only when the chunk was full. Although replicating only full chunks was efficient, it had the drawback that if an entire site or a rack went down, there could be many chunks with less than 128 MB of data that had not been replicated. To overcome this, ECS 2.0 now starts the replication process as soon as a chunk starts receiving data.
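The difference can be sketched as below. This is an illustrative model of the two behaviours, not the ECS replication engine; the class and field names are hypothetical.

```python
# Pre-2.0: a 128 MB chunk was replicated only once sealed, so a site
# failure could lose up to a full chunk of un-replicated data.
# ECS 2.0: replication starts as soon as the chunk receives data, so the
# un-replicated window shrinks to whatever is still in flight.

CHUNK_SIZE = 128 * 1024 * 1024   # fixed 128 MB chunk size

class Chunk:
    def __init__(self, replicate_early):
        self.data = bytearray()
        self.replicated = 0          # bytes already sent to the remote site
        self.replicate_early = replicate_early

    def write(self, payload):
        self.data.extend(payload)
        if self.replicate_early:
            # ECS 2.0 behaviour: ship bytes to the remote VDC immediately.
            self.replicated = len(self.data)
        elif len(self.data) >= CHUNK_SIZE:
            # Pre-2.0 behaviour: replicate only when the chunk is sealed.
            self.replicated = len(self.data)

    def unreplicated_bytes(self):
        return len(self.data) - self.replicated

old = Chunk(replicate_early=False)
new = Chunk(replicate_early=True)
for c in (old, new):
    c.write(b"x" * 1024)             # 1 KB written; chunk far from full
print(old.unreplicated_bytes())      # 1024 -> at risk if the site fails now
print(new.unreplicated_bytes())      # 0    -> already on the remote site
```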

Geo Enhancement: Multi-Site Access Performance Improvements through Geo-caching

Prior to ECS 2.0, when multiple VDCs were federated, any attempt to read an object had to go back to the VDC that owned the object. Every time a user in another site accessed the data, the read consumed WAN bandwidth and suffered the added latency of the WAN link.

ECS 2.0 solves this problem by caching objects at the secondary sites so that users can access the data locally without a WAN transfer.
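The read path can be sketched as follows. This is an illustrative cache model, not the ECS geo-caching implementation; all names are hypothetical.

```python
# Pre-2.0: every read from a non-owning VDC crossed the WAN to the owner.
# ECS 2.0: the secondary site keeps a local cache, so repeat reads are
# served locally without a WAN transfer.

class SecondaryVdc:
    def __init__(self, owner_vdc):
        self.owner = owner_vdc   # stand-in for the owning VDC: key -> data
        self.cache = {}
        self.wan_reads = 0

    def read(self, key):
        if key in self.cache:        # geo-cache hit: no WAN traffic
            return self.cache[key]
        self.wan_reads += 1          # cache miss: fetch from the owner
        data = self.owner[key]
        self.cache[key] = data
        return data

owner = {"photo-1": b"..."}
site_b = SecondaryVdc(owner)
site_b.read("photo-1")       # first read goes over the WAN
site_b.read("photo-1")       # second read is served from the local cache
print(site_b.wan_reads)      # 1
```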

Object Metering

Object metering enables key statistics to be retrieved for a tenant and for the buckets associated with a tenant. Metering data includes capacity, object count, objects created, objects deleted, and bandwidth (inbound as well as outbound) and can be retrieved for a specific point in time, or for a time range.
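A request for metering data over a time range might be built as below. The resource path and parameter names are assumptions for illustration only, not a documented contract; consult the ECS Management REST API reference for the actual metering resources.

```python
# Hedged sketch: build a GET URL for namespace- or bucket-level metering
# samples over a time range. Endpoint path and parameter names are
# hypothetical.
from urllib.parse import urlencode

def metering_url(base, namespace, bucket=None, start=None, end=None):
    path = f"{base}/object/billing/namespace/{namespace}"
    if bucket:
        path += f"/bucket/{bucket}"
    path += "/sample"                    # hypothetical resource name
    params = {}
    if start:
        params["start_time"] = start     # ISO-8601 timestamps assumed
    if end:
        params["end_time"] = end
    return path + ("?" + urlencode(params) if params else "")

url = metering_url("https://ecs.example.com:4443", "tenant1", "logs",
                   start="2015-06-01T00:00", end="2015-06-02T00:00")
print(url)
```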

Retention Policies

Retention periods can be applied to object containers (object containers are referred to as buckets in Amazon S3, containers in OpenStack Swift, and subtenants in EMC Atmos) and to objects, to prevent critical data from being modified within a specified period. In addition, ECS provides the ability to define retention policies that can be applied to objects, so that the retention period is determined by the policy rather than set individually on each object.
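The check can be sketched as follows. This is an illustrative model, not the ECS retention implementation; the policy name and function are hypothetical.

```python
# Sketch: an object cannot be modified until its retention period, taken
# from an attached policy when one is present, has elapsed since creation.
import time

# Hypothetical policy table: policy name -> retention period in seconds.
RETENTION_POLICIES = {"financial-records": 7 * 365 * 24 * 3600}

def can_modify(created_at, retention_seconds=0, policy=None, now=None):
    """A policy, when set, supplies the retention period instead of a
    per-object value."""
    now = time.time() if now is None else now
    if policy is not None:
        retention_seconds = RETENTION_POLICIES[policy]
    return now - created_at >= retention_seconds

created = 1_000_000
assert can_modify(created, retention_seconds=3600, now=created + 7200)
assert not can_modify(created, retention_seconds=3600, now=created + 10)
assert not can_modify(created, policy="financial-records", now=created + 10)
```

Driving the period from a policy means changing the policy adjusts retention for every object it covers, rather than updating each object individually.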

Quota Management

Quota limits can be set on a bucket or namespace. This enables a tenant to define the maximum amount of storage that can be used for a namespace and enables tenants to create buckets that are quota limited. Hard and soft quotas can be applied independently or together, so that an event can be recorded before a hard limit is imposed.
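The hard/soft distinction can be sketched as below. This is an illustrative model, not the ECS quota engine; the class name and event format are hypothetical.

```python
# Sketch: exceeding the soft quota records an event; exceeding the hard
# quota blocks the write. Either limit may be set on its own.

class QuotaBucket:
    def __init__(self, soft=None, hard=None):
        self.soft, self.hard = soft, hard
        self.used = 0
        self.events = []

    def write(self, nbytes):
        if self.hard is not None and self.used + nbytes > self.hard:
            raise IOError("hard quota exceeded; write rejected")
        self.used += nbytes
        if self.soft is not None and self.used > self.soft:
            self.events.append(f"soft quota exceeded: {self.used} bytes used")

b = QuotaBucket(soft=100, hard=150)
b.write(80)          # under both limits
b.write(40)          # 120 bytes: over the soft quota -> event recorded
print(b.events[0])
try:
    b.write(50)      # would reach 170 bytes: the hard quota blocks it
except IOError as e:
    print(e)
```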

Bucket and User Locking

Locking can be applied at the bucket and user level, using the ECS Management REST API, as a means of preventing access.
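A lock request might take the shape sketched below. The resource paths and payload are assumptions for illustration only; consult the ECS Management REST API reference for the actual lock resources.

```python
# Hedged sketch: build hypothetical ECS Management REST API requests that
# lock a bucket or a user to block access. Paths and payload are assumed.

def bucket_lock_request(base, bucket, locked=True):
    # Hypothetical resource path and payload.
    return ("PUT", f"{base}/object/bucket/{bucket}/lock",
            {"isLocked": locked})

def user_lock_request(base, user, locked=True):
    # Hypothetical resource path and payload.
    return ("PUT", f"{base}/object/users/lock/{user}",
            {"isLocked": locked})

method, url, body = bucket_lock_request("https://ecs.example.com:4443", "logs")
print(method, url, body)
```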

Bucket Auditing

Bucket auditing logs bucket create, update, and delete operations, as well as changes to bucket access permissions.
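The logged operations can be sketched as below. This is an illustrative model only; the record format and function names are hypothetical, not the ECS audit format.

```python
# Sketch: bucket create/update/delete operations and permission changes
# are recorded in an audit log.
import time

audit_log = []

def audit(operation, bucket, user):
    audit_log.append({"time": time.time(), "op": operation,
                      "bucket": bucket, "user": user})

def create_bucket(name, user):
    # ... actual bucket creation would happen here ...
    audit("create-bucket", name, user)

def set_bucket_acl(name, user, acl):
    # ... actual permission change would happen here ...
    audit("update-bucket-acl", name, user)

create_bucket("logs", "alice")
set_bucket_acl("logs", "alice", "public-read")
print([e["op"] for e in audit_log])   # ['create-bucket', 'update-bucket-acl']
```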

Enable Rack Level Awareness

Racks can be treated as separate fault domains, enabling object store chunks to be distributed across these domains so that the failure of a rack does not cause the failure of the whole site.
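The placement idea can be sketched as follows. This is an illustrative round-robin model, not the ECS placement algorithm; the fragment counts and names are hypothetical.

```python
# Sketch: spread a chunk's fragments across racks, treating each rack as
# a separate fault domain, so that losing one rack loses only a bounded
# share of the fragments.

def place_fragments(fragments, racks):
    """Round-robin fragments across racks (illustrative only)."""
    placement = {rack: [] for rack in racks}
    for i, frag in enumerate(fragments):
        placement[racks[i % len(racks)]].append(frag)
    return placement

# 16 hypothetical fragments of one chunk spread across 4 racks:
layout = place_fragments(list(range(16)), ["rack1", "rack2", "rack3", "rack4"])
print(max(len(v) for v in layout.values()))   # 4 -> losing a rack loses at most 4
```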
