ECS Hardware and Cabling Guide

ECS Appliance hardware components

Describes the hardware components that make up ECS Appliance hardware models.

ECS Appliance series

The ECS Appliance has two series: the U-Series and the C-Series.

Hardware generations

The original U-Series hardware (Gen1) was replaced in October 2015 with second-generation hardware (Gen2). Statements made in this document apply to both generations except where noted.

U-Series

The U-Series ECS Appliance includes the following hardware components:

U-Series ECS Appliance minimum and maximum configurations

C-Series

The C-Series ECS Appliance includes the following hardware components:

C-Series ECS Appliance minimum and maximum configurations

Customers connect to an ECS Appliance by way of 10 GbE ports and their own interconnect cables. When multiple appliances are installed in the same data center, the private switches can be connected by daisy-chain or home-run connections to a customer-provided switch.


U-Series ECS Appliance (Gen2) configurations and upgrade paths

Describes the second generation U-Series ECS Appliance configurations and the upgrade paths between the configurations. The Gen2 hardware became generally available in October 2015.

U-Series configurations (Gen2)

The U-Series ECS Appliance is a dense storage solution using commodity hardware.

U-Series (Gen2) upgrade paths

U-Series upgrades consist of the disks and infrastructure hardware needed to move from the existing model number to the next higher model number. To upgrade by more than one model level, order the upgrades for each level and apply them in one service call. Upgrades become available 90 days from the hardware GA of October 2015. This guide will be updated when the Gen2 upgrades become available.
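Because each kit moves an appliance up exactly one model level, a multi-level upgrade is simply the per-level kits applied in order during one service call. A minimal sketch, using placeholder model names (the real model numbers appear in the configuration tables):

```python
# Hypothetical model sequence for illustration only; the actual U-Series
# model numbers are listed in the configuration tables of this guide.
MODEL_SEQUENCE = ["model-1", "model-2", "model-3", "model-4"]

def upgrade_kits(current: str, target: str) -> list[str]:
    """Return the per-level upgrade kits to order, applied in one service call."""
    i, j = MODEL_SEQUENCE.index(current), MODEL_SEQUENCE.index(target)
    if j <= i:
        raise ValueError("target must be a higher model than current")
    # One kit per level: each kit moves the appliance up exactly one model.
    return [f"{MODEL_SEQUENCE[k]}-to-{MODEL_SEQUENCE[k + 1]}" for k in range(i, j)]
```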


U-Series ECS Appliance (Gen1) configurations and upgrade paths

Describes the first generation ECS Appliance configurations and the upgrade paths between the configurations. Gen1 hardware became generally available in June 2014.

U-Series configurations (Gen1)

The U-Series ECS Appliance is a dense storage solution using commodity hardware.

U-Series (Gen1) upgrade paths

U-Series upgrades consist of the disks and infrastructure hardware needed to move from the existing model number to the next higher model number. To upgrade by more than one model level, order the upgrades for each level and apply them in one service call.


C-Series ECS Appliance (Gen1) configurations and upgrade paths

Describes the first generation C-Series ECS Appliance configurations and the upgrade paths between the configurations. Gen1 hardware became generally available in March 2015.

C-Series (Gen1) configurations

The C-Series ECS Appliance is a dense compute solution using commodity hardware.

C-Series (Gen1) upgrade paths

C-Series upgrades consist of the disks and infrastructure hardware needed to move from the existing model number to the next higher model number. To upgrade by more than one model level, order the upgrades for each level and apply them in one service call.


U-Series single-phase AC power cabling

Provides the single-phase power cabling diagram for the U-Series ECS Appliance.

The switches plug into the front of the rack and route through the rails to the rear.

Note:
For a four-node configuration, counting from the bottom of the rack, ignore DAEs 5 through 8 and server chassis 2.

U-Series single-phase AC power cabling for eight-node configurations


U-Series three-phase AC power cabling

Provides cabling diagrams for three-phase AC delta and wye power.

U-Series three-phase delta AC power cabling

The legend below maps colored cables shown in the diagram to EMC part numbers and cable lengths.

Cable legend for three-phase delta AC power diagram

Note:
For a four-node configuration, counting from the bottom of the rack, ignore DAEs 5 through 8 and server chassis 2.

Three-phase AC delta power cabling for eight-node configuration

Three-phase WYE AC power cabling

The legend below maps colored cables shown in the diagram to EMC part numbers and cable lengths.

Cable legend for three-phase WYE AC power diagram

Note:
For a four-node configuration, counting from the bottom of the rack, ignore DAEs 5 through 8 and server chassis 2.

Three-phase WYE AC power cabling for eight-node configuration


C-Series single-phase AC power cabling

Provides the single-phase power cabling diagram for the C-Series ECS Appliance.

The switches plug into the front of the rack and route through the rails to the rear.

C-Series single-phase AC power cabling for eight-node configurations: Top

C-Series single-phase AC power cabling for eight-node configurations: Bottom


U-Series SAS cabling

Provides wiring diagrams for the SAS cables that connect nodes to DAEs.

Gen2

Gen2 hardware uses two SAS cables for each node-to-DAE connection.

The top port on the DAE is port 0 and always connects to the right port on the node. The bottom port is port 1 and always connects to the left port on the node.
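This pairing rule can be expressed as a one-line lookup; a minimal illustrative sketch, not an ECS tool:

```python
# Gen2 rule: DAE port 0 (top) always pairs with the node's right SAS port,
# and DAE port 1 (bottom) always pairs with the node's left SAS port.
DAE_TO_NODE_PORT = {0: "right", 1: "left"}

def node_port_for(dae_port: int) -> str:
    """Return which node SAS port a given DAE port must connect to."""
    return DAE_TO_NODE_PORT[dae_port]
```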

Gen2 SAS cabling for eight-node configurations

Gen1

Note:
Hardware diagrams number nodes starting with zero. In all other discussions of ECS architecture and software, nodes are numbered starting with one.

Gen1 SAS cabling for eight-node configurations


Switches

ECS Appliances include three switches: one private 1 GbE management switch and two public 10 GbE switches.


Private switch: 7048T

The private switch is used for management traffic. It has 52 ports and dual power supply inputs. The switch is configured in the factory.

Arista 7048T configuration

  1. Ports 1-24: Management
  2. Ports 25-48: RMM/IPMI
  3. Ports 49-50: Management connections to the 10 GbE public switches
  4. Port 51: NAN connectivity. In the first rack, this port uplinks to the customer network when access to the RMMs is needed.
  5. Port 52: NAN connectivity. Port 52 of the first rack connects to port 51 of the next rack, and so on to all racks in the ECS site.
Note:
The NAN (Nile Area Network) links all ECS Appliances at a site.
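The factory port assignment above can be captured in a small lookup; an illustrative sketch, not part of the shipped switch configuration:

```python
def port_role(port: int) -> str:
    """Role of an Arista 7048T port, per the factory configuration above."""
    if 1 <= port <= 24:
        return "management"
    if 25 <= port <= 48:
        return "RMM/IPMI"
    if 49 <= port <= 50:
        return "uplink to public switches"
    if 51 <= port <= 52:
        return "NAN"
    raise ValueError(f"the 7048T has no port {port}")
```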


Public switch: Arista 7124SX

The Arista 7124SX switch is equipped with 24 SFP+ ports, dual hot-swap power supplies, and redundant field replaceable fan modules.

Arista 7124SX

  1. Ports 1-8: Customer uplink data ports. These ports provide the connection to the user's 10 GbE infrastructure.
  2. Ports 9-20: Connected to the nodes as data ports.
  3. Ports 21-22: HA interconnect for MLAG interfaces between public switches.
  4. Ports 23-24: HA interconnect for MLAG interfaces between public switches.
  5. Switch management 1 network interface
  6. Serial console

Public switch: Arista 7150S-24

The 7150S-24 switch is equipped with 24 SFP+ ports, dual hot-swap power supplies, and redundant, field-replaceable fan modules.

Arista 7150S-24

  1. Ports 1-8: Customer uplink
  2. Ports 9-12: Connect to nodes as data ports
  3. Ports 13-20: Connect to nodes as data ports
  4. Ports 21-22: Interconnect for MLAG interfaces between public switches
  5. Ports 23-24: Interconnect for MLAG interfaces between public switches
  6. Switch management 1 network interface
  7. Serial console: The console port is used to manage the switch through a serial connection, and the Ethernet management port is connected to the 1 GbE management switch

Public switch: Arista 7050S-52

The 7050S-52 switch is equipped with 52 SFP+ ports, dual hot-swap power supplies, and redundant, field-replaceable fan modules.

Arista 7050S-52

  1. Ports 1-8: Customer uplink
  2. Ports 9-48: Connect to nodes as data ports
  3. Ports 49-50: Interconnect for MLAG interfaces between public switches
  4. Ports 51-52: Interconnect for MLAG interfaces between public switches
  5. Switch management 1 network interface
  6. Serial console: The console port is used to manage the switch through a serial connection, and the Ethernet management port is connected to the 1 GbE management switch

Network cabling

Presents network cable diagrams for the public and private switches in an ECS Appliance.

The network cabling diagrams apply to both the U-Series and C-Series ECS Appliance.

To distinguish between the three switches in documentation, each switch has a nickname:

Public switch cabling for four nodes

Private switch cabling for four nodes


Nodes

The ECS Appliance has two node types: U-Series nodes and C-Series nodes.

U-Series Gen2 nodes have the following standard features:

U-Series Gen1 nodes have the following standard features:

C-Series nodes have the following standard features:


Server front views

U-Series Phoenix-16 (Gen1) and Rinjin-16 (Gen2) server chassis front view

C-Series Phoenix-12 server chassis front view (Gen1)

LED indicators are on the left and right side of the server front panels.

LEDs (left side)


Server rear view

The Rinjin-16 (U-Series Gen2), Phoenix-16 (U-Series Gen1), and Phoenix-12 (C-Series Gen1) server chassis provide dual hot-swappable power supplies and four nodes. Each chassis uses a common redundant power supply (CRPS) that provides highly available power shared across all nodes in the chassis. The nodes are mounted on hot-swappable trays that fit into the four corresponding node slots accessible from the rear of the server.

Server chassis rear view (all)

  1. Node 1
  2. Node 2
  3. Node 3
  4. Node 4

Rear ports on nodes (all)

  1. 1 GbE: Connected to one of the data ports on the 1 GbE switch
  2. RMM: A dedicated port for hardware monitoring (per node)
  3. SAS to DAE: Gen1 hardware has a single port; Gen2 hardware has two ports. Used on U-Series servers only.
  4. 10 GbE (primary): The right 10 GbE data port of each node is connected to one of the data ports on the primary 10 GbE switch
  5. 10 GbE (secondary): The left 10 GbE data port of each node is connected to one of the data ports on the secondary 10 GbE switch

Rack and node host names

Lists the default rack and node host names for an ECS appliance.

Default rack IDs and color names are assigned in installation order as shown below:

Nodes are assigned host names based on their order within the server chassis and within the rack itself. The following table lists the default host names.

Nodes positioned in the same slot in different racks at a site have the same host name. For example, node 4 is always called ogden, assuming you use the default node names.

System outputs will identify nodes by a unique combination of node host name and rack name. For example, node 4 in rack 4 and node 4 in rack 5 will be identified as:

ogden-green
ogden-blue
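The naming scheme can be sketched as a trivial helper; only ogden, green, and blue are taken from the example above, and the full default name tables appear earlier in this section:

```python
# Fully qualified node identity = "<node-host-name>-<rack-color>".
# "ogden" (node 4), "green" (rack 4), and "blue" (rack 5) come from the
# example above; other names follow the default tables.
def node_identity(host_name: str, rack_color: str) -> str:
    """Combine a node host name and rack color into the unique identity."""
    return f"{host_name}-{rack_color}"
```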


Disk drives

Describes the disk drives used in ECS Appliances.

ECS Appliances use the following disk drives for storage:

All the disks in a DAE or integrated into a server chassis conform to these rules:


Integrated disk drives

Describes the disks integrated into the server chassis of U-Series and C-Series ECS Appliances.

U-Series integrated disks

In U-Series servers, OS disks are integrated into the server chassis and are accessible from the front of the server chassis. Each node has one (older hardware) or two (current hardware) OS disks.

C-Series integrated disks

In C-Series servers with integrated disks, the disks are accessible from the front of the server chassis. The disks are assigned equally to the four nodes in the chassis. All disks must be the same size (6 TB) and speed.

Note:
The first disk drive assigned to each node is called disk drive zero (HDD0). These storage drives will contain some system data.
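Twelve disks divided equally across four nodes gives three disks per node, the first of which is HDD0. A minimal sketch (the contiguous assignment here is assumed for illustration; the node-mapping figure is authoritative):

```python
# 12 integrated disks split equally across 4 nodes: 3 disks per node.
# Per the note above, each node's first assigned disk is HDD0.
def disk_map(num_disks: int = 12, num_nodes: int = 4) -> dict[int, list[str]]:
    """Map each node number (1-4) to its per-node disk names."""
    per_node = num_disks // num_nodes
    return {
        node + 1: [f"HDD{d}" for d in range(per_node)]
        for node in range(num_nodes)
    }
```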

C-Series Integrated disks with node mappings


Disks and enclosures

The U-Series provides disk array enclosures (DAE). The DAE is a drawer that slides in and out of the 40U dense rack. The disk drives, LCC, and cooling modules for the DAE are located inside the DAE. The DAE has the following features:

C-Series servers have integrated disks: 12 3.5-inch disk drives accessible from the front of the server.


Disk drives in DAEs

Disk drives are encased in cartridge-style enclosures. Each cartridge has a latch that allows you to snap a disk drive out for removal and snap it in for installation.

The inside of each DAE has physically printed labels located on the left and the front sides of the DAE that describe the rows (or banks) and columns (or slots) where the disk drives are installed in the DAE.

The banks are labeled from A to E, and the slots are labeled from 0 to 11. When describing the layout of disk drives within the DAE, the location format is E_D, where E indicates the enclosure and D the disk. For example, the location 1_B11 is interpreted as enclosure 1, bank B, slot 11.

Enclosures are numbered from 1 through 8 starting at the bottom of the rack. Rear cable connections are color-coded.
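A disk location string such as 1_B11 can be parsed mechanically. A small illustrative sketch (not an ECS tool):

```python
import re

# Parse a DAE disk location like "1_B11": enclosure 1, bank B, slot 11.
# Enclosures run 1-8 (bottom of rack up), banks A-E, slots 0-11.
_LOCATION = re.compile(r"^([1-8])_([A-E])(\d{1,2})$")

def parse_location(loc: str) -> tuple[int, str, int]:
    """Split an E_D location string into (enclosure, bank, slot)."""
    m = _LOCATION.match(loc)
    if not m:
        raise ValueError(f"bad DAE location: {loc!r}")
    enclosure, bank, slot = int(m.group(1)), m.group(2), int(m.group(3))
    if not 0 <= slot <= 11:
        raise ValueError(f"slot out of range: {slot}")
    return enclosure, bank, slot
```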

The arrangement of disk drives in a DAE must match the prescribed layouts shown in the figures below. Not all layouts are available for all hardware.

Looking at the DAE from the front and above, the following figure shows you the disk drive layout of the DAE.

U-Series disk layout for 10-disk configurations. Gen2 only.

U-Series disk layout for 15-disk configurations. Gen1 only.

U-Series disk layout for 30-disk configurations. Gen1 only.

U-Series disk layout for 45-disk configurations. Gen1 and Gen2.

U-Series disk layout for 60-disk configurations. Gen1 and Gen2.


LCC

Each DAE includes a link control card (LCC) whose main function is to act as a SAS expander and provide enclosure services. The LCC independently monitors the environmental status of the entire enclosure and communicates the status to the system. The LCC includes a fault LED and a power LED.

Note:
Remove the power from the DAE before replacing the LCC.

LCC with LEDs

LCC Location


Fan control module

Each DAE includes three fan control modules (cooling modules) located on the front of the DAE. The fan control module augments the cooling capacity of each DAE. It plugs directly into the DAE baseboard from the top of the DAE. Inside the fan control module, sensors measure the external ambient temperatures to ensure even cooling throughout the DAE.

Fan control module with LED

Location of fan modules


ICM

The ICM is the primary interconnect management element. It is a plug-in module that includes a USB connector, RJ-12 management adapter, Bus ID indicator, enclosure ID indicator, two input SAS connectors and two output SAS connectors with corresponding LEDs indicating the link and activity of each SAS connector for input and output to devices.

Note:
Disconnect power to the DAE when changing the ICM.

The ICM supports the following I/O ports on the rear:

  • Four 6-Gb/s SAS x8 ports (two input and two output; Gen1 hardware uses one of them and Gen2 hardware uses two)
  • One management (RJ-12) connector to the SPS (field service diagnostics only)
  • One USB connector

The SAS ports provide the interface for the SAS and NL-SAS drives in the DAE.

ICM LEDs

  1. Power fault LED (amber)
  2. Power LED (green)
  3. Link activity LEDs (blue/green)
  4. Single SAS port used for Gen1 hardware.
  5. Two SAS ports used for Gen2 hardware.

DAE power supply

The power supply is hot-swappable and has a built-in thumbscrew for ease of installation and removal. Each power supply includes a fan to cool the power supply. The power supply is an auto-ranging, power-factor-corrected, multi-output, offline converter with its own line cord. Each supply supports a fully configured DAE and shares load currents with the other supply. Within the DAE, the power supplies provide four independent power zones. Each of the hot-swappable power supplies can deliver 1300 W at 12 V in its load-sharing, highly available configuration. Control and status are implemented through the I2C interface.

DAE power supply


ECS third-party rack requirements

Customers who want to assemble an ECS Appliance using their own racks must ensure that the racks meet the following requirements.

Note:
EMC support personnel can refer to EMC Elastic Cloud Storage Third-Party Rack Installation Guide.
