H3C UniServer R2700 G3 Server User Guide-6W101


Contents

1 Safety information 1-1
Safety sign conventions 1-1
Power source recommendations 1-2
Installation safety recommendations 1-2
General operating safety 1-2
Electrical safety 1-2
Rack mounting recommendations 1-2
ESD prevention 1-3
Cooling performance 1-3
Battery safety 1-4
2 Preparing for installation 2-1
Installation site requirements 2-1
Rack requirements 2-1
Space and airflow requirements 2-2
Temperature and humidity requirements 2-3
Equipment room height requirements 2-3
Cleanliness requirements 2-3
Grounding requirements 2-4
Installation tools 2-4
3 Installing or removing the server 3-1
Installing the server 3-1
Installing rails 3-1
Rack-mounting the server 3-1
Installing cable management brackets 3-2
Connecting external cables 3-3
Cabling guidelines 3-3
Connecting a mouse, keyboard, and monitor 3-3
Connecting an Ethernet cable 3-4
Connecting a USB device 3-5
Connecting the power cord 3-6
Securing cables 3-10
Removing the server from a rack 3-10
4 Powering on and powering off the server 4-1
Important information 4-1
Powering on the server 4-1
Prerequisites 4-1
Procedure 4-1
Powering off the server 4-2
Prerequisites 4-2
Procedure 4-2
5 Configuring the server 5-1
Configuration flowchart 5-1
Powering on the server 5-1
Configuring basic BIOS settings 5-2
Setting the server boot order 5-2
Setting the BIOS passwords 5-2
Configuring RAID 5-2
Installing the operating system and hardware drivers 5-2
Installing the operating system 5-2
Installing hardware drivers 5-2
Updating firmware 5-2
6 Installing hardware options 6-1
Installing the security bezel 6-1
Installing SAS/SATA drives 6-1
Installing NVMe drives 6-3
Installing power supplies 6-5
Installing riser cards and PCIe modules 6-6
Installing storage controllers and power fail safeguard modules 6-9
Guidelines 6-9
Prerequisites 6-10
Installing a Mezzanine storage controller and a power fail safeguard module 6-10
Installing a standard storage controller and a power fail safeguard module 6-14
Installing GPU modules 6-16
Installing Ethernet adapters 6-18
Guidelines 6-18
Installing an mLOM Ethernet adapter 6-18
Installing a PCIe Ethernet adapter 6-19
Installing SATA M.2 SSDs 6-20
Installing SD cards 6-21
Installing an NVMe SSD expander module 6-23
Installing the NVMe VROC module 6-24
Installing the 2SFF drive cage 6-24
Installing the front 2SFF drive cage (8SFF server only) 6-24
Installing the rear 2SFF drive cage (4LFF/10SFF server only) 6-26
Installing the front media module (VGA and USB 2.0 connectors) 6-27
Installing the front media module for the 4LFF server 6-27
Installing the front media module for the 8SFF server and 10SFF server 6-29
Installing an optical drive 6-31
Preparing for the installation 6-31
Installing a SATA optical drive on the 4LFF server 6-31
Installing a SATA optical drive on the 8SFF server 6-32
Installing a diagnostic panel 6-35
Installing fans 6-36
Installing processors 6-38
Installing DIMMs 6-41
Installing and setting up a TCM or TPM 6-45
Installation and setup flowchart 6-45
Installing a TCM or TPM 6-45
Enabling the TCM or TPM from the BIOS 6-47
Configuring encryption in the operating system 6-47
7 Replacing hardware options 7-1
Replacing the access panel 7-1
Guidelines 7-1
Removing the access panel 7-1
Installing the access panel 7-2
Replacing the security bezel 7-3
Replacing a SAS/SATA drive 7-3
Replacing an NVMe drive 7-4
Replacing a power supply 7-5
Replacing air baffles 7-8
Removing air baffles 7-8
Installing air baffles 7-9
Replacing a riser card and a PCIe module 7-10
Replacing a storage controller 7-12
Guidelines 7-12
Preparing for replacement 7-12
Replacing the Mezzanine storage controller 7-12
Replacing a standard storage controller 7-13
Replacing the power fail safeguard module 7-14
Preparing for power fail safeguard module replacement 7-14
Replacing the power fail safeguard module for the Mezzanine storage controller 7-14
Replacing the power fail safeguard module for a standard storage controller 7-16
Replacing a GPU module 7-17
Replacing an Ethernet adapter 7-19
Replacing an mLOM Ethernet adapter 7-19
Replacing a PCIe Ethernet adapter 7-19
Replacing an M.2 transfer module and a SATA M.2 SSD 7-20
Replacing the front M.2 transfer module and a SATA M.2 SSD 7-20
Replacing an NVMe VROC module 7-22
Replacing an SD card 7-22
Replacing the dual SD card extended module 7-23
Replacing an NVMe SSD expander module 7-24
Replacing a fan 7-25
Replacing a processor 7-26
Guidelines 7-26
Prerequisites 7-26
Removing a processor 7-27
Installing a processor 7-28
Verifying the replacement 7-29
Replacing a DIMM 7-29
Replacing the system battery 7-30
Removing the system battery 7-31
Installing the system battery 7-31
Verifying the replacement 7-32
Replacing the system board 7-32
Guidelines 7-32
Removing the system board 7-32
Installing the system board 7-34
Replacing the drive expander module (10SFF server) 7-35
Replacing a drive backplane 7-36
Removing a drive backplane 7-36
Installing a drive backplane 7-38
Verifying the replacement 7-40
Replacing the SATA optical drive 7-40
Replacing the SATA optical drive (4LFF server) 7-40
Replacing the SATA optical drive (8SFF server) 7-41
Replacing the diagnostic panel 7-42
Replacing the chassis-open alarm module 7-42
Removing the chassis-open alarm module 7-43
Installing the chassis-open alarm module 7-43
Verifying the replacement 7-44
Replacing the front media module 7-44
Removing the front media module (4LFF server) 7-44
Removing the front media module (8SFF and 10SFF servers) 7-46
Replacing the air inlet temperature sensor 7-46
Replacing the front I/O component 7-48
Replacing the front I/O component (4LFF server) 7-48
Replacing the front I/O component (8SFF/10SFF server) 7-50
Replacing chassis ears 7-52
Replacing the TPM/TCM 7-53
8 Connecting internal cables 8-1
Connecting drive cables 8-1
4LFF server 8-1
8SFF server 8-3
10SFF server 8-11
Connecting the flash card and supercapacitor of the power fail safeguard module 8-13
Connecting the flash card on the Mezzanine storage controller 8-13
Connecting the flash card on a standard storage controller 8-14
Connecting the power cord of a GPU module 8-15
Connecting the SATA M.2 SSD cable 8-15
Connecting the SATA optical drive cable 8-16
Connecting the front I/O component cable assembly 8-17
Connecting the front media module cable 8-18
Connecting the NCSI cable for a PCIe Ethernet adapter 8-19
9 Maintenance 9-1
Guidelines 9-1
Maintenance tools 9-1
Maintenance tasks 9-1
Observing LED status 9-1
Monitoring the temperature and humidity in the equipment room 9-1
Examining cable connections 9-2
Technical support 9-2
10 Appendix A Server specifications 10-1
Server models and chassis view 10-1
Technical specifications 10-1
Components 10-3
Front panel 10-4
Front panel view 10-4
LEDs and buttons 10-5
Ports 10-7
Rear panel 10-7
Rear panel view 10-7
LEDs 10-8
Ports 10-9
System board 10-10
System board components 10-10
System maintenance switches 10-11
DIMM slots 10-11
11 Appendix B Component specifications 11-1
About component model names 11-1
Processors 11-1
Intel processors 11-1
Jintide-C series processors 11-2
DIMMs 11-2
DRAM specifications 11-3
DCPMM specifications 11-3
DRAM DIMM rank classification label 11-3
HDDs and SSDs 11-4
Drive specifications 11-4
Drive LEDs 11-13
Drive configurations and numbering 11-14
PCIe modules 11-17
Storage controllers 11-17
NVMe SSD expander modules 11-26
GPU modules 11-27
PCIe Ethernet adapters 11-28
FC HBAs 11-30
mLOM Ethernet adapters 11-31
Riser cards 11-31
Fans 11-32
Fan layout 11-32
Fan specifications 11-32
Power supplies 11-33
Expander modules and transfer modules 11-36
Diagnostic panels 11-37
Diagnostic panel specifications 11-37
Diagnostic panel view 11-37
LEDs 11-37
Fiber transceiver modules 11-39
Storage options other than HDDs and SSDs 11-40
NVMe VROC modules 11-40
TPM/TCM modules 11-40
Security bezels, slide rail kits, and cable management brackets 11-41
12 Appendix C Managed hot removal of NVMe drives 12-1
Performing a managed hot removal in Windows 12-1
Prerequisites 12-1
Procedure 12-1
Performing a managed hot removal in Linux 12-2
Prerequisites 12-2
Performing a managed hot removal from the CLI 12-2
Performing a managed hot removal from the Intel® ASM Web interface 12-3
13 Appendix D Environment requirements 13-1
About environment requirements 13-1
General environment requirements 13-1
Operating temperature requirements 13-1
Guidelines 13-1
4LFF server with any drive configuration 13-1
8SFF server with an 8SFF drive configuration 13-2
8SFF server with a 10SFF drive configuration 13-3
10SFF server with any drive configuration 13-4
14 Appendix E Product recycling 14-1
15 Appendix F Glossary 15-1
16 Appendix G Acronyms 16-1

 


1 Safety information

Safety sign conventions

To avoid bodily injury or damage to the server or its components, make sure you are familiar with the safety signs on the server chassis or its components.

Table 1-1 Safety signs

Sign

Description

Circuit or electricity hazards are present. Only H3C authorized or professional server engineers are allowed to service, repair, or upgrade the server.

WARNING!

To avoid bodily injury or damage to circuits, do not open any components marked with the electrical hazard sign unless you have authorization to do so.

Electrical hazards are present. Field servicing or repair is not allowed.

WARNING!

To avoid bodily injury, do not open any components with the field-servicing forbidden sign in any circumstances.

The RJ-45 ports on the server can be used only for Ethernet connections.

WARNING!

To avoid electrical shocks, fire, or damage to the equipment, do not connect an RJ-45 port to a telephone.

The surface or component might be hot and present burn hazards.

WARNING!

To avoid being burnt, allow hot surfaces or components to cool before touching them.

The server or component is heavy and requires more than one person to carry or move.

WARNING!

To avoid bodily injury or damage to hardware, do not move a heavy component alone. In addition, observe local occupational health and safety requirements and guidelines for manual material handling.

The server is powered by multiple power supplies.

WARNING!

To avoid bodily injury from electrical shocks, make sure you disconnect all power supplies if you are performing offline servicing.

 

Power source recommendations

Power instability or outages might cause data loss, service disruption, or, in the worst case, damage to the server.

To protect the server from unstable power or power outages, use uninterruptible power supplies (UPSs) to provide power for the server.

Installation safety recommendations

To avoid bodily injury or damage to the server, read the following information carefully before you operate the server.

General operating safety

To avoid bodily injury or damage to the server, follow these guidelines when you operate the server:

·          Only H3C authorized or professional server engineers are allowed to install, service, repair, operate, or upgrade the server.

·          Place the server on a clean, stable table or floor for servicing.

·          Make sure all cables are correctly connected before you power on the server.

·          To avoid being burnt, allow the server and its internal modules to cool before touching them.

Electrical safety

WARNING!

If you put the server in standby mode (system power LED in amber) with the power on/standby button on the front panel, the power supplies continue to supply power to some circuits in the server. To remove all power for servicing safety, you must first press the button, wait for the system to enter standby mode, and then remove all power cords from the server.

 

To avoid bodily injury or damage to the server, follow these guidelines:

·          Always use the power cords that came with the server.

·          Do not use the power cords that came with the server for any other devices.

·          Power off the server when installing or removing any components that are not hot swappable.

Rack mounting recommendations

To avoid bodily injury or damage to the equipment, follow these guidelines when you rack mount a server:

·          Mount the server in a standard 19-inch rack.

·          Make sure the leveling jacks are extended to the floor and the full weight of the rack rests on the leveling jacks.

·          Couple the racks together in multi-rack installations.

·          Load the rack from the bottom to the top, with the heaviest hardware unit at the bottom of the rack.

·          Get help to lift and stabilize the server during installation or removal, especially when the server is not fastened to the rails. As a best practice, a minimum of two people are required to safely load or unload a rack. A third person might be required to help align the server if the server is installed higher than chest level.

·          For rack stability, extend only one server unit at a time. A rack might become unstable if more than one unit is extended.

·          Make sure the rack is stable when you operate a server in the rack.

ESD prevention

Electrostatic charges that build up on people and tools might damage or shorten the lifespan of the system board and electrostatic-sensitive components.

Preventing electrostatic discharge

To prevent electrostatic damage, follow these guidelines:

·          Transport or store the server with the components in antistatic bags.

·          Keep the electrostatic-sensitive components in the antistatic bags until they arrive at an ESD-protected area.

·          Place the components on a grounded surface before removing them from their antistatic bags.

·          Avoid touching pins, leads, or circuitry.

·          Make sure you are reliably grounded when touching an electrostatic-sensitive component or assembly.

Grounding methods to prevent electrostatic discharge

The following are grounding methods that you can use to prevent electrostatic discharge:

·          Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

·          Take adequate personal grounding measures, including wearing antistatic clothing and static dissipative shoes.

·          Use conductive field service tools.

·          Use a portable field service kit with a folding static-dissipating work mat.

Cooling performance

Improper airflow and poor ventilation degrade cooling performance and might cause damage to the server.

To ensure good ventilation and proper airflow, follow these guidelines:

·          Install blanks if the following module slots are empty:

¡  Drive bays.

¡  Fan bays.

¡  PCIe slots.

¡  Power supply slots.

·          Do not block the ventilation openings in the server chassis.

·          To avoid thermal damage to the server, do not operate the server for long periods in any of the following conditions:

¡  Access panel open or uninstalled.

¡  Air baffles uninstalled.

¡  PCIe slots, drive bays, fan bays, or power supply slots empty.

·          To maintain correct airflow and avoid thermal damage to the server, install rack blanks to cover unused rack units.

Battery safety

The server's system board contains a system battery, which is designed with a lifespan of 5 to 10 years.

If the server no longer automatically displays the correct date and time, you might need to replace the battery. When you replace the battery, follow these safety guidelines:

·          Do not attempt to recharge the battery.

·          Do not expose the battery to a temperature higher than 60°C (140°F).

·          Do not disassemble, crush, puncture, short external contacts, or dispose of the battery in fire or water.

·          Dispose of the battery at a designated facility. Do not discard the battery with general waste.


2 Preparing for installation

Prepare a rack that meets the rack requirements and plan an installation site that meets the requirements for space and airflow, temperature, humidity, equipment room height, cleanliness, and grounding.

Installation site requirements

Rack requirements

IMPORTANT:

As a best practice, install power distribution units (PDUs) with the outputs facing backward to avoid interference with the server chassis. If you install PDUs with the outputs facing the inside of the rack, perform an onsite survey to make sure the cables do not interfere with the server rear.

 

The server is 1U high. The rack for installing the server must meet the following requirements:

·          A standard 19-inch rack.

·          A clearance of more than 50 mm (1.97 in) between the rack front posts and the front rack door.

·          A minimum of 1200 mm (47.24 in) in depth as a best practice. For installation limits for different rack depths, see Table 2-1.

Table 2-1 Installation limits for different rack depths

Rack depth: 1000 mm (39.37 in)

Installation limits:

·          The H3C cable management arm (CMA) is not supported.

·          A clearance of 60 mm (2.36 in) is reserved from the server rear to the rear rack door for cabling.

·          The slide rails and PDUs might hinder each other. Perform an onsite survey to determine the PDU installation location and the proper PDUs. If the PDUs unavoidably hinder installation or movement of the slide rails, use another method to support the server, for example, a tray.

Rack depth: 1100 mm (43.31 in)

Installation limits: Make sure the CMA does not hinder PDU installation at the server rear before installing the CMA. If the CMA hinders PDU installation, use a deeper rack or change the installation locations of the PDUs.

Rack depth: 1200 mm (47.24 in)

Installation limits: Make sure the CMA does not hinder PDU installation or cabling. If the CMA hinders PDU installation or cabling, change the installation locations of the PDUs.

For detailed installation suggestions, see Figure 2-1.

 

Figure 2-1 Installation suggestions for a 1200 mm deep rack (top view)

(1) 1200 mm (47.24 in) rack depth

(2) A minimum of 50 mm (1.97 in) between the front rack posts and the front rack door

(3) 790 mm (31.10 in) between the front rack posts and the rear of the chassis, including power supply handles at the server rear (not shown in the figure)

(4) 810 mm (31.89 in) server depth, including chassis ears

(5) 960 mm (37.80 in) between the front rack posts and the CMA

(6) 860 mm (33.86 in) between the front rack posts and the rear ends of the slide rails

 

Space and airflow requirements

For convenient maintenance and heat dissipation, make sure the following requirements are met:

·          A minimum clearance of 635 mm (25 in) is reserved in front of the rack.

·          A minimum clearance of 762 mm (30 in) is reserved behind the rack.

·          A minimum clearance of 1219 mm (47.99 in) is reserved between racks.

·          A minimum clearance of 2 mm (0.08 in) is reserved between the server and its adjacent units in the same rack.

Figure 2-2 Airflow through the server

(1) and (2) Directions of intake airflow through the chassis and power supplies

(3) Directions of exhaust airflow out of the power supplies

(4) to (7) Directions of exhaust airflow out of the chassis

 

Temperature and humidity requirements

To ensure correct operation of the server, make sure the room temperature and humidity meet the requirements as described in "Appendix A  Server specifications."

Equipment room height requirements

For the server to operate correctly, make sure the equipment room height meets the requirements described in "Appendix A  Server specifications."

Cleanliness requirements

Buildup of mechanically active substances on the chassis might result in electrostatic adsorption, which causes poor contact of metal components and contact points. In the worst case, electrostatic adsorption can cause communication failure.

Table 2-2 Mechanically active substance concentration limits in the equipment room

Dust particles: particle diameter ≥ 5 µm; concentration limit ≤ 3 × 10⁴ particles/m³ (no visible dust on the tabletop over three days)

Dust (suspension): particle diameter ≤ 75 µm; concentration limit ≤ 0.2 mg/m³

Dust (sedimentation): particle diameter 75 µm to 150 µm; concentration limit ≤ 1.5 mg/(m²·h)

Sand: particle diameter ≥ 150 µm; concentration limit ≤ 30 mg/m³

 

The equipment room must also meet limits on salts, acids, and sulfides to eliminate corrosion and premature aging of components, as shown in Table 2-3.

Table 2-3 Harmful gas limits in an equipment room

Gas: Maximum concentration (mg/m³)

SO2: 0.2

H2S: 0.006

NO2: 0.04

NH3: 0.05

Cl2: 0.01

 

Grounding requirements

Correctly connecting the server grounding cable is crucial to lightning protection, anti-interference, and ESD prevention.

The server can be grounded through the grounding wire of the power supply system and no external grounding cable is required.

Installation tools

Table 2-4 lists the tools that you might use during installation.

Table 2-4 Installation tools

T25 Torx screwdriver: Installs or removes screws inside chassis ears (screw rack mount ears or multifunctional rack mount ears).

T30 Torx screwdriver: Installs or removes captive screws on processor heatsinks.

T15 Torx screwdriver (shipped with the server): Installs or removes screws on access panels.

T10 Torx screwdriver (shipped with the server): Installs or removes screws on the front media module.

Flat-head screwdriver: Installs or removes captive screws inside multifunctional rack mount ears or replaces system batteries.

Phillips screwdriver: Installs or removes screws on SATA M.2 SSDs.

Cage nut insertion/extraction tool: Inserts or extracts the cage nuts in rack posts.

Diagonal pliers: Clips insulating sleeves.

Tape measure: Measures distance.

Multimeter: Measures resistance and voltage.

ESD wrist strap: Prevents ESD when you operate the server.

Antistatic gloves: Prevents ESD when you operate the server.

Antistatic clothing: Prevents ESD when you operate the server.

Ladder: Supports high-place operations.

Interface cable (such as an Ethernet cable or optical fiber): Connects the server to an external network.

Monitor (such as a PC): Displays the output from the server.

 

 


3 Installing or removing the server

Installing the server

As a best practice, install hardware options as needed to the server before installing the server in the rack. For more information about how to install hardware options, see "Installing hardware options."

Installing rails

Install the inner rails to the server and the middle-outer rails to the rack. For information about installing the rails, see the document shipped with the rails.

Rack-mounting the server

WARNING!

To avoid bodily injury, slide the server into the rack with caution, because the slide rails might pinch your fingers.

 

1.        Slide the server into the rack. For more information about how to slide the server into the rack, see the document shipped with the rails.

Figure 3-1 Rack-mounting the server


 

2.        Secure the server:

If the server is installed with multifunctional rack mount ears, perform the following steps as shown in Figure 3-2:

a.     Push the server until the multifunctional rack mount ears are flush against the rack front posts.

b.    Unlock the latches of the multifunctional rack mount ears.

c.     Fasten the captive screws inside the chassis ears and lock the latches.

Figure 3-2 Securing the server with multifunctional rack mount ears


 

If the server is installed with screw rack mount ears, perform the following steps as shown in Figure 3-3:

a.     Push the server until the screw rack mount ears are flush against the rack front posts.

b.    Fasten the captive screws on the screw rack mount ears.

Figure 3-3 Securing the server with screw rack mount ears


 

Installing cable management brackets

Install cable management brackets if the server is shipped with cable management brackets. For information about how to install cable management brackets, see the installation guide shipped with the brackets.

Connecting external cables

Cabling guidelines

WARNING!

To avoid electric shock, fire, or damage to the equipment, do not connect a telephone or other telecommunications equipment to the RJ-45 Ethernet ports on the server.

 

·          For heat dissipation, make sure no cables block the inlet or outlet air vents of the server.

·          To make it easy to identify ports and to connect or disconnect cables, make sure the cables do not cross one another.

·          Label the cables for easy identification.

·          Coil unused cables and secure them to an appropriate position on the rack.

·          To avoid damage to the cables when you extend the server out of the rack, do not route the cables too tightly if you use cable management brackets.

Connecting a mouse, keyboard, and monitor

About this task

Perform this task before you configure BIOS, HDM, FIST, or RAID on the server or enter the operating system of the server.

The server provides a maximum of two DB-15 VGA connectors to connect monitors.

·          One on the front panel if the server is installed with a front media module.

·          One on the rear panel.

The server is not shipped with a standard PS2 mouse and keyboard. To connect a PS2 mouse and keyboard, you must prepare a USB-to-PS2 adapter.

Procedure

1.        Connect one plug of a VGA cable to a VGA connector on the server, and fasten the screws on the plug.

Figure 3-4 Connecting a VGA cable

 

2.        Connect the other plug of the VGA cable to the VGA connector on the monitor, and fasten the screws on the plug.

3.        Connect the mouse and keyboard.

¡  For a USB mouse and keyboard, directly connect the USB connectors of the mouse and keyboard to the USB connectors on the server.

¡  For a PS2 mouse and keyboard, insert the USB connector of the USB-to-PS2 adapter to a USB connector on the server. Then, insert the PS2 connectors of the mouse and keyboard into the PS2 receptacles of the adapter.

Figure 3-5 Connecting a PS2 mouse and keyboard by using a USB-to-PS2 adapter

 

Connecting an Ethernet cable

About this task

Perform this task before you set up a network environment or log in to the HDM management interface through the HDM dedicated network port to manage the server.

Procedure

1.        Determine the network port on the server.

¡  To connect the server to the external network, use the Ethernet port on the Ethernet adapter.

¡  To log in to the HDM management interface, use the HDM dedicated or shared network port on the server.

The HDM shared network port is available only if an NCSI-capable mLOM or PCIe Ethernet adapter is installed.

For the position of the HDM dedicated network port, see "Rear panel view." For the position of the HDM shared network port on an mLOM or PCIe Ethernet adapter, see "Installing Ethernet adapters."

2.        Determine the type of Ethernet cable.

Verify the connectivity of the cable by using a link tester.

If you are replacing the Ethernet cable, make sure the new cable is the same type or compatible with the old cable.

3.        Label the Ethernet cable.

As a best practice, use labels of the same type for all cables and put the names and numbers of the server and its peer device on the labels.

If you are replacing the Ethernet cable, make sure the new label contains the same contents as the old label.

4.        Connect one end of the Ethernet cable to the network port on the server and the other end to the peer device.

Figure 3-6 Connecting an Ethernet cable

 

5.        Verify network connectivity.

After powering on the server, use the ping command to test the network connectivity. If the connection between the server and the peer device fails, verify that the Ethernet cable is securely connected.
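If the server or a management host runs Linux, the connectivity test described above might look like the following sketch. The peer address 192.0.2.10 is a placeholder; substitute the IP address of the peer device in your environment.

```shell
# Test network reachability of the peer device. Replace
# 192.0.2.10 with the peer's IP address. -c 2 stops after
# two echo requests and -W 1 sets a 1-second reply timeout.
if ping -c 2 -W 1 192.0.2.10 >/dev/null 2>&1; then
    echo "peer reachable"
else
    echo "peer unreachable: verify the Ethernet cable connection"
fi
```

A zero exit status from ping indicates that the peer responded; a failure suggests checking the cable connection at both ends before investigating higher-layer configuration.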

6.        Secure the Ethernet cable. For information about how to secure cables, see "Securing cables."

Connecting a USB device

About this task

Perform this task before you install the operating system of the server or transmit data through a USB device.

The server provides a maximum of six USB connectors.

·          Two USB 2.0 connectors on the front panel if the server is installed with a front media module.

·          Two USB 3.0 connectors on the rear panel.

·          Two internal USB 3.0 connectors for connecting USB devices that are intended for frequent use without removal.

Guidelines

Before connecting a USB device, make sure the USB device can operate correctly and then copy data to the USB device.

USB devices are hot swappable.

As a best practice for compatibility, use H3C approved USB devices.

Procedure

1.        Remove the access panel if you are connecting the USB device to an internal USB connector. For information about the removal procedure, see "Removing the access panel."

2.        Connect the USB device to the USB connector, as shown in Figure 3-7.

Figure 3-7 Connecting a USB device to an internal USB connector

 

3.        Install the access panel. For information about the installation procedure, see "Installing the access panel."

4.        Verify that the server can identify the USB device.

If the server fails to identify the USB device, download and install the USB device driver. If the server still cannot identify the USB device, replace the USB device.
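If the server runs Linux, one way to check whether the operating system has identified the USB device is to list the detected USB devices and review recent kernel messages. This is a generic sketch; the names in the output depend on the connected device.

```shell
# List USB devices that the kernel has identified. lsusb is
# part of the usbutils package; on systems without it, the
# same information is available under /sys/bus/usb/devices.
if command -v lsusb >/dev/null 2>&1; then
    lsusb
else
    ls /sys/bus/usb/devices/ 2>/dev/null || true
fi

# Recent kernel messages record USB attach events and any
# driver binding errors (reading them may require root).
dmesg 2>/dev/null | tail -n 20
```

If the device does not appear in either output after reconnecting it, the driver or the device itself is the likely cause, matching the troubleshooting order given above.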

Connecting the power cord

Guidelines

WARNING!

To avoid damage to the equipment or even bodily injury, use the power cord that ships with the server.

 

Before connecting the power cord, make sure the server and components are installed correctly.

Connecting the AC power cord for an AC or 240 V high-voltage DC power supply

1.        Insert the power cord plug into the power receptacle of a power supply at the rear panel, as shown in Figure 3-8.

Figure 3-8 Connecting the AC power cord

 

2.        Connect the other end of the power cord to the power source, for example, the power strip on the rack.

3.        Secure the power cord to avoid unexpected disconnection of the power cord.

a.     If the cable clamp blocks the power cord plug connection, press down the tab on the cable clamp and slide the clamp backward.

Figure 3-9 Sliding the cable clamp backward

 

b.    Open the cable clamp, place the power cord through the opening in the cable clamp, and then close the cable clamp, as shown by callouts 1, 2, 3, and 4 in Figure 3-10.

Figure 3-10 Securing the AC power cord

 

c.     Slide the cable clamp forward until it is flush against the edge of the power cord plug, as shown in Figure 3-11.

Figure 3-11 Sliding the cable clamp forward

 

Connecting the DC power cord for a –48 VDC power supply

WARNING!

Provide a circuit breaker for each power cord. Make sure the circuit breaker is switched off before you connect a DC power cord.

 

To connect the DC power cord for a –48 VDC power supply:

1.        Connect the power cord plug to the power receptacle of a –48 VDC power supply at the rear panel, as shown in Figure 3-12.

Figure 3-12 Connecting the DC power cord

 

2.        Fasten the screws on the power cord plug to secure it into place, as shown in Figure 3-13.

Figure 3-13 Securing the DC power cord

 

3.        Connect the other end of the power cord to the power source, as shown in Figure 3-14.

The DC power cord contains three wires: –48V GND, –48V, and PGND. Connect the three wires to the corresponding terminals of the power source. The wire tags in the figure are for illustration only.

Figure 3-14 Three wires at the other end of the DC power cord

 

Securing cables

Securing cables to cable management brackets

For information about how to secure cables to cable management brackets, see the installation guide shipped with the brackets.

Securing cables to slide rails by using cable straps

You can secure cables to either left slide rails or right slide rails. As a best practice for cable management, secure cables to left slide rails.

When multiple cable straps are used in the same rack, stagger the strap locations so that the straps are offset from one another when viewed from top to bottom. This positioning enables the slide rails to slide easily in and out of the rack.

To secure cables to slide rails by using cable straps:

1.        Hold the cables against a slide rail.

2.        Wrap the strap around the slide rail and loop the end of the cable strap through the buckle.

3.        Dress the cable strap to ensure that the extra length and buckle part of the strap are facing outside of the slide rail.

Figure 3-15 Securing cables to a slide rail


 

Removing the server from a rack

1.        Power down the server. For more information, see "Powering off the server."

2.        Disconnect all peripheral cables from the server.

3.        Extend the server from the rack, as shown in Figure 3-16.

If the server is installed with multifunctional rack mount ears, perform the following steps as shown in Figure 3-16:

a.     Open the latches of the multifunctional rack mount ears.

b.    Loosen the captive screws inside the multifunctional rack mount ears.

c.     Slide the server out of the rack.

Figure 3-16 Extending the server from the rack


 

If the server is installed with screw rack mount ears, loosen the captive screws on the screw rack mount ears, and then slide the server out of the rack.

4.        Place the server on a clean, stable surface.

 


4 Powering on and powering off the server

Important information

If the server is connected to external storage devices, make sure the server is the first device to power off and the last device to power on. This restriction prevents the server from mistakenly identifying the external storage devices as faulty devices.

Powering on the server

Prerequisites

Before you power on the server, you must complete the following tasks:

·          Install the server and internal components correctly.

·          Connect the server to a power source.

Procedure

Choose one of the following methods as needed:

·          Powering on the server by pressing the power on/standby button

·          Powering on the server from the HDM Web interface

·          Powering on the server from the remote console interface

·          Configuring automatic power-on

Powering on the server by pressing the power on/standby button

Press the power on/standby button to power on the server.

The server exits standby mode and supplies power to the system. The system power LED changes from steady amber to flashing green and then to steady green. For information about the position of the system power LED, see "LEDs and buttons."

Powering on the server from the HDM Web interface

1.        Log in to HDM.

For information about how to log in to HDM, see the firmware update guide for the server.

2.        Power on the server.

For more information, see HDM online help.

Powering on the server from the remote console interface

1.        Log in to HDM.

For information about how to log in to HDM, see the firmware update guide for the server.

2.        Log in to a remote console and then power on the server.

For information about how to log in to a remote console, see HDM online help.

Configuring automatic power-on

You can configure automatic power-on from HDM or the BIOS.

To configure automatic power-on from HDM:

1.        Log in to HDM.

For information about how to log in to HDM, see the firmware update guide for the server.

2.        Enable automatic power-on for the server.

For more information, see HDM online help.

To configure automatic power-on from the BIOS, set AC Restore Settings to Always Power On. For more information, see the BIOS user guide for the server.
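If HDM supports standard IPMI over LAN (an assumption to verify in your HDM documentation), the equivalent power restore policy can also be set out of band with ipmitool. The sketch below only builds the command line; the host address and credentials are placeholders, not real values.

```python
def build_policy_cmd(host, user, password, policy="always-on"):
    """Build an ipmitool command line that sets the chassis power
    restore policy (the out-of-band counterpart of the BIOS
    'AC Restore Settings: Always Power On' option)."""
    valid = ("always-on", "always-off", "previous")
    if policy not in valid:
        raise ValueError(f"policy must be one of {valid}")
    return ["ipmitool", "-I", "lanplus", "-H", host,
            "-U", user, "-P", password, "chassis", "policy", policy]

# Placeholder address and credentials for illustration only:
cmd = build_policy_cmd("192.0.2.10", "admin", "password")
print(" ".join(cmd))
# Run it with subprocess.run(cmd, check=True) once the values are real.
```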

Powering off the server

Prerequisites

Before powering off the server, you must complete the following tasks:

·          Back up all critical data.

·          Make sure all services have stopped or have been moved to other servers.

Procedure

Choose one of the following methods as needed:

·          Powering off the server from its operating system

·          Powering off the server by pressing the power on/standby button

·          Powering off the server forcibly by pressing the power on/standby button

·          Powering off the server from the HDM Web interface

·          Powering off the server from the remote console interface

Powering off the server from its operating system

1.        Connect a monitor, mouse, and keyboard to the server.

2.        Shut down the operating system of the server.

3.        Disconnect all power cords from the server.

Powering off the server by pressing the power on/standby button

1.        Press the power on/standby button and wait for the power on/standby button LED to turn steady amber.

2.        Disconnect all power cords from the server.

Powering off the server forcibly by pressing the power on/standby button


IMPORTANT:

This method forces the server to enter standby mode without properly exiting applications and the operating system. Use this method only when the server system crashes, for example, when a process is stuck.

 

1.        Press and hold the power on/standby button until the system power LED turns steady amber.

2.        Disconnect all power cords from the server.

Powering off the server from the HDM Web interface

1.        Log in to HDM.

For information about how to log in to HDM, see the firmware update guide for the server.

2.        Power off the server.

For more information, see HDM online help.

3.        Disconnect all power cords from the server.

Powering off the server from the remote console interface

1.        Log in to HDM.

For information about how to log in to HDM, see the firmware update guide for the server.

2.        Log in to a remote console, and then power off the server.

For information about how to log in to a remote console, see HDM online help.

3.        Disconnect all power cords from the server.


5 Configuring the server

This chapter describes the procedures for configuring the server after the installation is complete.

Configuration flowchart

Figure 5-1 Configuration flowchart

 

Powering on the server

1.        Power on the server. For information about the procedures, see "Powering on the server."

2.        Verify that the health LED on the front panel is steady green, which indicates that the system is operating correctly. For more information about the health LED status, see "LEDs and buttons."

Configuring basic BIOS settings

You can set the server boot order and the BIOS user and administrator passwords from the BIOS setup utility of the server.

Setting the server boot order

The server has a default boot order and you can change the server boot order from the BIOS. For more information about changing the server boot order, see the BIOS user guide for the server.

Setting the BIOS passwords

For more information about setting the BIOS passwords, see the BIOS user guide for the server.

Configuring RAID

Configure physical and logical drives (RAID arrays) for the server.

The supported RAID levels and RAID configuration methods vary by storage controller model. For more information, see the storage controller user guide for the server.

Installing the operating system and hardware drivers

Installing the operating system

Install a compatible operating system on the server by following the procedures described in the operating system installation guide for the server.

For information about the operating system compatibility, see the operating system compatibility matrix for the server.

Installing hardware drivers


IMPORTANT:

To prevent hardware unavailability caused by an update failure, always back up the drivers before you update them.

 

For newly installed hardware to operate correctly, the operating system must have the required hardware drivers.

To install a hardware driver, see the operating system installation guide for the server.

Updating firmware


IMPORTANT:

Verify the hardware and software compatibility before firmware upgrade. For information about the hardware and software compatibility, see the software release notes.

 

You can update the following firmware from FIST or HDM:

·          HDM.

·          BIOS.

·          CPLD.

For information about the update procedures, see the firmware update guide for the server.


6 Installing hardware options

If you are installing multiple hardware options, read their installation procedures and identify similar steps to streamline the entire installation procedure.

Installing the security bezel

1.        Press the right edge of the security bezel into the groove in the right chassis ear on the server. See callout 1 in Figure 6-1.

2.        Press the latch at the other end, close the security bezel, and then release the latch to secure the security bezel into place. See callouts 2 and 3 in Figure 6-1.

3.        Insert the key provided with the bezel into the lock on the bezel and lock the security bezel (see callout 4 in Figure 6-1). Then, pull out the key and keep it safe.

 


CAUTION:

To avoid damage to the lock, hold the key pressed in while turning it.

 

Figure 6-1 Installing the security bezel

 

Installing SAS/SATA drives

Guidelines

The SAS/SATA drives are hot swappable. If you hot swap an HDD repeatedly within 30 seconds, the system might fail to identify the drive.

If you are using SAS/SATA drives to create a RAID array, follow these restrictions and guidelines:

·          To build a RAID array (a logical drive) successfully, make sure all drives in the RAID array are the same type (HDDs or SSDs) and have the same connector type (SAS or SATA).

·          For efficient use of storage, use drives that have the same capacity to build a RAID array. If the drives have different capacities, only the lowest capacity is used on each drive in the RAID array. Using one drive in several RAID arrays might degrade RAID performance and increase maintenance complexity.

·          If the installed drive contains RAID information, you must clear the information before using the drive to build a RAID array. For more information, see the storage controller user guide for the server.
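The capacity guideline above can be sketched numerically. The snippet below is an illustrative calculation (not an H3C tool) of the usable capacity of a RAID array when member drives differ in size; each member contributes only the smallest capacity in the set.

```python
def raid_usable_capacity(level, drive_gib):
    """Illustrative usable capacity for common RAID levels. Each member
    contributes only min(drive_gib), which is why mixing drive
    capacities wastes space."""
    n = len(drive_gib)
    unit = min(drive_gib)  # lowest capacity is used across all drives
    if level == 0:
        return unit * n
    if level == 1 and n >= 2:
        return unit               # n-way mirror
    if level == 5 and n >= 3:
        return unit * (n - 1)     # one drive's worth of parity
    if level == 6 and n >= 4:
        return unit * (n - 2)     # two drives' worth of parity
    raise ValueError("unsupported level/drive count in this sketch")

# Mixing one 600 GiB drive into a RAID 5 of 1200 GiB drives wastes space:
print(raid_usable_capacity(5, [1200, 1200, 1200, 600]))  # 1800, not 3000
```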

Procedure

1.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

2.        Press the latch on the drive blank inward, and pull the drive blank out of the slot, as shown in Figure 6-2.

Figure 6-2 Removing the drive blank


 

3.        Install the drive:

a.     Press the button on the drive panel to release the locking lever.

Figure 6-3 Releasing the locking lever


 

b.    Insert the drive into the drive bay and push it gently until you cannot push it further.

c.     Close the locking lever until it snaps into place.

Figure 6-4 Installing a drive


 

4.        Install the removed security bezel. For more information, see "Installing the security bezel."

Verifying the installation

Use the following methods to verify that the drive is installed correctly:

·          Verify the drive properties (including its capacity) by using one of the following methods:

¡  Log in to HDM. For more information, see HDM online help.

¡  Access the BIOS. For more information, see the BIOS user guide for the server.

¡  Access the CLI or GUI of the server.

·          Observe the drive LEDs to verify that the drive is operating correctly. For more information, see "Drive LEDs."

Installing NVMe drives

Guidelines

NVMe drives support hot insertion and managed hot removal.

Only one NVMe drive can be hot inserted at a time. To hot insert multiple NVMe drives, wait a minimum of 60 seconds for the previously installed NVMe drive to be identified before hot inserting another NVMe drive.
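The 60-second spacing rule can be expressed as a simple schedule. The helpers below are illustrative only: one computes the earliest time at which each drive may be hot inserted, the other checks that a sequence of insertion times honors the gap.

```python
def hot_insert_schedule(drive_count, gap_s=60):
    """Earliest time (seconds from the first insertion) at which each
    NVMe drive may be hot inserted, honoring the 60-second gap."""
    return [i * gap_s for i in range(drive_count)]

def spacing_ok(insert_times, gap_s=60):
    """True if consecutive insertions are at least gap_s apart."""
    return all(b - a >= gap_s for a, b in zip(insert_times, insert_times[1:]))

print(hot_insert_schedule(3))   # [0, 60, 120]
print(spacing_ok([0, 30, 90]))  # False: second drive inserted too soon
```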

If you are using NVMe drives to create a RAID array, follow these restrictions and guidelines:

·          For efficient use of storage, use drives that have the same capacity to build a RAID array. If the drives have different capacities, the lowest capacity is used across all drives in the RAID array. An NVMe drive cannot be used to build multiple RAID arrays.

·          If the installed drive contains RAID information, you must clear the information before using the drive to build a RAID array. For more information, see the storage controller user guide for the server.

Procedure

1.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

2.        Push the latch on the drive blank inward, and pull the drive blank out of the slot, as shown in Figure 6-5.

Figure 6-5 Removing the drive blank


 

3.        Install the drive:

a.     Press the button on the drive panel to release the locking lever.

Figure 6-6 Releasing the locking lever


 

b.    Insert the drive into the drive bay and push it gently until you cannot push it further.

c.     Close the locking lever until it snaps into place.

Figure 6-7 Installing a drive


 

4.        Install the removed security bezel. For more information, see "Installing the security bezel."

Verifying the installation

Use the following methods to verify that the drive is installed correctly:

·          Verify the drive properties (including the capacity) by using one of the following methods:

¡  Access HDM. For more information, see HDM online help.

¡  Access the BIOS. For more information, see the BIOS user guide for the server.

¡  Access the CLI or GUI of the server.

·          Observe the drive LEDs to verify that the drive is operating correctly. For more information, see "Drive LEDs."

Installing power supplies

Guidelines

·          The power supplies are hot swappable.

·          Make sure the installed power supplies are the same model. HDM performs a power supply consistency check and generates an alarm if the power supply models are different.

·          To avoid hardware damage, do not use third-party power supplies.

Procedure

1.        As shown in Figure 6-8, remove the power supply blank from the target power supply slot.

Figure 6-8 Removing the power supply blank

 

2.        Align the power supply with the slot, making sure its fan is on the left.

3.        Push the power supply into the slot until it snaps into place.

Figure 6-9 Installing a power supply

 

4.        Connect the power cord. For more information, see "Connecting the power cord."

Verifying the installation

Use one of the following methods to verify that the power supply is installed correctly:

·          Observe the power supply LED to verify that the power supply is operating correctly. For more information about the power supply LED, see LEDs in "Rear panel."

·          Log in to HDM to verify that the power supply is operating correctly. For more information, see HDM online help.

Installing riser cards and PCIe modules

The server provides two PCIe riser connectors on the system board to install the riser cards for PCIe module expansion. For more information about the connector locations, see "System board components."

Guidelines

·          You can install a small-sized PCIe module in a large-sized PCIe slot. For example, an LP PCIe module can be installed in an FHFL PCIe slot.

·          A PCIe slot can supply power to the installed PCIe module if the maximum power consumption of the module does not exceed 75 W. If the maximum power consumption exceeds 75 W, a power cord is required. Only the GPU-M4000-1-X GPU module requires a power cord.

For more information about connecting the power cord, see "Connecting the power cord of a GPU module."

·          For more information about PCIe module and riser card compatibility, see "Riser cards."
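The 75 W slot power rule above reduces to a single comparison. The helper below is an illustrative restatement; the 120 W sample value is hypothetical, not a specification of any particular module.

```python
SLOT_POWER_LIMIT_W = 75  # maximum power a PCIe slot itself can supply

def needs_power_cord(max_module_power_w):
    """True if the PCIe module's maximum draw exceeds what the slot
    can supply, so a separate power cord is required."""
    return max_module_power_w > SLOT_POWER_LIMIT_W

print(needs_power_cord(75))   # False: the slot alone is sufficient
print(needs_power_cord(120))  # True: a power cord is required
```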

Procedure

The riser card installation procedure is the same for PCIe riser connectors 1 and 2. This procedure uses PCIe riser connector 1 as an example.

To install a riser card and PCIe module:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        If a fastening screw retains a PCIe module blank on the riser card, remove and then re-install that screw as follows:

a.     Open the retaining latch on the riser card and then remove the fastening screw, as shown in Figure 6-10.

Figure 6-10 Removing the fastening screw from the riser card

 

b.    Remove the PCIe module blank and re-install the fastening screw upside down, as shown in Figure 6-11.

Figure 6-11 Re-installing the fastening screw on the riser card

 

5.        Install a PCIe module to the riser card:

a.     Open the retaining latch on the riser card, and then pull the PCIe module blank out of the slot, as shown in Figure 6-12.

Figure 6-12 Removing the PCIe module blank

 

b.    Insert the PCIe module into the slot along the guide rails and close the retaining latch to secure the PCIe module into place, as shown in Figure 6-13.

Figure 6-13 Installing the PCIe module

 

6.        Remove the blank on PCIe riser connector 1, as shown in Figure 6-14.

Figure 6-14 Removing the riser card blank on PCIe riser connector 1

 

7.        Install the riser card on the PCIe riser connector, with the two tabs on the card aligned with the notches in the chassis, as shown in Figure 6-15.

 


IMPORTANT:

Make sure the riser card is securely installed. The server cannot be powered up if the connection is loose.

 

Figure 6-15 Installing the riser card

 

8.        (Optional.) Connect PCIe module cables.

9.        Install the access panel. For more information, see "Installing the access panel."

10.     Rack-mount the server. For more information, see "Rack-mounting the server."

11.     Connect the power cord. For more information, see "Connecting the power cord."

12.     Power on the server. For more information, see "Powering on the server."

Installing storage controllers and power fail safeguard modules

For some storage controllers, you can order a power fail safeguard module to prevent data loss from power outages.

A power fail safeguard module provides a flash card and a supercapacitor. When a system power failure occurs, this supercapacitor can provide power for a minimum of 20 seconds. During this interval, the storage controller transfers data from DDR memory to the flash card, where the data remains indefinitely or until the controller retrieves the data.
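The 20-second hold-up time implies a minimum sustained DDR-to-flash transfer rate for a given cache size. The arithmetic below is illustrative; the 2 GiB cache size is a hypothetical example, not a specification of any listed controller.

```python
def min_transfer_rate_mib_s(cache_mib, holdup_s=20):
    """Minimum sustained DDR-to-flash rate needed to drain the
    controller cache before the supercapacitor is exhausted."""
    return cache_mib / holdup_s

# A hypothetical 2 GiB (2048 MiB) controller cache:
print(min_transfer_rate_mib_s(2048))  # 102.4 MiB/s minimum
```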

Guidelines

The supercapacitor might have a low charge after the power fail safeguard module is installed. If the system displays that the supercapacitor has low charge, no action is required. The system will charge the supercapacitor automatically. You can view the state of the supercapacitor from the BIOS.

Each supercapacitor has a short supercapacitor cable attached to it and requires an extension cable for storage controller connection. The required extension cable varies by supercapacitor model and storage controller model. Use Table 6-1 to determine the extension cable to use.

Table 6-1 Supercapacitor extension cable selection

Storage controller type | Storage controller model | Supercapacitor | Extension cable P/N

Mezzanine | RAID-P430-M1, RAID-P430-M2 | Supercapacitor of the Flash-PMC-G2 power fail safeguard module | N/A (the cable does not have a P/N)

Mezzanine | RAID-P460-M2 | BAT-PMC-G3 | 0404A0TG

Mezzanine | RAID-P460-M4 | BAT-PMC-G3 | 0404A0TG

Mezzanine | RAID-L460-M4 | BAT-LSI-G3 | 0404A0XH

Standard | RAID-LSI-9361-8i(1G)-A1-X, RAID-LSI-9361-8i(2G)-1-X | Supercapacitor of the Flash-LSI-G2 power fail safeguard module | 0404A0SV

Standard | RAID-LSI-9460-8i(2G), RAID-LSI-9460-8i(4G) | BAT-LSI-G3 | 0404A0VC

Standard | RAID-P460-B2 | BAT-PMC-G3 | 0404A0TG

Standard | RAID-P460-B4 | BAT-PMC-G3 | 0404A0TG
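Table 6-1 amounts to a lookup keyed by storage controller model. The mapping below restates the table as data for quick reference (P/Ns copied from the table); it is an illustrative sketch, not an H3C tool.

```python
# Extension cable P/N keyed by storage controller model (from Table 6-1).
# None means the cable has no P/N (Flash-PMC-G2 supercapacitor cable).
EXTENSION_CABLE_PN = {
    "RAID-P430-M1": None,
    "RAID-P430-M2": None,
    "RAID-P460-M2": "0404A0TG",
    "RAID-P460-M4": "0404A0TG",
    "RAID-L460-M4": "0404A0XH",
    "RAID-LSI-9361-8i(1G)-A1-X": "0404A0SV",
    "RAID-LSI-9361-8i(2G)-1-X": "0404A0SV",
    "RAID-LSI-9460-8i(2G)": "0404A0VC",
    "RAID-LSI-9460-8i(4G)": "0404A0VC",
    "RAID-P460-B2": "0404A0TG",
    "RAID-P460-B4": "0404A0TG",
}

print(EXTENSION_CABLE_PN["RAID-P460-M4"])  # 0404A0TG
```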

 

Prerequisites

·          Identify the type of the storage controller based on the drive configuration. For more information, see "Drive configurations and numbering."

·          If you are installing a power fail safeguard module, make sure it is compatible with the storage controller. For information about storage controllers and their compatibility matrices, see "Storage controllers."

Installing a Mezzanine storage controller and a power fail safeguard module

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the power supply air baffle if it hinders storage controller installation. For more information, see "Removing air baffles."

5.        Remove a riser card if it hinders storage controller installation. For more information, see "Replacing a riser card and a PCIe module."

6.        Align the pin holes in the Mezzanine storage controller with the guide pins on the system board. Insert the guide pins into the pin holes, and then fasten the three captive screws to secure the controller onto the system board, as shown in Figure 6-16.

 

 

NOTE:

The installation method for all Mezzanine storage controllers is the same. This figure is for illustration only.

 

Figure 6-16 Installing a Mezzanine storage controller

 

7.        Install the flash card of the power fail safeguard module to the storage controller:

 


IMPORTANT:

Skip this step if no power fail safeguard module is required or the storage controller has a built-in flash card. For information about storage controllers with a built-in flash card, see "Storage controllers."

 

a.     Install the two internal threaded studs supplied with the power fail safeguard module on the Mezzanine storage controller, as shown in Figure 6-17.

Figure 6-17 Installing the internal threaded studs

 

b.    Use screws to secure the flash card onto the storage controller, as shown in Figure 6-18.

Figure 6-18 Installing the flash card

 

8.        (Optional.) Install the supercapacitor:

a.     Install the supercapacitor holder. Place the supercapacitor holder in the chassis and then slide it until it snaps into place, as shown in Figure 6-19.

The server comes with a supercapacitor holder in the chassis. If the built-in supercapacitor holder is incompatible with the supercapacitor to be installed, remove the holder and install a compatible one. For more information about removing a supercapacitor holder, see "Replacing the power fail safeguard module for the Mezzanine storage controller."

 

 

NOTE:

The installation method for different supercapacitor holders is the same. This figure is for illustration only.

 

Figure 6-19 Installing the supercapacitor holder

 

b.    Install the supercapacitor. Insert the cableless end of the supercapacitor into the supercapacitor holder, pull the clip on the holder, insert the cable end of the supercapacitor into the holder, and then release the clip, as shown in Figure 6-20.

 

 

NOTE:

·      For simplicity, the figure does not show the cable attached to the supercapacitor.

·      The installation method for different supercapacitors is the same. This figure is for illustration only.

 

Figure 6-20 Installing the supercapacitor

 

c.     Connect the storage controller to the supercapacitor. Connect one end of the supercapacitor extension cable to the supercapacitor cable and the other to the storage controller. For more information about the connection method, see "Connecting the flash card and supercapacitor of the power fail safeguard module."

 


CAUTION:

Make sure the extension cable is the correct one. For more information, see Table 6-1.

 

9.        Connect front drive data cables to the Mezzanine storage controller. For more information, see "Connecting drive cables."

10.     Install the removed riser cards. For more information, see "Installing riser cards and PCIe modules."

11.     Install the removed power supply air baffle. For more information, see "Installing air baffles."

12.     Install the access panel. For more information, see "Installing the access panel."

13.     Rack-mount the server. For more information, see "Rack-mounting the server."

14.     Connect the power cord. For more information, see "Connecting the power cord."

15.     Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the Mezzanine storage controller, flash card, and supercapacitor are operating correctly. For more information, see HDM online help.

Installing a standard storage controller and a power fail safeguard module

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Install the flash card of the power fail safeguard module to the storage controller:

 


IMPORTANT:

Skip this step if no power fail safeguard module is required or the storage controller has a built-in flash card. For information about storage controllers with a built-in flash card, see "Storage controllers."

 

a.     Install the two internal threaded studs supplied with the power fail safeguard module on the storage controller, as shown in Figure 6-21.

Figure 6-21 Installing the internal threaded studs

 

b.    Slowly insert the flash card connector into the socket and use screws to secure the flash card on the storage controller, as shown in Figure 6-22.

Figure 6-22 Installing the flash card

 

5.        Connect one end of the supercapacitor extension cable to the flash card.

 


CAUTION:

Make sure the extension cable is the correct one. For more information, see Table 6-1.

 

¡  If the storage controller is installed with an external flash card, connect the supercapacitor extension cable to the flash card, as shown in Figure 6-23.

Figure 6-23 Connecting the supercapacitor extension cable to the flash card

 

¡  If the storage controller uses a built-in flash card, connect the supercapacitor extension cable to the supercapacitor connector on the storage controller.

6.        Install the storage controller to the server by using a riser card. For more information, see "Installing riser cards and PCIe modules."

7.        (Optional.) Install the supercapacitor, and then connect the other end of the supercapacitor extension cable to the supercapacitor. For more information, see "Connecting the flash card and supercapacitor of the power fail safeguard module."

8.        Connect the data cables of front drives to the storage controller. For more information, see "Connecting drive cables."

9.        Install the access panel. For more information, see "Installing the access panel."

10.     Rack-mount the server. For more information, see "Rack-mounting the server."

11.     Connect the power cord. For more information, see "Connecting the power cord."

12.     Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the standard storage controller, flash card, and supercapacitor are operating correctly. For more information, see HDM online help.

Installing GPU modules

Guidelines

A riser card is required when you install a GPU module.

A GPU module comes with a GPU support bracket if it requires a power cord, as shown in Figure 6-24. This support bracket is required for secure installation on some server models. On an R2700 server, you do not need to install this support bracket.

The GPU-M4000-1-X GPU module requires a power cord (P/N 0404A0M3).

Figure 6-24 GPU support bracket

 

Procedure

The GPU module installation procedure is the same for PCIe slots 1 and 2. This procedure uses PCIe slot 1 as an example.

To install a GPU module:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the PCIe module blank from PCIe slot 1. Make sure its fastening screw is re-installed upside down. For more information, see "Installing riser cards and PCIe modules."

5.        Install the GPU module in PCIe slot 1.

¡  If the GPU module does not require a power cord, open the retaining latch on the riser card, insert the GPU module into the slot, and then close the retaining latch, as shown in Figure 6-25.

Figure 6-25 Installing a GPU module that does not require a power cord (GPU-M4-1 GPU module)

 

¡  If the GPU module requires a power cord, open the retaining latch on the riser card and insert the GPU module into the slot. Then, connect the 6-pin connector of the GPU power cord to the GPU module and connect the other end of the power cord to the riser card.

Figure 6-26 Installing a GPU module that requires a power cord (GPU-M4000-1-X GPU module)

 

6.        Install the riser card on PCIe riser connector 1. For more information, see "Installing riser cards and PCIe modules."

7.        Connect cables for the GPU module as needed.

8.        Install fans. To guarantee cooling performance, you must install fans in all the fan bays. For more information, see "Installing fans."

9.        Install the access panel. For more information, see "Installing the access panel."

10.     Rack-mount the server. For more information, see "Rack-mounting the server."

11.     Connect the power cord. For more information, see "Connecting the power cord."

12.     Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the GPU module is operating correctly. For more information, see HDM online help.

Installing Ethernet adapters

Guidelines

You can install an mLOM Ethernet adapter only in the mLOM Ethernet adapter connector on the system board. For more information about the connector location, see "System board components."

A riser card is required when you install a PCIe Ethernet adapter. For more information about PCIe Ethernet adapter and riser card compatibility, see "Riser cards."

The server supports one HDM shared network port for out-of-band HDM management, which is available if an NCSI-capable mLOM or PCIe Ethernet adapter is installed. By default, port 1 on the mLOM Ethernet adapter (if any) is used as the HDM shared network port. If no mLOM Ethernet adapter is installed, port 1 on the PCIe Ethernet adapter is used. You can change the HDM shared network port as needed from the HDM Web interface.
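The default port selection described above can be summarized as a rule. The function below is an illustrative restatement of that rule, not HDM code.

```python
def default_hdm_shared_port(mlom_installed, ncsi_pcie_installed):
    """Return which adapter's port 1 HDM uses as the shared network
    port by default, or None if no NCSI-capable adapter is present."""
    if mlom_installed:
        return "mLOM Ethernet adapter port 1"
    if ncsi_pcie_installed:
        return "PCIe Ethernet adapter port 1"
    return None

print(default_hdm_shared_port(True, True))   # mLOM Ethernet adapter port 1
print(default_hdm_shared_port(False, True))  # PCIe Ethernet adapter port 1
```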

Installing an mLOM Ethernet adapter

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Install the mLOM Ethernet adapter:

a.     Insert the flathead screwdriver supplied with the server into the opening between the mLOM Ethernet adapter blank and its handle, pry part of the blank out of the chassis with the screwdriver, and then pull the blank out of the chassis completely, as shown in Figure 6-27.

Figure 6-27 Removing the mLOM Ethernet adapter blank

 

b.    Insert the mLOM Ethernet adapter into the slot along the guide rails, and then fasten the captive screws to secure the Ethernet adapter into place, as shown in Figure 6-28.

Some mLOM Ethernet adapters have only one captive screw. This example uses an mLOM Ethernet adapter with two captive screws.

Figure 6-28 Installing an mLOM Ethernet adapter

 

3.        Connect network cables to the mLOM Ethernet adapter.

4.        Connect the power cord. For more information, see "Connecting the power cord."

5.        Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the mLOM Ethernet adapter is operating correctly. For more information, see HDM online help.

Installing a PCIe Ethernet adapter

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Install the PCIe Ethernet adapter. For more information, see "Installing riser cards and PCIe modules."

5.        If the adapter is NCSI-capable and is intended to provide an HDM shared network port, connect the NCSI cable for the PCIe Ethernet adapter. For more information, see "Connecting the NCSI cable for a PCIe Ethernet adapter."

6.        Connect network cables to the PCIe Ethernet adapter.

7.        Install the access panel. For more information, see "Installing the access panel."

8.        Rack-mount the server. For more information, see "Rack-mounting the server."

9.        Connect the power cord. For more information, see "Connecting the power cord."

10.     Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the PCIe Ethernet adapter is operating correctly. For more information, see HDM online help.

Installing SATA M.2 SSDs

Guidelines

An M.2 transfer module is required to install SATA M.2 SSDs.

If you are installing two SATA M.2 SSDs, use SSDs of the same model to ensure high availability.

As a best practice, use SATA M.2 SSDs to install the operating system.

The installation procedure is the same for SATA M.2 SSDs on both sides of the M.2 transfer module.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the chassis air baffle. For more information, see "Removing air baffles."

5.        Insert the SSD into a socket on the M.2 transfer module, as shown in Figure 6-29. Then, fasten the screw supplied with the transfer module to secure the SSD into place.

 

CAUTION:

If you are installing only one SATA M.2 SSD, install it in the socket on top of the transfer module, as shown in Figure 6-29.

 

Figure 6-29 Installing a SATA M.2 SSD on the M.2 transfer module

 

6.        Align the screw holes on the M.2 transfer module with the threaded studs on the system board, and insert the transfer module onto the system board. Then, use screws to secure the transfer module into place, as shown in Figure 6-30.

Figure 6-30 Installing the M.2 transfer module on the system board

 

7.        Connect the SATA M.2 SSD cable to the system board. For more information, see "Connecting the SATA M.2 SSD cable."

8.        Install the removed chassis air baffle. For more information, see "Installing air baffles."

9.        Install the access panel. For more information, see "Installing the access panel."

10.     Rack-mount the server. For more information, see "Rack-mounting the server."

11.     Connect the power cord. For more information, see "Connecting the power cord."

12.     Power on the server. For more information, see "Powering on the server."

Installing SD cards

Guidelines

SD card installation requires a dual SD card extended module.

The SD cards are hot swappable.

To gain redundancy and storage efficiency, install two SD cards of the same capacity.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Insert the SD card into a slot in the dual SD card extended module, with its gold-plated edge facing downward, as shown in Figure 6-31.

Figure 6-31 Installing an SD card

 

5.        Align the two blue clips on the extended module with the bracket on the power supply bay, and slowly slide the extended module downwards until it snaps into place, as shown in Figure 6-32.

Figure 6-32 Installing the dual SD card extended module

 

6.        Install the access panel. For more information, see "Installing the access panel."

7.        Rack-mount the server. For more information, see "Rack-mounting the server."

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Installing an NVMe SSD expander module

Guidelines

To use NVMe drives, you must install NVMe SSD expander modules. For information about NVMe expander module and drive configuration compatibility, see "Drive configurations and numbering."

A riser card is required for NVMe SSD expander module installation.

Procedure

The 4-port and 8-port NVMe SSD expander modules use the same installation procedure. This procedure uses a 4-port NVMe SSD expander module as an example.

To install an NVMe SSD expander module:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the power supply air baffle if it hinders the installation. For more information, see "Removing air baffles."

5.        Connect the four NVMe data cables to the NVMe SSD expander module, as shown in Figure 6-33.

Figure 6-33 Connecting an NVMe data cable to the NVMe SSD expander module

 

6.        Install the NVMe SSD expander module to the server by using a PCIe riser card. For more information, see "Installing riser cards and PCIe modules."

7.        Connect the NVMe data cables to the drive backplane. For more information, see "Connecting drive cables."

Make sure you connect the peer ports with the correct NVMe data cable. For more information, see "Connecting drive cables."

8.        Install the removed power supply air baffle. For more information, see "Installing air baffles."

9.        Install the access panel. For more information, see "Installing the access panel."

10.     Rack-mount the server. For more information, see "Rack-mounting the server."

11.     Connect the power cord. For more information, see "Connecting the power cord."

12.     Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the NVMe SSD expander module is operating correctly. For more information, see HDM online help.

Installing the NVMe VROC module

1.        Identify the NVMe VROC module connector on the system board. For more information, see "System board components."

2.        Power off the server. For more information, see "Powering off the server."

3.        Remove the server from the rack. For more information, see "Removing the server from a rack."

4.        Remove the access panel. For more information, see "Removing the access panel."

5.        Remove the chassis air baffle. For more information, see "Removing air baffles."

6.        Insert the NVMe VROC module onto the NVMe VROC module connector on the system board, as shown in Figure 6-34.

Figure 6-34 Installing the NVMe VROC module

 

7.        Install the removed chassis air baffle. For more information, see "Installing air baffles."

8.        Install the access panel. For more information, see "Installing the access panel."

9.        Rack-mount the server. For more information, see "Rack-mounting the server."

10.     Connect the power cord. For more information, see "Connecting the power cord."

11.     Power on the server. For more information, see "Powering on the server."

Installing the 2SFF drive cage

Installing the front 2SFF drive cage (8SFF server only)

Only the 8SFF server supports installing a front 2SFF drive cage.

To install the front 2SFF drive cage:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the screws that secure the drive cage blank from the server, and then pull the blank out of the chassis, as shown in Figure 6-35.

Figure 6-35 Removing the drive cage blank

 

5.        Insert the 2SFF drive cage into the slot and then use screws to secure it into place, as shown in Figure 6-36.

Figure 6-36 Installing the 2SFF drive cage

 

6.        Connect the AUX signal cable, data cable, and power cord to the front 2SFF drive backplane. For more information about cable connection to the front 2SFF SAS/SATA drive backplane and the front 2SFF NVMe drive backplane, see Figure 8-9 and Figure 8-15, respectively.

7.        Install drives in the front 2SFF drive cage. For more information, see "Installing SAS/SATA drives."

8.        Install the access panel. For more information, see "Installing the access panel."

9.        Rack-mount the server. For more information, see "Rack-mounting the server."

10.     Connect the power cord. For more information, see "Connecting the power cord."

11.     Power on the server. For more information, see "Powering on the server."

Installing the rear 2SFF drive cage (4LFF/10SFF server only)

Guidelines

Only the 4LFF and 10SFF servers support installing a rear 2SFF drive cage.

If drives are installed in the rear drive cage, make sure all seven fans are present before you power on the server. For more information about installing fans, see "Installing fans."

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the power supply air baffle. For more information, see "Removing air baffles."

5.        Remove the blanks on PCIe riser connectors 1 and 2, as shown in Figure 6-37.

Figure 6-37 Removing the blanks on PCIe riser connectors 1 and 2

 

6.        Install the rear 2SFF drive cage:

a.     Align the three tabs on the cage with the three notches on the chassis, and place the drive cage in the chassis, as shown by callout 1 in Figure 6-38.

b.    Fasten the captive screw to secure the drive cage, as shown in Figure 6-38.

Figure 6-38 Installing the rear 2SFF drive cage

 

7.        For the 4LFF server, disconnect the existing 1-to-1 SAS/SATA data cable from the front drive backplane.

8.        Connect the AUX signal cable, 1-to-2 data cable, and power cord to the rear 2SFF drive backplane. For more information, see "Connecting drive cables."

9.        Install drives in the rear 2SFF drive cage. For more information, see "Installing SAS/SATA drives."

10.     Install the removed power supply air baffle. For more information, see "Installing air baffles."

11.     Install the access panel. For more information, see "Installing the access panel."

12.     Rack-mount the server. For more information, see "Rack-mounting the server."

13.     Connect the power cord. For more information, see "Connecting the power cord."

14.     Power on the server. For more information, see "Powering on the server."

Installing the front media module (VGA and USB 2.0 connectors)

A front media module provides a VGA connector and two USB 2.0 connectors.

Installing the front media module for the 4LFF server

1.        Identify the installation location. For more information, see "Front panel view."

2.        Power off the server. For more information, see "Powering off the server."

3.        Remove the server from the rack. For more information, see "Removing the server from a rack."

4.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

5.        Remove the access panel. For more information, see "Removing the access panel."

6.        Pull the front media module blank out of the slot, as shown in Figure 6-39.

Figure 6-39 Removing the front media module blank

 

7.        Place the front media module inside the chassis and push the module toward the front of the server until the connector on the module protrudes out of the front panel of the server, as shown in Figure 6-40.

Figure 6-40 Installing the front media module

 

8.        Connect the front media module cable to the system board:

a.     Remove the factory pre-installed chassis-open alarm module, if any. For more information, see "Removing the chassis-open alarm module."

b.    Install the chassis-open alarm module attached to the front media module. For more information, see "Installing the chassis-open alarm module."

c.     Connect the front media cable to the system board. For more information, see "Connecting the front media module cable."

9.        Connect the external VGA and USB 2.0 cable to the front media module, and then fasten the captive screws, as shown in Figure 6-41.

Figure 6-41 Connecting the external VGA and USB 2.0 cable to the front media module

 

10.     Install the access panel. For more information, see "Installing the access panel."

11.     Rack-mount the server. For more information, see "Rack-mounting the server."

12.     Connect the power cord. For more information, see "Connecting the power cord."

13.     Power on the server. For more information, see "Powering on the server."

Installing the front media module for the 8SFF server and 10SFF server

1.        Identify the installation location. For more information, see "Front panel view."

2.        Power off the server. For more information, see "Powering off the server."

3.        Remove the server from the rack. For more information, see "Removing the server from a rack."

4.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

5.        Remove the access panel. For more information, see "Removing the access panel."

6.        Remove the front I/O module from the slot shared with the front media module, and then remove the I/O component from the front I/O module. For more information, see "Replacing the front I/O component (8SFF/10SFF server)."

Figure 6-42 Slot shared by the front I/O module and the front media module

 

7.        Install the removed front I/O component in the front media module, as shown in Figure 6-43.

 

 

NOTE:

For simplicity, this figure does not show the front media module component.

 

Figure 6-43 Installing the front I/O component in the front media module

 

8.        Insert the front media module in the front media module slot, and then use a screw to secure it into place, as shown in Figure 6-44.

Figure 6-44 Installing the front media module

 

9.        Connect the front media module cable to the system board:

a.     Remove the factory pre-installed chassis-open alarm module. For more information about the removal procedure, see "Removing the chassis-open alarm module."

b.    Install the chassis-open alarm module attached to the front media module. For more information, see "Installing the chassis-open alarm module."

c.     Connect the front media module cable to the system board. For more information, see "Connecting the front media module cable."

10.     Connect the external VGA and USB 2.0 cable to the front media module, and then fasten the captive screws, as shown in "Installing the front media module for the 4LFF server."

11.     Connect the front I/O component cable assembly. For more information, see "Connecting the front I/O component cable assembly."

12.     Install the access panel. For more information, see "Installing the access panel."

13.     Rack-mount the server. For more information, see "Rack-mounting the server."

14.     Connect the power cord. For more information, see "Connecting the power cord."

15.     Power on the server. For more information, see "Powering on the server."

Installing an optical drive

Preparing for the installation

Use Table 6-2 to determine the installation location for the optical drive based on its type.

Table 6-2 Optical drive installation locations

Optical drive

Installation location

USB 2.0 optical drive

Connect the optical drive to a USB 2.0 or USB 3.0 connector on the server.

SATA optical drive

·         4LFF server: Optical drive slot.

·         8SFF server: Optical drive slot.

·         10SFF server: Not supported.

For the location of the optical drive slot, see "Front panel view."

 

Installing a SATA optical drive on the 4LFF server

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

5.        Press the clip on the right side of the blank in the optical drive slot until the blank pops out partially, and pull the blank out of the slot, as shown in Figure 6-45.

Figure 6-45 Removing the blank from the optical drive slot

 

6.        Install the drive in the slot, with the guide pins on the chassis aligned with the two holes in one side of the optical drive, as shown in Figure 6-46.

Figure 6-46 Installing the SATA optical drive for the 4LFF drive configuration

 

7.        Connect the SATA optical drive cable. For more information, see "Connecting the SATA optical drive cable."

8.        Install the removed security bezel. For more information, see "Installing the security bezel."

9.        Install the access panel. For more information, see "Installing the access panel."

10.     Rack-mount the server. For more information, see "Rack-mounting the server."

11.     Connect the power cord. For more information, see "Connecting the power cord."

12.     Power on the server. For more information, see "Powering on the server."

Installing a SATA optical drive on the 8SFF server

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

5.        Install the optical drive by using a drive enablement option:

a.     Press the clip on the right side of the blank in the enablement option until the blank pops out partially, and pull the blank out of the enablement option, as shown in Figure 6-47.

Figure 6-47 Removing the blank from the drive enablement option

 

b.    Insert the SATA optical drive into the enablement option and fasten the screw to secure it into place, as shown in Figure 6-48.

Figure 6-48 Inserting the SATA optical drive into the slot on the enablement option

 

6.        Install the enablement option in the front upper right slot of the server:

a.     Remove the fastening screws on the blank in the slot, and then push the blank out of the chassis, as shown in Figure 6-49.

Figure 6-49 Removing the blank in the front upper right slot of the server

 

b.    Insert the enablement option into the slot and use screws to secure it into place, as shown in Figure 6-50.

Figure 6-50 Installing the enablement option

 

7.        Connect the SATA optical drive cable. For more information, see "Connecting the SATA optical drive cable."

8.        Install the removed security bezel. For more information, see "Installing the security bezel."

9.        Install the access panel. For more information, see "Installing the access panel."

10.     Rack-mount the server. For more information, see "Rack-mounting the server."

11.     Connect the power cord. For more information, see "Connecting the power cord."

12.     Power on the server. For more information, see "Powering on the server."

Installing a diagnostic panel

Preparing for the installation

Verify that the diagnostic panel is compatible with your server model, as follows:

·          SFF diagnostic panel for 8SFF and 10SFF servers.

·          LFF diagnostic panel for the 4LFF server.

For the installation location of the diagnostic panel, see "Front panel view."

Identify the diagnostic panel cable before you install the diagnostic panel. The P/N of the cable is 0404A0SP.

Procedure

The installation procedure is the same for SFF and LFF diagnostic panels. This procedure uses an SFF diagnostic panel as an example.

To install a diagnostic panel:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the blank or drive from the slot in which the diagnostic panel will be installed.

For more information about removing the blank, see "Installing SAS/SATA drives."

For more information about removing the drive, see "Replacing a SAS/SATA drive."

4.        Install the diagnostic panel:

a.     Connect the diagnostic panel cable to the diagnostic panel, as shown in Figure 6-51.

Figure 6-51 Connecting the diagnostic panel cable to the diagnostic panel

 

b.    Push the diagnostic panel into the slot until it snaps into place, as shown in Figure 6-52.

Figure 6-52 Installing the SFF diagnostic panel

 

5.        Install the removed security bezel. For more information, see "Installing the security bezel."

6.        Connect the power cord. For more information, see "Connecting the power cord."

7.        Power on the server. For more information, see "Powering on the server."

Installing fans

Guidelines

The fans are hot swappable. If sufficient space is available, you can install fans without powering off the server or removing it from the rack. The following procedure assumes that sufficient space is not available.

The server provides seven fan bays. When you configure fans, use the following guidelines:

·          You must install functioning fans in all fan bays if any of the following components are installed:

¡  NVMe drives.

¡  Rear drives.

¡  GPU module.

·          If none of the components listed above are used, you can leave some of the fan bays empty depending on the number of processors, as follows:

¡  If only one processor is present, you can leave fan bays 1, 2, and 4 empty, with fan bays 3, 5, 6, and 7 populated with fans.

¡  If two processors are present, you can leave fan bay 4 empty, with all the remaining fan bays populated with fans.

·          If a fan bay is empty, make sure a fan blank is installed. For the locations of fans in the server, see "Fans."
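The population rules above can be sketched as a small validation helper. This is an illustrative example, not H3C software; the function name and parameters are assumptions:

```python
def required_fan_bays(num_processors, has_nvme, has_rear_drives, has_gpu):
    """Return the set of fan bays (1-7) that must hold functioning fans.

    Encodes the guidelines above: all seven bays must be populated when
    NVMe drives, rear drives, or a GPU module are present. Otherwise,
    bays 1, 2, and 4 (one processor) or bay 4 (two processors) may stay
    empty. Any empty bay still requires a fan blank.
    """
    all_bays = set(range(1, 8))
    if has_nvme or has_rear_drives or has_gpu:
        return all_bays
    if num_processors == 1:
        return all_bays - {1, 2, 4}   # bays 3, 5, 6, and 7 populated
    return all_bays - {4}             # two processors: only bay 4 may be empty

print(sorted(required_fan_bays(1, False, False, False)))  # [3, 5, 6, 7]
print(sorted(required_fan_bays(2, True, False, False)))   # [1, 2, 3, 4, 5, 6, 7]
```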

The server will be powered off gracefully if any of its sensors detects that the temperature has reached the critical threshold. If the temperature of a critical component, such as a processor, exceeds the overtemperature threshold, the server is powered off immediately. You can view the detected temperatures and thresholds from the HDM Web interface.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the chassis air baffles. For more information, see "Removing air baffles."

5.        Lift the fan blank to remove it, as shown in Figure 6-53.

Figure 6-53 Removing a fan blank

 

6.        Install the fan in the fan bay, as shown in Figure 6-54.

Figure 6-54 Installing a fan

 

7.        Install the chassis air baffles. For more information, see "Installing air baffles."

8.        Install the access panel. For more information, see "Installing the access panel."

9.        Rack-mount the server. For more information, see "Rack-mounting the server."

10.     Connect the power cord. For more information, see "Connecting the power cord."

11.     Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the fans are operating correctly. For more information, see HDM online help.

Installing processors

Guidelines

·          To avoid damage to the processors or system board, only H3C-authorized personnel and professional server engineers are allowed to install a processor.

·          For the server to operate correctly, make sure processor 1 is always in position. For more information about processor locations, see "System board components."

·          Make sure the processors are the same model if two processors are installed.

·          The pins in the processor socket are very fragile. Make sure a processor socket cover is installed on an empty processor socket.

·          To avoid ESD damage, put on an ESD wrist strap before performing this task, and make sure the wrist strap is reliably grounded.

Procedure

1.        Back up all server data.

2.        Power off the server. For more information, see "Powering off the server."

3.        Remove the server from the rack. For more information, see "Removing the server from a rack."

4.        Remove the access panel. For more information, see "Removing the access panel."

5.        Remove the chassis air baffle. For more information, see "Removing air baffles."

6.        Install a processor onto the retaining bracket, as shown in Figure 6-55:

 

CAUTION:

To avoid damage to the processor, always hold the processor by its edges. Never touch the gold contacts on the processor bottom.

 

a.     Align the small triangle on the processor with the alignment triangle in the retaining bracket, and align the guide pin on the bracket with the notch on the triangle side of the processor.

b.    Lower the processor gently and make sure the guide pins on the opposite side of the bracket fit snugly into the notches on the processor.

Figure 6-55 Installing a processor onto the retaining bracket

 

7.        Install the retaining bracket onto the heatsink:

 

CAUTION:

When you remove the protective cover over the heatsink, be careful not to touch the thermal grease on the heatsink.

 

a.     Lift the cover straight up until it is removed from the heatsink, as shown in Figure 6-56.

Figure 6-56 Removing the protective cover

 

b.    Install the retaining bracket onto the heatsink. As shown in Figure 6-57, align the alignment triangle on the retaining bracket with the cut-off corner of the heatsink. Place the bracket on top of the heatsink, with the four corners of the bracket clicked into the four corners of the heatsink.

Figure 6-57 Installing the processor onto the heatsink

 

8.        Remove the processor socket cover.

 

CAUTION:

·      Take adequate ESD preventive measures when you remove the processor socket cover.

·      Be careful not to touch the pins on the processor socket. They are very fragile, and damaged pins require system board replacement.

·      Keep the pins on the processor socket clean. Make sure the socket is free from dust and debris.

 

Hold the cover by the notches on its two edges and lift it straight up and away from the socket. Put the cover away for future use.

Figure 6-58 Removing the processor socket cover

 

9.        Install the retaining bracket and heatsink onto the server.

a.     Place the heatsink on the processor socket. Make sure the alignment triangle on the retaining bracket and the pin holes in the heatsink are aligned with the cut-off corner and guide pins of the processor socket, respectively, as shown by callout 1 in Figure 6-59.

b.    Fasten the captive screws on the heatsink in the sequence shown by callouts 2 through 5 in Figure 6-59.

 

CAUTION:

To avoid poor contact between the processor and the system board or damage to the pins in the processor socket, tighten the screws to a torque value of 1.4 Nm (12 in-lbs).

 

Figure 6-59 Attaching the retaining bracket and heatsink to the processor socket

 

10.     Install fans. For more information, see "Installing fans."

11.     Install DIMMs. For more information, see "Installing DIMMs."

12.     Install the chassis air baffle. For more information, see "Installing air baffles."

13.     Install the access panel. For more information, see "Installing the access panel."

14.     Rack-mount the server. For more information, see "Rack-mounting the server."

15.     Connect the power cord. For more information, see "Connecting the power cord."

16.     Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the processor is operating correctly. For more information, see HDM online help.

Installing DIMMs

The server supports DCPMMs and DRAM DIMMs (both LRDIMMs and RDIMMs). Compared with DRAM DIMMs, DCPMMs provide larger capacity and can protect data from loss in case of unexpected system failure.

Both DCPMMs and DRAM DIMMs are referred to as DIMMs in this document, unless otherwise stated.

Guidelines

WARNING!

The DIMMs are not hot swappable.

 

You can install a maximum of eight DIMMs for each processor, four DIMMs per memory controller. For more information, see "DIMM slots."

For a DIMM to operate at 2933 MHz, make sure the following conditions are met:

·          Use Cascade Lake processors that support 2933 MHz data rate.

·          Use DIMMs with a maximum of 2933 MHz data rate.

·          Install a maximum of one DIMM per channel.
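The three conditions above can be expressed as a simple check. This is an illustrative sketch only; the function and parameter names are assumptions, not BIOS or HDM interfaces:

```python
def runs_at_2933(cpu_supports_2933, dimm_rated_mhz, dimms_per_channel):
    """True if a DIMM can operate at 2933 MHz per the conditions above:
    a Cascade Lake processor that supports the 2933 MHz data rate, a
    DIMM rated for 2933 MHz, and no more than one DIMM per channel.
    """
    return (cpu_supports_2933
            and dimm_rated_mhz >= 2933
            and dimms_per_channel <= 1)

print(runs_at_2933(True, 2933, 1))   # True
print(runs_at_2933(True, 2933, 2))   # False: two DIMMs in one channel
```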

The supported DIMMs vary by processor model, as shown in Table 6-3.

Table 6-3 Supported DIMMs of a processor

Processor

Supported DIMMs

Skylake

Only DRAM DIMMs.

Cascade Lake

·         Only DRAM DIMMs.

·         Mixture of DCPMM and DRAM DIMMs.

Jintide-C series

Only DRAM DIMMs.

 

Guidelines for installing only DRAM DIMMs

When you install only DRAM DIMMs, follow these restrictions and guidelines:

·          Make sure all DRAM DIMMs installed on the server have the same specifications.

·          Make sure the corresponding processor is present before powering on the server.

·          For the memory mode setting to operate correctly, make sure the following installation requirements are met when you install DRAM DIMMs for a processor:

 

Memory mode

DIMM slot population rules

Independent

·         If only one processor is present, see Figure 6-60.

·         If two processors are present, see Figure 6-61.

Mirror and Partial Mirror

·         A minimum of two DIMMs for a processor.

·         These modes support only the DIMM population schemes recommended in Figure 6-60 and Figure 6-61.

·         DIMM installation scheme:

¡  If only processor 1 is present, see Figure 6-60.

¡  If two processors are present, see Figure 6-61.

Memory Rank Sparing

·         A minimum of two ranks per channel.

·         DIMM installation scheme:

¡  If only one processor is present, see Figure 6-60.

¡  If two processors are present, see Figure 6-61.

 

 

NOTE:

If the DIMM configuration does not meet the requirements for the configured memory mode, the system uses the default memory mode (Independent mode). For more information about memory modes, see the BIOS user guide for the server.

 

Figure 6-60 DIMM population schemes (one processor present)

 

Figure 6-61 DIMM population schemes (two processors present)

 

Guidelines for mixture installation of DCPMMs and DRAM DIMMs

When you install DRAM DIMMs and DCPMMs on the server, follow these restrictions and guidelines:

·          Make sure the corresponding processors are present before powering on the server.

·          Make sure all DRAM DIMMs have the same product code and all DCPMMs have the same product code.

·          As a best practice to increase memory bandwidth, install DRAM and DCPMM DIMMs in different channels.

·          A channel supports a maximum of one DCPMM.

·          As a best practice, install DCPMMs symmetrically across the two memory processing units for a processor.

·          To install both a DRAM DIMM and a DCPMM in a channel, install the DRAM DIMM in the white slot and the DCPMM in the black slot. To install only one DIMM in a channel, install the DIMM in the white slot if it is a DCPMM.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the chassis air baffle. For more information, see "Removing air baffles."

5.        Install a DIMM:

a.     Identify the location of the DIMM slot.

Figure 6-62 DIMM slot numbering

 

b.    Open the DIMM slot latches.

c.     Align the notch on the DIMM with the connector key in the DIMM slot and press the DIMM into the socket until the latches lock the DIMM in place, as shown in Figure 6-63.

To avoid damage to the DIMM, do not force the DIMM into the socket when you encounter resistance. Instead, re-align the notch with the connector key, and then re-insert the DIMM.

Figure 6-63 Installing a DIMM

 

6.        Install the chassis air baffle. For more information, see "Installing air baffles."

7.        Install the access panel. For more information, see "Installing the access panel."

8.        Rack-mount the server. For more information, see "Rack-mounting the server."

9.        Connect the power cord. For more information, see "Connecting the power cord."

10.     Power on the server. For more information, see "Powering on the server."

Verifying the installation

Use one of the following methods to verify that the memory size is correct:

·          Access the GUI or CLI of the server:

¡  In the GUI of a Windows OS, click the Start icon in the bottom left corner, enter msinfo32 in the search box, and then click the msinfo32 item.

¡  In the CLI of a Linux OS, execute the cat /proc/meminfo command.

·          Log in to HDM. For more information, see HDM online help.

·          Access the BIOS. For more information, see the BIOS user guide for the server.

If the memory size is incorrect, re-install or replace the DIMM.
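For example, if the server runs Linux, the following is a minimal CLI check (the second command is optional; dmidecode requires root privileges and might need to be installed first, and its output format varies by platform):

```shell
# Display the total memory size reported by the kernel.
grep MemTotal /proc/meminfo

# Optionally, list each populated DIMM slot with its size and speed
# (requires root; skipped silently if dmidecode is unavailable).
dmidecode -t memory 2>/dev/null | grep -E 'Locator|Size|Speed' || true
```

Compare the reported total against the sum of the installed DIMM capacities, keeping in mind that mirror, partial mirror, and rank sparing modes reduce the size visible to the OS.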

 

 

NOTE:

It is normal that the CLI or GUI of the server OS displays a smaller memory size than the actual size if the mirror, partial mirror, or memory rank sparing memory mode is enabled. In this situation, verify the memory size from HDM or the BIOS.

 

Installing and setting up a TCM or TPM

Installation and setup flowchart

Figure 6-64 TCM/TPM installation and setup flowchart

 

Installing a TCM or TPM

Guidelines

·          Do not remove an installed TCM or TPM. Once installed, the module becomes a permanent part of the system board.

·          When installing or replacing hardware, H3C service providers cannot enable the TCM or TPM or the encryption technology. For security reasons, only the customer can enable these features.

·          When replacing the system board, do not remove the TCM or TPM from the system board. H3C will provide a TCM or TPM with the spare system board for system board or module replacement.

·          Any attempt to remove an installed TCM or TPM from the system board breaks or disfigures the TCM or TPM security rivet. Upon locating a broken or disfigured rivet on an installed TCM or TPM, administrators should consider the system compromised and take appropriate measures to ensure the integrity of the system data.

·          H3C is not liable for blocked data access caused by improper use of the TCM or TPM. For more information, see the encryption technology feature documentation provided by the operating system.

Procedure

The installation procedure is the same for a TPM and a TCM. The following information uses a TPM to show the procedure.

To install a TPM:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the PCIe modules that might hinder TPM installation. For more information, see "Replacing a riser card and a PCIe module."

5.        Install the TPM:

a.     Press the TPM into the TPM connector on the system board, as shown in Figure 6-65.

Figure 6-65 Installing a TPM


 

b.    Insert the rivet pin as shown by callout 1 in Figure 6-66.

c.     Insert the security rivet into the hole in the rivet pin and press the security rivet until it is firmly seated, as shown by callout 2 in Figure 6-66.

Figure 6-66 Installing the security rivet

 

6.        Install the removed PCIe modules. For more information, see "Installing riser cards and PCIe modules."

7.        Install the access panel. For more information, see "Installing the access panel."

8.        Rack-mount the server. For more information, see "Rack-mounting the server."

9.        Connect the power cord. For more information, see "Connecting the power cord."

10.     Power on the server. For more information, see "Powering on the server."

Enabling the TCM or TPM from the BIOS

By default, the TCM and TPM are enabled for a server. For more information about configuring the TCM or TPM from the BIOS, see the BIOS user guide for the server.

You can log in to HDM to verify that the TCM or TPM is operating correctly. For more information, see HDM online help.

Configuring encryption in the operating system

For more information about this task, see the encryption technology feature documentation that came with the operating system.

The recovery key/password is generated during BitLocker setup and can be saved and printed after BitLocker is enabled. Always retain the recovery key/password when you use BitLocker. It is required to enter Recovery Mode after BitLocker detects a possible compromise of system integrity or a firmware or hardware change.

For security purposes, follow these guidelines when retaining the recovery key/password:

·          Always store the recovery key/password in multiple locations.

·          Always store copies of the recovery key/password away from the server.

·          Do not save the recovery key/password on the encrypted hard drive.

For more information about Microsoft Windows BitLocker drive encryption, visit the Microsoft website at http://technet.microsoft.com/en-us/library/cc732774.aspx.


7 Replacing hardware options

If you are replacing multiple hardware options, read their replacement procedures and identify similar steps to streamline the entire replacement procedure.

Replacing the access panel

Guidelines

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

CAUTION:

To avoid thermal damage to the server, do not operate the server for long periods with the access panel open or uninstalled.

 

If sufficient clearance is available, you can replace the access panel online without removing the server from the rack. The following procedure assumes that sufficient clearance is not available.

Removing the access panel

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel, as shown in Figure 7-1:

a.     If the locking lever on the access panel is locked, unlock it. Use a T15 Torx screwdriver to turn the screw on the lever 90 degrees anticlockwise. See callout 1 in the figure.

b.    Press the latch on the locking lever, pull the locking lever upward, and then release the latch. See callouts 2 and 3 in the figure.

The access panel will automatically slide to the rear of the server chassis.

c.     Lift the access panel to remove it. See callout 4 in the figure.

Figure 7-1 Removing the access panel

 

Installing the access panel

1.        Use a T15 Torx screwdriver to unlock the locking lever.

2.        Press the latch on the locking lever, pull the lever upward, and then release the latch.

3.        Install the access panel, as shown in Figure 7-2:

a.     Place the access panel on top of the server chassis, with the guide pin in the chassis aligned with the pin hole in the locking lever area.

b.    Close the locking lever. The access panel will automatically slide toward the server front to secure itself into place.

c.     (Optional.) Lock the locking lever. Use a T15 Torx screwdriver to turn the screw on the lever 90 degrees clockwise, as shown by callout 3.

Figure 7-2 Installing the access panel

 

4.        Rack-mount the server. For more information, see "Rack-mounting the server."

5.        Connect the power cord. For more information, see "Connecting the power cord."

6.        Power on the server. For more information, see "Powering on the server."

Replacing the security bezel

1.        Insert the key provided with the bezel into the lock on the bezel and unlock the security bezel (see callout 1 in Figure 7-3).

 

CAUTION:

To avoid damage to the lock, hold down the key while you are turning the key.

 

2.        Press the latch at the left end of the bezel, open the security bezel, and then release the latch (see callouts 2 and 3 in Figure 7-3).

3.        Pull the right edge of the security bezel out of the groove in the right chassis ear to remove the security bezel (see callout 4 in Figure 7-3).

Figure 7-3 Removing the security bezel

 

4.        Install a new security bezel. For more information, see "Installing the security bezel."

Replacing a SAS/SATA drive

Guidelines

The drives are hot swappable.

To configure RAID settings after the drive is replaced, see the storage controller user guide for the server.

Prerequisites

To replace a drive in a non-redundant RAID array, back up the data in the RAID array first.

Procedure

1.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

2.        Observe the drive LEDs to verify that the drive is not selected by the storage controller and is not performing a RAID migration or rebuilding. For more information about drive LEDs, see "Drive LEDs."

3.        Remove the drive, as shown in Figure 7-4:

¡  To remove an SSD, press the button on the drive panel to release the locking lever, and then hold the locking lever and pull the drive out of the slot.

¡  To remove an HDD, press the button on the drive panel to release the locking lever. Pull the drive 3 cm (1.18 in) out of the slot. Wait for a minimum of 30 seconds for the drive to stop rotating, and then pull the drive out of the slot.

Figure 7-4 Removing a drive


 

4.        Install a new drive. For more information, see "Installing SAS/SATA drives."

5.        Install the removed security bezel, if any. For more information, see "Installing the security bezel."

Verifying the replacement

Use one of the following methods to verify that the drive has been replaced correctly:

·          Verify the drive properties (including its capacity) by using one of the following methods:

¡  Access HDM. For more information, see HDM online help.

¡  Access the BIOS. For more information, see the BIOS user guide for the server.

¡  Access the CLI or GUI of the server.

·          Observe the drive LEDs to verify that the drive is operating correctly. For more information about drive LEDs, see "Drive LEDs."
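As a quick sketch of the CLI method on a Linux server, you can list the detected block devices and confirm that the replacement drive appears with the expected capacity and model (device names vary by system):

```shell
# List whole disks (-d) with their size and model so you can confirm
# the replacement drive is detected with the expected capacity.
lsblk -d -o NAME,SIZE,MODEL
```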

Replacing an NVMe drive

Guidelines

NVMe drives support hot insertion and managed hot removal.

To configure RAID settings after the drive is replaced, see the storage controller user guide for the server.

Procedure

1.        Identify the NVMe drive to be removed and perform managed hot removal for the drive. For more information about managed hot removal, see "Appendix C Managed hot removal of NVMe drives."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the drive, as shown in Figure 7-5:

a.     Press the button on the drive panel to release the locking lever.

b.    Hold the locking lever and pull the drive out of the slot.

Figure 7-5 Removing a drive


 

4.        Install a new drive. For more information, see "Installing NVMe drives."

5.        Install the removed security bezel, if any. For more information, see "Installing the security bezel."

Verifying the replacement

Use one of the following methods to verify that the drive has been replaced correctly:

·          Verify the drive properties (including its capacity) by using one of the following methods:

¡  Access HDM. For more information, see HDM online help.

¡  Access the BIOS. For more information, see the BIOS user guide for the server.

¡  Access the CLI or GUI of the server.

·          Observe the drive LEDs to verify that the drive is operating correctly. For more information about drive LEDs, see "Drive LEDs."

Replacing a power supply

Guidelines

The power supplies are hot swappable.

If two power supplies are installed and sufficient clearance is available, you can replace a power supply online without removing the server from the rack. The following procedure assumes that sufficient clearance is not available.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        To remove the AC power cord from an AC power supply or a 240 V high-voltage DC power supply:

a.     Press the tab to disengage the ratchet from the tie mount, slide the cable clamp outward, and then release the tab, as shown by callouts 1 and 2 in Figure 7-6.

b.    Open the cable clamp and remove the power cord out of the clamp, as shown by callouts 3 and 4 in Figure 7-6.

c.     Unplug the power cord, as shown by callout 5 in Figure 7-6.

Figure 7-6 Removing the power cord

 

4.        To remove the DC power cord from a –48 VDC power supply:

a.     Loosen the captive screws on the power cord plug, as shown in Figure 7-7.

Figure 7-7 Loosening the captive screws

 

b.    Pull the power cord plug out of the power receptacle, as shown in Figure 7-8.

Figure 7-8 Pulling out the DC power cord

 

5.        Holding the power supply by its handle and pressing the retaining latch with your thumb, pull the power supply slowly out of the slot, as shown in Figure 7-9.

Figure 7-9 Removing the power supply

 

6.        Install a new power supply. For more information, see "Installing power supplies."

 

IMPORTANT:

If the server has only one power supply, you must install the power supply in slot 2.

 

7.        Rack-mount the server. For more information, see "Rack-mounting the server."

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Use the following methods to verify that the power supply has been replaced correctly:

·          Observe the power supply LED to verify that it is steady green or flashing green. For more information about the power supply LED, see LEDs in "Rear panel."

·          Log in to HDM to verify that the power supply status is correct. For more information, see HDM online help.

Replacing air baffles

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Removing air baffles

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove air baffles:

¡  To remove the chassis air baffle, hold the air baffle and lift it out of the chassis, as shown in Figure 7-10.

Figure 7-10 Removing the chassis air baffle

 

¡  To remove the power supply air baffle, pull outward the two clips that secure the air baffle, and lift the air baffle out of the chassis, as shown in Figure 7-11.

Figure 7-11 Removing the power supply air baffle

 

Installing air baffles

1.        Install air baffles:

¡  To install the chassis air baffle, place the air baffle on top of the chassis, with the standouts at both ends of the air baffle aligned with the notches on the chassis edges, as shown in Figure 7-12.

Figure 7-12 Installing the chassis air baffle

 

¡  To install the power supply air baffle, place the air baffle in the chassis as shown in Figure 7-13. Make sure the groove in the air baffle is aligned with the system board handle, and the extended narrow side indicated by the arrow mark makes close contact with the clip on the system board. Then gently press the air baffle until it snaps into place.

Figure 7-13 Installing the power supply air baffle

 

2.        Install the access panel. For more information, see "Installing the access panel."

3.        Rack-mount the server. For more information, see "Rack-mounting the server."

4.        Connect the power cord. For more information, see "Connecting the power cord."

5.        Power on the server. For more information, see "Powering on the server."

Replacing a riser card and a PCIe module

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

The replacement procedure is the same for riser cards on PCIe riser connector 1 and PCIe riser connector 2. This procedure uses the riser card on PCIe riser connector 1 as an example.

To replace a riser card and a PCIe module:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Disconnect cables that might hinder the replacement.

5.        Lift the riser card slowly out of the chassis, as shown in Figure 7-14.

Figure 7-14 Removing the riser card on PCIe riser connector 1

 

6.        Open the retaining latch on the riser card, and then pull the PCIe module out of the slot, as shown in Figure 7-15.

Figure 7-15 Removing a PCIe module

 

7.        Install a new riser card and PCIe module. For more information, see "Installing riser cards and PCIe modules."

8.        Reconnect the removed cables.

9.        Install the access panel. For more information, see "Installing the access panel."

10.     Rack-mount the server. For more information, see "Rack-mounting the server."

11.     Connect the power cord. For more information, see "Connecting the power cord."

12.     Power on the server. For more information, see "Powering on the server."

Replacing a storage controller

Guidelines

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

To replace the storage controller with a controller of a different model, reconfigure RAID after the replacement. For more information, see the storage controller user guide for the server.

To replace the storage controller with a controller of the same model, make sure the following configurations remain the same after replacement:

·          Storage controller operating mode.

·          Storage controller firmware version.

·          BIOS boot mode.

·          First boot option in Legacy mode.

For more information, see the storage controller user guide and the BIOS user guide for the server.

Preparing for replacement

To replace the storage controller with a controller of the same model, identify the following information before the replacement:

·          Storage controller location and cabling.

·          Storage controller model, operating mode, and firmware version.

·          BIOS boot mode.

·          First boot option in Legacy mode.

To replace the storage controller with a controller of a different model, back up data in drives and then clear RAID information before the replacement.

Replacing the Mezzanine storage controller

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Disconnect all cables from the Mezzanine storage controller.

5.        Remove the riser cards that might hinder the replacement. For more information, see "Replacing a riser card and a PCIe module."

6.        Remove the flash card installed on the Mezzanine storage controller, if any. For more information, see "Replacing the power fail safeguard module for the Mezzanine storage controller."

7.        Loosen the captive screws on the Mezzanine storage controller, and then lift the storage controller to remove it, as shown in Figure 7-16.

Figure 7-16 Removing the Mezzanine storage controller

 

8.        Remove the flash card installed on the storage controller, if any. For more information, see "Replacing the power fail safeguard module for the Mezzanine storage controller."

9.        Install a new Mezzanine storage controller. For more information, see "Installing a Mezzanine storage controller and a power fail safeguard module."

10.     Install the removed riser cards. For more information, see "Installing riser cards and PCIe modules."

11.     Install the access panel. For more information, see "Installing the access panel."

12.     Rack-mount the server. For more information, see "Rack-mounting the server."

13.     Connect the power cord. For more information, see "Connecting the power cord."

14.     Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the Mezzanine storage controller is in a correct state. For more information, see HDM online help.

Replacing a standard storage controller

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Disconnect all cables from the storage controller.

5.        Remove the storage controller. For more information, see "Replacing a riser card and a PCIe module."

6.        Remove the flash card on the storage controller, if any. For more information, see "Replacing the power fail safeguard module for a standard storage controller."

7.        Install a new standard storage controller. For more information, see "Installing a standard storage controller and a power fail safeguard module."

8.        Install the access panel. For more information, see "Installing the access panel."

9.        Rack-mount the server. For more information, see "Rack-mounting the server."

10.     Connect the power cord. For more information, see "Connecting the power cord."

11.     Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the standard storage controller is in a correct state. For more information, see HDM online help.

Replacing the power fail safeguard module

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Preparing for power fail safeguard module replacement

Before you replace the power fail safeguard module, use drive LEDs to verify that none of the drives attached to the storage controller is performing RAID migration or rebuilding. You can perform a replacement only if the Present/Active LED on each drive is not flashing green, with the Fault/UID LED off.

 

CAUTION:

Server error might occur if you perform the replacement while a drive is performing RAID migration or rebuilding.

 

Replacing the power fail safeguard module for the Mezzanine storage controller

Preparing for replacement

See "Preparing for power fail safeguard module replacement."

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Disconnect the flash card on the storage controller from the supercapacitor cable.

5.        Remove the flash card on the storage controller, as shown in Figure 7-17.

Figure 7-17 Removing the flash card on the Mezzanine storage controller

 

6.        Pull the clip on the supercapacitor holder, take the supercapacitor out of the holder, and then release the clip, as shown in Figure 7-18. The removal procedure is the same for all types of supercapacitors.

 

 

NOTE:

For simplicity, the figure does not show the supercapacitor cable.

 

Figure 7-18 Removing the supercapacitor

 

7.        Lift the retaining latch at the bottom of the supercapacitor holder, slide the holder to remove it, and then release the retaining latch, as shown in Figure 7-19. The removal procedure is the same for all types of supercapacitor holders.

Figure 7-19 Removing the supercapacitor holder

 

8.        Install a new power fail safeguard module. For more information, see "Installing a Mezzanine storage controller and a power fail safeguard module."

9.        Connect the removed cables.

10.     Install the access panel. For more information, see "Installing the access panel."

11.     Rack-mount the server. For more information, see "Rack-mounting the server."

12.     Connect the power cord. For more information, see "Connecting the power cord."

13.     Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the flash card and the supercapacitor are in a correct state. For more information, see HDM online help.

Replacing the power fail safeguard module for a standard storage controller

Preparing for replacement

See "Preparing for power fail safeguard module replacement."

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Disconnect cables that might hinder the replacement.

5.        Remove the storage controller. For more information, see "Replacing a standard storage controller."

6.        Remove the flash card from the storage controller, as shown in Figure 7-20.

Figure 7-20 Removing the flash card on a standard storage controller

 

7.        Remove the supercapacitor. For more information, see "Replacing the power fail safeguard module for the Mezzanine storage controller."

8.        Install a new power fail safeguard module. For more information, see "Installing a standard storage controller and a power fail safeguard module."

9.        Connect the removed cables.

10.     Install the access panel. For more information, see "Installing the access panel."

11.     Rack-mount the server. For more information, see "Rack-mounting the server."

12.     Connect the power cord. For more information, see "Connecting the power cord."

13.     Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the flash card and the supercapacitor are in a correct state. For more information, see HDM online help.

Replacing a GPU module

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

To replace a GPU module:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the riser card that holds the GPU module. For more information, see "Replacing a riser card and a PCIe module."

5.        Remove the GPU module:

¡  If the GPU module is not connected with a power cord, open the retaining latch on the riser card, and pull the GPU module out of the slot, as shown in Figure 7-21.

The removal procedure is the same for GPU modules that do not require a power cord. This example uses the GPU-M4-1 GPU module to show the procedure.

Figure 7-21 Removing a GPU module that is not connected with a power cord

 

¡  If the GPU module is connected with a power cord, disconnect the power cord from the GPU module. Then, open the retaining latch on the riser card, pull the GPU module out of the slot, and disconnect the power cord from the riser card, as shown in Figure 7-22.

The removal procedure is the same for GPU modules that require a power cord. This example uses the GPU-M4000-1-X GPU module to show the procedure.

Figure 7-22 Removing a GPU module that is connected with a power cord

 

6.        Install a new GPU module. For more information, see "Installing GPU modules."

7.        Install the access panel. For more information, see "Installing the access panel."

8.        Rack-mount the server. For more information, see "Rack-mounting the server."

9.        Connect the power cord. For more information, see "Connecting the power cord."

10.     Power on the server. For more information, see "Powering on the server."

Verifying the installation

Log in to HDM to verify that the GPU module is operating correctly. For more information, see HDM online help.

Replacing an Ethernet adapter

Replacing an mLOM Ethernet adapter

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Disconnect cables from the Ethernet adapter.

3.        Loosen the captive screws and then pull the Ethernet adapter out of the slot, as shown in Figure 7-23.

Some mLOM Ethernet adapters have only one captive screw. This example uses an Ethernet adapter with two screws.

Figure 7-23 Removing an mLOM Ethernet adapter

 

4.        Install a new mLOM Ethernet adapter. For more information, see "Installing an mLOM Ethernet adapter."

5.        Connect cables for the mLOM Ethernet adapter.

6.        Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the mLOM Ethernet adapter is in a correct state. For more information, see HDM online help.

Replacing a PCIe Ethernet adapter

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Disconnect cables from the PCIe Ethernet adapter.

5.        Remove the PCIe Ethernet adapter. For more information, see "Replacing a riser card and a PCIe module."

6.        Install a new PCIe Ethernet adapter. For more information, see "Installing riser cards and PCIe modules."

7.        Connect cables for the PCIe Ethernet adapter.

8.        Install the access panel. For more information, see "Installing the access panel."

9.        Rack-mount the server. For more information, see "Rack-mounting the server."

10.     Connect the power cord. For more information, see "Connecting the power cord."

11.     Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the PCIe Ethernet adapter is in a correct state. For more information, see HDM online help.

Replacing an M.2 transfer module and a SATA M.2 SSD

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

The replacement procedure is the same for SATA M.2 SSDs on both sides of the M.2 transfer module.

Replacing the front M.2 transfer module and a SATA M.2 SSD

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

4.        Remove the access panel. For more information, see "Removing the access panel."

5.        Disconnect the cable that connects the SATA M.2 SSD to the system board.

6.        Remove the M.2 transfer module, as shown in Figure 7-24:

a.     Remove the screws that secure the transfer module.

b.    Lift the module to remove it.

Figure 7-24 Removing an M.2 transfer module

 

7.        Remove a SATA M.2 SSD, as shown in Figure 7-25:

a.     Remove the screw that secures the SSD on the transfer module.

b.    Tilt the SSD by the screw-side edge, and then pull the SSD out of the socket.

Figure 7-25 Removing a SATA M.2 SSD

 

8.        Install a new SATA M.2 SSD. For more information, see "Installing SATA M.2 SSDs."

9.        Install the removed security bezel. For more information, see "Installing the security bezel."

10.     Install the access panel. For more information, see "Installing the access panel."

11.     Rack-mount the server. For more information, see "Rack-mounting the server."

12.     Connect the power cord. For more information, see "Connecting the power cord."

13.     Power on the server. For more information, see "Powering on the server."

Replacing an NVMe VROC module

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

To remove the NVMe VROC module:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the chassis air baffle. For more information, see "Removing air baffles."

5.        Hold the ring part of the NVMe VROC module and pull the module out of the chassis, as shown in Figure 7-26.

Figure 7-26 Removing the NVMe VROC module

 

6.        Install a new NVMe VROC module. For more information, see "Installing the NVMe VROC module."

7.        Install the removed chassis air baffle. For more information, see "Installing air baffles."

8.        Install the access panel. For more information, see "Installing the access panel."

9.        Rack-mount the server. For more information, see "Rack-mounting the server."

10.     Power on the server. For more information, see "Powering on the server."

Replacing an SD card

Guidelines

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

CAUTION:

To avoid thermal damage to the server, do not operate the server for long periods with the access panel open or uninstalled.

 

The SD cards are hot swappable. If sufficient clearance is available, you can replace an SD card without powering off the server or removing it from the rack. The following procedure assumes that sufficient clearance is not available.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Press the SD card to release it and then pull the SD card out of the slot, as shown in Figure 7-27.

Figure 7-27 Removing an SD card

 

5.        Install a new SD card. For more information, see "Installing SD cards."

6.        Install the access panel. For more information, see "Installing the access panel."

7.        Rack-mount the server. For more information, see "Rack-mounting the server."

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Replacing the dual SD card extended module

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

To replace the dual SD card extended module:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Press the blue clip on the dual SD card extended module (as shown in Figure 7-28), pull the module out of the connector, and then release the clip.

Figure 7-28 Removing the dual SD card extended module

 

5.        Remove the SD cards installed on the extended module, as shown in Figure 7-27.

6.        Install a new dual SD card extended module and the removed SD cards. For more information, see "Installing SD cards."

7.        Install the access panel. For more information, see "Installing the access panel."

8.        Rack-mount the server. For more information, see "Rack-mounting the server."

9.        Connect the power cord. For more information, see "Connecting the power cord."

10.     Power on the server. For more information, see "Powering on the server."

Replacing an NVMe SSD expander module

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the NVMe SSD expander module:

a.     Disconnect the cables that connect the expander module to the front drive backplanes.

b.    Remove the PCIe riser card that contains the NVMe SSD expander module, and then remove the NVMe SSD expander module. For more information, see "Replacing a riser card and a PCIe module."

c.     Disconnect cables from the NVMe SSD expander module, as shown in Figure 7-29.

Figure 7-29 Disconnecting cables from an NVMe SSD expander module

 

5.        Install a new NVMe SSD expander module. For more information, see "Installing an NVMe SSD expander module."

6.        Install the access panel. For more information, see "Installing the access panel."

7.        Rack-mount the server. For more information, see "Rack-mounting the server."

8.        Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the NVMe expander module is in a correct state. For more information, see HDM online help.

Replacing a fan

Guidelines

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

CAUTION:

To avoid thermal damage to the server, do not operate the server for long periods with the access panel open or uninstalled.

 

The fans are hot swappable. If sufficient clearance is available, you can replace a fan without powering off the server or removing it from the rack. The following procedure assumes that sufficient clearance is not available.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Pull the fan out of the fan bay, as shown in Figure 7-30.

Figure 7-30 Removing a fan

 

5.        Install a new fan. For more information, see "Installing fans."

6.        Install the access panel. For more information, see "Installing the access panel."

7.        Rack-mount the server. For more information, see "Rack-mounting the server."

8.        Connect the power cord. For more information, see "Connecting the power cord."

9.        Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the fan is in a correct state. For more information, see HDM online help.

Replacing a processor

Guidelines

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

·          To avoid damage to a processor or the system board, only H3C authorized or professional server engineers can install, replace, or remove a processor.

·          Make sure the processors on the server are the same model.

·          Do not touch the pins in the processor sockets, which are very fragile and prone to damage. Install a protective cover if a processor socket is empty.

·          For the server to operate correctly, make sure processor 1 is in position. For more information about processor locations, see "System board components."

Prerequisites

To avoid ESD damage, wear an ESD wrist strap before performing this task, and make sure the wrist strap is reliably grounded.

Removing a processor

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the chassis air baffle. For more information, see "Removing air baffles."

5.        Remove the processor heatsink, as shown in Figure 7-31:

 

CAUTION:

The pins in the processor sockets are very fragile and prone to damage. Do not touch the pins.

 

a.     Loosen the captive screws in the same sequence as shown by callouts 1 to 4 in the figure.

b.    Lift the heatsink slowly to remove it.

Figure 7-31 Removing a processor heatsink

 

6.        Remove the processor retaining bracket from the heatsink, as shown in Figure 7-32:

a.     Insert a flat-head tool (such as a flat-head screwdriver) into the notch marked with TIM BREAKER to pry open the retaining bracket.

b.    Press the four clips in the four corners of the bracket to release the retaining bracket.

You must press the clip shown by callout 2 and the clip diagonally opposite to it outward, and press the other two clips inward, as shown by callout 3.

c.     Lift the retaining bracket to remove it from the heatsink.

Figure 7-32 Removing the processor retaining bracket

 

7.        Separate the processor from the retaining bracket, as shown in Figure 7-33.

Figure 7-33 Separating the processor from the retaining bracket

 

Installing a processor

1.        Install the processor onto the retaining bracket. For more information, see "Installing processors."

2.        Smear thermal grease onto the processor:

a.     Clean the processor and heatsink with isopropanol wipes. Allow the isopropanol to evaporate before you continue with the subsequent steps.

b.    Use the thermal grease injector to inject 0.6 ml of thermal grease to the five dots on the processor, 0.12 ml for each dot, as shown in Figure 7-34.

Figure 7-34 Smearing thermal grease onto the processor


 

3.        Install the retaining bracket onto the heatsink. For more information, see "Installing processors."

4.        Install the heatsink onto the server. For more information, see "Installing processors."

5.        Paste the bar code label supplied with the processor over the original processor label on the heatsink.

 

IMPORTANT:

This step is required for obtaining H3C processor servicing.

 

6.        Install the chassis air baffle. For more information, see "Installing air baffles."

7.        Install the access panel. For more information, see "Installing the access panel."

8.        Rack-mount the server. For more information, see "Rack-mounting the server."

9.        Connect the power cord. For more information, see "Connecting the power cord."

10.     Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the processor is operating correctly. For more information, see HDM online help.

Replacing a DIMM

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the chassis air baffle. For more information, see "Removing air baffles."

5.        Open the DIMM slot latches and pull the DIMM out of the slot, as shown in Figure 7-35.

Figure 7-35 Removing a DIMM

 

6.        Install a new DIMM. For more information, see "Installing DIMMs."

7.        Install the chassis air baffle. For more information, see "Installing air baffles."

8.        Install the access panel. For more information, see "Installing the access panel."

9.        Rack-mount the server. For more information, see "Rack-mounting the server."

10.     Connect the power cord. For more information, see "Connecting the power cord."

11.     Power on the server. For more information, see "Powering on the server."

During server startup, you can access the BIOS to configure the memory mode of the newly installed DIMM. For more information, see the BIOS user guide for the server.

Verifying the replacement

Use one of the following methods to verify that the memory size is correct:

·          Access the GUI or CLI of the server:

¡  In the GUI of a Windows OS, click the Start icon in the bottom left corner, enter msinfo32 in the search box, and then click the msinfo32 item.

¡  In the CLI of a Linux OS, execute the cat /proc/meminfo command.

·          Log in to HDM. For more information, see HDM online help.

·          Access the BIOS. For more information, see the BIOS user guide for the server.

If the memory size is incorrect, re-install or replace the DIMM.
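For the Linux CLI check above, the following is a minimal sketch that extracts the total memory size from /proc/meminfo and prints it in GiB. It assumes a Linux host with a standard /proc filesystem; the variable name mem_kb is illustrative.

```shell
# Read total memory as reported by the kernel (MemTotal is in kB),
# then print it in GiB. With mirror, partial mirror, or memory rank
# sparing enabled, this value is smaller than the installed capacity;
# in that case, verify the memory size from HDM or the BIOS instead.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "MemTotal: $((mem_kb / 1024 / 1024)) GiB"
```

Compare the reported size against the total capacity of the installed DIMMs, allowing for memory reserved by the kernel and firmware.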

 

 

NOTE:

It is normal for the CLI or GUI of the server OS to display a smaller memory size than the actual size when the mirror, partial mirror, or memory rank sparing mode is enabled. In this situation, verify the memory size from HDM or the BIOS.

 

Replacing the system battery

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

The server comes with a system battery (Panasonic BR2032) installed on the system board, which supplies power to the real-time clock and has a lifespan of 5 to 10 years. If the server no longer automatically displays the correct date and time, you might need to replace the battery. As a best practice, use a new Panasonic BR2032 battery to replace the old one.

 

 

NOTE:

The BIOS will restore to the default settings after the replacement. You must reconfigure the BIOS to have the desired settings, including the system date and time. For more information, see the BIOS user guide for the server.

 

Removing the system battery

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        (Optional.) Remove PCIe modules that might hinder system battery removal. For more information, see "Replacing a riser card and a PCIe module."

5.        Gently tilt the system battery to remove it from the battery holder, as shown in Figure 7-36.

Figure 7-36 Removing the system battery

 

 

NOTE:

For environmental protection purposes, dispose of the used system battery at a designated site.

 

Installing the system battery

1.        Orient the system battery with the plus-sign (+) side facing up, and place the system battery into the system battery holder.

2.        Press the system battery to seat it in the holder.

Figure 7-37 Installing the system battery

 

3.        (Optional.) Install the removed PCIe modules. For more information, see "Installing riser cards and PCIe modules."

4.        Install the access panel. For more information, see "Installing the access panel."

5.        Rack-mount the server. For more information, see "Rack-mounting the server."

6.        Connect the power cord. For more information, see "Connecting the power cord."

7.        Power on the server. For more information, see "Powering on the server."

8.        Access the BIOS to reconfigure the system date and time. For more information, see the BIOS user guide for the server.

Verifying the replacement

Verify that the system date and time are displayed correctly on HDM or the connected monitor.

Replacing the system board

Guidelines

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

To prevent electrostatic discharge, place the removed parts on an antistatic surface or in antistatic bags.

Removing the system board

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the power supplies. For more information, see "Replacing a power supply."

4.        Remove the access panel. For more information, see "Removing the access panel."

5.        Remove the air baffles. For more information, see "Removing air baffles."

6.        Disconnect all cables connected to the system board.

7.        Remove the Mezzanine storage controller, if any. For more information, see "Replacing the Mezzanine storage controller."

8.        Remove the installed rear 2SFF drive cage, if any. To remove the drive cage, loosen the captive screw, and then lift the drive cage out of the slot, as shown in Figure 7-38.

Figure 7-38 Removing the rear 2SFF drive cage

 

9.        Remove the PCIe riser cards and PCIe modules, if any. For more information, see "Replacing a riser card and a PCIe module."

10.     Remove the mLOM Ethernet adapter, if any. For more information, see "Replacing an mLOM Ethernet adapter."

11.     Remove the NVMe VROC module, if any. For more information, see "Replacing an NVMe VROC module."

12.     Remove the DIMMs. For more information, see "Replacing a DIMM."

13.     Remove the M.2 transfer module, if any. For more information, see "Replacing an M.2 transfer module and a SATA M.2 SSD."

14.     Remove the processors and heatsinks. For more information, see "Replacing a processor."

15.     Remove the system board, as shown in Figure 7-39:

a.     Loosen the two captive screws on the system board.

b.    Hold the system board by its handle and slide the system board toward the server front. Then, lift the system board to remove it from the chassis.

Figure 7-39 Removing the system board

 

Installing the system board

1.        Hold the system board by its handle and slowly place the system board in the chassis. Then, slide the system board toward the server rear until the connectors (for example, USB connectors and the Ethernet port) on it are securely seated. See callout 1 in Figure 7-40.

 

 

NOTE:

The connectors are securely seated if you cannot lift the system board by its handle.

 

2.        Fasten the two captive screws on the system board. See callout 2 in Figure 7-40.

Figure 7-40 Installing the system board

 

3.        Install the removed processors and heatsinks. For more information, see "Installing processors."

4.        Install the removed M.2 transfer module. For more information, see "Installing SATA M.2 SSDs."

5.        Install the NVMe VROC module. For more information, see "Installing the NVMe VROC module."

6.        Install the removed DIMMs. For more information, see "Installing DIMMs."

7.        Install the removed mLOM Ethernet adapter. For more information, see "Installing an mLOM Ethernet adapter."

8.        Install the removed PCIe riser cards and PCIe modules. For more information, see "Installing riser cards and PCIe modules."

9.        Install the removed Mezzanine storage controller. For more information, see "Installing a Mezzanine storage controller and a power fail safeguard module."

10.     Install the removed rear 2SFF drive cage. For more information, see "Installing the 2SFF drive cage."

11.     Connect cables to the system board.

12.     Install the air baffles. For more information, see "Installing air baffles."

13.     Install the access panel. For more information, see "Installing the access panel."

14.     Install the removed power supplies. For more information, see "Installing power supplies."

15.     Rack-mount the server. For more information, see "Rack-mounting the server."

16.     Connect the power cord. For more information, see "Connecting the power cord."

17.     Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that each part is operating correctly and no alert is generated. For more information, see HDM online help.

Replacing the drive expander module (10SFF server)

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Disconnect cables from the expander module.

5.        Loosen the captive screws that secure the expander module, and then lift the module out of the chassis, as shown in Figure 7-41.

Figure 7-41 Removing a 10SFF drive expander module

 

6.        Place a new expander module in the chassis and fasten the captive screws.

7.        Connect cables to the drive expander module.

8.        Install the access panel. For more information, see "Installing the access panel."

9.        Rack-mount the server. For more information, see "Rack-mounting the server."

10.     Connect the power cord. For more information, see "Connecting the power cord."

11.     Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the 10SFF drive expander module is in a correct state. For more information, see HDM online help.

Replacing a drive backplane

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Removing a drive backplane

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the security bezel (if any) if you are removing the front drive backplane. For more information, see "Replacing the security bezel."

5.        Remove the drives attached to the backplane. For more information, see "Replacing a SAS/SATA drive."

6.        Disconnect cables from the backplane.

7.        Remove the drive backplane:

¡  To remove the front 4LFF drive backplane, push the locking clip, slide the backplane rightward, and then pull the backplane out of the chassis, as shown in Figure 7-42.

Figure 7-42 Removing the front 4LFF drive backplane

 

¡  To remove the front 8SFF or 10SFF drive backplane, open the locking clip, slide the backplane leftward, and then pull the backplane out of the chassis, as shown in Figure 7-43.

The removal procedure is the same for the 8SFF and 10SFF drive backplanes. Figure 7-43 uses the 8SFF server as an example.

Figure 7-43 Removing the front 8SFF or 10SFF drive backplane

 

¡  To remove the front or rear 2SFF drive backplane, open the locking clip, slide the backplane rightward, and then pull the backplane out of the chassis, as shown in Figure 7-44.

The removal procedure is the same for the front 2SFF SAS/SATA drive backplane, front 2SFF NVMe drive backplane, and rear 2SFF drive backplanes. Figure 7-44 uses the rear 2SFF drive backplane as an example.

Figure 7-44 Removing the front or rear 2SFF drive backplane

 

Installing a drive backplane

1.        Install a drive backplane:

¡  To install the front 4LFF drive backplane in the 4LFF server, place the backplane in the slot. Then, slide the backplane leftward until the clip snaps into place, as shown in Figure 7-45.

Figure 7-45 Installing the front 4LFF drive backplane

 

¡  To install the front 8SFF drive backplane in the 8SFF server, place the backplane in the slot. Then, slide the backplane rightward until the clip snaps into place, as shown in Figure 7-46.

To install a front 10SFF drive backplane in the 10SFF server, use the same method as for the 8SFF drive backplane. Figure 7-46 uses the 8SFF SAS/SATA drive backplane as an example.

Figure 7-46 Installing the front 8SFF or 10SFF drive backplane

 

¡  To install the front or rear 2SFF drive backplane, place the backplane in the slot, and then slide the backplane rightward until it snaps into place, as shown in Figure 7-47.

The installation procedure is the same for the front 2SFF SAS/SATA drive backplane, front 2SFF NVMe drive backplane, and the rear 2SFF drive backplane. Figure 7-47 uses the rear 2SFF drive backplane as an example.

Figure 7-47 Installing the rear 2SFF drive backplane

 

2.        Connect cables to the drive backplane. For more information, see "Connecting drive cables."

3.        Install the removed drives. For more information, see "Installing SAS/SATA drives."

4.        Install the removed security bezel. For more information, see "Installing the security bezel."

5.        Install the access panel. For more information, see "Installing the access panel."

6.        Rack-mount the server. For more information, see "Rack-mounting the server."

7.        Connect the power cord. For more information, see "Connecting the power cord."

8.        Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the drive backplane is in a correct state. For more information, see HDM online help.

Replacing the SATA optical drive

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Replacing the SATA optical drive (4LFF server)

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

5.        Disconnect the cable from the optical drive.

6.        Lift the optical drive by its right side, and then pull the optical drive out of the chassis, as shown in Figure 7-48.

Figure 7-48 Removing the SATA optical drive from the 4LFF server

 

7.        Install a new SATA optical drive. For more information, see "Installing a SATA optical drive on the 4LFF server."

8.        Connect the optical drive cable.

9.        Install the removed security bezel. For more information, see "Installing the security bezel."

10.     Install the access panel. For more information, see "Installing the access panel."

11.     Rack-mount the server. For more information, see "Rack-mounting the server."

12.     Connect the power cord. For more information, see "Connecting the power cord."

13.     Power on the server. For more information, see "Powering on the server."

Replacing the SATA optical drive (8SFF server)

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

5.        Disconnect the cable from the optical drive.

6.        Remove the drive enablement option that holds the optical drive, as shown in Figure 7-49.

Figure 7-49 Removing the drive enablement option from the 8SFF server

 

7.        Remove the screw that secures the optical drive, and pull the optical drive out of the drive enablement option, as shown in Figure 7-50.

Figure 7-50 Removing the optical drive from the enablement option

 

8.        Install a new SATA optical drive. For more information, see "Installing a SATA optical drive on the 8SFF server."

9.        Connect the optical drive cable.

10.     Install the removed security bezel. For more information, see "Installing the security bezel."

11.     Install the access panel. For more information, see "Installing the access panel."

12.     Rack-mount the server. For more information, see "Rack-mounting the server."

13.     Connect the power cord. For more information, see "Connecting the power cord."

14.     Power on the server. For more information, see "Powering on the server."

Replacing the diagnostic panel

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

To replace the diagnostic panel:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the security bezel, if any. For more information, see "Replacing the security bezel."

3.        Remove the diagnostic panel, as shown in Figure 7-51:

a.     Press the release button on the diagnostic panel.

The diagnostic panel pops out.

b.    Hold the diagnostic panel by its front edge to pull it out of the slot.

Figure 7-51 Removing the diagnostic panel

 

4.        Install a new diagnostic panel. For more information, see "Installing a diagnostic panel."

5.        Install the removed security bezel. For more information, see "Installing the security bezel."

6.        Connect the power cord. For more information, see "Connecting the power cord."

7.        Power on the server. For more information, see "Powering on the server."

Replacing the chassis-open alarm module

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

The server supports the following types of chassis-open alarm modules:

·          Independent chassis-open alarm module.

·          Chassis-open alarm module attached to the front media module.

Removing the chassis-open alarm module

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the chassis-open alarm module, as shown in Figure 7-52:

a.     Disconnect the chassis-open alarm module cable from the chassis-open alarm module connector on the system board (see callout 1 in the figure).

b.    Open the module retaining clip and pull the chassis-open alarm module out of the chassis (see callouts 2 and 3 in the figure).

Figure 7-52 Removing the chassis-open alarm module

 

 

NOTE:

The removal procedure is the same for all types of chassis-open alarm modules. This figure uses the chassis-open alarm module attached to the front media module as an example.

 

5.        Disconnect the chassis-open alarm module cable from the front media module if the chassis-open alarm module is attached to the front media module. For more information, see "Replacing the front media module."

Installing the chassis-open alarm module

1.        Connect the chassis-open alarm signal cable to the front media module if the chassis-open alarm module is attached to the front media module.

2.        Install the chassis-open alarm module:

a.     Press the chassis-open alarm module into the slot until it snaps into place, as shown in Figure 7-53.

b.    Connect the chassis-open alarm signal cable to the chassis-open alarm module connector on the system board, as shown in Figure 7-53.

Figure 7-53 Installing the chassis-open alarm module

 

3.        Install the access panel. For more information, see "Installing the access panel."

4.        Rack-mount the server. For more information, see "Rack-mounting the server."

5.        Connect the power cord. For more information, see "Connecting the power cord."

6.        Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the chassis-open alarm module is in a correct state. For more information, see HDM online help.

Replacing the front media module

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Removing the front media module (4LFF server)

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Loosen the captive screws on the plug of the VGA and USB 2.0 cable, and then disconnect the cable from the front media module, as shown in Figure 7-54.

Figure 7-54 Disconnecting the VGA and USB 2.0 cable

 

5.        Disconnect the chassis-open alarm signal cable from the system board, and remove the chassis-open alarm module attached to the front media module. For more information, see "Removing the chassis-open alarm module."

6.        Remove the front media module, as shown in Figure 7-55.

Figure 7-55 Removing the front media module

 

7.        Install a new front media module. For more information, see "Installing the front media module (VGA and USB 2.0 connectors)."

8.        Install the access panel. For more information, see "Installing the access panel."

9.        Rack-mount the server. For more information, see "Rack-mounting the server."

10.     Connect the power cord. For more information, see "Connecting the power cord."

11.     Power on the server. For more information, see "Powering on the server."

Removing the front media module (8SFF and 10SFF servers)

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Loosen the captive screws on the plug of the VGA and USB 2.0 cable, and then disconnect the cable from the front media module.

5.        Disconnect the front I/O component cable assembly. For more information, see "Replacing the front I/O component (8SFF/10SFF server)."

6.        Disconnect the chassis-open alarm signal cable from the system board, and then remove the chassis-open alarm module attached to the front media module. For more information, see "Removing the chassis-open alarm module."

7.        Remove the front media module, as shown in Figure 7-56.

Figure 7-56 Removing the front media module

 

8.        Remove the front I/O component from the front media module. For more information, see "Replacing the front I/O component (8SFF/10SFF server)."

9.        Install a new front media module. For more information, see "Installing the front media module (VGA and USB 2.0 connectors)."

10.     Install the access panel. For more information, see "Installing the access panel."

11.     Rack-mount the server. For more information, see "Rack-mounting the server."

12.     Connect the power cord. For more information, see "Connecting the power cord."

13.     Power on the server. For more information, see "Powering on the server."

Replacing the air inlet temperature sensor

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Remove the chassis air baffle. For more information, see "Removing air baffles."

5.        Remove the air inlet temperature sensor:

¡  On the 4LFF server, disconnect the temperature sensor cable from the system board. Then, open the sensor retaining clasp and remove the temperature sensor, as shown in Figure 7-57.

Figure 7-57 Removing a temperature sensor (4LFF server)

 

¡  On the 8SFF or 10SFF server, disconnect the temperature sensor cable from the system board. Then, pull the temperature sensor out of the slot, as shown in Figure 7-58.

Figure 7-58 Removing a temperature sensor (8SFF/10SFF server)

 

6.        Install a new temperature sensor:

¡  On the 4LFF server, secure the temperature sensor with the retaining clasp on the system board. Then, connect the temperature sensor cable to the temperature sensor connector on the system board. For locations of the clasp and connector, see Figure 7-57.

¡  On the 8SFF/10SFF server, insert the temperature sensor into the temperature sensor slot. Then, connect the temperature sensor cable to the temperature sensor connector on the system board. For locations of the slot and connector, see Figure 7-58.

7.        Install the chassis air baffle. For more information, see "Installing air baffles."

8.        Install the access panel. For more information, see "Installing the access panel."

9.        Rack-mount the server. For more information, see "Rack-mounting the server."

10.     Connect the power cord. For more information, see "Connecting the power cord."

11.     Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Log in to HDM to verify that the air inlet temperature sensor is in a correct state. The air inlet temperature sensor is named Inlet Temp in HDM. For more information, see HDM online help.
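If IPMI access to HDM is enabled on your management network, the same reading can be pulled from a management host with a command such as ipmitool -I lanplus -H <HDM-IP> -U <user> sensor get "Inlet Temp" (the IP address and user are placeholders). The sketch below parses the typical key/value output of that command; the exact field labels can vary by ipmitool version, so treat them as assumptions:

```python
# Hedged sketch: parse the "key : value" lines that "ipmitool sensor get"
# typically prints. Field labels below are assumptions -- verify against
# the output of your ipmitool version.
def parse_sensor_output(text: str) -> dict:
    """Collect the colon-separated fields of an ipmitool sensor dump."""
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

# Illustrative capture of "ipmitool sensor get 'Inlet Temp'" output:
sample = """Locating sensor record...
Sensor ID              : Inlet Temp (0x5)
 Sensor Reading        : 23 (+/- 0) degrees C
 Status                : ok"""

info = parse_sensor_output(sample)
print(info["Status"])  # → ok
```

A reading whose Status field is not "ok" is worth investigating in HDM before closing the maintenance task.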

Replacing the front I/O component

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

Replacing the front I/O component (4LFF server)

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Disconnect the front I/O component cable assembly from the system board, as shown in Figure 7-59.

Figure 7-59 Disconnecting the front I/O component cable assembly

 

5.        Remove the front I/O component, as shown in Figure 7-60.

Figure 7-60 Removing the front I/O component

 

6.        Install a new front I/O component. Use a screw to secure the front I/O component on the system board. Then, connect the front I/O component cable assembly to the front I/O connector on the system board. For more information, see "Connecting the front I/O component cable assembly."

7.        Install the access panel. For more information, see "Installing the access panel."

8.        Rack-mount the server. For more information, see "Rack-mounting the server."

9.        Connect the power cord. For more information, see "Connecting the power cord."

10.     Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Power on the server and verify that the front I/O component LEDs are in a correct state. For more information about the LEDs, see "LEDs and buttons."

Replacing the front I/O component (8SFF/10SFF server)

The 8SFF and 10SFF servers support the following types of front I/O components:

·          Independent front I/O component.

·          Front I/O component attached to the front media module.

The replacement procedure is the same for all types of front I/O components. This procedure uses an independent front I/O component as an example.

Procedure

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove the access panel. For more information, see "Removing the access panel."

4.        Disconnect the front I/O component cable assembly from the system board, as shown in Figure 7-61.

Figure 7-61 Disconnecting the front I/O component cable assembly

 

5.        Remove the screw that secures the front I/O module, and then pull the front I/O module out of the slot, as shown in Figure 7-62.

Figure 7-62 Removing the front I/O module that holds the front I/O component

 

6.        Remove the front I/O component from the front I/O module, as shown in Figure 7-63.

Figure 7-63 Removing the front I/O component from the front I/O module

 

7.        Install a new front I/O component:

a.     Install the I/O component in the front I/O module.

b.    Install the front I/O module to the server.

c.     Connect the front I/O component cable assembly to the front I/O component connector on the system board. For more information, see "Connecting the front I/O component cable assembly."

8.        Install the access panel. For more information, see "Installing the access panel."

9.        Rack-mount the server. For more information, see "Rack-mounting the server."

10.     Connect the power cord. For more information, see "Connecting the power cord."

11.     Power on the server. For more information, see "Powering on the server."

Verifying the replacement

Power on the server and verify that the front I/O component LEDs are in a correct state. For more information, see "LEDs and buttons."

Replacing chassis ears

WARNING!

To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.

 

The replacement procedure is the same for the left and right chassis ears. This procedure uses the right chassis ear as an example.

To replace a chassis ear:

1.        Power off the server. For more information, see "Powering off the server."

2.        Remove the server from the rack. For more information, see "Removing the server from a rack."

3.        Remove a chassis ear:

¡  To remove a screw rack mount ear, remove the screws that secure the ear, and then pull the screw rack mount ear out of the slot, as shown in Figure 7-64.

Figure 7-64 Removing a screw rack mount ear

 

¡  To remove a multifunctional rack mount ear, remove the security bezel, if any. Then, remove the screws that secure the multifunctional rack mount ear, and pull the multifunctional rack mount ear out of the slot, as shown in Figure 7-65.

Figure 7-65 Removing a multifunctional rack mount ear

 

4.        Install a new chassis ear. Insert the chassis ear into the slot and then use screws to secure the chassis ear.

5.        Install the removed security bezel. For more information, see "Installing the security bezel."

6.        Rack-mount the server. For more information, see "Rack-mounting the server."

7.        Connect the power cord. For more information, see "Connecting the power cord."

8.        Power on the server. For more information, see "Powering on the server."

Replacing the TPM/TCM

To avoid system damage, do not remove the installed TPM/TCM.

If the installed TPM/TCM is faulty, remove the system board, and contact H3C Support for system board and TPM/TCM replacement.


8 Connecting internal cables

Properly route the internal cables and make sure they are not pinched.

Connecting drive cables

For more information about storage controller configurations, see "Drive configurations and numbering."

4LFF server

Front 4LFF SAS/SATA drive cabling

Use Table 8-1 to select the method for connecting the 4LFF SAS/SATA drive backplane to a storage controller depending on the type of the storage controller.

Table 8-1 4LFF SAS/SATA drive cabling methods

Storage controller                 Cabling method
Embedded RSTe RAID controller      See Figure 8-1.
Mezzanine storage controller       See Figure 8-2.
Standard storage controller        See Figure 8-3.

 

Figure 8-1 4LFF SATA drive connected to the embedded RSTe RAID controller

(1) Power cord

(2) AUX signal cable

(3) SATA data cable

 

Figure 8-2 4LFF SAS/SATA drive connected to the Mezzanine storage controller

(1) Power cord

(2) AUX signal cable

(3) SAS/SATA data cable

 

Figure 8-3 4LFF SAS/SATA drive connected to a standard storage controller

(1) Power cord

(2) AUX signal cable

(3) SAS/SATA data cable

 

 

NOTE:

The standard storage controller must be installed in PCIe slot 1.

 

Front 4LFF SAS/SATA and rear 2SFF SAS/SATA drive cabling

Use Table 8-2 to select the method for connecting the front 4LFF SAS/SATA drive backplane and the rear 2SFF SAS/SATA drive backplane to a storage controller depending on the type of the storage controller.

Table 8-2 Front 4LFF SAS/SATA and rear 2SFF SAS/SATA drive cabling methods

Storage controller                 Cabling method
Embedded RSTe RAID controller      See Figure 8-4.
Mezzanine storage controller       See Figure 8-5.

 

Figure 8-4 Front 4LFF SATA and rear 2SFF SATA drive connected to the embedded RSTe RAID controller

(1) Power cord (front 4LFF)

(2) AUX signal cable (front 4LFF)

(3) SATA data cable

(4) Power cord (rear 2SFF)

(5) AUX signal cable (rear 2SFF)

 

 

Figure 8-5 Front 4LFF SATA and rear 2SFF SAS/SATA drive connected to the Mezzanine storage controller

(1) Power cord (front 4LFF)

(2) AUX signal cable (front 4LFF)

(3) SAS/SATA data cable

(4) Power cord (rear 2SFF)

(5) AUX signal cable (rear 2SFF)

 

 

8SFF server

Front 8SFF SAS/SATA drive cabling

Use Table 8-3 to select the method for connecting the 8SFF SAS/SATA drive backplane to a storage controller depending on the type of the storage controller.

Table 8-3 8SFF SAS/SATA drive cabling methods

Storage controller                 Cabling method
Embedded RSTe RAID controller      See Figure 8-6.
Mezzanine storage controller       See Figure 8-7.
Standard storage controller        See Figure 8-8.

 

Figure 8-6 8SFF SATA drive backplane connected to the embedded RSTe RAID controller

(1) AUX signal cable

(2) Power cord

(3) SATA data cable

 

Figure 8-7 8SFF SAS/SATA drive backplane connected to the Mezzanine storage controller

(1) AUX signal cable

(2) Power cord

(3) SAS/SATA data cable

 

Figure 8-8 8SFF SAS/SATA drive backplane connected to a standard storage controller

(1) AUX signal cable

(2) Power cord

(3) SAS/SATA data cable

 

 

NOTE:

The standard storage controller must be installed in PCIe slot 1.

 

Front 8SFF and 2SFF SAS/SATA drive cabling

Use Table 8-4 to select the method for connecting the front 8SFF and 2SFF SAS/SATA drive backplanes to a storage controller depending on the type of the storage controller.

Table 8-4 Front 8SFF and 2SFF SAS/SATA drive cabling methods

Storage controller                                       Front 8SFF drive cabling method    Front 2SFF drive cabling method
Embedded RSTe RAID controller                            See Figure 8-6.                    See Figure 8-9.
Mezzanine storage controller (front 8SFF drives) with
embedded RSTe RAID controller (front 2SFF drives)        See Figure 8-7.                    See Figure 8-9.
Standard storage controller (front 8SFF drives) with
embedded RSTe RAID controller (front 2SFF drives)        See Figure 8-8.                    See Figure 8-9.

 

Figure 8-9 Front 2SFF SAS/SATA drive backplane connected to the embedded RSTe RAID controller

(1) Power cord

(2) AUX signal cable

(3) SAS/SATA data cable

 

Front 4SFF SAS/SATA and 4SFF NVMe drive cabling

For the 4SFF NVMe drive configuration, you must install a 4-port NVMe SSD expander module. Install the expander module in PCIe slot 1 if the embedded RAID controller or Mezzanine storage controller is used, and in PCIe slot 2 if a standard storage controller is used.

Use Table 8-5 to determine the front drive cabling method depending on the type of the storage controller.

Table 8-5 Front 4SFF SAS/SATA and 4SFF NVMe drive cabling methods

Storage controller                 Cabling method
Embedded RSTe RAID controller      See Figure 8-10.
Mezzanine storage controller       See Figure 8-11.
Standard storage controller        See Figure 8-12.

 

When connecting NVMe data cables, make sure you connect the peer ports with the correct NVMe data cable. Use Table 8-6 to determine the ports to be connected and the cable to use.

Figure 8-10 Front 4SFF SAS/SATA and 4SFF NVMe drive cabling (embedded RAID controller and 4-port NVMe SSD expander module)

(1) AUX signal cable

(2) Power cord

(3) SATA data cable

(4) NVMe data cables

 

Figure 8-11 Front 4SFF SAS/SATA and 4SFF NVMe drive cabling (Mezzanine storage controller and 4-port NVMe SSD expander module)

(1) AUX signal cable

(2) Power cord

(3) SATA data cable

(4) NVMe data cables

 

Figure 8-12 Front 4SFF SAS/SATA and 4SFF NVMe drive cabling (standard storage controller and 4-port NVMe SSD expander module)

(1) AUX signal cables

(2) and (3) Power cords

(4) NVMe data cables

(5) SAS/SATA data cable

 

 

NOTE:

Install the standard storage controller in PCIe slot 1 and the NVMe SSD expander module in PCIe slot 2.

 

Table 8-6 NVMe data cable and the peer ports on the drive backplane and 4-port NVMe SSD expander module

Mark on the NVMe data cable end    Port on the drive backplane    Port on the 4-port NVMe SSD expander module
NVMe 1                             NVMe A1                        NVMe 1
NVMe 2                             NVMe A2                        NVMe 2
NVMe 3                             NVMe A3                        NVMe 3
NVMe 4                             NVMe A4                        NVMe 4
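The port pairing in Table 8-6 can be captured as a small lookup table so a cabling plan can be sanity-checked before the server is racked. This is a convenience sketch, not an H3C tool:

```python
# Cable-end mark -> (backplane port, expander port), mirroring Table 8-6.
CABLE_TO_PORTS = {
    "NVMe 1": ("NVMe A1", "NVMe 1"),
    "NVMe 2": ("NVMe A2", "NVMe 2"),
    "NVMe 3": ("NVMe A3", "NVMe 3"),
    "NVMe 4": ("NVMe A4", "NVMe 4"),
}

def check_connection(cable_mark: str, backplane_port: str, expander_port: str) -> bool:
    """Return True if the planned connection matches the table."""
    return CABLE_TO_PORTS.get(cable_mark) == (backplane_port, expander_port)

print(check_connection("NVMe 1", "NVMe A1", "NVMe 1"))  # → True
print(check_connection("NVMe 1", "NVMe A2", "NVMe 1"))  # → False
```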

 

Front 8SFF NVMe drive cabling

For the 8SFF NVMe drive configuration, you must install two 4-port NVMe SSD expander modules in PCIe slots 1 and 2 or one 8-port NVMe SSD expander module in PCIe slot 1.

Use Table 8-7 to determine the front drive cabling method depending on the type of the NVMe SSD expander module.

Table 8-7 8SFF NVMe drive cabling methods

NVMe SSD expander module                Cabling method
One 8-port NVMe SSD expander module     See Figure 8-13.
Two 4-port NVMe SSD expander modules    See Figure 8-14.

 

When connecting NVMe data cables, make sure you connect the peer ports with the correct NVMe data cable. For 4-port and 8-port NVMe SSD expander modules, use Table 8-8 and Table 8-9, respectively, to determine the ports to be connected and the cable to use.

Figure 8-13 8SFF NVMe drive cabling (one 8-port NVMe SSD expander module)

(1) AUX signal cable

(2) Power cord

(3) NVMe data cables

 

Figure 8-14 8SFF NVMe drive cabling (two 4-port NVMe SSD expander modules)

(1) AUX signal cable

(2) Power cord

(3) NVMe data cables

 

Table 8-8 NVMe data cable and the peer ports on the drive backplane and 4-port NVMe SSD expander modules

Mark on the NVMe data cable end    Port on the drive backplane    Port on the 4-port NVMe SSD expander modules
NVMe 1                             NVMe A1                        NVMe 1 (NVMe SSD expander module in PCIe slot 1)
NVMe 2                             NVMe A2                        NVMe 2 (NVMe SSD expander module in PCIe slot 1)
NVMe 3                             NVMe A3                        NVMe 3 (NVMe SSD expander module in PCIe slot 1)
NVMe 4                             NVMe A4                        NVMe 4 (NVMe SSD expander module in PCIe slot 1)
NVMe 1                             NVMe B1                        NVMe 1 (NVMe SSD expander module in PCIe slot 2)
NVMe 2                             NVMe B2                        NVMe 2 (NVMe SSD expander module in PCIe slot 2)
NVMe 3                             NVMe B3                        NVMe 3 (NVMe SSD expander module in PCIe slot 2)
NVMe 4                             NVMe B4                        NVMe 4 (NVMe SSD expander module in PCIe slot 2)

 

Table 8-9 NVMe data cable and the corresponding peer ports on the drive backplane and 8-port NVMe SSD expander module

Mark on the NVMe data cable end    Port on the drive backplane    Port on the 8-port NVMe SSD expander module
NVMe 1                             NVMe A1                        NVMe 1
NVMe 2                             NVMe A2                        NVMe 2
NVMe 3                             NVMe A3                        NVMe 3
NVMe 4                             NVMe A4                        NVMe 4
NVMe 1                             NVMe B1                        NVMe 1
NVMe 2                             NVMe B2                        NVMe 2
NVMe 3                             NVMe B3                        NVMe 3
NVMe 4                             NVMe B4                        NVMe 4

 

Front 8SFF NVMe and rear 2SFF SAS/SATA drive cabling

For the 8SFF NVMe drive configuration, you must install two 4-port NVMe SSD expander modules in PCIe slots 1 and 2 or an 8-port NVMe SSD expander module in PCIe slot 1.

Use Table 8-10 to determine the front drive cabling method depending on the type of the NVMe SSD expander module.

Table 8-10 Front 8SFF NVMe and rear 2SFF SAS/SATA drive cabling methods

Storage controller                                            Front 8SFF drive cabling method    Rear 2SFF drive cabling method
One 8-port NVMe SSD expander module (front NVMe drives)
with embedded RSTe RAID controller (rear SAS/SATA drives)     See Figure 8-13.                   See Figure 8-9.
Two 4-port NVMe SSD expander modules (front NVMe drives)
with embedded RSTe RAID controller (rear SAS/SATA drives)     See Figure 8-14.                   See Figure 8-9.

 

When connecting NVMe data cables, make sure you connect the peer ports with the correct NVMe data cable. For 4-port and 8-port NVMe SSD expander modules, use Table 8-8 and Table 8-9, respectively, to determine the ports to be connected and the cable to use.

Front 8SFF SAS/SATA and front 2SFF NVMe drive cabling

For the 2SFF NVMe drive configuration, you must install a 4-port NVMe SSD expander module in PCIe slot 1.

Use Table 8-11 to determine the front drive cabling method depending on the storage controller type.

Table 8-11 Front 8SFF SAS/SATA and front 2SFF NVMe drive cabling methods

Storage controller                            Front 8SFF drive cabling method    Front 2SFF drive cabling method
Embedded RSTe RAID controller                 See Figure 8-6.                    See Figure 8-15.
Mezzanine storage controller                  See Figure 8-7.                    See Figure 8-15.
Standard storage controller (in PCIe slot 2)  See Figure 8-8.                    See Figure 8-15.

 

Figure 8-15 Front 2SFF NVMe drive connected to the 4-port NVMe SSD expander module

 

When connecting NVMe data cables, make sure you connect the peer ports with the correct NVMe data cable, as shown in Table 8-12.

Table 8-12 NVMe data cable and the peer ports on the drive backplane and 4-port NVMe SSD expander modules

Mark on the NVMe data cable end    Port on the drive backplane    Port on the 4-port NVMe SSD expander module
NVMe 1                             NVMe A1                        NVMe 1
NVMe 2                             NVMe A2                        NVMe 2

 

10SFF server

Front 10SFF SAS/SATA drive cabling

Use Table 8-13 to select the method for connecting the 10SFF drive backplane to a storage controller depending on the type of the storage controller.

Table 8-13 10SFF drive cabling methods

Storage controller               Cabling method
Mezzanine storage controller     See Figure 8-16.
Standard storage controller      See Figure 8-17.

 

Figure 8-16 10SFF SAS/SATA drive connected to the Mezzanine storage controller

(1) AUX signal cable

(2) Power cord

(3) SAS/SATA data cable

 

Figure 8-17 10SFF SAS/SATA drive connected to a standard storage controller

(1) AUX signal cable

(2) Power cord

(3) SAS/SATA data cable

 

 

NOTE:

Install the standard storage controller in PCIe slot 1 as shown in Figure 8-17.

 

Front 10SFF and rear 2SFF SAS/SATA drive cabling

Figure 8-18 Front 10SFF and rear 2SFF SAS/SATA drive connected to the Mezzanine storage controller

(1) AUX signal cable (front 10SFF)

(2) Power cord (front 10SFF)

(3) SAS/SATA data cable to the Mezzanine storage controller

(4) Rear 2SFF drive SAS/SATA data cable to the drive expander module

(5) Power cord (rear 2SFF)

(6) AUX signal cable (rear 2SFF)

 

Connecting the flash card and supercapacitor of the power fail safeguard module

The flash card of the power fail safeguard module can be installed on a Mezzanine storage controller or on a standard storage controller. Choose the connection procedure depending on the location of the flash card.

Connecting the flash card on the Mezzanine storage controller

Connect the flash card on the Mezzanine storage controller to the supercapacitor as shown in Figure 8-19.

Figure 8-19 Connecting the flash card on the Mezzanine storage controller

(1) Supercapacitor extension cable

(2) Supercapacitor cable

 

Connecting the flash card on a standard storage controller

Use Table 8-14 to determine the cabling method depending on the location of the standard storage controller.

Table 8-14 Flash card cabling methods

Standard storage controller location    Cabling method
PCIe slot 1                             See Figure 8-20.
PCIe slot 2                             See Figure 8-21.

 

Figure 8-20 Connecting the flash card (standard storage controller in PCIe slot 1)

(1) Supercapacitor cable

(2) Supercapacitor extension cable

 

Figure 8-21 Connecting the flash card (standard storage controller in PCIe slot 2)

(1) Supercapacitor extension cable

(2) Supercapacitor cable

 

Connecting the power cord of a GPU module

Only the GPU-M4000-1-X GPU module requires a power cord.

Connect the power cord of a GPU module as shown in Figure 8-22.

Figure 8-22 Connecting the power cord of a GPU module

 

Connecting the SATA M.2 SSD cable

The SATA M.2 SSD cabling method depends on the number of SATA M.2 SSDs to be installed.

·          If you are installing only one SATA M.2 SSD, connect the cable as shown in Figure 8-23.

·          If you are installing two SATA M.2 SSDs, connect the cables as shown in Figure 8-24.

Figure 8-23 Connecting the front SATA M.2 SSD cable (one SATA M.2 SSD)

 

Figure 8-24 Connecting the front SATA M.2 SSD cables (two SATA M.2 SSDs)

 

Connecting the SATA optical drive cable

The SATA optical drive cabling method depends on the server model.

·          On a 4LFF server, connect the cable as shown in Figure 8-25.

·          On an 8SFF server, connect the cable as shown in Figure 8-26.

Figure 8-25 Connecting the SATA optical drive cable (4LFF server)

 

Figure 8-26 Connecting the SATA optical drive cable (8SFF server)

 

Connecting the front I/O component cable assembly

The front I/O component cabling method depends on the server model.

·          On a 4LFF server, connect the cable assembly as shown in Figure 8-27.

·          On an 8SFF or 10SFF server, connect the cable assembly as shown in Figure 8-28.

Figure 8-27 Connecting the front I/O component cable assembly (4LFF server)

 

Figure 8-28 Connecting the front I/O component cable assembly (8SFF/10SFF server)

 

Connecting the front media module cable

The front media module cabling method depends on the server model.

·          On a 4LFF server, connect the cable as shown in Figure 8-29.

·          On an 8SFF or 10SFF server, connect the cable as shown in Figure 8-30.

Figure 8-29 Connecting the front media module cable (4LFF server)

 

Figure 8-30 Connecting the front media module cable (8SFF/10SFF server)

 

Connecting the NCSI cable for a PCIe Ethernet adapter

The cabling method is the same for PCIe Ethernet adapters in any PCIe slot. Figure 8-31 uses slot 1 to show the cabling method.

Figure 8-31 Connecting the NCSI cable for a PCIe Ethernet adapter

 


9 Maintenance

The following information describes the guidelines and tasks for daily server maintenance.

Guidelines

·          Keep the equipment room clean and tidy. Remove unnecessary devices and objects from the equipment room.

·          Make sure the temperature and humidity in the equipment room meet the server operating requirements.

·          Regularly check the server from HDM for operating health issues.

·          Keep the operating system and software up to date as required.

·          Make a reliable backup plan:

¡  Back up data regularly.

¡  If data operations on the server are frequent, back up data as needed in shorter intervals than the regular backup interval.

¡  Check the backup data regularly for data corruption.

·          Stock spare components on site in case replacements are needed. After a spare component is used, prepare a new one.

·          Keep the network topology up to date to facilitate network troubleshooting.
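The backup guideline above can be reduced to a simple overdue check that shortens the interval when the server sees frequent data operations. The one-day and seven-day intervals below are illustrative placeholders, not values from this guide:

```python
from datetime import datetime, timedelta

# Sketch: a server with frequent data operations (busy=True) gets a
# shorter backup interval than the regular one. Both intervals are
# illustrative assumptions -- set them to match your backup plan.
def backup_overdue(last_backup: datetime, busy: bool, now: datetime) -> bool:
    interval = timedelta(days=1) if busy else timedelta(days=7)
    return now - last_backup > interval

now = datetime(2024, 1, 10)
print(backup_overdue(datetime(2024, 1, 5), busy=True, now=now))   # → True
print(backup_overdue(datetime(2024, 1, 5), busy=False, now=now))  # → False
```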

Maintenance tools

The following are major tools for server maintenance:

·          Hygrothermograph—Monitor the operating environment of the server.

·          HDM and FIST—Monitor the operating status of the server.
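HDM can also be polled programmatically. As a minimal sketch, assuming HDM exposes a Redfish-style REST interface (verify the exact resource paths and field names in the HDM online help before relying on them), the overall health of a chassis resource could be extracted from a response body like this:

```python
import json

# Hedged sketch: the "Status"/"State"/"Health" fields follow the generic
# Redfish schema; whether HDM uses exactly these names is an assumption
# to confirm against the HDM online help.
def summarize_health(payload: str) -> dict:
    """Extract the overall state/health from a Redfish-like response."""
    data = json.loads(payload)
    status = data.get("Status", {})
    return {
        "state": status.get("State", "Unknown"),
        "health": status.get("Health", "Unknown"),
    }

# Illustrative response body:
sample = '{"Id": "1", "Status": {"State": "Enabled", "Health": "OK"}}'
print(summarize_health(sample))  # → {'state': 'Enabled', 'health': 'OK'}
```

A periodic poll of this kind complements, but does not replace, the regular health checks in the HDM web interface.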

Maintenance tasks

Observing LED status

Observe the LED status on the front and rear panels of the server to verify that the server modules are operating correctly. For more information about the status of the front and rear panel LEDs, see front panel and rear panel in "Appendix A  Server specifications."

Monitoring the temperature and humidity in the equipment room

Use a hygrothermograph to monitor the temperature and humidity in the equipment room.

The temperature and humidity in the equipment room must meet the server requirements described in "Appendix A  Server specifications."
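Hygrothermograph readings can also be checked against the operating limits in software. The numeric bounds below are placeholders, not the actual limits; substitute the ranges from "Appendix A  Server specifications" for your model:

```python
# Sketch: flag readings outside the operating envelope. The default
# 5-45 degrees C and 8-90% RH bounds are placeholder assumptions -- use the
# real limits from the server specifications.
def within_limits(temp_c: float, humidity_pct: float,
                  temp_range=(5, 45), hum_range=(8, 90)) -> bool:
    return (temp_range[0] <= temp_c <= temp_range[1]
            and hum_range[0] <= humidity_pct <= hum_range[1])

print(within_limits(25, 50))  # → True
print(within_limits(60, 50))  # → False
```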

Examining cable connections

Verify that the cables and power cords are correctly connected.

Guidelines

·          Do not use excessive force when connecting or disconnecting cables.

·          Do not twist or stretch the cables.

·          Organize the cables appropriately. For more information, see "Cabling guidelines."

Checklist

·          The cable type is correct.

·          The cables are correctly and firmly connected and the cable length is appropriate.

·          The cables are in good condition and are not twisted or corroded at the connection point.

Technical support

If you encounter any complicated problems during daily maintenance or troubleshooting, contact H3C Support.

Before contacting H3C Support, collect the following server information to facilitate troubleshooting:

·          Log and sensor information:

¡  Log information:

-      Event logs, HDM logs, and SDS logs in HDM.

-      Logs in iFIST.

¡  Sensor information in HDM.

·          Product serial number.

·          Product model and name.

·          Snapshots of error messages and descriptions.

·          Hardware change history, including installation, replacement, insertion, and removal of hardware.

·          Third-party software installed on the server.

·          Operating system type and version.


10 Appendix A  Server specifications

The information in this document might differ from your product if it contains custom configuration options or features.

Server models and chassis view

H3C UniServer R2700 G3 servers are 1U rack servers with two Intel Purley or Jintide-C series processors. They are suitable for cloud computing, IDC, and enterprise networks built on new-generation infrastructure.

Figure 10-1 Chassis view

 

The servers come in the models listed in Table 10-1. These models support different drive configurations. For information about the compatible drive and storage controller configurations, see "Drive configurations and numbering."

Table 10-1 R2700 G3 server models

Model

Maximum drive configuration

4LFF

4 front LFF SAS/SATA drives and 2 rear SFF SAS/SATA drives.

8SFF

·         8 front SFF NVMe drives and 2 front SFF SAS/SATA drives.

·         8 front SFF SAS/SATA drives and 2 front SFF SAS/SATA drives.

·         8 front SFF SAS/SATA drives and 2 front SFF NVMe drives.

·         4 front SFF SAS/SATA drives and 4 front SFF NVMe drives.

10SFF

10 front SFF SAS/SATA drives and 2 rear SFF SAS/SATA drives.

 

Technical specifications

Item

4LFF

8SFF

10SFF

Dimensions (H × W × D)

·         Without a security bezel: 42.9 × 434.6 × 780 mm (1.69 × 17.11 × 30.71 in)

·         With a security bezel: 42.9 × 434.6 × 803 mm (1.69 × 17.11 × 31.61 in)

Max. weight

·         4LFF: 19.1 kg (42.11 lb)

·         8SFF: 19.45 kg (42.88 lb)

·         10SFF: 20.95 kg (46.19 lb)

Processors

2 × Intel Purley or Jintide-C series processors

(Up to 3.6 GHz base frequency, maximum 125 W power consumption, and 27.5 MB cache per processor)

Memory

512 GB (maximum)

16 × DDR4 DIMMs (8, 16, or 32 GB per DIMM)

Storage controllers

See "Storage controllers."

Chipset

Intel C621 Lewisburg chipset

Network connection

·         1 × onboard 1 Gbps HDM dedicated network port

·         1 × mLOM Ethernet adapter connector

I/O connectors

·         6 × USB connectors:

¡  4 × USB 3.0 connectors (two on the system board and two at the server rear)

¡  2 × USB 2.0 connectors (available with the front media module)

·         14 × SATA connectors in total:

¡  1 × onboard mini-SAS connector (×8 SATA connectors)

¡  1 × onboard mini-SAS connector (×4 SATA connectors)

¡  2 × onboard ×1 SATA connectors

·         1 × RJ-45 HDM dedicated port at the server rear

·         2 × VGA connectors (one at the server rear and one at the server front)

·         1 × BIOS serial port at the server rear

Expansion slots

4 × PCIe 3.0 connectors, including two standard connectors, one Mezzanine storage controller connector, and one Ethernet adapter connector

Optical drives

·         4LFF: External USB optical drives; internal SATA optical drive

·         8SFF: External USB optical drives; internal SATA optical drive

·         10SFF: External USB optical drives

Power supplies

2 × hot-swappable power supplies in redundancy

Options:

·         550 W Platinum

·         550 W high-efficiency Platinum

·         800 W Platinum

·         800 W –48 VDC

·         800 W 336 V high-voltage DC

·         850 W high-efficiency Platinum

·         850 W Titanium

·         1200 W Platinum

Standards

CCC

CECP

SEPA

 

Components

Figure 10-2 R2700 G3 server components

 

Table 10-2 R2700 G3 server components

Item

Description

(1) Access panel

N/A

(2) Chassis-open alarm module

Generates a chassis open alarm every time the access panel is removed. The alarms can be displayed from the HDM Web interface.

(3) Chassis air baffle

Provides ventilation aisles for airflows in the chassis.

(4) NVMe VROC module

Works with VMD to provide RAID capability for the server to virtualize NVMe drives.

(5) Storage controller

Provides RAID capability for the server to virtualize storage resources of SAS/SATA drives. It supports RAID configuration, RAID capability expansion, configuration remembering, online upgrade, and remote configuration.

(6) Dual SD card extended module

Provides SD card slots.

(7) SATA M.2 SSD

Provides storage media for the server.

(8) M.2 transfer module

Expands the server with a maximum of two SATA M.2 SSDs.

(9) Riser card

Installed on the system board to provide additional slots for PCIe modules.

(10) Drive cage

Encloses drives.

(11) System battery

Supplies power to the system clock.

(12) Power supply

Supplies power to the server. It supports hot swapping and 1+1 redundancy.

(13) Riser blank

Installed on an empty riser card connector to ensure good ventilation.

(14) mLOM Ethernet adapter

Installed on the mLOM Ethernet adapter connector of the system board for network expansion.

(15) Chassis ears

Attach the server to the rack.

(16) Front media module

Provides one VGA connector and two USB 2.0 connectors.

(17) Diagnostic panel

Displays information about faulty components for quick diagnosis.

(18) Drive

Drive for data storage, which is hot swappable.

(19) Drive expander module

Provides additional data channels for both front and rear drives.

(20) Drive backplane

Provides power and data channels for drives.

(21) Fan blank

Installed in an empty fan bay to ensure good ventilation.

(22) Fan

Supports hot swapping and N+1 redundancy.

(23) Supercapacitor holder

Secures a supercapacitor in the chassis.

(24) Supercapacitor

Supplies power to the flash card of the power fail safeguard module, which enables the storage controller to back up data to the flash card for protection when a power outage occurs.

(25) Memory

Stores computing data and data exchanged with external storage.

(26) System board

The main printed circuit board that accommodates and interconnects various server components, including processors, memory, BIOS chip, HDM chip, PCIe connectors, and fans.

(27) Processor

Integrates a memory processing unit and a PCIe controller to provide data processing capabilities for the server.

(28) Processor retaining bracket

Attaches a processor to the heatsink.

(29) Processor heatsink

Cools the processor.

 

Front panel

Front panel view

Figure 10-3, Figure 10-4, and Figure 10-5 show the front panel views of 4LFF, 8SFF, and 10SFF servers, respectively.

Figure 10-3 4LFF front panel

(1) Serial label pull tab module

(2) Optical drive (optional)

(3) Front media module (provides one VGA connector and two USB 2.0 connectors) (optional)

(4) Drive or diagnostic panel (optional)

(5) LFF drives

 

Figure 10-4 8SFF front panel

(1) Serial label pull tab module

(2) Optical drive or 2SFF drives (optional)

(3) Front media module (provides one VGA connector and two USB 2.0 connectors) (optional)

(4) Drive or diagnostic panel (optional)

(5) SFF drives

 

Figure 10-5 10SFF front panel

(1) Serial label pull tab module

(2) Front media module (provides one VGA connector and two USB 2.0 connectors) (optional)

(3) Drive or diagnostic panel (optional)

(4) SFF drives

 

LEDs and buttons

The LEDs and buttons are the same on all server models. Figure 10-6, Figure 10-7, and Figure 10-8 show the front panel LEDs and buttons of the 4LFF, 8SFF, and 10SFF servers, respectively. Table 10-3 describes the status of the front panel LEDs.

Figure 10-6 4LFF front panel LEDs and buttons

(1) UID button LED

(2) Health LED

(3) mLOM Ethernet adapter Ethernet port LED

(4) Power on/standby button and system power LED

 

Figure 10-7 8SFF front panel LEDs and buttons

(1) UID button LED

(2) Health LED

(3) mLOM Ethernet adapter Ethernet port LED

(4) Power on/standby button and system power LED

 

Figure 10-8 10SFF front panel LEDs and buttons

(1) UID button LED

(2) Health LED

(3) mLOM Ethernet adapter Ethernet port LED

(4) Power on/standby button and system power LED

 

Table 10-3 LEDs and buttons on the front panel

Button/LED

Status

UID button LED

·         Steady blue—UID LED is activated. The UID LED can be activated by using the following methods:

¡  Press the UID button LED.

¡  Activate the UID LED from HDM.

·         Flashing blue:

¡  1 Hz—The firmware is being upgraded or the system is being managed from HDM.

¡  4 Hz—HDM is restarting. To restart HDM, press the UID button LED for eight seconds.

·         Off—UID LED is not activated.

Health LED

·         Steady green—The system is operating correctly.

·         Flashing green (4 Hz)—HDM is initializing.

·         Flashing amber (0.5 Hz)—A predictive alarm has occurred.

·         Flashing amber (1 Hz)—A general alarm has occurred.

·         Flashing red (1 Hz)—A severe alarm has occurred.

If a system alarm is present, log in to HDM to obtain more information about the system running status.

mLOM Ethernet adapter Ethernet port LED

·         Steady green—A link is present on the port.

·         Flashing green (1 Hz)—The port is receiving or sending data.

·         Off—No link is present on the port.

Power on/standby button and system power LED

·         Steady green—The system has started.

·         Flashing green (1 Hz)—The system is starting.

·         Steady amber—The system is in Standby state.

·         Off—No power is present. Possible reasons:

¡  No power source is connected.

¡  No power supplies are present.

¡  The installed power supplies are faulty.

¡  The system power cords are not connected correctly.

 

Ports

The server does not provide fixed USB 2.0 or VGA connectors on its front panel. However, you can install a front media module if a USB 2.0 or VGA connection is needed. For more information about USB 2.0 and VGA connectors, see Table 10-4. For detailed port locations, see "Front panel view."

Table 10-4 Optional ports on the front panel

Port

Type

Description

VGA connector

DB-15

Connects a display terminal, such as a monitor or KVM device.

USB connector

USB 2.0

Connects the following devices:

·         USB flash drive.

·         USB keyboard or mouse.

 

Rear panel

Rear panel view

Figure 10-9 shows the rear panel view.

Figure 10-9 Rear panel components

(1) PCIe slot 1 (processor 1)

(2) PCIe slot 2 (processor 2)

(3) Power supply 2

(4) Power supply 1

(5) BIOS serial port

(6) VGA connector

(7) USB 3.0 connectors

(8) HDM dedicated network port (1 Gbps, RJ-45, default IP address 192.168.1.2/24)

(9) mLOM Ethernet adapter (optional)
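The HDM dedicated network port ships with the default IP address 192.168.1.2/24, so a management workstation must sit in the same subnet before the HDM Web interface is reachable. The following minimal sketch checks that condition with Python's standard ipaddress module; the workstation addresses shown are hypothetical examples, not values from this guide:

```python
import ipaddress

# Factory-default network of the HDM dedicated network port (192.168.1.2/24).
HDM_NETWORK = ipaddress.ip_network("192.168.1.0/24")

def can_reach_hdm(workstation_ip: str) -> bool:
    """Return True if the workstation is on the HDM port's default subnet."""
    return ipaddress.ip_address(workstation_ip) in HDM_NETWORK

print(can_reach_hdm("192.168.1.10"))   # same /24 subnet -> True
print(can_reach_hdm("10.0.0.5"))       # different subnet -> False
```

After the first login, you would typically assign HDM a site-specific address from its Web interface and repeat this check against the new subnet.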

 

 

LEDs

Figure 10-10 shows the rear panel LEDs. Table 10-5 describes the status of the rear panel LEDs.

Figure 10-10 Rear panel LEDs

(1) Link LED of the Ethernet port

(2) Activity LED of the Ethernet port

(3) UID LED

(4) Power supply 1 LED

(5) Power supply 2 LED

 

 

Table 10-5 LEDs on the rear panel

LED

Status

Link LED of the Ethernet port

·         Steady green—A link is present on the port.

·         Off—No link is present on the port.

Activity LED of the Ethernet port

·         Flashing green (1 Hz)—The port is receiving or sending data.

·         Off—The port is not receiving or sending data.

UID LED

·         Steady blue—UID LED is activated. The UID LED can be activated by using the following methods:

¡  Press the UID button LED.

¡  Enable UID LED from HDM.

·         Flashing blue:

¡  1 Hz—The firmware is being updated or the system is being managed by HDM.

¡  4 Hz—HDM is restarting. To restart HDM, press the UID button LED for eight seconds.

·         Off—UID LED is not activated.

Power supply LED

·         Steady green—The power supply is operating correctly.

·         Flashing green (1 Hz)—Power is being input correctly but the system is not powered on.

·         Flashing green (0.33 Hz)—The power supply is in standby state and does not output power.

·         Flashing green (2 Hz)—The power supply is updating its firmware.

·         Steady amber—Either of the following conditions exists:

¡  The power supply is faulty.

¡  The power supply does not have power input, but the other power supply has correct power input.

·         Flashing amber (1 Hz)—An alarm has occurred on the power supply.

·         Off—No power supplies have power input, which can be caused by an incorrect power cord connection or power source shutdown.

 

Ports

For detailed port locations, see "Rear panel view."

Table 10-6 Ports on the rear panel

Port

Type

Description

BIOS serial port

DB-9

The BIOS serial port is used for the following purposes:

·         Log in to the server when the remote network connection to the server has failed.

·         Establish a GSM modem or encryption lock connection.

VGA connector

DB-15

Connects a display terminal, such as a monitor or KVM device.

USB connector

USB 3.0

Connects the following devices:

·         USB flash drive.

·         USB keyboard or mouse.

·         USB optical drive for operating system installation.

HDM dedicated network port

RJ-45

Establishes a network connection to manage HDM from its Web interface.

Power receptacle

Standard single-phase

Connects the power supply to the power source.

 

System board

System board components

Figure 10-11 System board components

(1) TPM/TCM connector

(2) Mezzanine storage controller connector

(3) PCIe riser connector 1 (processor 1)

(4) SATA DOM power connector 1

(5) System maintenance switch 1

(6) SATA DOM power connector 2

(7) mLOM Ethernet adapter connector

(8) Mini-SAS HD port (×4 SATA port)

(9) Ethernet adapter NCSI connector

(10) System battery

(11) Mini-SAS HD port (×8 SATA port)

(12) System maintenance switch 2

(13) System maintenance switch 3

(14) Front I/O connector

(15) Optical/SATA port 1

(16) NVMe VROC module connector

(17) Air inlet temperature sensor connector

(18) Fan bay 7

(19) Fan bay 6

(20) Fan bay 5

(21) Fan bay 4

(22) M.2 transfer module connector

(23) Fan bay 3

(24) Fan bay 2

(25) Fan bay 1

(26) Dual internal USB 3.0 connectors

(27) Drive backplane AUX connector 2

(28) Shared connector for the front chassis-open alarm module and the VGA and USB 2.0 cable

(29) Drive backplane power connector 1

(30) Drive backplane AUX connector 1

(31) Drive backplane power connector 2

(32) SATA port 0

(33) PCIe riser connector 2 (processor 2)

(34) Dual SD card extended module connector

 

System maintenance switches

Use the system maintenance switches if you forget the HDM username, HDM password, or BIOS password, or need to restore the default BIOS settings, as described in Table 10-7. To identify the location of the switches on the system board, see Figure 10-11.

Table 10-7 System maintenance switches

Item

Description

Remarks

System maintenance switch 1

·         Pins 1-2 jumped (default)—HDM login requires the username and password of a valid HDM user account.

·         Pins 2-3 jumped—HDM login requires the default username and password.

As a best practice, for security purposes, jump pins 1 and 2 again after you complete tasks that require the default username and password.

System maintenance switch 2

·         Pins 1-2 jumped (default)—Normal server startup.

·         Pins 2-3 jumped—Restores the default BIOS settings.

To restore the default BIOS settings, jump pins 2 and 3 for over 30 seconds, and then jump pins 1 and 2 for normal server startup.

System maintenance switch 3

·         Pins 1-2 jumped (default)—Normal server startup.

·         Pins 2-3 jumped—Clears all passwords from the BIOS at server startup.

To clear all passwords from the BIOS, jump pins 2 and 3 and then start the server. All the passwords will be cleared from the BIOS. Before the next server startup, jump pins 1 and 2 to perform a normal server startup.

 

DIMM slots

The server provides 6 DIMM channels per processor, 12 channels in total. Channels 1 and 4 each contain one white-coded slot and one black-coded slot, and all other channels each contain only one white-coded slot, as shown in Table 10-8. In total, you can configure a maximum of eight DIMMs for each processor.
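The slot counts above determine the 512 GB memory maximum listed in the technical specifications: two processors, eight DIMM slots each, and a largest supported DIMM of 32 GB. A quick arithmetic check, using only numbers from this guide:

```python
# DIMM configuration limits taken from this guide.
PROCESSORS = 2
DIMMS_PER_PROCESSOR = 8      # channels 1 and 4 hold two DIMMs each, the rest one
MAX_DIMM_CAPACITY_GB = 32    # supported DIMM sizes: 8, 16, or 32 GB

total_slots = PROCESSORS * DIMMS_PER_PROCESSOR
max_memory_gb = total_slots * MAX_DIMM_CAPACITY_GB

print(total_slots)     # 16 DIMM slots in total
print(max_memory_gb)   # 512 GB maximum memory
```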

Table 10-8 DIMM slot numbering and color-coding scheme

Processor

DIMM slots

Processor 1

A1 through A6 (white coded)

A7 and A8 (black coded)

Processor 2

B1 through B6 (white coded)

B7 and B8 (black coded)

 

Figure 10-12 shows the physical layout of the DIMM slots on the system board. For more information about the DIMM slot population rules, see the guidelines in "Installing DIMMs."

Figure 10-12 DIMM physical layout

 


11 Appendix B  Component specifications

About component model names

The model name of a hardware option in this document might differ slightly from its model name label.

A model name label might add a prefix or suffix to the hardware-coded model name for purposes such as identifying the matching server brand or applicable region. For example, the DDR4-2666-8G-1Rx8-R memory model represents memory module labels including UN-DDR4-2666-8G-1Rx8-R, UN-DDR4-2666-8G-1Rx8-R-F, and UN-DDR4-2666-8G-1Rx8-R-S, which have different suffixes.

Processors

Intel processors

Table 11-1 Skylake processors

Model

Base frequency

Power

Number of cores

Cache (L3)

UPI links

UPI speed

Supported max. data rate of DIMMs

8156

3.6 GHz

105 W

4

16.50 MB

3

10.4 GT/s

2666 MHz

8153

2.0 GHz

125 W

16

22.00 MB

3

10.4 GT/s

2666 MHz

6138

2.0 GHz

125 W

20

27.50 MB

3

10.4 GT/s

2666 MHz

6130

2.1 GHz

125 W

16

22.00 MB

3

10.4 GT/s

2666 MHz

6128

3.4 GHz

115 W

6

19.25 MB

3

10.4 GT/s

2666 MHz

6126

2.6 GHz

125 W

12

19.25 MB

3

10.4 GT/s

2666 MHz

5122

3.6 GHz

105 W

4

16.50 MB

2

10.4 GT/s

2666 MHz

5120

2.2 GHz

105 W

14

19.25 MB

2

10.4 GT/s

2400 MHz

5118

2.3 GHz

105 W

12

16.5 MB

2

10.4 GT/s

2400 MHz

5117

2.0 GHz

105 W

14

19.25 MB

2

10.4 GT/s

2400 MHz

5115

2.4 GHz

85 W

10

13.75 MB

2

10.4 GT/s

2400 MHz

4116

2.1 GHz

85 W

12

16.5 MB

2

9.6 GT/s

2400 MHz

4114

2.2 GHz

85 W

10

13.75 MB

2

9.6 GT/s

2400 MHz

4112

2.6 GHz

85 W

4

8.25 MB

2

9.6 GT/s

2400 MHz

4110

2.1 GHz

85 W

8

11 MB

2

9.6 GT/s

2400 MHz

4108

1.8 GHz

85 W

8

11 MB

2

9.6 GT/s

2400 MHz

3106

1.7 GHz

85 W

8

11 MB

2

9.6 GT/s

2133 MHz

3104

1.7 GHz

85 W

6

8.25 MB

2

9.6 GT/s

2133 MHz

6138T

2.0 GHz

125 W

20

27.50 MB

3

10.4 GT/s

2666 MHz

6130T

2.1 GHz

125 W

16

22.00 MB

3

10.4 GT/s

2666 MHz

6126T

2.6 GHz

125 W

12

19.25 MB

3

10.4 GT/s

2666 MHz

5120T

2.2 GHz

105 W

14

19.25 MB

2

10.4 GT/s

2400 MHz

5119T

1.9 GHz

85 W

14

19.25 MB

2

10.4 GT/s

2400 MHz

4116T

2.1 GHz

85 W

12

16.5 MB

2

9.6 GT/s

2400 MHz

4114T

2.2 GHz

85 W

10

13.75 MB

2

9.6 GT/s

2400 MHz

4109T

2.0 GHz

70 W

8

11 MB

2

9.6 GT/s

2400 MHz

 

Table 11-2 Cascade Lake processors

Model

Base frequency

Power

Number of cores

Cache (L3)

UPI links

UPI speed

Supported max. data rate of DIMMs

6230

2.1 GHz

125 W

20

27.5 MB

3

10.4 GT/s

2666 MHz

5220

2.2 GHz

125 W

18

24.75 MB

2

10.4 GT/s

2666 MHz

5218

2.3 GHz

125 W

16

22 MB

2

10.4 GT/s

2933 MHz

 

Jintide-C series processors

Model

Base frequency

Power

Number of cores

Cache (L3)

UPI links

UPI speed

Supported max. data rate of DIMMs

C1640

2.1 GHz

125 W

16

22 MB

3

10.4 GT/s

2666 MHz

C1450

2.2 GHz

105 W

14

19.25 MB

2

10.4 GT/s

2400 MHz

C1230

2.3 GHz

105 W

12

16.5 MB

2

10.4 GT/s

2400 MHz

C1020

2.2 GHz

85 W

10

13.75 MB

2

9.6 GT/s

2400 MHz

C0810

2.1 GHz

85 W

8

11 MB

2

9.6 GT/s

2400 MHz

 

DIMMs

The server provides 6 DIMM channels per processor, 12 channels in total. Each DIMM channel supports a maximum of eight ranks. For the physical layout of DIMM slots, see "DIMM slots."

DRAM specifications

Product code (P/N)

Model

Type

Capacity

Data rate

Rank

0231AADX

DDR4-16G-1Rx4-R-1

RDIMM

16 GB

2400 MHz

Single-rank

0231AADY

DDR4-32G-2Rx4-R-1

RDIMM

32 GB

2400 MHz

Dual-rank

0231A6SR

DDR4-2666-8G-1Rx8-R

RDIMM

8 GB

2666 MHz

Single-rank

0231A6SP

DDR4-2666-16G-1Rx4-R

RDIMM

16 GB

2666 MHz

Single-rank

0231A6SQ

DDR4-2666-16G-2Rx8-R

RDIMM

16 GB

2666 MHz

Dual-rank

0231AADP

DDR4-2666-16G-1Rx4-R-1

RDIMM

16 GB

2666 MHz

Single-rank

0231AAEF

DDR4-2666-16G-1Rx4-R-2

RDIMM

16 GB

2666 MHz

Single-rank

0231AAEG

DDR4-2666-16G-1Rx4-R-3

RDIMM

16 GB

2666 MHz

Single-rank

0231AAEH

DDR4-2666-16G-2Rx8-R-1

RDIMM

16 GB

2666 MHz

Dual-rank

0231AAE8

DDR4-2666-16G-2Rx8-R-2

RDIMM

16 GB

2666 MHz

Dual-rank

0231AAE0

DDR4-2666-16G-2Rx8-R-3

RDIMM

16 GB

2666 MHz

Dual-rank

0231A6SS

DDR4-2666-32G-2Rx4-R

RDIMM

32 GB

2666 MHz

Dual-rank

0231AAE9

DDR4-2666-32G-2Rx4-R-1

RDIMM

32 GB

2666 MHz

Dual-rank

0231AAEJ

DDR4-2666-32G-2Rx4-R-2

RDIMM

32 GB

2666 MHz

Dual-rank

0231AAEK

DDR4-2666-32G-2Rx4-R-3

RDIMM

32 GB

2666 MHz

Dual-rank

0231A8QJ

DDR4-2666-64G-4Rx4-L

LRDIMM

64 GB

2666 MHz

Quad-rank

0231AADQ

DDR4-2666-64G-4Rx4-L-1

LRDIMM

64 GB

2666 MHz

Quad-rank

0231AADT

DDR4-2666-64G-4Rx4-L-2

LRDIMM

64 GB

2666 MHz

Quad-rank

0231AADR

DDR4-2666-64G-4Rx4-L-3

LRDIMM

64 GB

2666 MHz

Quad-rank

0231AC4S

DDR4-2933P-16G-1Rx4-R

RDIMM

16 GB

2933 MHz

Single-rank

0231AC4V

DDR4-2933P-16G-2Rx8-R

RDIMM

16 GB

2933 MHz

Dual-rank

0231AC4T

DDR4-2933P-32G-2Rx4-R

RDIMM

32 GB

2933 MHz

Dual-rank

0231AC4N

DDR4-2933P-64G-2Rx4-R

RDIMM

64 GB

2933 MHz

Dual-rank

 

DCPMM specifications

Product code

Model

Type

Capacity

Data rate

0231AC5R

AP-128G-NMA1XBD128GQSE

Apache Pass

128 GB

2666 MHz

0231AC7P

AP-256G-NMA1XBD256GQSE

Apache Pass

256 GB

2666 MHz

0231AC65

AP-512G-NMA1XBD512GQSE

Apache Pass

512 GB

2666 MHz

 

DRAM DIMM rank classification label

A DIMM rank is a set of memory chips that the system accesses while writing or reading from the memory. On a multi-rank DIMM, only one rank is accessible at a time.

To determine the rank classification of a DRAM DIMM, use the label attached to the DIMM, as shown in Figure 11-1.

Figure 11-1 DIMM rank classification label

 

Table 11-3 DIMM rank classification label description

Callout

Description

Remarks

1

Capacity

N/A

2

Number of ranks

N/A

3

Data width

·         ×4—4 bits.

·         ×8—8 bits.

4

DIMM generation

Only DDR4 is supported.

5

Data rate

·         2133P—2133 MHz.

·         2400T—2400 MHz.

·         2666V—2666 MHz.

·         2933Y—2933 MHz.

6

DIMM type

·         L—LRDIMM.

·         R—RDIMM.

 
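The six callouts in Table 11-3 can be read mechanically off a label string. The sketch below decodes a label such as 16GB 2Rx8 DDR4-2666V-R into its fields; the exact label layout is an assumption based on the table, not a reproduction of Figure 11-1:

```python
import re

# Speed-grade codes from Table 11-3 mapped to data rates (MHz).
DATA_RATES = {"2133P": 2133, "2400T": 2400, "2666V": 2666, "2933Y": 2933}
DIMM_TYPES = {"L": "LRDIMM", "R": "RDIMM"}

def decode_label(label: str) -> dict:
    """Decode a DIMM rank classification label (hypothetical string format)."""
    m = re.fullmatch(r"(\d+GB) (\d)Rx(\d) (DDR4)-(\d{4}[A-Z])-([LR])", label)
    if m is None:
        raise ValueError(f"unrecognized label: {label!r}")
    cap, ranks, width, gen, rate, typ = m.groups()
    return {
        "capacity": cap,
        "ranks": int(ranks),
        "data_width_bits": int(width),   # x4 = 4 bits, x8 = 8 bits
        "generation": gen,               # only DDR4 is supported
        "data_rate_mhz": DATA_RATES[rate],
        "type": DIMM_TYPES[typ],
    }

info = decode_label("16GB 2Rx8 DDR4-2666V-R")
print(info["ranks"], info["data_rate_mhz"], info["type"])  # 2 2666 RDIMM
```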

HDDs and SSDs

Drive specifications

SAS HDDs

Model

Form factor

Capacity

Rate

Rotating speed

HDD-300G-SAS-12G-10K-SFF-EP

SFF

300 GB

12 Gbps

10000 RPM

HDD-300G-SAS-12G-15K-SFF

SFF

300 GB

12 Gbps

15000 RPM

HDD-600G-SAS-12G-10K-SFF

SFF

600 GB

12 Gbps

10000 RPM

HDD-600G-SAS-12G-10K-SFF-1

SFF

600 GB

12 Gbps

10000 RPM

HDD-600G-SAS-12G-15K-SFF-1

SFF

600 GB

12 Gbps

15000 RPM

HDD-900G-SAS-12G-10K-SFF

SFF

900 GB

12 Gbps

10000 RPM

HDD-900G-SAS-12G-15K-SFF

SFF

900 GB

12 Gbps

15000 RPM

HDD-1.2T-SAS-12G-10K-SFF

SFF

1.2 TB

12 Gbps

10000 RPM

HDD-1.8T-SAS-12G-10K-SFF

SFF

1.8 TB

12 Gbps

10000 RPM

HDD-2.4T-SAS-12G-10K-SFF

SFF

2.4 TB

12 Gbps

10000 RPM

HDD-300G-SAS-12G-10K-LFF-EP

LFF

300 GB

12 Gbps

10000 RPM

HDD-300G-SAS-12G-15K-LFF-EP

LFF

300 GB

12 Gbps

15000 RPM

HDD-600G-SAS-12G-10K-LFF-1

LFF

600 GB

12 Gbps

10000 RPM

HDD-600G-SAS-12G-10K-LFF

LFF

600 GB

12 Gbps

10000 RPM

HDD-600G-SAS-12G-15K-LFF-1

LFF

600 GB

12 Gbps

15000 RPM

HDD-900G-SAS-12G-15K-LFF

LFF

900 GB

12 Gbps

15000 RPM

HDD-2T-SAS-12G-7.2K-LFF

LFF

2 TB

12 Gbps

7200 RPM

HDD-2.4T-SAS-12G-10K-LFF

LFF

2.4 TB

12 Gbps

10000 RPM

HDD-4T-SAS-12G-7.2K-LFF

LFF

4 TB

12 Gbps

7200 RPM

HDD-6T-SAS-12G-7.2K-LFF

LFF

6 TB

12 Gbps

7200 RPM

HDD-8T-SAS-12G-7.2K-LFF

LFF

8 TB

12 Gbps

7200 RPM

HDD-10T-SAS-12G-7.2K-LFF

LFF

10 TB

12 Gbps

7200 RPM

HDD-12T-SAS-12G-7.2K-LFF

LFF

12 TB

12 Gbps

7200 RPM

 

SATA HDDs

Model

Form factor

Capacity

Rate

Rotating speed

HDD-1T-SATA-6G-7.2K-SFF-1

SFF

1 TB

6 Gbps

7200 RPM

HDD-2T-SATA-6G-7.2K-SFF

SFF

2 TB

6 Gbps

7200 RPM

HDD-1T-SATA-6G-7.2K-LFF

LFF

1 TB

6 Gbps

7200 RPM

HDD-1T-SATA-6G-7.2K-LFF-1

LFF

1 TB

6 Gbps

7200 RPM

HDD-2T-SATA-6G-7.2K-LFF

LFF

2 TB

6 Gbps

7200 RPM

HDD-2T-SATA-6G-7.2K-LFF-1

LFF

2 TB

6 Gbps

7200 RPM

HDD-2T-SATA-6G-7.2K-LFF-4

LFF

2 TB

6 Gbps

7200 RPM

HDD-4T-SATA-6G-7.2K-LFF

LFF

4 TB

6 Gbps

7200 RPM

HDD-4T-SATA-6G-7.2K-LFF-BS

LFF

4 TB

6 Gbps

7200 RPM

HDD-4T-SATA-6G-7.2K-LFF-BH

LFF

4 TB

6 Gbps

7200 RPM

HDD-4T-SATA-6G-7.2K-LFF-1

LFF

4 TB

6 Gbps

7200 RPM

HDD-4T-SATA-6G-7.2K-LFF-2

LFF

4 TB

6 Gbps

7200 RPM

HDD-4T-SATA-6G-7.2K-LFF-3

LFF

4 TB

6 Gbps

7200 RPM

HDD-6T-SATA-6G-7.2K-LFF

LFF

6 TB

6 Gbps

7200 RPM

HDD-6T-SATA-6G-7.2K-LFF-BS

LFF

6 TB

6 Gbps

7200 RPM

HDD-6T-SATA-6G-7.2K-LFF-BH

LFF

6 TB

6 Gbps

7200 RPM

HDD-8T-SATA-6G-7.2K-LFF

LFF

8 TB

6 Gbps

7200 RPM

HDD-8T-SATA-6G-7.2K-LFF-C

LFF

8 TB

6 Gbps

7200 RPM

HDD-8T-SATA-6G-7.2K-LFF-2

LFF

8 TB

6 Gbps

7200 RPM

HDD-8T-SATA-6G-7.2K-LFF-3

LFF

8 TB

6 Gbps

7200 RPM

HDD-8T-SATA-6G-7.2K-LFF-4

LFF

8 TB

6 Gbps

7200 RPM

HDD-10T-SATA-6G-7.2K-LFF

LFF

10 TB

6 Gbps

7200 RPM

HDD-10T-SATA-6G-7.2K-LFF-1

LFF

10 TB

6 Gbps

7200 RPM

HDD-12T-SATA-6G-7.2K-LFF

LFF

12 TB

6 Gbps

7200 RPM

HDD-12T-SATA-6G-7.2K-LFF-1

LFF

12 TB

6 Gbps

7200 RPM

HDD-12T-SATA-6G-7.2K-LFF-2

LFF

12 TB

6 Gbps

7200 RPM

HDD-14T-SATA-6G-7.2K-LFF

LFF

14 TB

6 Gbps

7200 RPM

 

SATA SSDs

Model

Vendor

Form factor

Capacity

Rate

SSD-150G-SATA-6G-SFF-EV

Intel

SFF

150 GB

6 Gbps

SSD-240G-SATA-6G-SFF-EM-i

Intel

SFF

240 GB

6 Gbps

SSD-240G-SATA-6G-SFF-1-EV-i

Intel

SFF

240 GB

6 Gbps

SSD-240G-SATA-6G-SFF-i

Intel

SFF

240 GB

6 Gbps

SSD-240G-SATA-6G-SFF-1

Samsung

SFF

240 GB

6 Gbps

SSD-240G-SATA-6G-SFF-3

Micron

SFF

240 GB

6 Gbps

SSD-240G-SATA-6G-SFF-S3

Micron

SFF

240 GB

6 Gbps

SSD-240G-SATA-6G-EM-SFF-i-2

Intel

SFF

240 GB

6 Gbps

SSD-240G-SATA-6G-EV-SFF-i-1

Intel

SFF

240 GB

6 Gbps

SSD-480G-SATA-6G-SFF-2

Micron

SFF

480 GB

6 Gbps

SSD-480G-SATA-6G-SFF-EM-i

Intel

SFF

480 GB

6 Gbps

SSD-480G-SATA-6G-EV-SFF-i-2

Intel

SFF

480 GB

6 Gbps

SSD-480G-SATA-6G-EM-SFF-i-3

Intel

SFF

480 GB

6 Gbps

SSD-480G-SATA-6G-SFF-EV

Intel

SFF

480 GB

6 Gbps

SSD-480G-SATA-6G-SFF-i

Intel

SFF

480 GB

6 Gbps

SSD-480G-SATA-6G-SFF-1

Micron

SFF

480 GB

6 Gbps

SSD-480G-SATA-6G-SFF-3

Samsung

SFF

480 GB

6 Gbps

SSD-480G-SATA-6G-SFF-4

Samsung

SFF

480 GB

6 Gbps

SSD-480G-SATA-6G-SFF-5

Micron

SFF

480 GB

6 Gbps

SSD-480G-SATA-6G-EV-SFF-sa

Samsung

SFF

480 GB

6 Gbps

SSD-480G-SATA-Ny1351-SFF-6

Seagate

SFF

480 GB

6 Gbps

SSD-480G-SATA-Ny1351-SCL

Seagate

SFF

480 GB

6 Gbps

SSD-800G-SATA-6G-SFF-i-2

Intel

SFF

800 GB

6 Gbps

SSD-800G-SATA-6G-SFF-1

TOSHIBA

SFF

800 GB

6 Gbps

SSD-960G-SATA-6G-SFF-EM-i

Intel

SFF

960 GB

6 Gbps

SSD-960G-SATA-6G-SFF-2

Micron

SFF

960 GB

6 Gbps

SSD-960G-SATA-6G-SFF-3

Samsung

SFF

960 GB

6 Gbps

SSD-960G-SATA-6G-EM-SFF-m

Micron

SFF

960 GB

6 Gbps

SSD-960G-SATA-6G-EV-SFF-i

Intel

SFF

960 GB

6 Gbps

SSD-960G-SATA-6G-EM-SFF-i-2

Intel

SFF

960 GB

6 Gbps

SSD-960G-SATA-6G-SFF-4

Micron

SFF

960 GB

6 Gbps

SSD-960G-SATA-6G-SFF-6

Samsung

SFF

960 GB

6 Gbps

SSD-960G-SATA-6G-SFF-i

Intel

SFF

960 GB

6 Gbps

SSD-960G-SATA-6G-SFF-1

Samsung/Micron

SFF

960 GB

6 Gbps

SSD-960G-SATA-Ny1351-SFF-7

Seagate

SFF

960 GB

6 Gbps

SSD-960G-SATA-Ny1351-SCL

Seagate

SFF

960 GB

6 Gbps

SSD-960G-SATA-6G-PM883-SFF

Samsung

SFF

960 GB

6 Gbps

SSD-960G-SATA-PM883-SFF

Samsung

SFF

960 GB

6 Gbps

SSD-1.2T-SATA-6G-SFF-i-1

Intel

SFF

1.2 TB

6 Gbps

SSD-1.6T-SATA-6G-SFF-i-1

Intel

SFF

1.6 TB

6 Gbps

SSD-1.92T-SATA-6G-SFF

Micron

SFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-EM-SFF-i-1

Intel

SFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-SFF-1

Samsung

SFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-SFF-2

Samsung

SFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-SFF-3

Micron

SFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-EM-SFF-m

Micron

SFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-EV-SFF-i

Intel

SFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-SFF-i

Intel

SFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-SFF-EM-i

Intel

SFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-PM883-SFF

Samsung

SFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-PM883-SFF

Samsung

SFF

1.92 TB

6 Gbps

SSD-3.84T-SATA-6G-EM-SFF-i

Intel

SFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-EV-SFF-i

Intel

SFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-SFF

Micron

SFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-SFF-1

Samsung

SFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-SFF-2

Micron

SFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-SFF-3

Samsung

SFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-SFF-i

Intel

SFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-PM883-SFF

Samsung

SFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-PM883-SFF

Samsung

SFF

3.84 TB

6 Gbps

SSD-150G-SATA-6G-LFF-i

Intel

LFF

150 GB

6 Gbps

SSD-240G-SATA-6G-LFF-1-EV-i

Intel

LFF

240 GB

6 Gbps

SSD-240G-SATA-6G-LFF-i-EM

Intel

LFF

240 GB

6 Gbps

SSD-240G-SATA-6G-LFF-EV

Intel

LFF

240 GB

6 Gbps

SSD-240G-SATA-6G-LFF-1

Samsung

LFF

240 GB

6 Gbps

SSD-240G-SATA-6G-LFF-3

Micron

LFF

240 GB

6 Gbps

SSD-240G-SATA-6G-EV-SCL-i

Intel

LFF

240 GB

6 Gbps

SSD-240G-SATA-6G-EM-SCL-i-1

Intel

LFF

240 GB

6 Gbps

SSD-480G-SATA-6G-LFF

Micron

LFF

480 GB

6 Gbps

SSD-480G-SATA-6G-LFF-2

Samsung

LFF

480 GB

6 Gbps

SSD-480G-SATA-6G-LFF-4

Micron

LFF

480 GB

6 Gbps

SSD-480G-SATA-6G-LFF-5

Samsung

LFF

480 GB

6 Gbps

SSD-480G-SATA-6G-EV-SCL-i-1

Intel

LFF

480 GB

6 Gbps

SSD-480G-SATA-6G-EM-SCL-i-2

Intel

LFF

480 GB

6 Gbps

SSD-480G-SATA-6G-LFF-i-EM

Intel

LFF

480 GB

6 Gbps

SSD-480G-SATA-6G-LFF-EV

Intel

LFF

480 GB

6 Gbps

SSD-480G-SATA-6G-EV-SCL-sa

Samsung

LFF

480 GB

6 Gbps

SSD-480G-SATA-6G-LFF-i

Intel

LFF

480 GB

6 Gbps

SSD-800G-SATA-6G-LFF-i

Intel

LFF

800 GB

6 Gbps

SSD-800G-SATA-6G-LFF-B-i

Intel

LFF

800 GB

6 Gbps

SSD-800G-SATA-6G-LFF-1

TOSHIBA

LFF

800 GB

6 Gbps

SSD-960G-SATA-6G-LFF-i-EM

Intel

LFF

960 GB

6 Gbps

SSD-960G-SATA-6G-LFF-EV

Intel

LFF

960 GB

6 Gbps

SSD-960G-SATA-6G-EM-SCL-m

Micron

LFF

960 GB

6 Gbps

SSD-960G-SATA-6G-EV-SCL-i

Intel

LFF

960 GB

6 Gbps

SSD-960G-SATA-6G-EM-SCL-i

Intel

LFF

960 GB

6 Gbps

SSD-960G-SATA-6G-LFF

Samsung

LFF

960 GB

6 Gbps

SSD-960G-SATA-6G-LFF-1

Micron

LFF

960 GB

6 Gbps

SSD-960G-SATA-6G-LFF-2

Samsung

LFF

960 GB

6 Gbps

SSD-960G-SATA-6G-LFF-4

Micron

LFF

960 GB

6 Gbps

SSD-960G-SATA-6G-LFF-5

Samsung

LFF

960 GB

6 Gbps

SSD-960G-SATA-6G-PM883-SCL

Samsung

LFF

960 GB

6 Gbps

SSD-960G-SATA-PM883-SCL

Samsung

LFF

960 GB

6 Gbps

SSD-1.92T-SATA-6G-LFF

Micron

LFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-EM-SCL-i

Intel

LFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-EM-SCL-m

Micron

LFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-EV-SCL-i

Intel

LFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-LFF-i-EM

Intel

LFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-LFF-1

Samsung

LFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-LFF-2

Samsung

LFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-LFF-3

Micron

LFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-LFF-EV-2

Intel

LFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-6G-PM883-SCL

Samsung

LFF

1.92 TB

6 Gbps

SSD-1.92T-SATA-PM883-SCL

Samsung

LFF

1.92 TB

6 Gbps

SSD-3.84T-SATA-6G-LFF

Micron

LFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-LFF-1

Samsung

LFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-LFF-EV

Intel

LFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-LFF-3

Micron

LFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-EM-SCL-i

Intel

LFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-EV-SCL-i

Intel

LFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-LFF-2

Samsung

LFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-6G-PM883-SCL

Samsung

LFF

3.84 TB

6 Gbps

SSD-3.84T-SATA-PM883-SCL

Samsung

LFF

3.84 TB

6 Gbps

 

NVMe SSDs

Model

Vendor

Form factor

Capacity

Interface

Rate

SSD-375G-NVMe-SFF-i

Intel

SFF

375 GB

PCIe

8 Gbps

SSD-375G-NVMe-SFF-i-1

Intel

SFF

375 GB

PCIe

8 Gbps

SSD-450G-NVMe-SFF-i

Intel

SFF

450 GB

PCIe

8 Gbps

SSD-750G-NVMe-SFF-i

Intel

SFF

750 GB

PCIe

8 Gbps

SSD-750G-NVMe-SFF-i-1

Intel

SFF

750 GB

PCIe

8 Gbps

SSD-960G-NVMe-SFF

Samsung

SFF

960 GB

PCIe

8 Gbps

SSD-960G-NVMe-EV-SFF-sa

Samsung

SFF

960 GB

PCIe

8 Gbps

SSD-960G-NVMe-SFF-1

HGST

SFF

960 GB

PCIe

8 Gbps

SSD-1T-NVMe-SFF-i-2

Intel

SFF

1.0 TB

PCIe

8 Gbps

SSD-1T-NVMe-SFF-i

Intel

SFF

1.0 TB

PCIe

8 Gbps

SSD-1T-NVMe-SFF-i-1

Intel

SFF

1.0 TB

PCIe

8 Gbps

SSD-1.2T-NVMe-SFF-i

Intel

SFF

1.2 TB

PCIe

8 Gbps

SSD-1.6T-NVMe-EM-SFF-i

Intel

SFF

1.6 TB

PCIe

8 Gbps

SSD-1.6T-NVMe-SFF-1

HGST

SFF

1.6 TB

PCIe

8 Gbps

SSD-1.6T-NVMe-SFF-i-1

Intel

SFF

1.6 TB

PCIe

8 Gbps

SSD-1.92T-NVMe-SFF-1

HGST

SFF

1.92 TB

PCIe

8 Gbps

SSD-1.92T-NVMe-EV-SFF-sa

Samsung

SFF

1.92 TB

PCIe

8 Gbps

SSD-2T-NVMe-SFF-2

Memblaze

SFF

2.0 TB

PCIe

8 Gbps

SSD-2T-NVMe-SFF-i

Intel

SFF

2.0 TB

PCIe

8 Gbps

SSD-2T-NVMe-SFF-i-6

Intel

SFF

2.0 TB

PCIe

8 Gbps

SSD-2T-NVMe-SFF-i-2

Intel

SFF

2.0 TB

PCIe

8 Gbps

SSD-2T-NVMe-SFF-i-3

Intel

SFF

2.0 TB

PCIe

8 Gbps

SSD-2T-NVMe-SFF-i-1

Intel

SFF

2.0 TB

PCIe

8 Gbps

SSD-3.2T-NVMe-EM-SFF-mbl

Memblaze

SFF

3.2 TB

PCIe

8 Gbps

SSD-3.2T-NVMe-EM-SFF-i

Intel

SFF

3.2 TB

PCIe

8 Gbps

SSD-3.2T-NVMe-SFF-1

Intel

SFF

3.2 TB

PCIe

8 Gbps

SSD-4T-NVMe-SFF-i-2

Intel

SFF

4.0 TB

PCIe

8 Gbps

SSD-3.2T-NVMe-SFF-2

HGST

SFF

3.2 TB

PCIe

8 Gbps

SSD-3.84T-NVMe-SFF-1

HGST

SFF

3.84 TB

PCIe

8 Gbps

SSD-3.84T-NVMe-EV-SFF-sa

Samsung

SFF

3.84 TB

PCIe

8 Gbps

SSD-4T-NVMe-SFF-i-1

Intel

SFF

4.0 TB

PCIe

8 Gbps

SSD-4T-NVMe-SFF-1

Memblaze

SFF

4.0 TB

PCIe

8 Gbps

SSD-4T-NVMe-SFF-i-5

Intel

SFF

4.0 TB

PCIe

8 Gbps

SSD-6.4T-NVMe-SFF-1

HGST

SFF

6.4 TB

PCIe

8 Gbps

SSD-6.4T-NVMe-EM-SFF-mbl

Memblaze

SFF

6.4 TB

PCIe

8 Gbps

SSD-6.4T-NVMe-EM-SFF-i

Intel

SFF

6.4 TB

PCIe

8 Gbps

SSD-7.68T-NVMe-CE-SFF-i

Intel

SFF

7.68 TB

PCIe

8 Gbps

SSD-7.68T-NVMe-EM-SFF-i

Intel

SFF

7.68 TB

PCIe

8 Gbps

SSD-8T-NVMe-SFF-i

Intel

SFF

8.0 TB

PCIe

8 Gbps

SSD-375G-NVMe-LFF-i

Intel

LFF

375 GB

PCIe

8 Gbps

SSD-375G-NVMe-SCL-i

Intel

LFF

375 GB

PCIe

8 Gbps

SSD-450G-NVMe-LFF

Intel

LFF

450 GB

PCIe

8 Gbps

SSD-750G-NVMe-SCL-i

Intel

LFF

750 GB

PCIe

8 Gbps

SSD-750G-NVMe-LFF-i

Intel

LFF

750 GB

PCIe

8 Gbps

SSD-960G-NVMe-EV-SCL-sa

Samsung

LFF

960 GB

PCIe

8 Gbps

SSD-960G-NVMe-LFF

Samsung

LFF

960 GB

PCIe

8 Gbps

SSD-960G-NVMe-LFF-1

HGST

LFF

960 GB

PCIe

8 Gbps

SSD-1T-NVMe-LFF-i-2

Intel

LFF

1.0 TB

PCIe

8 Gbps

SSD-1T-NVMe-LFF-i

Intel

LFF

1.0 TB

PCIe

8 Gbps

SSD-1T-NVMe-LFF-i-1

Intel

LFF

1.0 TB

PCIe

8 Gbps

SSD-1.2T-NVMe-LFF

Intel

LFF

1.2 TB

PCIe

8 Gbps

SSD-1.6T-NVMe-EM-SCL-i

Intel

LFF

1.6 TB

PCIe

8 Gbps

SSD-1.6T-NVMe-LFF-1

HGST

LFF

1.6 TB

PCIe

8 Gbps

SSD-1.6T-NVMe-LFF-i

Intel

LFF

1.6 TB

PCIe

8 Gbps

SSD-1.92T-NVMe-LFF-1

HGST

LFF

1.92 TB

PCIe

8 Gbps

SSD-1.92T-NVMe-EV-SCL-sa

Samsung

LFF

1.92 TB

PCIe

8 Gbps

SSD-2T-NVMe-LFF

Intel

LFF

2.0 TB

PCIe

8 Gbps

SSD-2T-NVMe-LFF-1

Memblaze

LFF

2.0 TB

PCIe

8 Gbps

SSD-2T-NVMe-LFF-i-3

Intel

LFF

2.0 TB

PCIe

8 Gbps

SSD-2T-NVMe-LFF-i-2

Intel

LFF

2.0 TB

PCIe

8 Gbps

SSD-2T-NVMe-LFF-i-1

Intel

LFF

2.0 TB

PCIe

8 Gbps

SSD-2T-NVMe-LFF-i

Intel

LFF

2.0 TB

PCIe

8 Gbps

SSD-3.2T-NVMe-EM-SCL-mbl

Memblaze

LFF

3.2 TB

PCIe

8 Gbps

SSD-3.2T-NVMe-EM-SCL-i

Intel

LFF

3.2 TB

PCIe

8 Gbps

SSD-3.2T-NVMe-LFF-2

HGST

LFF

3.2 TB

PCIe

8 Gbps

SSD-3.2T-NVMe-LFF

HGST

LFF

3.2 TB

PCIe

8 Gbps

SSD-3.84T-NVMe-LFF-1

HGST

LFF

3.84 TB

PCIe

8 Gbps

SSD-3.84T-NVMe-EV-SCL-sa

Samsung

LFF

3.84 TB

PCIe

8 Gbps

SSD-4T-NVMe-LFF-i

Intel

LFF

4.0 TB

PCIe

8 Gbps

SSD-4T-NVMe-LFF-i-1

Intel

LFF

4.0 TB

PCIe

8 Gbps

SSD-4T-NVMe-LFF

Memblaze

LFF

4.0 TB

PCIe

8 Gbps

SSD-4T-NVMe-LFF-i-2

Intel

LFF

4.0 TB

PCIe

8 Gbps

SSD-6.4T-NVMe-LFF-1

HGST

LFF

6.4 TB

PCIe

8 Gbps

SSD-6.4T-NVMe-EM-SCL-mbl

Memblaze

LFF

6.4 TB

PCIe

8 Gbps

SSD-6.4T-NVMe-EM-SCL-i

Intel

LFF

6.4 TB

PCIe

8 Gbps

SSD-7.68T-NVMe-CE-SCL-i

Intel

LFF

7.68 TB

PCIe

8 Gbps

SSD-7.68T-NVMe-EM-SCL-i

Intel

LFF

7.68 TB

PCIe

8 Gbps

SSD-8T-NVMe-LFF-i

Intel

LFF

8.0 TB

PCIe

8 Gbps

 

SATA M.2 SSDs

Model

Dimensions

Capacity

Interface

Rate

SSD-240G-SATA-S4510-M.2

M.2 2280: 80 × 22 mm (3.15 × 0.87 in)

240 GB

SATA

6 Gbps

SSD-240G-SATA-M2

M.2 2280: 80 × 22 mm (3.15 × 0.87 in)

240 GB

SATA

6 Gbps

SSD-256G-SATA-M2

M.2 2280: 80 × 22 mm (3.15 × 0.87 in)

256 GB

SATA

6 Gbps

SSD-480G-SATA-5100ECO-M.2

M.2 2280: 80 × 22 mm (3.15 × 0.87 in)

480 GB

SATA

6 Gbps

SSD-480G-SATA-S4510-M.2

M.2 2280: 80 × 22 mm (3.15 × 0.87 in)

480 GB

SATA

6 Gbps

 

NVMe SSD PCIe accelerator module

Model

Vendor

Form factor

Capacity

Interface

Rate

Link width

SSD-NVME-375G-P4800X

Intel

HHHL

375 GB

PCIe

8 Gbps

×4

SSD-NVME-750G-P4800X

Intel

HHHL

750 GB

PCIe

8 Gbps

×4

SSD-NVME-1.6T-EM-2

SHANNON

LP

1.6 TB

PCIe

8 Gbps

×8

SSD-1.6T-NVME-PM1725b

Samsung

HHHL

1.6 TB

PCIe

8 Gbps

×8

SSD-1.6T-NVME-PM1725b-M

Samsung

HHHL

1.6 TB

PCIe

8 Gbps

×8

SSD-1.6T-NVME-PB516

Memblaze

HHHL

1.6 TB

PCIe

8 Gbps

×8

SSD-NVME-3.2T-EM-2

SHANNON

LP

3.2 TB

PCIe

8 Gbps

×8

SSD-NVME-2T-EV

Memblaze

LP

2.0 TB

PCIe

8 Gbps

×8

SSD-NVME-2T-P4600

Intel

HHHL

2.0 TB

PCIe

8 Gbps

×8

SSD-NVME-3.2T-PBlaze5

Memblaze

HHHL

3.2 TB

PCIe

8 Gbps

×8

SSD-NVME-4T-PBlaze5

Memblaze

HHHL

4.0 TB

PCIe

8 Gbps

×8

SSD-NVME-4T-P4500

Intel

HHHL

4.0 TB

PCIe

8 Gbps

×4

SSD-NVME-4T-P4600

Intel

HHHL

4.0 TB

PCIe

8 Gbps

×8

SSD-NVME-6.4T-PBlaze5

Memblaze

HHHL

6.4 TB

PCIe

8 Gbps

×8

 

Drive LEDs

The server supports SAS, SATA, and NVMe drives, of which SAS and SATA drives are hot swappable by default. You can use the LEDs on a drive to identify its status after it is connected to a storage controller.

Figure 11-2 shows the location of the LEDs on a drive.

Figure 11-2 Drive LEDs


(1) Fault/UID LED

(2) Present/Active LED

 

To identify the status of a SAS or SATA drive, use Table 11-4. To identify the status of an NVMe drive, use Table 11-5.

Table 11-4 SAS/SATA drive LED description

Fault/UID LED status

Present/Active LED status

Description

Flashing amber (0.5 Hz)

Steady green/Flashing green (4.0 Hz)

A drive failure is predicted. As a best practice, replace the drive before it fails.

Steady amber

Steady green/Flashing green (4.0 Hz)

The drive is faulty. Replace the drive immediately.

Steady blue

Steady green/Flashing green (4.0 Hz)

The drive is operating correctly and is selected by the RAID controller.

Off

Flashing green (4.0 Hz)

The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive.

Off

Steady green

The drive is present but no data is being read or written to the drive.

Off

Off

The drive is not securely installed.

 

Table 11-5 NVMe drive LED description

Fault/UID LED status

Present/Active LED status

Description

Flashing amber (0.5 Hz)

Off

The managed hot removal process is completed. You can remove the drive safely.

Flashing amber (4.0 Hz)

Off

The drive is in the hot plug process.

Steady amber

Steady green/Flashing green (4.0 Hz)

The drive is faulty. Replace the drive immediately.

Steady blue

Steady green/Flashing green (4.0 Hz)

The drive is operating correctly and selected by the RAID controller.

Off

Flashing green (4.0 Hz)

The drive is performing a RAID migration or rebuilding, or the system is reading or writing data to the drive.

Off

Steady green

The drive is present but no data is being read or written to the drive.

Off

Off

The drive is not securely installed.
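The LED decode rules in Table 11-4 can be expressed compactly. The following is an illustrative sketch only; the helper and its state names are not part of any H3C tool, they simply encode the SAS/SATA drive rows above.

```python
# Illustrative sketch: encodes the SAS/SATA drive LED states from Table 11-4.
SAS_SATA_LED_STATUS = {
    # (Fault/UID LED, Present/Active LED): meaning
    ("flashing_amber_0.5hz", "green"): "Drive failure predicted; replace the drive soon",
    ("steady_amber", "green"): "Drive is faulty; replace it immediately",
    ("steady_blue", "green"): "Drive is operating correctly and selected by the RAID controller",
    ("off", "flashing_green_4hz"): "RAID migration/rebuild or read/write activity",
    ("off", "steady_green"): "Drive present, no read/write activity",
    ("off", "off"): "Drive not securely installed",
}

def describe_sas_sata_leds(fault_uid: str, present_active: str) -> str:
    """Return the drive status for a pair of LED states, or a fallback note."""
    # Steady green and flashing green (4.0 Hz) carry the same meaning in the
    # first three rows of Table 11-4, so normalize them to "green" here.
    if fault_uid != "off" and present_active in ("steady_green", "flashing_green_4hz"):
        present_active = "green"
    return SAS_SATA_LED_STATUS.get((fault_uid, present_active), "Unknown LED combination")
```

The NVMe rows in Table 11-5 differ only in the two hot-removal states and could be added to the same mapping.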

 

Drive configurations and numbering

4LFF server

Table 11-6 presents the drive configurations available for the 4LFF server and their compatible types of storage controllers.

Table 11-6 Drive and storage controller configurations (4LFF server)

Drive configuration

Storage controller

4LFF

(4 front LFF SAS/SATA drives)

·         Embedded RSTe

·         Mezzanine

·         Standard

4LFF+2SFF

(4 front LFF SAS/SATA drives + 2 rear SFF SAS/SATA drives)

·         Embedded RSTe

·         Mezzanine

 

These drive configurations use different drive numbering schemes, as shown in Table 11-7.

Table 11-7 Drive numbering schemes (4LFF server)

Drive configuration

Drive numbering

4LFF (4 front LFF drives)

See Figure 11-3.

4LFF+2SFF (4 front LFF drives and 2 rear SFF drives)

See Figure 11-4.

 

Figure 11-3 Drive numbering for 4LFF drive configurations (4LFF server)

 

Figure 11-4 Drive numbering for 4LFF+2SFF drive configurations (4LFF server)

 

8SFF server

Table 11-8 presents the drive configurations available for the 8SFF server and their compatible types of storage controllers and NVMe SSD expander modules.

Table 11-8 Drive, storage controller, and NVMe SSD expander configurations (8SFF server)

Drive configuration

Storage controller

NVMe SSD expander

Drive backplane and installation requirements

8SFF

(8 front SFF SAS/SATA drives)

·         Embedded RSTe

·         Mezzanine

·         Standard

N/A

Use the drive backplane that supports only 8SFF SAS/SATA drives.

10SFF

(8 front SFF SAS/SATA drives + 2 front SFF SAS/SATA drives)

·         Embedded RSTe

·         Mezzanine + embedded RSTe

·         Standard + embedded RSTe

N/A

Use the drive backplane that supports only 8SFF SAS/SATA drives.

8SFF

(4 front SFF SAS/SATA drives + 4 front SFF NVMe drives)

·         Embedded RSTe

·         Mezzanine

·         Standard

1 × 4-port NVMe SSD expander module

Use the drive backplane that supports both 4SFF SAS/SATA drives and 4SFF NVMe drives.

Install the SAS/SATA drives in drive slots 4 through 7 and install the NVMe drives in drive slots 0 through 3.

8SFF

(8 front SFF NVMe drives)

N/A

·         1 × 8-port NVMe SSD expander module

·         2 × 4-port NVMe SSD expander modules

Use the drive backplane that supports only 8SFF NVMe drives.

10SFF

(8 front SFF NVMe drives + 2 front SFF SAS/SATA drives)

Embedded RSTe

·         1 × 8-port NVMe SSD expander module

·         2 × 4-port NVMe SSD expander modules

Use the drive backplane that supports only 8SFF NVMe drives.

10SFF

(8 front SFF SAS/SATA drives + 2 front SFF NVMe drives)

·         Embedded RSTe

·         Mezzanine

·         Standard

1 × 4-port NVMe SSD expander module

Use the drive backplane that supports only 8SFF SAS/SATA drives.
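The slot-placement rule for the mixed 4 SAS/SATA + 4 NVMe 8SFF configuration above can be checked mechanically. This is a hedged sketch, not an H3C tool; the function name and dict layout are illustrative assumptions.

```python
# Sketch of the mixed 8SFF placement rule: NVMe drives belong in slots 0-3,
# SAS/SATA drives in slots 4-7 (per the 8SFF configuration table above).
def valid_mixed_8sff_layout(slot_to_drive_type):
    """slot_to_drive_type maps slot number (0-7) to 'SAS', 'SATA', or 'NVMe'."""
    for slot, dtype in slot_to_drive_type.items():
        if dtype == "NVMe" and slot not in range(0, 4):
            return False
        if dtype in ("SAS", "SATA") and slot not in range(4, 8):
            return False
    return True
```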

 

These drive configurations use different drive numbering schemes, as shown in Table 11-9.

Table 11-9 Drive numbering schemes (8SFF server)

Drive configuration

Drive numbering

8SFF

See Figure 11-5.

10SFF

See Figure 11-6.

 

Figure 11-5 Drive numbering for the 8SFF drive configurations (8SFF server)

 

Figure 11-6 Drive numbering for the 10SFF drive configurations (8SFF server)

 

10SFF server

Table 11-10 presents the drive configurations available for the 10SFF server and their compatible types of storage controllers.

Table 11-10 Drive and storage controller configurations (10SFF server)

Drive configuration

Storage controller

10SFF

(10 SFF front SAS/SATA drives)

·         Mezzanine

·         Standard

12SFF

(10SFF front SAS/SATA drives + 2SFF rear SAS/SATA drives)

Mezzanine

 

These drive configurations use different drive numbering schemes, as shown in Table 11-11.

Table 11-11 Drive numbering schemes (10SFF server)

Drive configuration

Drive numbering

10SFF

See Figure 11-7.

12SFF

See Figure 11-8.

 

Figure 11-7 Drive numbering for the 10SFF drive configuration (10SFF server)

 

Figure 11-8 Drive numbering for the 12SFF (10 front + 2 rear) drive configuration (10SFF server)

 

PCIe modules

Typically, the PCIe modules are available in the following standard form factors:

·          LP—Low profile.

·          FHHL—Full height and half length.

·          FHFL—Full height and full length.

·          HHHL—Half height and half length.

·          HHFL—Half height and full length.

Some PCIe modules, such as mezzanine storage controllers, are in non-standard form factors.

Storage controllers

The server supports the following types of storage controllers depending on their form factors:

·          Embedded RAID controller—Embedded in the server and does not require installation.

·          Mezzanine storage controller—Installed on the mezzanine storage controller connector of the system board and does not require a riser card for installation.

·          Standard storage controller—Comes in a standard PCIe form factor and typically requires a riser card for installation.

For some storage controllers, you can order a power fail safeguard module to prevent data loss from power outages. This module provides a flash card and a supercapacitor. When a system power failure occurs, the supercapacitor provides power for a minimum of 20 seconds. During this interval, the storage controller can transfer data from DDR memory to the flash card, where the data remains indefinitely or until the controller retrieves the data. If the storage controller contains a built-in flash card, you can order only a supercapacitor.
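The 20-second hold-up figure above implies a simple sizing check: the controller must be able to move its entire DDR cache to the flash card before the supercapacitor drains. The sketch below is a back-of-the-envelope illustration, not an H3C tool; the flash write bandwidth used in the example is an assumed figure, not a documented value.

```python
# Minimum supercapacitor hold-up time stated in the user guide.
SUPERCAP_HOLDUP_S = 20

def flush_time_s(cache_gb: float, flash_write_mb_s: float) -> float:
    """Seconds needed to move the whole DDR cache to the flash card."""
    return (cache_gb * 1024) / flash_write_mb_s

def cache_flush_fits(cache_gb: float, flash_write_mb_s: float) -> bool:
    """True if the cache can be flushed within the hold-up window."""
    return flush_time_s(cache_gb, flash_write_mb_s) <= SUPERCAP_HOLDUP_S
```

For example, a 4 GB cache (as on RAID-L460-M4) at an assumed 400 MB/s flash write speed flushes in about 10 seconds, inside the 20-second window.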

Embedded RSTe RAID controller

Item

Specifications

Type

Embedded in PCH of the system board

Connectors

·         One onboard ×8 mini-SAS connector

·         One onboard ×1 SATA connector

Number of internal ports

9 internal SATA ports

Drive interface

6 Gbps SATA 3.0

PCIe interface

PCIe2.0 ×4

RAID levels

0, 1, 5, 10

Built-in cache memory

N/A

Supported drives

·         SATA HDD

·         SATA SSD

Power fail safeguard module

Not supported

Firmware upgrade

Upgraded with BIOS

 

HBA-1000-M2-1

Item

Specifications

Type

Mezzanine storage controller

Form factor

137 × 103 mm (5.39 × 4.06 in)

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.0 ×8

RAID levels

0, 1, 10

Built-in cache memory

N/A

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

Not supported

Firmware upgrade

Online upgrade

 

HBA-H460-B1

Item

Specifications

Type

Standard storage controller

Form factor

LP

Connectors

One ×8 mini-SAS connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.0 ×8

RAID levels

0, 1, 10

Built-in cache memory

N/A

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

Not supported

Firmware upgrade

Online upgrade

 

HBA-H460-M1

Item

Specifications

Type

Mezzanine storage controller

Form factor

137 × 103 mm (5.39 × 4.06 in)

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.0 ×8

RAID levels

0, 1, 10

Built-in cache memory

N/A

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

Not supported

Firmware upgrade

Online upgrade

 

HBA-LSI-9300-8i-A1-X

Item

Specifications

Type

Standard storage controller

Form factor

LP

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.0 ×8

RAID levels

Not supported

Built-in cache memory

N/A

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

Not supported

Firmware upgrade

Online upgrade

 

HBA-LSI-9311-8i

Item

Specifications

Type

Standard storage controller

Form factor

LP

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.0 ×8

RAID levels

0, 1, 1E, 10

Built-in cache memory

N/A

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

Not supported

Firmware upgrade

Online upgrade

 

HBA-LSI-9440-8i

Item

Specifications

Type

Standard storage controller

Form factor

LP

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.1 ×8

RAID levels

0, 1, 5, 10, 50

Built-in cache memory

N/A

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

Not supported

Firmware upgrade

Online upgrade

 

RAID-L460-M4

Item

Specifications

Type

Mezzanine storage controller

Dimensions

137 × 103 mm (5.39 × 4.06 in)

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.1 ×8

RAID levels

0, 1, 5, 6, 10, 50, 60

Built-in cache memory

4 GB internal cache module (DDR4-2133 MHz)

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

BAT-LSI-G3

The supercapacitor is optional.

Built-in flash card

Available

Supercapacitor connector

Available

Firmware upgrade

Online upgrade

 

RAID-LSI-9361-8i(1G)-A1-X

Item

Specifications

Type

Standard storage controller

Form factor

LP

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.0 ×8

RAID levels

0, 1, 5, 6, 10, 50, 60

Built-in cache memory

1 GB internal cache module (DDR3-1866 MHz)

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

Flash-LSI-G2

The power fail safeguard module is optional.

Built-in flash card

N/A

Supercapacitor connector

N/A

The supercapacitor connector is on the flash card of the power fail safeguard module.

Firmware upgrade

Online upgrade

 

RAID-LSI-9361-8i(2G)-1-X

Item

Specifications

Type

Standard storage controller

Form factor

LP

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.0 ×8

RAID level

0, 1, 5, 6, 10, 50, 60

Built-in cache memory

2 GB internal cache module (DDR3-1866 MHz)

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

Flash-LSI-G2

The power fail safeguard module is optional.

Built-in flash card

N/A

Supercapacitor connector

N/A

The supercapacitor connector is on the flash card of the power fail safeguard module.

Firmware upgrade

Online upgrade

 

RAID-LSI-9460-8i(2G)

Item

Specifications

Type

Standard storage controller

Form factor

LP

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.1 ×8

RAID levels

0, 1, 5, 6, 10, 50, 60

Built-in cache memory

2 GB internal cache module (DDR4-2133 MHz)

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

BAT-LSI-G3

The supercapacitor is optional.

Built-in flash card

Available

Supercapacitor connector

Available

Firmware upgrade

Online upgrade

 

RAID-LSI-9460-8i(4G)

Item

Specifications

Type

Standard storage controller

Form factor

LP

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.1 ×8

RAID levels

0, 1, 5, 6, 10, 50, 60

Built-in cache memory

4 GB internal cache module (DDR4-2133 MHz)

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

BAT-LSI-G3

The supercapacitor is optional.

Built-in flash card

Available

Supercapacitor connector

Available

Firmware upgrade

Online upgrade

 

RAID-P430-M1

Item

Specifications

Type

Mezzanine storage controller

Form factor

137 × 103 mm (5.39 × 4.06 in)

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.0 ×8

RAID levels

0, 1, 1E, 5, 6, 10, 50, 60, simple volume

Built-in cache memory

1 GB internal cache module (DDR3-1600 MHz, 72-bit bus at 12.8 Gbps)

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

Flash-PMC-G2

The power fail safeguard module is optional.

Built-in flash card

N/A

Supercapacitor connector

N/A

The supercapacitor connector is on the flash card of the power fail safeguard module.

Firmware upgrade

Online upgrade

 

RAID-P430-M2

Item

Specifications

Type

Mezzanine storage controller

Form factor

137 × 103 mm (5.39 × 4.06 in)

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.0 ×8

RAID levels

0, 1, 1E, 5, 6, 10, 50, 60, simple volume

Built-in cache memory

2 GB internal cache module (DDR3-1600 MHz, 72-bit bus at 12.8 Gbps)

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

Flash-PMC-G2

The power fail safeguard module is optional.

Built-in flash card

N/A

Supercapacitor connector

N/A

The supercapacitor connector is on the flash card of the power fail safeguard module.

Firmware upgrade

Online upgrade

 

RAID-P460-B2

Item

Specifications

Type

Standard storage controller

Form factor

LP

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.0 ×8

RAID levels

0, 1, 5, 6, 10, 50, 60

Built-in cache memory

2 GB internal cache module (DDR4-2133 MHz)

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

BAT-PMC-G3

The supercapacitor is optional.

Built-in flash card

Available

Supercapacitor connector

Available

Firmware upgrade

Online upgrade

 

RAID-P460-B4

Item

Specifications

Type

Standard storage controller

Form factor

LP

Connectors

One ×8 mini-SAS connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.0 ×8

RAID levels

0, 1, 5, 6, 10, 50, 60

Built-in cache memory

4 GB internal cache module (DDR4-2133 MHz)

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

BAT-PMC-G3

The supercapacitor is optional.

Built-in flash card

Available

Supercapacitor connector

Available

Firmware upgrade

Online upgrade

 

RAID-P460-M2

Item

Specifications

Type

Mezzanine storage controller

Dimensions

137 × 103 mm (5.39 × 4.06 in)

Connectors

One ×8 mini-SAS-HD connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.0 ×8

RAID levels

0, 1, 5, 6, 10, 50, 60

Built-in cache memory

2 GB internal cache module (DDR4-2133 MHz)

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

BAT-PMC-G3

The supercapacitor is optional.

Built-in flash card

Available

Supercapacitor connector

Available

Firmware upgrade

Online upgrade

 

RAID-P460-M4

Item

Specifications

Type

Mezzanine storage controller

Dimensions

137 × 103 mm (5.39 × 4.06 in)

Connectors

One ×8 mini-SAS connector

Number of internal ports

8 internal SAS ports (compatible with SATA)

Drive interface

12 Gbps SAS 3.0 (compatible with 6 Gbps SATA 3.0)

PCIe interface

PCIe3.0 ×8

RAID levels

0, 1, 1E, 5, 6, 10, 50, 60

Built-in cache memory

4 GB internal cache module (DDR3-2133 MHz, 72-bit bus at 12.8 Gbps)

Supported drives

·         SAS HDD

·         SAS SSD

·         SATA HDD

·         SATA SSD

Power fail safeguard module

BAT-PMC-G3

The supercapacitor is optional.

Built-in flash card

Available

Supercapacitor connector

Available

Firmware upgrade

Online upgrade

 

NVMe SSD expander modules

Model

Specifications

EX-4NVMe-A

4-port NVMe SSD expander module, which supports a maximum of four NVMe drives.

EX-8NVMe-A

8-port NVMe SSD expander module, which supports a maximum of eight NVMe drives.

 

GPU modules

GPU-M4-1

Item

Specifications

PCIe interface

PCIe3.0 ×16

Form factor

LP, single-slot wide

Maximum power consumption

75 W

Display connectors

N/A

Memory size

4 GB GDDR5

Memory bus width

128 bits

Memory bandwidth

88 Gbps

Power connector

N/A

 

GPU-M4000-1-X

Item

Specifications

PCIe interface

PCIe3.0 ×16

Form factor

FH3/4FL, single-slot wide

Maximum power consumption

120 W

Display connectors

·         1 × DVI-I connector

·         2 × DP connectors

Memory size

8 GB GDDR5

Memory bus width

256 bits

Memory bandwidth

192 Gbps

Power connector

Available

 

GPU-M2000

Item

Specifications

PCIe interface

PCIe3.0 ×16

Form factor

FHHL, single-slot wide

Maximum power consumption

75 W

Display connectors

4 × DP connectors

Memory size

4 GB GDDR5

Memory bus width

128 bits

Memory bandwidth

105.7 Gbps

Power connector

N/A

 

GPU-P4-X

Item

Specifications

PCIe interface

PCIe3.0 ×16

Form factor

LP, single-slot wide

Maximum power consumption

75 W

Display connectors

N/A

Memory size

8 GB GDDR5

Memory bus width

256 bits

Memory bandwidth

192 Gbps

Power connector

N/A

 

GPU-T4

Item

Specifications

PCIe interface

PCIe3.0 ×16

Form factor

LP, single-slot wide

Maximum power consumption

70 W

Display connectors

N/A

Memory size

16 GB GDDR6

Memory bus width

256 bits

Memory bandwidth

320 Gbps

Power connector

N/A

 

GPU-MLU100-D3

Item

Specifications

PCIe interface

PCIe3.0 ×16

Form factor

HHHL, single-slot wide

Maximum power consumption

75 W

Memory size

8 GB

Memory bus width

256 bits

Memory bandwidth

102.4 Gbps

Power interface

N/A

 

PCIe Ethernet adapters

In addition to the PCIe Ethernet adapters, the server also supports mLOM Ethernet adapters (see "mLOM Ethernet adapters").

Figure 11-9 PCIe Ethernet adapter specifications

Model

Ports

Connector

Data rate

Bus type

Form factor

NCSI

CNA-10GE-2P-510F-B2-1-X

2

SFP+

10 Gbps

PCIe 3.0 ×8

LP

Not supported

CNA-10GE-2P-560F-B2-1-X

2

SFP+

10 Gbps

PCIe 2.0 ×8

LP

Not supported

CNA-560T-B2-10Gb-2P-1-X

2

RJ-45

10 Gbps

PCIe 3.0 ×8

LP

Not supported

CNA-QL41262HLCU-11-2*25G

2

SFP28

25 Gbps

PCIe 3.0 ×8

LP

Not supported

IB-MCX555A-ECAT-100Gb-1P

1

QSFP28

100 Gbps

PCIe 3.0 ×16

LP

Not supported

IB-MCX555A-ECAT-100Gb-1P-1

1

QSFP28

100 Gbps

PCIe 3.0 ×16

LP

Not supported

IB-MCX453A-FCAT-56/40Gb-1P

1

QSFP28

56 Gbps

PCIe 3.0 ×8

LP

Not supported

IB-MCX453A-FCAT-56/40Gb-1P-1

1

QSFP28

56 Gbps

PCIe 3.0 ×8

LP

Not supported

IB-MCX354A-FCBT-56/40Gb-2P-X

2

QSFP+

40/56 Gbps

PCIe3.0 ×8

LP

Not supported

NIC-10GE-2P-520F-B2-1-X

2

SFP+

10 Gbps

PCIe 3.0 ×8

LP

Not supported

NIC-10GE-2P-530F-B2-1-X

2

SFP+

10 Gbps

PCIe 2.0 ×8

LP

Not supported

NIC-620F-B2-25Gb-2P-1-X

2

SFP28

25 Gbps

PCIe 3.0 ×8

LP

Supported

NIC-GE-4P-360T-B2-1-X

4

RJ-45

10/100/1000 Mbps

PCIe 2.0 ×4

LP

Not supported

NIC-BCM957416-T-B-10Gb-2P

2

RJ-45

10 Gbps

PCIe3.0 ×8

LP

Not supported

NIC-BCM957302-F-B-10Gb-2P

2

SFP+

10 Gbps

PCIe3.0 ×8

LP

Not supported

NIC-BCM957412-F-B-10Gb-2P

2

SFP+

10 Gbps

PCIe3.0 ×8

LP

Not supported

NIC-BCM957414-F-B-25Gb-2P

2

SFP28

25 Gbps

PCIe3.0 ×8

LP

Not supported

NIC-957454A4540C-B-100G-1P

1

QSFP28

100 Gbps

PCIe 3.0 ×16

LP

Not supported

NIC-CAVIUM-F-B-25Gb-2P

2

SFP28

25 Gbps

PCIe3.0 ×8

LP

Not supported

NIC-MCX415A-F-B-100Gb-1P

1

QSFP28

100 Gbps

PCIe3.0 ×16

LP

Not supported

NIC-MCX416A-F-B-40/56-2P

2

QSFP28

56 Gbps

PCIe 3.0 ×16

LP

Not supported

NIC-MCX416A-F-B-100Gb-2P

2

QSFP28

100 Gbps

PCIe3.0 ×16

LP

Not supported

NIC-MCX4121A-F-B-10Gb-2P

2

SFP28

10 Gbps

PCIe3.0 ×8

LP

Not supported

NIC-X520DA2-F-B-10Gb-2P

2

SFP+

10 Gbps

PCIe 2.0 ×8

LP

Not supported

NIC-X540-T2-T-10Gb-2P

2

RJ-45

10 Gbps

PCIe2.0 ×8

LP

Not supported

NIC-XL710-QDA1-F-40Gb-1P

1

QSFP+

40 Gbps

PCIe3.0 ×8

LP

Not supported

NIC-XL710-QDA2-F-40Gb-2P

2

QSFP+

40 Gbps

PCIe3.0 ×8

LP

Not supported

NIC-X710DA2-F-B-10Gb-2P-2

2

SFP+

10 Gbps

PCIe3.0 ×8

LP

Not supported

NIC-X710DA4-F-B-10Gb-4P

4

SFP+

10 Gbps

PCIe3.0 ×8

LP

Not supported

NIC-MCX4121A-F-B-25Gb-2P

2

SFP28

25 Gbps

PCIe 3.0 ×8

LP

Not supported

NIC-MCX512A-ACAT-F-2*25Gb

2

SFP28

25 Gbps

PCIe3.0 ×8

LP

Not supported

NIC-XXV710-F-B-25Gb-2P

2

SFP28

25 Gbps

PCIe 3.0 ×8

LP

Not supported

NIC-OPA-100Gb-1P

1

QSFP28

100 Gbps

PCIe3.0 ×16

LP

Not supported

NIC-10/25Gb-2P-640FLR-SFP28

2

SFP28

25 Gbps

PCIe 3.0 ×8

FLOM

Supported

NIC-iETH-PS225-H16

2

SFP28

25 Gbps

PCIe 3.0 ×8

LP

Supported

 

FC HBAs

Figure 11-10 FC HBA specifications

Model

Ports

Connector

Data rate

Form factor

FC-HBA-QLE2560-8Gb-1P-1-X

1

SFP+

8 Gbps

LP

FC-HBA-QLE2562-8Gb-2P-1-X

2

SFP+

8 Gbps

LP

FC-HBA-QLE2690-16Gb-1P-1-X

1

SFP+

16 Gbps

LP

FC-HBA-QLE2692-16Gb-2P-1-X

2

SFP+

16 Gbps

LP

HBA-8Gb-LPe12000-1P-1-X

1

SFP+

8 Gbps

LP

HBA-8Gb-LPe12002-2P-1-X

2

SFP+

8 Gbps

LP

HBA-16Gb-LPe31000-1P-1-X

1

SFP+

16 Gbps

LP

HBA-16Gb-LPe31002-2P-1-X

2

SFP+

16 Gbps

LP

FC-HBA-LPe32000-32Gb-1P-X

1

SFP+

32 Gbps

LP

FC-HBA-LPe32002-32Gb-2P-X

2

SFP+

32 Gbps

LP

FC-HBA-QLE2740-32Gb-1P

1

SFP+

32 Gbps

LP

FC-HBA-QLE2742-32Gb-2P

2

SFP+

32 Gbps

LP

 

mLOM Ethernet adapters

In addition to mLOM Ethernet adapters, the server also supports PCIe Ethernet adapters (see "PCIe Ethernet adapters").

The server supports one HDM shared network port for out-of-band HDM management, which is available if an NCSI-capable mLOM or PCIe Ethernet adapter is installed.

By default, port 1 on the mLOM Ethernet adapter (if any) is used as the HDM shared network port. If no mLOM Ethernet adapter is installed, port 1 on the PCIe Ethernet adapter is used. You can change the HDM shared network port as needed from the HDM Web interface.
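The default selection rule described above can be sketched as follows. This is an illustrative sketch only, not an HDM API; the adapter records are hypothetical dicts.

```python
# Sketch of the default HDM shared network port selection rule: port 1 on an
# NCSI-capable mLOM adapter if one is installed, otherwise port 1 on an
# NCSI-capable PCIe Ethernet adapter, otherwise no shared port.
def default_hdm_shared_port(adapters):
    """Return (adapter model, port number) for the default shared port."""
    mlom = [a for a in adapters if a["type"] == "mLOM" and a["ncsi"]]
    if mlom:
        return (mlom[0]["model"], 1)
    pcie = [a for a in adapters if a["type"] == "PCIe" and a["ncsi"]]
    if pcie:
        return (pcie[0]["model"], 1)
    return None
```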

NIC-GE-4P-360T-L3

Item

Specifications

Dimensions

128 × 68 mm (5.04 × 2.68 in)

Ports

4

Connector

RJ-45

Data rate

1000 Mbps

Bus type

1000BASE-X ×4

NCSI

Supported

 

Riser cards

To expand the server with PCIe modules, you can install riser cards on PCIe riser connectors 1 and 2. Riser connector 1 is for processor 1, and riser connector 2 is for processor 2. When a riser card is installed on riser connector 1 or riser connector 2, the PCIe slot provided by the riser card is numbered 1 or 2, respectively.

Each PCIe slot in a riser card can supply a maximum of 75 W of power to the PCIe module. Connect a separate power cord to the PCIe module only if it requires more than 75 W.
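The 75 W rule can be stated as a one-line check. A minimal sketch, not H3C software; the function name is illustrative.

```python
# Maximum power a riser-card PCIe slot can supply, per the guide.
RISER_SLOT_MAX_W = 75

def needs_separate_power_cord(module_max_power_w: float) -> bool:
    """A PCIe module drawing more than 75 W needs its own power cord."""
    return module_max_power_w > RISER_SLOT_MAX_W
```

For example, the GPU-M4000-1-X draws up to 120 W and therefore needs a separate power cord, while the 75 W GPU-M4-1 does not.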

RC-FHHL-1U-G3

Item

Specifications

PCIe riser connector

·         Connector 1

·         Connector 2

PCIe slots

Slot 1/2: PCIe3.0 ×16

Form factors of PCIe modules

·         Slot 1: FHHL

·         Slot 2: FHFL

Maximum power supplied per PCIe slot

75 W

 

Figure 11-11 RC-FHHL-1U-G3 riser card

 

Figure 11-12 PCIe slots when two riser cards are installed

 

Fans

Fan layout

The server supports a maximum of seven hot-swappable fans. Figure 11-13 shows the layout of the fans in the chassis.

Figure 11-13 Fan layout

 

Fan specifications

Item

Specifications

Model

FAN-1U-G3

Form factor

1U standard fan

 

Power supplies

The power supplies have an overtemperature protection mechanism. A power supply stops output when it overheats and automatically resumes operation when the overtemperature condition is removed.

550 W Platinum power supply

Item

Specifications

Model

·         PSR550-12A

·         PSR550-12A-1

·         PSR550-12A-2

Rated input voltage range

·         100 VAC to 240 VAC @ 50/60 Hz (10 A receptacle)

·         192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·         8.0 A @ 100 VAC to 240 VAC

·         2.75 A @ 240 VDC

Maximum rated output power

550 W

Efficiency at 50% load

94%, 80 Plus Platinum level

Temperature requirements

·         Operating temperature: 0°C to 50°C (32°F to 122°F)

·         Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

550 W high-efficiency Platinum power supply

Item

Specifications

Model

DPS-550W-12A

Rated input voltage range

·         100 VAC to 240 VAC @ 50/60 Hz (10 A receptacle)

·         192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·         7.1 A @ 100 VAC to 240 VAC

·         2.8 A @ 240 VDC

Maximum rated output power

550 W

Efficiency at 50% load

94%, 80 Plus Platinum level

Temperature requirements

·         Operating temperature: 0°C to 55°C (32°F to 131°F)

·         Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 85%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

800 W Platinum power supply

Item

Specifications

Model

PSR800-12A

Rated input voltage range

·         100 VAC to 240 VAC @ 50/60 Hz (10 A receptacle)

·         192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·         10.0 A @ 100 VAC to 240 VAC

·         4.0 A @ 240 VDC

Maximum rated output power

800 W

Efficiency at 50% load

94%, 80 Plus Platinum level

Temperature requirements

·         Operating temperature: 0°C to 50°C (32°F to 122°F)

·         Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

800 W –48 VDC power supply

Item

Specifications

Model

DPS-800W-12A-48V

Rated input voltage range

–48 VDC to –60 VDC

Maximum rated input current

20.0 A @ –48 VDC to –60 VDC

Maximum rated output power

800 W

Efficiency at 50% load

92%

Temperature requirements

·         Operating temperature: 0°C to 55°C (32°F to 131°F)

·         Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

800 W 336 V high-voltage DC power supply

Item

Specifications

Model

PSR800-12AHD

Rated input voltage range

·         100 VAC to 240 VAC @ 50/60 Hz (10 A receptacle)

·         180 VDC to 400 VDC (240 to 336 HVDC power source)

Maximum rated input current

·         10.0 A @ 100 VAC to 240 VAC

·         3.8 A @ 240 VDC

Maximum rated output power

800 W

Efficiency at 50% load

94%

Temperature requirements

·         Operating temperature: 0°C to 50°C (32°F to 122°F)

·         Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

850 W high-efficiency Platinum power supply

Item

Specifications

Model

DPS-850W-12A

Rated input voltage range

·         100 VAC to 240 VAC @ 50/60 Hz (10 A receptacle)

·         192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·         10.0 A @ 100 VAC to 240 VAC

·         4.4 A @ 240 VDC

Maximum rated output power

850 W

Efficiency at 50% load

94%, 80 Plus Platinum level

Temperature requirements

·         Operating temperature: 0°C to 55°C (32°F to 131°F)

·         Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 85%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

850 W Titanium power supply

Item

Specifications

Model

PSR850-12A

Rated input voltage range

·         100 VAC to 240 VAC @ 50/60 Hz (10 A receptacle)

·         192 VDC to 288 VDC (240 HVDC power source)

Maximum rated input current

·         11 A @ 100 VAC to 240 VAC

·         4.0 A @ 240 VDC

Maximum rated output power

850 W

Efficiency at 50% load

96%, 80 Plus Titanium level

Temperature requirements

·         Operating temperature: 0°C to 50°C (32°F to 122°F)

·         Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 85%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

1200 W Platinum power supply

Item

Specifications

Model

PSR1200-12A

Rated input voltage range

·         100 VAC to 127 VAC @ 50/60 Hz (1000 W)

·         100 VAC to 240 VAC @ 50/60 Hz (1200 W)

·         192 VDC to 288 VDC (1200 W)

Maximum rated input current

·         12.0 A @ 100 VAC to 240 VAC

·         6.0 A @ 240 VDC

Maximum rated output power

1200 W

Efficiency at 50% load

94%, 80 Plus Platinum level

Temperature requirements

·         Operating temperature: 0°C to 50°C (32°F to 122°F)

·         Storage temperature: –40°C to +70°C (–40°F to +158°F)

Operating humidity

5% to 90%

Maximum altitude

5000 m (16404.20 ft)

Redundancy

1+1 redundancy

Hot swappable

Yes

Cold backup

Yes

 

Expander modules and transfer modules

Model

Specifications

ODD-Cage-1U

Common expander module for optical drive expansion on the 8SFF server

DSD-EX

Dual SD card extended module (supports RAID 1)

RS-M2-B

M.2 transfer module (supports two SATA M.2 SSDs)

UV-1U-LFF

Front media module 1 (available for the 4LFF server)

UV-1U-SFF

Front media module 2 (available for 8SFF and 10SFF servers)

HDD-Cage-2SFF-Rear-1U

Rear 2SFF SAS/SATA drive cage

HDD-Cage-2SFF-Front

Front 2SFF SAS/SATA drive cage

HDD-Cage-2SFF-1U-G3-NVMe

Front 2SFF NVMe drive cage


 

Diagnostic panels

Diagnostic panels provide diagnostics and troubleshooting capabilities. You can locate and troubleshoot component failures by using the diagnostic panels in conjunction with the event log generated in HDM.

 

 

NOTE:

A diagnostic panel displays only one component failure at a time. When multiple component failures exist, the panel cycles through the failures at 4-second intervals.

 

Diagnostic panel specifications

Model

Specifications

SD-SFF-A

SFF diagnostic panel for the 8SFF and 10SFF servers

SD-LFF-G3-A

LFF diagnostic panel for the 4LFF server

 

Diagnostic panel view

Figure 11-14 shows the error code and LEDs on a diagnostic panel.

Figure 11-14 Diagnostic panel view

(1) Error code

(2) LEDs

 

For more information about the LEDs and error codes, see "LEDs."

LEDs

POST LED

LED status

Error code

Description

Steady green

Code for the current POST phase (in the range of 00 to 99)

The server is performing POST without detecting any error.

Flashing red

Code for the current POST phase (in the range of 00 to 99)

The POST process encountered an error and stopped in the displayed phase.

Off

00

The server is operating correctly.

 

TEMP LED

LED status

Error code

Description

Flashing red

Temperature sensor ID

A severe temperature warning is present on the component monitored by the sensor.

This warning might occur because the temperature of the component has exceeded the upper threshold or dropped below the lower threshold.

 

CAP LED

LED status

Error code

Description

Flashing amber

01

The system power consumption has exceeded the power cap value.

 

Component LEDs

An alarm is present if a component LED has one of the following behaviors:

·          Flashing amber (0.5 Hz)—A predictive alarm has occurred.

·          Flashing amber (1 Hz)—A general alarm has occurred.

·          Flashing red (1 Hz)—A severe alarm has occurred.

Use Table 11-12 to identify the faulty item if a component LED has one of those behaviors. To obtain records of component status changes, use the event log in HDM. For information about using the event log, see HDM online help.

Table 11-12 LED, error code and faulty item matrix

LED                Error code         Faulty item
BRD                11                 System board
                   21, 22, or 23      Front drive backplane
                   32                 Rear 2SFF drive backplane
                   71                 Mezzanine storage controller power
                   81                 Reserved
                   91                 mLOM Ethernet adapter
NOTE: If the error code field alternates between 11 and another code, first replace the faulty item indicated by the other code. If the issue persists, replace the system board.
CPU (processor)    01                 Processor 1
                   02                 Processor 2
DIMM               A1 through A8      DIMMs in slots A1 through A8
                   b1 through b8      DIMMs in slots B1 through B8
HDD                00 through 03      Relevant front drive (4LFF server)
                   00 through 07      Relevant front drive (8SFF server)
                   00 through 09      Relevant front drive (10SFF server)
                   00 or 01           Relevant rear drive
PCIE               01 or 02           PCIe module in PCIe slot 1 or 2 of the riser card
PSU                01                 Power supply 1
                   02                 Power supply 2
RAID               04                 Mezzanine storage controller status
FAN                01 through 07      Fan 1 through Fan 7
VRD                01                 System board P5V voltage
                   02                 System board P1V05 PCH voltage
                   03                 System board PVCC HPMOS voltage
                   04                 System board P3V3 voltage
                   05                 System board P1V8 PCH voltage
                   06                 System board PVCCIO processor 1 voltage
                   07                 System board PVCCIN processor 1 voltage
                   08                 System board PVCCIO processor 2 voltage
                   09                 System board PVCCIN processor 2 voltage
                   10                 System board VPP processor 1 ABC voltage
                   11                 System board VPP processor 1 DEF voltage
                   12                 System board VDDQ processor 1 ABC voltage
                   13                 System board VDDQ processor 1 DEF voltage
                   14                 System board VTT processor 1 ABC voltage
                   15                 System board VTT processor 1 DEF voltage
                   16                 System board VPP processor 2 ABC voltage
                   17                 System board VPP processor 2 DEF voltage
                   18                 System board VDDQ processor 2 ABC voltage
                   19                 System board VDDQ processor 2 DEF voltage
                   20                 System board VTT processor 2 ABC voltage
                   21                 System board VTT processor 2 DEF voltage
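The LED/error-code matrix above is in effect a lookup table. The following sketch (illustrative only, not an H3C tool; the function and dictionary names are hypothetical) encodes a few rows of Table 11-12 to show how a monitoring script might map a panel reading to a faulty item:

```python
# A few rows of the LED/error-code matrix from Table 11-12 (illustrative subset).
FAULT_MATRIX = {
    ("BRD", "11"): "System board",
    ("BRD", "32"): "Rear 2SFF drive backplane",
    ("CPU", "01"): "Processor 1",
    ("PSU", "02"): "Power supply 2",
    ("FAN", "03"): "Fan 3",
}

def faulty_item(led, code):
    """Return the faulty item for an LED name and error code, if known."""
    return FAULT_MATRIX.get((led, code), "Unknown - check the HDM event log")

print(faulty_item("CPU", "01"))  # Processor 1
```

In practice the full matrix, including the BRD code-11 alternation rule, would be needed; the HDM event log remains the authoritative record of component status changes.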

 

Fiber transceiver modules

Model                     Central wavelength    Connector    Max transmission distance
SFP-XG-SX-MM850-A1-X      850 nm                LC           300 m (984.25 ft)
SFP-XG-SX-MM850-E1-X      850 nm                LC           300 m (984.25 ft)
SFP-25G-SR-MM850-1-X      850 nm                LC           100 m (328.08 ft)

 

Storage options other than HDDs and SSDs

Model

Specifications

SD-32G-Micro-A

32 GB microSD mainstream flash media kit module

SD-32G-Micro-1

32 GB microSD mainstream flash media kit module

USB-32G-A

32 GB USB 3.0 storage disk module

DVD-RW-Mobile-USB-A

Removable USB DVD-RW optical drive

NOTE:

The optical drive can be connected only to a USB 3.0 port.

DVD-RW-SATA-9.5MM-A

9.5 mm SATA DVD-RW optical drive

DVD-ROM-SATA-9.5MM-A

9.5 mm SATA DVD-ROM optical drive

 

NVMe VROC modules

Model              RAID levels     Compatible NVMe SSDs
NVMe-VROC-Key-S    0, 1, 10        All NVMe SSDs
NVMe-VROC-Key-P    0, 1, 5, 10     All NVMe SSDs
NVMe-VROC-Key-i    0, 1, 5, 10     Intel NVMe SSDs

 

TPM/TCM modules

Trusted platform module (TPM) is a microchip embedded in the system board. It stores encryption information (such as encryption keys) for authenticating server hardware and software. The TPM operates with drive encryption programs such as Microsoft Windows BitLocker to provide operating system security and data protection. For information about Microsoft Windows BitLocker, visit the Microsoft website at http://www.microsoft.com.

Trusted cryptography module (TCM) is a trusted computing platform-based hardware module with protected storage space, which enables the platform to implement password calculation.

Table 11-13 describes the TPM and TCM modules supported by the server.

Table 11-13 TPM/TCM specifications

Model

Specifications

TPM-2-X

Trusted Platform Module 2.0

TCM-1-X

Trusted Cryptography Module 1.0

 

Security bezels, slide rail kits, and cable management brackets

Model

Description

SEC-Panel-1U-X

1U security bezel

SL-1U-FR

1U standard rail

SL-1U-BB

1U ball bearing rail

CMA-1U-A

1U cable management bracket

 


12 Appendix C  Managed hot removal of NVMe drives

Managed hot removal of NVMe drives enables you to remove NVMe drives safely while the server is operating.

Use Table 12-1 to determine the managed hot removal method depending on the VMD status and the operating system. For more information about VMD, see the BIOS user guide for the server.

Table 12-1 Managed hot removal methods

VMD status    Operating system    Managed hot removal method
Auto          Windows             Performing a managed hot removal in Windows
              Linux               Performing a managed hot removal in Linux
Disabled      N/A                 Contact H3C Support

 

Performing a managed hot removal in Windows

Prerequisites

Install Intel® Rapid Storage Technology enterprise (Intel® RSTe).

To obtain Intel® RSTe, use one of the following methods:

·          Go to https://platformsw.intel.com/KitSearch.aspx to download the software.

·          Contact Intel Support.

Procedure

1.        Stop reading data from or writing data to the NVMe drive to be removed.

2.        Identify the location of the NVMe drive. For more information, see "Drive configurations and numbering."

3.        Run Intel® RSTe.

4.        Unmount the NVMe drive from the operating system, as shown in Figure 12-1:

¡  Select the NVMe drive to be removed.

¡  Click Activate LED to turn on the Fault/UID LED on the drive.

¡  Click Remove Disk.

Figure 12-1 Removing an NVMe drive

 

5.        Observe the Fault/UID LED on the drive. If the Fault/UID LED turns steady blue and the drive is removed from the Devices list, remove the drive from the server.

For more information about the removal procedure, see "Replacing an NVMe drive."

Performing a managed hot removal in Linux

In Linux, you can perform a managed hot removal of NVMe drives from the CLI or by using Intel® Accelerated Storage Manager.

Prerequisites

·          Verify that your operating system is a non-SLES Linux operating system. SLES operating systems do not support managed hot removal of NVMe drives.

·          To perform a managed hot removal by using Intel® ASM, install Intel® ASM.

To obtain Intel® ASM, use one of the following methods:

¡  Go to https://platformsw.intel.com/KitSearch.aspx to download the software.

¡  Contact Intel Support.

Performing a managed hot removal from the CLI

1.        Stop reading data from or writing data to the NVMe drive to be removed.

2.        Identify the location of the NVMe drive. For more information, see "Drive configurations and numbering."

3.        Access the CLI of the server.

4.        Execute the lsblk | grep nvme command to identify the drive letter of the NVMe drive, as shown in Figure 12-2.

Figure 12-2 Identifying the drive letter of the NVMe drive to be removed

 

5.        Execute the ledctl locate=/dev/drive_letter command to turn on the Fault/UID LED on the drive. The drive_letter argument represents the drive letter, for example, nvme0n1.

6.        Execute the echo 1 > /sys/block/drive_letter/device/device/remove command to unmount the drive from the operating system. The drive_letter argument represents the drive letter, for example, nvme0n1.

7.        Observe the Fault/UID LED on the drive. If the Fault/UID LED turns steady blue, remove the drive from the server.

For more information about the removal procedure, see "Replacing an NVMe drive."
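The two commands in steps 5 and 6 can be generated for any drive letter. The following sketch (illustrative only; the function name is hypothetical) builds the exact command strings from the procedure above so they can be reviewed before being run in a root shell on the server:

```python
# Build the CLI commands for a managed hot removal of one NVMe drive,
# following the procedure above. The drive letter (for example, nvme0n1)
# must first be identified with "lsblk | grep nvme".
def managed_removal_commands(drive_letter):
    """Return the locate-LED and unmount commands for an NVMe drive letter."""
    return [
        # Step 5: turn on the Fault/UID LED so the drive can be located.
        f"ledctl locate=/dev/{drive_letter}",
        # Step 6: unmount (detach) the drive from the operating system.
        f"echo 1 > /sys/block/{drive_letter}/device/device/remove",
    ]

for cmd in managed_removal_commands("nvme0n1"):
    print(cmd)
```

Running the printed commands requires root privileges and the ledctl utility (from the ledmon package) to be installed.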

Performing a managed hot removal from the Intel® ASM Web interface

1.        Stop reading data from or writing data to the NVMe drive to be removed.

2.        Identify the location of the NVMe drive. For more information, see "Drive configurations and numbering."

3.        Run Intel® ASM.

4.        Click RSTe Management.

Figure 12-3 Accessing RSTe Management

 

5.        Expand the Intel(R) VROC(in pass-thru mode) menu to view operating NVMe drives, as shown in Figure 12-4.

Figure 12-4 Viewing operating NVMe drives

 

6.        Click the light bulb icon to turn on the Fault/UID LED on the drive, as shown in Figure 12-5.

Figure 12-5 Turning on the drive Fault/UID LED

 

7.        Click the removal icon, as shown in Figure 12-6.

Figure 12-6 Removing an NVMe drive

 

8.        In the confirmation dialog box that opens, click Yes.

Figure 12-7 Confirming the removal

 

9.        Remove the drive from the server. For more information about the removal procedure, see "Replacing an NVMe drive."


13 Appendix D  Environment requirements

About environment requirements

The operating temperature requirements for the server vary depending on the server model and hardware configuration. When the general and component-based requirements conflict, use the component-based requirement.

Be aware that the actual maximum operating temperature of the server might be lower than what is stated because of poor site cooling performance. In a real data center, server cooling performance might decrease because of adverse external factors, including poor cabinet cooling performance, high power density inside the cabinet, or insufficient spacing between devices.

General environment requirements

Item

Specifications

Operating temperature

Minimum: 5°C (41°F)

Maximum: Varies depending on the power consumed by the processors and presence of expansion modules. For more information, see "Operating temperature requirements."

Storage temperature

–40°C to +70°C (–40°F to +158°F)

Operating humidity

8% to 90%, noncondensing

Storage humidity

5% to 90%, noncondensing

Operating altitude

–60 m to +3000 m (–196.85 ft to +9842.52 ft)

The allowed maximum temperature decreases by 0.33°C (0.59°F) for each 100 m (328.08 ft) increase in altitude above 900 m (2952.76 ft)

Storage altitude

–60 m to +5000 m (–196.85 ft to +16404.20 ft)
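The altitude derating rule in the table above (a fixed decrease per 100 m above 900 m) can be expressed as a short calculation. This sketch is illustrative only; the function name is hypothetical:

```python
# Apply the altitude derating rule: the allowed maximum operating temperature
# decreases by 0.33 degC for each 100 m of altitude above 900 m.
def derated_max_temp(base_max_c, altitude_m):
    """Return the derated maximum operating temperature in degC."""
    if altitude_m <= 900:
        return base_max_c
    return base_max_c - 0.33 * (altitude_m - 900) / 100

# A server rated for 45 degC at or below 900 m, deployed at 3000 m:
print(round(derated_max_temp(45, 3000), 2))  # 38.07
```

The result must still be read together with the component-based limits in "Operating temperature requirements," which take precedence when they conflict with the general rule.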

 

Operating temperature requirements

Guidelines

If a fan fails or is absent, performance of the following components might degrade:

·          DCPMMs.

·          GPU modules.

4LFF server with any drive configuration

Use Table 13-1 to determine the maximum operating temperature of the 4LFF server that uses any drive configuration. A maximum server operating temperature applies if the server contains any options in its matching hardware option list.

If a single fan fails, the maximum server operating temperature drops by 5°C (9°F) and cannot exceed 35°C (95°F).

 

 

NOTE:

All maximum server operating temperature values are provided on the basis that the fans are installed as needed and operating correctly. For more information about fan configurations, see the guidelines in "Installing fans."

 

Table 13-1 Temperature requirements for the 4LFF server with any drive configuration

Maximum server operating temperature

Processor configuration

Hardware option configuration

30°C (86°F)

125 W

Rear drives.

125 W to 165 W (exclusive)

GPU modules:

·         GPU-P4-X.

·         GPU-M4-1.

·         GPU-T4.

·         GPU-MLU100-D3.

35°C (95°F)

Any

·         NVMe SSD PCIe accelerator modules.

·         DCPMMs.

Lower than 125 W

·         Rear drives.

·         GPU modules:

¡  GPU-P4-X.

¡  GPU-M4-1.

¡  GPU-T4.

¡  GPU-MLU100-D3.

40°C (104°F)

Any

·         Rear Ethernet adapter installed with transceiver modules.

·         GPU module GPU-M4000-1-X or GPU-M2000.

·         Ethernet adapters:

¡  IB-MCX453A-FCAT-56/40Gb-1P.

¡  IB-MCX453A-FCAT-56/40Gb-1P-1.

Higher than 125 W

None of the above hardware options or operating conditions exists and seven operating fans are present.

45°C (113°F)

125 W or lower

None of the above hardware options or operating conditions exists and seven operating fans are present.
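The single-fan-failure rule stated above (the maximum operating temperature drops by 5°C and is capped at 35°C) can be sketched as a one-line calculation. Illustrative only; the function name is hypothetical:

```python
# Single-fan-failure derating: drop the normal maximum operating temperature
# by 5 degC and cap the result at 35 degC.
def max_temp_single_fan_failure(normal_max_c):
    """Return the maximum operating temperature (degC) with one failed fan."""
    return min(normal_max_c - 5, 35)

print(max_temp_single_fan_failure(45))  # 35  (capped)
print(max_temp_single_fan_failure(30))  # 25
```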

 

8SFF server with an 8SFF drive configuration

Use Table 13-2 to determine the maximum operating temperature of the 8SFF server with an 8SFF drive configuration. A maximum server operating temperature applies if the server contains any options in its matching hardware option list.

If a single fan fails, the maximum server operating temperature drops by 5°C (9°F) and cannot exceed 35°C (95°F).

 

 

NOTE:

All maximum server operating temperature values are provided on the basis that the fans are installed as needed and operating correctly. For more information about fan configurations, see the guidelines in "Installing fans."

 

Table 13-2 Temperature requirements for the 8SFF server with an 8SFF drive configuration

Maximum server operating temperature

Processor configuration

Hardware option configuration

30°C (86°F)

125 W to 165 W (exclusive)

GPU modules:

·         GPU-P4-X.

·         GPU-M4-1.

·         GPU-T4.

·         GPU-MLU100-D3.

35°C (95°F)

Any

·         NVMe SSD PCIe accelerator module.

·         Samsung NVMe drives.

·         DCPMMs.

Lower than 125 W

GPU modules:

·         GPU-P4-X.

·         GPU-M4-1.

·         GPU-T4.

·         GPU-MLU100-D3.

40°C (104°F)

Any

·         Rear Ethernet adapter installed with a transceiver module.

·         NVMe drives, excluding Samsung NVMe drives.

·         GPU module GPU-M4000-1-X or GPU-M2000.

·         Ethernet adapters:

¡  IB-MCX453A-FCAT-56/40Gb-1P.

¡  IB-MCX453A-FCAT-56/40Gb-1P-1.

Higher than 125 W

None of the above hardware options or operating conditions exists and seven operating fans are present.

45°C (113°F)

125 W or lower

None of the above hardware options or operating conditions exists and seven operating fans are present.

 

8SFF server with a 10SFF drive configuration

Use Table 13-3 to determine the maximum operating temperature of the 8SFF server with a 10SFF drive configuration. A maximum server operating temperature applies if the server contains any options in its matching hardware option list.

If a single fan fails, the maximum server operating temperature drops by 5°C (9°F) and cannot exceed 35°C (95°F).

 

 

NOTE:

All maximum server operating temperature values are provided on the basis that the fans are installed as needed and operating correctly. For more information about fan configurations, see the guidelines in "Installing fans."

 

Table 13-3 Temperature requirements for the 8SFF server with a 10SFF drive configuration

Maximum server operating temperature

Processor configuration

Hardware option configuration

30°C (86°F)

125 W to 165 W (exclusive)

GPU modules:

·         GPU-P4-X.

·         GPU-M4-1.

·         GPU-T4.

·         GPU-MLU100-D3.

35°C (95°F)

Lower than 125 W

·         NVMe SSD PCIe accelerator module.

·         DCPMMs.

·         Samsung NVMe drives.

·         GPU modules:

¡  GPU-P4-X.

¡  GPU-M4-1.

¡  GPU-T4.

¡  GPU-MLU100-D3.

40°C (104°F)

Any

None of the above hardware options or operating conditions exists and seven operating fans are present.

 

10SFF server with any drive configuration

Use Table 13-4 to determine the maximum operating temperature of the 10SFF server. A maximum server operating temperature applies if the server contains any options in its matching hardware option list.

If a single fan fails, the maximum server operating temperature drops by 5°C (9°F) and cannot exceed 35°C (95°F).

 

 

NOTE:

All maximum server operating temperature values are provided on the basis that the fans are installed as needed and operating correctly. For more information about fan configurations, see the guidelines in "Installing fans."

 

Table 13-4 Temperature requirements for the 10SFF server with any drive configuration

Maximum server operating temperature

Processor configuration

Hardware option configuration

30°C (86°F)

125 W

Rear drives.

125 W to 165 W (exclusive)

GPU modules:

·         GPU-P4-X.

·         GPU-M4-1.

·         GPU-T4.

·         GPU-MLU100-D3.

35°C (95°F)

Lower than 125 W

·         NVMe SSD PCIe accelerator module.

·         Rear drives.

·         DCPMMs.

·         Samsung NVMe drives.

·         GPU modules:

¡  GPU-P4-X.

¡  GPU-M4-1.

¡  GPU-T4.

¡  GPU-MLU100-D3.

40°C (104°F)

Any

None of the above hardware options or operating conditions exists.

 


14 Appendix E  Product recycling

New H3C Technologies Co., Ltd. provides product recycling services for its customers to ensure that hardware at the end of its life is recycled. Vendors with product recycling qualification are contracted to New H3C to process the recycled hardware in an environmentally responsible way.

For product recycling services, contact New H3C at

·          Tel: 400-810-0504

·          E-mail: service@h3c.com

·          Website: http://www.h3c.com


15 Appendix F  Glossary

Item

Description

B

BIOS

Basic input/output system is non-volatile firmware pre-installed in a ROM chip on a server's system board. The BIOS stores basic input/output, power-on self-test, and auto startup programs to provide the most basic hardware initialization, setup and control functionality.

C

CPLD

Complex programmable logic device is an integrated circuit used to build reconfigurable digital circuits.

E

Ethernet adapter

An Ethernet adapter, also called a network interface card (NIC), connects the server to the network.

F

FIST

Fast Intelligent Scalable Toolkit, provided by H3C for easy and extensible server management. It guides users through quick server configuration and provides an API for users to develop their own management tools.

Front media module

A module installed at the server front to provide one VGA port and two USB 2.0 ports.

G

 

GPU module

Graphics processing unit module converts digital signals to analog signals for output to a display device and assists processors with image processing to improve overall system performance.

H

HDM

H3C Device Management is the server management control unit with which administrators can configure server settings, view component information, monitor server health status, and remotely manage the server.

Hot swapping

A module that supports hot swapping (a hot-swappable module) can be installed or removed while the server is running without affecting the system operation.

K

KVM

KVM is an abbreviation for keyboard, video, and mouse. KVM is a management method that allows remote users to use their local video display, keyboard, and mouse to monitor and control the server.

N

NVMe SSD expander module

An expander module that facilitates communication between the system board and the front NVMe hard drives. The module is required if a front NVMe hard drive is installed.

NVMe VROC module

A module that works with VMD to provide RAID capability for the server to virtualize storage resources of NVMe drives.

R

RAID

Redundant array of independent disks (RAID) is a data storage virtualization technology that combines multiple physical hard drives into a single logical unit to improve storage performance and data redundancy.

Redundancy

A mechanism that ensures high availability and business continuity by providing backup modules. In redundancy mode, a backup or standby module takes over when the primary module fails.

S

Security bezel

A locking bezel mounted to the front of a server to prevent unauthorized access to modules such as hard drives.

T

Temperature sensor

A temperature sensor detects changes in temperature at the location where it is installed and reports the temperature data to the server system.

U

U

A unit of measure defined as 44.45 mm (1.75 in) in IEC 60297-1. It is used as a measurement of the overall height of racks, as well as equipment mounted in the racks.

V

VMD

VMD provides hot removal, management, and fault-tolerance functions for NVMe drives to increase availability, reliability, and serviceability.

 


16 Appendix G  Acronyms

Acronym

Full name

B

BIOS

Basic Input/Output System

C

CMA

Cable Management Arm

CPLD

Complex Programmable Logic Device

D

DCPMM

Data Center Persistent Memory Module

DDR

Double Data Rate

DIMM

Dual In-Line Memory Module

DRAM

Dynamic Random Access Memory

F

FIST

Fast Intelligent Scalable Toolkit

G

GPU

Graphics Processing Unit

H

HBA

Host Bus Adapter

HDD

Hard Disk Drive

HDM

H3C Device Management

I

IDC

Internet Data Center

K

KVM

Keyboard, Video, Mouse

L

LFF

Large Form Factor

LRDIMM

Load Reduced Dual Inline Memory Module

M

mLOM

Modular LAN-on-Motherboard

N

NCSI

Network Controller Sideband Interface

NVMe

Non-Volatile Memory Express

P

PCIe

Peripheral Component Interconnect Express

PDU

Power Distribution Unit

POST

Power-On Self-Test

R

RAID

Redundant Array of Independent Disks

RDIMM

Registered Dual Inline Memory Module

S

SAS

Serial Attached Small Computer System Interface

SATA

Serial ATA

SD

Secure Digital

SDS

Secure Diagnosis System

SFF

Small Form Factor

SSD

Solid State Drive

T

TCM

Trusted Cryptography Module

TDP

Thermal Design Power

TPM

Trusted Platform Module

U

UID

Unit Identification

UPI

Ultra Path Interconnect

UPS

Uninterruptible Power Supply

USB

Universal Serial Bus

V

VROC

Virtual RAID on CPU

VMD

Volume Management Device

 

 
