H3C UniServer R4950 G7 Server User Guide-6W100


Contents

Safety information
General operating safety
Electrical safety
Battery safety
Rack mounting recommendations
ESD prevention
Preventing electrostatic discharge
Grounding methods to prevent electrostatic discharge
Safety sign conventions
Identifying the server
Server models and chassis view
Server specifications
Technical specifications
Physical specifications
Components
Front panel
Front panel components
LEDs and buttons
Ports
Rear panel
Rear panel components
Rear panel LEDs
Ports
System board
System board layout
System maintenance switch
DIMM slots
HDDs and SSDs
Drive numbering
Drive LEDs
Drive backplanes
Front 8SFF SAS/SATA drive backplane (PCA-BP-8SFF-2U-G6)
Front 8SFF UniBay drive backplane (PCA-BP-8UniBay-A-2U-G6)
Front 8LFF SAS/SATA drive backplane (PCA-BP-8LFF-2U-G6)
Front 8LFF UniBay drive backplane (BP-8LFF-UniBay-A-2U-G6)
Front 12LFF SAS/SATA drive backplane (PCA-BP-12LFF-2U-G7)
Front 12LFF UniBay drive backplane (PCA-BP-12LFF-UniBay-2U-G7)
Front 12LFF drive backplane (8SAS/SATA+4UniBay, PCA-BP-12LFF-4NVMe-2U-G7)
Front 12LFF drive backplane (4SAS/SATA+8UniBay, BP-12LFF-EXP-A-2U-G6)
Rear 2SFF UniBay drive backplane (HDDCage-2UniBay-A-2U-G6)
Front 25SFF drive backplane (BP-25SFF-A-2U-G6)
Rear 2LFF SAS/SATA drive backplane (HDDCage-2LFF-2U-G6)
Rear 4SFF UniBay drive backplane (HDDCage-4UniBay-A-2U-G6)
Rear 4LFF SAS/SATA drive backplane (HDDCage-4LFF-2U-G6)
Riser cards
RC-1FHHL-2U-G6
RC-2FHHL-2U-G6
RC-3FHFL-2U-G6-1
RC-3FHHL-2U-G6
PCA-R4900-4GPU-G6
Riser 3 assembly module (supporting two HHHL modules)
Riser 4 assembly module (supporting two HHHL modules)
Riser 4 assembly module (supporting one FHFL module)
Riser 4 assembly module (supporting two FHFL modules)
Fan
LCD smart management module
Server B/D/F information
Component installation guidelines
Processors
Memory
SAS/SATA drives
NVMe drives
M.2 SSDs
Server management module
Riser cards and PCIe modules
Storage controllers and power fail safeguard modules
Network adapters
GPUs
Power supplies
Fans
Installing or removing the server
Installation flowchart
Preparing for the installation
Rack requirements
Airflow direction of the server
Temperature and humidity requirements
Equipment room height requirements
Corrosive gas concentration requirements
Cleanliness requirements
Grounding requirements
Storage requirements
Installation tools
Installing or removing the server
(Optional) Installing rails
Rack-mounting the server
(Optional) Installing cable management brackets
Connecting external cables
Connecting a mouse, keyboard, and monitor
Connecting an Ethernet cable
Connecting a power cord
Securing cables
Cabling guidelines
Removing the server from a rack
Powering on and powering off the server
Powering on the server
Prerequisites
Procedure
Powering off the server
Prerequisites
Procedure
Configuring the server
Configuration flowchart
Powering on the server
Configuring basic BIOS settings
Setting the server boot order
Setting the BIOS passwords
Configuring the RAID
Installing the operating system and hardware drivers
Installing an operating system
Installing hardware drivers
Updating firmware
Replacing hardware options
Replacing a processor
Prerequisites
Procedure
Replacing a DIMM
Prerequisites
Procedure
Verifying the installation
Replacing the system board
Prerequisites
Procedure
Replacing the server management module
Prerequisites
Procedure
Replacing a SAS/SATA drive
Prerequisites
Procedure
Verifying the installation
Adding an NVMe drive
Prerequisites
Procedure
Verifying the installation
Replacing an NVMe drive
Prerequisites
Procedure
Verifying the installation
Replacing a drive backplane
Prerequisites
Procedure
Installing a rear drive cage
Prerequisites
Procedure
Replacing riser cards and PCIe modules
Prerequisites
Procedure
Installing PCIe modules and a riser card in PCIe riser bay 3
Prerequisites
Procedure
Installing PCIe modules and a riser card in PCIe riser bay 4
Prerequisites
Procedure
Replacing a storage controller and a power fail safeguard module
Prerequisites
Procedure
Replacing a GPU module
Prerequisites
Procedure
Replacing a standard PCIe network adapter
Prerequisites
Procedure
Installing OCP network adapter 1
Prerequisites
Procedure
Installing OCP network adapter 2
Prerequisites
Procedure
Replacing the OCP network adapter
Prerequisites
Procedure
Replacing a SATA M.2 SSD and the front M.2 SSD expander module
Prerequisites
Procedure
Replacing a chassis ear
Procedure
Replacing the air baffle
Procedure
Installing the LCD smart management module
Prerequisites
Procedure
Replacing the LCD smart management module
Prerequisites
Procedure
Replacing a fan module
Installing and setting up a TCM or TPM
Installation and setup flowchart
Prerequisites
Installing the TPM or TCM module
Enabling the TCM or TPM in the BIOS
Configuring encryption in the operating system
Replacing a power supply
Prerequisites
Procedure
Replacing the system battery
Prerequisites
Procedure
Replacing a rear 4GPU module
Prerequisites
Procedure
Installing a GPU module on the rear 4GPU module
Prerequisites
Procedure
Installing or removing filler panels
Prerequisites
Procedure
Connecting internal cables
Internal cabling guidelines
Restrictions and guidelines
Connecting drive cables
Front 12LFF (8SAS/SATA+4UniBay)
Front 12LFF (4SAS/SATA+8UniBay, LSI Expander backplane)+rear 4SFF UniBay
Front 8SFF UniBay+8SFF UniBay+8SFF UniBay
Front 25SFF drives (17SAS/SATA+8UniBay)
Connecting the LCD smart management module cable
Connecting cables for the front M.2 SSD expander module
Connecting SATA data cables for the front M.2 SSD expander module
Connecting NVMe data cables for the front M.2 SSD expander module
Connecting cables for OCP 3.0 network adapter 1
Connecting cables for OCP 3.0 network adapter 2
Connecting cables for riser cards
Connecting the supercapacitor cable
Connecting cables for the rear 4GPU module
Connecting cables for the chassis ears
Maintenance
Guidelines
Maintenance tools
Maintenance operations
Maintenance tasks
Checking the server LEDs
Monitoring the temperature and humidity of the equipment room
Inspecting the cables
Viewing server status
Collecting server logs
Updating the server firmware
Troubleshooting

 


Safety information

General operating safety

·     Only H3C authorized or professional server engineers are allowed to install, service, repair, operate, or upgrade the server.

·     Place the server on a clean, stable table or floor for servicing.

·     Make sure all cables are correctly connected before you power on the server.

·     To ensure good ventilation and proper airflow, follow these guidelines:

¡     Do not block the ventilation openings in the server chassis.

¡     Install blanks if the following module slots are empty:

-     Drive bays.

-     Fan bays.

-     PCIe slots.

-     Power supply slots.

¡     To avoid thermal damage to the server, do not operate the server for long periods in any of the following conditions:

-     Access panel open or uninstalled.

-     Air baffles uninstalled.

-     PCIe slots, drive bays, fan bays, or power supply slots empty.

¡     Minimize the amount of time the access panel is open when you maintain hot-pluggable components.

·     To avoid being burnt, allow the server and its internal modules to cool before touching them.

·     When you stack the server and other devices vertically in a cabinet, leave a minimum vertical gap of 2 mm (0.08 in) between two devices.

Electrical safety

WARNING!

If you put the server in standby mode (system power LED in amber) with the power on/standby button on the front panel, the power supplies continue to supply power to some circuits in the server. To remove all power for servicing safety, you must first press the button, wait for the system to enter standby mode, and then remove the power cords from the server.

 

·     To avoid bodily injury or damage to the server, always use the power cords that came with the server.

·     Do not use the power cords that came with the server for any other devices.

·     Power off the server when installing or removing any components that are not hot swappable.

Battery safety

The server's system board contains a system battery, which is designed with a lifespan of 3 to 5 years.

If the server no longer automatically displays the correct date and time, replace the battery. When you replace the battery, follow these safety guidelines:

·     Do not attempt to recharge the battery.

·     Do not expose the battery to a temperature higher than 60°C (140°F).

·     Do not disassemble, crush, puncture, short external contacts, or dispose of the battery in fire or water.

·     Dispose of the battery at a designated facility. Do not throw the battery away together with other wastes.

Rack mounting recommendations

 

NOTE:

To protect the server from unstable power or power outages, use uninterruptible power supplies (UPSs) to provide power for the server.

 

To avoid bodily injury or damage to the equipment, follow these guidelines when you rack mount a server:

·     Mount the server in a standard 19-inch rack.

·     Make sure the leveling jacks are extended to the floor and the full weight of the rack rests on the leveling jacks.

·     Couple the racks together in multi-rack installations.

·     Load the rack from the bottom to the top, with the heaviest hardware unit at the bottom of the rack.

·     Get help to lift and stabilize the server during installation or removal, especially when the server is not fastened to the rails. As a best practice, a minimum of two people are required to safely load or unload a rack. A third person might be required to help align the server if the server is installed higher than chest level.

·     For rack stability, make sure only one unit is extended at a time. A rack might get unstable if more than one server unit is extended.

·     Make sure the rack is stable when you operate a server in the rack.

·     To maintain correct airflow and avoid thermal damage to the server, use blank panels to fill empty rack units.

ESD prevention

Preventing electrostatic discharge

Electrostatic charges that build up on people and tools might damage or shorten the lifespan of the system board and electrostatic-sensitive components.

To prevent electrostatic damage, follow these guidelines:

·     Transport or store the server with the components in antistatic bags.

·     Keep the electrostatic-sensitive components in separate antistatic bags until they arrive at an ESD-protected area.

·     Place the components on a grounded surface before removing them from their antistatic bags.

·     Avoid touching pins, leads, or circuitry.

Grounding methods to prevent electrostatic discharge

The following are grounding methods that you can use to prevent electrostatic discharge:

·     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded. Keep the wristband close to the skin and make sure it can flexibly stretch.

·     Take adequate personal grounding measures, including wearing antistatic clothing and static dissipative shoes.

·     Use conductive field service tools.

·     Use a portable field service kit with a folding static-dissipating work mat.

Safety sign conventions

To avoid bodily injury or damage to the server or its components, make sure you are familiar with the safety signs on the server chassis or its components.

Table 1 Safety signs

·     Circuit or electricity hazards are present. Only H3C authorized or professional server engineers are allowed to service, repair, or upgrade the server.

WARNING! To avoid bodily injury or damage to circuits, do not open any components marked with the electrical hazard sign unless you have authorization to do so.

·     Electrical hazards are present. Field servicing or repair is not allowed.

WARNING! To avoid bodily injury, do not open any components with the field-servicing forbidden sign in any circumstances.

·     The RJ-45 ports on the server can be used only for Ethernet connections.

WARNING! To avoid electrical shocks, fire, or damage to the equipment, do not connect an RJ-45 port to a telephone.

·     The surface or component might be hot and present burn hazards.

WARNING! To avoid being burnt, allow hot surfaces or components to cool before touching them.

·     The server or component is heavy and requires more than one person to carry or move.

WARNING! To avoid bodily injury or damage to hardware, do not move a heavy component alone. In addition, observe local occupational health and safety requirements and guidelines for manual material handling.

·     The server is powered by multiple power supplies.

WARNING! To avoid bodily injury from electrical shocks, make sure you disconnect all power supplies if you are performing offline servicing.

 


Identifying the server

 

NOTE:

·     The information in this document might differ from your product if it contains custom configuration options or features.

·     The model name of a hardware option in this document might differ slightly from its model name label. A model name label might add a prefix or suffix to the hardware-coded model name for purposes such as identifying the matching server brand or applicable region. For example, the DDR5-4800-32G-1Rx4 memory model represents memory module labels including UN-DDR5-4800-32G-1Rx4-R, UN-DDR5-4800-32G-1Rx4-F, and UN-DDR5-4800-32G-1Rx4-S, which have different suffixes.

·     The figures in this document are for illustration only.

 

Server models and chassis view

The H3C UniServer R4950 G7 (hereinafter referred to as the R4950 G7) is a 2U rack server independently developed by H3C. It is equipped with new-generation AMD Turin processors and supports a dual-processor configuration. The server is designed for HPC, cloud computing, distributed storage, and video storage, and is suitable for enterprise basic operations and telecommunication applications. The R4950 G7 features high computing performance, large storage capacity, high scalability, and high reliability, and is easy to manage and deploy.

Figure 1 Chassis view

 

The server comes in the models listed in Table 2.

Table 2 R4950 G7 server models

·     LFF model—Maximum drive configuration: 12LFF drives at the front + (2LFF+4SFF) or (4LFF+2SFF) drives at the rear.

·     SFF model—Maximum drive configuration: 25SFF drives at the front + 4SFF drives at the rear.

 

Server specifications

Technical specifications

Item

Description

Processor

·     Up to two AMD Zen5 processors

¡     Up to 500W power consumption per processor

¡     Up to 512MB cache per processor

¡     Integrated memory controller that supports 12 memory channels per processor

¡     Integrated PCIe controller that supports PCIe5.0 and provides 64 external PCIe lanes per processor (a total of 128 PCIe lanes with two processors installed)

¡     4-link XGMI bus interconnect, with each link supporting a transfer rate of up to 32 Gb/s

¡     Up to 4.2GHz base frequency

·     For more information, use the Compatibility Matrix by Server Model tool at https://iconfig-chl.h3c.com/iconfig/OSHostIndex?language=en

Memory

Supports up to 24 DDR5 memory modules (12 DIMM slots per processor) at speeds up to 6400 MT/s, for a maximum capacity of 3 TB in a dual-processor configuration

Storage controllers

·     Embedded SATA controller

·     High-performance storage controller

Integrated graphics card

Graphics chip integrated in the AST2600 BMC management chip.

Supported resolution: 1920 × 1200 @ 60 Hz (32bpp), where:

·     Resolution:

¡     1920 × 1200: 1920 horizontal pixels and 1200 vertical pixels.

¡     60 Hz: Screen refresh rate of 60 times per second.

¡     32bpp: Color depth. The higher the color depth, the more colors that can be displayed.

·     The integrated graphics card can support the maximum resolution of 1920 × 1200 only after a graphics driver compatible with the operating system version is installed. If the installed driver is incompatible, the server supports only the default resolution of the operating system.

·     When both the front and rear VGA connectors are used, only the monitor connected to the front VGA connector is available.

Network connectors

·     1 × 1Gb/s onboard HDM dedicated network port

·     Up to two OCP 3.0 network adapter slots (OCP network adapter slot 1 supports multihost)

I/O connectors

·     Up to 5 × USB 3.0 connectors

¡     One built-in connector and two at the rear.

¡     One on the right chassis ear and one on the left chassis ear, available only when the multifunctional rack mount kit is used.

·     Up to 32 direct SATA outputs, presented externally as four MCIO connectors (multiplexed with PCIe 5.0 x8)

·     13 × built-in MCIO connectors (12 × PCIe5.0 x8 connectors and 1 × PCIe3.0 x4 connector)

·     1 × RJ-45 HDM dedicated network port (on the rear panel)

·     2 × VGA connectors (one on the rear panel and one on the left chassis ear only when the multifunctional rack mount kit is used)

·     1 × HDM dedicated management port (on the front panel, available only when the multifunctional rack mount kit is used)

Expansion slots

Supports up to 10 PCIe 5.0 standard slots and two dedicated slots for OCP 3.0 network adapters

Power supply

2 × hot-swappable power supplies with 1+1 redundancy

 

Physical specifications

Category

Item

Description

Physical specifications

Dimensions (H × W × D)

·     Without security bezel: 87.5 × 445.4 × 800.0 mm (3.44 × 17.54 × 31.50 in)

·     With security bezel: 87.5 × 445.4 × 828.0 mm (3.44 × 17.54 × 32.60 in)

Maximum weight (fully equipped)

41 kg (90.39 lb)

Power consumption

The power consumption varies by configuration. For more information, see the Server Power Consumption Evaluation tool at https://iconfig-chl.h3c.com/iconfig/PowerCalIndex?language=en

Environment specifications

Temperature

Operating temperature: 5°C to 40°C (41°F to 104°F)

NOTE:

The maximum temperature varies by hardware option presence. For more information, see Appendix A.

Storage temperature: –40°C to +70°C (–40°F to +158°F)

Humidity

·     Operating humidity: 8% to 90%, noncondensing

·     Storage humidity: 5% to 95%, noncondensing

Altitude

·     Operating altitude: –60 m to +3000 m (–196.85 ft to +9842.52 ft). The allowed maximum operating temperature decreases by 0.33°C (0.59°F) for every 100 m (328.08 ft) of altitude above 900 m (2952.76 ft), as computed in the sketch following this table.

·     Storage altitude: –60 m to +5000m (–196.85 ft to +16404.20 ft)
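The altitude derating rule above is straightforward to apply programmatically. The following Python sketch is illustrative only; the function name and the 40°C base ceiling (taken from the operating temperature row of this table) are assumptions for this example, not part of the product specifications.

def max_operating_temp_c(altitude_m: float, base_max_temp_c: float = 40.0) -> float:
    """Derate the maximum operating temperature for altitude.

    Per the specifications above, the allowed maximum temperature drops
    by 0.33 degrees Celsius for every 100 m of altitude above 900 m.
    """
    if altitude_m <= 900:
        return base_max_temp_c
    return base_max_temp_c - 0.33 * (altitude_m - 900) / 100

# At the 3000 m operating limit: 40 - 0.33 * 21 = 33.07 degrees Celsius.
print(round(max_operating_temp_c(3000), 2))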

 

Components

Figure 2 R4950 G7 server components

 

Table 3 Server component description

No.

Name

Description

1

Access panel

/

2

Processor heatsink

Cools the processor.

3

OCP network adapter

Network adapter installed onto the OCP network adapter connector on the system board.

4

Processor

Integrates memory and PCIe controllers to provide data processing capabilities for the server.

5

Storage controller

Provides RAID capability to SAS/SATA drives, including RAID configuration and RAID scale-up. It supports online upgrade of the controller firmware and remote configuration.

6

Standard PCIe network adapter

Installed in a standard PCIe slot to provide network ports.

7

Riser card

Provides PCIe slots.

8

Memory

Used to temporarily store computational data from the processors and data exchanged with external storage devices such as drives. The server supports DDR5 memory modules.

9

Processor socket cover

Installed over an empty processor socket to protect pins in the socket.

10

Server management module

Provides I/O connectors and HDM out-of-band management features.

11

System board

One of the most important parts of a server, on which multiple components are installed, such as processor, memory, and fan. It is integrated with basic server components, including the BIOS chip and PCIe connectors.

12

Rear drive backplane

Provides power to drives installed at the server rear and offers a data transmission channel.

13

Rear drive cage

Installed at the server rear to accommodate drives.

14

Riser card blank

Installed on an empty PCIe riser connector to ensure good ventilation.

15

Power supply

Supplies power to the server. The power supplies support hot swapping and 1+1 redundancy.

16

Chassis

Accommodates all components.

17

Multifunctional rack mount kit

Attaches the server to the rack. The right ear is integrated with the front I/O component, and the left ear is integrated with a VGA connector, an HDM dedicated management connector (Type-C), and a USB 3.0 connector.

18

Front drive backplane

Provides power to drives installed at the server front and offers a data transmission channel.

19

Drive

Provides data storage space. Drives support hot swapping. The server supports SSDs and HDDs and various drive interface types, such as SAS, SATA, M.2, and PCIe.

20

Supercapacitor holder

Secures a supercapacitor onto the air baffle in the chassis.

21

Supercapacitor

Used to supply power to the flash card on the storage controller during an unexpected power outage, achieving power fail safeguard for data on the storage controller.

22

M.2 adapter

Expands the server with SATA M.2 SSDs.

23

SATA M.2 SSD

Provides data storage space.

24

Encryption module

Provides encryption services for the server to enhance data security.

25

Fan cage

Accommodates fan modules.

26

Fan

Helps server ventilation. Fans support hot swapping and N+1 redundancy.

27

System battery

Powers the system clock to ensure that the system date and time are correct.

28

Chassis-open alarm module

Detects if the access panel is removed. The detection result can be displayed from the HDM Web interface.

29

Air baffle

Provides ventilation aisles for processor heatsinks and memory modules and provides support for the supercapacitor.

30

GPU

Provides image processing and artificial intelligence computing services for servers.

 

Front panel

Front panel components

Figure 3 Front panel (8LFF drive configuration)

 

Table 4 Front panel components (8LFF drive configuration)

No.

Description

1

USB 3.0 connector

2

Drive or LCD smart management module (optional)

3

Serial label pull tab

4

HDM dedicated management port

5

USB 3.0 connector

6

VGA connector

 

Figure 4 Front panel (12LFF drive configuration)

 

Table 5 Front panel components (12LFF drive configuration)

No.

Description

1

12LFF drives (optional)

2

USB 3.0 connector

3

Drive or LCD smart management module (optional)

4

Serial label pull tab

5

HDM dedicated management port

6

USB 3.0 connector

7

VGA connector

 

Figure 5 Front panel (8SFF drive configuration)

 

Table 6 Front panel components (8SFF drive configuration)

No.

Description

1

Bay 1 for 8SFF drives (optional)

2

Bay 2 for 8SFF drives (optional)

3

Bay 3 for 8SFF drives (optional)

4

USB 3.0 connector

5

LCD smart management module (optional)

6

Serial label pull tab

7

HDM dedicated management port

8

USB 3.0 connector

9

VGA connector

 

Figure 6 Front panel (25SFF drive configuration)

 

Table 7 Front panel components (25SFF drive configuration)

No.

Description

1

25SFF drives (optional)

2

USB 3.0 connector

3

Drive or LCD smart management module (optional)

4

Serial label pull tab

5

HDM dedicated management port

6

USB 3.0 connector

7

VGA connector

 

LEDs and buttons

Front LEDs and buttons

Figure 7 Front LEDs and buttons

 

 

Table 8 Description of front LEDs and buttons

No.

Description

Status

1

Power On/Standby button and system power LED

·     Steady green—The system has started.

·     Flashing green (1 Hz)—The system is starting.

·     Steady amber—The system is in standby state.

·     Off—No power is present.

2

LED for Ethernet interfaces on an OCP 3.0 network adapter

·     Steady green—A link is present on a port of an OCP 3.0 network adapter.

·     Flashing green (1 Hz)—A port on an OCP 3.0 network adapter is receiving or sending data.

·     Off—No link is present on any port of either OCP 3.0 network adapter.

NOTE:

The server supports a maximum of two OCP 3.0 network adapters.

3

Health LED

·     Steady green—The system is operating correctly or a minor alarm has occurred.

·     Flashing green (4 Hz)—HDM is being initialized.

·     Flashing amber (1 Hz)—A major alarm is present.

·     Flashing red (1 Hz)—A critical alarm is present.

4

UID button/LED

·     Steady blue—UID LED is activated. The UID LED can be activated by using the following methods:

¡     Press the UID button LED.

¡     Enable UID LED from HDM.

·     Flashing blue:

¡     1Hz—The system is being remotely managed by HDM or HDM is performing out-of-band firmware update. Do not power off the device.

¡     4 Hz—HDM is restarting. To restart HDM, press the UID button LED for a minimum of 8 seconds.

·     Off—The UID LED is not activated.

·     If the health LED indicates that an error is present, check the operating status of the system through HDM.

·     If the system power LED is off, possible reasons include:

¡     No power source is connected.

¡     No power supplies are present.

¡     The installed power supplies are faulty.

¡     The system power cords are not connected correctly.

 

LEDs on the intelligent security bezel

The security bezel provides hardened security and uses lighting effects to visualize operating and health status, helping with inspection and fault location. The default lighting effects are as shown in Table 9.

Figure 8 Intelligent security bezel

 

Table 9 LEDs on the intelligent security bezel

·     Standby—Steady white.

·     Startup:

¡     POST in process—The white LEDs turn on gradually from the middle to both sides, reflecting the percentage of the POST progress.

¡     POST completed—The white LEDs turn on from the middle to both ends in sequence and then go off, repeating three times to display a flowing light effect.

·     Running:

¡     Normal—Breathing white (0.2 Hz gradual change in brightness). The percentage of lit LEDs indicates the level of server load, with more LEDs gradually lighting up from the middle to both sides as the power consumption of the entire device increases: below 10% indicates no load, 10% to 50% low load, 50% to 80% medium load, and over 80% heavy load. The sketch after this table illustrates these thresholds.

¡     Pre-alarm (only for drive pre-failure)—Breathing white (1 Hz gradual change in brightness).

¡     Major alarm—Flashing amber (1 Hz).

¡     Critical alarm (only for power supply errors)—Flashing red (1 Hz).

·     Remote management:

¡     The system is being remotely managed or HDM is performing an out-of-band firmware update (do not power off the device)—All LEDs flashing white (1 Hz).

¡     HDM is restarting—Some LEDs flashing white (1 Hz).
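To show how the load thresholds in Table 9 combine, here is a minimal Python sketch. The function is hypothetical, not an H3C interface, and the boundary values follow an inclusive reading of the table's ranges.

def bezel_load_level(load_percent: float) -> str:
    """Map server load to the categories listed in Table 9."""
    if load_percent < 10:
        return "No load"
    if load_percent <= 50:
        return "Low load"
    if load_percent <= 80:
        return "Medium load"
    return "Heavy load"

print(bezel_load_level(65))  # -> "Medium load"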

 

Ports

Table 10 Ports on the front panel

Port

Type

Services

VGA connector

DB15

Used to connect display terminals, such as displays or KVM devices.

USB connector

USB 3.0 connector

Connects the following devices:

·     USB flash drive.

·     USB keyboard or mouse.

·     USB optical drive for operating system installation.

HDM dedicated management port

Type-C

Connects a Type-C to USB adapter cable, which connects to a USB Wi-Fi adapter or USB drive.

 

Rear panel

Rear panel components

Figure 9 Rear panel components

 

Table 11 Rear panel component description

No.

Description

1

PCIe slots 1 through 3

2

PCIe slots 4 through 6

3

PCIe slots 7 and 8

4

PCIe slots 9 and 10

5

Power supply 2

6

Power supply 1

7

OCP 3.0 network adapter or serial port & DSD module (optional)

8

VGA connector

9

Two USB 3.0 connectors

10

HDM dedicated connector (1 Gbps, RJ-45, default IP address 192.168.1.2/24)

11

OCP 3.0 network adapter (optional)

 

Rear panel LEDs

Figure 10 Rear panel LEDs

 

Table 12 Description of rear panel LEDs and buttons

No.

Description

Status

1

Power supply LED

·     Steady green—The power supply is operating correctly.

·     Flashing green (0.33 Hz)—The power supply is in standby state and does not output power.

·     Flashing green (2 Hz)—The power supply is updating its firmware.

·     Steady amber—Either of the following conditions exists:

¡     The power supply has experienced a critical fault.

¡     The power supply has no input, but the other power supply has normal input.

·     Flashing amber (1 Hz)—An alarm has occurred on the power supply.

·     Off—No power supplies have power input. Possible reasons:

¡     The power cord connection fails.

¡     The external power supply system has lost power.

2

State LED for power supply 1

3

Activity LED of the Ethernet port

·     Flashing green (1 Hz)—The port is receiving or sending data.

·     Off—The port is not receiving or sending data.

4

Link LED of the Ethernet port

·     Steady green—A link is present on the port.

·     Off—No link is present on the network port.

5

UID LED

·     Steady blue—UID LED is activated. The UID LED can be activated by using the following methods:

¡     Press the UID button LED.

¡     Enable UID LED from HDM.

·     Flashing blue:

¡     1 Hz—The system is being remotely managed or HDM is performing out-of-band firmware update. Do not power off the device.

¡     4 Hz—HDM is restarting. To restart HDM, press the UID button LED for a minimum of 8 seconds.

·     Off—UID LED not activated.

 

Ports

Table 13 Ports on the rear panel

Port

Type

Description

VGA connector

DB-15

Used to connect display terminals, such as displays or KVM devices.

BIOS serial port

DB-9

The BIOS serial port is used for the following purposes:

·     Log in to the server when the remote network connection to the server has failed.

·     Establish a GSM modem or encryption lock connection.

USB connector

USB 3.0

Connects the following devices:

·     USB flash drive.

·     USB keyboard or mouse.

·     USB optical drive for operating system installation.

HDM dedicated network port

RJ-45

Establishes a network connection to manage HDM from its Web interface.

Power connector

Standard single-phase

Connects the power supply to the power source.

 

System board

System board layout

Figure 11 System board layout

 

Table 14 System board component description

No.

Description

Mark

1

AUX connector 8

AUX8(OCP/NCSI)

2

OCP 3.0 network adapter slot 2

OCP2&DSD&UART CARD

3

Fan connector for OCP2 network adapter

OCP2 FAN

4

PCIe riser connector 1 (for processor 1)

RISER1 PCIe X16

5

Server management module connector

BMC CON

6

Drive backplane AUX connector 4

AUX4

7

System battery

N/A

8

Fan connector for OCP1 network adapter

OCP1 FAN

9

x8L connector for OCP 3.0 network adapter 1

OCP1_X8L

10

OCP 3.0 network adapter slot 1

OCP1

11

x8H connector for OCP 3.0 network adapter 1

OCP_X8H

12

LCD module connector

DIAG LCD

13

MCIO connector C1-G1C (for processor 1)

C1-G1C

14

M.2 SSD connector

M.2 PORT

15

Drive backplane AUX connector 7

AUX7

16

Drive backplane AUX connector 9

AUX9

17

Front I/O connector

RIGHT EAR

18

MCIO connector C1-P0A (for processor 1)

C1-P0A

19

MCIO connector C1-P0C (for processor 1)

C1-P0C

20

MCIO connector C1-P2C (for processor 1)

C1-P2C

21

MCIO connector C1-P2A (for processor 1)

C1-P2A

22

Drive backplane AUX connector 3

AUX3

23

Drive backplane AUX connector 1

AUX1

24

Drive backplane power connector 3

PWR3

25

Temperature sensor connector

TEMP SENSE

26

Drive backplane power connector 2

PWR2

27

MCIO connector C2-P4A (for processor 2)

C2-P4A

28

MCIO connector C2-P0A (for processor 2)

C2-P0A

29

MCIO connector C2-P0C (for processor 2)

C2-P0C

30

Drive backplane AUX connector 2

AUX2

31

MCIO connector C2-P2C (for processor 2)

C2-P2C

32

MCIO connector C2-P2A (for processor 2)

C2-P2A

33

Front VGA and USB 3.0 connector

LEFT EAR

34

Chassis-open alarm module connector

INTRUDER

35

Drive backplane power connector 1

PWR1

36

Fan board AUX connector

FAN AUX

37

Fan board power connector

FAN PWR

38

Drive backplane power connector 5

PWR5

39

MCIO connector C2-G3A (for processor 2)

C2-G3A

40

MCIO connector C2-G3C (for processor 2)

C2-G3C

41

Drive backplane power connector 7

PWR7

42

Connector for the liquid leakage detection module

LEAKDET

43

PCIe expansion connector for OCP 3.0 network adapter slot 2

OCP2 X8

44

PCIe riser connector 2 (for processor 2)

RISER2 PCIe X16

45

Drive backplane power connector 6

PWR6

46

Drive backplane AUX connector 5

AUX5

47

Drive backplane power connector 8

PWR8

48

Built-in USB 3.0 connector

INTER USB3.0

49

Drive backplane power connector 4

PWR4

50

Drive backplane AUX connector 6

AUX6

51

TPM/TCM connector

TPM

52

Power board AUX connector

PDB AUX

53

MCIO connector C1-G1A (for processor 1)

C1-G1A

X

System maintenance switch

N/A

 

System maintenance switch

The system maintenance switch has eight DIP switches, as shown in Figure 12.

Figure 12 System maintenance switch

 

Table 15 describes how to use the maintenance switch. For information about the position of the system maintenance switch, see "System board layout."

Table 15 System maintenance switch description

Location

Description (Off by default)

Restrictions and guidelines

1

·     Off—HDM login requires the username and password of a valid HDM user account.

·     On—HDM login requires the default username and password.

When switch 1 is turned on, you can log in to HDM with the default username and password. As a best practice for security purposes, turn off the switch after you complete tasks with the default username and password.

5

·     Off—Normal server startup.

·     On—Restores the default BIOS settings.

To restore the default BIOS settings:

1.     Power off the server, and turn on switch 5.

2.     Power on the server and wait for a minimum of 10 seconds.

3.     Power off the server, and turn off switch 5.

4.     Start the server. If the screen prompts "The CMOS defaults were loaded" at the POST phase, it indicates that the default BIOS settings have been restored.

CAUTION:

The server cannot start up when the switch is turned on. To avoid service data loss, stop running services and power off the server before turning on the switch.

6

·     Off—Normal server startup.

·     On—Clears all passwords from the BIOS at server startup.

To clear all passwords from the BIOS, turn on the switch and then start the server. All the passwords will be cleared from the BIOS at startup. As a best practice, turn off the switch before the next server startup to allow a normal startup.

2,3,4,7,8

Reserved for future use.

N/A

 

DIMM slots

Figure 13 shows the DIMM slot population, where A0, B0, F0, L0, K0...G0 each represent a DIMM slot. For more information about DIMM installation guidelines, see "Memory."

Figure 13 DIMM slots on the system board

 

HDDs and SSDs

Drive numbering

Drive numbering specifies each drive's position and aligns precisely with the labels on the server's front and rear panels.

Figure 14 Drive numbering for front 25SFF drive configuration

 

Figure 15 Drive numbering for front 12LFF drive configuration

 

Figure 16 Drive numbering for front 8LFF drive configuration

 

Figure 17 Drive numbering for rear 4LFF+2SFF drive configuration

 

Figure 18 Drive numbering for rear 2LFF+4SFF drive configuration

 

Drive LEDs

The server supports SAS, SATA, and NVMe drives. You can use the LEDs on a drive to identify its status. Figure 19 shows the location of the LEDs on a drive.

Figure 19 Drive LEDs

(1) Fault/UID LED

(2) Present/Active LED

 

To identify the status of a SAS or SATA drive, use Table 16. To identify the status of an NVMe drive, use Table 17. The scripting sketch following Table 17 shows how these LED combinations can be decoded programmatically.

Table 16 SAS/SATA drive LED description

Fault/UID LED status

Present/Active LED status

Description

Flashing amber (0.5 Hz)

Steady green/Flashing green (4.0 Hz)

A drive failure is predicted. As a best practice, replace the drive before it fails.

Steady amber

Steady green/Flashing green (4.0 Hz)

The drive is faulty. Replace the drive immediately.

Steady blue

Steady green/Flashing green (4.0 Hz)

The drive is operating correctly and is selected by the RAID controller.

Off

Flashing green (4.0 Hz)

The drive is performing RAID migration or rebuilding, or the system is reading or writing data to the drive.

Off

Steady green

The drive is present but no data is being read or written to the drive.

Off

Off

The drive is not securely installed.

 

Table 17 NVMe drive LED description

Fault/UID LED status

Present/Active LED status

Description

Flashing amber (0.5 Hz)

Off

The drive has completed the hot removal process and can be hot removed.

Flashing amber (4 Hz)

Off

The drive is in the hot insertion process.

Flashing amber (0.5 Hz)

Steady green/Flashing green (4.0 Hz)

A drive failure is predicted. As a best practice, replace the drive before it fails.

Steady amber

Steady green/Flashing green (4.0 Hz)

The drive is faulty. Replace the drive immediately.

Steady blue

Steady green/Flashing green (4.0 Hz)

The drive is operating correctly and is selected by the RAID controller.

Off

Flashing green (4.0 Hz)

The drive is performing RAID migration or rebuilding, or the system is reading or writing data to the drive.

Off

Steady green

The drive is present but no data is being read or written to the drive.

Off

Off

The drive is not securely installed.
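For scripted health checks, the SAS/SATA LED combinations in Table 16 can be decoded with a small helper. This Python sketch is a hypothetical illustration, not an H3C tool; steady green and flashing green (4.0 Hz) are treated as equivalent where Table 16 lists them together.

def sas_sata_drive_state(fault_led: str, active_led: str) -> str:
    """Decode a SAS/SATA drive status from its two LEDs, per Table 16."""
    active = active_led in ("steady green", "flashing green")
    if fault_led == "flashing amber" and active:
        return "Failure predicted: replace the drive before it fails"
    if fault_led == "steady amber" and active:
        return "Faulty: replace the drive immediately"
    if fault_led == "steady blue" and active:
        return "Operating correctly; selected by the RAID controller"
    if fault_led == "off" and active_led == "flashing green":
        return "RAID migration/rebuild, or data is being read or written"
    if fault_led == "off" and active_led == "steady green":
        return "Present; no data is being read or written"
    if fault_led == "off" and active_led == "off":
        return "Not securely installed"
    return "Unknown LED combination"

print(sas_sata_drive_state("steady amber", "steady green"))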

 

Drive backplanes

Front 8SFF SAS/SATA drive backplane (PCA-BP-8SFF-2U-G6)

The PCA-BP-8SFF-2U-G6 is an 8SFF drive backplane that can be installed at the server front to support a maximum of eight 2.5-inch SAS/SATA drives.

Figure 20 8SFF SAS/SATA drive backplane

 

Table 18 8SFF SAS/SATA drive backplane components

No.

Description

Mark

1

x8 SlimSAS connector

SAS PORT1

2

AUX connector

AUX

3

Power connector

PWR

 

Front 8SFF UniBay drive backplane (PCA-BP-8UniBay-A-2U-G6)

The PCA-BP-8UniBay-A-2U-G6 is an 8SFF UniBay drive backplane that can be installed at the server front to support a maximum of eight 2.5-inch SAS/SATA/NVMe drives.

Figure 21 8SFF UniBay drive backplane

 

Table 19 8SFF UniBay drive backplane components

No.

Description

Mark

1

x8 SlimSAS connector

SAS PORT

2

AUX connector

AUX

3

MCIO connector B3/B4 (PCIe5.0 x8)

NVMe B3/B4

4

Power connector

POWER

5

MCIO connector B1/B2 (PCIe5.0 x8)

NVMe B1/B2

6

MCIO connector A3/A4 (PCIe5.0 x8)

NVMe A3/A4

7

MCIO connector A1/A2 (PCIe5.0 x8)

NVMe A1/A2

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal rate.

·     x8: Bus bandwidth.

 

Front 8LFF SAS/SATA drive backplane (PCA-BP-8LFF-2U-G6)

The PCA-BP-8LFF-2U-G6 is an 8LFF drive backplane that can be installed at the server front to support a maximum of eight 3.5-inch SAS/SATA drives.

Figure 22 8LFF SAS/SATA drive backplane

 

Table 20 8LFF SAS/SATA drive backplane components

No.

Description

Mark

1

x8 Mini-SAS-HD connector

SAS PORT

2

Power connector

PWR

3

AUX connector

AUX

 

Front 8LFF UniBay drive backplane (BP-8LFF-UniBay-A-2U-G6)

The PCA-BP-8LFF-UniBay-A-2U-G6 is an 8LFF UniBay drive backplane that can be installed at the server front to support a maximum of eight 3.5-inch SAS/SATA/NVMe drives.

Figure 23 8LFF UniBay drive backplane

 

Table 21 8LFF UniBay drive backplane components

No.

Description

Mark

1

SlimSAS connector B3/B4 (PCIe5.0 x8), supporting NVMe drives

NVMe-B3/B4

2

SlimSAS connector B1/B2 (PCIe5.0 x8), supporting NVMe drives

NVMe-B1/B2

3

x8 Mini-SAS-HD connector

SAS-PORT 1

4

SlimSAS connector A3/A4 (PCIe5.0 x8), supporting NVMe drives

NVMe-A3/A4

5

SlimSAS connector A1/A2 (PCIe5.0 x8), supporting NVMe drives

NVMe-A1/A2

6

Power connector

PWR

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal rate.

·     x8: Bus bandwidth.

 

Front 12LFF SAS/SATA drive backplane (PCA-BP-12LFF-2U-G7)

The UN-PCA-BP-12LFF-2U-G7 is a 12LFF drive backplane that can be installed at the server front to support a maximum of twelve 3.5-inch SAS/SATA drives.

Figure 24 12LFF SAS/SATA drive backplane

 

Table 22 12LFF SAS/SATA drive backplane components

No.

Description

Mark

1

x4 SlimSAS connector (controls SAS/SATA drives in the last four slots attached to the backplane)

SAS PORT 2

2

Power connector 1

PWR 1

3

Power connector 2

PWR 2

4

x8 SlimSAS connector (controls SAS/SATA drives in the first eight slots attached to the backplane)

SAS PORT 1

5

AUX connector

AUX

 

Front 12LFF UniBay drive backplane (PCA-BP-12LFF-UniBay-2U-G7)

The UN-PCA-BP-12LFF-UniBay-2U-G7 is a 12LFF UniBay drive backplane that can be installed at the server front to support a maximum of twelve 3.5-inch SAS/SATA/NVMe drives.

Figure 25 12LFF UniBay drive backplane

 

Table 23 12LFF UniBay drive backplane components

No.

Description

Mark

1

MCIO connector A3 (PCIe5.0 x4)

NVMe-A3

2

x4 SlimSAS connector (controls SAS/SATA drives in the last four slots attached to the backplane)

SAS PORT 2

3

MCIO connector B1/B2 (PCIe5.0 x8)

NVMe-B1/B2

4

Power connector 1

PWR 1

5

Power connector 2

PWR 2

6

MCIO connector C1 (PCIe5.0 x4)

NVMe-C1

7

x8 SlimSAS connector (controls SAS/SATA drives in the first eight slots attached to the backplane)

SAS PORT 1

8

AUX connector

AUX

9

MCIO connector C3/C4 (PCIe5.0 x8)

NVMe-C3/C4

10

MCIO connector C2 (PCIe5.0 x4)

NVMe-C2

11

MCIO connector B3/B4 (PCIe5.0 x8)

NVMe-B3/B4

12

MCIO connector A4 (PCIe5.0 x4)

NVMe-A4

13

MCIO connector A1/A2 (PCIe5.0 x8)

NVMe-A1/A2

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal rate.

·     x8: Bus bandwidth.

 

Front 12LFF drive backplane (8SAS/SATA+4UniBay, PCA-BP-12LFF-4NVMe-2U-G7)

The UN-PCA-BP-12LFF-4NVMe-2U-G7 is a 12LFF drive backplane that can be installed at the server front to support a maximum of twelve 3.5-inch drives, including eight SAS/SATA drives and four SAS/SATA/NVMe drives.

Figure 26 12LFF drive backplane (8SAS/SATA+4UniBay)

 

Table 24 12LFF drive backplane (8SAS/SATA+4UniBay) components

No.

Description

Mark

1

MCIO connector A3 (PCIe5.0 x4), supporting NVMe drives (in slot 9)

NVMe-A3

2

x4 SlimSAS connector (controls SAS/SATA drives in the last four slots attached to the backplane)

SAS PORT 2

3

Power connector 1

PWR 1

4

Power connector 2

PWR 2

5

x8 SlimSAS connector (controls SAS/SATA drives in the first eight slots attached to the backplane)

SAS PORT 1

6

AUX connector

AUX

7

MCIO connector A4 (PCIe5.0 x4), supporting NVMe drives (in slot 8)

NVMe-A4

8

MCIO connector A1/ A2 (PCIe5.0 x8), supporting NVMe drives (in slots 10 and 11)

NVMe-A1/A2

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal rate.

·     x8: Bus bandwidth.

 

Front 12LFF drive backplane (4SAS/SATA+8UniBay, BP-12LFF-EXP-A-2U-G6)

The PCA-BP-12LFF-EXP-A-2U-G6 is a 12LFF drive backplane that can be installed at the server front to support a maximum of twelve 3.5-inch drives, including four SAS/SATA drives and eight SAS/SATA/NVMe drives. The drive backplane is integrated with an Expander chip and can manage 12 SAS/SATA drives through an x8 SlimSAS connector. The backplane also provides three downlink connectors for connecting other drive backplanes to support more drives.

Figure 27 12LFF drive backplane (4SAS/SATA+8UniBay)

 

Table 25 12LFF drive backplane (4SAS/SATA+8UniBay) components

No.

Description

Mark

1

x8 SlimSAS uplink connector, used to control all drives attached to the backplane

SAS PORT

2

x4 SlimSAS downlink connector 3

SAS EXP3

3

Power connector 2

PWR2

4

MCIO connector B1/B2 (PCIe5.0 x8), supporting NVMe drives

NVMe-B1/B2

5

Power connector 1

PWR1

6

x8 SlimSAS downlink connector 2

SAS EXP2

7

x4 SlimSAS downlink connector 1

SAS EXP1

8

AUX connector

AUX

9

MCIO connector B3/B4 (PCIe5.0 x8), supporting NVMe drives

NVMe B3/B4

10

MCIO connector A3/A4 (PCIe5.0 x8), supporting NVMe drives

NVMe A3/A4

11

MCIO connector A1/A2 (PCIe5.0 x8), supporting NVMe drives

NVMe A1/A2

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal rate.

·     x8: Bus bandwidth.

 

Rear 2SFF UniBay drive backplane (HDDCage-2UniBay-A-2U-G6)

The UN-HDDCage-2UniBay-A-2U-G6 is a 2SFF UniBay drive backplane that can be installed at the server rear to support a maximum of two 2.5-inch SAS/SATA/NVMe drives.

Figure 28 2SFF UniBay drive backplane

 

Table 26 2SFF UniBay drive backplane components

No.

Description

Mark

1

Power connector

PWR

2

x4 Mini-SAS-HD connector

SAS PORT

3

SlimSAS connector (PCIe4.0 x8)

NVMe

4

AUX connector

AUX

PCIe4.0 x8 description:

·     PCIe4.0: Fourth-generation signal rate.

·     x8: Bus bandwidth.

 

Front 25SFF drive backplane (BP-25SFF-A-2U-G6)

The PCA-BP-25SFF-A-2U-G6 is a 25SFF drive backplane that can be installed at the server front to support a maximum of twenty-five 2.5-inch drives, including 17 SAS/SATA drives and 8 SAS/SATA/NVMe drives. The drive backplane can manage 25 SAS/SATA drives through an x8 SlimSAS connector. The backplane is integrated with an Expander chip and also provides three downlink connectors for connecting other drive backplanes to support more drives.

Figure 29 25SFF drive backplane

 

Table 27 25SFF drive backplane connectors

No.

Description

Mark

1

x4 SlimSAS downlink connector 3

SAS EXP 3

2

x8 SlimSAS uplink connector, used to control all drives attached to the backplane

SAS PORT

3

x8 SlimSAS downlink connector 2

SAS EXP 2

4

x4 SlimSAS downlink connector 1

SAS EXP 1

5

Power connector 1

PWR 1

6

Power connector 2

PWR 2

7

MCIO connector 4 (PCIe5.0 x8), supporting NVMe drives (in slots 17 and 18)

NVMe 4

8

AUX connector

AUX

9

MCIO connector 3 (PCIe5.0 x8), supporting NVMe drives (in slots 19 and 20)

NVMe 3

10

MCIO connector 2 (PCIe5.0 x8), supporting NVMe drives (in slots 21 and 22)

NVMe 2

11

Power connector 3

PWR 3

12

MCIO connector 1 (PCIe5.0 x8), supporting NVMe drives (in slots 23 and 24)

NVMe 1

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal rate.

·     x8: Bus bandwidth.

 

Rear 2LFF SAS/SATA drive backplane (HDDCage-2LFF-2U-G6)

The HDDCage-2LFF-2U-G6 is a 2LFF drive backplane that can be installed at the server rear to support a maximum of two 3.5-inch SAS/SATA drives.

Figure 30 2LFF SAS/SATA drive backplane

 

Table 28 2LFF SAS/SATA drive backplane components

No.

Description

Mark

1

x4 Mini-SAS-HD connector

SAS PORT1

2

AUX connector

AUX1

3

Power connector

PWR1

 

Rear 4SFF UniBay drive backplane (HDDCage-4UniBay-A-2U-G6)

The UN-HDDCage-4UniBay-A-2U-G6 is a 4SFF UniBay drive backplane that can be installed at the server rear to support a maximum of four 2.5-inch SAS/SATA/NVMe drives.

Figure 31 4SFF UniBay drive backplane

 

Table 29 4SFF UniBay drive backplane components

No.

Description

Mark

1

AUX connector

AUX

2

Power connector

PWR

3

MCIO connector B1/B2 (PCIe5.0 x8)

NVMe-B1/B2

4

MCIO connector B3/B4 (PCIe5.0 x8)

NVMe-B3/B4

5

x4 Mini-SAS-HD connector

SAS PORT

PCIe5.0 x8 description:

·     PCIe5.0: Fifth-generation signal rate.

·     x8: Bus bandwidth.

 

Rear 4LFF SAS/SATA drive backplane (HDDCage-4LFF-2U-G6)

The HDDCage-4LFF-2U-G6 is a 4LFF drive backplane that can be installed at the server rear to support a maximum of four 3.5-inch SAS/SATA drives.

Figure 32 4LFF SAS/SATA drive backplane

 

Table 30 4LFF SAS/SATA drive backplane components

No.

Description

Mark

1

AUX connector

AUX

2

Power connector

PWR

3

x4 Mini-SAS-HD connector

SAS PORT

 

Riser cards

The slot number of a PCIe slot varies by the PCIe riser connector that holds the riser card. For example, slot 1/4 represents PCIe slot 1 if the riser card is installed on riser connector 1 and PCIe slot 4 if the riser card is installed on riser connector 2, as illustrated in the sketch below.
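The following Python sketch makes this mapping explicit. It is a hypothetical helper for illustration only, not part of any H3C software.

def resolve_pcie_slot(slot_label: str, riser_connector: int) -> int:
    """Resolve a riser-card slot label such as "1/4" to a PCIe slot number.

    The number before the slash applies when the riser card is installed
    on PCIe riser connector 1; the number after it applies on connector 2.
    """
    on_connector_1, on_connector_2 = (int(n) for n in slot_label.split("/"))
    return on_connector_1 if riser_connector == 1 else on_connector_2

print(resolve_pcie_slot("3/6", 2))  # Riser card on connector 2 -> PCIe slot 6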

RC-1FHHL-2U-G6

Figure 33 RC-1FHHL-2U-G6

 

Table 31 RC-1FHHL-2U-G6 riser card components

No.

Description

1

PCIe5.0 x16 slot 3/6

 

RC-2FHHL-2U-G6

Figure 34 RC-2FHHL-2U-G6

 

Table 32 RC-2FHHL-2U-G6 riser card components

No.

Description

1

PCIe5.0 x8 slot 3/6

2

PCIe5.0 x8 slot 2/5

 

RC-3FHFL-2U-G6-1

Figure 35 RC-3FHFL-2U-G6-1 (1)

 

Figure 36 RC-3FHFL-2U-G6-1 (2)

 

Table 33 RC-3FHFL-2U-G6-1 riser card components

No.

Description

1

PCIe5.0 x16 slot 2/5

2

PCIe5.0 x16 slot 3/6

3

GPU power connector

4

PCIe5.0 x16 slot 1/4

5

MCIO connector 3-C

6

MCIO connector 3-A

7

MCIO connector 1-A

8

MCIO connector 1-C

 

RC-3FHHL-2U-G6

Figure 37 RC-3FHHL-2U-G6 (1)

 

Figure 38 RC-3FHHL-2U-G6 (2)

 

Table 34 RC-3FHHL-2U-G6 riser card components

No.

Description

1

PCIe5.0 x8 slot 2/5

2

PCIe5.0 x8 slot 3/6

3

PCIe5.0 x8 slot 1/4

4

MCIO connector 1-A

5

MCIO connector 1-C

 

PCA-R4900-4GPU-G6

Figure 39 PCA-R4900-4GPU-G6 (1)

 

Figure 40 PCA-R4900-4GPU-G6 (2)

 

Figure 41 PCA-R4900-4GPU-G6 (3)

 

Table 35 PCA-R4900-4GPU-G6 riser card components

No.

Description

1

PCIe5.0 x16 slot 14

2

PCIe5.0 x16 slot 13

3

PCIe5.0 x16 slot 12

4

PCIe5.0 x16 slot 11

5, 6, 7, 8

GPU power connector

9

PCIe5.0 x16 slot 6

10

PCIe5.0 x16 slot 3

 

Riser 3 assembly module (supporting two HHHL modules)

A Riser 3 assembly module contains two cables (0404A2H4) and a structure module (0231AKAU).

Figure 42 Riser 3 assembly module (supporting two HHHL modules)

 

Table 36 Description of the Riser 3 assembly module (supporting two HHHL modules)

No.

Description

1

PCIe connector cable S1 for slot 8

2

Power connector S2 for slot 8

3

PCIe5.0 x8 for slot 8

4

PCIe5.0 x8 for slot 7

5

Power connector S2 for slot 7

6

PCIe connector cable S1 for slot 7

 

Riser 4 assembly module (supporting two HHHL modules)

A Riser 4 assembly module contains two cables (0404A2H4) and a structure module (0231AKAU).

Figure 43 Riser 4 assembly module (supporting two HHHL modules)

 

Table 37 Description of the Riser 4 assembly module (supporting two HHHL modules)

No.

Description

1

PCIe connector cable S1 for slot 10

2

Power connector S2 for slot 10

3

PCIe5.0 x8 for slot 10

4

PCIe5.0 x8 for slot 9

5

Power connector S2 for slot 9

6

PCIe connector cable S1 for slot 9

 

Riser 4 assembly module (supporting one FHFL module)

A Riser 4 assembly module contains one cable (0404A2FE) and a structure module (0231AKAV).

Figure 44 Riser 4 assembly module (supporting one FHFL module)

 

Table 38 Riser 4 assembly module (supporting one FHFL module)

No.

Description

1

PCIe5.0 x16 for slot 9

2

Power connector S3 for slot 9

3

PCIe connector cable S2 for slot 9

4

PCIe connector cable S1 for slot 9

 

Riser 4 assembly module (supporting two FHFL modules)

A Riser 4 assembly module supporting two FHFL modules contains two cables (0404A2H8 and 0404A2FE) and a structure module (0231AKAV).

Figure 45 Riser 4 assembly module (supporting two FHFL modules)

 

Table 39 Riser 4 assembly module (supporting two FHFL modules)

No.

Description

1

Power connector S3 for slot 9

2

PCIe connector cable S2 for slot 9

3

PCIe connector cable S2 for slot 10

4

Power connector S3 for slot 10

5

PCIe connector cable S1 for slot 10

6

PCIe connector cable S1 for slot 9

7

PCIe5.0 x16 for slot 9

8

PCIe5.0 x16 for slot 10

 

Fan

The server supports up to four hot-swappable fans. Figure 46 shows the fan layout. The server supports N+1 fan redundancy, which means that the server can continue to operate correctly when a single fan fails.

The server can adjust the fan speed automatically based on the actual temperature of the system to optimize heat dissipation while reducing noise.

Figure 46 Fan layout

 

LCD smart management module

An LCD smart management module displays basic server information, operating status, and fault information, and provides diagnostics and troubleshooting capabilities. You can locate and troubleshoot component failures by using the LCD module in conjunction with the event logs generated in HDM.

For more information, see LCD Smart Management Module User Guide.

Figure 47 LCD smart management module


 

Table 40 LCD smart management module components

No.

Name

Description

1

Mini-USB connector

Used for upgrading the firmware of the LCD module.

2

LCD module cable

Connects to the extension cable, which is then connected to the LCD module connector on the system board of the server. For information about the location of the LCD module connector on the system board, see "System board layout."

3

LCD module shell

Protects and secures the LCD screen.

4

LCD screen

Displays basic server information, operating status, and fault information.

 

Server B/D/F information

The server B/D/F information might change as the PCIe configuration changes. You can obtain B/D/F information by using one of the following methods:

·     BIOS log—Search for the dumpiio keyword in the BIOS log.

·     UEFI shell—Execute the pci command. For more information, execute the help pci command.

·     Operating system—The obtaining method varies by OS.

¡     For Linux, execute the lspci -vvv command.

 

 

NOTE:

If Linux does not support the lspci command by default, use the software package manager supported by the operating system to obtain and install the pci-utils package.

 

¡     For Windows, install the pciutils package, and then execute the lspci command.

¡     For VMware, execute the lspci command.
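On a Linux host with the pci-utils package installed, the lspci output can also be collected programmatically. The following Python sketch wraps the lspci command described above; the helper name and the simple line parsing are illustrative assumptions, not part of the product documentation.

import subprocess

def list_bdf():
    """Return (bus:device.function, description) pairs from lspci output."""
    result = subprocess.run(["lspci"], capture_output=True, text=True, check=True)
    entries = []
    for line in result.stdout.splitlines():
        # Each line starts with the B/D/F, e.g. "c1:00.0 Ethernet controller: ...".
        bdf, _, description = line.partition(" ")
        entries.append((bdf, description))
    return entries

for bdf, description in list_bdf():
    print(bdf, description)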

Component installation guidelines

Processors

·     The server requires two processors and does not support single-processor operation. For more information about processor locations, see "System board layout."

·     To avoid damage to a processor or the system board, only H3C authorized or professional server engineers can install, replace, or remove a processor.

·     Make sure the processors installed on the same server are the same model.

·     The pins in the processor sockets are very fragile and prone to damage. Install a protective cover if a processor socket is empty.

·     To prevent ESD, wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

·     To avoid injury caused by high temperature from the processor heatsink or processor liquid-cooled module during the removal, take necessary heat protection measures before any operation.

Memory

The server supports DDR5 DIMMs.

Concepts

DDR

DDR5 DIMMs can perform parity check on addresses but cannot protect data from getting lost in case of unexpected system power outage.

Rank

A DIMM rank is a set of memory chips that the system accesses while writing or reading from the memory. On a multi-rank DIMM, only one rank is accessible at a time.

Memory specifications

You can use the memory label to identify the memory specifications.

Figure 48 Memory label

 

Table 41 Memory label description

No.

Description

Remarks

1

Capacity

·     32GB.

·     64GB.

2

Number of ranks

·     1R—One rank (Single-Rank).

·     2R—Two ranks (Dual-Rank).

·     4R—Four ranks (Quad-Rank).

·     8R—Eight ranks (8-Rank).

3

Data width

·     ×4—4 bits.

·     ×8—8 bits.

4

DIMM generation

DDR5

5

Data rate

4800B—4800 MT/s.

6

DIMM type

R—RDIMM.
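To illustrate how the label fields in Table 41 fit together, the following Python sketch parses a label string such as 32GB 2Rx4 DDR5-4800B-R. The assumed textual layout is an example only; the physical label on your module arranges these fields graphically.

import re

# Assumed field order from Table 41: capacity, ranks x data width,
# generation-data rate-DIMM type, for example "32GB 2Rx4 DDR5-4800B-R".
LABEL_PATTERN = re.compile(
    r"(?P<capacity>\d+GB)\s+"
    r"(?P<ranks>\d)Rx(?P<width>\d)\s+"
    r"(?P<generation>DDR\d)-(?P<rate>\d+)(?P<grade>[A-Z])-(?P<type>[A-Z])"
)

def parse_dimm_label(label: str) -> dict:
    """Split a DIMM label into the fields described in Table 41."""
    match = LABEL_PATTERN.fullmatch(label)
    if match is None:
        raise ValueError(f"Unrecognized DIMM label: {label!r}")
    fields = match.groupdict()
    fields["rate"] = int(fields["rate"])  # data rate in MT/s
    return fields

print(parse_dimm_label("32GB 2Rx4 DDR5-4800B-R"))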

 

Installation guidelines

You can install two processors on the server. The server provides 12 DIMM channels per processor and each channel has one DIMM slot.

DIMM and processor compatibility

When you install a DIMM, use Table 42 to verify that it is compatible with the processors.

Table 42 DIMM and processor compatibility

| Processor | Memory type @ speed | Description |
| --- | --- | --- |
| AMD Turin EPYC | DDR5 @ 6400 MT/s | N/A |

 

Memory operating speed

 

 

NOTE:

To obtain the memory speed and maximum memory speed supported by a specific processor, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66. You can query the memory speed by selecting Memory Module and query the maximum supported memory speed by selecting Processor.

 

The actual operating speed of the server memory depends on the lesser value between the memory speed and the maximum memory speed supported by the processors. For example, if the memory speed is 4400 MT/s and the maximum memory speed supported by processors is 6000 MT/s, the actual operating memory speed is 4400 MT/s.

DIMM installation guidelines

·     As a best practice, install the same number of DIMMs in the same slots for each processor. Install DIMMs according to the DIMM installation guidelines.

·     As a best practice, install DDR5 DIMMs that have the same product code and DIMM specifications (type, capacity, rank, and frequency). For information about DIMM product codes, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66. To install components or replace faulty DIMMs of other specifications, contact Technical Support.

 

 

NOTE:

Install DIMMs as instructed in Figure 49.

 

Figure 49 DIMM population schemes

 

SAS/SATA drives

·     Drives support hot swapping.

·     As a best practice, install drives that do not contain RAID information.

·     To avoid RAID performance degradation or RAID creation failure, make sure all drives used to create a RAID array are the same type (HDDs or SSDs) and have the same connector type (SAS or SATA).

·     As a best practice, use drives with the same capacity to create a RAID array. When the drives used have different capacities, the system uses the smallest capacity across all the drives, causing capacity waste.

When you configure and use drives, follow these restrictions and guidelines:

·     Using one drive to form multiple RAID arrays complicates future maintenance and affects RAID performance.

·     If you hot swap an HDD or SSD repeatedly within 30 seconds, the system might fail to identify the drive.
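As an illustration of the capacity guideline, three drives of 1.2 TB, 1.2 TB, and 900 GB in one array are each treated as 900 GB drives. The following sketch, assuming a Linux host with shell access (it is not the storage controller's own configuration tool), lists each drive's connector type, media type, and capacity so you can confirm they match before creating a RAID array:

    lsblk -d -o NAME,TRAN,ROTA,SIZE    # TRAN shows the connector type (sas or sata); ROTA is 1 for HDDs and 0 for SSDs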

NVMe drives

·     Support for hot removal of NVMe drives varies by operating system.

·     If your operating system supports hot swapping of NVMe drives, follow these guidelines:

¡     When you insert a drive, insert it smoothly and continuously. Any pauses might cause the operating system to freeze or reboot.

¡     Do not hot swap multiple NVMe drives at the same time. As a best practice, hot swap NVMe drives one after another at intervals longer than 30 seconds. After the operating system identifies the first NVMe drive, you can hot swap the next drive. If you insert multiple NVMe drives simultaneously, the system might fail to identify the drives.
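The following sketch, assuming a Linux host with the nvme-cli package installed, shows one way to stagger NVMe hot swaps as described above; the device name in the comment is a hypothetical example:

    nvme list        # confirm the operating system has identified the inserted drive, for example /dev/nvme0n1
    sleep 30         # wait longer than 30 seconds before hot swapping the next NVMe drive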

M.2 SSDs

M.2 SSDs are installed in the server through an M.2 SSD expander module, which can be installed at the server front or server rear.

When you install front SATA/NVMe M.2 SSDs, follow these restrictions and guidelines:

·     The front M.2 SSD expander module is installed between the drive backplane and the fan modules at the front of the chassis. It supports both SATA and NVMe M.2 SSDs and is connected to the system board with high-speed cables. A maximum of two M.2 SSDs can be installed. For more information, see "Connecting cables for the front M.2 SSD expander module."

·     As a best practice, use a SATA M.2 SSD to install the operating system.

Figure 50 Front view of the front M.2 SSD expander module


(1) Data cable connector

(2) M.2 SSD slot 1

 

Figure 51 Rear view of the front M.2 SSD expander module


(1) M.2 SSD slot 2

 

Server management module

The server management module is installed on the system board to provide I/O connectors and HDM out-of-band features for the server.

Figure 52 Server management module

 

Table 43 Server management module description

| No. | Description |
| --- | --- |
| 1 | VGA connector |
| 2 | Two USB3.0 connectors |
| 3 | HDM dedicated network port |
| 4 | UID LED |
| 5 | HDM serial port |
| 6 | iFIST mezzanine module |
| 7 | NCSI connector |

 

Riser cards and PCIe modules

PCIe module form factor

Table 44 PCIe module form factor

| Abbreviation | Full name |
| --- | --- |
| LP | Low Profile card |
| FHHL | Full Height, Half Length card |
| FHFL | Full Height, Full Length card |
| HHHL | Half Height, Half Length card |
| HHFL | Half Height, Full Length card |

 

Riser card and PCIe module compatibility

Restrictions and guidelines

·     If the processor associated with a riser card is absent, the PCIe slots on the riser card are unavailable.

·     For the specific location of the PCIe riser connectors on the system board, see "System board layout." For more information about PCIe slots on a riser card, see "Riser cards."

·     You can install smaller PCIe modules in the PCIe slots intended for larger PCIe modules. For example, you can install LP modules in FHFL module slots.

·     A PCIe slot can supply power to the installed PCIe module if the maximum power consumption of the module does not exceed 75 W. If the maximum power consumption exceeds 75 W, a power cord is required.

·     PCIe5.0 x8 description:

¡     PCIe5.0: Fifth-generation signal rate.

¡     x8: Bus bandwidth; compatible link widths include x8, x4, x2, and x1.

·     For an x8 MCIO connector, x8 indicates the bus bandwidth.

·     The default connector width of a standard PCIe slot is x16.

Riser card and PCIe module compatibility

For more information about riser card and PCIe module compatibility, see Table 45, Table 46, Table 47, Table 48, Table 49, Table 50, Table 51, Table 52, and Table 53. For more information about riser assembly module and PCIe module compatibility, see "Riser cards."

Table 45 Riser card and PCIe module compatibility (1)

| Riser card model | Installation location | PCIe slots and cable extension connectors | PCIe slot or connector description | PCIe devices supported by the slot, or connection description | PCIe power supply capability | Processor |
| --- | --- | --- | --- | --- | --- | --- |
| RC-3FHFL-2U-G6 | PCIe riser connector 1 | PCIe slots 1 through 3 | PCIe5.0 x16 | FHFL modules | 75 W | Processor 1 |
| RC-3FHFL-2U-G6 | PCIe riser connector 1 | Cable extension connector SLOT 1-A | x8 MCIO connector | Connects to MCIO connector C1-P1A on the system board and works together with x8 MCIO connector SLOT 1-C to provide a x16 PCIe link for slot 1 | N/A | Processor 1 |
| RC-3FHFL-2U-G6 | PCIe riser connector 1 | Cable extension connector SLOT 1-C | x8 MCIO connector | Connects to MCIO connector C1-P1C on the system board and works together with x8 MCIO connector SLOT 1-A to provide a x16 PCIe link for slot 1 | N/A | Processor 1 |
| RC-3FHFL-2U-G6 | PCIe riser connector 1 | Cable extension connector SLOT 2-A | x8 MCIO connector | Connects to MCIO connector C1-P2A on the system board and works together with x8 MCIO connector SLOT 2-C to provide a x16 PCIe link for slot 2 | N/A | Processor 1 |
| RC-3FHFL-2U-G6 | PCIe riser connector 1 | Cable extension connector SLOT 2-C | x8 MCIO connector | Connects to MCIO connector C1-P2C on the system board and works together with x8 MCIO connector SLOT 2-A to provide a x16 PCIe link for slot 2 | N/A | Processor 1 |
| RC-3FHFL-2U-G6 | PCIe riser connector 2 | PCIe slots 4 through 6 | PCIe5.0 x16 | FHFL modules | 75 W | Processor 2 |
| RC-3FHFL-2U-G6 | PCIe riser connector 2 | Cable extension connector SLOT 1-A | x8 MCIO connector | Connects to MCIO connector C2-P3A on the system board and works together with x8 MCIO connector SLOT 1-C to provide a x16 PCIe link for slot 4 | N/A | Processor 2 |
| RC-3FHFL-2U-G6 | PCIe riser connector 2 | Cable extension connector SLOT 1-C | x8 MCIO connector | Connects to MCIO connector C2-P3C on the system board and works together with x8 MCIO connector SLOT 1-A to provide a x16 PCIe link for slot 4 | N/A | Processor 2 |
| RC-3FHFL-2U-G6 | PCIe riser connector 2 | Cable extension connector SLOT 2-A | x8 MCIO connector | Connects to MCIO connector C2-P0A on the system board and works together with x8 MCIO connector SLOT 2-C to provide a x16 PCIe link for slot 5 | N/A | Processor 2 |
| RC-3FHFL-2U-G6 | PCIe riser connector 2 | Cable extension connector SLOT 2-C | x8 MCIO connector | Connects to MCIO connector C2-P0C on the system board and works together with x8 MCIO connector SLOT 2-A to provide a x16 PCIe link for slot 5 | N/A | Processor 2 |

 

Table 46 Riser card and PCIe module compatibility (2)

| Riser card model | Installation location | PCIe slots and cable extension connectors | PCIe slot or connector description | PCIe devices supported by the slot, or connection description | PCIe power supply capability | Processor |
| --- | --- | --- | --- | --- | --- | --- |
| RC-3FHFL-2U-G6-1 | PCIe riser connector 1 | PCIe slots 1 through 3 | PCIe5.0 x16 | FHFL modules | 75 W | Processor 1 |
| RC-3FHFL-2U-G6-1 | PCIe riser connector 1 | Cable extension connector SLOT 1-A | x8 MCIO connector | Connects to MCIO connector C1-P1A on the system board and works together with x8 MCIO connector SLOT 1-C to provide a x16 PCIe link for slot 1 | N/A | Processor 1 |
| RC-3FHFL-2U-G6-1 | PCIe riser connector 1 | Cable extension connector SLOT 1-C | x8 MCIO connector | Connects to MCIO connector C1-P1C on the system board and works together with x8 MCIO connector SLOT 1-A to provide a x16 PCIe link for slot 1 | N/A | Processor 1 |
| RC-3FHFL-2U-G6-1 | PCIe riser connector 1 | Cable extension connector SLOT 3-A | x8 MCIO connector | Connects to MCIO connector C1-P2A on the system board and works together with x8 MCIO connector SLOT 3-C to provide a x16 PCIe link for slot 2 | N/A | Processor 1 |
| RC-3FHFL-2U-G6-1 | PCIe riser connector 1 | Cable extension connector SLOT 3-C | x8 MCIO connector | Connects to MCIO connector C1-P2C on the system board and works together with x8 MCIO connector SLOT 3-A to provide a x16 PCIe link for slot 2 | N/A | Processor 1 |
| RC-3FHFL-2U-G6-1 | PCIe riser connector 2 | PCIe slots 4 through 6 | PCIe5.0 x16 | FHFL modules | 75 W | Processor 2 |
| RC-3FHFL-2U-G6-1 | PCIe riser connector 2 | Cable extension connector SLOT 1-A | x8 MCIO connector | Connects to MCIO connector C2-P3A on the system board and works together with x8 MCIO connector SLOT 1-C to provide a x16 PCIe link for slot 4 | N/A | Processor 2 |
| RC-3FHFL-2U-G6-1 | PCIe riser connector 2 | Cable extension connector SLOT 1-C | x8 MCIO connector | Connects to MCIO connector C2-P3C on the system board and works together with x8 MCIO connector SLOT 1-A to provide a x16 PCIe link for slot 4 | N/A | Processor 2 |
| RC-3FHFL-2U-G6-1 | PCIe riser connector 2 | Cable extension connector SLOT 3-A | x8 MCIO connector | Connects to MCIO connector C2-P0A on the system board and works together with x8 MCIO connector SLOT 3-C to provide a x16 PCIe link for slot 5 | N/A | Processor 2 |
| RC-3FHFL-2U-G6-1 | PCIe riser connector 2 | Cable extension connector SLOT 3-C | x8 MCIO connector | Connects to MCIO connector C2-P0C on the system board and works together with x8 MCIO connector SLOT 3-A to provide a x16 PCIe link for slot 5 | N/A | Processor 2 |

 

Table 47 Riser card and PCIe module compatibility (3)

| Riser card model | Installation location | PCIe slots and cable extension connectors | PCIe slot or connector description | PCIe devices supported by the slot, or connection description | PCIe power supply capability | Processor |
| --- | --- | --- | --- | --- | --- | --- |
| RC-3FHHL-2U-G6 | PCIe riser connector 1 | PCIe slot 1 | PCIe5.0 x16 | FHHL modules | 75 W | Processor 1 |
| RC-3FHHL-2U-G6 | PCIe riser connector 1 | PCIe slot 2/3 | PCIe5.0 x8 | FHHL modules | 75 W | Processor 1 |
| RC-3FHHL-2U-G6 | PCIe riser connector 1 | Cable extension connector SLOT 1-A | x8 MCIO connector | Connects to MCIO connector C2-P0A on the system board and works together with x8 MCIO connector SLOT 1-C to provide a x16 PCIe link for slot 1 | N/A | Processor 1 |
| RC-3FHHL-2U-G6 | PCIe riser connector 1 | Cable extension connector SLOT 1-C | x8 MCIO connector | Connects to MCIO connector C2-P0C on the system board and works together with x8 MCIO connector SLOT 1-A to provide a x16 PCIe link for slot 1 | N/A | Processor 1 |
| RC-3FHHL-2U-G6 | PCIe riser connector 2 | PCIe slot 4 | PCIe5.0 x16 | FHHL modules | 75 W | Processor 2 |
| RC-3FHHL-2U-G6 | PCIe riser connector 2 | PCIe slot 5/6 | PCIe5.0 x8 | FHHL modules | 75 W | Processor 2 |
| RC-3FHHL-2U-G6 | PCIe riser connector 2 | Cable extension connector SLOT 1-A | x8 MCIO connector | Connects to MCIO connector C2-P2A on the system board and works together with x8 MCIO connector SLOT 1-C to provide a x16 PCIe link for slot 1 | N/A | Processor 2 |
| RC-3FHHL-2U-G6 | PCIe riser connector 2 | Cable extension connector SLOT 1-C | x8 MCIO connector | Connects to MCIO connector C2-P2C on the system board and works together with x8 MCIO connector SLOT 1-A to provide a x16 PCIe link for slot 1 | N/A | Processor 2 |

 

Table 48 Riser card and PCIe module compatibility (4)

| Riser card model | Installation location | PCIe slots and cable extension connectors | PCIe slot or connector description | PCIe devices supported by the slot, or connection description | PCIe power supply capability | Processor |
| --- | --- | --- | --- | --- | --- | --- |
| RC-2FHFL-2U-LC-G6 | PCIe riser connector 2 | PCIe slot 4 | PCIe5.0 x16 | FHFL modules | 75 W | Processor 2 |
| RC-2FHFL-2U-LC-G6 | PCIe riser connector 2 | PCIe slot 5 | PCIe5.0 x16 | FHFL modules | 75 W | Processor 2 |
| RC-2FHFL-2U-LC-G6 | PCIe riser connector 2 | Cable extension connector SLOT 1-A | x8 MCIO connector | Connects to MCIO connector C2-P3A on the system board and works together with x8 MCIO connector SLOT 1-C to provide a x16 PCIe link for slot 4 | N/A | Processor 2 |
| RC-2FHFL-2U-LC-G6 | PCIe riser connector 2 | Cable extension connector SLOT 1-C | x8 MCIO connector | Connects to MCIO connector C2-P3C on the system board and works together with x8 MCIO connector SLOT 1-A to provide a x16 PCIe link for slot 4 | N/A | Processor 2 |
| RC-2FHFL-2U-LC-G6 | PCIe riser connector 2 | Cable extension connector SLOT 2-A | x8 MCIO connector | Connects to MCIO connector C2-P0A on the system board and works together with x8 MCIO connector SLOT 2-C to provide a x16 PCIe link for slot 5 | N/A | Processor 2 |
| RC-2FHFL-2U-LC-G6 | PCIe riser connector 2 | Cable extension connector SLOT 2-C | x8 MCIO connector | Connects to MCIO connector C2-P0C on the system board and works together with x8 MCIO connector SLOT 2-A to provide a x16 PCIe link for slot 5 | N/A | Processor 2 |

 

Table 49 Riser card and PCIe module compatibility (5)

| Riser card model | Installation location | PCIe slot | PCIe slot description | PCIe devices supported by the slot | PCIe power supply capability | Processor |
| --- | --- | --- | --- | --- | --- | --- |
| PCA-R4900-4GPU-G6 | PCIe riser connector 1 & PCIe riser connector 2 | Slot 3 | PCIe5.0 x16 | FHHL modules | 75 W | Processor 1 |
| PCA-R4900-4GPU-G6 | PCIe riser connector 1 & PCIe riser connector 2 | Slot 6 | PCIe5.0 x16 | FHHL modules | 75 W | Processor 2 |
| PCA-R4900-4GPU-G6 | PCIe riser connector 1 & PCIe riser connector 2 | Slot 11 | PCIe5.0 x16 | FHFL modules | 300 W* | Processor 1 |
| PCA-R4900-4GPU-G6 | PCIe riser connector 1 & PCIe riser connector 2 | Slot 12 | PCIe5.0 x16 | FHFL modules | 300 W* | Processor 1 |
| PCA-R4900-4GPU-G6 | PCIe riser connector 1 & PCIe riser connector 2 | Slot 13 | PCIe5.0 x16 | FHFL modules | 300 W* | Processor 2 |
| PCA-R4900-4GPU-G6 | PCIe riser connector 1 & PCIe riser connector 2 | Slot 14 | PCIe5.0 x16 | FHFL modules | 300 W* | Processor 2 |

300 W*: Slots 11 through 14 of the rear 4GPU module support only GPUs, and the 300 W power supply capability requires an external GPU power cable.

 

Table 50 Riser card and PCIe module compatibility (6)

| Riser card model | PCIe slots and cable extension connectors | PCIe slot or connector description | PCIe devices supported by the slot, or connection description | PCIe power supply capability | Processor |
| --- | --- | --- | --- | --- | --- |
| Riser 4 assembly module (supporting one FHFL module) | PCIe slots 9 and 10 | PCIe5.0 x16 | FHFL modules | 75 W | Processor 2 |
| Riser 4 assembly module (supporting one FHFL module) | Cable extension connector SLOT 9 | PCIe connector cable S1 for slot 9 | Connects to MCIO connector C2-G3A on the system board and works together with PCIe connector cable S2 to provide a x16 PCIe link for slot 9 | - | Processor 2 |
| Riser 4 assembly module (supporting one FHFL module) | Cable extension connector SLOT 9 | PCIe connector cable S2 for slot 9 | Connects to MCIO connector C2-G3C on the system board and works together with PCIe connector cable S1 to provide a x16 PCIe link for slot 9 | - | Processor 2 |

 

Table 51 Riser card and PCIe module compatibility (7)

| Riser card model | PCIe slots and cable extension connectors | PCIe slot or connector description | PCIe devices supported by the slot, or connection description | PCIe power supply capability | Processor |
| --- | --- | --- | --- | --- | --- |
| Riser 4 assembly module (supporting two FHFL modules) | PCIe slots 9 and 10 | PCIe5.0 x16 | FHFL modules | 75 W | Processor 2 |
| Riser 4 assembly module (supporting two FHFL modules) | Cable extension connector SLOT 9 | PCIe connector cable S1 for slot 9 | Connects to MCIO connector C2-P2A on the system board and works together with PCIe connector cable S2 to provide a x16 PCIe link for slot 9 | - | Processor 2 |
| Riser 4 assembly module (supporting two FHFL modules) | Cable extension connector SLOT 9 | PCIe connector cable S2 for slot 9 | Connects to MCIO connector C2-P2C on the system board and works together with PCIe connector cable S1 to provide a x16 PCIe link for slot 9 | - | Processor 2 |
| Riser 4 assembly module (supporting two FHFL modules) | Cable extension connector SLOT 10 | PCIe connector cable S1 for slot 10 | Connects to MCIO connector C2-G3A on the system board and works together with PCIe connector cable S2 to provide a x16 PCIe link for slot 10 | - | Processor 2 |
| Riser 4 assembly module (supporting two FHFL modules) | Cable extension connector SLOT 10 | PCIe connector cable S2 for slot 10 | Connects to MCIO connector C2-G3C on the system board and works together with PCIe connector cable S1 to provide a x16 PCIe link for slot 10 | - | Processor 2 |

 

Table 52 Riser card and PCIe module compatibility (8)

| Riser card model | PCIe slots and cable extension connectors | PCIe slot or connector description | PCIe devices supported by the slot, or connection description | PCIe power supply capability | Processor |
| --- | --- | --- | --- | --- | --- |
| Riser 4 assembly module (supporting two HHHL modules) | PCIe slots 9 and 10 | PCIe5.0 x8 | HHHL modules | 75 W | Processor 2 |
| Riser 4 assembly module (supporting two HHHL modules) | Cable extension connector SLOT 9 | PCIe connector cable S1 for slot 9 | Connects to MCIO connector C2-P2A on the system board | - | Processor 2 |
| Riser 4 assembly module (supporting two HHHL modules) | Cable extension connector SLOT 10 | PCIe connector cable S1 for slot 10 | Connects to MCIO connector C2-P2C on the system board | - | Processor 2 |

 

Table 53 Riser card and PCIe module compatibility (9)

| Riser card model | PCIe slots and cable extension connectors | PCIe slot or connector description | PCIe devices supported by the slot, or connection description | PCIe power supply capability | Processor |
| --- | --- | --- | --- | --- | --- |
| Riser 3 assembly module (supporting two HHHL modules) | PCIe slots 7 and 8 | PCIe5.0 x8 | HHHL modules | 75 W | Processor 2 |
| Riser 3 assembly module (supporting two HHHL modules) | Cable extension connector SLOT 7 | PCIe connector cable S1 for slot 7 | Connects to MCIO connector C2-G3A on the system board | - | Processor 2 |
| Riser 3 assembly module (supporting two HHHL modules) | Cable extension connector SLOT 8 | PCIe connector cable S1 for slot 8 | Connects to MCIO connector C2-G3C on the system board | - | Processor 2 |

 

Storage controllers and power fail safeguard modules

Storage controllers

Storage controllers can be divided into the categories shown in Table 54 according to the installation location.

Table 54 Storage controller description

| Type | Installation location |
| --- | --- |
| Embedded SATA controller/embedded NVMe controller | Embedded in the system board (no installation required) |
| Standard storage controller | Installed onto a PCIe riser connector on the system board through a riser card |

 

Power fail safeguard module

A power fail safeguard module contains a flash card and a supercapacitor. The server supports independent flash cards that require installation onto a storage controller and built-in flash cards embedded in a storage controller. Built-in flash cards do not require installation. When a system power failure occurs, the supercapacitor can provide power for a minimum of 20 seconds. During this interval, the storage controller transfers data from DDR memory to the flash card, where the data remains indefinitely or until the controller retrieves the data.

 

 

NOTE:

The supercapacitor might have a low charge after the power fail safeguard module is installed or after the server is powered up. If the system displays that the supercapacitor has low charge, no action is required. The system will charge the supercapacitor automatically. You can view the status of the supercapacitor from the BIOS.

 

When you use a supercapacitor, follow these restrictions and guidelines:

·     A supercapacitor has a lifespan of 3 to 5 years.

·     If a supercapacitor reaches the end of its lifespan, a supercapacitor exception might occur. The system notifies users of supercapacitor exceptions by using the following methods:

¡     For a PMC storage controller, the status of the flash card will become Abnormal+status code. You can check the status code to identify the exception. For more information, see HDM online help.

¡     For an LSI storage controller, the status of the flash card will become Abnormal.

¡     You can also review log messages from HDM to identify supercapacitor exceptions. For more information, see HDM2 online help.

·     For the power fail safeguard module to take effect, replace the supercapacitor before its lifespan expires.

 

 

NOTE:

After the supercapacitor replacement, verify that cache related settings are enabled for logical drives. For more information, see HDM2 online help.

 

Installation guidelines

·     You can install one or multiple standard storage controllers. When you install standard storage controllers, follow these restrictions and guidelines:

¡     Make sure the standard storage controllers are of the same vendor. For information about the available storage controllers and their vendors, visit the query tool at http://www.h3c.com/en/home/qr/default.htm?id=66.

¡     Install standard storage controllers in PCIe slots in the following order: slots 3, 6, 1, 4, 2, 5, 7, and then 8. If a slot is occupied by another module, install the standard storage controller in the next available slot. For information about PCIe slot locations, see the rear panel view in "Rear panel."

¡     If you install multiple storage controllers, connect each controller to the drive backplane of the corresponding bay: lower-numbered controller to lower-numbered bay and higher-numbered controller to higher-numbered bay. For information about drive bay locations, see front panel view in "Front panel components."

¡     If you configure two storage controllers for the front and rear drives, connect the storage controller installed in the lower-numbered slot to the rear drive backplane. Connect the storage controller installed in the higher-numbered slot to the front drive backplane.

¡     If you connect both the front and rear drive backplanes to one storage controller, connect the lower-numbered connector on the controller to the rear drive backplane. Connect the higher-numbered connector on the controller to the front drive backplane.

¡     If you install both a 16i storage controller and an 8i storage controller, install the 16i storage controller in the higher-numbered slot and the 8i controller in the lower-numbered slot.

·     For information about power fail safeguard modules or supercapacitors compatible with a specific storage controller, see Table 55.

Table 55 Power fail safeguard modules compatible with storage controllers

| Storage controller | Power fail safeguard module or supercapacitor | Supercapacitor installation location |
| --- | --- | --- |
| RAID-LSI-9560-LP-8i-4GB | BAT-LSI-G3-A | On the air baffle |
| RAID-LSI-9560-LP-16i-8GB | BAT-LSI-G3-A | On the air baffle |
| RAID-P460-B2 | BAT-PMC-G3-2U | On the air baffle |
| HBA-H460-B1 | Not supported | Not supported |
| HBA-LSI-9500-LP-8i | Not supported | Not supported |
| HBA-LSI-9540-LP-8i | Not supported | Not supported |
| HBA-LSI-9500-LP-16i | Not supported | Not supported |
| FC-HBA-LPe35000-LP-32Gb-1P | Not supported | Not supported |
| FC-HBA-LPe35002-LP-32Gb-2P | Not supported | Not supported |
| FC-HBA-LPe36000-LP-64G-1P | Not supported | Not supported |
| FC-HBA-LPe36002-LP-64G-2P | Not supported | Not supported |

 

Network adapters

·     You can install an OCP network adapter only onto the OCP 3.0 connector on the system board. For the location of the connector, see "System board layout."

·     To install a standard PCIe network adapter, you must use a riser card. For more information, see "Riser card and PCIe module compatibility."

·     Use the OS compatibility lookup tool at https://iconfig-chl.h3c.com/iconfig/OSIndex?language=en to verify whether the operating system supports hot swapping of network adapters.

¡     If the system supports hot swapping of OCP network adapters:

-     Only OCP network adapters installed before the server is powered on support hot swapping. To hot swap an OCP network adapter, make sure the new adapter and the adapter to be replaced are of the same model. To replace an OCP network adapter with one of a different model, you must first power off the server.

-     OCP network adapters installed after the server is powered on do not support hot swapping. To install, remove, or replace such an adapter, first power off the server.

¡     If the operating system does not support hot swapping of OCP network adapters, to install, remove, or replace an OCP network adapter, first power off the server.

GPUs

 

NOTE:

For information about the configuration guides for the power cords of GPUs, contact Technical Support.

 

·     To install a GPU module in any slot from PCIe slot 1 to slot 6, the RC-3FHFL-2U-G6-1 riser card is required. To install a GPU module in PCIe slot 9 or 10, the riser 4 assembly module supporting two FHFL modules is required.

·     To install an FHFL dual-width GPU, install the GPU in the recommended slot as shown in Table 56 and follow these restrictions and guidelines:

¡     Install the GPU in a PCIe slot with a bus bandwidth of x16.

¡     To install three or fewer GPUs, riser cards are required.

¡     To install four GPUs, use the rear 4GPU module.

·     To install an FHFL single-width GPU, install the GPU in the recommended slot as shown in Table 57 and follow these restrictions and guidelines:

¡     Install the GPU in a PCIe slot with a bus bandwidth of x16.

¡     To install three or fewer GPUs, install one GPU on each riser card as a best practice.

¡     To install four or more GPUs, install two GPUs on each riser card as a best practice.

Table 56 Configuration guidelines for FHFL dual-width GPUs

| Number of GPUs | Recommended slots |
| --- | --- |
| 1 | Slot 5 |
| 2 | Slot 2/5 |
| 3 | Slot 2/5/9 |
| 4 | Slot 11/12/13/14 |

 

Table 57 Configuration guidelines for FHFL single-width GPUs

| Number of GPUs | Recommended slots |
| --- | --- |
| 1 | Slot 5 |
| 2 | Slot 2/5 |
| 3 | Slot 2/5/9 |
| 4 | Slot 1/2/4/5 |
| 5 | Slot 1/2/4/5/9 |
| 6 | Slot 1/2/4/5/9/10 |

 

Power supplies

 

NOTE:

For more information about the specifications of a power supply, see the corresponding power supply manuals.

 

·     Make sure the power supplies installed on the server are the same model. If they differ in model, HDM raises a minor alarm.

·     The power supplies are hot swappable.

·     To avoid damage to hardware, do not use third-party power supplies.

·     The server supports 1+1 power supply redundancy.

·     The system provides an overtemperature protection mechanism for power supplies. The power supplies automatically turn off when they encounter an overtemperature situation and automatically turn on when the overtemperature situation is removed.

Fans

·     The fans are hot swappable and support N+1 redundancy.

·     Make sure the server is fully equipped with fans of the same model.

·     The server supports single-rotator fan FAN-8038-2U-G6 and dual-rotator fan FAN-8056-2U-G6. When any of the following conditions exists, use the FAN-8056-2U-G6 fan:

¡     The 12LFF/25SFF/2*8SFF UniBay/3*8SFF UniBay drive backplane is installed and the TDP of the installed processors exceeds 240 W (liquid-cooling module not installed).

¡     The TDP of the installed processors exceeds 200 W and rear drives are installed.

¡     GPUs of model A2/A30/A40/A100/A16 are installed.

¡     The MBF2H332A-AENOT network adapter or MBF2H536C-CEUOT intelligent network adapter is installed.

¡     The MCX623106AN-CDAT or MCX623436AN-CDAB OCP3.0 network adapter is installed.

¡     An OCP 3.0 network adapter with a bandwidth of 100 Gb/s or higher is installed onto connector OCP1. In this case, an OCP fan is also required.

Installing or removing the server

Installation flowchart

Figure 53 Installation flowchart

 

Preparing for the installation

Prepare an installation site that meets the requirements for space and airflow, temperature, humidity, cleanliness, equipment room height, and grounding.

Rack requirements

Liquid-cooling module not installed

The server is 2U high and has a depth of 780 mm (30.71 in). The rack for installing the server must meet the following requirements:

·     A standard 19-inch rack.

·     A minimum of 1200 mm (47.24 in) in depth as a best practice. For installation limits for different rack depths, see Table 58. As a best practice, have technical support engineers conduct onsite surveys to eliminate potential issues.

·     A clearance of more than 50 mm (1.97 in) between the rack front posts and the front rack door.

Table 58 Installation limits for different rack depths

| Rack depth | Installation limits |
| --- | --- |
| 1000 mm (39.37 in) | The H3C cable management arm (CMA) is not supported. The slide rails and PDUs might hinder each other; perform an onsite survey to determine the PDU installation location and the proper PDUs, and if the PDUs still hinder the installation and movement of the slide rails, use other methods to support the server, such as a tray. Reserve a clearance of 60 mm (2.36 in) from the server rear to the rear rack door for cabling. |
| 1100 mm (43.31 in) | Make sure the CMA does not hinder PDU installation at the server rear before installing the CMA. If the CMA hinders PDU installation, use a deeper rack or change the installation locations of the PDUs. |
| 1200 mm (47.24 in) | Make sure the CMA does not hinder PDU installation or cabling. If the CMA hinders PDU installation or cabling, change the installation locations of the PDUs. |

 

Figure 54 Installation suggestions for a 1200 mm deep rack (top view)

Rack size:

(1) 1200 mm (47.24 in) rack depth

(2) A minimum of 50 mm (1.97 in) between the front rack posts and the front rack door

·     Use rear-facing cables for the PDUs to avoid interference with the chassis.

·     If side-facing cables are used for the PDUs, conduct onsite surveys as a best practice to ensure that the PDUs do not interfere with other components at the chassis rear.

Server size:

(3) 780 mm (30.71 in) between the front rack posts and the rear of the chassis, including power supply handles at the server rear (not shown in the figure)

(4) 800 mm (31.50 in) server depth, including chassis ears

(5) 960 mm (37.80 in) between the front rack posts and the rear ends of the CMA

(6) 860 mm (33.86 in) between the front rack posts and the rear ends of the slide rails

 

Liquid-cooling module installed

The server is 2U high and has a depth of 803 mm (31.61 in). Table 59 shows the server's requirements on the liquid-cooling system. As a best practice, use the H3C cold plate-based liquid-cooling system. For more information, see H3C Cold Plate-Based Liquid-Cooling System User Guide. The server can also be used independently of H3C's cold plate-based liquid-cooling system; in that case, perform an onsite survey before use. For more information, contact Technical Support.

Table 59 Server requirements on the liquid-cooling system

| Item | Requirements |
| --- | --- |
| Server coolant flow rate | ≥ 1.4 L/min |
| Server inlet and outlet pressure differential | ≥ 45 kPa |
| Supported inlet temperature for the server / secondary side supply temperature of the CDU | 5°C to 50°C (41°F to 122°F). Recommended: 40°C (104°F). |
| Operating pressure of the liquid-cooling system | ≤ 3.5 bar. Recommended: ≤ 2.5 bar. |
| Secondary side filtration accuracy | ≤ 50 μm |

NOTE:

To prevent condensation, the minimum water supply temperature must be a minimum of 3°C (5.4°F) higher than the dew point temperature, which is typically measured using a dew point hygrometer.

 

Airflow direction of the server

Figure 55 Airflow direction of the server

(1) and (2) Airflow into the chassis and power supplies

(3) Airflow out of the power supplies

(4) and (5) Airflow out of the chassis

 

 

Temperature and humidity requirements

To ensure the normal operation of the server, a certain temperature and humidity must be maintained in the equipment room. For more information about the operating temperature requirements on the server, see "Physical specifications."

Equipment room height requirements

To ensure correct operation of the server, make sure the equipment room height meets the requirements as described in "Physical specifications."

Corrosive gas concentration requirements

About corrosive gases

Corrosive gases can accelerate corrosion and aging of metal components and even cause server failure. Table 60 describes common corrosive gases and their sources.

Table 60 Common corrosive gases and their sources

| Corrosive gas | Sources |
| --- | --- |
| Hydrogen sulfide (H2S) | Geothermal emissions, microbiological activities, fossil fuel processing, wood pulping, and sewage treatment |
| Sulfur dioxide (SO2) and sulfur trioxide (SO3) | Combustion of fossil fuel, petroleum products, automobile emissions, ore smelting, sulfuric acid manufacture, and tobacco smoke |
| Sulfur (S) | Foundries and sulfur manufacture |
| Hydrogen fluoride (HF) | Fertilizer manufacture, aluminum manufacture, ceramics manufacture, steel manufacture, electronics device manufacture, and fossil fuel |
| Nitrogen oxides (NOx) | Automobile emissions, fossil fuel combustion, microbes, and chemical industry |
| Ammonia (NH3) | Microbes, sewage, fertilizer manufacture, and geothermal steam |
| Carbon monoxide (CO) | Combustion, automobile emissions, microbes, and tree and wood pulping |
| Chlorine (Cl2) and chlorine dioxide (ClO2) | Chlorine manufacture, aluminum manufacture, zinc manufacture, and refuse decomposition |
| Hydrochloric acid (HCl) | Automobile emissions, combustion, forest fires, and combustion of polymers related to marine environment or processes |
| Hydrobromic acid (HBr) and hydroiodic acid (HI) | Automobile emissions |
| Ozone (O3) | Atmospheric photochemical processes mainly involving nitrogen oxides and oxygenated hydrocarbons |
| Hydrocarbons (CnHn) | Automobile emissions, tobacco smoke, animal excrement, sewage treatment, and tree and wood pulping |

 

Requirements in the data center equipment room

As a best practice, make sure the corrosive gas concentration in the data center equipment room meets the requirements of severity level G1 of ANSI/ISA 71.4. The rate of copper corrosion product thickness growth must be less than 300 Å/month, and the rate of silver corrosion product thickness growth must be less than 200 Å/month.

 

 

NOTE:

Angstrom (Å) is a metric unit of length equal to one ten-billionth of a meter.

 

To meet the copper and silver corrosion rates stated in severity level G1, make sure the corrosive gases in the equipment room do not exceed the concentration limits as shown in Table 61.

Table 61 Requirements in a data center equipment room

| Corrosive gas | Concentration (ppb) |
| --- | --- |
| H2S | < 3 |
| SO2 and SO3 | < 10 |
| Cl2 | < 1 |
| NOx | < 50 |
| HF | < 1 |
| NH3 | < 500 |
| O3 | < 2 |

 

 

NOTE:

·     Part per billion (ppb) is a concentration unit. 1 ppb represents a volume-to-volume ratio of 1 to 1000000000.

·     The concentration limits are calculated based on the reaction results of the gases in the equipment room with a relative humidity less than 50%. If the relative humidity of the equipment room increases by 10%, the severity level of ANSI/ISA 71.4 to be met must also increase by 1.

 

Due to the variability of product performance under the influence of corrosive gases in equipment rooms, see the product installation guide for specific requirements regarding corrosive gas concentration.

Requirements in a non-data center equipment room

The corrosive gas concentration for a non-data center equipment room must meet the requirements of class 3C2 of IEC 60721-3-3:2002, as shown in Table 62.

Table 62 Requirements in a non-data center equipment room

| Corrosive gas | Average concentration (mg/m3) | Maximum concentration (mg/m3) |
| --- | --- | --- |
| SO2 | 0.3 | 1.0 |
| H2S | 0.1 | 0.5 |
| Cl2 | 0.1 | 0.3 |
| HCl | 0.1 | 0.5 |
| HF | 0.01 | 0.03 |
| NH3 | 1.0 | 3.0 |
| O3 | 0.05 | 0.1 |
| NOx | 0.5 | 1.0 |

 

 

NOTE:

As a best practice, control the corrosive gas concentrations in the equipment room at their average values. Make sure the corrosive gas concentrations remain at their maximum values for no more than 30 minutes per day.

 

Due to the variability of product performance under the influence of corrosive gases in equipment rooms, see the product installation guide for specific requirements regarding corrosive gas concentration.

Guidelines for controlling corrosive gases

To control corrosive gases, follow these guidelines:

·     As a best practice, do not build the equipment room in a place with a high concentration of corrosive gases.

·     Make sure the equipment room is not connected to sewer, sewage, vertical shaft, or septic tank pipelines and keep it far away from these pipelines. The air inlet of the equipment room must be away from such pollution sources.

·     Use environmentally friendly materials to decorate the equipment room. Avoid using organic materials that contain harmful gases, such as sulfur or chlorine-containing insulation cottons, rubber mats, and sound-proof cottons, and avoid using plasterboards with a high sulfur concentration.

·     Place fuel (diesel or gasoline) engines separately. Do not place them in the same equipment room with the device. Make sure the exhausted air of the engines will not flow into the equipment room or towards the air inlet of the air conditioners.

·     Place batteries separately. Do not place them in the same room with the device.

·     Employ a professional company to monitor and control corrosive gases in the equipment room regularly.

Cleanliness requirements

Buildup of mechanically active substances on the chassis might result in electrostatic adsorption, which causes poor contact of metal components and contact points. In the worst case, electrostatic adsorption can cause communication failure.

Requirements in a data center equipment room

The concentration of dust particles in the equipment room must meet the ISO 8 cleanroom standard defined by ISO 14644-1, as described in Table 63.

Table 63 Dust particle concentration limit in the equipment room

| Particle diameter | Concentration limit |
| --- | --- |
| ≥ 5 μm | ≤ 29300 particles/m3 |
| ≥ 1 μm | ≤ 832000 particles/m3 |
| ≥ 0.5 μm | ≤ 3520000 particles/m3 |

In addition, make sure no zinc whiskers are in the equipment room.

 

Due to the variability of product performance under the influence of dust buildup in equipment rooms, see the product installation guide for specific requirements regarding dust concentration.

Requirements in a non-data center equipment room

The concentration of dust particles (particle diameter ≥ 0.5 µm) must meet the requirement of the GB 50174-2017 standard, which is less than 17600000 particles/m3.

Due to the variability of product performance under the influence of dust buildup in equipment rooms, see the product installation guide for specific requirements regarding dust concentration.

Guidelines for maintaining cleanliness

To maintain cleanliness in the equipment room, follow these guidelines:

·     Keep the equipment room away from pollution sources and do not smoke or eat in the equipment room.

·     Use double-layer glass in windows and seal doors and windows with dust-proof rubber strips.

·     Use dustproof materials for floors, walls, and ceilings and use matt coating that does not produce powders.

·     Keep the equipment room clean and clean the air filters of the rack regularly.

·     Wear ESD clothing and shoe covers before entering the equipment room. Keep the ESD clothing and shoe covers clean and replace them frequently.

Grounding requirements

Correctly connecting the server grounding cable is crucial to lightning protection, anti-interference, and ESD prevention. The server can be grounded through the grounding wire of the power supply system and no external grounding cable is required.

Storage requirements

·     As a best practice, do not store an HDD for 6 months or more without powering on and using it.

·     As a best practice, do not store an SSD, M.2 SSD, or SD card for 3 months or more without powering on and using it. Long unused time increases data loss risks.

·     To store the server chassis, or an HDD, SSD, M.2 SSD, or SD card for 3 months or more, power on it every 3 months and run it for a minimum of 2 hours each time. For information about powering on and powering off the server, see "Powering on and powering off the server."

Installation tools

The table below lists the tools that you might use during installation.

Table 64 Installation tools

| Name | Description |
| --- | --- |
| T25 Torx screwdriver | Installs or removes screws inside chassis ears. A flat-head screwdriver can also be used for this purpose. |
| T30 Torx screwdriver | Installs or removes captive screws on processor heatsinks. |
| T15 Torx screwdriver (shipped with the server) | Installs or removes screws on the processor system board. |
| T10 Torx screwdriver (shipped with the server) | Installs or removes screws on chassis ears. |
| Flat-head screwdriver | Replaces the system battery. |
| Phillips screwdriver | Installs or removes screws on drive carriers. |
| Cage nut insertion/extraction tool | Inserts or extracts the cage nuts in rack posts. |
| Diagonal pliers | Clips insulating sleeves. |
| Paper knife | Removes the server's external packaging. |
| Tape measure | Measures distance. |
| Multimeter | Measures resistance and voltage. |
| ESD wrist strap | Prevents ESD when you operate the server. |
| ESD gloves | Prevents ESD when you operate the server. |
| Antistatic clothing | Prevents ESD when you operate the server. |
| Ladder | Supports high-place operations. |
| Interface cable (such as an Ethernet cable or optical fiber) | Connects the server to an external network. |
| Serial console cable | Connects the serial connector on the server to a monitor for troubleshooting. |
| Type-C to USB cable (connecting a USB Wi-Fi module or USB drive) | If you connect a third-party USB Wi-Fi module, you can use the HDM Mobile client on a mobile endpoint to access the HDM interface. If you connect a USB drive, you can download SDS log messages to the USB drive from HDM. (Support for USB Wi-Fi modules depends on the server model.) |
| Monitor | Displays the output from the server. |
| Temperature and humidity meter | Displays the current temperature and humidity in the equipment room. |
| Oscilloscope | Displays the variation of voltage over time in waveforms. |

 

Installing or removing the server

(Optional) Installing rails

Install the inner rails to the server and the outer rails to the rack. For information about installing the rails, see the document shipped with the rails.

Rack-mounting the server

1.     Slide the server into the rack. For more information about how to slide the server into the rack, see the installation guide for the rails.

Figure 56 Rack-mounting the server


 

2.     Secure the server. Push the server until the chassis ears are flush against the rack front posts. Unlock the latches of the chassis ears, fasten the captive screws inside the chassis ears, and lock the latches.

Figure 57 Securing the server


 

(Optional) Installing cable management brackets

Install cable management brackets if the server is shipped with cable management brackets. For information about how to install cable management brackets, see the installation guide shipped with the brackets.

Connecting external cables

Connecting a mouse, keyboard, and monitor

About this task

Perform this task before you configure the BIOS, HDM, FIST, or RAID on the server or enter the operating system of the server.

 

 

NOTE:

The server-compatible operating systems come with the onboard VGA driver. If a higher display resolution is required, you can search for VGA in the Software Downloads > Servers section of the H3C website to obtain and update the onboard VGA driver.

 

The server provides two DB15 VGA connectors for connecting a monitor. One is on the front panel and the other is on the rear panel.

The server is not shipped with a standard PS2 mouse and keyboard. To connect a PS2 mouse and keyboard, you must prepare a USB-to-PS2 adapter.

Procedure

1.     Connect one plug of a VGA cable to a VGA connector on the server, and fasten the screws on the plug.

Figure 58 Connecting a VGA cable

 

2.     Connect the other plug of the VGA cable to the VGA connector on the monitor, and fasten the screws on the plug.

3.     Insert the USB connector of the USB-to-PS2 adapter to a USB connector on the server. Then, insert the PS2 connectors of the mouse and keyboard into the PS2 receptacles of the adapter.

Figure 59 Connecting a PS2 mouse and keyboard by using a USB-to-PS2 adapter

 

Connecting an Ethernet cable

About this task

Perform this task before you set up a network environment or log in to the HDM management interface through the HDM network port to manage the server.

Procedure

1.     Determine the network port on the server.

¡     To connect the server to the external network, use the Ethernet port on the network adapter.

¡     To log in to the HDM management interface:

-     Use the HDM dedicated network port. For the location of the HDM dedicated network port, see "System board layout."

-     If the server is configured with an OCP network adapter, you can also use the HDM shared network port on the OCP network adapter to log in to the HDM management interface. For the location of the OCP network adapter, see "System board layout."

2.     Determine the type of the Ethernet cable.

Verify the connectivity of the cable by using a link tester. If you are replacing the Ethernet cable, make sure the new cable is the same type as, or compatible with, the old cable.

3.     Label the Ethernet cable by filling in the names and numbers of the server and the peer device on the label.

¡     If you are replacing the Ethernet cable, label the new cable with the same number as the number of the old cable.

¡     As a best practice, use labels of the same kind for all cables.

4.     Connect one end of the Ethernet cable to the network port on the server and the other end to the peer device.

Figure 60 Connecting an Ethernet cable

 

5.     Verify network connectivity.

After powering on the server, use the ping command to test the network connectivity, as shown in the example after this procedure. If the connection between the server and the peer device fails, verify that the Ethernet cable is securely connected.

6.     Secure the Ethernet cable. For information about how to secure cables, see "Securing cables."
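A minimal sketch of the connectivity test in step 5; the IP address is a hypothetical example for the peer device:

    ping 192.168.10.1        # continuous replies indicate connectivity; press Ctrl+C to stop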

Connecting a power cord

Restrictions and guidelines

·     To avoid damage to the equipment or even bodily injury, use the power cord that ships with the server.

·     Before connecting the power cord, make sure the server and components are installed correctly.

Procedure

 

NOTE:

Multiple types of wire fasteners can be used for securing the power cord. In this procedure, a cable clamp is used.

 

1.     Insert the power cord plug into the power receptacle of a power supply at the rear panel.

Figure 61 Connecting a power cord

 

2.     Connect the other end of the power cord to the power source, for example, the power strip on the rack.

3.     Secure the power cord to avoid unexpected disconnection of the power cord.

a.     If the cable clamp is positioned so close to the power receptacle that it blocks connection of the power cord plug, press down the tab on the cable mount and slide the clip backward.

Figure 62 Sliding the cable clamp backward

 

b.     Open the cable clamp, place the power cord through the opening in the cable clamp, and then close the cable clamp, as shown by callouts 1, 2, 3, and 4 in Figure 63.

Figure 63 Securing the AC power cord

 

c.     Slide the cable clamp forward until it is flush against the edge of the power cord plug, as shown in Figure 64.

Figure 64 Sliding the cable clamp forward

 

Securing cables

Securing cables to cable management brackets

For information about how to install cable management brackets, see the installation guide shipped with the brackets.

Securing cables to slide rails by using cable straps

 

NOTE:

·     You can secure cables to either left slide rails or right slide rails. As a best practice for cable management, secure cables to left slide rails.

·     When multiple cable straps are used in the same rack, stagger the strap locations so that adjacent straps are offset from each other when viewed from top to bottom. This positioning enables the slide rails to slide easily in and out of the rack.

 

1.     Hold the cables against a slide rail.

2.     Wrap the strap around the slide rail and loop the end of the cable strap through the buckle. Dress the cable strap to ensure that the extra length and buckle part of the strap are facing outside of the slide rail.

Figure 65 Securing cables to a slide rail


 

Cabling guidelines

·     For heat dissipation, make sure no cables block the inlet or outlet air vents of the server.

·     To easily identify ports and connect/disconnect cables, make sure the cables do not cross.

·     Label the cables for easy identification of the cables.

·     Wrap unused cables onto an appropriate position on the rack.

·     To avoid electric shock, fire, or damage to the equipment, do not connect communication equipment to RJ-45 Ethernet ports on the server.

·     To avoid damage to cables when extending the server out of the rack, do not route the cables too tightly if you use cable management brackets.

Removing the server from a rack

1.     Power off the server.

2.     Disconnect all peripheral cables from the server.

3.     Extend the server from the rack.

4.     Open the latches of the chassis ears. Loosen the captive screws inside the chassis ears, and slide the server out of the rack.

Figure 66 Extending the server from the rack


 

5.     Place the server on a clean, stable surface.

Powering on and powering off the server

 

NOTE:

If the server is connected to external storage devices, make sure the server is the first device to power off and then the last device to power on. This restriction prevents the server from mistakenly identifying the external storage devices as faulty devices.

 

Powering on the server

Prerequisites

·     Install the server and internal components correctly.

·     Connect the server to a power source.

·     To power on the server immediately after shutdown, as a best practice for all internal components to function properly, wait a minimum of 30 seconds until the HDD is completely still and electronic components are fully powered down.

Procedure

Powering on the server by pressing the power on/standby button

Press the power on/standby button to power on the server.

The server exits standby mode and supplies power to the system. The system power LED changes from steady amber to flashing green and then to steady green.

Powering on the server from the HDM Web interface

1.     Log in to HDM. For more information, see the HDM2 user guide for the server.

2.     Navigate to System > Power Management.

3.     Click Power on.

Powering on the server from the remote console interface

1.     Log in to HDM. For more information, see the HDM2 user guide for the server.

2.     Log in to a remote console and then power on the server. For more information, see HDM2 online help.

Configuring automatic power-on

You can configure automatic power-on from HDM or the BIOS.

·     To configure automatic power-on from HDM:

a.     Log in to HDM. For more information, see the HDM2 user guide for the server.

b.     Navigate to System > Power Management.

c.     Select Always power on as the power-on policy and then click OK.

·     To configure automatic power-on from the BIOS:

a.     Log in to the BIOS. For information about how to log in to the BIOS, see the BIOS user guide for the server.

b.     Select Server > AC Restore Settings, and then press Enter.

c.     Select Always Power On, and then press Enter.

d.     Press F4 to save the configuration.

Powering off the server

Prerequisites

·     Back up all critical data.

·     Make sure all services have stopped or have been migrated to other servers.

Procedure

Powering off the server from its operating system

1.     Connect a monitor, mouse, and keyboard to the server.

2.     Shut down the operating system of the server.

3.     Disconnect all power cords from the server.

Powering off the server by pressing the power on/standby button

1.     Press the power on/standby button and wait for the system power LED to turn steady amber.

2.     Disconnect all power cords from the server.

Powering off the server forcibly by pressing the power on/standby button

1.     Press and hold the power on/standby button until the system power LED turns steady amber.

 

 

NOTE:

This method forces the server to enter standby mode without properly exiting applications and the operating system. Use this method only when the server system crashes, for example, when a process gets stuck.

 

2.     Disconnect all power cords from the server.

Powering off the server from the HDM Web interface

1.     Log in to HDM. For more information, see the HDM2 user guide for the server.

2.     Navigate to System > Power Management.

3.     Click Graceful power-off.

4.     Disconnect all power cords from the server.

Powering off the server from the remote console interface

1.     Log in to HDM. For more information, see the HDM2 user guide for the server.

2.     Log in to a remote console and then power off the server. For more information, see HDM2 online help.

3.     Disconnect all power cords from the server.

Configuring the server

Configuration flowchart

Figure 67 Configuration flowchart

 

Powering on the server

1.     Power on the server. For more information, see "Powering on the server."

2.     Verify that the health LED on the front panel is steady green, which indicates that the system is operating correctly. For more information about the health LED status, see "LEDs and buttons."

Configuring basic BIOS settings

 

NOTE:

The BIOS setup utility screens are subject to change without notice.

 

You can set the server boot order and the BIOS passwords from the BIOS setup utility of the server.

Setting the server boot order

The server has a default boot order. You can change the server boot order from the BIOS. For the default boot order and the procedure of changing the server boot order, see the BIOS user guide for the server.

Setting the BIOS passwords

BIOS passwords include a boot password as well as an administrator password and a user password for the BIOS setup utility. By default, no passwords are set.

To prevent unauthorized access and changes to the BIOS settings, set both the administrator and user passwords for accessing the BIOS setup utility. Make sure the two passwords are different.

After setting the administrator password and user password for the BIOS setup utility, you must enter the administrator password or user password each time you access the system.

·     To obtain administrator privileges, enter the administrator password.

·     To obtain the user privileges, enter the user password.

For the difference between the administrator and user privileges and guidelines for setting the BIOS passwords, see the BIOS user guide for the server.

Configuring the RAID

The supported RAID levels and RAID configuration methods vary by storage controller model. For more information, see the storage controller user guide for the server.

Installing the operating system and hardware drivers

Installing an operating system

For the server compatibility with the operating systems, visit the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66.

Install a compatible operating system on the server by following the procedures described in the operating system installation guide for the server.

Installing hardware drivers

For newly installed hardware to operate correctly, the operating system must have the required hardware drivers.

To install a hardware driver, see the operating system installation guide for the server.

 

 

NOTE:

To avoid hardware unavailability caused by an update failure, always back up the drivers before you update them.

 

Updating firmware

 

NOTE:

Verify the hardware and software compatibility before firmware upgrade. For information about the hardware and software compatibility, see the software release notes.

 

You can update the following firmware from UniSystem or HDM:

·     HDM.

·     BIOS.

·     CPLD.

·     BPCPLD.

·     PSU.

·     BMCCPLD.

For information about the update procedures, see the firmware update guide for the server.

Replacing hardware options

 

NOTE:

·     If you are replacing multiple hardware options, read their replacement procedures and identify similar steps to streamline the entire replacement procedure.

·     This document contains procedures for replacing and installing hardware options. If the replacement and installation operations are similar, only replacement procedures are illustrated. If you refer to a replacement procedure to install a hardware option, remove the corresponding blank first.

 

Replacing a processor

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damages. Make sure the pins are not damaged (bent for example) and do not contain any foreign objects.

·     Read the processor installation guidelines in "Processors."

Procedure

 

NOTE:

·     To avoid damage to a processor or the system board, only H3C authorized or professional server engineers can install, replace, or remove a processor.

·     The pins in the processor sockets are very fragile and prone to damage. Install a protective cover if a processor socket is empty.

·     For the server to operate correctly, make sure the processor is always in position. For the location of a processor, see "System board layout."

·     To prevent ESD, wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

 

Removing a processor

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     (Optional.) Remove the rear 4GPU module.

5.     Remove the chassis air baffle. Lift the air baffle out of the chassis.

6.     Remove the processor heatsink:

a.     Loosen the captive screws strictly in the sequence instructed by the label on the heatsink. An incorrect sequence might cause the captive screws to fall off.

b.     Lift the heatsink to remove it from the chassis.

7.     Use an isopropanol wipe to clean the residual thermal grease from the processor top and the heatsink. Make sure the processor and the heatsink are clean.

8.     Open the processor cover. Use a T20 Torx screwdriver to loosen the screw on the processor cover. The processor cover then automatically pops open.

9.     Open the processor frame. Holding the metal handles with your index fingers, pull up the processor frame until it stops.

10.     Remove the processor. Pinching the protruding part of the processor carrier, pull the carrier with the processor out of the frame.

 


CAUTION:

To avoid component damage, do not drop the processor carrier with the processor or touch the surface of the processor.

 

Installing a processor

1.     Install the processor carrier with the processor. Pinching the protruding part of the processor carrier, insert it into the processor frame to secure it into place.

 


CAUTION:

To avoid component damage, do not drop the processor carrier with the processor or touch the surface of the processor.

 

2.     Close the processor frame. Close the processor frame slowly, and then hold both sides of the frame with your hands until it locks in place.

3.     Secure the processor cover. Close the processor cover slowly and use a T20 Torx screwdriver to fasten the screw.

4.     Apply thermal grease to the heatsink. Use the thermal grease injector to apply a total of 0.6 ml of thermal grease in five dots on the bottom of the heatsink, 0.12 ml per dot.

5.     Install a heatsink.

a.     Place the heatsink down onto the processor socket.

b.     Use a T20 Torx screwdriver to fasten the captive screws on the heatsink strictly in the sequence shown on the heatsink label. An incorrect sequence might cause the screws to come loose.

 


CAUTION:

·     To avoid poor contact between the processor and the system board or damage to the pins in the processor socket, tighten the screws to a torque of 1.6 N·m (16.1 kgf·cm).

·     Paste the bar code label supplied with the processor over the original label on the heatsink. This step is required for obtaining H3C processor servicing.

 

6.     Install the chassis air baffle.

7.     (Optional.) Install the rear 4GPU module.

8.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

9.     Rack-mount the server. For more information, see "Installing or removing the server."

10.     Connect the power cord.

11.     Power on the server. For more information, see "Powering on and powering off the server."

Replacing a DIMM

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

·     Read the memory installation guidelines in "Memory."

Procedure

Removing a DIMM

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     (Optional.) Remove the rear 4GPU module.

5.     Remove the air baffle. Lift the air baffle out of the chassis.

6.     Remove a DIMM. Open the DIMM slot latches and pull the DIMM out of the slot to remove the DIMM.

 


CAUTION:

To avoid damage to DIMMs or the system board, make sure the server has been powered off and disconnected from the power cord for at least 20 seconds.

 

Installing a DIMM

1.     Install a DIMM. Align the notch on the DIMM with the connector key in the DIMM slot and press the DIMM into the socket until the latches lock the DIMM in place.

2.     Install the air baffle.

3.     (Optional.) Install the rear 4GPU module.

4.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

5.     Rack-mount the server. For more information, see "Installing or removing the server."

6.     Connect the power cord.

7.     Power on the server. For more information, see "Powering on and powering off the server."

8.     (Optional.) To modify the memory mode, enter the BIOS and configure the memory mode as described in the BIOS user manual for the server.

Verifying the installation

Use one of the following methods to verify that the DIMM is installed correctly:

·     Using the operating system:

¡     In Windows, select Run in the Start menu, enter msinfo32, and verify the memory capacity of the DIMM.

¡     In Linux, execute the cat /proc/meminfo command to verify the memory capacity.

·     Using HDM:

Log in to HDM and verify the memory capacity of the DIMM. For more information, see HDM3 online help.

·     Using the BIOS:

Access the BIOS, select Advanced > Socket Configuration > Memory Configuration > Memory Topology, and press Enter. Then, verify the memory capacity of the DIMM.

If the displayed memory capacity is inconsistent with the actual capacity, remove and reinstall the DIMM, or replace it with a new DIMM. If the DIMMs operate in Mirror mode, it is normal for the displayed capacity to be smaller than the actual capacity.
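
For example, on a Linux system the following commands provide a quick cross-check of the installed memory. The dmidecode command requires root privileges, and the grep filters are illustrations only:

# Total usable memory reported by the kernel
grep MemTotal /proc/meminfo

# Per-DIMM details such as size, speed, and slot locator
dmidecode -t memory | grep -E 'Size|Speed|Locator'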

Replacing the system board

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

Procedure

Removing the system board


CAUTION:

To prevent electrostatic discharge, place the removed parts on an antistatic surface or in antistatic bags.

 

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Remove riser cards.

5.     Remove the power supplies.

6.     Remove the chassis air baffle. Lift the air baffle out of the chassis.

7.     Remove the fan modules.

8.     Remove the fan cage. Pull up the ejector levers at both sides of the fan cage and lift the fan cage to remove it from the chassis.

9.     Remove all the DIMMs installed on the system board.

10.     Remove the processor heatsinks.

11.     Remove processors and install protective covers over the empty processor sockets.

12.     Disconnect all cables on the system board.

13.     Remove the system board.

a.     Loosen the captive screws on the system board.

b.     Hold the captive screws on the system board and slide the system board toward the server front to disengage the system board from the server management module. Then, lift the system board out of the chassis.

Installing the system board

1.     Install the system board.

a.     Slowly place the system board in the chassis. Then, hold the captive screws on the system board and slide the system board toward the server rear until the server management module connectors are successfully attached to the system board.

 

 

NOTE:

The system board is securely seated if you cannot lift it by the captive screws on the system board.

 

b.     Fasten the captive screws on the system board.

2.     Remove the protective covers from the processor sockets and install the processors.

3.     Connect cables to the system board.

4.     Install the removed heatsinks.

5.     Install the removed DIMMs.

6.     Install the fan cage and fan modules.

7.     Install the chassis air baffle.

8.     Install the removed power supplies.

9.     Install riser cards and connect cables to riser cards.

10.     Install the access panel.

11.     Rack-mount the server. For more information, see "Installing or removing the server."

12.     Connect the power cord.

13.     Power on the server. For more information, see "Powering on and powering off the server."

Replacing the server management module

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

·     Read the server management module installation guidelines in "Server management module."

Procedure

Removing the server management module

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Remove riser cards.

5.     Remove the power supplies.

6.     Remove the chassis air baffle. Lift the air baffle out of the chassis.

7.     Remove the fan modules.

8.     Remove the fan cage. Pull up the ejector levers at both sides of the fan cage and lift the fan cage to remove it from the chassis.

9.     Remove all the DIMMs installed on the system board.

10.     Remove the processor heatsinks.

11.     Remove processors and install protective covers over the empty processor sockets.

12.     Disconnect all cables on the system board.

13.     Remove the system board.

a.     Loosen the captive screws on the system board.

b.     Hold the captive screws on the system board and slide the system board toward the server front to disengage the system board from the server management module. Then, lift the system board out of the chassis.

14.     Remove the server management module. Slide the management module toward the server front to disengage its connectors from the rear panel. Then, lift the management module out of the chassis.

Installing the server management module

1.     Install the server management module. Slowly place the management module into the chassis. Then, slide the management module toward the server rear until the connectors on the module are securely seated.

2.     Install the system board.

a.     Slowly place the system board in the chassis. Then, hold the captive screws on the system board and slide the system board toward the server rear until the server management module connectors are successfully attached to the system board.

 

 

NOTE:

The system board is securely seated if you cannot lift it by the captive screws on the system board.

 

b.     Fasten the captive screws on the system board.

3.     Remove the protective covers from the processor sockets and install the processors.

4.     Connect cables to the system board.

5.     Install the removed heatsinks.

6.     Install the removed DIMMs.

7.     Install the fan cage and fan modules.

8.     Install the chassis air baffle.

9.     Install the removed power supplies.

10.     Install riser cards and connect cables to riser cards.

11.     Install the access panel.

12.     Rack-mount the server. For more information, see "Installing or removing the server."

13.     Connect the power cord.

14.     Power on the server. For more information, see "Powering on and powering off the server."

Replacing a SAS/SATA drive

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

·     Identify the installation location of the drive to be replaced in the server.

·     Identify the RAID array information of the drive to be replaced. To replace a drive in a non-redundant RAID array, back up data in the RAID array if the old drive is full or the new drive is of a different model.

·     Read the drive installation guidelines in "SAS/SATA drives."

Procedure

 

NOTE:

SAS/SATA drives attached to a storage controller support hot swapping after the server enters the BIOS or the operating system.

 

Removing a SAS/SATA drive

1.     Remove the security bezel, if any. Unlock the security bezel and remove it.

2.     Observe the drive LEDs to verify that the drive is not selected by the storage controller and is not performing a RAID migration or rebuilding. For more information about the LEDs, see "Drive LEDs."

3.     Remove the drive. Press the button on the drive panel to release the locking lever and pull the drive out of the slot. For an HDD, pull the drive 3 cm (1.18 in) out of the slot. Wait for a minimum of 30 seconds for the drive to stop rotating, and then pull the drive out of the slot.

4.     Remove the drive from the drive carrier. Remove the screws that secure the drive, and then take the drive off the carrier.

Installing a SAS/SATA drive

 

NOTE:

As a best practice, install drives that do not contain RAID information.

 

1.     Attach the drive to the drive carrier. Place the drive in the carrier and then use four screws to secure the drive into place.

2.     Install a drive. Insert the drive into the slot and push it gently until you cannot push it further, and then close the locking lever.

3.     Install the security bezel, if any. Press and hold the latch at one end of the bezel, close the security bezel, and then release the latch to secure the bezel into place. Insert the key provided with the bezel into the lock on the bezel and lock the security bezel.

4.     If the installed drive contains RAID information, clear the information before you configure RAIDs, as shown in the example after this list.

5.     To configure RAID settings after the drive is replaced, see the storage controller user guide for the server.
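
The following is a minimal sketch of clearing leftover (foreign) RAID configuration from a replaced drive, assuming a Broadcom/LSI-based storage controller managed with the storcli utility; the controller index /c0 is an illustration only. For the tools and procedures that apply to your controller, see the storage controller user guide for the server:

# Display any foreign configuration carried by the new drive
storcli /c0/fall show

# Delete the foreign configuration so the drive can be used in a new array
storcli /c0/fall delete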

Verifying the installation

Use one of the following methods to verify that the drive has been replaced correctly:

·     Verify the drive properties (including capacity) by using one of the following methods:

¡     Log in to HDM. For more information, see HDM3 online help.

¡     Access the BIOS. For more information, see the storage controller user guide for the server.

¡     Access the CLI or GUI of the server.

·     Observe the drive LEDs to verify that the drive is operating correctly. For more information about drive LEDs, see "Drive LEDs."
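
As an example of the CLI method on a Linux system (the smartmontools package and the device name /dev/sdb are assumptions for illustration):

# List drives with capacity, model, and serial number
lsblk -o NAME,SIZE,MODEL,SERIAL

# Check the overall health of the replaced drive
smartctl -H /dev/sdb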

Adding an NVMe drive

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

·     Identify the installation location of the drive to be installed in the server.

·     If the new drive will join an existing RAID array, identify the RAID array information. To add a drive to a non-redundant RAID array, back up data in the RAID array first.

·     Read the drive installation guidelines in "NVMe drives."

Procedure

 

NOTE:

Only some operating systems support the hot insertion of NVMe drives. For more information, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66.

 

1.     Remove the security bezel, if any. Unlock the security bezel and remove it.

2.     Attach the drive to the drive carrier. Place the drive in the carrier and then use four screws to secure the drive into place.

3.     Install an NVMe drive. Push the drive into the drive slot and close the locking lever on the drive panel.

4.     Install the security bezel, if any. Press and hold the latch at one end of the bezel, close the security bezel, and then release the latch to secure the bezel into place. Insert the key provided with the bezel into the lock on the bezel and lock the security bezel.

Verifying the installation

Use one of the following methods to verify that the drive is installed correctly:

·     Verify the drive properties (including capacity) by using one of the following methods:

¡     Access HDM. For more information, see HDM3 online help.

¡     Access the BIOS. For more information, see the BIOS user guide for the server.

¡     Access the CLI or GUI of the server.

·     Observe the drive LEDs to verify that the drive is operating correctly. For more information about the LEDs, see "Drive LEDs."
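
For example, on a Linux system (assuming the nvme-cli package is installed), the following commands show whether the new drive is visible and report its model and capacity:

# List all NVMe drives with model, serial number, and capacity
nvme list

# Confirm that the kernel created a block device for the drive
lsblk | grep nvme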

Replacing an NVMe drive

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

·     Identify the installation location of the drive to be replaced in the server.

·     Identify the RAID array information of the drive to be replaced. To replace a drive in a non-redundant RAID array, back up data in the RAID array if the old drive is full or the new drive is of a different model.

·     Read the drive installation guidelines in "NVMe drives."

Procedure

 

NOTE:

·     Only some operating systems support the hot swapping of NVMe drives. For more information, use the component compatibility lookup tool at http://www.h3c.com/en/home/qr/default.htm?id=66.

·     If the operating system does not support hot swapping of NVMe drives, power off the server first. For more information, see "Powering on and powering off the server." For an example of preparing a drive for hot removal from within the operating system, see the example after the removal steps below.

 

Removing an NVMe drive

1.     Remove the security bezel, if any. Unlock the security bezel and remove it.

2.     Remove the NVMe drive. Press the button on the drive panel to release the locking lever and pull the drive out of the slot.

3.     Remove the drive from the drive carrier. Remove the screws that secure the drive, and then take the drive off the carrier.
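
If your operating system supports hot swapping, the following is a minimal sketch of preparing an NVMe drive for removal on Linux before you pull it, as mentioned in the note above. The mount point /mnt/data and the hotplug slot number 12 are illustrations only, and the /sys/bus/pci/slots interface is present only when the platform exposes PCIe hotplug slots to the kernel:

# Stop I/O to the drive and unmount its file systems
umount /mnt/data

# Power down the hotplug slot that holds the drive, if the slot is exposed
echo 0 > /sys/bus/pci/slots/12/power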

Installing an NVMe drive

1.     Attach the drive to the drive carrier. Place the drive in the carrier and then use four screws to secure the drive into place.

2.     Install an NVMe drive. Push the drive into the drive slot and close the locking lever on the drive panel.

3.     Install the security bezel, if any. Press and hold the latch at one end of the bezel, close the security bezel, and then release the latch to secure the bezel into place. Insert the key provided with the bezel into the lock on the bezel and lock the security bezel.

Verifying the installation

Use one of the following methods to verify that the drive is installed correctly:

·     Verify the drive properties (including capacity) by using one of the following methods:

¡     Access HDM. For more information, see HDM3 online help.

¡     Access the BIOS. For more information, see the BIOS user guide for the server.

¡     Access the CLI or GUI of the server.

·     Observe the drive LEDs to verify that the drive is operating correctly. For more information about the LEDs, see "Drive LEDs."

Replacing a drive backplane

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

Procedure

Removing a drive backplane

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the drives attached to the backplane.

4.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

5.     Remove the fan modules.

6.     Remove the fan cage. Press the unlocking clips at both sides of the fan cage, and lift the fan cage out of the chassis.

7.     Disconnect cables from the backplane.

8.     Remove the drive backplane. Loosen the captive screws that secure the backplane, and then lift the backplane out of the chassis.

Installing a drive backplane

1.     Install a drive backplane. Place the backplane in the slot and then fasten the captive screws.

2.     Connect cables to the drive backplane.

3.     Install the fan cage.

4.     Install the fan modules.

5.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

6.     Install the removed drives.

7.     Rack-mount the server. For more information, see "Installing or removing the server."

8.     Connect the power cord.

9.     Power on the server. For more information, see "Powering on and powering off the server."

Installing a rear drive cage

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

Procedure

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Remove the PCIe riser card blank. Lift the blank to remove it from the chassis.

5.     For a 2SFF UniBay drive cage, install a bracket:

a.     Align the guide pin on the bracket with the notch in the chassis.

b.     Place the bracket in the chassis.

c.     Use screws to secure the bracket.

6.     Install the rear drive cage:

a.     Place the drive cage in the chassis.

b.     Use screws to secure the drive cage.

7.     Connect cables to the drive cage.

8.     Install the blank. Aligning the guide pins on the blank with the notches in the chassis, insert the blank into the slot.

9.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

10.     Rack-mount the server. For more information, see "Installing or removing the server."

11.     Connect the power cord.

12.     Power on the server. For more information, see "Powering on and powering off the server."

Replacing riser cards and PCIe modules

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

·     Read the installation guidelines for riser cards and PCIe modules in "Riser cards and PCIe modules."

Procedure

Removing a riser card and a PCIe module

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Disconnect all cables that hinder the replacement, if any.

5.     Remove the riser card installed with a PCIe module. Pressing the unlocking button, lift the riser card out of the chassis.

6.     Remove the PCIe module from the riser card. Open the retaining latch for the PCIe slot, and pull the PCIe module out of the slot.

Installing a riser card and a PCIe module

1.     Install the PCIe module on the riser card:

a.     Remove the PCIe module blank. Open the retaining latch for the PCIe slot, and then pull out the blank.

b.     Install the PCIe module to the riser card. Insert the PCIe module into the PCIe slot, and close the retaining latch.

2.     Install the riser card to the server:

a.     Remove the riser card blank from the target PCIe riser connector. Lift the blank to remove it from the chassis.

b.     Install the riser card onto the PCIe riser connector. Pressing the unlocking button, seat the riser card on the connector. Then, release the unlocking button and make sure it is in the locked position.

3.     Connect cables to the riser card or PCIe modules, if any.

4.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

5.     Rack-mount the server. For more information, see "Installing or removing the server."

6.     Connect the power cord.

7.     Power on the server. For more information, see "Powering on and powering off the server."

Installing PCIe modules and a riser card in PCIe riser bay 3

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

·     Read the installation guidelines for PCIe riser cards in "Riser cards and PCIe modules."

Procedure

1.     Identify the location of the PCIe riser connector. For more information, see "System board layout."

2.     Power off the server. For more information, see "Powering on and powering off the server."

3.     Remove the server from the rack. For more information, see "Removing the server from a rack."

4.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

5.     Remove the PCIe riser card blank. Lift the blank to remove it from the chassis.

6.     Assemble the Riser 3 assembly module in advance as required.

7.     Install a PCIe module to the Riser 3 assembly module:

a.     Remove the PCIe module blank. Loosen the screws on the PCIe module blank, and then remove the PCIe module blank.

b.     Install the PCIe module to the riser card. Insert the PCIe module into the PCIe slot, and fasten the screws to secure the PCIe module.

8.     Install the support bracket:

a.     Align the guide pin on the bracket with the notch in the chassis.

b.     Place the bracket in the chassis.

c.     Use screws to secure the bracket.

9.     Install the Riser 3 assembly module with the PCIe module in PCIe riser bay 3.

10.     Connect cables on the Riser 3 assembly module.

11.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

12.     Rack-mount the server. For more information, see "Installing or removing the server."

13.     Connect the power cord.

14.     Power on the server. For more information, see "Powering on and powering off the server."

Installing PCIe modules and a riser card in PCIe riser bay 4

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

·     Read the installation guidelines for PCIe riser cards in "Riser cards and PCIe modules."

Procedure

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Remove the PCIe riser card blank. Lift the blank to remove it from the chassis.

5.     Assemble the Riser 4 assembly module in advance as required.

6.     Install a PCIe module to the Riser 4 assembly module:

a.     Remove the PCIe module blank. Loosen the screws on the PCIe module blank, and then remove the PCIe module blank.

b.     Install the PCIe module to the riser card. Insert the PCIe module into the PCIe slot, and fasten the screws to secure the PCIe module.

7.     Install the support bracket:

a.     Align the guide pin on the bracket with the notch in the chassis.

b.     Place the bracket in the chassis.

c.     Use screws to secure the bracket.

8.     Install the Riser 4 assembly module with the PCIe module in PCIe riser bay 4.

9.     Connect cables for the Riser 4 assembly module.

10.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

11.     Rack-mount the server. For more information, see "Installing or removing the server."

12.     Connect the power cord.

13.     Power on the server. For more information, see "Powering on and powering off the server."

Replacing a storage controller and a power fail safeguard module

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

·     To replace the storage controller with a controller of the same model, make sure the following configurations remain the same after replacement:

¡     Storage controller location and cabling method.

¡     Storage controller model, operating mode, and firmware version.

¡     BIOS boot mode.

·     To replace the storage controller with a controller of a different model, back up data in the drives of the storage controller and clear RAID configuration.

·     Read the installation guidelines for storage controllers and power fail safeguard modules in "Storage controllers and power fail safeguard modules."

Procedure

Removing a standard storage controller and a power fail safeguard module

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Disconnect all cables from the standard storage controller.

5.     Remove the standard storage controller:

a.     Remove the riser card where the standard storage controller resides. Lift the riser card out of the chassis.

b.     Remove the standard storage controller from the riser card. Open the retaining latch for the PCIe slot, and pull the PCIe module out of the slot.

6.     Remove the power fail safeguard module, if any.

7.     Remove the supercapacitor, if any. Open the protective cover over the supercapacitor, and take the supercapacitor out of the holder.

Installing a standard storage controller and a power fail safeguard module

1.     (Optional.) Install the supercapacitor. Place the supercapacitor into the holder as instructed on the holder, and close the protective cover.

2.     Install the standard storage controller to the riser card. Insert the standard storage controller into the PCIe slot, and then close the retaining latch on the riser card.

3.     Install the riser card to the server.

4.     Connect the data cables between the standard storage controller and the drive backplane.

5.     Install the removed power fail safeguard module or supercapacitor. Connect the supercapacitor extension cable to the flash card. For more information, see "Connecting the supercapacitor cable."

6.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

7.     Rack-mount the server. For more information, see "Installing or removing the server."

8.     Connect the power cord.

9.     Power on the server. For more information, see "Powering on and powering off the server."

10.     If you replace a storage controller with a controller of a different model, configure RAID settings for the drives managed by the new controller. For more information, see the storage controller user guide for the server.

Replacing a GPU module

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

·     Read the GPU installation guidelines in "GPUs."

Procedure

Removing a GPU module

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Disconnect all cables that hinder the replacement, if any.

5.     Remove the riser card where the GPU module resides. Pressing the unlocking button on the riser card, lift the riser card out of the chassis.

6.     Remove the GPU module from the riser card:

a.     Disconnect the cable from the GPU module.

b.     Open the retaining latch on the riser card, and pull the GPU module out from the slot.

(Optional) Adjusting the form of the riser card

 

NOTE:

To install a GPU module of a different length from the removed one, adjust the form of the riser card.

 

1.     Remove the screws on the riser card.

2.     Adjust the riser card length to the long form or short form as needed.

3.     Fasten the screws to secure the riser card to its new form.

4.     Replace the air baffle with one suitable for the new form of the riser card.

Installing a GPU module

1.     Install a GPU module on the riser card:

a.     Insert the GPU module into the PCIe slot, and then close the retaining latch on the riser card.

b.     (Optional.) Connect the power cord to the power connector on the GPU module according to the cable label.

2.     Reconnect other cables to the riser card if required.

3.     Install the riser card to the server. Pressing the unlocking button, seat the riser card on the PCIe riser connector. Then, release the unlocking button and make sure it is in the locked position.

4.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

5.     Rack-mount the server. For more information, see "Installing or removing the server."

6.     Connect the power cord.

7.     Power on the server. For more information, see "Powering on and powering off the server."

Replacing a standard PCIe network adapter

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

·     Read the network adapter installation guidelines in "Network adapters."

Procedure

Removing a standard PCIe network adapter

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Disconnect cables from the standard PCIe network adapter.

3.     Remove the server from the rack. For more information, see "Removing the server from a rack."

4.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

5.     Disconnect all cables that hinder the replacement, if any.

6.     Remove the riser card that holds the PCIe network adapter. Lift the riser card out of the chassis.

7.     Remove the PCIe network adapter from the riser card. Loosen the captive screws on the riser card and pull the PCIe network adapter out of the slot.

Installing a standard PCIe network adapter

For more information, see "Installing a riser card and a PCIe module."

Installing OCP network adapter 1

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

Procedure

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Remove the rear drive support bracket next to OCP network adapter slot 1 on the system board. Loosen the screws that secure the bracket, and remove the bracket.

5.     Secure the connector end of the cable for OCP network adapter slot 1. Place the connector onto OCP network adapter slot 1, and use screws to secure the connector in place.

6.     Connect the cable for OCP network adapter slot 1.

7.     Install the rear drive support bracket.

8.     Remove the blank over OCP network adapter slot 1.

9.     Install an OCP network adapter. Take the network adapter out of the antistatic bag, push the network adapter into the slot slowly, and then fasten the captive screw on the network adapter.

10.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

11.     Rack-mount the server. For more information, see "Installing or removing the server."

12.     Connect the power cord.

13.     Power on the server. For more information, see "Powering on and powering off the server."

Installing OCP network adapter 2

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

Procedure

 

NOTE:

To install OCP network adapter 2, you must use the 0404A1XN cable to connect the network adapter to connectors C1-G3C and OCP2 X8 on the system board. For more information, see "Connecting cables for OCP 3.0 network adapter 2."

 

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Remove the rear drive support bracket next to OCP network adapter slot 2 on the system board. Loosen the screws that secure the bracket, and remove the bracket.

5.     Secure the connector end of the cable for OCP network adapter slot 2. Place the connector onto OCP network adapter slot 2, and use screws to secure the connector in place.

6.     Connect the cable for OCP network adapter slot 2.

7.     Install the rear drive support bracket.

8.     Remove the blank over OCP network adapter slot 2.

9.     Install the OCP network adapter. Take the network adapter out of the antistatic bag, push the network adapter into the slot slowly, and then fasten the captive screw on the network adapter.

10.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

11.     Rack-mount the server. For more information, see "Installing or removing the server."

12.     Connect the power cord.

13.     Power on the server. For more information, see "Powering on and powering off the server."

Replacing the OCP network adapter

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

·     Read the network adapter installation guidelines in "Network adapters."

Procedure

 

NOTE:

·     Some operating systems support managed hot removal of specific OCP network adapters. To replace such an OCP network adapter, you do not need to power off the server. For more information about managed hot removal, see Appendix C.

·     This section describes the procedure to replace an OCP network adapter that does not support managed hot removal.

 

Removing an OCP network adapter

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Disconnect cables from the OCP network adapter.

3.     Remove the OCP network adapter. Loosen the captive screws on the OCP network adapter and pull the OCP network adapter out of the slot.

Installing an OCP network adapter

1.     Install the OCP network adapter. Insert the OCP network adapter into the slot and fasten the captive screws on it.

2.     Connect cables to the OCP network adapter.

3.     Power on the server. For more information, see "Powering on and powering off the server."

4.     (Optional.) Configure a network port on the OCP network adapter as an HDM shared network port. OCP network adapters inserted into OCP adapter slots support NCSI. By default, port 1 on an OCP network adapter acts as the HDM shared network port. You can specify another port on the OCP network adapter as the HDM shared network port from the HDM Web interface. You can specify only one port as the HDM shared network port at a time.

Replacing a SATA M.2 SSD and the front M.2 SSD expander module

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

·     Read the installation guidelines for SATA M.2 SSDs in "M.2 SSDs."

Procedure

Removing a SATA M.2 SSD and the M.2 SSD expander module

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Remove the SATA M.2 SSD expander module that holds the SATA M.2 SSD:

a.     Disconnect the cable from the SATA M.2 SSD expander module.

b.     Remove the expander module. Remove the screws that secure the expander module and then pull the expander module out.

5.     Remove the SATA M.2 SSD. Slide the locking tab, lift the SSD, and then pull the SSD out of the slot.

Installing a SATA M.2 SSD and the M.2 SSD expander module

1.     Install the SATA M.2 SSD to the SATA M.2 SSD expander module. Insert the connector of the SSD into the socket, slide the locking tab, press the SSD to secure the SSD into place, and then release the locking tab.

2.     Install the expander module.

a.     Align the two screw holes in the expander module with the two internal threaded studs in the chassis, put the expander module into the chassis, and then use screws to secure the expander module.

b.     Connect the SATA M.2 SSD cable. For more information, see "Connecting SATA data cables for the front M.2 SSD expander module."

3.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

4.     Rack-mount the server. For more information, see "Installing or removing the server."

5.     Connect the power cord.

6.     Power on the server. For more information, see "Powering on and powering off the server."

Replacing a chassis ear

Replace a chassis ear if the ear or any of its components (for example, the I/O component or the VGA/USB connectors) fails.

Procedure

 

NOTE:

The procedure is the same for the left and right chassis ears. This section uses the left chassis ear as an example.

 

Removing a chassis ear

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack, if the space over the server is insufficient. For more information, see "Removing the server from a rack."

3.     Remove the access panel:

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Remove the fan cage. Pull up the ejector levers at both sides of the fan cage and lift the fan cage to remove it from the chassis.

5.     Remove the chassis air baffle. Lift the air baffle out of the chassis.

6.     Remove the front I/O component cable assembly:

a.     Disconnect the front I/O component cable assembly from the system board.

b.     Remove the cable protection plate. Remove the captive screws that secure the cable protection plate, press the cable protection plate and slide it toward the rear of the chassis until you cannot slide it further, and then pull out the cable protection plate.

c.     Remove the front I/O component cable assembly.

7.     Remove the chassis ear. Remove the screws that secure the left chassis ear, and then pull the chassis ear away from the chassis.

Installing a chassis ear

1.     Install a chassis ear. Attach the chassis ear to the corresponding side of the server, and use screws to secure the chassis ear into place.

2.     Install the front I/O component cable assembly:

a.     Insert the front I/O component cable assembly into the cable cutout.

b.     Install the cable protection plate on the chassis. Insert the cable protection plate along the slot and slide it toward the front of the chassis until you cannot slide it further, and then install the captive screws on the cable protection plate.

c.     Connect the front I/O component cable assembly to the front I/O connector on the system board.

3.     Install the fan cage. Place the fan cage down into the chassis.

4.     Install the chassis air baffle.

5.     Install the access panel:

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front.

c.     Press down the locking lever on the access panel until it snaps into place.

6.     Rack-mount the server. For more information, see "Installing or removing the server."

7.     Connect the power cord. For more information, see "Connecting a power cord."

8.     Power on the server. For more information, see "Powering on and powering off the server."

Replacing the air baffle

Procedure

Removing the air baffle

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Remove the air baffle. Lift the air baffle out of the chassis.

Installing the air baffle

1.     Install the air baffle. Lower the air baffle vertically into the chassis.

2.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

3.     Rack-mount the server. For more information, see "Installing or removing the server."

4.     Connect the power cord.

5.     Power on the server. For more information, see "Powering on and powering off the server."

Installing the LCD smart management module

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you install a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

Procedure

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Remove the fan cage. Pull up the ejector levers at both sides of the fan cage and lift the fan cage to remove it from the chassis.

5.     Remove the drive or drive blank from the LCD module slot.

6.     Install the LCD smart management module:

a.     Connect one end of the LCD module cable to the LCD smart management module.

b.     Push the LCD smart management module into the slot until you cannot push it any further.

c.     Connect the other end of the cable to the LCD smart management module connector on the system board.

7.     Install the fan cage. Place the fan cage down into the chassis.

8.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

9.     Rack-mount the server. For more information, see "Installing or removing the server."

10.     Connect the power cord. For more information, see "Connecting a power cord."

11.     Power on the server. For more information, see "Powering on and powering off the server."

Replacing the LCD smart management module

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and the connector does not contain any foreign objects.

Procedure

Removing the LCD smart management module

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Disconnect the power cords.

3.     Remove the server from the rack. For more information, see "Removing the server from a rack."

4.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

5.     Remove the fan cage. Pull up the ejector levers at both sides of the fan cage and lift the fan cage to remove it from the chassis.

6.     Remove the LCD smart management module:

a.     Disconnect the LCD module cable from the system board.

b.     Use a flat-head screwdriver or tweezers to press the clip of the LCD smart management module and pull the module out of the slot.

Installing the LCD smart management module

1.     Install the LCD smart management module:

a.     Connect one end of the LCD module cable to the LCD smart management module.

b.     Push the LCD smart management module into the slot until you cannot push it any further.

c.     Connect the other end of the cable to the LCD smart management module connector on the system board.

2.     Install the fan cage. Place the fan cage down into the chassis.

3.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

4.     Rack-mount the server. For more information, see "Installing or removing the server."

5.     Connect the power cord.

6.     Power on the server. For more information, see "Powering on and powering off the server."

Replacing a fan module

Removing a fan module

The fan modules are hot swappable. If sufficient space is available for replacement, you can replace a fan module without removing the server from the rack.

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Remove a fan module. Press the fan button and pull the fan module out of the slot.

Installing a fan module

1.     Install a fan module. Insert the fan module into the slot and press the fan module until it is secured in position.

2.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

3.     If the server was removed, rack-mount the server. For more information, see "Installing or removing the server."

4.     Connect the power cord if the power cord has been disconnected. For more information, see "Connecting a power cord."

5.     Power on the server if the server has been powered off. For more information, see "Powering on and powering off the server."

Installing and setting up a TCM or TPM

·     Trusted platform module (TPM) is a microchip embedded in the system board. It stores encryption information (such as encryption keys) for authenticating server hardware and software. The TPM operates with drive encryption programs such as Microsoft Windows BitLocker to provide operating system security and data protection. For information about Microsoft Windows BitLocker, visit the Microsoft website at http://www.microsoft.com.

·     Trusted cryptography module (TCM) is a hardware module of the trusted computing platform. It provides protected storage space and enables the platform to perform cryptographic calculations.

Installation and setup flowchart

Figure 68 Installation and setup flowchart

 

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and that the connector does not contain any foreign objects.

Installing the TPM or TCM module

Procedure

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Remove all riser cards that hinder the installation.

5.     Install the TCM or TPM.

a.     Press the TPM into the TPM connector on the system board.

b.     Insert the rivet pin.

c.     Insert the security rivet into the hole in the rivet pin and press the security rivet until it is firmly seated.

6.     Install the removed riser cards, if any.

7.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

8.     Rack-mount the server. For more information, see "Installing or removing the server."

9.     Connect the power cord.

10.     Power on the server. For more information, see "Powering on and powering off the server."

Cautions after installation

·     Do not remove an installed TPM or TCM. Once installed, the module becomes a permanent part of the system board.

·     To replace a failed TCM or TPM, remove the system board, and then contact H3C Support to replace the TCM or TPM together with the system board.

·     When installing or replacing hardware, H3C technicians cannot configure the TCM or TPM or enter the recovery key. For security reasons, only the user can perform these tasks.

·     When replacing the system board, do not remove the TPM or TCM from the system board. H3C will provide a TCM or TPM with a spare system board for the replacement.

·     Any attempt to remove an installed TPM or TCM from the system board breaks or disfigures the TPM or TCM security rivet. Upon locating a broken or disfigured rivet on an installed TCM or TPM, administrators should consider the system compromised and take appropriate measures to ensure the integrity of the system data.

·     H3C is not liable for blocked data access caused by improper use of the TCM or TPM. For more information, see the encryption technology feature documentation provided by the operating system.

Enabling the TCM or TPM in the BIOS

1.     Access the BIOS utility. For information about how to enter the BIOS utility, see the BIOS user guide.

2.     Select Advanced > Trusted Computing, and press Enter.

3.     Enable the TCM or TPM. By default, the TCM or TPM is enabled on the server.

¡     If the server is installed with a TPM, select TPM State > Enabled, and then press Enter.

¡     If the server is installed with a TCM, select TCM State > Enabled, and then press Enter.

4.     Log in to HDM to verify that the TCM or TPM is operating correctly. For more information, see the HDM3 online help.
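If you prefer to verify from a script rather than the HDM web interface, the following minimal Python sketch queries the reported TPM/TCM state over Redfish. It assumes your HDM firmware exposes a Redfish service at /redfish/v1; the address, credentials, and system resource path are placeholders, not values from this guide.

```python
# Minimal sketch, not an official H3C tool: query TPM/TCM state over Redfish.
# Assumes HDM exposes a Redfish service; the host, credentials, and the
# Systems/1 path are placeholders for your environment.
import requests  # third-party package: pip install requests

HDM_HOST = "https://192.168.1.100"  # placeholder HDM address
AUTH = ("admin", "password")        # placeholder credentials

def check_trusted_modules() -> None:
    """Print the interface type and state of each reported trusted module."""
    url = f"{HDM_HOST}/redfish/v1/Systems/1"
    resp = requests.get(url, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    modules = resp.json().get("TrustedModules", [])
    if not modules:
        print("No TPM or TCM reported by the Redfish service.")
    for module in modules:
        interface = module.get("InterfaceType", "Unknown")
        state = module.get("Status", {}).get("State", "Unknown")
        print(f"{interface}: {state}")

if __name__ == "__main__":
    check_trusted_modules()
```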

Configuring encryption in the operating system

For more information about this task, see the encryption technology feature documentation that came with the operating system.

For more information about Microsoft Windows BitLocker drive encryption, visit the Microsoft website at http://technet.microsoft.com/en-us/library/cc732774.aspx. The recovery key/password is generated during BitLocker setup and can be saved and printed after BitLocker is enabled. Always retain the recovery key/password when you use BitLocker. The recovery key/password is required to enter recovery mode after BitLocker detects a possible compromise of system integrity or a firmware or hardware change. For security purposes, follow these guidelines when retaining the recovery key/password (a scripted backup example follows this list):

·     Always store the recovery key/password in multiple locations.

·     Always store copies of the recovery key/password away from the server.

·     Do not save the recovery key/password on the encrypted hard drive.
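As one way to keep copies of the recovery information away from the server, the following Python sketch captures the protector details that the built-in Windows manage-bde tool prints for a volume and writes them to a remote share. It is an illustration only: run it from an elevated prompt in the server's operating system, and treat the output path as a placeholder for a location of your own.

```python
# Hedged sketch: export BitLocker protector details (including the numerical
# recovery password) for a volume to a file stored away from the server.
# Run with administrator privileges on Windows; the UNC path is a placeholder.
import subprocess
from pathlib import Path

OUTPUT = Path(r"\\backup-server\keys\bitlocker-C.txt")  # placeholder path

def export_recovery_info(volume: str = "C:") -> None:
    """Capture 'manage-bde -protectors -get <volume>' output to a file."""
    result = subprocess.run(
        ["manage-bde", "-protectors", "-get", volume],
        capture_output=True, text=True, check=True,
    )
    OUTPUT.write_text(result.stdout, encoding="utf-8")
    print(f"Recovery protector details saved to {OUTPUT}")

if __name__ == "__main__":
    export_recovery_info()
```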

Replacing a power supply

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and that the connector does not contain any foreign objects.

·     Read the power supply installation guidelines in "Power supplies."

Procedure

Removing a power supply

If two operating power supplies are present and the server rear has sufficient space for replacement, you can replace one of the power supplies without powering off the server.

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the power cord from the power supply:

a.     Press the tab to disengage the ratchet from the tie mount, slide the cable clamp outward, and then release the tab.

b.     Open the cable clamp and remove the power cord from the clamp.

c.     Unplug the power cord.

4.     Remove the power supply. Holding the power supply by its handle and pressing the retaining latch with your thumb, pull the power supply slowly out of the slot.

Installing a power supply

 

NOTE:

If only one power supply is present, install the new power supply in the slot for the replaced power supply.

 

1.     Install a new power supply. Push the power supply into the slot until it snaps into place.

2.     Rack-mount the server if the server has been removed. For more information, see "Installing or removing the server."

3.     Connect the power cord if the power cord has been disconnected.

4.     Power on the server if the server has been powered off. For more information, see "Powering on and powering off the server."

Replacing the system battery

The server comes with a system battery installed on the system board, which supplies power to the real-time clock and has a lifespan of 3 to 5 years.

If the server no longer automatically displays the correct date and time, replace the battery.
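A quick way to confirm the symptom is to compare the server's clock against an NTP reference. The following self-contained Python sketch sends a minimal SNTP query and prints the drift; the NTP server name is only an example, and the script assumes outbound UDP port 123 is permitted. Drift that reappears after every power cycle suggests a depleted battery.

```python
# Minimal sketch (not H3C software): estimate local clock drift against an
# NTP server. The server name is an example; substitute one reachable from
# your network.
import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"    # example NTP server
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def ntp_time(server: str = NTP_SERVER, timeout: float = 5.0) -> float:
    """Send a minimal SNTP request and return the server time as Unix time."""
    packet = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    seconds = struct.unpack("!I", data[40:44])[0]  # transmit timestamp
    return seconds - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    drift = time.time() - ntp_time()
    print(f"Local clock drift: {drift:+.1f} seconds")
```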

 

 

NOTE:

Battery failure or complete power depletion will cause the BIOS to reset to default settings. To reconfigure the BIOS, see the BIOS user guide for the server.

 

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and that the connector does not contain any foreign objects.

Procedure

Removing the system battery

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     Remove the system battery. Pinch the system battery by its top edge to disengage it from the battery holder.

 

 

NOTE:

For environmental protection purposes, dispose of the used system battery at a designated site.

 

Installing the system battery

1.     Install the system battery. Insert the system battery with the plus sign "+" facing up into the system battery holder, and press down the battery to secure it into place.

2.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

3.     Rack-mount the server. For more information, see "Installing or removing the server."

4.     Connect the power cord.

5.     Power on the server. For more information, see "Powering on and powering off the server."

6.     Access the BIOS to reconfigure the system date and time. For more information, see the BIOS user guide for the server.

Replacing a rear 4GPU module

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and that the connector does not contain any foreign objects.

·     Read the GPU installation guidelines in "GPUs."

Procedure

Removing a rear 4GPU module

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     (Optional.) Disconnect any cables that hinder the replacement.

5.     Remove the screws at both sides of the 4GPU module.

6.     Remove the rear 4GPU module where the GPU module resides. Lift the rear 4GPU module out of the chassis.

7.     Remove the GPU module from the rear 4GPU module:

a.     Disconnect the cable from the GPU module, if any.

b.     Pull the GPU module out of the slot.

Installing a rear 4GPU module

1.     Install a GPU module on the rear 4GPU module:

a.     Insert the GPU module into the PCIe slot along the guide rails.

b.     Connect the power cord to the GPU module according to the cable label.

c.     Connect the other end of the power cord to the power connector on the 4GPU module according to the cable label.

2.     (Optional.) Reconnect other cables to the rear 4GPU module.

3.     Install the rear 4GPU module to the server. Insert the rear 4GPU module into the slot.

4.     Fasten the screws at both sides of the 4GPU module.

5.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

6.     Rack-mount the server. For more information, see "Installing or removing the server."

7.     Connect the power cord.

8.     Power on the server. For more information, see "Powering on and powering off the server."

Installing a GPU module on the rear 4GPU module

Prerequisites

·     Take the following ESD prevention measures:

¡     Wear antistatic clothing.

¡     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

¡     Do not wear any conductive objects, such as jewelry or watches.

·     When you replace a component, examine the slot and connector for damage. Make sure the pins are not damaged (for example, bent) and that the connector does not contain any foreign objects.

·     Read the GPU installation guidelines in "GPUs."

Procedure

Removing a rear 4GPU module

1.     Power off the server. For more information, see "Powering on and powering off the server."

2.     Remove the server from the rack. For more information, see "Removing the server from a rack."

3.     Remove the access panel.

a.     Press the button on the locking lever and then lift the locking lever. The access panel automatically slides to the server rear.

b.     Lift the access panel to remove it from the server.

4.     (Optional.) Disconnect any cables that hinder the replacement.

5.     Remove the screws at both sides of the 4GPU module.

6.     Remove the rear 4GPU module that is not installed with a GPU module. Lift the rear 4GPU module out of the chassis.

Installing a rear 4GPU module

1.     Remove the GPU blank from the target slot on the 4GPU module. Use a screwdriver to remove the mounting screw from the blank, and then remove the blank.

2.     Install a GPU module on the rear 4GPU module:

a.     Insert the GPU module into the PCIe slot along the guide rails.

b.     Connect the power cord to the GPU module according to the cable label.

c.     Connect the other end of the power cord to the power connector on the 4GPU module according to the cable label.

3.     (Optional.) Reconnect other cables to the rear 4GPU module.

4.     Install the rear 4GPU module to the server. Insert the rear 4GPU module into the slot.

5.     Fasten the screws at both sides of the 4GPU module.

6.     Install the access panel.

a.     Place the access panel onto the server.

b.     Slide the access panel to the server front and close the locking lever. The access panel snaps into place.

7.     Rack-mount the server. For more information, see "Installing or removing the server."

8.     Connect the power cord.

9.     Power on the server. For more information, see "Powering on and powering off the server."

Installing or removing filler panels

Install blanks over empty slots when the following modules are not present, and remove the blanks before you install these modules:

·     Drives.

·     Drive backplanes.

·     Power supplies.

·     Riser cards.

·     PCIe modules.

·     OCP network adapters.

Prerequisites

Take the following ESD prevention measures:

·     Wear antistatic clothing.

·     Wear an ESD wrist strap and make sure it makes good skin contact and is reliably grounded.

·     Do not wear any conductive objects, such as jewelry or watches.

Procedure

Table 65 Removing or installing a blank

·     Remove a drive blank: Press the latches on the drive blank inward with one hand, and pull the drive blank out of the slot.

·     Install a drive blank: Insert the drive blank into the slot.

·     Remove a drive backplane blank: From the inside of the chassis, use a flat-head screwdriver to push aside the clip of the blank and push the blank outward to disengage it. Then, pull the blank out of the server.

·     Install a drive backplane blank: Insert the drive backplane blank into the slot and push it until you hear a click.

·     Remove a power supply blank: Hold the power supply blank and pull it out of the slot.

·     Install a power supply blank: Insert the power supply blank into the slot with the TOP mark facing up.

·     Remove a riser card blank: Lift the riser card blank to remove it from the connector.

·     Install a riser card blank: Insert the riser card blank into the slot along the guide rails.

·     Remove a PCIe module blank: Pull the blank out of the slot.

·     Install a PCIe module blank: Insert the blank into the slot along the guide rails.

·     Remove an OCP network adapter blank: Hold the protrusion on the blank and pull it out of the slot.

·     Install an OCP network adapter blank: Insert the blank into the slot horizontally.

 

Connecting internal cables

Internal cabling guidelines

Restrictions and guidelines

Follow these guidelines when connecting the internal cables:

·     Do not route the cables above the removable components, such as DIMMs.

·     Route the internal cables without hindering installation or removal of other components or hindering other internal components.

·     Route the cables neatly in their designated spaces. Make sure the cables will not be squeezed or scratched by other internal components.

·     Do not pull the connectors when routing the cables.

·     Do not use a cable tie to bundle an excessive number of cables.

·     Bind long cables appropriately. Coil unused cables and secure them with cable ties.

·     When you connect a drive cable, make sure the cable clicks into place.

·     Remove the cap (if any) from the target cable connector before connecting a cable to it.

Connecting drive cables

Drive cables include SAS/SATA data cables, NVMe data cables, power cords, and AUX cables. The server supports multiple drive configurations. This section uses the following typical drive configurations as examples to illustrate the cabling schemes for drives. For cabling schemes for other drive configurations, contact Technical Support.

 

 

NOTE:

Compared with the AUX cables and power cords, the data cables (including SAS/SATA and NVMe data cables) are more numerous and their cabling methods are more complicated. This section provides code information for the data cables, which you can use to identify the cables and their connection methods.

 

·     Front 12LFF (8SAS/SATA+4UniBay)

·     Front 12LFF (4SAS/SATA+8UniBay, LSI Expander backplane)+rear 4SFF UniBay

·     Front 8SFF UniBay+8SFF UniBay+8SFF UniBay

·     Front 25SFF drives (17SAS/SATA+8UniBay)

Front 12LFF (8SAS/SATA+4UniBay)

1.     Connect data cables for the front 12LFF NVMe drives as shown in Figure 69.

Figure 69 Connecting data cables for the front 12LFF NVMe drives

 

1.     NVMe data cable, code 0404A2B3: Connect the front drive backplane (NVME A1/A2) to the system board (C1-P0A).

2.     NVMe data cable, code 0404A2AS: Connect the front drive backplane (NVME A3/A4) to the system board (C2-P2A).

 

2.     Connect data cables for the front 12LFF SAS/SATA drives as shown in Figure 70.

Figure 70 Connecting data cables for the front 12LFF SAS/SATA drives

 

1.     SAS/SATA data cable, code 0404A2AW: Connect the front drive backplane (SAS PORT2) to the system board (C2-P0A).

2.     SAS/SATA data cable, code 0404A2B7: Connect the front drive backplane (SAS PORT1) to the system board (C1-P0C).

 

3.     Connect AUX cables for the front 12LFF drives as shown in Figure 71.

Figure 71 Connecting AUX cables for the front 12LFF drives

 

1.     AUX cable: Connect the front drive backplane (AUX1) to the system board (AUX1).

 

4.     Connect power cords for the front 12LFF drives as shown in Figure 72.

Figure 72 Connecting power cords for the front 12LFF drives

 

1.     Power cord: Connect the front drive backplane (PWR2) to the system board (PWR2).

2.     Power cord: Connect the front drive backplane (PWR1) to the system board (PWR1).

 

Front 12LFF (4SAS/SATA+8UniBay, LSI Expander backplane)+rear 4SFF UniBay

1.     Connect data cables for the rear 4SFF UniBay NVMe drives as shown in Figure 73.

Figure 73 Connecting data cables for the rear 4SFF UniBay NVMe drives

 

1.     NVMe data cable, code 0404A2AQ: Connect the rear drive backplane (NVME B1/B2) to the system board (C2-P2A).

2.     NVMe data cable, code 0404A2AQ: Connect the rear drive backplane (NVME B3/B4) to the system board (C2-P2C).

 

2.     Connect data cables for the rear 4SFF UniBay SAS/SATA drives as shown in Figure 74.

Figure 74 Connecting data cables for the rear 4SFF UniBay SAS/SATA drives

 

1.     SAS/SATA data cable, code 0404A1RP: Connect the rear drive backplane (SAS PORT) to the front drive backplane (SAS EXP1).

2.     SAS/SATA data cable, code 0404A1QM: Connect the front drive backplane (SAS PORT) to the system board (C0).

 

3.     Connect AUX cables for the front 12LFF and rear 4SFF UniBay drives as shown in Figure 75.

Figure 75 Connecting AUX cables

 

1.     AUX cable: Connect the front drive backplane (AUX) to the system board (AUX1).

2.     AUX cable: Connect the rear drive backplane (AUX) to the system board (AUX5).

 

4.     Connect power cords for the front 12LFF and rear 4SFF UniBay drives as shown in Figure 76.

Figure 76 Connecting power cords for the front 12LFF and rear 4SFF UniBay drives

 

1.     Power cord: Connect the front drive backplane (PWR1) to the system board (PWR1).

2.     Power cord: Connect the front drive backplane (PWR2) to the system board (PWR2).

3.     Power cord: Connect the rear drive backplane (PWR) to the system board (PWR5).

 

Front 8SFF UniBay+8SFF UniBay+8SFF UniBay

1.     Connect data cables for the three sets of front 8SFF UniBay NVMe drives as shown in Figure 77.

Figure 77 Connecting data cables for the three sets of front 8SFF UniBay NVMe drives

 

1.     NVMe data cable, code 0404A2AQ: Connect the front drive backplane (NVME B3/B4) to the system board (C2-P0C).

2.     NVMe data cable, code 0404A2AQ: Connect the front drive backplane (NVME B1/B2) to the system board (C2-P0A).

3.     NVMe data cable, code 0404A1PW: Connect the front drive backplane (NVME A3/A4) to the system board (C2-G3C).

4.     NVMe data cable, code 0404A1PW: Connect the front drive backplane (NVME A1/A2) to the system board (C2-G3A).

5.     NVMe data cable, code 0404A1WY: Connect the front drive backplane (NVME B3/B4) to the system board (C2-P2C).

6.     NVMe data cable, code 0404A1WY: Connect the front drive backplane (NVME B1/B2) to the system board (C2-P2A).

7.     NVMe data cable, code 0404A1QS: Connect the front drive backplane (NVME A3/A4) to the system board (C1-P0C).

8.     NVMe data cable, code 0404A1WY: Connect the front drive backplane (NVME A1/A2) to the system board (C1-P0A).

9.     NVMe data cable, code 0404A2AQ: Connect the front drive backplane (NVME B3/B4) to the system board (C1-G1C).

10.     NVMe data cable, code 0404A2AQ: Connect the front drive backplane (NVME B1/B2) to the system board (C1-G1A).

11.     NVMe data cable, code 0404A2AX: Connect the front drive backplane (NVME A3/A4) to the system board (C1-P2C).

12.     NVMe data cable, code 0404A1PW: Connect the front drive backplane (NVME A1/A2) to the system board (C1-P2A).

 

2.     Connect AUX cables for the three sets of front 8SFF UniBay drives as shown in Figure 78.

Figure 78 Connecting AUX cables for the three sets of front 8SFF UniBay drives

 

1.     AUX cable: Connect the front drive backplane (AUX) to the system board (AUX2).

2.     AUX cable: Connect the front drive backplane (AUX) to the system board (AUX1).

3.     AUX cable: Connect the front drive backplane (AUX) to the system board (AUX3).

 

3.     Connect power cords for the three sets of front 8SFF UniBay drives as shown in Figure 79.

Figure 79 Connecting power cords for the three sets of front 8SFF UniBay drives

 

1.     Power cord: Connect the front drive backplane (PWR) to the system board (PWR1).

2.     Power cord: Connect the front drive backplane (PWR) to the system board (PWR2).

3.     Power cord: Connect the front drive backplane (PWR) to the system board (PWR3).

 

Front 25SFF drives (17SAS/SATA+8UniBay)

1.     Connect data cables for the front 25SFF NVMe drives as shown in Figure 80.

Figure 80 Connecting data cables for the front 25SFF NVMe drives

 

1.     NVMe data cable, code 0404A2B3: Connect the front drive backplane (NVME1) to the system board (C1-P0A).

2.     NVMe data cable, code 0404A2BF: Connect the front drive backplane (NVME2) to the system board (C1-P0C).

3.     NVMe data cable, code 0404A2AX: Connect the front drive backplane (NVME3) to the system board (C2-P2A).

4.     NVMe data cable, code 0404A2AX: Connect the front drive backplane (NVME4) to the system board (C2-P2C).

 

2.     Connect data cables for the front 25SFF SAS/SATA drives as shown in Figure 81.

Figure 81 Connecting data cables for the front 25SFF SAS/SATA drives

 

1.     SAS/SATA data cable, code 0404A1QM: Connect the front drive backplane (SAS PORT1) to the system board (C0).

 

3.     Connect AUX cables for the front 25SFF drives as shown in Figure 82.

Figure 82 Connecting AUX cables for the front 25SFF drives

 

1.     AUX cable: Connect the front drive backplane (AUX) to the system board (AUX1).

 

4.     Connect power cords for the front 25SFF drives as shown in Figure 83.

Figure 83 Connecting power cords for the front 25SFF drives

 

1.     Power cord: Connect the front drive backplane (PWR3) to the system board (PWR3).

2.     Power cord: Connect the front drive backplane (PWR2) to the system board (PWR2).

3.     Power cord: Connect the front drive backplane (PWR1) to the system board (PWR1).

 

Connecting the LCD smart management module cable

Figure 84 Connecting the LCD smart management module cable

 

AUX cable, code 0404A1BN: Connects the LCD smart management module to the system board (DIAG LCD).

 

Connecting cables for the front M.2 SSD expander module

Connecting SATA data cables for the front M.2 SSD expander module

Figure 85 Connecting SATA data cables for the front M.2 SSD expander module

 

1.     SATA signal and data cable, code 0404A2KW: Use the cable labeled M1 to connect the M.2 SSD expander module (M.2 PORT) to the system board (C1-P0A).

2.     SATA signal and data cable, code 0404A2KW: Use the cable labeled M2 to connect the M.2 SSD expander module (M.2 PORT) to the system board (M.2 PORT).

 

Connecting NVMe data cables for the front M.2 SSD expander module

Figure 86 Connecting NVMe data cables for the front M.2 SSD expander module

 

1.     NVMe data cable, code 0404A2KX: Use the cable labeled M2 to connect the M.2 SSD expander module (M.2 PORT) to the system board (M.2 PORT).

 

Connecting cables for OCP 3.0 network adapter 1

OCP 3.0 network adapter 1 supports the 0404A2KQ and 0404A2KS cables. Figure 87 uses the 0404A2KS cable as an example. For the cabling method of the other cable, contact Technical Support.

Figure 87 Connecting cables for OCP 3.0 network adapter 1

 

1.     PCIe data cable, code 0404A2KS: Connect OCP1_X8L to C1-G1C on the system board.

2.     PCIe data cable, code 0404A2KS: Connect OCP1_X8H to C1-G1A on the system board.

 

Connecting cables for OCP 3.0 network adapter 2

OCP network adapter 2 requires a connection to the system board.

·     If you install both OCP network adapter 1 and OCP network adapter 2, use a 0404A1XN cable to connect C2-G3C to OCP2 X8 on the system board, as shown in Figure 88.

Figure 88 Connecting cables for OCP 3.0 network adapter 2

 

1.     PCIe data cable, code 0404A1XN: Connect C2-G3C to OCP2 X8 on the system board.

 

·     If you install only OCP network adapter 2, use a 0404A2KR cable to connect C2-P4A to OCP2 X8 on the system board, as shown in Figure 89.

Figure 89 Connecting cables for OCP 3.0 network adapter 2

 

Cable for OCP 3.0 network adapter 2, code 0404A2KR: Connect C2-P4A to OCP2 X8 on the system board.

 

Connecting cables for riser cards

Some riser cards can provide additional PCIe links for the slots on the card by connecting to the system board. This section introduces the cabling schemes for these riser cards. For more information about PCIe riser connectors, see "Riser cards and PCIe modules."

For more information about riser card slot numbering, see "Riser cards." The cabling method is similar for all riser card slots. This section uses riser cards installed in slot 3 and slot 4 as examples.

Figure 90 Connecting cables for the Riser 3 assembly

 

Riser 3, cable code 0404A1YK (2 cables):

1.     Power cord: Connect the PWR cable labeled S2 for slot 7 to connector PWR6 on the system board.

2.     PCIe data cable: Connect the PCIe cable labeled S1 for slot 7 to connector C2-G3A on the system board.

3.     Power cord: Connect the PWR cable labeled S2 for slot 8 to connector PWR7 on the system board.

4.     PCIe data cable: Connect the PCIe cable labeled S1 for slot 8 to connector C2-G3C on the system board.

 

Figure 91 Connecting cables for the Riser 4 assembly

 

Riser 4, cable code 0404A2H4 (2 cables):

1.     Power cord: Connect the PWR cable labeled S2 for slot 9 to connector PWR8 on the system board.

2.     PCIe data cable: Connect the PCIe cable labeled S1 for slot 9 to connector C2-P2A on the system board.

3.     Power cord: Connect the PWR cable labeled S2 for slot 10 to connector PWR5 on the system board.

4.     PCIe data cable: Connect the PCIe cable labeled S1 for slot 10 to connector C2-P2C on the system board.

 

Figure 92 Connecting cables for the Riser 4 assembly module supporting one FHFL module

 

Cable code 0404A2FE:

1.     Power cord (label S3): Connects to PWR8 on the system board.

2.     PCIe data cable (label S1): Connects to C2-G3A on the system board.

3.     PCIe data cable (label S2): Connects to C2-G3C on the system board.

 

Figure 93 Connecting cables for the Riser 4 assembly module supporting two FHFL modules

 

Cable code 0404A2FE:

1.     Power cord (label S3): Connects to PWR5 on the system board.

2.     PCIe data cable (label S1): Connects to C2-G3A on the system board.

3.     PCIe data cable (label S2): Connects to C2-G3C on the system board.

Cable code 0404A2H8:

4.     Power cord (label S3): Connects to PWR8 on the system board.

5.     PCIe data cable (label S1): Connects to C2-P2A on the system board.

6.     PCIe data cable (label S2): Connects to C2-P2C on the system board.

 

Connecting the supercapacitor cable

Figure 94 Connecting the supercapacitor cable

 

Connecting cables for the rear 4GPU module

Figure 95 Connecting data cables for the rear 4GPU module

 

1.     PCIe data cable, code 0404A1Y9: Connect the rear GPU module (S1 in slot 11) to the system board (C1-P0A).

2.     PCIe data cable, code 0404A1Y9: Connect the rear GPU module (S2 in slot 11) to the system board (C1-P0C).

3.     PCIe data cable, code 0404A1Y9: Connect the rear GPU module (S3 in slot 12) to the system board (C1-P2A).

4.     PCIe data cable, code 0404A1Y9: Connect the rear GPU module (S4 in slot 12) to the system board (C1-P2C).

5.     PCIe data cable, code 0404A1Y8: Connect the rear GPU module (S1 in slot 13) to the system board (C2-P0A).

6.     PCIe data cable, code 0404A1Y8: Connect the rear GPU module (S2 in slot 13) to the system board (C2-P0C).

7.     PCIe data cable, code 0404A1Y8: Connect the rear GPU module (S3 in slot 14) to the system board (C2-P2A).

8.     PCIe data cable, code 0404A1Y8: Connect the rear GPU module (S4 in slot 14) to the system board (C2-P2C).

 

Figure 96 Connecting the power cord for the rear 4GPU module

 

Connecting cables for the chassis ears

(1) Left chassis ear cable

(2) Right chassis ear cable

 

Maintenance

This section introduces daily maintenance methods for the server.

Guidelines

·     Keep the server room clean and maintain the temperature and humidity according to the server operating requirements. Do not place unrelated equipment and items in the server room.

·     Regularly check the server's health status through HDM. If the server is not in a healthy state, inspect and troubleshoot it immediately (see the sketch after this list).

·     Stay informed about the latest updates for operating systems and applications, and update the software as needed.

·     Develop a reliable backup plan.

¡     Schedule regular data backups based on the server's operational status.

¡     Back up data more frequently if it changes often.

¡     Regularly check backups to ensure data is stored correctly.

·     Keep a sufficient number of spare parts on site so that failed components can be replaced promptly. Replenish spare parts promptly after use.

·     To facilitate troubleshooting of network issues, keep the latest network topology diagram on file.
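The following minimal Python sketch shows one way to script the HDM health check mentioned above, assuming your HDM firmware exposes a Redfish service; the address, credentials, and resource path are placeholders for your environment.

```python
# Hedged sketch, not an official H3C tool: poll the rolled-up server health
# over Redfish. Assumes HDM exposes a Redfish service; the host, credentials,
# and Systems/1 path are placeholders.
import requests  # third-party package: pip install requests

HDM_HOST = "https://192.168.1.100"  # placeholder HDM address
AUTH = ("admin", "password")        # placeholder credentials

def get_system_health() -> str:
    """Return the overall health status reported for the system resource."""
    url = f"{HDM_HOST}/redfish/v1/Systems/1"
    resp = requests.get(url, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json().get("Status", {}).get("Health", "Unknown")

if __name__ == "__main__":
    health = get_system_health()
    print(f"Server health: {health}")
    if health != "OK":
        print("Inspect the HDM event logs and troubleshoot immediately.")
```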

Maintenance tools

To maintain the server, use the following tools:

·     A thermometer and hygrometer, to monitor the server operating environment.

·     HDM and UniSystem, to monitor the operating status of the server.

Maintenance operations

Maintenance tasks

Daily maintenance tasks are shown in Table 66.

Table 66 Daily maintenance tasks

·     Checking the server LEDs. Required tools: none.

·     Monitoring the temperature and humidity of the equipment room. Required tools: thermometer and hygrometer.

·     Inspecting the cables. Required tools: none.

 

Checking the server LEDs

Verify that all LEDs on the front and rear panels of the server are functioning properly.

Monitoring the temperature and humidity of the equipment room

Use a thermometer and hygrometer to measure the temperature and humidity in the equipment room, and verify that they are within the operating range of the server. For server operation and storage temperature and humidity requirements, see "Physical specifications."
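In addition to a handheld meter, you can read the server's own temperature sensors. The Python sketch below lists the thermal readings that a Redfish-capable HDM reports for the chassis; the address, credentials, and chassis path are placeholders, and the sensors available depend on the server configuration.

```python
# Hedged sketch: list chassis temperature sensors over Redfish as a
# complement to room-level measurements. The host, credentials, and
# Chassis/1 path are placeholders for your environment.
import requests  # third-party package: pip install requests

HDM_HOST = "https://192.168.1.100"  # placeholder HDM address
AUTH = ("admin", "password")        # placeholder credentials

def list_temperatures() -> None:
    """Print each temperature sensor name with its Celsius reading."""
    url = f"{HDM_HOST}/redfish/v1/Chassis/1/Thermal"
    resp = requests.get(url, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    for sensor in resp.json().get("Temperatures", []):
        reading = sensor.get("ReadingCelsius")
        if reading is not None:
            print(f"{sensor.get('Name', 'Unknown sensor')}: {reading} °C")

if __name__ == "__main__":
    list_temperatures()
```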

Inspecting the cables

Verify that the communication and power cables are properly connected.

Restrictions and guidelines

·     Do not use excessive force when plugging or unplugging cables.

·     Do not twist or pull the cables.

·     Organize the cables appropriately.

Check criteria

·     The cable type is correct.

·     The cable connections are correct and secure, and the cable lengths are appropriate.

·     The cable shows no signs of aging, and the connection points are free from twisting and corrosion.

Viewing server status

To view basic information and status of the subsystems of the server, see the HDM user guide.

Collecting server logs

For more information, see the HDM user guide.

Updating the server firmware

For more information about updating the HDM, BIOS, or CPLD firmware, see H3C Servers Firmware Update Guide.

Troubleshooting

For more information, see the troubleshooting manual.
