H3C SeerEngine-Campus E6502 Release Notes
Copyright © 2021 New H3C Technologies Co., Ltd. All rights reserved. No part of this manual may be reproduced or transmitted in any form or by any means without prior written consent of New H3C Technologies Co., Ltd. The information in this document is subject to change without notice.
MD5 checksums for software package files
Software operating environments
Hardware requirements for deployment on a physical server
Hardware requirements for separate controller deployment
Version compatibility matrix
Registering and installing licenses
Obtaining license server software and documentation
Open problems and workarounds
Resolved problems in SeerEngine-Campus E6502
Resolved problems in SeerEngine-Campus E6501
List of tables
Table 1 MD5 checksums for individual software installation files
Table 3 General hardware requirements
Table 4 Hardware requirements for separate controller deployment in standalone mode
Table 5 Hardware requirements for separate controller deployment in cluster mode
Table 6 Hardware and software compatibility matrix
Table 8 Version upgrade list
Version information
Version number
H3C SeerEngine Campus (E6502)
To see the version number, click the Settings icon in the upper right corner and select About.
MD5 checksums for software package files
The MD5 checksum for the H3C_SeerEngine-Campus_E6502_X86.zip software package is 190371638522101b260ef557d12bd76c. Table 1 shows the MD5 checksums for the installation files in this package.
Table 1 MD5 checksums for individual software installation files
Software | Installation file name | MD5 checksum |
Microsoft Windows DHCP plugin installation package | dhcp-plug-windows-3.7.zip | 7c5a777e20d123e8918f9f3b04b4df32 |
SeerEngine-Campus controller installation package | SeerEngine_CAMPUS-E6502-MATRIX.zip | db1e8541e7d7eeab32f6f076d4e396ae |
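The published checksums can be verified before installation with a few lines of Python. The sketch below is illustrative only: the stand-in file and its digest are for demonstration, and in practice you would point `path` at H3C_SeerEngine-Campus_E6502_X86.zip and use the checksum given above.

```python
import hashlib

def md5_of(path: str) -> str:
    """Return the hex MD5 digest of a file, read in chunks so large
    packages do not need to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# In practice: path = "H3C_SeerEngine-Campus_E6502_X86.zip" and
# expected = "190371638522101b260ef557d12bd76c" (from this section).
# An empty stand-in file is used here so the example is self-contained.
path, expected = "standin.bin", "d41d8cd98f00b204e9800998ecf8427e"
open(path, "wb").close()

if md5_of(path) == expected:
    print("checksum OK")
else:
    raise SystemExit("checksum mismatch - re-download the package")
```

A mismatch usually indicates a corrupted or incomplete download rather than a wrong package version.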
Version history
Version number | Release date | Remarks |
SeerEngine-Campus (E6502) | 2022-07-30 | None. |
SeerEngine-Campus (E6501) | 2022-07-01 | First release. |
Software operating environments
Hardware requirements for deployment on a physical server
Table 3 General hardware requirements
Item | Requirements |
Drive | The drives must be set up in RAID 1, 5, or 10 mode. · System disk: 2.4 TB or above of 7.2K RPM SATA/SAS HDDs or SSDs after RAID setup. · etcd disk: 50 GB or above of 7.2K RPM SATA/SAS HDDs or SSDs after RAID setup. Installation path: /var/lib/etcd. · Storage controller: 1 GB cache, power-fail protected with a supercapacitor installed. · Data disk: SSDs or SATA/SAS HDDs. As a best practice, configure a minimum of three data drives in RAID 5. |
NICs | · Non-bonding mode: 1 × 1 Gbps or above, or 2 × 10 Gbps or above if SeerAnalyzer is deployed. · Bonding mode (recommended mode: mode 2 or mode 4): 2 × 1 Gbps Linux bonding interfaces, or 2 × 10 Gbps Linux bonding interfaces if SeerAnalyzer is deployed. As a best practice, have the controller and the Unified Platform share one NIC and use a separate NIC for the SeerAnalyzer southbound network. If only one NIC is available for the southbound networks, the controller and SeerAnalyzer southbound networks can share that NIC while the Unified Platform uses a separate NIC. |
Hardware requirements for separate controller deployment
Table 4 and Table 5 show the hardware requirements when the controller is separately deployed in standalone mode and cluster mode, respectively.
Table 4 Hardware requirements for separate controller deployment in standalone mode
Hardware node requirements | Maximum resources that can be managed | ||
Node name | Node count | Hardware requirements on a node | |
Controller node | 1 | · CPU: 16 cores, 2.0 GHz or higher. · Memory: 128 GB or higher. · System disk: 2.4 TB or above after RAID setup. · etcd disk: 50 GB or above after RAID setup. | · Online users: 2000 · Switches, ACs, and APs in total: 400 |
Controller node | 1 | Ditto | · Online users: 5000 · Switches, ACs, and APs in total: 1000 |
Table 5 Hardware requirements for separate controller deployment in cluster mode
Hardware node requirements | Maximum resources that can be managed | ||
Node name | Node count | Hardware requirements on a node | |
Controller node | 3 | · CPU: 12 cores, 2.0 GHz or higher. · Memory: 96 GB or higher. · System disk: 2.4 TB or above after RAID setup. · etcd disk: 50 GB or above after RAID setup. | · Online users: 2000 · Switches, ACs, and APs in total: 400 |
Controller node | 3 | Ditto | · Online users: 5000 · Switches, ACs, and APs in total: 1000 |
Controller node | 3 | · CPU: 16 cores, 2.0 GHz or higher. · Memory: 128 GB or higher. · System disk: 4 TB or above after RAID setup. · etcd disk: 50 GB or above after RAID setup. | · Online users: 10000 · Switches, ACs, and APs in total: 2000 |
Controller node | 3 | Ditto | · Online users: 20000 · Switches, ACs, and APs in total: 4000 |
Controller node | 3 | Ditto | · Online users: 40000 · Switches, ACs, and APs in total: 8000 |
Controller node | 3 | · CPU: 20 cores, 2.0 GHz or higher. · Memory: 128 GB or higher. · System disk: 2.4 TB or above after RAID setup. · etcd disk: 50 GB or above after RAID setup. | · Online users: 60000 · Switches, ACs, and APs in total: 12000 |
Controller node | 3 | Ditto | · Online users: 100000 · Switches, ACs, and APs in total: 20000 |
| NOTE: · In the tables in this section, you can estimate the number of switches relative to the number of ACs and APs at a ratio of 1:3. · The hardware requirements above do not cover deploying SeerAnalyzer together with the controller. To deploy SeerAnalyzer with the controller, see the SeerAnalyzer hardware requirements. |
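As an illustration, the 1:3 estimate in the note above can be applied to the per-row device totals. The helper below is hypothetical (not part of the product) and simply splits a managed-device total into switches versus ACs and APs under that assumed ratio:

```python
# Split a managed-device total using the assumed 1:3 switch-to-(AC + AP)
# ratio from the note above. This is an illustrative estimate only.
def estimate_mix(total_devices: int, ratio: tuple = (1, 3)) -> tuple:
    """Return (switches, acs_and_aps) for a given managed-device total."""
    parts = sum(ratio)
    switches = total_devices * ratio[0] // parts
    acs_and_aps = total_devices * ratio[1] // parts
    return switches, acs_and_aps

# A 400-device row splits into roughly 100 switches and 300 ACs and APs.
print(estimate_mix(400))   # -> (100, 300)
```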
Version compatibility matrix
Table 6 Hardware and software compatibility matrix
Item | Specifications |
Models | H3C SeerEngine-Campus |
Software image files | SeerEngine_CAMPUS-E6502-MATRIX.zip SeerEngine_CAMPUS-E6502-MATRIX.zip.md5 dhcp-plug-windows-3.7.zip dhcp-plug-windows-3.7.zip.md5 SeerEngine_CAMPUS-REST_API-E6502.zip |
License server version | E1150 or later |
vDHCP server version | R1111 or later |
PLAT version | 2.0(E0706) or later |
EIA version | 9.0(E6202) or later 7.3(E0611H08) or later |
WSM version | 9.0(E6203) or later. Required for WLAN services. |
Browsers | Google Chrome 70 or later |
Remarks | The SeerEngine_CAMPUS-E6502-MATRIX.zip file is the containerized image file for the SeerEngine-Campus controller component in the SNA architecture. The SeerEngine_CAMPUS-E6502-MATRIX.zip.md5 file is used for checking the integrity of the SeerEngine_CAMPUS-E6502-MATRIX.zip file. The dhcp-plug-windows-3.7.zip file is the Windows plug-in needed when the SeerEngine-Campus controller uses the Microsoft DHCP server. The dhcp-plug-windows-3.7.zip.md5 file is used for checking the integrity of the dhcp-plug-windows-3.7.zip file. The SeerEngine_CAMPUS-REST_API-E6502.zip file provides the northbound REST API document for the controller of the current version. For the compatible license server, vDHCP server, PLAT, and EIA versions and compatible network device versions, see the compatibility matrix in the released solution. Decompress the H3C_SeerEngine-Campus_E6502_X86.zip file, obtain the SeerEngine_CAMPUS-E6502-MATRIX.zip file from the output, and then install it. |
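The unpack step described above can be sketched in Python. The stand-in outer archive built below exists only so the example runs on its own; in practice the outer file is the downloaded H3C_SeerEngine-Campus_E6502_X86.zip:

```python
import zipfile

OUTER = "H3C_SeerEngine-Campus_demo.zip"      # stand-in for ..._E6502_X86.zip
INNER = "SeerEngine_CAMPUS-E6502-MATRIX.zip"  # file to install, per the notes

# Build a stand-in outer package (in practice this is the downloaded file).
with zipfile.ZipFile(OUTER, "w") as z:
    z.writestr(INNER, b"inner archive bytes")

# Extract only the controller installation file from the outer package.
with zipfile.ZipFile(OUTER) as z:
    assert INNER in z.namelist(), f"{INNER} not found in {OUTER}"
    z.extract(INNER)
print(f"extracted {INNER}")
```

The extracted SeerEngine_CAMPUS-E6502-MATRIX.zip is then installed through the deployment platform as described in the component deployment guide.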
Restrictions and cautions
Restrictions
Restriction 1
In the current software version, 50 GB is reserved for the diag log. After the system is installed and deployed, navigate to the System > Log Management > Diag Log Setting page and set the maximum disk space to 50 GB.
Restriction 2
The device autodeployment function requires the vDHCP server.
Restriction 3
Before autodeploying access devices, make sure their upstream leaf devices have already been autodeployed and activated.
To manually add access devices for management, make sure their upstream leaf devices have been managed and activated.
NOTE: If a device fails to be autodeployed, clear the device configuration and then try to autodeploy the device again.
Restriction 4
The device replacement function does not support spine devices or wireless ACs.
Restriction 5
In the single-host deployment scenario, name-IP binding is not supported and cannot be used on public hosts. The name-service binding service strictly depends on the device software version and the EIA version. To use the name-service binding service, follow the version compatibility matrix.
Restriction 6
MAC portal authentication cannot be used on the public hosts.
Restriction 7
When you clear device configuration to autodeploy devices again or replace devices, operate on one device at a time. Make sure the operation on the current device has completed before you start the next device. Otherwise, the network might become unreachable and the operation might fail.
Restriction 8
To incorporate a third-party access device with limited VLAN resources, make sure the device has sufficient VLAN resources. Such a device can be used only as the lowest-level access device if it belongs to an access device hierarchy.
Restriction 9
When you pre-configure a Cisco access device in the system, make sure the specified local user has the permissions to deploy configurations to the device. The username and password of the user cannot contain pound signs (#).
Restriction 10
When the controller is deployed together with the security controller, the transport protocol can only be HTTP.
Restriction 11
When you remotely attach multiple access devices to a leaf downlink interface through a PTN, you must disable LLDP packet transparent transmission on the PTN devices or disable LLDP on the uplink interfaces of access devices. Then, you must manually add the links from all access devices to the leaf downlink interface, and make sure interfaces of the added links are the actual ones. Otherwise, the controller might misjudge the logical links among access devices and consider the access devices as cascade devices.
Restriction 12
To avoid IP conflicts between manually incorporated devices and automatically onboarded devices, do not incorporate devices by using IP addresses in the automation address pools.
Restriction 13
To onboard devices in static access mode, you must first create a static access VLAN pool.
Restriction 14
The controller does not support the following tasks in this release:
· Changing the deployment mode from standalone to HA.
· Changing the cluster IP address or node IP address.
Restriction 15
Access the license server to reclaim the vDHCP and Campus controller licenses from the controller before you perform the following tasks:
· Upgrade the controller from an E33xx version earlier than E3308 to this release.
· Use the configuration file created on an E31xx or E32xx version after an upgrade to this release.
Restriction 16
DHCP options cannot be edited or deleted once configured.
Restriction 17
The site navigation function on the dashboard does not support displaying wireless devices.
Restriction 18
IPv6-only deployment is not supported.
Restriction 19
The light theme is not supported.
Restriction 20
Optimized automatic deployment does not support online replacement of faulty access devices. To replace a faulty access device, specify a replacement device by its serial number.
Restriction 21
The system does not support modifying modular devices in the device list or deleting online devices from the device list.
Restriction 22
The controller of version E6204 is compatible with devices running B70D064SP14 and later.
Restriction 23
You cannot upgrade E6204 to E6204L01 if Layer 2 network domains of legacy networks, global authentication, and port-based authentication are configured.
Restriction 24
You cannot upgrade E6204 to E6204L01 if IP-SGT services are configured for legacy networks.
Restriction 25
For the dual-spine scenario, the VLAN pool and address pool to select are changed from a campus egress VLAN pool and a campus egress address pool to a management network egress VLAN pool and a management network egress address pool. In addition, you cannot upgrade E6204 to E6204P01 or later in this scenario.
Restriction 26
The seed scenario does not support software upgrade from E6204 to E6204P01 and later. On the spine and leaf devices in this scenario, the interface that provides a management IP address cannot be changed from Loopback0 to VSI-interface 4094 or VLAN-interface 4094.
Restriction 27
Upgrading from E6501 to the current version with static IP authentication service is not supported.
Cautions
Caution 1
When you incorporate legacy campus devices through WebSocket and a device experiences a power outage or interface down event, the controller takes 200 seconds to sense the WebSocket state down event.
Caution 2
When the controller is upgraded from version D6202 or earlier to D6202 or later, if the device name of a device was different from the system name of the device before upgrade, the manual audit result will be red for the device in EIA after upgrade. The audit result shows that the device name on the controller is different from that on EIA. To resolve this issue, synchronize device data on EIA. If the EIA version is E0215P06 or later, one synchronization can resolve this issue. If the EIA version is earlier than E0215P06, two synchronizations are needed. When you modify the system name, you must manually update it in EIA.
Caution 3
Activating or removing a device might take a long time if the device is busy. If you need to quickly remove the device from the controller, disconnect the management session between the controller and the device.
Caution 4
After the controller is upgraded to E6203 or later, if it does not have the port trunk permit vlan setting for a downlink port of a leaf device in configuration auditing, verify that the permitted VLAN is not among the following VLANs:
· VLAN 1 and VLAN 4094.
· VLANs configured for authentication-free, static access, and security group services on the controller.
· VLANs in access VLAN pools.
If the permitted VLAN is not among these VLANs, smooth the configuration manually to resolve the configuration inconsistency. If a leaf device was automatically deployed before the upgrade, update its automation template as follows after the upgrade:
1. Execute the dis cu | inc vcf-fabric command on the device and verify that the automation template name is identical to the one on the controller. If the automation template names are different, create an automation template named after the device template.
2. Lock the network configuration, execute the dis cu | inc vcf-fabric command, and verify that automatic deployment is paused on the device. If you are not permitted to lock the network configuration, execute the vcf-fabric underlay pause command on the device.
3. Execute the tftp 214.1.1.5 get temp_leaf.template vpn-instance vpn-default command to transfer the automation template from the controller to the device through TFTP. In this example, the cluster IP address is 214.1.1.5.
4. Enter probe view on the device and execute the process restart name vcfunderlayd command to restart the vcfunderlayd process.
5. Unlock the network configuration and execute the undo vcf-fabric underlay pause command on the leaf device.
Caution 5
After the controller is upgraded to E6203 or later, ARP and ND scanning is enabled by default for a newly added Layer 2 network domain. For successful configuration deployment to network elements, make sure their software supports ARP and ND scanning.
Caution 6
M-LAG:
1. Web portal authentication is not supported in the current software version.
2. Adding or deleting M-LAG configuration causes network flapping, and some settings might remain on the related devices.
3. IPv6-based M-LAG is not supported in the current software version.
4. Manual M-LAG configuration is not supported in the current software version.
5. Assignment of aggregation group IDs starts from 1 on an M-LAG system. Reserve enough aggregation group IDs and plan them in advance.
6. Do not use track entry 1024 on an M-LAG system deployed either manually or automatically. This track entry is reserved for the setting.
7. You can only manually set up an M-LAG system with two leaf devices in an automated single leaf deployment scenario.
Caution 7
When an interface is configured with a critical VLAN, guest VLAN, or auth-fail VLAN, you must manually assign the interface to the VLAN. For example, if you configure critical VLAN 201 for authentication on an interface, you must execute the port hybrid vlan 201 untagged command on the interface.
Caution 8
If you install this software version, fine-grained VLAN deployment is not supported by the leaf devices onboarded through non-optimized automated deployment. In addition, if you delete such a leaf device and manually incorporate it again, disable the automation process on the device for the controller to perform fine-grained policy deployment.
If you upgrade a software version earlier than E6203 to this software version, a leaf device not onboarded through non-optimized automated deployment will have a yellow warning icon. You can resolve this issue through fine-grained configuration synchronization. If you delete a leaf device onboarded through non-optimized automated deployment and manually incorporate it again, disable the automation process on the device for the controller to perform fine-grained policy deployment.
If you upgrade E6203 to this software version, a leaf device onboarded through non-optimized automated deployment will have a red warning icon. You can resolve this issue through fine-grained configuration synchronization. If you delete such a leaf device and manually incorporate it again, disable the automation process on the device for the controller to perform fine-grained policy deployment.
Caution 9
When an old version is upgraded to a new version (E6204L01 or later), the controller will audit VLAN-interface 1 on access devices. If VLAN-interface 1 exists and is up on an access device, the audit result for the access device is red.
Caution 10
After the controller is upgraded from a version earlier than E6205 to E6205 or later, the preference of the route to VLAN-interface 4094 will automatically change to 75 on the access devices incorporated through optimized automated deployment. Configuration auditing for these access devices will raise a red flag. To resolve this issue, manually audit and synchronize these devices.
Caution 11
If the controller runs a version earlier than E6501 and has isolation domain interconnect or fabric connection settings for a spine or leaf device, configuration inconsistency is detected for the device on the configuration audit page after the controller software is upgraded to E6501 or later.
Explanation:
In the old version, the controller deployed the peer xxx router-mac-local command to the device. In the new version, the controller records that it has deployed the peer xxx router-mac-local dci command to the device, but it has not actually deployed this command. As a result, the command deployed on the device is peer xxx router-mac-local, while in the controller memory the deployed command is peer xxx router-mac-local dci, in which the dci keyword is added.
To resolve the issue, go to the selective sync page and synchronize the peer xxx router-mac-local dci command to the spine or leaf device.
Caution 12
After the controller is upgraded from E6203P01 or E6204 to E6205 or later, VPN settings exist on access devices deployed by using optimized automated deployment. In this case, you must click Sync All to synchronize VPN settings on these access devices, because selective synchronization cannot synchronize VPN settings. During the synchronization process, red audit flags will be continuously displayed for these devices. To remove the red audit flags, audit the configurations on these devices again.
Caution 13
After the controller is upgraded from E6203P01 or E6204 to E6205, the STP path cost and port priority settings are added to the downlink interfaces of leaf devices and interconnect interfaces of access devices deployed by using optimized automated deployment on a VXLAN network. You must use the audit and synchronization function to synchronize these settings on the controller to devices.
Caution 14
If devices at different tiers are onboarded with the initial configuration for upgrade at the same time, the restart of devices at an upper tier might cause upgrade failure on devices at a lower tier. As a best practice, upgrade devices tier by tier. If the upgrade of a device at a lower tier fails, upgrade the device again.
Caution 15
If M-LAG is configured on an access device, aggregation group ID 1 is assigned to M-LAG. Reserve this aggregation group ID for M-LAG.
After the upgrade, the controller will delete the evpn m-lag local XXX remote XXX setting from the border devices in an M-LAG system, and device audit issues will occur consequently. To resolve the issues, manually synchronize configuration for the border devices.
After the upgrade, the controller will have incremental spanning tree configuration for M-LAG interfaces of M-LAG member devices, and device audit issues will occur consequently. To resolve the issues, manually synchronize configuration for the M-LAG member devices.
Caution 16
If the controller version is earlier than E6202 and the device version is F6628P11 or later, the controller is not compatible with the ACL VPN capability on the device. If an inter-group policy is configured, an anomaly icon is displayed in the Data Synchronization State field due to inconsistent ACL settings detected in the auditing process. To resolve this issue, upgrade the controller to D6202 or later. After the upgrade, the Data Synchronization State field is displayed as follows:
· In the IP-based policy mode, a normal icon is displayed in the Data Synchronization State field for the device.
· In the group-based policy mode, an anomaly icon is displayed in the Data Synchronization State field for the device. An auditing anomaly has occurred for settings related to ACL and PBR. To resolve this issue, manually synchronize the configuration.
If the controller version is E6202 or later and the device version is upgraded from a version earlier than F6628P11 to F6628P11 or later, the ACL VPN capability support on the device has changed. If an inter-group policy is configured in group-based mode, an anomaly icon is displayed in the Data Synchronization State field for the device. An auditing anomaly has occurred for settings related to ACL and PBR. To resolve this issue, manually synchronize the configuration.
Caution 17
After the software is upgraded from E6501 or earlier to the current version, the original port isolation device groups become obsolete, and the original members in these groups will all be recognized again. By default, port isolation is deployed to all non-uplink interfaces of access devices.
Caution 18
If IP security group tag subscription is enabled before the software is upgraded from E6501 or earlier to the current version, the red audit flags will be displayed for spine and leaf devices on the VXLAN network and distribution and core devices on the VLAN network after the upgrade. To remove the red audit flags and display green audit flags for these devices, synchronize data.
Software features
Level 1 | Level 2 | Description |
Dashboard monitoring | Dashboard monitoring | Displays the overall performance, status, and alarms of the campus network. |
Fabric | Fabric management | Provides multi-fabric management. |
Isolation domain | Isolation domain management | Provides multi-campus management. |
Autodeploy | Autodeploy | Autodeploys devices in fabrics. |
Site | Site | Manages information based on GIS sites. |
Topology | Physical topology | Shows fabric-based physical topology information. |
IPv6 | IPv6 | Supports IPv6 networks (Overlay network only). |
Resources | Wired | Manages wired resources. |
Wireless | Manages wireless resources. | |
PON | OLT/ONU management | |
DDI | Manages DHCP, DNS, and IP (DDI) resources. | |
Policy | General device group | Interface group and device group management |
User policy | Sets the network access privilege based on access group and access scenario. | |
Network policy | Isolation domains, private networks, security groups, resource groups, and intergroup policies | |
Service chain | Provides service chain-based service orchestration. | |
Users | Access groups | Access groups, access policies, access scenarios, and LDAP server synchronization |
Online users | Manages user access in the perspective of access service. | |
Transparent users | Provides transparent user authentication and endpoint management. | |
IP binding | Provides the bindings among accounts, IP addresses, MAC addresses, and security groups. | |
O&M monitoring | Alarms | Displays system alarms. |
Network diagnostics | Provides radar detection. | |
Logs | Provides system logs, operation logs management, and diagnosis logs. | |
System management | GUI language selection | Options include Chinese and English. |
System users | Allows you to add and delete system users and modify their passwords. | |
Cluster management | Provides controller cluster management. | |
Backup & restore | Allows you to back up and restore the controller configuration. | |
System parameters | Allows you to configure the controller system parameters. | |
License management | · Supports the license server. · Supports official licenses and trial licenses. · Supports activating licenses and displaying license status. |
Version updates
SeerEngine-Campus E6502
Added features
· Access port isolation
· Added support of border gateways for M-LAG, path failover, and NQA
· Added support for static IP failover allowlists
· Added support of the campus controller for incorporating GPON devices
Removed features
None.
Modified features
· On-demand IP-SGT deployment
SeerEngine-Campus E6501
First release.
Licensing
About licensing
H3C offers licensing options for you to deploy features and expand resource capacity on an as needed basis. To use license-based features, purchase licenses from H3C and install the licenses. For more information about the license-based features and licenses available for them, see AD-NET & U-Center 2.0 License Matrixes.
Registering and installing licenses
To register and transfer licenses, access H3C license services at http://www.h3c.com/en/License.
For information about registering licenses, installing activation files, and transferring licenses, see H3C Software Products Remote Licensing Guide.
Obtaining license server software and documentation
To perform remote licensing, first download and install the H3C license server software.
· To obtain the H3C license server software package, click H3C license server software package.
· To obtain H3C license server documentation, click H3C license server documentation.
Open problems and workarounds
List of resolved problems
Resolved problems in SeerEngine-Campus E6502
Problem 1 202207191336
Problem 2 202207190948
Problem 3 202207190705
Problem 4 202207190379
Problem 5 202207150346
Problem 6 202207141287
Problem 7 202207120299
Problem 8 202207110380
Problem 9 202207080915
Problem 10 202206291554
Problem 11 202206271123
Problem 12 202206221614
Problem 13 202206011282
Problem 14 202204280928
Problem 15 202204280846
Problem 16 202206211148
Problem 17 202207011365
Problem 18 202207111000
Problem 19 202207130972
Problem 20 202207181002
Problem 21 202207200840
Problem 22 202207211097
Problem 23 202207211463
Problem 24 202207130800
Resolved problems in SeerEngine-Campus E6501
First release.
Related documentation
Related documents
· H3C SeerEngine-Campus Component Deployment Guide-E65XX
Obtaining documentation
Access the most up-to-date H3C product documentation on the World Wide Web at http://www.h3c.com.
Click the following links to obtain different categories of product documentation:
[Technical Documents]—Provides hardware installation, software upgrading, and software feature configuration and maintenance documentation.
[Products & Solutions]—Provides information about products and technologies, as well as solutions.
Technical support
Email: service@h3c.com
Tel: 400-810-0504
Website: http://www.h3c.com
Upgrading software
Current version | Historical version | ISSU |
H3C SeerEngine Campus (E6502) | H3C SeerEngine Campus (E6501) | Supported |