2.3 Federated Networks
To scale the interconnect network for clusters with more than 128 nodes, each
enclosure is configured as either a node-level interconnect (NLI) or a top-level
interconnect (TLI).
Node Level Interconnect (NLI)
When fully populated, an NLI has ports for up to 64 nodes plus 64 uplinks to
connect to the next stage in the switch network. The NLI contains up to 4 QM501
switch cards and 4 QM502 switch cards. It is connected as follows:
• The QM501 switch cards provide 64 downlinks to the nodes; each node has a
  QM500 PCI card (host bus adapter) installed.
• The 4 QM502 switch cards form the third stage of the switch and provide the
  following:
  - 64 link ports for downlinks to the second-stage QM501 switch cards.
  - 64 link ports for uplinks to the fourth stage of the switch network.
Top Level Interconnect (TLI)
The TLI provides the fourth and fifth stages of the network. It holds 8 QM501
or QM502 switch cards. These switch cards are configured to provide one of the
following options:
• 32 4-way interconnects (for a network of up to 256 nodes).
• 16 8-way interconnects (for a network of up to 512 nodes).
• 8 16-way interconnects (for a network of up to 1024 nodes).
See the cabling tables, which show the number of NLIs and TLIs required for
different numbers of nodes and describe how to calculate the number of
interconnects required; a sketch of this calculation follows.
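
As a rough guide, the following Python sketch estimates these quantities from
the figures above: each fully populated NLI serves up to 64 nodes, and the TLI
configuration follows the node-count thresholds just listed. The function name
is illustrative and is not part of any HP tooling; the cabling tables remain
the authoritative reference for exact NLI and TLI counts.

    import math

    def plan_federated_network(nodes):
        """Illustrative sketch: estimate the enclosures needed for a
        federated QsNetII network (129 to 1024 nodes). The NLI count
        follows from 64 node ports per NLI; the TLI grouping follows
        the documented node-count thresholds. Consult the cabling
        tables for authoritative values."""
        if not 128 < nodes <= 1024:
            raise ValueError("federated networks cover 129 to 1024 nodes")
        nli_count = math.ceil(nodes / 64)   # 64 node downlinks per NLI
        if nodes <= 256:
            tli_config = "32 x 4-way"       # up to 256 nodes
        elif nodes <= 512:
            tli_config = "16 x 8-way"       # up to 512 nodes
        else:
            tli_config = "8 x 16-way"       # up to 1024 nodes
        return nli_count, tli_config

    print(plan_federated_network(300))      # (5, '16 x 8-way')

For example, a 300-node cluster needs 5 NLIs (the fifth only partially
populated) and a TLI stage configured as 16 8-way interconnects.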
2.4 Interconnect Identification
Because large clusters can have several interconnects, each enclosure has a
unique identifying name, which in turn determines its host name for IP
addressing. An interconnect's name also forms part of its port ID for cable
labeling and port addresses. This naming convention enables you to uniquely
identify every interconnect network port in the cluster, as follows:
• Interconnect naming convention (see Section 2.4.1).
• Interconnect IP addressing (see Section 2.4.2).
2.4.1 Interconnect Naming Convention
Every interconnect participates in a management network (a private 10/100Base-T
Ethernet network) through a network link from its QM503 controller card. Each
controller card has a name that identifies its position in the network. The
naming scheme for the modules is as follows:

    name = QR<railNumber>[N|T]<ICNumber>

• The integer railNumber indicates the rail number: 0 for single-rail systems;
  0 or 1 for dual-rail systems.
_______________________
Note
_______________________
Dual-rail configurations are not supported in HP Cluster Platforms
at the time of publication.
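
As an illustration of this convention, the short Python sketch below builds and
parses names of this form. The helper functions are hypothetical; only the
QR<railNumber>[N|T]<ICNumber> pattern comes from the naming scheme above.

    import re

    # Hypothetical helpers for the QR<railNumber>[N|T]<ICNumber> scheme.
    def interconnect_name(rail, kind, ic_number):
        """Build a name such as QR0N3 (rail 0, node-level interconnect 3)."""
        assert kind in ("N", "T")    # N = node-level, T = top-level
        return f"QR{rail}{kind}{ic_number}"

    def parse_interconnect_name(name):
        """Split a name back into (rail, kind, ic_number)."""
        m = re.fullmatch(r"QR(\d)([NT])(\d+)", name)
        if m is None:
            raise ValueError(f"not a valid interconnect name: {name}")
        return int(m.group(1)), m.group(2), int(m.group(3))

    print(interconnect_name(0, "T", 1))        # QR0T1
    print(parse_interconnect_name("QR0N12"))   # (0, 'N', 12)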