HP Cluster Platform Interconnects v2010: Quadrics QsNetII Interconnect

2 Quadrics Interconnect Overview

A high-speed interconnect switch supporting a private network is the core component of the HP Cluster Platform cluster. Within an HP Cluster Platform cluster, all application and utility nodes have a direct connection to the switch; a connection for the control node is optional. A scalable, high-performance interconnect enables you to network industry-standard servers together to form a high-speed cluster.

The Quadrics system interconnect implements a fat tree topology, which maximizes cross-sectional bandwidth and minimizes the possibility of one transfer being blocked by another. The interconnect provides transport for user application communication between the service and application nodes. Applications communicate across the interconnect with message-passing protocols, such as the Message Passing Interface (MPI). User I/O requests for files are also communicated across the interconnect. The interconnect is also used for process management, such as launching, signaling, and exiting applications.
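
Because applications reach the interconnect through message-passing libraries such as MPI, the following minimal sketch shows the kind of point-to-point traffic the network carries. It uses only standard MPI calls (MPI_Send and MPI_Recv), not Quadrics-specific interfaces, and the message contents, size, and tag are arbitrary illustrative choices.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        char msg[64];

        MPI_Init(&argc, &argv);                 /* initialize the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

        if (rank == 0 && size > 1) {
            /* Rank 0 sends a small message to rank 1; between two nodes of
             * the cluster this transfer is carried by the interconnect. */
            snprintf(msg, sizeof(msg), "hello from rank 0");
            MPI_Send(msg, sizeof(msg), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(msg, sizeof(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received: %s\n", msg);
        }

        MPI_Finalize();
        return 0;
    }

Built with an MPI compiler wrapper and launched across two cluster nodes, the send and receive in this sketch travel over the interconnect.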

The Quadrics QsNetII Interconnect (hereafter referred to as the interconnect) is a low-latency, high-bandwidth interconnect based on two main components:

• A programmable communications processor (Elan) designed to implement the high-level, message-passing protocols required for parallel processing. This processor is contained in a PCI-X based network adapter card, the QM500. This PCI card forms the interface between a rackmount server (node) and a multistage interconnect network.

• A crossbar switch (Elite) providing eight bidirectional link ports, through which network connections are routed. Each link connects either to a PCI network adapter or to another interconnect; a brief scaling sketch follows the feature list below.

Features of the interconnect are as follows:
• Interconnect network components (see Section 2.1).
• Interconnect network topology (see Section 2.2).
• Federated networks (see Section 2.3).
• Interconnect identification (see Section 2.4).
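
The fat tree is built from stages of these eight-link Elite crossbars. As a rough illustration of how such a network scales, the sketch below computes the maximum node count for a given number of switch stages, assuming the commonly described quaternary arrangement in which each eight-link switch dedicates four links downward and four upward. The 4-down/4-up split and the stage counts shown are illustrative assumptions, not configuration rules stated in this guide.

    #include <stdio.h>

    /* Assumed split for an eight-link crossbar in a quaternary fat tree:
     * four links face down (toward nodes or the previous stage) and four
     * face up (toward the next stage). */
    #define DOWN_LINKS 4

    /* Maximum number of nodes reachable by a fat tree with the given
     * number of switch stages: each added stage multiplies the node
     * count by the number of downward links per switch. */
    static unsigned long max_nodes(unsigned stages)
    {
        unsigned long nodes = 1;
        for (unsigned i = 0; i < stages; i++)
            nodes *= DOWN_LINKS;
        return nodes;
    }

    int main(void)
    {
        for (unsigned stages = 1; stages <= 6; stages++)
            printf("%u stage(s): up to %lu nodes\n", stages, max_nodes(stages));
        return 0;
    }

Because every stage retains as many upward links as downward links, aggregate bandwidth grows with the node count, which is the property behind the cross-sectional bandwidth and blocking claims above.
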
2.1 Interconnect Network Components

The interconnect can support a variety of switch cards in different configurations to provide a scalable, multistage interconnect network. Switch cards provide a JTAG-based interface for configuration and management. The interconnect is the core of the cluster's high-speed data network, the principal components of which are as follows:

• A midplane design interconnect (enclosure). Each side of the enclosure supports four switch cards with up to 64 link ports, and a controller card in five slots. You can install a total of eight switch cards and two controller cards in one enclosure.

• A PCI host bus adapter (HBA) installed in each node links the node to the interconnect network. This PCI card is designated by the interconnect vendor as a model QM500.

• Link cables connecting nodes to interconnects and, in a federated network, connecting node-level interconnects to top-level interconnects.