

5 HP 4x DDR IB Switch Module for c-Class BladeSystems Overview
The 4x DDR IB switch module is a double-wide switch module for the HP BladeSystem c-Class enclosure. It is based on the Mellanox 24-port InfiniScale III 4x DDR InfiniBand switch chip. When an IB mezzanine HCA is plugged into a c-Class server blade, the mezzanine HCA is connected to the IB switch through the midplane in the c-Class enclosure. For more information on the c-Class enclosure, see the Servers and Workstations Overview, or go to the HP BladeSystem web page at http://h71028.www7.hp.com/enterprise/cache/80316-0-0-225-121.aspx. For more information on the 4x DDR IB Mezzanine HCA, see Section 8.9.2 (page 105).
The 4x DDR IB switch module provides 24 InfiniBand 4x DDR ports with 20 Gb/s port-to-port connectivity. The ports are arranged as 16 downlinks, which connect up to 16 blade servers in the enclosure, and eight uplinks, which connect to external InfiniBand switches to build an InfiniBand fabric. All links conform to InfiniBand Trade Association (IBTA) specifications.
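As a rough illustration of what this port arrangement implies for fabric design, the following Python sketch computes the oversubscription ratio and aggregate bandwidth for a fully populated enclosure. It uses only the port counts and data rate quoted in this section; it is a back-of-the-envelope aid, not an HP-supplied tool.

    # Fabric arithmetic for one 4x DDR IB switch module, using the
    # figures quoted in this section.
    DOWNLINKS = 16          # internal ports to blade servers
    UPLINKS = 8             # external ports to the wider fabric
    DATA_RATE_GBPS = 16     # 4x DDR data rate per port, each direction

    oversubscription = DOWNLINKS / UPLINKS        # 2.0, i.e. 2:1 fully populated
    uplink_capacity = UPLINKS * DATA_RATE_GBPS    # 128 Gb/s per direction
    downlink_demand = DOWNLINKS * DATA_RATE_GBPS  # 256 Gb/s per direction

    print(f"Oversubscription: {oversubscription:.0f}:1")
    print(f"Aggregate uplink capacity: {uplink_capacity} Gb/s per direction")
    print(f"Worst-case downlink demand: {downlink_demand} Gb/s per direction")

A 2:1 ratio means that when all 16 blades simultaneously drive their links at full rate toward the external fabric, each sees at most half the per-port bandwidth across the uplinks; traffic that stays inside the enclosure is unaffected.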
Voltaire Grid Switch family products come with the GridVision fabric and device manager software stack running on an embedded processor on the internally managed switch. GridVision provides comprehensive, real-time proactive management capabilities, including the following:
• Aggregated fabric and resource views
• Access to a suite of fabric and switch diagnostics
• Fail-over management on all levels
• Provisioning of InfiniBand fabrics and the attached server, networking, and storage resources
These management capabilities can be accessed through the command-line interface (CLI), the graphical user interface (GUI), Simple Network Management Protocol (SNMP) managers, or in band over IP over InfiniBand (IPoIB).
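As a minimal sketch of the SNMP access path, the following Python fragment queries the switch's system description using the pysnmp library. The management hostname and community string are placeholders, not HP defaults; consult the GridVision documentation for the actual address, credentials, and Voltaire-specific MIBs.

    # Query sysDescr from the switch's management address with pysnmp.
    # 'switch-mgmt' and 'public' are placeholder values, not HP defaults.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    error_indication, error_status, error_index, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData('public'),                    # placeholder community
               UdpTransportTarget(('switch-mgmt', 161)),   # placeholder host
               ContextData(),
               ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

    if error_indication:
        print(error_indication)
    else:
        for name, value in var_binds:
            print(f"{name} = {value}")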
Voltaire is porting the GridVision software to run on server processors, which will enable an InfiniBand cluster of HP BladeSystem c-Class server blades without requiring an internally managed rack-mount IB switch. OpenSM is not supported in HP Cluster Platform solutions.
The 4x DDR IB switch module runs at an InfiniBand signaling rate of 20 Gb/s and, because InfiniBand links use 8b/10b encoding, a data rate of 16 Gb/s in each direction. The MPI ping-pong latency is expected to be around three to four microseconds; actual performance depends on the specific configuration.
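To make the quoted figure concrete, the following mpi4py sketch measures one-way ping-pong latency between two ranks the way such numbers are typically obtained: time many round trips of a small message and halve the average. It is an illustrative benchmark under generic MPI assumptions, not an HP-supplied test.

    # Minimal MPI ping-pong latency measurement; run with two ranks, e.g.:
    #   mpirun -np 2 python pingpong.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    buf = np.zeros(1, dtype='b')   # 1-byte message, as in a latency test
    iters = 10000

    comm.Barrier()                 # start both ranks together
    start = MPI.Wtime()
    for _ in range(iters):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        else:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    elapsed = MPI.Wtime() - start

    if rank == 0:
        # Each iteration is one full round trip; one-way latency is half.
        print(f"one-way latency: {elapsed / iters / 2 * 1e6:.2f} us")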
5.1 Installing the 4x DDR IB Switch Module
The 4x DDR IB switch module is designed to fit into the double-wide switch bays of the c-Class enclosures. Depending on the mezzanine connectors used for the 4x DDR IB Mezzanine HCA, the 4x DDR IB switch module must be inserted into switch bays 3 and 4, 5 and 6, or 7 and 8. Refer to the Servers and Workstations Overview and the 4x DDR IB Switch Module Installation Instructions for information on how to install the 4x DDR IB switch module in the c-Class BladeSystem enclosure.