1 InfiniBand Technology Overview
InfiniBand™ is a specification of the InfiniBand® Trade Association, of which Hewlett-Packard
is a member. The trade association has generated specifications for a 100 Gb/s communications
protocol for high-bandwidth, low-latency server clusters. The same communications protocol
can operate across all system components for computing, communications, and storage as a
distributed fabric. InfiniBand supplements other interconnect technologies such as SCSI I/O,
providing an interconnect for storage and communications networks that is efficient, reliable,
and scalable. The InfiniBand specification defines the physical layers, application layers,
application programming interfaces, and fabric management (the complete I/O stack).
An InfiniBand switched network provides a fabric that has the following features:
•   A high-performance, channel-based interconnect architecture that is modular and highly
    scalable, enabling networks to grow as needed.
•   Hardware management features such as device discovery, device failover, remote boot,
    and I/O sharing.
•   High-speed communication between devices such as servers, storage, and I/O, avoiding
    the serialization of data transfers required by shared I/O buses.
•   Inter-processor communication and memory sharing at speeds from 2.5 Gb/s to 30 Gb/s.
•   Advanced fault isolation controls that provide fault tolerance.
InfiniBand Network Features
The InfiniBand architecture consists of single data rate (SDR) and double data rate (DDR) channels
created by linking host channel adapters (HCAs) and target channel adapters through InfiniBand
interconnects (switches). The host channel adapters are PCI bus devices installed in a server, or
are integrated into devices such as storage arrays. An HP Cluster Platform consists of many
servers connected in a fat tree (Clos) topology through one or more InfiniBand interconnects; a
sizing sketch for this topology follows the feature list below. Where necessary, a target channel
adapter connects the cluster to remote storage and networks such as Ethernet, creating an
InfiniBand fabric. Features of an InfiniBand fabric are:
•   Performance:
    -   The following bandwidth options are supported by the architecture (the link-rate
        sketch after this list reproduces these figures):
        ◦   1X (2.5 Gb/s)
        ◦   4X (10 Gb/s or 20 Gb/s, depending on configuration)
        ◦   12X (30 Gb/s)
        ◦   DDR: up to 60 Gb/s
        Note: 1X bandwidth occurs only if a 4X or 12X link auto-negotiates to 1X because
        of link problems.
    -   Low latency
    -   Reduced CPU utilization
    -   Fault tolerance through automatic path migration
    -   Physical multiplexing through virtual lanes
    -   Physical link aggregation
•   Flexibility:
    -   Linear scalability
    -   Industry-standard components and open standards
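
The link rates in the list above follow directly from the lane count and the per-lane signaling
rate: an SDR lane signals at 2.5 Gb/s, a DDR lane at twice that, and a link aggregates 1, 4, or
12 lanes. The short Python sketch below is illustrative only (it is not part of the HP Cluster
Platform software, and the function and constant names are arbitrary); it simply reproduces the
raw signaling rates quoted above.

    # Illustrative sketch: raw InfiniBand link signaling rates derived from the
    # lane counts and data rates listed above. Names are arbitrary; the 2.5 Gb/s
    # per-lane SDR figure is taken from the list.
    SDR_LANE_GBPS = 2.5

    def link_rate_gbps(lanes, ddr=False):
        """Raw signaling rate of a 1X/4X/12X InfiniBand link in Gb/s."""
        per_lane = SDR_LANE_GBPS * (2 if ddr else 1)   # DDR doubles the lane rate
        return lanes * per_lane

    print(link_rate_gbps(1))              # 1X  SDR ->  2.5 Gb/s
    print(link_rate_gbps(4))              # 4X  SDR -> 10.0 Gb/s
    print(link_rate_gbps(4, ddr=True))    # 4X  DDR -> 20.0 Gb/s
    print(link_rate_gbps(12))             # 12X SDR -> 30.0 Gb/s
    print(link_rate_gbps(12, ddr=True))   # 12X DDR -> 60.0 Gb/s

These are raw signaling rates, as quoted in the list; the data rate available to applications is
somewhat lower because of link encoding and protocol overhead.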
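
The fat tree (Clos) topology mentioned earlier is what gives the fabric its linear scalability:
each leaf (edge) switch splits its ports evenly between servers and uplinks to spine switches, so
capacity grows by adding switches without losing bisection bandwidth. The Python sketch below is
illustrative only; the two-level structure and the 24-port switch radix are assumed example
values, not a statement about any particular HP interconnect.

    # Illustrative sketch: sizing a non-blocking two-level fat tree (Clos) fabric
    # built from identical switches. The switch radix is an assumed example value.
    def two_level_fat_tree(radix):
        """Return leaf/spine switch counts and server capacity for 'radix'-port switches."""
        hosts_per_leaf = radix // 2        # half the leaf ports go down to servers
        uplinks_per_leaf = radix // 2      # the other half go up to spine switches
        max_spines = uplinks_per_leaf      # one spine per leaf uplink
        max_leaves = radix                 # each spine port reaches a distinct leaf
        return {
            "leaf_switches": max_leaves,
            "spine_switches": max_spines,
            "max_servers": max_leaves * hosts_per_leaf,
        }

    # With an assumed 24-port switch: 24 leaves, 12 spines, up to 288 servers
    # at full bisection bandwidth.
    print(two_level_fat_tree(24))

Larger clusters extend the same pattern with additional switch tiers; the point of the sketch is
only that node count scales with switch count while every server keeps a non-blocking path through
the spine.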