HP P6300 HP P6300/P6500 EVA Installation Guide (5697-2485, September 2013) - Page 6




Plan your storage configuration
Proper planning of the system storage and its subsequent performance is critical to a successful
deployment of the EVA. Improper planning or implementation can result in wasted storage space,
degraded performance, or inability to expand the system to meet growing storage needs. Planning
considerations include:
• System and performance expectations
• Striping methods
• RAID levels
• Disk drive sizes and types
• Spare drives
• Array sizing (capacity)
• Number of Fibre-Channel-presented virtual LUNs
• Number of iSCSI and FCoE initiators
  ◦ iSCSI module: Maximum of 256 initiators or logins
  ◦ iSCSI/FCoE module: Maximum of 1,024 initiators or logins
• Number of virtual LUNs to be presented to the iSCSI and FCoE initiators
  ◦ iSCSI module: Maximum of 255 LUNs (plus LUN 0); 1,020 iSCSI LUNs (plus LUN 0 from each virtual port group) supported by Mez50-3.2.2.11 and Mez75-3.2.2.11 firmware and later revisions.
  ◦ iSCSI/FCoE module: Maximum of 1,020 combined iSCSI and FCoE LUNs (plus LUN 0 from each virtual port group)
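As a planning aid, the module limits above can be checked programmatically before deployment. The sketch below is illustrative only, not an HP tool; the function and data-structure names are assumptions, and it models the base limits (not the higher iSCSI LUN limit available with Mez50-3.2.2.11/Mez75-3.2.2.11 and later firmware).

```python
# Hypothetical planning check against the per-module limits stated in this
# guide. Names are illustrative assumptions, not part of any HP software.

MODULE_LIMITS = {
    # module: (max initiators/logins, max LUNs excluding LUN 0)
    "iscsi": {"initiators": 256, "luns": 255},
    "iscsi_fcoe": {"initiators": 1024, "luns": 1020},
}

def check_plan(module, initiators, luns):
    """Return a list of limit violations for a planned configuration."""
    limits = MODULE_LIMITS[module]
    problems = []
    if initiators > limits["initiators"]:
        problems.append(
            f"{initiators} initiators exceeds the {limits['initiators']} limit"
        )
    if luns > limits["luns"]:
        problems.append(f"{luns} LUNs exceeds the {limits['luns']} limit")
    return problems
```

For example, `check_plan("iscsi", 300, 100)` flags the initiator count, while a plan within both limits returns an empty list.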
NOTE: FCoE requires a converged network switch that implements data center bridging (DCB) standards for lossless Ethernet.
A high-performance, highly available IP storage network (IP-SAN) can be built in several ways. In general, use an enterprise-class switch infrastructure (described in "Minimum recommended switch capabilities for a P63x0/P65x0 EVA-based IP-SAN" (page 7)) to minimize packet discard, packet loss, and unpredictable performance. For a 10 GbE IP-SAN, consider implementing it on a lossless Ethernet network using DCB switches. Within a 10 GbE-based data center, consider implementing the FCoE protocol.
Firmware cannot be downgraded, and both Mez50/Mez75 controllers must run the same firmware version.
General IP-SAN recommendations
• For Microsoft Windows Server environments, implement MPIO and the HP DSM for NIC fault tolerance and superior performance.
• For other operating systems, where supported, implement NIC bonding in the host software for NIC fault tolerance and performance.
• Implement a separate subnet or VLAN for the IP storage network to provide dedicated bandwidth.
• Implement separate FCoE and iSCSI VLANs.
• Implement a fault-tolerant switch environment, either as a separate VLAN through a core switch infrastructure or with multiple redundant switches.
• Set the individual 1 and 10 gigabit ports connected to the storage nodes and servers to auto-negotiate full duplex at both the switch and host/node port level.
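The VLAN and port-setting recommendations above can be captured as a simple pre-deployment audit. This sketch is a hypothetical illustration; the field names (`autoneg`, `duplex`) and the function are assumptions, not an HP or switch-vendor API.

```python
# Hypothetical audit of a planned IP-SAN layout against the recommendations
# above: separate iSCSI and FCoE VLANs, and storage-facing ports set to
# auto-negotiate full duplex. Field names are illustrative assumptions.

def audit_ip_san(ports, iscsi_vlan, fcoe_vlan):
    """Return warnings where a planned layout deviates from the guidance."""
    warnings = []
    if iscsi_vlan == fcoe_vlan:
        warnings.append("iSCSI and FCoE share a VLAN; use separate VLANs")
    for port in ports:
        if not port.get("autoneg", False):
            warnings.append(f"port {port['name']}: enable auto-negotiation")
        if port.get("duplex") != "full":
            warnings.append(f"port {port['name']}: set full duplex")
    return warnings
```

A plan that passes every check returns an empty list; anything else comes back as a human-readable warning per deviation.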
Reviewing and confirming your plans