Intel RS25DB080 Hardware User Guide - Page 13



Usability
• The card ships with both a standard and a low-profile bracket.
• The card ships with a 1 GB memory board, pre-installed on the RAID controller baseboard as the RAID cache.
• Small, thin cabling with serial point-to-point 6.0 Gbps data transfer rates.
• Support for non-disk devices and mixed-capacity drives.
• Support for intelligent XOR RAID levels 0, 1, 5, 6, 10, 50, and 60.
• Dedicated or global hot spare with automatic rebuild if an array drive fails.
• User-defined stripe size per drive: 8, 16, 32, 64, 128, 256, 512, or 1024 KB.
• Advanced array configuration and management utilities provide:
  - Online Capacity Expansion (OCE), which adds capacity to an existing virtual drive or incorporates a newly added drive. See Appendix A: Drive Roaming and Drive Migration Install for limitations on OCE and RAID migration.
  - Online RAID level migration (upgrading the RAID mode may require OCE)
  - Drive migration
  - Drive roaming
  - No reboot necessary after expansion
  - Load Balancing
• Upgradeable Flash ROM interface.
• Support for staggered spin-up, hot-plug, and lower power consumption.
• User-specified rebuild rate (percentage of system resources to use, from 0-100%). Caution: Exceeding a 50% rate may cause operating system errors while the system waits for controller access.
• Background operating mode can be set for Rebuilds, Consistency Checks, Initialization (auto-restarting Consistency Check on redundant volumes), Migration, OCE, and Patrol Read.
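The user-defined stripe size above determines how logical addresses are distributed across member drives. As a rough illustration only (this is standard striping arithmetic, not controller firmware; the function name and KB-based addressing are assumptions for the sketch), RAID 0 striping can be modeled as:

```python
# Hypothetical sketch of RAID 0 address mapping with a user-defined stripe size.
# Assumption: offsets are expressed in KB to match the stripe sizes in the list.

def locate(offset_kb: int, stripe_kb: int, n_drives: int) -> tuple[int, int]:
    """Map a logical offset (KB) to (drive index, offset on that drive in KB)."""
    stripe_index = offset_kb // stripe_kb      # which stripe the offset falls in
    drive = stripe_index % n_drives            # stripes rotate across the drives
    row = stripe_index // n_drives             # complete rows already laid down
    return drive, row * stripe_kb + offset_kb % stripe_kb

# With a 64 KB stripe across 4 drives, logical offset 200 KB falls in
# stripe 3, which lands on drive 3 at offset 8 KB within that stripe.
print(locate(200, 64, 4))  # (3, 8)
```

A larger stripe size keeps sequential I/O on one drive longer; a smaller one spreads even modest requests across more spindles, which is why the controller leaves the choice to the user.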
Redundancy and Error Handling
• SES2 enclosure management support.
• SGPIO enclosure management support.
• Fault indicators per drive.
• Drive coercion (automatic resizing to match existing disks).
• Auto-detection of failed drives with transparent rebuild. There must be disk activity (I/O to the drive) for a missing drive to be marked as failed.
• Auto-resume of initialization or rebuild on reboot (the Auto Rebuild feature must be enabled before virtual disk creation).
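The XOR RAID levels listed under Usability are what make transparent rebuild possible: any one missing block equals the XOR of the parity block and the surviving data blocks. A minimal sketch of that parity arithmetic (illustrative only; `xor_blocks` is a name invented here, and real controllers work on full stripes in hardware):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR byte-wise across equal-length blocks (RAID 5-style parity math)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data blocks and their computed parity block:
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# If the drive holding data[1] fails, its contents are recovered by
# XOR-ing the parity with the surviving data blocks.
rebuilt = xor_blocks([parity, data[0], data[2]])
assert rebuilt == data[1]
```

The same identity is applied block by block during a rebuild onto a hot spare, which is why the array stays online (degraded) while reconstruction runs in the background.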