Compaq ProLiant 6000 / Compaq DLT Tape Array II: High-Performance Backup (Page 17)




ECG075.0997 (cont.)    DLT Tape Array II
TEST 5: USE OF CONCURRENT JOBS TO IMPROVE PERFORMANCE (WINDOWS NT)

165 GB/hr
Operating system:   Windows NT 4.0
Backup software:    Cheyenne ARCserve 6.0 for Windows NT
Backup method:      Image, multiple jobs
File size:          2 GB
Compression ratio:  2:1, 4:1
Tape configuration: RAIT-0
Server:             ProLiant 6000, three Pentium Pro 200 processors, 512-KB cache, 384 MB RAM
Disk storage:       224-GB array (four SMART-2/P Array Controllers with fifty-six 4-GB drives)
Tape drives:        35/70-GB DLT drive(s)
Host adapters:      Wide-Ultra SCSI-3 cards
(All Compaq off-the-shelf products)
Objective: To determine whether throughput can be improved by performing concurrent backups on multiple drives.
To show the effect of multiple tape drives, a wide variety of configurations was tested. In all cases, the Compaq Array Configuration Utility was used to configure the Compaq disk drive arrays. Windows NT Disk Administrator was then used to join disk array groups from different adapters into a single striped drive.
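The striping done here by NT Disk Administrator and the SMART-2/P controllers can be pictured as a simple round-robin block layout. The sketch below is illustrative only (the stripe-unit size is a hypothetical 64 KB, not a documented value); it shows how successive stripe units of the combined logical drive land on the four controller groups in turn.

```python
STRIPE_SIZE = 64 * 1024  # hypothetical 64-KB stripe unit, for illustration

def stripe_layout(total_bytes, num_drives, stripe_size=STRIPE_SIZE):
    """Map each stripe unit of a logical volume to (unit, drive, offset)."""
    units = (total_bytes + stripe_size - 1) // stripe_size
    layout = []
    for unit in range(units):
        drive = unit % num_drives                 # round-robin across members
        offset = (unit // num_drives) * stripe_size
        layout.append((unit, drive, offset))
    return layout

# Four controller groups joined into one striped drive, as in the tests:
for unit, drive, offset in stripe_layout(512 * 1024, 4):
    print(f"stripe unit {unit} -> drive {drive} at offset {offset}")
```

Because consecutive units alternate across all four members, a large sequential read or write keeps every controller busy at once, which is what makes the single striped drive fast enough to feed multiple tape drives.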
These tests demonstrated the performance limits of image backups under Windows NT. The bottleneck lies in the way a single image stream is processed. To work around it, concurrent backups of multiple drives were run. This technique works very well: it yields higher overall backup speeds, although its usefulness depends on the drive and data structure. It offers no advantage in an environment with a single large database, but it is of great value in many application server or file server backups.
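ARCserve 6.0 scheduled its jobs through its own console, so the following is only a generic modern analogue of the pattern the tests used: one concurrent backup job per tape drive, so that no single image stream becomes the bottleneck. `backup_volume`, the volume names, and the drive names are all stand-ins, not ARCserve APIs.

```python
from concurrent.futures import ThreadPoolExecutor

def backup_volume(volume, tape_drive):
    # Placeholder for a real image-backup call; returns a status string.
    return f"{volume} -> {tape_drive}: done"

volumes = [f"VOL{i}" for i in range(8)]   # eight source volumes (hypothetical)
drives  = [f"TAPE{i}" for i in range(8)]  # eight 35/70-GB DLT drives

# One job per drive runs concurrently; aggregate throughput is the sum of
# the individual streams instead of being capped by one image stream.
with ThreadPoolExecutor(max_workers=len(drives)) as pool:
    results = list(pool.map(backup_volume, volumes, drives))

for line in results:
    print(line)
```

Note the prerequisite the text states: the source data must divide into independent volumes. A single large database cannot be split this way, which is why the technique does not help there.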
The following graph shows one, two, four, and eight jobs running under Windows NT 4.0, backing up in each case to a set of eight tape drives. The graph shows the high-performance capability under Windows NT; however, the raw data below the chart shows that performance with a four-drive array is still very impressive.
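As a back-of-envelope check on the headline number: 165 GB/hr aggregate across eight drives implies the per-drive rate computed below. The division is the only figure taken from this page; the comparison is just arithmetic.

```python
# Headline result from Test 5: 165 GB/hr aggregate with eight concurrent jobs.
aggregate_gb_per_hr = 165
num_drives = 8

per_drive = aggregate_gb_per_hr / num_drives
print(f"~{per_drive:.1f} GB/hr per drive")
```

At roughly 20.6 GB/hr per drive, each 35/70-GB DLT drive is being kept busy, which is consistent with the compressible 2:1 and 4:1 test data listed in the configuration.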
[Graph: backup throughput for one, two, four, and eight concurrent jobs to eight DLT drives; raw data table not reproduced.]