Compaq Backup and Recovery for Microsoft SQL Server 6.x, Page 23
© 1997 Compaq Computer Corporation, All Rights Reserved. Doc No 444A/0797
When considering an on-line backup, be aware of the following:

SQL Server reads all pages from the disk devices, not from data cache. As a result, a bigger SQL Server data cache will not improve the performance of database dumps.

Each dump or load is a single-threaded operation, with the exception of striped dumps or loads, in which one thread is used per device.

A dump operation essentially requires the data pages from the database being dumped to be read directly from disk in a sequential manner. If you are dumping one database at a time to a single device, the speed of your backup operation will depend primarily on how fast your disk subsystem can perform single-threaded sequential reads. Dumping to striped devices adds additional threads to the process. SQL Server 6.0 can vary the size of the reads from a single 2KB page at a time to 8 pages (16KB) at a time.[27] The sequential nature of reads from the database during dumps can also benefit from read-ahead at the hardware level, such as the Compaq SMART-2 Array Controller, which pre-fetches data into its on-board cache module.
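As an illustration of the threading behavior described above, a single-device dump and a striped dump can be sketched in SQL Server 6.x Transact-SQL as follows (the device names are hypothetical, and each dump device would first need to be registered with sp_addumpdevice):

```sql
-- Hypothetical tape dump devices, registered once beforehand, e.g.:
--   EXEC sp_addumpdevice 'tape', 'tapedump1', '\\.\TAPE0'

-- Single-device dump: one backup thread performing sequential reads
DUMP DATABASE sales TO tapedump1

-- Striped dump: one backup thread per listed device
DUMP DATABASE sales TO tapedump1, tapedump2, tapedump3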
During a dump, SQL Server reads the data pages into in-memory buffers, with one buffer allocated per backup thread. The size of these buffers can be tuned in 32-page increments, from 64KB to 640KB, using the backup_buffer_size parameter.[28] As each buffer fills up, SQL Server sends the data to the dump device using 64KB writes. In the case of tape devices, these writes are further formatted into block sizes suited to the type of tape media being written to. SQL Server 6.x determines the appropriate blocking factor by first querying the tape device using the appropriate Windows NT Tape API call.[29] In the case of DLT tape drives, SQL Server 6.0 uses a block size of 8KB (so that each write to the tape drive contains 8 blocks of tape data), and SQL Server 6.5 uses a more optimal 64KB block size (1 block per write).[30]
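The buffer sizing described above is exposed as a server configuration option; a minimal sketch, assuming the sp_configure spelling of the parameter is 'backup buffer size' and that each unit corresponds to one 32-page (64KB) increment:

```sql
-- Inspect the current setting (default is 1)
EXEC sp_configure 'backup buffer size'

-- Values scale the per-thread buffer in 32-page (64KB) increments up to
-- 640KB, though footnote 28 recommends keeping the default of 1
EXEC sp_configure 'backup buffer size', 1
RECONFIGURE
```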
The database being backed up, as well as other databases, remains open for use. Naturally, the backup activity and queries against the database server conflict with each other, resulting in a performance degradation of both during the entire backup operation. This subject will be discussed in greater detail later in the document.

A full database dump includes a dump of the log file, which follows the actual dump of the data. Like the data, the transaction log is dumped sequentially from beginning to end, and is read from the log device volume in sequential 16KB requests. The transaction log will not be truncated following a full database dump.
Preventing the transaction log from filling up is crucial for continuous SQL Server operation. The transaction log will fill up when it has not been dumped for a period of time and enough transaction log entries accumulate to consume all free pages in the log segment. Once the transaction log fills up, the corresponding database becomes unavailable for any insert, delete, or update operation, or any other activity that generates a transaction log entry (dumps, checkpoints, object creations, etc.).
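Because a full database dump leaves the log un-truncated, the log is normally truncated by a separate transaction log dump. A minimal sketch in SQL Server 6.x syntax (the database and device names are hypothetical):

```sql
-- Dump the transaction log to a previously registered dump device;
-- the inactive portion of the log is truncated afterward
DUMP TRANSACTION sales TO logdumpdev

-- If no log backup is needed, the log can be truncated without saving it
-- (this breaks the log backup chain, so a full dump should follow)
DUMP TRANSACTION sales WITH TRUNCATE_ONLY
```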
[27] During the testing for this paper, the average read size was measured closer to 16KB.

[28] The backup_buffer_size parameter is supposedly intended to tune dump and load performance. During our testing for this paper, however, we saw that increasing this parameter from its default value of 1 either did not affect performance or negatively affected it by up to 50% (in the case of striped dumps to multiple DLT tape drives). We therefore strongly recommend that it be kept at its default value.

[29] SQL Server versions prior to 6.0 used a hard-coded block size of 512 bytes, which had serious performance repercussions in the case of DLT tape drives.

[30] See the earlier section entitled Block Size for more information on this setting.