IBM 88554RU Installation Guide - Page 61

Database applications with memory-intensive workloads that require working
data sets larger than 4 GB to be loaded in memory will benefit from the larger
memory support of the 64-bit platform.
The following is an example from the field. Microsoft SQL Server Enterprise
Edition uses Address Windowing Extensions (AWE) memory only for the buffer
pool. The AWE API lets a 32-bit application address up to 64 GB of RAM.
However, because of the AWE mapping overhead, it is not practical to use that
memory for sort areas, the procedure cache, or any other type of work area.
Many applications make heavy use of this extra-large buffer pool, but they
cannot fully exploit its benefits. The most efficient solution in such cases
is to move the application onto a 64-bit database server, which can access
memory above 4 GB as a flat address space without having to map data in and
out of a 4 GB window.
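The following is a minimal sketch, in C for a Windows build environment, of the
window remapping that makes AWE access costly. The page and window counts are
illustrative only and are not SQL Server's actual values. Each time a 32-bit
process needs a different slice of the physical memory it owns above 4 GB, it
must call MapUserPhysicalPages to swing a small virtual window onto that slice;
that call is the mapping overhead referred to above.

/* AWE sketch: physical pages are reached only through a small virtual
   "window" that must be remapped before each access. Requires the
   "Lock pages in memory" privilege for the calling account. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    /* Ask the OS for 1024 physical pages (about 4 MB with 4 KB pages). */
    ULONG_PTR nPages = 1024;
    ULONG_PTR *pfns = (ULONG_PTR *)HeapAlloc(GetProcessHeap(), 0,
                                             nPages * sizeof(ULONG_PTR));
    if (pfns == NULL)
        return 1;

    if (!AllocateUserPhysicalPages(GetCurrentProcess(), &nPages, pfns)) {
        fprintf(stderr, "AllocateUserPhysicalPages failed: %lu\n", GetLastError());
        return 1;
    }

    /* Reserve a small virtual window; only pages mapped into it are visible. */
    ULONG_PTR windowPages = 256;                       /* 1 MB window */
    SIZE_T windowBytes = (SIZE_T)windowPages * si.dwPageSize;
    void *window = VirtualAlloc(NULL, windowBytes,
                                MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);
    if (window == NULL)
        return 1;

    /* Reaching a different slice of the allocation means remapping the
       window: one MapUserPhysicalPages call per slice. */
    for (ULONG_PTR slice = 0; slice + windowPages <= nPages; slice += windowPages) {
        if (!MapUserPhysicalPages(window, windowPages, pfns + slice)) {
            fprintf(stderr, "MapUserPhysicalPages failed: %lu\n", GetLastError());
            return 1;
        }
        memset(window, 0, windowBytes);                /* touch this slice */
    }

    MapUserPhysicalPages(window, windowPages, NULL);   /* unmap the window */
    FreeUserPhysicalPages(GetCurrentProcess(), &nPages, pfns);
    return 0;
}

On a 64-bit server the same memory is simply part of the flat address space,
so no remapping step exists at all.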
Even at the same clock speed, a 64-bit processor moves twice as much data per
transfer as a 32-bit processor. Combined with the improvements Intel has made
to the way data is handled and with the additional cache, you should see a
noticeable performance increase.
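As a rough illustration of the wider data path, the following C fragment copies
the same buffer 32 bits and then 64 bits at a time; the 64-bit loop needs half
as many load/store iterations because each transfer moves 8 bytes instead of 4.
The buffer size is arbitrary, and real throughput also depends on the cache and
memory subsystem.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Copy len bytes 32 bits at a time: len/4 load/store pairs. */
static void copy32(uint32_t *dst, const uint32_t *src, size_t len)
{
    for (size_t i = 0; i < len / sizeof(uint32_t); i++)
        dst[i] = src[i];
}

/* Copy the same buffer 64 bits at a time: half the iterations, because each
   register transfer moves 8 bytes instead of 4. */
static void copy64(uint64_t *dst, const uint64_t *src, size_t len)
{
    for (size_t i = 0; i < len / sizeof(uint64_t); i++)
        dst[i] = src[i];
}

int main(void)
{
    size_t len = 64 * 1024 * 1024;          /* 64 MB, illustrative only */
    void *src = malloc(len), *dst = malloc(len);
    if (!src || !dst)
        return 1;
    memset(src, 0xAB, len);

    copy32(dst, src, len);                  /* 16 M iterations */
    copy64(dst, src, len);                  /*  8 M iterations */

    free(src);
    free(dst);
    return 0;
}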
The database server also benefits from the larger caches: the 3 MB third-level
(L3) cache on the processor and the 64 MB XceL4 Level 4 system cache. With
caches this large, the need to go to main memory or disk for the elements of a
database transaction is greatly reduced, which translates directly into faster
access to data and improved throughput. Itanium 2 systems are likely to be
able to hold database transaction records in cache for the entire transaction,
which lets the I/O portion of the transaction run at speeds faster than memory
access.
In-memory databases
Architectures with 64-bit addresses can store reasonably large databases in
memory and access them with little or no paging overhead. This is often done
for databases that are accessed constantly and for databases that serve as the
basis for complex analysis. The theoretical maximum of 16 exabytes of
addressable memory has not yet been tested, but multi-gigabyte databases are
frequently run on 64-bit machines.
A major challenge in providing high-performance access to database
information is the time it takes to access disk drives. When disk access is
required, it adds what can be an intolerable delay to information access and
use: access to disk is typically hundreds to thousands of times slower than
access to memory.
Today, the disk access time challenge can be overcome. The price of random
access memory has come down to affordable levels for many systems. This price
reduction means that an entire database can be stored in system memory if the
system processors can provide a very large linear address space.
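The following C sketch illustrates the point: on a 64-bit platform a
multi-gigabyte table can live in one flat allocation and be reached by ordinary
pointer arithmetic, while a 32-bit process cannot even express the request. The
8 GB table size and the 64-byte record layout are illustrative assumptions, not
values taken from any particular database product.

/* Flat in-memory table on a 64-bit platform: one contiguous allocation
   holds the whole working set, and records are reached by plain pointer
   arithmetic, with no 4 GB windowing. Sizes are illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    uint64_t key;
    char     payload[56];     /* 64-byte records, one per cache line */
} record_t;

int main(void)
{
    const uint64_t table_bytes = 8ULL * 1024 * 1024 * 1024;   /* 8 GB */
    const uint64_t nrecords    = table_bytes / sizeof(record_t);

    /* On a 32-bit platform SIZE_MAX is 4 GB - 1, so this request cannot even
       be expressed; on 64-bit it is an ordinary allocation, given the RAM. */
    if (table_bytes > SIZE_MAX) {
        fprintf(stderr, "address space too small for an in-memory table\n");
        return 1;
    }

    record_t *table = malloc((size_t)table_bytes);
    if (table == NULL) {
        fprintf(stderr, "not enough memory available\n");
        return 1;
    }

    /* Direct, flat access to any record: no window remapping, and no disk
       access as long as the table fits in physical memory. */
    uint64_t i = nrecords / 2;
    table[i].key = 42;
    printf("table holds %llu records\n", (unsigned long long)nrecords);

    free(table);
    return 0;
}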