HP StorageWorks Replication Solutions Manager 4.0.1 user guide (HP 418800-B21) - Page 142

Each logical volume in a volume group is considered to be a component with which hosts can perform I/O. Logical volumes can contain file systems or be raw storage. See raw disks.
LUN
In each storage system, logical unit numbers (LUNs) are assigned to its virtual disks. When a virtual disk is presented to hosts, the storage system and the hosts perform I/O by referencing the LUN.

At a low level, a host OS typically reports each storage device that it detects in the format C# T# D#, where:

C#   Identifies a host I/O controller
T#   Identifies the target storage system on the controller
D#   Identifies the virtual disk (LUN) on the storage system
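The C# T# D# convention maps directly onto simple string parsing. As an illustration only (device-naming schemes vary by host OS, and this helper is not part of RSM), a host-side Python script could split such a name into its three fields:

    import re

    # Hypothetical helper: parse a host-reported device name of the
    # form c#t#d#, e.g. "c0t1d2". The exact scheme is OS-specific.
    DEVICE_RE = re.compile(r"^c(?P<controller>\d+)t(?P<target>\d+)d(?P<lun>\d+)$")

    def parse_device_name(name):
        """Return the controller, target, and LUN encoded in a c#t#d# name."""
        match = DEVICE_RE.match(name)
        if match is None:
            raise ValueError(f"not a c#t#d# device name: {name!r}")
        return {field: int(value) for field, value in match.groupdict().items()}

    print(parse_device_name("c0t1d2"))
    # {'controller': 0, 'target': 1, 'lun': 2}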
Automatic LUN assignment
• When presenting a virtual disk to a host, enter zero (0) to allow the storage controller software to automatically assign the LUN.
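The zero-means-automatic convention can be sketched in code. The function below is purely illustrative (assumed names, not an RSM or controller API); it shows the decision the controller software makes when a LUN of 0 is requested:

    # Illustrative sketch of the convention above: requesting LUN 0 asks
    # the controller software to pick the LUN; any other value is explicit.
    def assign_lun(requested_lun, luns_in_use):
        """Return the LUN to bind for this presentation."""
        if requested_lun == 0:
            # Auto-assignment: lowest free LUN, starting at 1.
            # (255 is an assumed upper bound for this sketch.)
            return min(n for n in range(1, 256) if n not in luns_in_use)
        if requested_lun in luns_in_use:
            raise ValueError(f"LUN {requested_lun} already presented to this host")
        return requested_lun

    print(assign_lun(0, {1, 2, 4}))  # 3 (auto-assigned)
    print(assign_lun(7, {1, 2, 4}))  # 7 (explicit request)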
Mirrorclones - fractured
Mirrorclone replication establishes and maintains a copy of an original virtual disk via a local replication link. See virtual disk types.
    source =====||===== mirrorclone
           local link
           (fractured)
When the local replication between a synchronized mirrorclone and its source is stopped by an action or command, the mirrorclone is said to be fractured. In a fractured state, the mirrorclone is not updated when the source virtual disk is updated. At the instant replication is stopped, the mirrorclone is a point-in-time copy of its source. See also mirrorclone states and synchronized mirrorclones.
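To make the synchronized/fractured distinction concrete, here is a minimal Python model of the behavior just described. It is a sketch only: real mirrorclones live in the storage array and have additional states (see mirrorclone states); the class below models nothing but the propagation rule.

    # Minimal model: source writes propagate only while synchronized;
    # after a fracture the mirrorclone freezes as a point-in-time copy.
    class Mirrorclone:
        def __init__(self, source_data):
            self.data = source_data   # starts as a full copy of the source
            self.fractured = False

        def on_source_write(self, new_data):
            if not self.fractured:
                self.data = new_data  # mirrored over the local link

        def fracture(self):
            self.fractured = True     # stop local replication

    mc = Mirrorclone(b"v1")
    mc.on_source_write(b"v1.1")  # mirrored while synchronized
    mc.fracture()                # replication stops here
    mc.on_source_write(b"v2")    # ignored: disk is fractured
    print(mc.data)               # b'v1.1', the point-in-time copy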
Task summary for fractured mirrorclones

Task                       Fractured mirrorclones
Deleting                   No. The disk must first be detached, then deleted.
Detaching                  Yes.
Fracturing                 Not applicable.
Presenting                 Yes. The disk can immediately be presented to hosts for I/O.
Replicating - snapclones   No.
Replicating - snapshots    Yes. Multiple snapshots are allowed.
Restoring                  Yes. The disk must be unpresented first.
Resynchronizing            Yes. The disk must be unpresented first.
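The same rules, expressed as data, could drive a pre-flight check in a host-side script. The lookup below simply restates the table; the task names and helper function are illustrative, not an RSM API:

    # Tasks on a fractured mirrorclone: (allowed, note).
    # Restates the table above; names are illustrative only.
    FRACTURED_TASKS = {
        "delete":              (False, "detach first, then delete"),
        "detach":              (True,  None),
        "fracture":            (None,  "not applicable: already fractured"),
        "present":             (True,  "can be presented immediately"),
        "replicate_snapclone": (False, None),
        "replicate_snapshot":  (True,  "multiple snapshots allowed"),
        "restore":             (True,  "unpresent the disk first"),
        "resynchronize":       (True,  "unpresent the disk first"),
    }

    def check_task(task):
        allowed, note = FRACTURED_TASKS[task]
        verdict = {True: "allowed", False: "not allowed", None: "n/a"}[allowed]
        return f"{task}: {verdict}" + (f" ({note})" if note else "")

    print(check_task("restore"))  # restore: allowed (unpresent the disk first)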
See also general mirrorclone guidelines and mirrorclone FAQ.
Mirrorclone write-cache flush

The source virtual disk write cache must be flushed before a fracture is started. (See cache policies: write cache.) This ensures that the source virtual disk and its mirrorclone contain identical data when the fracture occurs. The following table shows how a write cache flush is implemented.
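Pending that table, the essential requirement is ordering: flush first, fracture second. The sketch below is hypothetical sequencing logic; the two callables stand in for array operations that RSM performs internally and are not a real API:

    # Hypothetical ordering sketch: flush the source write cache, then
    # fracture, so both disks hold identical data at the fracture instant.
    def fracture_with_flush(flush_source_write_cache, fracture_mirrorclone):
        flush_source_write_cache()  # pending writes reach the source disk
        fracture_mirrorclone()      # replication stops on a consistent image

    fracture_with_flush(
        lambda: print("flushing source write cache..."),
        lambda: print("fracturing mirrorclone (point-in-time copy)"),
    )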