HP MSA 1040 SMU Reference Guide (762784-001, March 2014) - Page 121

6 Using Remote Snap to replicate volumes
About the Remote Snap replication feature
Remote Snap is a licensed feature for disaster recovery. This feature performs asynchronous (batch) replication of
block-level data from a volume on a local storage system to a volume that can be on the same system or on a second,
independent system. This second system can be located at the same site as the first system or at a different site.
A typical replication configuration involves these physical and logical components:
• A host connected to a local storage system, which is networked via FC or iSCSI ports to a remote storage system as described in installation documentation.
• Remote system. A management object on the local system that enables the MCs in the local system and in the remote system to communicate and exchange data.
• Replication set. Associated master volumes that are enabled for replication and that typically reside in two physically or geographically separate storage systems. These volumes are also called replication volumes.
• Primary volume. The volume that is the source of data in a replication set and that can be mapped to hosts. For disaster recovery purposes, if the primary volume goes offline, a secondary volume can be designated as the primary volume. The primary volume exists in a primary vdisk in the primary system.
• Secondary volume. The volume that is the destination for data in a replication set and that is not accessible to hosts. For disaster recovery purposes, if the primary volume goes offline, a secondary volume can be designated as the primary volume. The secondary volume exists in a secondary vdisk in a secondary system.
• Replication snapshot. A special type of snapshot that preserves the state of data of a replication set's primary volume as it existed when the snapshot was created. For a primary volume, the replication process creates a replication snapshot on both the primary system and, when the replication of primary-volume data to the secondary volume is complete, on the secondary system. Replication snapshots are unmappable and are not counted toward a license limit, although they are counted toward the system's maximum number of volumes. A replication snapshot can be exported to a regular, licensed snapshot.
• Replication image. A conceptual term for replication snapshots that have the same image ID in the primary and secondary systems. These synchronized snapshots contain identical data and can be used for disaster recovery.
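The relationships among these components can be pictured as a simple data model. The following Python sketch is purely illustrative and is not part of the MSA firmware, the SMU, or its CLI; the class and field names are assumptions chosen to mirror the terms defined above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ReplicationSnapshot:
    # Unmappable point-in-time image; matching image IDs on the primary and
    # secondary systems together form a replication image.
    image_id: str
    system: str                            # "primary" or "secondary"

@dataclass
class ReplicationVolume:
    # A master volume enabled for replication; also called a replication volume.
    name: str
    vdisk: str
    system: str
    role: str                              # "primary" (host-mappable) or "secondary"
    snapshots: List[ReplicationSnapshot] = field(default_factory=list)

@dataclass
class ReplicationSet:
    # Associated primary and secondary volumes, typically on separate systems.
    primary: ReplicationVolume
    secondary: ReplicationVolume

    def replication_images(self) -> List[str]:
        # Image IDs present on both systems, i.e. the usable sync points.
        on_primary = {s.image_id for s in self.primary.snapshots}
        on_secondary = {s.image_id for s in self.secondary.snapshots}
        return sorted(on_primary & on_secondary)

In these terms, the Finance replication set shown in the figure below would pair a primary volume on System 1 in New York with a secondary volume on System 2 in Munich.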
Replication process overview
In simplified terms, the remote-replication process can be configured to provide either a single point-in-time replication of volume data or a periodic delta-update replication of volume data.
The periodic-update process repeats a sequence of steps, and each pass creates a matching pair of snapshots: in the primary system, a replication snapshot is created of the primary volume's current data; this snapshot is then used to copy new (delta) data from the primary volume to the secondary volume; then in the secondary system, a matching snapshot is created for the updated secondary volume. This pair of matching snapshots establishes a replication sync point, and these sync points are used to continue the replication process.
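As a rough illustration of that cycle, the following Python sketch walks through several periodic updates: snapshot the primary volume, copy the delta to the secondary volume, snapshot the secondary volume, and record the matching pair as a sync point. It is a conceptual model only; take_snapshot and copy_delta are hypothetical stand-ins for work the storage systems perform internally and do not correspond to any MSA interface.

import itertools
from typing import Optional

def take_snapshot(system: str, volume: str, image_id: str) -> dict:
    # Create a replication snapshot of the volume on the given system.
    return {"system": system, "volume": volume, "image_id": image_id}

def copy_delta(source_snapshot: dict, last_sync: Optional[dict], target: str) -> None:
    # Copy only the blocks changed since the last sync point (or everything
    # on the first pass) to the secondary volume.
    baseline = last_sync["image_id"] if last_sync else "initial copy"
    print(f"copy delta {baseline} -> {source_snapshot['image_id']} onto {target}")

def replication_cycle(image_id: str, sync_points: list) -> None:
    last_sync = sync_points[-1] if sync_points else None

    # 1. Snapshot the primary volume's current data on the primary system.
    primary_snap = take_snapshot("primary", "Finance", image_id)

    # 2. Use that snapshot to copy new (delta) data to the secondary volume.
    copy_delta(primary_snap, last_sync, target="secondary:Finance")

    # 3. Create the matching snapshot on the secondary system.
    secondary_snap = take_snapshot("secondary", "Finance", image_id)

    # 4. The matching pair establishes a replication sync point, which the
    #    next cycle uses as its baseline.
    sync_points.append({"image_id": image_id,
                        "primary": primary_snap,
                        "secondary": secondary_snap})

if __name__ == "__main__":
    sync_points: list = []
    for image_id in itertools.islice((f"img-{n}" for n in itertools.count(1)), 3):
        replication_cycle(image_id, sync_points)
    print([p["image_id"] for p in sync_points])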
The following figure illustrates three replication sets in use by two hosts:
• The host in New York is mapped to and updates the Finance volume. This volume is replicated to the system in Munich.
• The host in Munich is mapped to and updates the Sales and Engineering volumes. The Sales volume is replicated from System 2 to System 3 in the Munich data center. The Engineering volume is replicated from System 3 in Munich to System 1 in New York.