HP StorageWorks NAS 8000 Version 1.6.3 Release Notes - Page 9

Storage Quotas For Users and Groups, Snapshot Functionality



Storage Quotas For Users and Groups
• In quotas, the user/group list is not filtered. It is therefore possible to add a quota for a user/group that already has one; in this case, the previous user/group quota is overwritten.
• There is a single grace period per file volume. A grace period set for a file volume is enforced on BOTH users and groups, regardless of which type of quota was set.
• An incorrect user quota size can be reported even when quotas are not enabled. If a quota is set for a particular user, the size of the soft quota is reported to CIFS clients as the size of the disk. The user can write past this reported size as long as enough disk space is actually available. To correct this, set the user's quota to "no quota set". From the command line, execute the following command:
setSystemUserQuota <domain+username> <volume> 0 0
Repeat this for any groups that could be demonstrating this behavior.
• When trying to get or set quotas on a system joined to a Windows domain, an error can be encountered if no users have yet been mapped on the system. This can be fixed by creating a share and connecting to it, or by explicitly mapping a domain user to a specific UNIX user. Once this is done, getting and setting quotas are possible.
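The soft/hard quota and per-volume grace period behavior described above can be sketched roughly as follows. This is a minimal illustration of the general quota model, not the NAS 8000's actual enforcement code; the names (`Quota`, `may_write`) are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Quota:
    soft_limit: int  # bytes; writes past this start the grace period
    hard_limit: int  # bytes; writes past this are always refused

# One grace period per file volume, shared by user AND group quotas.
GRACE_PERIOD = timedelta(days=7)

def may_write(used: int, request: int, quota: Quota,
              soft_exceeded_since: Optional[datetime],
              now: datetime) -> bool:
    """Return True if a write of `request` bytes should be allowed."""
    new_used = used + request
    if new_used > quota.hard_limit:
        return False  # hard limit is absolute
    if new_used > quota.soft_limit:
        # Allowed only while the volume-wide grace period has not expired.
        if soft_exceeded_since is None:
            return True  # grace period starts with this write
        return now - soft_exceeded_since <= GRACE_PERIOD
    return True
```

Setting both limits to 0 corresponds to "no quota set", which is why `setSystemUserQuota <domain+username> <volume> 0 0` clears the misreported size.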
Snapshot Functionality
• If the NAS 8000 is configured for one time zone and is managed by a client configured for another time zone, a scheduled event takes place at the client time. For example, if the NAS 8000 is configured for EST and a snapshot is scheduled for 6:00am from a client system configured for MST, the actual snapshot occurs at 6:00am MST (8:00am EST). One workaround is to set the client management system to the same time zone as the NAS 8000. Alternatively, adjust for the time zone difference: in this example, schedule the snapshot for 4:00am from the MST client when you really want a 6:00am EST snapshot.
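The second workaround's arithmetic can be checked with an ordinary time zone conversion. A sketch using Python's standard zoneinfo module, with America/New_York and America/Denver standing in for the EST and MST examples above:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The NAS 8000 in the example is configured for Eastern time.
server_tz = ZoneInfo("America/New_York")  # EST/EDT
client_tz = ZoneInfo("America/Denver")    # MST/MDT

# Desired snapshot time, expressed in the SERVER's zone.
desired = datetime(2024, 1, 15, 6, 0, tzinfo=server_tz)  # 6:00am EST

# Because the scheduler interprets the entered time in the CLIENT's
# zone, enter the equivalent client-local time instead.
schedule_at = desired.astimezone(client_tz)
print(schedule_at.strftime("%H:%M %Z"))  # 04:00 MST
```

Entering 04:00 on the MST client therefore yields the intended 6:00am EST snapshot.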
• The NAS Administrator should carefully plan snapshots (scheduled or manual) to occur immediately before the creation of the Disaster Recovery files. If a snapshot of file volumes is taken after creating a DRF and an outage occurs, the snapshot will not be recovered, and any storage space used by the snapshot will be lost.
• When using the "Snapshot Scheduler" in the Storage tab of HP Command View NAS, the NAS 8000 in a clustered configuration has the following limitation: if the package that owns the volume for which the snapshot has been scheduled is not running on the node that originally scheduled the snapshot, the snapshot creation will fail. There are several reasons why the package and the snapshot schedule might not be on the same node, such as:
- The package has automatically failed over (for example, because of an NFS service interruption or a networking hardware failure) and has not yet been failed back.
- An administrator deliberately relocated the package and forgot about the snapshot schedule.
There is no built-in way to guarantee that the package owning the volume and the scheduled snapshot are running on the same node. To avoid this, use the command:
setStorageSnapshotStopIoEnabled
This will temporarily limit access to the NAS 8000 while the snapshot is taken.
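Since there is no built-in safeguard, an administrator could script a pre-check before relocating a package or reviewing schedules. A hypothetical sketch; the node and schedule data structures are illustrative, not a NAS 8000 API:

```python
from typing import Dict, List

def snapshot_schedules_at_risk(package_owner: Dict[str, str],
                               schedules: List[dict]) -> List[dict]:
    """Return scheduled snapshots whose volume's owning package is NOT
    running on the node that created the schedule; per the limitation
    above, these snapshot creations would fail."""
    return [s for s in schedules
            if package_owner.get(s["volume"]) != s["scheduled_on"]]
```

Feeding this the current package-to-node map (gathered however your environment exposes it) flags the schedules to recreate on the correct node.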
• Snapshots taken while the system is under load may cause problems. In environments running NAS OS version 1.6.3-68 in a cluster, using snapshots, and experiencing heavy NFS activity, run the following command from the command line on both nodes in the cluster to avoid possible corrupt creation of snapshots and subsequent package failures:
# setStorageSnapshotStopIoEnabled T
This will temporarily limit access to the NAS 8000 while the snapshot is taken.