SnapMirror is a licensed feature of NetApp Data ONTAP used to replicate data between filers. SnapMirror works at the volume level or the qtree level and is mainly used for disaster recovery and replication.

SnapMirror needs a source and a destination filer. (When the source and destination are the same filer, the SnapMirror transfer happens on the local filer itself. This is useful when you have to replicate volumes inside a filer. If you need DR capabilities for a volume inside a single filer, look at SyncMirror instead.)

Synchronous SnapMirror is a SnapMirror feature in which the data on one system is replicated on another system at, or near, the same time it is written to the first system. Synchronous SnapMirror synchronously replicates data between single or clustered storage systems situated at remote sites using either an IP or a Fibre Channel connection. Before Data ONTAP saves data to disk, it collects written data in NVRAM. Then, at a point in time called a consistency point, it sends the data to disk.

When the Synchronous SnapMirror feature is enabled, the source system forwards data to the destination system as it is written in NVRAM. Then, at the consistency point, the source system sends its data to disk and tells the destination system to also send its data to disk.

This post guides you quickly through SnapMirror setup and commands.

1) Enable SnapMirror on the source and destination filers

source-filer> options snapmirror.enable
snapmirror.enable            on
source-filer>
source-filer> options snapmirror.access
snapmirror.access            legacy
source-filer>
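
The output above is from a system where SnapMirror is already licensed and enabled. If snapmirror.enable shows off on either filer, it can be switched on as in the minimal sketch below, which reuses this guide's filer names and assumes the SnapMirror license code still needs to be added:

source-filer> license add XXXXXXX
source-filer> options snapmirror.enable on
destination-filer> options snapmirror.enable on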

2) Snapmirror Access

Make sure the destination filer has SnapMirror access to the source filer. When snapmirror.access is set to legacy, the destination filer's host name or IP address should be listed in the source filer's /etc/snapmirror.allow file. Use wrfile to add entries to /etc/snapmirror.allow.

source-filer> rdfile /etc/snapmirror.allow
destination-filer
destination-filer2
source-filer>
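
If an entry is missing, it can be appended with wrfile -a rather than rewriting the whole file; a minimal sketch, reusing this example's destination filer name:

source-filer> wrfile -a /etc/snapmirror.allow destination-filer

Alternatively, when snapmirror.access is not set to legacy, access can be granted through the option itself, for example "options snapmirror.access host=destination-filer" on the source filer.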

3) Initializing a SnapMirror relationship

Volume SnapMirror : Create a destination volume on the destination NetApp filer that is the same size as the source volume or larger. For volume SnapMirror, the destination volume must be in restricted mode. For example, to snapmirror a 100 GB volume, we create the destination volume and make it restricted.

destination-filer> vol create demo_destination aggr01 100G
destination-filer> vol restrict demo_destination

Volume SnapMirror creates a Snapshot copy before performing the initial transfer. This copy is referred to as the baseline Snapshot copy. After performing an initial transfer of all data in the volume, VSM (Volume SnapMirror) sends to the destination only the blocks that have changed since the last successful replication. When SnapMirror performs an update transfer, it creates another new Snapshot copy, compares it with the previous one to identify the changed blocks, and sends those changed blocks as part of the update transfer.

SnapMirror is always driven by the destination filer, so snapmirror initialize has to be run on the destination filer. The command below starts the baseline transfer.

destination-filer> snapmirror initialize -S source-filer:demo_source destination-filer:demo_destination
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
destination-filer>

Qtree SnapMirror : For qtree SnapMirror, you should not create the destination qtree; the snapmirror command creates it automatically. Creating a destination volume of the required size is enough.

Qtree SnapMirror determines changed data by first looking through the inode file for inodes that have changed, and then examining the changed inodes of the qtree in question for changed data blocks. The SnapMirror software then transfers only the new or changed data blocks from the Snapshot copy that is associated with the designated qtree. On the destination volume, a new Snapshot copy is then created that contains a complete point-in-time copy of the entire destination volume, but that is associated specifically with the particular qtree that has been replicated.

destination-filer> snapmirror initialize -S source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.

4) Monitoring the status : SnapMirror data transfer status can be monitored from either the source or the destination filer. Use "snapmirror status" to check the status.

destination-filer> snapmirror status
Snapmirror is on.
Source                          Destination                          State          Lag Status
source-filer:demo_source        destination-filer:demo_destination   Uninitialized  -   Transferring (1690 MB done)
source-filer:/vol/demo1/qtree   destination-filer:/vol/demo1/qtree   Uninitialized  -   Transferring (32 MB done)
destination-filer>

5) SnapMirror schedule : This is the schedule used by the destination filer for updating the mirror. It informs the SnapMirror scheduler when transfers will be initiated. The schedule field can either contain the word sync, to specify synchronous mirroring, or a cron-style specification of when to update the mirror. The cron-style schedule contains four space-separated fields: minute, hour, day of month, and day of week.

If you want to sync the data at a scheduled frequency, you can set that in the destination filer's /etc/snapmirror.conf. The time settings are similar to Unix cron. You can set a synchronous SnapMirror schedule in /etc/snapmirror.conf by using "sync" in place of the cron-style frequency.

destination-filer> rdfile /etc/snapmirror.conf
source-filer:demo_source        destination-filer:demo_destination - 0 * * *  # This syncs at minute 0 of every hour
source-filer:/vol/demo1/qtree   destination-filer:/vol/demo1/qtree - 0 21 * * # This syncs every day at 9:00 pm
destination-filer>
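
For synchronous replication, the cron-style schedule field is simply replaced with the word sync. A minimal sketch of such an entry, reusing the demo volume from this guide:

source-filer:demo_source        destination-filer:demo_destination - sync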

6) Other Snapmirror commands

  • To break a SnapMirror relationship, run snapmirror quiesce followed by snapmirror break.
  • To update the mirrored data manually, run snapmirror update.
  • To resync a broken relationship, run snapmirror resync.
  • To abort a running transfer, run snapmirror abort (see the examples after this list).
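
All of these are run on the destination filer. Below is a minimal sketch against the demo volume used earlier; each line is an independent operation, not a sequence to be run in order:

destination-filer> snapmirror quiesce demo_destination
destination-filer> snapmirror break demo_destination
destination-filer> snapmirror update demo_destination
destination-filer> snapmirror resync -S source-filer:demo_source demo_destination
destination-filer> snapmirror abort destination-filer:demo_destination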

SnapMirror also provides multipath support. More than one physical path between a source and a destination system might be desired for a mirror relationship. Multipath support allows SnapMirror traffic to be load balanced between these paths and provides failover in the event of a network outage.
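
Multipath connections are defined in the destination filer's /etc/snapmirror.conf as well. A rough sketch, assuming each filer has two interfaces reachable by the made-up host names source-e0a/source-e0b and destination-e0a/destination-e0b; verify the exact connection-line syntax against the documentation for your Data ONTAP release:

demo-multi = multi(source-e0a,destination-e0a)(source-e0b,destination-e0b)
demo-multi:demo_source          destination-filer:demo_destination - 0 * * *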

To read about tuning the performance and speed of NetApp SnapMirror or SnapVault replication transfers and adjusting the transfer bandwidth, see Tuning Snapmirror & Snapvault replication data transfer speed.

NetApp SnapVault is a heterogeneous disk-to-disk backup solution for NetApp filers and heterogeneous OS systems (Windows, Linux, Solaris, HP-UX and AIX). SnapVault uses NetApp snapshot technology to take point-in-time snapshots and store them as online backups. In the event of data loss or corruption on a filer, the backup data can be restored from the SnapVault filer with less downtime. It has significant advantages over traditional tape backups, such as:

  • Reduced backup windows compared to traditional tape-based backups
  • Media cost savings
  • No backup/recovery failures due to media errors
  • Simple and fast recovery of corrupted or destroyed data

SnapVault consists of two major entities: SnapVault clients and a SnapVault storage server. A SnapVault client (a NetApp filer or a Unix/Windows server) is the system whose data should be backed up. The SnapVault server is a NetApp filer that pulls the data from the clients and backs it up. For server-to-NetApp SnapVault, the Open Systems SnapVault (OSSV) client software provided by NetApp needs to be installed on the servers. Using this agent software, the SnapVault server can pull and back up data on to the backup qtrees. SnapVault protects data on a client system by maintaining a number of read-only versions (snapshots) of that data on a SnapVault filer. The replicated data on the SnapVault server can be accessed via NFS or CIFS, and client systems can restore entire directories or single files directly from the SnapVault filer. SnapVault requires a primary license and a secondary license.

How does SnapVault work?

When SnapVault is set up, a complete copy of the data set is first pulled across the network to the SnapVault filer. This initial, or baseline, transfer may take some time to complete, because it is duplicating the entire source data set on the server, much like a level-zero backup to tape. Each subsequent backup transfers only the data blocks that have changed since the previous backup. When the initial full backup is performed, the SnapVault filer stores the data in a qtree and creates a Snapshot image of the volume for the data that is to be backed up. SnapVault creates a new Snapshot copy with every transfer, and allows retention of a large number of copies according to a schedule configured by the backup administrator. Each copy consumes an amount of disk space proportional to the differences between it and the previous copy.

SnapVault commands:

The initial step in setting up SnapVault backups between filers is to install the SnapVault licenses and enable SnapVault on both the source and destination filers.

Source filer – filer1

    filer1> license add XXXXX
    filer1> options snapvault.enable on
    filer1> options snapvault.access host=filer2

Destination filer – filer2

    filer2> license add XXXXX
    filer2> options snapvault.enable on
    filer2> options snapvault.access host=filer1

Consider filer2:/vol/snapvault_volume as the SnapVault destination volume, where all backups are stored. The source data is filer1:/vol/datasource/qtree1. Because all the backups on the destination filer (filer2) are managed by SnapVault, manually disable the scheduled snapshots on the destination volume; those snapshots will be managed by SnapVault instead. Disable the NetApp scheduled snapshots with the command below.

    filer2> snap sched snapvault_volume 0 0 0
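
Running snap sched with just the volume name prints the current schedule, which is a quick way to confirm the change:

    filer2> snap sched snapvault_volume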

Creating the initial backup: Initiate the baseline data transfer (the first full backup) from source to destination before scheduling SnapVault backups. On the destination filer, execute the command below to start the baseline transfer. The time taken to complete depends on the amount of data in the source qtree and the network bandwidth. Check "snapvault status" on the source or destination filer to monitor the baseline transfer progress.

    filer2> snapvault start -S filer1:/vol/datasource/qtree1  filer2:/vol/snapvault_volume/qtree1

Creating backup schedules: Once the initial baseline transfer is completed, SnapVault schedules have to be created for incremental backups. The retention period of the backups depends on the schedule created. The snapshot names should be prefixed with "sv_". The schedule is specified in the form count[@day_list][@hour_list], where count is the number of Snapshot copies to retain.

On source filer:

For example, let us create the schedules on the source as below: 2 hourly, 2 daily and 2 weekly SnapVault snapshots. These Snapshot copies on the source enable administrators to recover directly from the source filer without accessing any copies on the destination, which makes restores more rapid. However, it is not necessary to retain a large number of copies on the primary; higher retention levels are configured on the secondary. The commands below show how to create the hourly, daily and weekly SnapVault snapshots.

    filer1> snapvault snap sched datasource sv_hourly 2@0-23
    filer1> snapvault snap sched datasource sv_daily  2@21
    filer1> snapvault snap sched datasource sv_weekly 2@sun@21

On snapvault filer:

Configure the SnapVault schedules on the destination based on the retention period you need for the backups. Here, the sv_hourly schedule checks all source qtrees once per hour for a new Snapshot copy called sv_hourly.0. If it finds such a copy, it updates the SnapVault qtrees with the new data from the primary and then takes a Snapshot copy of the destination volume, also called sv_hourly.0. If you do not use the -x option, the secondary does not contact the primary and transfer the Snapshot copy; it just creates a Snapshot copy of the destination volume.

    filer2> snapvault snap sched -x snapvault_volume sv_hourly 2@0-23
    filer2> snapvault snap sched -x snapvault_volume sv_daily  2@sun-fri@21
    filer2> snapvault snap sched -x snapvault_volume sv_weekly 2@sun@21

To check the SnapVault status, use the command "snapvault status" on either the source or the destination filer. To see the backups, do a "snap list" on the destination volume; it lists all the backup copies, their creation times and so on.
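
For example, with this example's destination volume:

    filer2> snapvault status
    filer2> snap list snapvault_volume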

Restoring data : Restoring data is simple; mount the SnapVault destination volume through NFS or CIFS and copy the required data out of the appropriate backup snapshot.
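
Alternatively, the snapvault restore command can pull a backed-up qtree back to the primary. A minimal sketch, run on the primary filer and assuming this example's paths (check the snapvault man page of your Data ONTAP release for the exact options; after a restore, the relationship is typically re-established with snapvault start -r before backups resume):

    filer1> snapvault restore -S filer2:/vol/snapvault_volume/qtree1 /vol/datasource/qtree1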

You can also try NetApp Protection Manager to manage the SnapVault backups, whether from OSSV clients or from NetApp primary storage. Protection Manager is based on NetApp Operations Manager (aka NetApp DFM). It is a client-based UI with which you connect to the Operations Manager server and protect your storage.


Here are some useful functions of the "storage" command in NetApp.

1) To show all disks on the system : Use "storage show disk -T" to display all the disks attached to the filer, along with the disk serial number, vendor, model, disk firmware version and type of disk (SATA/ATA/FCAL).

# rsh filer12 storage show disk -T
DISK                  SHELF BAY SERIAL           VENDOR   MODEL      REV TYPE
---------------------   ----- --- ---------------- -------- ---------- ---- ------
0d.16                   1    0  xxxxxxxxxxxxxxxx NETAPP   X276 NA07 FCAL
0d.17                   1    1  xxxxxxxxxxxxxxxx NETAPP   X276 NA07 FCAL
0d.18                   1    2  xxxxxxxxxxxxxxxx NETAPP   X276 NA07 FCAL
0d.19                   1    3  xxxxxxxxxxxxxxxx NETAPP   X276 NA07 FCAL
0d.20                   1    4  xxxxxxxxxxxxxxxx NETAPP   X276 NA07 FCAL
0d.21                   1    5  xxxxxxxxxxxxxxxx NETAPP   X276 NA07 FCAL
0d.22                   1    6  xxxxxxxxxxxxxxxx NETAPP   X276 NA07 FCAL

2) To see complete information about a particular disk : Use "storage show disk -a <disk_name>" to view complete information about a NetApp disk. This command gives you the shelf, bay, serial number of the disk, disk speed and many other details.

# rsh filer12 storage show disk -a 0d.99
Disk:             0d.99
Shelf:            5
Bay:              13
Serial:           xxxxxxxxxxxxxxxxxxxx
Vendor:           NETAPP
Model:            X276
Rev:              NA07
RPM:              10000
WWN:              xxxxxxxxxxxxxxxxxxa
UID:              xxxxxxxxxxxxxxxxx:00000000:00000000:00000000:00000000
Downrev:          no
Pri Port:         B
Power-on Hours:   N/A
Blocks read:      0
Blocks written:   0
Time interval:    00:00:00
Glist count:      0
Scrub last done:  00:00:00
Scrub count:      0
LIP count:        0
Dynamically qualified:  No
#

3) To list all storage adapters on the filer : Use the "storage show adapter -a" command to display all the storage adapters (HBAs) on the filer.

# rsh filer12 storage show adapter -a

Slot:            0a
Description:     Fibre Channel Host Adapter 0a (Dual-channel, QLogic 2322 rev. 3)
Firmware Rev:    3.3.25
FC Node Name:    xxxxxxxxxxxxxxxxxxx
FC Packet Size:  2048
Link Data Rate:  2 Gbit
SRAM Parity:     Yes
External GBIC:   No
State:           Enabled
In Use:          No
Redundant:       Yes

Slot:            0b
Description:     Fibre Channel Host Adapter 0b (Dual-channel, QLogic 2322 rev. 3)
Firmware Rev:    3.3.25
FC Node Name:    xxxxxxxxxxxxxxxxxxx
..

4) To get the shelf details of a filer : Use the "storage show shelf <shelf_name>" command to display the details of the shelf and its modules.

# rsh filer12 storage show shelf 0c.shelf2
Shelf name:    0c.shelf2
Channel:       0c
Module:        A
Shelf id:      2
Shelf UID:     xxxxxxxxxxxxxxxxxxxxxxx
Term switch:   N/A
Shelf state:   ONLINE
Module state:  OK

                               Loop  Invalid  Invalid  Clock  Insert  Stall  Util    LIP
Disk    Disk     Port            up      CRC     Word  Delta   Count  Count  Percent Count
  Id     Bay    State         Count    Count    Count
--------------------------------------------------------------------------------------------
[IN  ]          OK                0        0        0      8       0      0    71     0
[OUT ]          OK                0        0        0      0       0      0    52     0
[  32]     0    OK                0        0        0     32       0      0     0     0
[  33]     1    OK                0        0        0     32       0      0     2     0
[  34]     2    OK                0        0        0     24       0      0     0     0
[  35]     3    OK                0        0        0     24       0      0     1     0
[  36]     4    OK                0        0        0      8       0      0     2     0
[  37]     5    OK                0        0        0     24       0      0     4     0


How to find failed disks on a filer? The "vol status -f" command gives you the failed disks on a filer.

# rsh filer12 vol status -f

Broken disks

RAID Disk       Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------       ------  ------------- ---- ---- ---- ----- --------------    --------------
failed          3a.33   3a    2   1   FC:A   -  FCAL 10000 68000/139264000   68552/140395088
failed          4a.28   4a    1   12  FC:A   -  FCAL 10000 68000/139264000   68552/140395088
#

How to find spare disks on a filer? The "vol status -s" command gives you the spare disks on a filer.

# rsh filer12 vol status -s

Spare disks

RAID Disk       Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------       ------  ------------- ---- ---- ---- ----- --------------    --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare           2a.45   2a    2   13  FC:A   -  FCAL 10000 68000/139264000   68552/140395088
spare           4a.57   4a    3   9   FC:A   -  FCAL 10000 68000/139264000   68552/140395088
#

To find out the disks in an aggregate : Use "aggr status -r <aggr_name>" to list all the disks that are part of the aggregate. This command gives the plex, RAID group and disk information.

# rsh filer12 aggr status -r aggr0
Aggregate aggr0 (online, raid_dp) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (normal)

      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------  ------------- ---- ---- ---- ----- --------------    --------------
      dparity   4a.15   4a    2   13  FC:A   -  FCAL 10000 68000/139264000   69536/142410400
      parity    4a.16   4a    4   12  FC:A   -  FCAL 10000 68000/139264000   68552/140395088
      data      4a.22   4a    4   8   FC:A   -  FCAL 10000 68000/139264000   68552/140395088
#

To find out the volumes on a filer : The "vol status" command lists the volumes on a filer. It gives the volume names and their status (online/offline/restricted).

# rsh filer12 vol status
         Volume State      Status            Options
          vol10 online     raid_dp, flex     no_i2p=on
          vol11 online     raid_dp, flex     no_i2p=on
           root online     raid_dp, flex     root, no_i2p=on
          vol12 offline    raid_dp, flex     no_i2p=on
#

More Netapp commands at : http://unixfoo.blogspot.com/search/label/netapp