Monthly Archives: October 2009

  • NetApp SnapMirror Setup Guide

    SnapMirror is a licensed utility on NetApp filers that replicates data from one filer to another. It works at the volume level or the qtree level and is mainly used for disaster recovery and replication.

    SnapMirror needs a source and a destination filer. (When the source and destination are the same filer, the SnapMirror relationship is local to that filer. This is useful when you have to replicate volumes within a filer; if you need DR capability for a volume within a single filer, look at SyncMirror instead.)

    Synchronous SnapMirror is a SnapMirror feature in which the data on one system is replicated on another system at, or near, the same time it is written to the first system. Synchronous SnapMirror synchronously replicates data between single or clustered storage systems situated at remote sites using either an IP or a Fibre Channel connection. Before Data ONTAP saves data to disk, it collects written data in NVRAM. Then, at a point in time called a consistency point, it sends the data to disk.

    When the Synchronous SnapMirror feature is enabled, the source system forwards data to the destination system as it is written in NVRAM. Then, at the consistency point, the source system sends its data to disk and tells the destination system to also send its data to disk.

    This guide walks you quickly through SnapMirror setup and the related commands.

    1) Enable SnapMirror on the source and destination filers

    source-filer> options snapmirror.enable
    snapmirror.enable            on
    source-filer>
    source-filer> options snapmirror.access
    snapmirror.access            legacy
    source-filer>
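
    The output above just checks the current settings. If snapmirror.enable shows off on either filer, it can be switched on with the same options command; a quick sketch (run on both the source and the destination filer):

    source-filer> options snapmirror.enable on
    source-filer>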

    2) SnapMirror access

    Make sure the destination filer has SnapMirror access to the source filer. On the source filer, the destination filer’s name or IP address must be listed in /etc/snapmirror.allow. Use wrfile to add entries to /etc/snapmirror.allow.

    source-filer> rdfile /etc/snapmirror.allow
    destination-filer
    destination-filer2
    source-filer>
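
    Entries can be appended without overwriting the file by using wrfile -a; for example, the second entry above could have been added like this:

    source-filer> wrfile -a /etc/snapmirror.allow destination-filer2
    source-filer>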

    3) Initializing a SnapMirror relationship

    Volume SnapMirror: Create a destination volume on the destination NetApp filer, of the same size as the source volume or larger. For volume SnapMirror, the destination volume must be in restricted mode. For example, to snapmirror a 100G volume, we create a 100G destination volume and restrict it.

    destination-filer> vol create demo_destination aggr01 100G
    destination-filer> vol restrict demo_destination

    Volume SnapMirror creates a Snapshot copy before performing the initial transfer. This copy is referred to as the baseline Snapshot copy. After the initial transfer of all data in the volume, VSM (Volume SnapMirror) sends to the destination only the blocks that have changed since the last successful replication. When SnapMirror performs an update transfer, it creates another new Snapshot copy and compares it with the previous one to identify the changed blocks. These changed blocks are sent as part of the update transfer.

    SnapMirror is always driven by the destination filer, so the snapmirror initialize has to be run on the destination filer. The command below starts the baseline transfer.

    destination-filer> snapmirror initialize -S source-filer:demo_source destination-filer:demo_destination
    Transfer started.
    Monitor progress with ‘snapmirror status’ or the snapmirror log.
    destination-filer>

    Qtree SnapMirror: For qtree SnapMirror, you should not create the destination qtree; the snapmirror command creates it automatically. Creating a destination volume of the required size is enough.

    Qtree SnapMirror determines changed data by first scanning the inode file for inodes that have changed, and then scanning the changed inodes that belong to the qtree of interest for changed data blocks. The SnapMirror software then transfers only the new or changed data blocks from the Snapshot copy that is associated with the designated qtree. On the destination volume, a new Snapshot copy is then created that contains a complete point-in-time copy of the entire destination volume, but that is associated specifically with the particular qtree that has been replicated.

    destination-filer> snapmirror initialize -S source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree
    Transfer started.
    Monitor progress with ‘snapmirror status’ or the snapmirror log.

    4) Monitoring the status: SnapMirror data transfer status can be monitored from either the source or the destination filer. Use “snapmirror status” to check it.

    destination-filer> snapmirror status
    Snapmirror is on.
    Source                          Destination                          State          Lag Status
    source-filer:demo_source        destination-filer:demo_destination   Uninitialized  -   Transferring (1690 MB done)
    source-filer:/vol/demo1/qtree   destination-filer:/vol/demo1/qtree   Uninitialized  -   Transferring (32 MB done)
    destination-filer>

    5) SnapMirror schedule: This is the schedule used by the destination filer for updating the mirror. It tells the SnapMirror scheduler when transfers should be initiated. The schedule field can either contain the word sync, for synchronous mirroring, or a cron-style specification of when to update the mirror. The cron-style schedule contains four space-separated fields: minute, hour, day of month, and day of week.

    If you want to sync the data on a fixed schedule, set it in the destination filer’s /etc/snapmirror.conf. The time settings are similar to Unix cron. You can request synchronous SnapMirror in /etc/snapmirror.conf by using “sync” instead of the cron-style schedule.

    destination-filer> rdfile /etc/snapmirror.conf
    source-filer:demo_source        destination-filer:demo_destination - 0 * * *   # This syncs every hour
    source-filer:/vol/demo1/qtree   destination-filer:/vol/demo1/qtree - 0 21 * *  # This syncs every day at 9:00 pm
    destination-filer>
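
    For synchronous replication, the schedule field is replaced by the keyword sync. A minimal sketch of such an entry, reusing the demo volumes from above:

    source-filer:demo_source        destination-filer:demo_destination - sync  # synchronous mirror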

    6) Other SnapMirror commands

    • To break a SnapMirror relationship: run snapmirror quiesce followed by snapmirror break (see the example below).
    • To update the SnapMirror destination: run snapmirror update.
    • To resync a broken relationship: run snapmirror resync.
    • To abort a running transfer: run snapmirror abort.
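
    For example, breaking and later resyncing the volume mirror from the earlier examples would look roughly like this (all commands are run on the destination filer):

    destination-filer> snapmirror quiesce demo_destination
    destination-filer> snapmirror break demo_destination
    destination-filer> snapmirror resync -S source-filer:demo_source destination-filer:demo_destination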

    SnapMirror also provides multipath support. More than one physical path between a source and a destination system may be desired for a mirror relationship. Multipath support allows SnapMirror traffic to be load-balanced between these paths and provides failover in the event of a network outage.

    To learn how to tune the performance and speed of NetApp SnapMirror or SnapVault replication transfers and adjust the transfer bandwidth, see Tuning Snapmirror & Snapvault replication data transfer speed.

  • ZFS: Basic administration guide

    ZFS is a combined file system and logical volume manager designed by Oracle/Sun Microsystems. The features of ZFS include support for high storage capacities, integration of the concepts of filesystem and volume management, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z and native NFSv4 ACLs.

    This ZFS guide provides an overview of ZFS and its administration commands that will be helpful for beginners.

    ZFS Pool: ZFS organizes physical devices into logical pools called storage pools. Both individual disks and array logical unit numbers (LUNs) that are visible to the operating system can be included in a ZFS pool. These pools can be created as disks striped together with no redundancy (RAID 0), mirrored disks (RAID 1), striped mirror sets (RAID 1+0), or striped with parity (RAID-Z). Additional disks can be added to pools at any time, but they must be added at the same RAID level.
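
    As a quick sketch, a mirrored (RAID 1) pool could be created and later grown with another mirrored pair like this (datapool and the c3t*d0 devices are hypothetical names):

    bash-3.00# zpool create datapool mirror c3t1d0 c3t2d0
    bash-3.00# zpool add datapool mirror c3t3d0 c3t4d0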

     

    ZFS Filesystem: ZFS offers a POSIX-compliant file system interface on the Solaris/OpenSolaris operating system. A ZFS file system is built in one and only one storage pool, but a storage pool may have more than one defined file system. Unlike traditional Solaris file systems, ZFS file systems are not managed through the /etc/vfstab file; the common way to mount a ZFS file system is simply to define it within a pool. All defined ZFS file systems automatically mount at boot time unless otherwise configured. Here are the basic commands for getting started with ZFS.

    Creating a storage pool using “zpool create”:

    bash-3.00# zpool create demovol raidz c2t1d0 c2t2d0 
    bash-3.00# zpool status 
      pool: demovol 
    state: ONLINE 
    scrub: none requested 
    config:
            NAME         STATE     READ WRITE CKSUM 
            demovol      ONLINE       0     0     0 
              raidz1     ONLINE       0     0     0 
                c2t1d0   ONLINE       0     0     0 
                c2t2d0   ONLINE       0     0     0
    errors: No known data errors 
    bash-3.00#

    “zfs list” gives the details of the pool and the other ZFS filesystems.

    bash-3.00# zfs list 
    NAME                   USED  AVAIL  REFER  MOUNTPOINT 
    demovol               1.00G  900G  38.1K  /demovol 
    bash-3.00#

    Creating file systems: “zfs create” is used to create a ZFS filesystem.

    bash-3.00# zfs create demovol/testing 
    bash-3.00# zfs list 
    NAME                   USED  AVAIL  REFER  MOUNTPOINT 
    demovol               1.00G  900G  38.1K  /demovol 
    demovol/testing       32.6K  900G  32.6K  /demovol/testing 
    bash-3.00#
    bash-3.00# ls /dev/zvol/dsk/demovol    # this directory holds device files for any ZFS volumes (zvols) created in the pool

    Setting a quota for the filesystem: Until a quota is set, the filesystem shows the total available space of the containing ZFS pool.

    bash-3.00# zfs set quota=10G demovol/testing 
    bash-3.00# zfs list 
    NAME                     USED  AVAIL  REFER  MOUNTPOINT 
    demovol                 1.00G  900G   39.9K  /demovol 
    demovol/testing         32.6K  10.0G  32.6K  /demovol/testing
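
    The new limit can be confirmed with “zfs get”; the expected output is along these lines:

    bash-3.00# zfs get quota demovol/testing 
    NAME             PROPERTY  VALUE  SOURCE 
    demovol/testing  quota     10G    local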

    Creating a snapshot:

    bash-3.00# zfs snapshot demovol/testing@snap1 
    bash-3.00# zfs list 
    NAME                     USED  AVAIL  REFER  MOUNTPOINT 
    demovol                 1.00G  900G   39.9K  /demovol 
    demovol/testing         32.6K  10.0G  32.6K  /demovol/testing 
    demovol/testing@snap1        0      -  32.6K  - 
    bash-3.00#
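
    If the file system later needs to be reverted to this snapshot, a rollback does it (this discards any changes made after the snapshot was taken):

    bash-3.00# zfs rollback demovol/testing@snap1 
    bash-3.00#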

    Getting all properties of a ZFS filesystem:

    bash-3.00# zfs get all demovol/testing 
    NAME             PROPERTY         VALUE                  SOURCE 
    demovol/testing  type             filesystem             - 
    demovol/testing  creation         Mon Feb  9  9:05 2009  - 
    demovol/testing  used             32.6K                  - 
    demovol/testing  available        10.0G                  - 
    demovol/testing  referenced       32.6K                  - 
    demovol/testing  compressratio    1.00x                  - 
    demovol/testing  mounted          yes                    - 
    demovol/testing  quota            10G                    local 
    demovol/testing  reservation      none                   default 
    demovol/testing  recordsize       128K                   default 
    demovol/testing  mountpoint       /demovol/testing       default 
    ..

    Cloning a ZFS filesystem from a snapshot:

    bash-3.00# zfs clone demovol/testing@snap1 demovol/clone22 
    bash-3.00# zfs list 
    NAME                     USED  AVAIL  REFER  MOUNTPOINT 
    demovol                 1.00G  900G   39.9K  /demovol 
    demovol/clone22             0  900G   32.6K  /demovol/clone22 
    demovol/testing         32.6K  10.0G  32.6K  /demovol/testing 
    demovol/testing@snap1        0      -  32.6K  - 
    bash-3.00#
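
    A clone remains dependent on the snapshot it was created from; if the clone should no longer depend on the origin file system, it can be promoted, e.g.:

    bash-3.00# zfs promote demovol/clone22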

    Monitoring ZFS storage pool I/O performance:

    bash-3.00# zpool  iostat 1 
                   capacity     operations    bandwidth 
    pool         used  avail   read  write   read  write 
    ----------  -----  -----  -----  -----  -----  ----- 
    demovol     4.95M  900G      0      0      0     35 
    demovol     4.95M  900G      0      0      0      0 
    demovol     4.95M  900G      0      0      0      0 
    demovol     4.95M  900G      0      0      0      0
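
    Per-device statistics can also be requested with the -v flag; a quick sketch:

    bash-3.00# zpool iostat -v demovol 1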

  • Linux Network bonding – setup guide

    Linux network bonding is the creation of a single bonded interface by combining two or more Ethernet interfaces. It provides high availability for your network interface and can improve performance. Bonding is the same idea as port trunking or teaming.

    Bonding allows you to aggregate multiple ports into a single group, effectively combining the bandwidth into a single connection. Bonding also allows you to create multi-gigabit pipes to transport traffic through the highest-traffic areas of your network. For example, you can aggregate three 1-megabit ports into a three-megabit trunk port; that is equivalent to having one interface with three megabits of speed.

    The steps for bonding on Oracle Enterprise Linux and Red Hat Enterprise Linux are as follows.

    Step 1.

    Create the file ifcfg-bond0 with the IP address, netmask and gateway. Shown below is my test bonding config file.

    $ cat /etc/sysconfig/network-scripts/ifcfg-bond0

    DEVICE=bond0
    IPADDR=192.168.1.12
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes

    Step 2.

    Modify the eth0, eth1 and eth2 configurations as shown below. Comment out or remove the IP address, netmask, gateway and hardware address from each of these files, since those settings should come only from the ifcfg-bond0 file above. Make sure you add the MASTER and SLAVE directives to these files.

    $ cat /etc/sysconfig/network-scripts/ifcfg-eth0

    DEVICE=eth0
    BOOTPROTO=none
    ONBOOT=yes
    # Settings for Bond
    MASTER=bond0
    SLAVE=yes

    $ cat /etc/sysconfig/network-scripts/ifcfg-eth1

    DEVICE=eth1
    BOOTPROTO=none
    ONBOOT=yes
    USERCTL=no
    # Settings for bonding
    MASTER=bond0
    SLAVE=yes

    $ cat /etc/sysconfig/network-scripts/ifcfg-eth2

    DEVICE=eth2
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes

    Step 3.

    Set the parameters for the bond0 bonding kernel module. Select the network bonding mode based on your needs, as documented at http://unixfoo.blogspot.com/2008/02/network-bonding-part-ii-modes-of.html. The modes are:

    • mode=0 (Balance Round Robin)
    • mode=1 (Active backup)
    • mode=2 (Balance XOR)
    • mode=3 (Broadcast)
    • mode=4 (802.3ad)
    • mode=5 (Balance TLB)
    • mode=6 (Balance ALB)

    Add the following lines to /etc/modprobe.conf

    # bonding commands
    alias bond0 bonding
    options bond0 mode=1 miimon=100

    Step 4.

    Load the bond driver module from the command prompt.

    $ modprobe bonding

    Step 5.

    Restart the network, or restart the computer.

    $ service network restart # Or restart computer

    When the machine boots up check the proc settings.

    $ cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.0.2 (March 23, 2006)

    Bonding Mode: adaptive load balancing
    Primary Slave: None
    Currently Active Slave: eth2
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0

    Slave Interface: eth2
    MII Status: up
    Link Failure Count: 0
    Permanent HW addr: 00:13:72:80:62:f0

    Look at ifconfig -a and check that your bond0 interface is active. You are done! For more details on the different modes of bonding, please refer to unixfoo’s modes of bonding.

    To verify that failover bonding works:

    • Do an ifdown on the currently active slave interface and check the “Currently Active Slave” field in /proc/net/bonding/bond0 (see the sketch after this list).
    • Run a continuous ping to the bond0 IP address from a different machine and ifdown the active interface; the ping should not break.
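
    A minimal sketch of the first check, assuming eth2 is the active slave as in the proc output above (which interface the driver fails over to may vary):

    $ ifdown eth2
    $ grep "Currently Active Slave" /proc/net/bonding/bond0
    Currently Active Slave: eth0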