ZFS is a combined file system and logical volume manager designed by Oracle/Sun Microsystems. The features of ZFS include support for high storage capacities, integration of the concepts of filesystem and volume management, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z and native NFSv4 ACLs.

This ZFS guide provides an overview of ZFS and its administration commands that will be helpful for beginners.

ZFS Pool : ZFS organizes physical devices into logical pools called storage pools. Both individual disks and array logical unit numbers (LUNs) that are visible to the operating system can be included in a ZFS pool. Pools can be created as disks striped together with no redundancy (RAID 0), mirrored disks (RAID 1), striped mirror sets (RAID 1+0), or striped with parity (RAID-Z). Additional disks can be added to a pool at any time, but they must be added with the same RAID level as the existing configuration.
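
As a quick sketch (the pool and device names below are hypothetical), a mirrored pool could be created and later grown by adding another mirrored pair:

bash-3.00# zpool create demopool mirror c2t3d0 c2t4d0
bash-3.00# zpool add demopool mirror c2t5d0 c2t6d0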

ZFS Filesystem : ZFS offers a POSIX-compliant file system interface to the Solaris/OpenSolaris operating system. A ZFS file system is built in one and only one storage pool, but a storage pool may contain more than one file system. Unlike traditional Solaris file systems, ZFS file systems are not managed through the /etc/vfstab file; they are managed with the zfs command and mounted automatically by ZFS according to their mountpoint property. The common way to mount a ZFS file system is simply to define it in a pool. All defined ZFS file systems automatically mount at boot time unless configured otherwise. Here are the basic commands for getting started with ZFS.

Creating Storage pool using “zpool create” :

bash-3.00# zpool create demovol raidz c2t1d0 c2t2d0 
bash-3.00# zpool status 
  pool: demovol 
state: ONLINE 
scrub: none requested 
config:
        NAME         STATE     READ WRITE CKSUM 
        demovol      ONLINE       0     0     0 
          raidz1     ONLINE       0     0     0 
            c2t1d0   ONLINE       0     0     0 
            c2t2d0   ONLINE       0     0     0
errors: No known data errors 
bash-3.00#

“zfs list” will give the details of the pool and the ZFS filesystems in it.

bash-3.00# zfs list 
NAME                   USED  AVAIL  REFER  MOUNTPOINT 
demovol               1.00G  900G  38.1K  /demovol 
bash-3.00#

Creating File Systems : “zfs create” is used to create a ZFS filesystem.

bash-3.00# zfs create demovol/testing 
bash-3.00# zfs list 
NAME                   USED  AVAIL  REFER  MOUNTPOINT 
demovol               1.00G  900G  38.1K  /demovol 
demovol/testing       32.6K  900G  32.6K  /demovol/testing 
bash-3.00#
bash-3.00# ls /dev/zvol/dsk/demovol
Note: /dev/zvol/dsk/demovol only lists device nodes for ZFS volumes (zvols) created with “zfs create -V”; ordinary file systems such as demovol/testing do not get a device file there.
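
If you do want a device node under /dev/zvol/dsk, you would create a ZFS volume instead of a file system. A minimal sketch (the volume name and size here are made up):

bash-3.00# zfs create -V 1g demovol/vol1
bash-3.00# ls /dev/zvol/dsk/demovol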

Setting Quota for the filesystem : Until a quota is set, the filesystem shows the total available space of the containing ZFS pool.

bash-3.00# zfs set quota=10G demovol/testing 
bash-3.00# zfs list 
NAME                     USED  AVAIL  REFER  MOUNTPOINT 
demovol                 1.00G  900G   39.9K  /demovol 
demovol/testing         32.6K  10.0G  32.6K  /demovol/testing
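
To inspect the quota later, or to remove it again, something like the following should work:

bash-3.00# zfs get quota demovol/testing
bash-3.00# zfs set quota=none demovol/testing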

Creating a snapshot :

bash-3.00# zfs snapshot demovol/testing@snap1 
bash-3.00# zfs list 
NAME                     USED  AVAIL  REFER  MOUNTPOINT 
demovol                 1.00G  900G   39.9K  /demovol 
demovol/testing         32.6K  10.0G  32.6K  /demovol/testing 
demovol/testing@snap1        0      -  32.6K  - 
bash-3.00#
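
The snapshot can later be used to roll the file system back to that point in time (discarding any changes made after the snapshot was taken), roughly like this:

bash-3.00# zfs rollback demovol/testing@snap1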

Get all properties of a ZFS filesystem :

bash-3.00# zfs get all demovol/testing 
NAME             PROPERTY         VALUE                  SOURCE 
demovol/testing  type             filesystem             - 
demovol/testing  creation         Mon Feb  9  9:05 2009  - 
demovol/testing  used             32.6K                  - 
demovol/testing  available        10.0G                  - 
demovol/testing  referenced       32.6K                  - 
demovol/testing  compressratio    1.00x                  - 
demovol/testing  mounted          yes                    - 
demovol/testing  quota            10G                    local 
demovol/testing  reservation      none                   default 
demovol/testing  recordsize       128K                   default 
demovol/testing  mountpoint       /demovol/testing       default 
..
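
Individual properties can be changed with “zfs set”; as a small sketch, enabling compression on the file system and reading the property back:

bash-3.00# zfs set compression=on demovol/testing
bash-3.00# zfs get compression demovol/testing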

Cloning a ZFS filesystem from a snapshot :

bash-3.00# zfs clone demovol/testing@snap1 demovol/clone22 
bash-3.00# zfs list 
NAME                     USED  AVAIL  REFER  MOUNTPOINT 
demovol                 1.00G  900G   39.9K  /demovol 
demovol/clone22             0  900G   32.6K  /demovol/clone22 
demovol/testing         32.6K  10.0G  32.6K  /demovol/testing 
demovol/testing@snap1        0      -  32.6K  - 
bash-3.00#
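
A clone initially depends on the snapshot it was created from; “zfs promote” can reverse that dependency so the clone becomes the origin, for example:

bash-3.00# zfs promote demovol/clone22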

Monitoring I/O performance of the ZFS storage pool :

bash-3.00# zpool  iostat 1 
               capacity     operations    bandwidth 
pool         used  avail   read  write   read  write 
----------  -----  -----  -----  -----  -----  ----- 
demovol     4.95M  900G      0      0      0     35 
demovol     4.95M  900G      0      0      0      0 
demovol     4.95M  900G      0      0      0      0 
demovol     4.95M  900G      0      0      0      0
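
For per-device statistics, the same command can be run with the -v flag, for example:

bash-3.00# zpool iostat -v demovol 1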

Linux network bonding is the creation of a single bonded interface by combining two or more Ethernet interfaces. This provides high availability for your network interface and can improve performance. Bonding is the same idea as port trunking or teaming.

Bonding allows you to aggregate multiple ports into a single group, effectively combining the bandwidth into a single connection. Bonding also allows you to create multi-gigabit pipes to transport traffic through the highest-traffic areas of your network. For example, you can aggregate three 1-gigabit ports into a single trunk, which is roughly equivalent to having one interface with 3 gigabits of bandwidth.

The steps for setting up bonding on Oracle Enterprise Linux and Red Hat Enterprise Linux are as follows.

Step 1.

Create the file ifcfg-bond0 with the IP address, netmask and gateway. Shown below is my test bonding config file.

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
IPADDR=192.168.1.12
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
USERCTL=no
BOOTPROTO=none
ONBOOT=yes

Step 2.

Modify the eth0, eth1 and eth2 configurations as shown below. Comment out or remove the IP address, netmask, gateway and hardware address from each of these files, since these settings should come only from the ifcfg-bond0 file above. Make sure you add the MASTER and SLAVE directives to these files.

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
# Settings for Bond
MASTER=bond0
SLAVE=yes

$ cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
# Settings for bonding
MASTER=bond0
SLAVE=yes

$ cat /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

Step 3.

Set the parameters for the bond0 bonding kernel module. Select the network bonding mode based on your needs, documented at http://unixfoo.blogspot.com/2008/02/network-bonding-part-ii-modes-of.html. The modes are:

  • mode=0 (Balance Round Robin)
  • mode=1 (Active backup)
  • mode=2 (Balance XOR)
  • mode=3 (Broadcast)
  • mode=4 (802.3ad)
  • mode=5 (Balance TLB)
  • mode=6 (Balance ALB)

Add the following lines to /etc/modprobe.conf

# bonding commands
alias bond0 bonding
options bond0 mode=1 miimon=100
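
On newer releases (for example RHEL 6 and later), /etc/modprobe.conf is no longer used; the same options would typically go into a drop-in file such as /etc/modprobe.d/bonding.conf (the file name is a convention, not a requirement):

# /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 mode=1 miimon=100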

Step 4.

Load the bond driver module from the command prompt.

$ modprobe bonding

Step 5.

Restart the network, or restart the computer.

$ service network restart # Or restart computer

When the machine is back up, check the bonding status in /proc (the sample output below happens to come from a system running mode 6, adaptive load balancing).

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.0.2 (March 23, 2006)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:13:72:80:62:f0

Look at ifconfig -a and check that your bond0 interface is active. You are done! For more details on the different modes of bonding, please refer to unixfoo’s modes of bonding.

To verify that failover bonding works:

  • Do an ifdown eth0, then check the “Currently Active Slave” field in /proc/net/bonding/bond0.
  • Run a continuous ping to the bond0 IP address from a different machine, then ifdown the active interface; the ping should not break (see the commands sketched below).
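
A minimal sketch of that failover check, assuming eth0 happens to be the currently active slave:

$ ping 192.168.1.12            # run from another machine and leave it going
$ ifdown eth0                  # on the bonded host, take down the active slave
$ grep "Currently Active Slave" /proc/net/bonding/bond0
$ ifup eth0                    # bring the interface back when done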

Netapp SnapVault is a heterogeneous disk-to-disk backup solution for Netapp filers and servers running other operating systems (Windows, Linux, Solaris, HP-UX and AIX). Basically, SnapVault uses Netapp snapshot technology to take point-in-time snapshots and store them as online backups. In the event of data loss or corruption on a filer, the backup data can be restored from the SnapVault filer with less downtime. It has significant advantages over traditional tape backups, such as:

  • Reduced backup windows compared to traditional tape-based backup
  • Media cost savings
  • No backup/recovery failures due to media errors
  • Simple and fast recovery of corrupted or destroyed data

SnapVault consists of two major entities: SnapVault clients and a SnapVault storage server. A SnapVault client (a Netapp filer or a Unix/Windows server) is the system whose data should be backed up. The SnapVault server is a Netapp filer, which pulls the data from the clients and backs it up. For server-to-Netapp SnapVault, the Open Systems SnapVault (OSSV) client software provided by Netapp must be installed on the servers. Using this agent software, the SnapVault server can pull and back up data onto its backup qtrees. SnapVault protects data on a client system by maintaining a number of read-only versions (snapshots) of that data on the SnapVault filer. The replicated data on the SnapVault server can be accessed via NFS or CIFS, and client systems can restore entire directories or single files directly from the SnapVault filer. SnapVault requires both primary and secondary licenses.

How does SnapVault work?

When SnapVault is set up, a complete copy of the data set is initially pulled across the network to the SnapVault filer. This initial, or baseline, transfer may take some time to complete, because it duplicates the entire source data set on the server, much like a level-zero backup to tape. Each subsequent backup transfers only the data blocks that have changed since the previous backup. When the initial full backup is performed, the SnapVault filer stores the data in a qtree and creates a snapshot image of the volume containing the backed-up data. SnapVault creates a new snapshot copy with every transfer and allows retention of a large number of copies according to a schedule configured by the backup administrator. Each copy consumes an amount of disk space proportional to the differences between it and the previous copy.

Snapvault commands :

The first step in setting up SnapVault backups between filers is to install the SnapVault licenses and enable SnapVault on both the source and destination filers.

Source filer – filer1

    filer1> license add XXXXX
    filer1> options snapvault.enable on
    filer1> options snapvault.access host=filer2

Destination filer – filer2

    filer2> license add XXXXX
    filer2> options snapvault.enable on
    filer2> options snapvault.access host=filer1

Consider filer2:/vol/snapvault_volume as the SnapVault destination volume, where all the backups are stored. The source data is filer1:/vol/datasource/qtree1. Since all the backups on the destination filer (filer2) are managed by SnapVault, manually disable the regular scheduled snapshots on the destination volume; its snapshots will be managed by SnapVault instead. Disable the Netapp scheduled snapshots with the command below.

    filer2> snap sched snapvault_volume 0 0 0

Creating the initial backup: Initiate the baseline data transfer (the first full backup) of the data from source to destination before scheduling SnapVault backups. On the destination filer, execute the command below to start the baseline transfer. The time it takes to complete depends on the size of the data in the source qtree and the network bandwidth. Check “snapvault status” on the source and destination filers to monitor the baseline transfer progress.

    filer2> snapvault start -S filer1:/vol/datasource/qtree1  filer2:/vol/snapvault_volume/qtree1

Creating backup schedules: Once the initial baseline transfer has completed, SnapVault schedules have to be created for the incremental backups. The retention period of the backups depends on the schedule created. The snapshot name should be prefixed with “sv_”. The schedule is specified in the form count[@day_list][@hour_list], where count is the number of snapshot copies to retain.

On source filer:

For example, let us create the schedules on the source as below: 2 hourly, 2 daily and 2 weekly SnapVault snapshots. These snapshot copies on the source enable administrators to recover directly from the source filer without accessing any copies on the destination, which allows more rapid restores. However, it is not necessary to retain a large number of copies on the primary; higher retention levels are configured on the secondary. The commands below show how to create the hourly, daily and weekly SnapVault snapshot schedules.

    filer1> snapvault snap sched datasource sv_hourly 2@0-22 
    filer1> snapvault snap sched datasource sv_daily  2@23
    filer1> snapvault snap sched datasource sv_weekly 2@23@sun

On snapvault filer:

Configure the SnapVault schedules on the destination based on the retention period you need for the backups. Here, the sv_hourly schedule checks all source qtrees once per hour for a new snapshot copy called sv_hourly.0. If it finds such a copy, it updates the SnapVault qtrees with the new data from the primary and then takes a snapshot copy of the destination volume, also called sv_hourly.0. If you don’t use the -x option, the secondary does not contact the primary and transfer the snapshot copy; it just creates a snapshot copy of the destination volume.

    filer2> snapvault snap sched -x snapvault_volume sv_hourly 2@0-22
    filer2> snapvault snap sched -x snapvault_volume sv_daily  2@23@sun-fri
    filer2> snapvault snap sched -x snapvault_volume sv_weekly 2@23@sun

To check the SnapVault status, use the command “snapvault status” on either the source or the destination filer. To see the backups, run “snap list” on the destination volume; that will list all the backup copies, their creation times, and so on.
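
For example, on the destination filer these checks would look roughly like this:

    filer2> snapvault status
    filer2> snap list snapvault_volume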

Restoring data : Restoring data is straightforward: mount the SnapVault destination volume via NFS or CIFS and copy the required data out of the appropriate backup snapshot.
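
Alternatively, a backed-up qtree can be pulled back with the “snapvault restore” command, run on the primary filer. A sketch using the qtree names from this example (treat the exact paths as illustrative):

    filer1> snapvault restore -S filer2:/vol/snapvault_volume/qtree1 filer1:/vol/datasource/qtree1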

You can also try Netapp Protection Manager to manage the SnapVault backups, whether they come from OSSV or from Netapp primary storage. Protection Manager is based on Netapp Operations Manager (aka Netapp DFM). It is a client-based UI with which you connect to Operations Manager and protect your storage.

To read about how to tune the performance and speed of Netapp SnapMirror or SnapVault replication transfers and adjust the transfer bandwidth, see Tuning Snapmirror & Snapvault replication data transfer speed.

Static routes are usually added with the “route add” command. The drawback of the ‘route’ command is that when Linux reboots, it forgets the static routes. To make them persistent across reboots, add them to /etc/sysconfig/network-scripts/route-<interface> (for example, route-eth0 for the eth0 interface).

To add a static route using “route add”:

# route add -net 192.168.100.0 netmask 255.255.255.0 gw 192.168.10.1 dev eth0
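
On distributions that ship the iproute2 tools, roughly the same route can also be added with the “ip” command; an equivalent sketch of the command above:

# ip route add 192.168.100.0/24 via 192.168.10.1 dev eth0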

Adding Persistent static route:

You need to edit the /etc/sysconfig/network-scripts/route-eth0 file to define static routes for the eth0 interface.

GATEWAY0=192.168.10.1
NETMASK0=255.255.255.0
ADDRESS0=192.168.100.0

GATEWAY1=10.64.34.1
NETMASK1=255.255.255.240
ADDRESS1=10.64.34.10
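
Note that Red Hat’s network scripts also accept an ip-command style syntax in the same route-eth0 file; a sketch of the first route above in that form:

192.168.100.0/24 via 192.168.10.1 dev eth0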

Save and close the file. Restart networking:

# service network restart

Verify new routing table:

# route -n

# netstat -nr