
EonNAS

Just a few quick notes. I picked up an Infortrend EonNAS through SimplyNAS. They were responsive and quick, and they pre-built the system with a burn-in and an initial RAID setup.

I thought that EonNAS would permit ZFS snapshot replication, but apparently I was mistaken, much to my disappointment. Our network storage model for this revolution is LOCAL STORAGE with backup and offsite replication. We are using two (2) Dell R720s with 12.8TB of production storage each. The Dells have 2x 8-core Xeons and 128GB of RAM. We use Veeam to back up as well as to replicate between hosts. The EonNAS functions as our backup target and onsite backup server. The offsite server will provide business continuity. Our primary SQL database replicates to the second host and retains many snapshots per day in order to give us a pseudo-PiTR. All of this conforms to the scope of our operations.

Our primary file server replicates as frequently as possible, and also employs Volume Shadow Copy to help with incidentals. As our Veeam process irons itself out, we will likely ditch VSS in favor of more Veeam.

The EonNAS 1510 has ten 3TB disks in it. We arranged them in a single pool with RAIDZ2 and two hot spares. We started with a VM guest, running with 8 cores and 16GB of RAM, using VMware's software iSCSI stack to attach to the EonNAS, then using VMDKs attached to Windows. Due to the size of one file server, we were forced to employ Windows dynamic disks in order to exceed the 2TB limit per VMDK.
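
For reference, the layout is roughly what the following would build on a generic ZFS system. The EonNAS manages this through its WebUI, and the device names below are placeholders, so treat this as an illustrative sketch rather than the actual commands run on the appliance.

# Illustrative only: 8 data disks in a RAIDZ2 vdev plus 2 hot spares (device names are hypothetical)
zpool create Pool-1 raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 spare c0t8d0 c0t9d0
# The filesystem later shared over NFS (Pool-1/NFS2 matches the mount statement further down)
zfs create Pool-1/NFS2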

Initially, our "Processing Speed" reported about 70MB/s. This number fell slowly but steadily until, after about 72 hours, it was down to 1MB/s. We began troubleshooting the entire stack. After a week of speaking with Infortrend, Veeam, and other consultants, we still had no resolution. The customer and I elected to test NFS-based access, as that is another intrinsically supported access method for the EonNAS.

The addition of a Linux-based NFS repository had a few caveats:

  • The EonNAS WebUI kept preventing me from allowing root access to the folder; this eventually went away, but I am uncertain what I did. (A quick client-side sanity check is sketched after this list.)
  • NFSv4 produced weird problems which prevented backups from occurring at all.
  • I was not able to successfully get locking to work, so I disabled it.
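
While sorting out the export and root-access issue, a couple of generic client-side checks are useful. These are standard NFS client tools rather than necessarily what I ran at the time; the IP and path come from the production mount statement further down.

# Confirm the EonNAS is exporting the share and answering NFS/mountd RPCs
showmount -e 172.16.1.100
rpcinfo -p 172.16.1.100
# Trial mount as root to verify the export actually allows root writes
mount -t nfs -o nfsvers=3,nolock 172.16.1.100:/Pool-1/NFS2 /mnt
touch /mnt/root-write-test && rm /mnt/root-write-test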

Veeam worked well, and I was able to set up a local user on my Linux VM which elevated itself via sudo.
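
For completeness, here is a minimal sketch of how that elevation can be granted, assuming the veeamuser account listed in the facts below; the exact rights Veeam needs may be broader than this, so adjust to suit.

# /etc/sudoers.d/veeamuser -- illustrative rule; edit safely with: visudo -f /etc/sudoers.d/veeamuser
veeamuser ALL=(ALL) NOPASSWD: ALL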

Our Linux VM, "Zmon", began life as a simple SNMP monitoring station, so it was very small. I increased its resources to four (4) cores and 2GB of RAM. I added a second NIC, using VMXNET3, and connected it to the NFS storage network. The front-facing NIC connects to Veeam with an MTU of 1500, and the storage NIC runs with an MTU of 9000.
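
On Ubuntu 12.10 the storage NIC is configured through /etc/network/interfaces. A sketch follows; the interface name (eth1) and the client address on the storage network are assumptions, not values from these notes.

# /etc/network/interfaces (excerpt) -- eth1 and 172.16.1.50 are assumptions
auto eth1
iface eth1 inet static
    address 172.16.1.50
    netmask 255.255.255.0
    mtu 9000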

Initially, things looked discouraging. The entire process was lots of test -> reconfigure -> test. NFS access was ridiculously easy to set up yet frustratingly obtuse to get right: NFS is so simple, but something with either Linux or the EonNAS kept giving me errors. I finally landed on NFS version 3, 8k reads and writes, and no locking.

When the backup job started, the "Processing Speed" reported around 13MB/s. However, after the first couple hundred megabytes, it quickly climbed to more than 70MB/s. We monitored the "Processing Speed" in Veeam as well as the interface speed of the NICs. The real success was our 3TB file services VM: the backup which had never completed before now ran and maintained high speed throughout. Once we had the job properly configured, the data interface of the EonNAS spent the bulk of its time receiving at over 80MB/sec. This was a substantial improvement and met our performance requirements.
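
On the Linux side, a quick way to watch the receive rate is to sample the kernel's interface counters. A rough sketch, assuming the storage NIC is eth1:

# Print approximate RX rate on eth1 once per second (eth1 is an assumption)
while true; do
  rx1=$(cat /sys/class/net/eth1/statistics/rx_bytes)
  sleep 1
  rx2=$(cat /sys/class/net/eth1/statistics/rx_bytes)
  echo "$(( (rx2 - rx1) / 1048576 )) MB/s"
done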

The backup is now more complicated, and it required a lot of testing. Here is our production mount statement:

mount 172.16.1.100:/Pool-1/NFS2 /media/VeeamPool1/ -o nfsvers=3,rsize=8192,wsize=8192,hard,intr,bg,nolock

Here is the addition to our Linux fstab:

172.16.1.100:/Pool-1/NFS2 /media/VeeamPool1/ nfs nfsvers=3,rsize=8192,wsize=8192,hard,intr,bg,nolock 0 0
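
After a mount -a, the options the client actually negotiated can be double-checked with the standard NFS client tools; for example:

# Verify the mount came up with the intended options
mount -a
nfsstat -m
grep VeeamPool1 /proc/mounts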

The critical notes being:

  • Veeam did not appreciate NFSv4.
  • Linux was happy with NFSv2, v3, or v4, but the agent kept reporting locking or other odd errors.

A Few Facts

  • VMware version 5.1, licensed under VMware Essentials (no vMotion, etc.).
  • Windows Server 2008 R2, running Veeam 6.5.x.
  • zmon is Ubuntu 12.10. I would have used 12.04 Server, but the admin onsite preferred 12.10 with the full desktop. This left us with no additional package requirements.
  • zmon has a local user named veeamuser with SSH access and sudo privileges.

Copyright (c) 2013, Joshua Schmidlkofer <joshua@…>
