Monday, July 27, 2009

NetApp is Generation V storage!!!

By: Rick Rohne

It seems that everyone has a favorite hypervisor, and the tech battles over which hypervisor is better will probably go on for the next few years.

One detail that seems to be overlooked in almost every environment is the storage that hosts the VMs. NetApp is the only major player in that market (that I can think of) which, by luck or maybe some talented forward thinking, has invested a lot of research and development in virtualization. NetApp uses de-duplication and FlexVols to reduce the VM storage footprint for your favorite hypervisor. I recently finished a project that involved hosting XenServer VMs on NetApp storage, and I thought I would share my experiences...

Project Plan



The goal was to consolidate all 30 servers onto 5 Dell servers and a single storage unit. To accomplish this, I went with XenServer 5.0 Embedded Edition using a NetApp FAS2050 to store all of the data.

The NetApp FAS2050 is an active/active clusterable storage appliance. When I scoped the project, I went with a total of 12 disks. Of course, at the time, I forgot that each filer has to have its own disks, so I had to split the disks in half: six for each filer. At first I was a little taken aback by this, but it actually worked out for me in the long run. Since I was limited on disk space, I went with RAID 4 on each aggregate instead of RAID-DP. This gave me four usable disks, with one parity disk and one spare per filer. So in essence, I had approximately 908 GB of usable space on each filer.
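For reference, here is roughly what the aggregate setup looked like from the Data ONTAP console. This is just a sketch; the aggregate name is made up, and your disk counts and sizes will vary:

    # On each filer: build a RAID 4 aggregate from 5 of the 6 disks,
    # leaving the sixth as a hot spare (4 data + 1 parity + 1 spare)
    aggr create aggr1 -t raid4 5

    # Verify the layout and the usable space
    aggr status -v aggr1
    df -A aggr1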

The plan was to separate the VMs from the user data. This is recommended by NetApp and Citrix because it allows your data de-duplication and your VM de-duplication to be treated independently. It also splits the disk I/O so that data reads and writes do not affect server reads and writes, and vice versa. To accomplish this, I would need two separate aggregates.

Filer Configurations


On node1, I created Aggregate1 to hold all of the CIFS file share data along with the Exchange databases. The data was served up using the CIFS protocol and the built-in iSCSI support. Placing the Exchange database on the same disks as the CIFS file systems basically leveled out the I/O requirements of that filer. It also made backing up the data easy, because all of the changed data was sitting on a single filer that was directly attached to an HP tape library. The total available disk space for the aggregate was about 900 GB. I was working with a little under 900 GB of data, so getting it to fit on that small chunk of disk was going to be tricky (or so it seemed, until I ran the de-duplication jobs).
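As a rough sketch of the layout on that filer (the volume, share, and LUN names and sizes here are made up for illustration):

    # FlexVol and share for the CIFS file data
    vol create cifs_data aggr1 500g
    cifs shares -add userdata /vol/cifs_data

    # FlexVol and LUN for the Exchange databases, served over iSCSI
    vol create exch_db aggr1 300g
    lun create -s 250g -t windows /vol/exch_db/exchdb.lun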

The second filer was used to host only VM images. I had well over 1.5 TB of data requirements and had to fit all of it into a 900 GB aggregate. To fit all those VDIs into that small space, I decided to use the XenServer adapter for NetApp Data ONTAP. This gave me thin provisioning, integrated snapshots, and FAS de-duplication, which would later reduce the storage used on this aggregate to 700 GB (over a 50% reduction in storage utilization).

XenServer Configuration


When you set up the XenServer adapter for NetApp, you have the option to select the number of FlexVols to use on your aggregate. Here are the basic guidelines for setting up FlexVols:

The number of FlexVols defaults to 8 and can be changed to any number in the range 1 to 32. Choose the number of FlexVols based on the following:

Increase the number of FlexVols if the data center will have:

  • A lot of VMs that will be snapshotted often


  • A lot of VMs with multiple VDIs

Decrease the number of FlexVols if the data center will have:


  • SnapMirror replication to a Disaster Recovery Site

Now, I started with the default number of FlexVols, which basically created a single storage repository consisting of 8 FlexVols on the NetApp. When using the XenServer adapter for NetApp, you can choose between thin provisioning with FAS de-duplication turned on, or thick provisioning only. I chose thin provisioning with FAS de-duplication, which allows you to use the following features (there's a command-line sketch after the list):

  • Fast cloning of VDIs


  • Fast Snapshot of VDIs


  • FAS data de-duplication
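For reference, creating the SR through the NetApp adapter boils down to a single xe command from the XenServer console. This is only a sketch from memory; the filer address, credentials, and aggregate name are placeholders, and you should check TR-3732 (linked at the end of this post) for the exact device-config parameters:

    xe sr-create name-label="NetApp VM Storage" shared=true type=netapp \
       device-config:target=192.168.1.50 \
       device-config:username=root \
       device-config:password=secret \
       device-config:aggregate=aggr1 \
       device-config:FlexVols=8 \
       device-config:allocation=thin \
       device-config:asis=true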

I created a few Windows Server 2003 and 2008 templates to use when deploying new servers, and deployed new servers from those templates (which basically performs a fast clone in the background). One thing I noticed right away was that fast cloning gave me well over 75% data de-duplication. So in essence, I wasn't really using much disk space to add all these new servers.
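Deploying a new server from one of those templates is a one-liner from the XenServer CLI. A quick sketch (the template and VM names are placeholders):

    # Install a new VM from a template; on the NetApp SR this is a fast clone
    xe vm-install template="W2K3 Gold" new-name-label=web01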

P2V and Disk Allocation


The next part of the project was to perform a P2V on all of the remaining servers that were already running in the environment.

When you P2V a server into the environment, XenServer allocates all of the space on the NetApp while you are performing the import. You have to rely on data de-duplication to go back after the fact and reclaim the duplicate blocks. Here are a few things I found out when performing P2Vs into XenServer:

  1. If possible, try to keep the P2V'd disks on their own storage repository. P2Vs will not de-dupe as well as fast clones, because the machines have less in common with each other, and the blocks on the disk may actually be misaligned. (You can read the Best Practices Guide to determine if your VMs are misaligned.)


  2. Manually run an ASIS (de-duplication) job after every P2V. This reclaims the duplicate space so that you have more room for the next P2V (the exact Data ONTAP commands are in the de-duplication section later in this post).


  3. Try to eliminate white space (free space) if possible. Free space actually gets allocated to the VDI, which grows your allocated space very fast (more on this in a second). XenConvert does not include a way to reduce the free space on a drive, so for some servers I actually used VMware Workstation to perform the P2V so that I could shrink the drive size. I recommend using PlateSpin for this process, as it integrates directly into XenServer (and it reduces the downtime during the migration).


  4. XenServer allocates all of the space in the VDI, even if you perform data de-duplication. This was a little shocking at first, because by my 15th P2V job it appeared that I was out of space, and I had 15 more to go!!! XenServer will actually prevent you from allocating more space than what is physically available on a per-storage-repository basis. You can easily work around this by creating another storage repository; XenServer will see the new storage repository as a clean slate. Of course, remember that once you start doing this, you are actually in the RED zone: the new data that you are adding is being allocated to previously freed-up blocks that have already been de-duplicated. You should monitor the allocated space using the NetApp monitoring tools once you get into the RED zone.

  5. You can check the size of the allocated space using XenCenter. If you click on the SR, you will see the size of the repository reported in the right pane: "Size: 672 GB used of 908 GB total (802 GB allocated)". What this means is that you have 802 GB allocated out of a 908 GB aggregate, but you are only using 672 GB. Once you have allocated the full 908 GB, you will no longer be able to add new virtual disks (although writes can continue to the already-allocated disks). If you must add more disks, you will have to create a new SR or add physical disks. (There's a CLI sketch below for pulling these numbers.)

NOTE: When the new SR is created, it should read "Size: 672 GB used of 908 GB total (0 GB allocated)".
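You can pull the same numbers from the XenServer CLI. A sketch (the SR name is a placeholder, and the UUID comes from the first command):

    # Find the SR UUID
    xe sr-list name-label="NetApp VM Storage" params=uuid

    # Compare allocated space against physically used space
    xe sr-param-get uuid=<sr-uuid> param-name=virtual-allocation
    xe sr-param-get uuid=<sr-uuid> param-name=physical-utilisation
    xe sr-param-get uuid=<sr-uuid> param-name=physical-size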

Project Wrap-Up


In wrapping up the project, I was able to get more than 50% data de-duplication on the XenServer VDIs and about 30% de-duplication on the CIFS volume. The whole project reduced the datacenter footprint from six racks to one. The systems were more manageable and, more importantly, the servers were portable, which allowed the entire data center to be backed up and shipped off-site, or moved to different physical servers running XenServer 5.0.
I am impressed at how easy it was to set up, though there were some speed bumps along the way. Overall, the project was a success.

Now Let's Examine Data De-Duplication


NetApp de-duplication is basically just that... data de-duplication. Similar in spirit to compression, de-duplication works at the block level: it removes duplicate blocks in the file system and creates pointers to a single shared block in their place.

To accomplish this, NetApp provides a process called ASIS (Advanced Single Instance Storage), which performs de-duplication within a volume. In short, NetApp uses the WAFL file system to look for any 4 KB blocks of data that are stored in more than one location (within a single VM or across all VMs) and consolidates them into a single 4 KB block instance.

Now that's all fancy talk, but the reality is this... almost every VM hosted on your SAN is generally running the same operating system, has the same base applications such as anti-virus and management software, and follows very similar update cycles. Because of this, you are generally storing the same data over and over on every VM, and probably adding more and more disk to accommodate the VM sprawl!

By eliminating redundant data objects and referencing just the original object, you get immediate storage space savings. This reduces the initial storage cost and buys you more time before you have to add capacity to your existing storage unit.

The de-duplication job runs on a schedule on a per-volume basis. The de-duplication process itself can negatively impact the performance of a production environment, so you may choose to run it over the weekend or kick it off manually. The job may take anywhere from 30 minutes to a few hours, depending on the amount of duplicated data on the volume. I found that the best time to run the de-dupe job is on a weekend after a major P2V conversion, and approximately once a month after that.
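The relevant Data ONTAP commands are below. This is a sketch; the volume path is an example, and the schedule is just one I found convenient:

    # Enable de-duplication on the volume and scan the existing data
    sis on /vol/vm_flexvol0
    sis start -s /vol/vm_flexvol0

    # Check progress, then see the savings
    sis status /vol/vm_flexvol0
    df -s /vol/vm_flexvol0

    # Or schedule the job for Saturday at 11 PM
    sis config -s sat@23 /vol/vm_flexvol0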

What About Thin Provisioning?


Thin provisioning uses the same concept as data de-duplication; however, there is no process that runs in the background. When you take a snapshot or a fast clone, you are basically creating new pointers to the existing blocks and locking those pointers until the data is deleted or changed. NetApp still uses the WAFL file system to share any 4 KB blocks of data that are referenced from more than one location; the main difference is that nothing has to run afterward to clean things up.
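From the XenServer side, this is why snapshots and clones come back almost instantly on the NetApp SR. A quick sketch (the VM names are placeholders):

    # Snapshot a running VM: new pointers are created, no data is copied
    xe vm-snapshot vm=web01 new-name-label=web01-before-patch

    # Fast-clone a halted VM
    xe vm-clone vm=web01 new-name-label=web02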

For performance reasons, you may choose to separate volumes and aggregates so that the physical disks that hold some VMs do not impact the performance of others. The best way to get the most de-duplication in your environment and still have some kind of separation is to ensure that all VMs on a volume share a common installation source. For instance, thin-provisioned clones should be hosted on one volume, while P2V'd VMs should be hosted on their own volume. If you have different flavors of operating systems, you can place each group of OSes on a separate volume. Finally, try to host the read/write data outside of the operating system image, on its own volume.

Fast clones and FAS de-duplication can be used together to get the best storage consolidation possible.


50% Less Storage Guarantee


NetApp has also announced a 50% less storage guarantee! If you don’t use 50% less storage with NetApp or reduce your data by 35% on non-NetApp storage with NetApp V-Series, then NetApp will provide the additional capacity to meet the shortfall at no additional charge. Read the details here:
http://www.netapp.com/us/solutions/infrastructure/virtualization/guarantee.html

NetApp Best Practices Docs


Finally, I thought I would share some of the best practices documents related to Virtualization and NetApp!

Microsoft Hyper-V Storage Best Practices
http://media.netapp.com/documents/tr-3702.pdf

XENServer Storage Best Practices
http://media.netapp.com/documents/tr-3732.pdf

VMware VI3 Storage Best Practices
http://media.netapp.com/documents/tr-3428.pdf

NetApp does many great things for a virtual environment, whether you are using VMware, XenServer, or Hyper-V. There is a nice blog that I've been watching that really dives deep into the technical details of pretty much everything in the NetApp virtualization solution offerings. You can find that blog here: http://blogs.netapp.com/virtualstorageguy/.

Thanks for reading!