Lesser-known enhancements in the latest VNX code: Pool LUNs for FILE

You may remember my posts on the latest VNX OE code that brought with it some highly publicized enhancements, but I wanted to take this time to cover a few of the lesser-known ones.  In this post, I’m going to talk about provisioning BLOCK LUNs for the FILE side.

 

Historically, with DART 5.x, 6.x, and even VNX OE for FILE 7.0, if you wanted to provision LUNs to the FILE side, they had to come from a RAID group.  This meant that you couldn’t take advantage of major block enhancements like FAST VP tiering.  Starting with VNX OE for FILE 7.1, you can create LUNs from your pool and provision them to the FILE front end.

 

For those of you who are not familiar with this process, let me walk you through it.  We’ll start with a pool.  In this example, I created one large pool made up of Flash, SAS, and NL-SAS drives.

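If you prefer the CLI to Unisphere, a pool like this can also be created with naviseccli.  Take this as a rough sketch rather than gospel: the SP address, pool name, and disk IDs (bus_enclosure_disk) below are placeholders, so verify the exact syntax against the naviseccli help on your code level.

# create a mixed pool from the Block CLI (disk IDs are examples)
naviseccli -h SPA_IP storagepool -create -name "Pool 0" -rtype r_5 -disks 0_0_4 0_0_5 0_0_6 0_0_7 0_0_8
# confirm the pool came online and review its tiers
naviseccli -h SPA_IP storagepool -list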

Now I will create some LUNs.  When creating LUNs for FILE, it is best to create them in sets of 10 and provision them as thick LUNs.  You can always do thin file systems on the FILE side later.  In this example, I want to make sure the default owner is set to Auto so that the LUNs are split evenly between SPA and SPB.  And of course, to take advantage of the new tiering policy, I have that set to “Start High then Auto-Tier”.
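
For reference, here is roughly what that step looks like from naviseccli.  Again, this is only a sketch: the pool name, LUN names, LUN numbers, and capacity are made up, and you should check the naviseccli lun -create help on your array for the flags that set the default SP owner and the tiering policy.

# create 10 thick (nonThin) 500 GB LUNs in the pool; names and LUN numbers are examples
for i in 1 2 3 4 5 6 7 8 9 10; do
  naviseccli -h SPA_IP lun -create -type nonThin -capacity 500 -sq gb -poolName "Pool 0" -name "file_lun_$i" -l $((100 + i))
done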

 

When this finishes, you’ll get 10 LUNs split between SPA and SPB, and we are ready to assign them to a host.  The storage group for VNX FILE is called “~filestorage”.  When adding your LUNs to this storage group, make sure you start with a Host LUN ID of 16 or greater.  If you set it to anything less, the LUN will not be detected on the rescan.  Speaking of rescans, once you have assigned the LUNs, select “Rescan Storage Systems” on the right-hand side of the Storage section of Unisphere.  Alternatively, you can run “server_devconfig server_2 -create -scsi -all” to rescan for disks, rerunning the command for each of your other Data Movers as well.
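
The same assignment and rescan can be done from the CLI.  The ALU numbers below are examples from my hypothetical LUN numbering; the important part is starting the HLU at 16.

# add each LUN to the ~filestorage storage group, starting at Host LUN ID 16
naviseccli -h SPA_IP storagegroup -addhlu -gname "~filestorage" -hlu 16 -alu 101
naviseccli -h SPA_IP storagegroup -addhlu -gname "~filestorage" -hlu 17 -alu 102
# then rescan from the Control Station, either per Data Mover...
server_devconfig server_2 -create -scsi -all
server_devconfig server_3 -create -scsi -all
# ...or for all Data Movers in one shot
nas_diskmark -mark -all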

 

Now that we have our new LUNs scanned into the FILE side of the VNX, let’s go see what we have.  You will notice that the new FILE pool shares the same name as the BLOCK pool, the drive type shows as “mixed”, and the tiering policy is specified under Advanced Services.  That’s pretty much all there is to it.  At this point you would go ahead and provision file systems as normal.
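
If you would rather verify this from the Control Station than from Unisphere, nas_disk and nas_pool will show you the same thing (the pool name here is just my example):

# list the newly discovered disk volumes
nas_disk -list
# list the FILE-side storage pools, including the one mapped from the block pool
nas_pool -list
# show the details, including member disks, for the mapped pool
nas_pool -info "Pool 0"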

 

I hope you have enjoyed this look at a new enhancement cooked into the latest VNX code.  Expect more posts on this as I continue the series.  As always, I love to receive feedback, so feel free to leave a comment below.

Get a sneak peek at new VNX features

Today marks the first full day of EMC World 2012.  While everyone is busy watching keynotes and checking out the hands-on labs, I thought I’d offer you a sneak peek at some new VNX features you can look forward to in the second half of 2012.

 

New RAID Levels for Storage Pools

The first thing I want to talk about is storage pools.  As you are well aware, when you add disks to a storage pool today, you need to use the same RAID level across all storage tiers in the pool.

[screenshot: pool creation with a single RAID level applied to every tier]

 

As you can see from the picture above, when creating a typical pool with a RAID 6 configuration, you must use RAID 6 for your Flash, your SAS, and your NL-SAS drives.  This means that you must use extra Flash drives just to fill out that tier of your pool.  What is changing in the future is a shift towards tier-specific RAID levels.

 

[screenshot: pool creation with a different RAID level for each tier]

 

As you can see in the picture above, you will now be able to have different RAID levels at different tiers in your pool.  By mixing a small number of Flash drives with a larger number of spinning disks, you can put the majority of your rarely read / archived data on your cheaper storage while still being able to afford Flash drives for your performance data.  This translates into a lower initial cost for your storage and offers a more affordable option for customers looking to start out.

 

What the SNAP!

The next big thing coming to VNX is enhanced block snapshots.  I think everyone is well aware of the limitations of snaps of LUNs on the VNX.  Well, I’m proud to announce that those are a thing of the past!  With the new functionality, the VNX increases the maximum number of writable snaps to 256 per LUN.  That also raises the limit to 32,768 per system.  Picture me in my best Boston accent when I say that is a “wicked” high number of snaps.

 

Also introduced with this enhancement is the ability to take snaps of a snap.  This opens up all sorts of new use cases, such as test and development environments as well as point-in-time backups.  This is functionality that has existed on the FILE side for quite some time now, and I’m glad to see it’s making its way to the LUN level as well.

 

Windows BranchCache Support for CIFS

With the release of Windows 7 and Windows Server 2008 R2, Microsoft added new functionality called BranchCache.  This functionality allows remote computers to cache files and serve them out locally to their peers, reducing bandwidth over the WAN.  The cached data can either be distributed across client PCs or held on a local server in the branch office.  Application performance improves by reducing the distance the data has to travel.
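
For a sense of the client side of this, BranchCache mode is chosen with netsh on the Windows 7 machines themselves.  A quick example of the distributed (peer-to-peer) flavor described above, run from an elevated prompt; this is the standard Windows-side setup, not anything VNX-specific:

# enable distributed-cache mode on a Windows 7 client
netsh branchcache set service mode=distributed
# verify the current BranchCache status
netsh branchcache show status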

 

In the next big VNX release, we will see support for this functionality added to CIFS shares.  For more information, please read this Microsoft TechNet article.

 

Well, that about does it for now.  Three big new features to look forward to in the second half of 2012.  Please feel free to ask questions in the comments section and I’ll try to answer them as best I can.