Lesser-known enhancements in the latest VNX code: Pool LUNs for FILE

You may remember my posts on the latest VNX OE code, which brought with it some highly publicized enhancements, but I want to take this time to cover some of the lesser-known ones.  In this post, I’m going to talk about provisioning BLOCK LUNs for the FILE side.

 

Historically, from DART 5.x and 6.x through VNX OE for FILE 7.0, if you wanted to provision LUNs to the FILE side, they had to come from a RAID group.  This meant that you couldn’t take advantage of major block enhancements like FAST VP tiering.  Starting with VNX OE for FILE 7.1, you can create LUNs from your pool and provision them to the FILE front end.

 

For those of you who are not familiar with this process, let me walk you through it.  We’ll start with a pool.  In this example, I created one large pool made up of Flash, SAS, & NL-SAS drives.


Now I will create some LUNs.  When creating LUNs for FILE, it is best to create them in sets of 10 and provision them as thick LUNs; you can always create thin file systems on the FILE side later.  In this example, I set the default owner to Auto so that the LUNs are split evenly between SPA & SPB.  And of course, to take advantage of the new tiering policy, I have it set to “High then auto-tier”.
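
For those who prefer the CLI over Unisphere, the same thick LUNs can also be carved out with naviseccli.  Treat the following as a sketch only: the SP address, pool name, LUN name, and size are placeholders, and the tiering-policy switches vary between releases, so confirm the exact flags with “naviseccli lun -create -help” on your system before running anything.

    # Create one 200 GB thick LUN from the pool; repeat (or script a loop) to build the set of 10
    naviseccli -h <SP_A_IP> lun -create -type nonThin -capacity 200 -sq gb \
        -poolName "Pool 0" -name "NAS_LUN_01"
    # The default owner and the "High then auto-tier" policy can also be set at creation time;
    # see "naviseccli lun -create -help" for the switches available in your release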

 

When this finishes, you’ll have 10 LUNs split between SPA & SPB, ready to be assigned to a host.  The storage group for VNX FILE is called “~filestorage”.  When adding your LUNs to this storage group, make sure you start with a Host LUN ID of 16 or greater; anything less will not be detected on the rescan.  Speaking of rescans, once you have assigned the LUNs, select “Rescan Storage Systems” on the right-hand side of the Storage section of Unisphere.  Alternatively, you can run “server_devconfig server_2 -create -scsi -all” to rescan for disks; you will then need to rerun the command for each of your other Data Movers as well.
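
Because the rescan has to be run once per Data Mover, a small loop from the Control Station saves a little typing.  This sketch assumes two Data Movers named server_2 and server_3; adjust the list to match your environment.

    # Rescan the newly presented LUNs on each Data Mover (run from the Control Station)
    for dm in server_2 server_3; do
        server_devconfig $dm -create -scsi -all
    done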

 

Now that we have our new LUNs scanned into the VNX FILE side of things, let’s go see what we have.  You will notice that the new FILE pool shares the same name as the BLOCK pool, the drive type shows as “mixed”, and the tiering policy is specified under Advanced Services.  That’s pretty much all there is to it.  At this point you would go ahead and provision file systems as normal.
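
If you would rather verify this from the Control Station instead of Unisphere, the NAS CLI shows the same information.  A minimal sketch; the pool name in the last command is a placeholder.

    # List the newly discovered disk volumes and the pools built on top of them
    nas_disk -list
    nas_pool -list
    # Show the details (members, size, default settings) of the new mapped pool
    nas_pool -info <pool_name>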

 

I hope you have enjoyed this look at a new enhancement cooked into the latest VNX code.  Expect more posts on this as I continue the series.  As always, I love to receive feedback, so feel free to leave a comment below.

Configuring VASA for use with a VNX

When VMware introduced vSphere 5 to the world, one of the enhancements was a new API for storage arrays that provides vSphere with information about the configuration and performance of your storage array.  For more information on VASA, please see this article from The Virtualization Practice.  VASA on a VNX (and other EMC arrays) was historically configured using an SMI-S provider.  That older configuration method has been covered very well by EMC vSpecialist Craig Stewart and can be found here.

 

Starting with VNX OE for FILE 7.1 and VNX OE for BLOCK 05.32, the VNX has native VASA support.  This eliminates the need for the SMI-S provider and allows you to point vSphere directly at the Control Station and SP.  It really is a one-step implementation, as I will show you below.  There is only one caveat: VASA for BLOCK and for FILE are configured separately.  If you are using FC, FCoE, or iSCSI connections, you will want to follow the BLOCK example, and if you are using NFS, you will want to follow the FILE example.

 

You will want to start in vSphere by going to Home > Administration > Storage Providers.  From there you would click on “add…” to configure your connection.

 

VNX OE for FILE 7.1 VASA configuration example

You will start by naming this connection.  I chose VNX FILE to make it easy to distinguish between block and file connections.  You will then use the following URL: https://<ip.or.dns.of.control.station>:5989/vasa/services/vasaService.  The username and password must be an account local to the Control Station (such as nasadmin or root); global accounts from the storage domain will not work here.  When all is said and done, you should have something like the photo below:

[Screenshot: VASA provider settings for VNX FILE]

You will probably be prompted to verify the SHA fingerprint, so just click Yes, and soon you’ll see your new provider listed with the following information:

[Screenshot: the registered VNX FILE provider]

 

VNX OE for BLOCK 05.32 VASA configuration example

Just like the VNX OE for FILE example, you will start by naming the connection.  This time the URL points to the SP and is as follows: https://<ip.or.dns.of.SP>/vasa/services/vasaService.  Note the lack of a port specification; HTTPS uses port 443 by default.  For the credentials, you will want to use a storage domain account (such as sysadmin).  If you configured it correctly, it should look something like this:

[Screenshot: VASA provider settings for VNX BLOCK]

 

Since I have a very basic array configuration in the lab, I see provider information like this:

[Screenshot: the registered VNX BLOCK provider]
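
If a provider refuses to register, a quick reachability check of the two endpoints can rule out DNS or firewall problems before you start digging in vSphere.  This is only a troubleshooting sketch using curl from any management host; -k skips certificate validation since the array presents a self-signed certificate, and the addresses are placeholders.

    # FILE-side VASA endpoint on the Control Station (note the explicit port 5989)
    curl -k https://<ip.or.dns.of.control.station>:5989/vasa/services/vasaService
    # BLOCK-side VASA endpoint on the SP (default HTTPS port 443)
    curl -k https://<ip.or.dns.of.SP>/vasa/services/vasaService

Any HTTP-level response, even an error page, means the service is listening; a timeout points at the network path rather than at VASA itself.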

 

After you have successfully configured your providers, you can go and set up your storage profiles.  Go to Home > Management > VM Storage Profiles and add a new profile.  From there you can select from a multitude of options to pick the one that best matches the LUN you are using for storage.

[Screenshot: VM Storage Profile options]

It really is that simple!  For more information on VASA on the VNX, read the Virtualization for EMC® VNX Release Notes (EMC Support credentials required).

Introducing VNX FILE OE 7.1 & BLOCK OE 05.32

It’s here! It’s finally here!  Today marks the general availability of the first release in a new line of VNX code.  Many of you may remember my preview posts on what can be found in this latest version (found here and here).  Now you can take it for a spin and try out these new features and changes.

 

As of today, you can browse to the VNX Product Support Page or use the Unisphere Service Manager (USM) tool (which has been upgraded to version 1.2.0.1.0554) to download VNX FILE OE 7.1.47.5 and VNX BLOCK OE 05.32.000.5.006.  Again, to highlight some of the changes you will see:

  • New “Flash First” data aging policy for tiering
  • Mixed RAID levels for storage pools
  • Enhanced block snapshots
  • Windows BranchCache support for CIFS
  • Simplified Unisphere LDAP configuration (see my note here)
  • FLR upgrades and enhancements

There are more changes under the hood than I could possibly list here, but a full set of release notes and documentation can be found on the VNX Product Support Page as well as the GA announcement that I posted on ECN.

 

Well, what are you waiting for?  Go out and upgrade (remember that this is an out-of-family upgrade), start enjoying the latest and greatest in unified storage, and let me know what you think of it in the comments below!