Lesser-known enhancements in the latest VNX code: Pool LUNs for FILE

You may remember my posts on the latest VNX OE code, which brought with it some highly publicized enhancements, but I wanted to take this time to cover some of the lesser-known ones.  In this post, I’m going to talk about provisioning BLOCK LUNs for the FILE side.


Historically, in DART 5.x, 6.x, and even VNX OE for FILE 7.0, if you wanted to provision LUNs to the FILE side, they had to come from a RAID group.  This meant that you couldn’t take advantage of the major block enhancements like auto-tiering with FAST VP.  Well, starting with VNX OE for FILE 7.1, you can create LUNs from a pool and provision them to the FILE front end.


For those of you who are not familiar with this process, let me walk you through it.  We’ll start with a pool.  In this example, I created one large pool made up of FLASH, SAS, and NL-SAS drives.

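If you manage the array from the CLI, a rough equivalent of this step is sketched below using naviseccli.  The SP address, disk IDs, pool name, and RAID type are all placeholders, and the exact storagepool flags vary between VNX OE releases, so check the CLI reference for yours.

    # Create a mixed pool from FLASH, SAS, and NL-SAS disks
    # (disk IDs are bus_enclosure_disk placeholders; credentials omitted).
    naviseccli -h SPA_ADDRESS storagepool -create -name "FilePool01" \
        -rtype r_5 \
        -disks 0_0_4 0_0_5 0_0_6 0_0_7 0_0_8 \
               1_0_4 1_0_5 1_0_6 1_0_7 1_0_8

    # Verify the pool came up as expected:
    naviseccli -h SPA_ADDRESS storagepool -list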

Now I will create some LUNs.  When creating LUNs for FILE, it is best to create them in sets of 10 and provision them as thick LUNs; you can always create thin file systems on the FILE side later.  In this example, I want to make sure to set the default owner to Auto so that I get an even split of LUNs across SPA and SPB.  And of course, to take advantage of the new tiering policy, I have it set to “Start High then Auto-Tier”.
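
The same step from the CLI might look like the sketch below; the pool name and the 200 GB size are placeholders, and the -initialTier/-tieringPolicy values should be checked against your release’s naviseccli reference (“Start High then Auto-Tier” corresponds to initial placement on the highest tier combined with the auto-tier policy).

    # Create 10 thick LUNs, alternating ownership between SPA and SPB
    # so each SP ends up with 5.
    for i in 0 1 2 3 4 5 6 7 8 9; do
        if [ $((i % 2)) -eq 0 ]; then SP=a; else SP=b; fi
        naviseccli -h SPA_ADDRESS lun -create -type NonThin \
            -capacity 200 -sq gb -poolName "FilePool01" \
            -name "file_lun_$i" -sp $SP \
            -initialTier highestAvailable -tieringPolicy autoTier
    done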


When this finishes, you’ll have 10 LUNs split between SPA and SPB, ready to assign to a host.  The storage group for VNX FILE is called “~filestorage”.  When adding your LUNs to this storage group, make sure you start with a Host LUN ID of 16 or greater; anything less will not be detected on the rescan.  Speaking of rescans, once you have assigned the LUNs, select “Rescan Storage Systems” on the right-hand side of the Storage section of Unisphere.  Alternatively, you can run “server_devconfig server_2 -create -scsi -all” to rescan for disks, and then rerun the command for each of your other datamovers.
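
A hedged CLI sketch of the same assignment and rescan follows; the ALU numbers are placeholders for the LUN IDs created earlier, and the HLUs deliberately start at 16.

    # Present the 10 LUNs to the FILE side, HLUs 16-25.
    ALU=0
    for HLU in 16 17 18 19 20 21 22 23 24 25; do
        naviseccli -h SPA_ADDRESS storagegroup -addhlu \
            -gname "~filestorage" -hlu $HLU -alu $ALU
        ALU=$((ALU + 1))
    done

    # Rescan from the Control Station, either per datamover ...
    server_devconfig server_2 -create -scsi -all
    server_devconfig server_3 -create -scsi -all
    # ... or on all datamovers at once (roughly what the Unisphere
    # "Rescan Storage Systems" link does):
    nas_diskmark -mark -all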


Now that we have our new LUNs scanned into the VNX FILE side of things, let’s go see what we have.  You will notice that the new FILE pool shares the same name as the BLOCK pool, the drive type shows as “mixed”, and the tiering policy is listed under Advanced Services.  That’s pretty much all there is to it.  At this point you would go ahead and provision file systems as normal.
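
From the Control Station, a quick check and an example file system creation might look like the sketch below.  The pool name, file system name, and sizes are illustrative, and the auto-extend/thin options should be confirmed against the nas_fs man page for your release.

    # The mapped FILE pool and the new dvols should show up here:
    nas_pool -list
    nas_pool -info "FilePool01"
    nas_disk -list

    # Provision a thin file system as normal:
    nas_fs -name fs01 -create size=100G pool="FilePool01" \
        -auto_extend yes -thin yes -max_size 1T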


I hope you have enjoyed this look at a new enhancement cooked into the latest VNX code.  Expect more posts on this as I continue the series.  As always, I love to receive feedback, so feel free to leave a comment below.

  • dynamox

    What LUN size restrictions exist these days, if any (back in the Celerra days they had to be < 2TB)?

    • According to the 7.1.55 release notes (page 41), the 2TB max has been lifted; gateways support LUNs up to 16TB.  File systems are still limited to 16TB in size, and you can only have a maximum of 200TB – 256TB in total (depending on the model of VNX you have).

      • dynamox

        Still 4096 file systems per datamover? The file system size and count limits are becoming a struggle for some of my customers.  Everybody wants SnapSure snapshots, so they are getting close to the 4096 limit.

        • Yes, the max file system count is still 4096 on a system, and 2048 per datamover.

  • Hey Sean,

    I view this as a huge win for the VNX. Now we can truly have “one pool to rule them all,” and we no longer have to waste disks on a dedicated FILE RAID group. I’m curious, though: what is the downside, if any, to provisioning FILE storage this way?

    Will

    • The downside that I see would be on the performance side: the FILE side now shares back-end resources with any other block-connected host that has LUNs from the same pool.

      • That part seems obvious to me, and there are also risks associated with using one larger pool versus several smaller pools. I guess what I am really asking is: are there any technical limitations I run into by going with a large pool and provisioning block and file storage out of the same pool? Do I lose any functionality by doing so?

        • There isn’t really any loss of functionality in going with pool LUNs instead of RAID groups that I know of.

  • Drew

    Excellent job! Thanks for sharing this!!

  • Joe Tarin

    I see you recommend creating LUNs for a file pool in increments of 10. Can you provide an explanation for this? I have read tons of EMC docs and can’t really get a straight answer on how many LUNs to create in a pool. The last conclusion I came to was to use a number that is a multiple of 5 and try to come close to 1 LUN per disk. For example, if I have a 16-disk pool of NL-SAS disks, I would make 15 LUNs in the pool. I chose a multiple of 5 since AVM will try to stripe across 5 LUNs first, then 4, then 3. What are your thoughts on this method?

    • So the reason 10 is recommended is in part what you said, but to build on it a bit: while a multiple of 5 satisfies AVM striping, you want two sets of 5 so that you also get even distribution across both SPA and SPB, as sketched below.
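
      Purely as an illustration (the LUN names and layout are hypothetical), here is how 10 LUNs break into two AVM stripe sets of 5, one owned by each SP:

          # Print a hypothetical 10-LUN layout: evens on SPA, odds on SPB.
          for i in $(seq 0 9); do
              if [ $((i % 2)) -eq 0 ]; then SP=SPA; SET=1; else SP=SPB; SET=2; fi
              echo "file_lun_$i -> $SP (AVM stripe set $SET of 5 LUNs)"
          done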

  • Great info Sean, thanks! Do you happen to know if this will work with a “File-only” VNX? I’ve never seen the GUI on one, but I assume you can still create a storage pool and pass the LUNs through to the datamovers just like you would normally allocate LUNs from a RAID group? I’m not looking to use FAST VP, but I do want to use storage pools so I get the benefit of rebalancing when I add drives, etc.

  • Clayton

    This is fantastic. You have no idea how long I’ve been looking for an explanation on how exactly to do this. Thank you for sharing!

  • Harmont

    What is the point of creating multiple LUNs behind a File OE “stripe volume” when using pool LUNs?

    Creating several classic LUNs, one on each RG, and striping between them makes sense. But pool LUNs are created on top of a storage pool, and AVM has no visibility into it to balance across the underlying RGs.

    The only benefit I see is balancing between SPs, for which 2 LUNs should be enough. But the best practices doc recommends creating LUNs in multiples of 10, which makes it even more confusing.

    My guess is that it has something to do with storage queues, since the datamover is a front end to the storage processor. But it’s just a guess.

  • Jim Bringham

    With the advent of larger 2-3 TB drives, would you recommend larger LUNs? We have a 10 TB file pool using 2 TB NL-SAS and 300 GB FC drives. We will need 200 LUNs if we keep them at 50 GB each.

  • Tom Fenton

    I over-provisioned my pool for FILE. Can I resize it? Right now ~filestorage has 2TB (4 x 500GB thick LUNs) and is being consumed by thin-enabled file systems. How can I return 1TB of this capacity to the storage pool?