Breathing new life into the home lab – Part 1: Flash Storage

It’s been a few years since I’ve put an investment into the home lab.  I originally built it to teach myself enough to pass the VCP4 and my VCP5 (and I’ll use it for my VCP6 too).  But now I want to expand and learn more about VDI, the vRealize suite, as well as experiment with other technologies.  To do that, some upgrades will be needed, and the first area to start with is storage.  Spinning disk is still the cheapest way to get bulk storage, but for a home lab I don’t need multiple TB of space when all of my VMs are thin provisioned.  To get the speed I want out of spinning disk, I’d have to stitch together way more hard drives than I have space for.  This is where flash can really shine: you only need a few disks to get a huge speed boost, so the cost is not astronomical.  By chance, I recently received a few 1TB Micron M600 SSDs, and these things are amazing.  After taking one for my laptop, the rest were loaded into a Synology 1813+.  So what do these SSDs bring to the table?

Type of test                      Performance      IOPS
Sequential Read (Q=32, T=1)       560.129 MB/s
Sequential Write (Q=32, T=1)      511.183 MB/s
Random Read 4KiB (Q=32, T=1)      357.966 MB/s     87394.0
Random Write 4KiB (Q=32, T=1)     365.970 MB/s     89348.1
Sequential Read (Q=1, T=1)        489.114 MB/s
Sequential Write (Q=1, T=1)       473.808 MB/s
Random Read 4KiB (Q=1, T=1)       22.846 MB/s      5577.6
Random Write 4KiB (Q=1, T=1)      60.840 MB/s      14853.5

Wow, that’s fast!  Good job, Micron!  The results above were taken using CrystalDiskMark on my Windows laptop and show the most I could get out of a single drive that was direct attached.
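As a quick sanity check, the IOPS column is just the 4KiB throughput divided by the block size.  A minimal sketch of that arithmetic, using the figures from the table above:

```python
# Sanity-check the 4KiB IOPS figures: IOPS = throughput / block size.
KIB = 1024
MB = 1000 * 1000  # CrystalDiskMark reports decimal megabytes per second

def iops(throughput_mb_s: float, block_bytes: int = 4 * KIB) -> float:
    """Convert a MB/s figure into I/O operations per second."""
    return throughput_mb_s * MB / block_bytes

print(iops(357.966))  # ~87394 random reads/s, matching the table
print(iops(365.970))  # ~89348 random writes/s
```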

To make the most of this storage for a lab, I think it’s best to put it into the NAS and leverage it as shared storage.  The Synology is configured with a 4 x 1 Gb LACP connection, which should be more than enough for a home lab.  The question is how to present the storage: NFS or iSCSI?  RAID 5 or RAID 10?  Well, let’s try them all!  I’ll create a datastore in each configuration and test it with one Windows VM running CrystalDiskMark, just like I did on my laptop, and see what we get.
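Since the results below compare RAID 5 against RAID 10, it’s worth recalling what each level trades away.  Here is the textbook capacity and small-write-penalty math (a sketch only; the 4 x 1TB drive count is an assumption for illustration):

```python
# RAID trade-offs in one place: usable capacity vs. the classic
# small-write penalty (back-end I/Os per random host write).
def raid5(n: int, disk_tb: float):
    # Parity costs one disk; each small write = read data + read parity
    # + write data + write parity = 4 back-end I/Os.
    return {"usable_tb": (n - 1) * disk_tb, "write_penalty": 4}

def raid10(n: int, disk_tb: float):
    # Mirroring halves capacity; each write just hits both mirror copies.
    return {"usable_tb": n / 2 * disk_tb, "write_penalty": 2}

print(raid5(4, 1.0))   # {'usable_tb': 3.0, 'write_penalty': 4}
print(raid10(4, 1.0))  # {'usable_tb': 2.0, 'write_penalty': 2}
```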

iSCSI results (IOPS in parentheses):

Type of test      iSCSI RAID 5            iSCSI RAID 10           iSCSI on FS (RAID 10)
SR (Q=32)         113.758 MB/s            117.027 MB/s            117.316 MB/s
SW (Q=32)         82.531 MB/s             117.046 MB/s            115.717 MB/s
RR 4KiB (Q=32)    52.542 MB/s (12827.6)   52.154 MB/s (12732.9)   38.101 MB/s (9302.0)
RW 4KiB (Q=32)    35.035 MB/s (8553.5)    49.571 MB/s (12102.3)   66.477 MB/s (16229.7)
SR (Q=1)          86.619 MB/s             94.588 MB/s             101.082 MB/s
SW (Q=1)          75.291 MB/s             105.702 MB/s            102.972 MB/s
RR 4KiB (Q=1)     8.691 MB/s (2121.8)     8.276 MB/s (2020.5)     10.676 MB/s (2606.4)
RW 4KiB (Q=1)     10.006 MB/s (2442.9)    9.594 MB/s (2342.3)     11.077 MB/s (2704.3)

NFS results (IOPS in parentheses):

Type of test      NFS RAID 5              NFS RAID 10
SR (Q=32)         114.898 MB/s            117.439 MB/s
SW (Q=32)         96.743 MB/s             117.007 MB/s
RR 4KiB (Q=32)    56.588 MB/s (13815.4)   66.533 MB/s (16243.4)
RW 4KiB (Q=32)    44.319 MB/s (10820.1)   57.590 MB/s (14060.1)
SR (Q=1)          106.323 MB/s            109.257 MB/s
SW (Q=1)          81.581 MB/s             106.127 MB/s
RR 4KiB (Q=1)     12.513 MB/s (3054.9)    14.132 MB/s (3450.2)
RW 4KiB (Q=1)     9.270 MB/s (2263.2)     10.571 MB/s (2580.8)

(SR/SW = sequential read/write, RR/RW = random read/write; all tests at T=1)
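Notice how every sequential number clusters just under 118 MB/s regardless of protocol or RAID level.  That ceiling is the network, not the SSDs.  A quick back-of-the-envelope check (the ~5% protocol overhead figure is an assumption):

```python
# A single CrystalDiskMark stream rides one TCP connection, and LACP
# hashes a connection onto one physical link, so the 4 x 1 GbE bond
# still behaves like a single 1 Gb pipe for this one-VM test.
line_rate_mb_s = 1_000_000_000 / 8 / 1_000_000   # 125 MB/s raw
overhead = 0.05                                   # assumed Ethernet/IP/TCP framing cost
print(f"~{line_rate_mb_s * (1 - overhead):.0f} MB/s usable")  # ~119 MB/s
```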


It’s clear from these test results that I am maxing out the 1 Gb connection on the sequential transfers (especially when the queue depth is increased).  I was a bit surprised by the size of the performance gains of RAID 10 over RAID 5, and that NFS ended up being faster than iSCSI (probably because the iSCSI is all software based).  Clearly this will work well for a single host, but the real performance test will come when multiple hosts hit the NAS at once.  So that is where I go next: now that I’ve settled on a storage configuration, I can start planning hosts for this home lab.  Let me know your thoughts in the comments.

Peeling back the layers of XtremIO: What is an X-Brick?

Many moons ago, on a stage not too far from where I work, EMC announced the future of flash and the creation of the Xtrem brand / business unit.  Today, EMC announces the latest product in the brand: XtremIO.  This all-flash storage monster changes the way we think about storage, and for the better.  Gone is the need for tiering and different types of RAID configurations.  Rebuilds are measured in minutes, not hours.  I present to you, the X-Brick!


What’s in the X-Brick?

[Image: X-Brick hardware breakdown]

So the picture above shows the major breakdown of an X-Brick.  Behind the covers you have 2 controllers, 2 battery backup units, and a 25-drive DAE that accepts 2.5" drives (does that look familiar?).


[Image: X-Brick rear view]

In the back you can see there are 2 of everything: 2 power supplies, 2 SAS controllers, 2 iSCSI and 2 Fibre Channel ports, and 2 InfiniBand ports for clustering.  Just like all other EMC products, there is no single point of failure in this design (and I do like how everything gets a UPS instead of just the DAE).


[Image: X-Brick storage controller internals]


Inside each X-Brick are dual storage processors (these are external 1U blades, unlike the SPs inside a VNX), each with dual 8-core CPUs and 256GB of RAM.  Each one has a SAS 2.0 connection directly to the 25 eMLC SSDs as well as InfiniBand connectivity to the other nodes in the cluster (more on this soon).  On the front end, you have 10 Gb iSCSI as well as 8 Gb Fibre Channel.  This impressive platform sets the stage for even more impressive software.


Let’s talk about clusters

At launch, the XtremIO platform supports up to 4 X-Bricks (in theory, I don’t see why more can’t be added, and maybe they will be in the future).  Each X-Brick is a fixed size of around 10TB of raw storage with around 7.5TB of usable space (though I expect that will be increased in the near future).  In a 50/50 read/write performance test, each X-Brick topped out at about 150,000 IOPS (that number increased to around 250,000 with 100% reads).  And when you max out your cluster with 4 X-Bricks, both your capacity and IOPS scale out, giving you 40TB and around 600,000 real-world IOPS (topping out at around 1,000,000 if you’re doing just reads!).
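The cluster figures are just linear multiples of a single brick, which is the whole point of the scale-out design.  A trivial sketch of that arithmetic, using the numbers above:

```python
# Scale-out math: cluster totals are per-brick numbers times brick count.
per_brick = {"raw_tb": 10, "usable_tb": 7.5, "mixed_iops": 150_000, "read_iops": 250_000}
cluster = {k: v * 4 for k, v in per_brick.items()}  # 4 X-Bricks
print(cluster)  # 40TB raw, 30TB usable, 600k mixed IOPS, 1M read-only IOPS
```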


[Image: XtremIO inline deduplication data flow]

The key to achieving all of this is the software layer.  When data comes in, it is broken down into 4K chunks.  Each chunk is then hashed using the SHA-1 algorithm and assigned a unique metadata fingerprint.  The chunks are spread out across all the storage processors in the cluster to distribute the data for faster throughput, and the logical block address, fingerprint, and SSD offset are recorded in the metadata.  When new data comes in, its fingerprints are checked against the existing database to see if there is a match.  If there is, the metadata is recorded but the write is not necessary, which extends the life of the SSDs and amounts to inline deduplication.  Now, 256GB is not a lot of RAM for storing metadata, and when it fills up the controller destages metadata to the SSDs.  This is where the cluster really starts to shine.
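To make the fingerprinting mechanics concrete, here is a minimal Python sketch of the general technique (content-addressed 4K chunks).  The in-memory dictionaries stand in for the real metadata tables and SSD layout; this is an illustration, not XtremIO’s actual implementation:

```python
import hashlib

CHUNK = 4096    # XtremIO-style 4K chunks

store = {}      # fingerprint -> chunk bytes (stands in for SSD offsets)
metadata = {}   # logical block address -> fingerprint

def write(lba: int, data: bytes) -> None:
    """Fingerprint each 4K chunk; only store chunks never seen before."""
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK].ljust(CHUNK, b"\x00")
        fp = hashlib.sha1(chunk).hexdigest()
        if fp not in store:              # new content: a physical write is needed
            store[fp] = chunk
        metadata[lba + i // CHUNK] = fp  # duplicate: metadata-only update

write(0, b"A" * 8192)
write(2, b"A" * 8192)   # same content: no new chunks are stored
print(len(metadata), "logical blocks,", len(store), "unique chunks")  # 4, 1
```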

[Image: metadata distribution across the XtremIO cluster]

By utilizing the RDMA fabric between the X-Bricks, the metadata calculation can be distributed across the entire cluster for even load balancing.  This decouples the user data from the metadata, so they don’t have to live on the same X-Brick, and lets you recall any of the data in the same fashion.  The in-memory metadata of a controller is also mirrored to another controller in the cluster in case of a controller failure.  By being able to utilize multiple X-Bricks at the same time, you can scale out all the processing in an active/active environment and increase the total throughput of the cluster as a whole.
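One way to picture how metadata ownership spreads evenly: since SHA-1 fingerprints are effectively uniform, each controller can own a fixed slice of the hash space.  A hedged sketch of that idea (the modulo placement rule and controller names are my illustration, not the documented XtremIO algorithm):

```python
import hashlib
from collections import Counter

controllers = ["X1-SC1", "X1-SC2", "X2-SC1", "X2-SC2"]  # e.g. 2 bricks, 4 controllers

def owner(chunk: bytes) -> str:
    """Map a chunk to the controller owning its metadata, via its fingerprint."""
    fp = hashlib.sha1(chunk).digest()
    return controllers[fp[0] % len(controllers)]  # an even slice of hash space

# SHA-1 output is uniform, so metadata ownership balances across controllers:
sample = Counter(owner(bytes([i, j])) for i in range(64) for j in range(64))
print(sample)  # roughly 1024 chunks land on each controller
```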


So what does it look like?

Well, first off, it’s not Unisphere, but its own interface (the XMS management system), launched from a web server running on a controller, along with a robust CLI.  This video demonstration gives you a great overview.

[Video: XtremIO v2.2 GUI Demonstration]

Final Thoughts

All in all, for a first-round product, I think this is a great offering.  I’d like to see it scale higher, with more storage and more X-Bricks in a cluster, as I don’t think they have hit the limits of the architecture.  Be sure to watch the launch event.  And here is a sneak peek at the cool X-Brick coffee table (which will one day end up in my living room if I can help it)!

[Image: the X-Brick coffee table at the EMC XtremIO launch]