What’s New in vSAN 7.0 U1?

Earlier this year, VMware brought a dramatic change to its virtualization platform with vSphere 7.  Now it’s time to bring in bug fixes as well as usher in new features with the U1 update.  This post will focus on changes coming to the vSAN software, which delivers enhancements in four key areas:

  • A developer-ready architecture
  • Increased scalability
  • Simplified day-to-day operations
  • Extended file services

Developer Ready Architecture

Building on an already powerful Kubernetes offering with Tanzu, VMware has enhanced vSAN’s ability to work with containers and stateful cloud native applications through the use of the Data Persistence platform (DPp).  This service provides the framework required for integrating stateful apps into vSAN.  By utilizing this framework, applications can assume the resilience responsibilities themselves, cutting down on the overhead required by vSAN and allowing the applications to consume storage on par with VMFS and RDMs.

Increased Scalability

One of the biggest issues I’ve had with vSAN storage was that it was dedicated to its own compute cluster.  This meant that everything had to be carefully right-sized.  Otherwise, you had a lot of wasted space or had to add extra compute just to add storage.  This is where HCI Mesh comes into play.  Now you can share spare storage from one vSAN cluster with another cluster and make efficient use of the space.  Other enhancements have come in the form of customization settings for deduplication and compression (DD&C).  Now you can turn off the deduplication service in clusters that run workloads that wouldn’t benefit from it.  This frees up compute cycles that would otherwise be wasted.  Running compression only also has the benefit of not taking out the entire disk group if a single capacity disk fails.

Capacity management has also improved, with more details being displayed about cluster consumption.  Now users can get a more accurate depiction of just what is using up the space and can plan accordingly.  Increased efficiency also came to large clusters, where the amount of reserved overhead space actually shrinks as more nodes are added.  On the flip side, for smaller deployments, a single vSAN witness can now handle up to 64 2-node clusters (instead of requiring one witness per cluster).  This is great news for companies adopting vSAN for both their core datacenters and smaller remote sites.  And finally, VMware brings an overall performance increase to vSAN services, especially those utilizing erasure coding for increased capacity efficiency over a mirrored storage policy.

Simplify Day to Day Operations

One of the other big issues with vSAN has been maintenance: putting a host into maintenance mode meant either effectively faulting its data to get the host out right away or waiting a while for the data to be evacuated.  Now enhancements have been put in place to finish the last writes quicker and get hosts in and out of maintenance mode even faster.  The in-memory metadata tables are now written down to cache in a “save and restore” fashion, allowing hosts to reboot and come back into service faster, which makes for a quicker rolling-reboot upgrade scenario.

Extend File Services

The biggest, and probably best, enhancement coming to file services is support for SMB.  Especially in smaller sites that need file services, a single vSAN cluster can now replace the specialized storage services that normally required a stand-alone array.  With support for both NFS and SMB, multiprotocol file sharing is now on offer, and these shares can be presented back to applications.

Final Thoughts

These are all welcome changes and enhancements to vSAN, and they address a lot of things users have been bringing up for a while now.  Hopefully, with the release of U1, users will start upgrading from 6.7 sooner rather than later.

What you need to know about vSAN 7.0

As with my previous post, I wanted to take a moment to focus on some coming changes to vSAN 7 (something I work with on an almost daily basis).  Now, most of you are probably aware of vSAN, and hopefully a good number of you are using it for some of your workloads.  VMware announced today that it plans to bring some enhancements to bolster the offering and cement it as a product ready to handle the workloads of the future.

Integrated File Services

One of the early features of vSAN was support for iSCSI connections to VMs or non-ESXi hosts, mostly used for workloads that still needed a block-based storage device.  Well now, VMware is implementing File Services into the mix.  Before you throw away your NAS device, though, this is just NFS support (sorry CIFS users, you’ll have to wait until next time).  This support allows vSAN to be better suited for cloud native workloads and those that need a file-based persistent volume to be shared with VMs.

2-node and stretched clusters

Stretched clusters are gaining popularity as an alternative way to run active-active sites and disaster recovery with a low RTO.  A couple of key enhancements are coming that will definitely help.  First, there are going to be some enhancements to DRS in the event of a failover and recovery.  If the primary site comes back online, DRS won’t move a VM back until the resync is done, reducing the strain on the inter-site link (ISL) from having to pull data from the other side.  Second, the “replace witness” command will start repairing things immediately.  Third, and probably the most interesting feature: in the event that you run out of space in the secondary site, the system will allow the VM to keep running on the primary (with an alert) and will resync once space is added.

Management

VMware has also gone ahead and improved the reporting and management features of vSAN.  VM capacity reporting is now consistent across both the UI and the APIs, and it takes into account things like thin provisioning, swap, and namespace objects as well.  You can also easily view how much memory vSAN is consuming (especially important for those of you with low-memory hosts), and it is now easy to see objects created by vSphere Replication.

Hardware and Usage Enhancements

Lastly, let’s take a moment here to talk about some speeds-and-feeds enhancements.  vSAN now supports 32TB drives (if ever one exists in a cost-effective version), which also increases the max storage to 1PB in logical capacity.  One of the biggest new enhancements coming with vSAN 7 and vSphere 7 is that NVMe gains hot-plug support.  What this means is it’s no longer a requirement to shut down the host to replace an NVMe drive (something I’ve been waiting over a year for since we started going mainstream with NVMe drives in VxRail).

The last big change is actually for a very specific workload: those sharing a disk between VMs (Oracle RAC, for example) no longer have to have that disk thick provisioned.  One thing that wasn’t shared with me, but may come up later, is the cache size.  In vSAN 6, the cache size is limited to 600GB (even if the disk is larger).  I’ve heard nothing on whether this changes, but I will update this post if it does at launch.

vSphere 7.0 is coming, are you ready?

It seems like just yesterday vSphere 6.7 was dropping (the third installment in the vSphere 6 series).  Like a good book turned into a movie, it seems like even the final release was split into multiple parts.  Today starts a new adventure, and with that a major change to vCenter and ESXi.  Here I’m going to highlight just a few of the big changes coming.

vCenter Server Profiles

I know what you’re thinking … “OMG, Host Profiles is coming to vCenter, why would I want this nightmare?”  I assure you, it’s not like that.  The idea behind this is for those of you who have multiple environments and require multiple vCenters.  We’ve all been there and know just how complicated it can be to fine-tune all the settings to meet security and integration needs.  Now you can do all that busy work on your first server and just export that configuration to other vCenters, standardizing your implementation across the board.  There is even version control, so you can revert back to a previous known good if you mess something up (but of course you wouldn’t do that because you’re an expert!).  For those using automation platforms (Puppet, Chef, Ansible, etc.) there is a range of APIs (4, just 4) that allow you to control this functionality, along with an exportable JSON configuration.  This API even has the built-in ability to check whether your changes are valid and will let you know which settings won’t work before you deploy.  While those in the SMB market may not need this functionality, those in the enterprise space will welcome it, I’m sure.
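To give a feel for how this could be driven from an automation platform, here’s a small Python sketch against the vSphere 7 Appliance REST API.  The endpoint path and session header follow my reading of the infraprofile API documentation, and the vCenter address is a made-up placeholder; treat this as a starting point to verify against your own build, not a tested tool:

```python
import json
import urllib.request

def profile_export_request(vc_host, session_token):
    """Build the HTTP request that exports the vCenter configuration profile.

    Endpoint and header names follow the vSphere 7 Appliance REST API
    (infraprofile configs with the 'export' action); confirm against your build.
    """
    url = f"https://{vc_host}/api/appliance/infraprofile/configs?action=export"
    return urllib.request.Request(
        url,
        data=json.dumps({}).encode(),  # an empty spec asks for everything
        headers={
            "vmware-api-session-id": session_token,
            "Content-Type": "application/json",
        },
        method="POST",
    )

def export_profile(vc_host, session_token):
    """Perform the export and return the profile JSON as text.

    Note: lab vCenters often have self-signed certificates, so you may
    need to supply an SSL context that trusts (or skips verifying) them.
    """
    req = profile_export_request(vc_host, session_token)
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```

The session token itself comes from a POST to the appliance’s session endpoint with basic auth; once you have the exported JSON, the same configs endpoint exposes validate and import actions to push it to your other vCenters.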

vCenter Server Scalability

There have been a few improvements to vCenter Server around its ability to scale up and out.  First and foremost, as expected with every major release, the maximum number of hosts and VMs increases to 2,500 and 30,000 respectively.  While we’re still limited to 15 vCenters in linked mode, the number of hosts that can be managed in that topology has increased dramatically.  These will make great VCP test questions (they still ask for maximums on the exam, right?).

Speaking of SSO, the CLI tool (cmsso-util) is now included for all your easy domain repointing and unregistering needs.

Content libraries are being improved and are now considered the go-to for template deployments.  One of the new features is a version control system for templates, so you can roll back and deploy an earlier version if you need to.  It’s a simple check-out/check-in system.

Improved Performance

Several enhancements were made to the performance systems in a cluster.  First, DRS now runs every minute instead of every 5 to get a better understanding of the workloads in an environment.  Also gone is the bubble level; instead, a percentage score shows how optimized you are.  A lower score doesn’t necessarily mean a VM isn’t running properly, just that there are improvements to be gained.  The other enhancement is around the concept of scalable shares.  This better aligns the amount of resource entitlement a VM can get as determined by the resource pool it is in.  Now things dynamically adjust based on the number of VMs instead of a fixed share amount being granted.  No longer can a VM marked as Normal be granted more shares than a higher-level VM.
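To make that last point concrete, here’s a toy Python model of the idea.  The 500/1000/2000 per-vCPU values mirror the familiar Low/Normal/High CPU share defaults, but the pool sizes and VM counts are made up, and this is a simplification of the concept, not vSphere’s actual scheduler math:

```python
# Default CPU shares per vCPU for each VM share level (Low/Normal/High)
SHARES_PER_VCPU = {"low": 500, "normal": 1000, "high": 2000}

def scalable_pool_shares(vms):
    """Pool total under scalable shares: it grows with the VMs inside it.

    vms is a list of (share_level, vcpu_count) tuples.
    """
    return sum(SHARES_PER_VCPU[level] * vcpus for level, vcpus in vms)

def per_vm_slice(pool_shares, sibling_shares, vm_count):
    """Fraction of the contended resource each VM in the pool gets,
    assuming resources split by share ratio, then evenly per VM."""
    return pool_shares / (pool_shares + sibling_shares) / vm_count

# 20 High-shares VMs crowd one pool; 2 Normal-shares VMs sit in another.
high_vms = [("high", 1)] * 20
normal_vms = [("normal", 1)] * 2

# Old behavior: the pools hold static share values regardless of VM count.
fixed_high, fixed_normal = 8000, 4000
old_high_vm = per_vm_slice(fixed_high, fixed_normal, len(high_vms))      # diluted
old_normal_vm = per_vm_slice(fixed_normal, fixed_high, len(normal_vms))  # inflated

# Scalable shares: each pool's total tracks the VMs inside it.
sh = scalable_pool_shares(high_vms)    # 20 * 2000 = 40000
sn = scalable_pool_shares(normal_vms)  #  2 * 1000 =  2000
new_high_vm = per_vm_slice(sh, sn, len(high_vms))
new_normal_vm = per_vm_slice(sn, sh, len(normal_vms))

# With fixed pool shares, a Normal VM could out-rank a High VM; no longer.
assert old_normal_vm > old_high_vm
assert new_high_vm > new_normal_vm
```

The inversion in the fixed-share case is exactly the dilution problem the feature addresses: a crowded high-priority pool spreads its static grant thinner and thinner as VMs are added.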

Even vMotion is getting an enhancement.  By claiming a single vCPU for the memory page tracing process during vMotion, great efficiencies can be had, allowing for a decrease in stun time.  While you may not notice this with a small VM, large workloads (such as SAP or Oracle) will greatly benefit and can now be vMotioned without a huge impact.

Upgrades

Finally, probably one of the best announcements: the external Platform Services Controller is dead (and there was much rejoicing).  Any deployment with an external controller will be converged into an embedded one as part of the upgrade, so there is no longer a need to run the separate convergence tool.  Even the upgrade planner gets some enhancements: it now receives notifications of the latest versions of vCenter Server and has a what-if capability to validate as much as it can before an upgrade happens, including checking interoperability between multiple VMware products.  To be honest, this is the simplest solution for everyone, and I’m glad this is being built into the installer and not just a KB article that has to be referenced.

The final piece of the upgrade enhancements revolves around vSphere Lifecycle Manager.  Previously, upgrades were limited to the ESXi image (and any drivers that may be baked into an OEM image).  Now we can combine ESXi, drivers, and even hardware firmware as part of the upgrade lifecycle (where have I seen this before … *cough* VxRail *cough*).  Users will be able to combine a base image, vendor add-ons, firmware updates, and any additional components they deem necessary for the upgrade cycle.

So what do you think? Is this enough to take the plunge and upgrade right away?

The vSphere C# client is dead! Long live the C# client!

Today VMware announced that it will no longer be supporting the C# client in the next major version of vSphere.  This really shouldn’t come as a surprise to anyone.  VMware has been shifting toward this for some time now as it keeps improving its web interface.  Earlier, other advanced vSphere functionality, as well as plugins such as SRM, went web client only.  With the additions of the embedded host client and a new HTML5 web client fling, it’s clear that this will be the future of GUI management going forward.

During a recent discussion of this news, it became clear there are some concerns about the announcement and the plans going forward.  Right now there is a percentage of the user base that has to use both clients to successfully manage their vSphere environment.  My biggest concern revolves around the Client Integration Plugin, which seems to have issues depending on which browser you use.  Other things like VUM don’t really work that well in the web client either (not to mention there is still a Windows dependency on the VUM server currently).  These are all hurdles that VMware will need to overcome, and I’m sure they can in time; the question is whether they will be ready by the GA date.

The biggest hurdle of all will be user acceptance and the learning curve associated with it.  There are a lot of users that still like the way the C# client is laid out and avoid the web client at all costs.  I know a lot of that is based on the speed of the interface.  The jump from 5.5 to 6.0 saw vast improvements in speed and performance, and I’m sure the next major version will see gains as well.

At this point, my suggestion to everyone is to start getting used to the web client as it is the future of GUI management for vSphere.  If you are running 5.5 or 6.0, go ahead and give it a try (you might need to separately install the web client server depending on your vSphere environment).  If you are running something older, well now might be a good time to start planning an upgrade!

Configuring VASA for use with a VNX

When VMware introduced vSphere 5 to the world, one of the enhancements was a new API for storage arrays that provides vSphere with information about the configuration and performance of your storage array.  For more information on VASA, please see this article from The Virtualization Practice.  VASA on a VNX (and other EMC arrays) historically was configured using an SMI-S provider.  This older configuration method has been covered very well by EMC vSpecialist Craig Stewart and can be found here.

 

Starting with VNX OE for FILE 7.1 and VNX OE for BLOCK 05.32, the VNX has native VASA support.  This eliminates the need for the SMI-S provider and allows you to point vSphere directly to the control station and SP.  It really is a one-step implementation, as I will show you below.  There is only one caveat: VASA for BLOCK and FILE are configured separately.  If you are using FC, FCoE, or iSCSI connections, you will want to use the BLOCK example; if you are using NFS, you will want to use the FILE example.
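Since the only difference between the two registrations is the host and port in the provider URL, a tiny helper keeps them straight.  This is just a convenience sketch in Python; the URL paths match the FILE and BLOCK examples that follow, and the lab hostnames are hypothetical:

```python
def vnx_vasa_url(host, scope):
    """Build the VASA provider URL for a VNX.

    FILE providers register against the Control Station on port 5989;
    BLOCK providers register against the SP over default HTTPS (port 443).
    """
    if scope == "file":
        return f"https://{host}:5989/vasa/services/vasaService"
    if scope == "block":
        return f"https://{host}/vasa/services/vasaService"
    raise ValueError(f"scope must be 'file' or 'block', not {scope!r}")

# Hypothetical lab addresses:
print(vnx_vasa_url("cs0.lab.local", "file"))
# https://cs0.lab.local:5989/vasa/services/vasaService
print(vnx_vasa_url("spa.lab.local", "block"))
# https://spa.lab.local/vasa/services/vasaService
```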

 

You will want to start in vSphere by going to Home > Administration > Storage Providers.  From there, click on “Add…” to configure your connection.

 

VNX OE for FILE 7.1 VASA configuration example

You will start by naming this connection.  I chose VNX FILE to make it easy to distinguish between block and file connections.  You will then use the following URL: https://<ip.or.dns.of.control.station>:5989/vasa/services/vasaService.  The username/password should be local to the control station (such as nasadmin or root); global accounts from the storage domain will not work here.  When it’s all said and done, you should have something like the screenshot below:

[Screenshot: VASA_VNX_FILE]

You will probably be prompted to verify the SHA fingerprint; just click Yes, and soon you’ll see your new provider listed with the following information:

[Screenshot: VASA_VNX_FILE_2]

 

VNX OE for BLOCK 05.32 configuration example

Just like the VNX OE for FILE example, you will start off by providing a name.  This time the URL will point to the SP, as follows: https://<ip.or.dns.of.SP>/vasa/services/vasaService.  Note the lack of a port specification, as HTTPS uses port 443 by default.  For the credentials, you will want to use a storage domain account (such as sysadmin).  If you configured it correctly, it should look something like this:

[Screenshot: VASA_VNX_BLOCK]

 

Since I have a very basically configured array in the lab, I see provider information like this:

[Screenshot: VASA_VNX_BLOCK_2]

 

After you have successfully configured your providers, you can go and set up your storage profiles.  Go to Home > Management > VM Storage Profiles and add a new profile.  From there you can select from a multitude of options to pick the one that best matches the LUN you are using for storage.

[Screenshot: VASA_VNX_STORAGE_PROFILES]

It really is that simple!  For more information on VASA on the VNX, read the Virtualization for EMC® VNX Release Notes (EMC Support credentials required).

Countdown to the VCP4

With the recent announcement of the VCP5, the time to take the VCP4 is running out. On top of that, VMware is currently running a promotion that allows for a free retake if you schedule and take the exam in the month of July (promo codes “VCPTAKE1” and “VCPTAKE2”). This renewed sense of urgency has motivated me to get my certification now. I took the required course back in December, but without having a home lab until a few months ago, I barely had any exposure to VMware products. By taking the VCP4, I will be eligible to take the VCP5 without having to take a training course as long as I complete the exam by February of 2012.

 

The exam:

The VCP4 exam consists of 85 questions that cover the changes from version 3 to version 4 as well as a basic understanding of ESX/i 4, vSphere 4, and the related plugins and features.  The exam is scored on a scale from 100 to 500, with 300 considered a passing score.  That said, it is my understanding that this exam is no walk in the park.  It will test your understanding of exact minimums and maximums, what hardware can be used and how it works, and how the software is installed, configured, and used.

 

Preparing for the exam:

The only thing VMware requires before you can take the exam is the certified training course.  This provides the minimum amount of exposure that VMware feels is necessary to earn the certification.  I took this class with my coworkers Mathew Brender and Tommy Trogden back in December of 2010.  Now it is time to study for the exam.  Besides the standard resources available on the VMware website, I picked up two books: the “VCP VMware Certified Professional vSphere 4 Study Guide” by Robert Schmidt and the “VCP4 Exam Cram: VMware Certified Professional” by Elias Khnaser.  Both come with very detailed overviews of all the topics covered on the exam as well as a plethora of test-style questions designed to give you a taste of what to expect.  However, I’ve found the questions in one book to be much easier than in the other, so I’m hoping the real questions fall somewhere in the middle.

I can combine this with my home lab to test things I’ve been reading about and to redo the labs from the training course.  My home lab is more or less based on the Baby Dragon from Phil Jaenke.  However, I only have one physical host at this time.  Luckily, ESX/i can be run virtualized, so I can create a few virtual hosts to test the more advanced vSphere features.

 

Final thoughts before the exam:

At this point I am 10 days away from walking into the testing center.  I have completed most of my reading from the two books, I am reviewing test questions, and I am trying to reconfigure the lab to redo some of my old exercises.  I am always looking for new practice test questions, and there seem to be plenty of them on the web (like the website of Simon Long).  If you have any good links, please feel free to leave them in the comments, and look for me on Twitter after the exam to see how I did.