Add virtual machine host fails with error 20408

I recently had a problem adding a host to a VMM server, even though all the obvious things had been checked: WinRM was enabled, firewall rules were in place, the service account had admin rights, and DNS was correct.

Still, every time I attempted to add the host an error occurred:

“Error (20408) VMM could not get the specified instance Microsoft:{668f165d-4dae-bcb6-5007ff1fc2e8} of class http://schemas.microsoft.com/wbem/wsman/1/wmi/root/standardcimv2/MSFT_NetAdapterRssSettingData on the server server.fqdn. The operation failed with error NO_PARAM”

In this instance, the server was running Windows Server 2016 and had been upgraded in place from 2012 R2. The fix was bizarre: save all VMs and remove the vSwitches so that only the normal physical adapters remain, then recreate the vSwitches. The configuration was identical, but clearly something behind the scenes was wrong, and recreating the vSwitches fixed it. Retrying the same job in VMM succeeded and the host was added.
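For reference, here is a rough sketch of those steps in PowerShell on the host. The switch and adapter names are hypothetical; note your existing switch settings before removing anything so you can recreate them exactly:

# Save all running VMs first
Get-VM | Where-Object State -eq 'Running' | Save-VM

# Record the current switch configuration before removing it
Get-VMSwitch | Format-List Name, SwitchType, NetAdapterInterfaceDescription

# Remove and recreate the external switch (names here are examples)
Remove-VMSwitch -Name 'ExternalSwitch' -Force
New-VMSwitch -Name 'ExternalSwitch' -NetAdapterName 'Ethernet 1' -AllowManagementOS $true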

Windows 2012 Dedupe – huge chunk store and 0%

One of the best new features in 2012 was file de-duplication.  That said, it does sometimes behave a bit strangely under some workloads.  I recently faced an issue where a 40TB volume with de-duplication enabled ended up with a huge chunk store that was using more space than the original data!


At a glance it looks like the best thing to do is turn off dedupe for the volume, but all this seems to do is disable further dedup work; anything that is already deduped will remain so.  I found the best/fastest way to “re-hydrate” your data and get rid of the chunk store (you could just format the volume if you don’t need the data) is to leave dedupe enabled but set an exclusion on the root.
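Setting that exclusion can be done with Set-DedupVolume; a minimal sketch, assuming the same F: drive used in the commands below:

# Exclude the volume root so no new optimisation work is done
Set-DedupVolume -Volume "F:" -ExcludeFolder "F:\"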

Then run the commands below in PowerShell (assuming drive letter F:):

Start-DedupJob -Volume "F:" -Type Unoptimization -Memory 50

Then run:

Start-DedupJob -Volume "F:" -Type GarbageCollection -Memory 50

You can then monitor the size of the chunk store and/or see the progress of any dedup jobs with this command:

Get-DedupJob

[Screenshot: Get-DedupJob output]

Do bear in mind the increased IO and server load while this runs; it may be best to start this out of hours.  Please also note that this command will only actually re-hydrate your files if dedupe is still enabled.
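To keep an eye on the volume while the unoptimization job runs, Get-DedupStatus (again assuming F:) reports the space saved and the number of optimized files, both of which should fall towards zero as files are re-hydrated:

Get-DedupStatus -Volume "F:" | Format-List Volume, FreeSpace, SavedSpace, OptimizedFilesCount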

System Center 2012 – Inside the Private Cloud

My three favorite parts of the System Center suite are Configuration Manager, Data Protection Manager and Endpoint Protection.  These three products work well at making most of the chores of running an IT environment lighter.


Configuration Manager & Endpoint Protection

This is, in my opinion, the flagship product of the System Center suite.  Management of servers, workstations and even mobile devices is done here, and with Service Pack 1 an impressive list of operating systems and devices is supported, including Linux and Mac OS.  Mobile device management has now been brought into Configuration Manager as well.  It is also within Configuration Manager that you deploy and manage Endpoint Protection.  Endpoint Protection was formerly known as Forefront Protection, and I really hope this product continues to be supported and isn’t eventually dropped like other Forefront products have been, such as TMG.  If you are lucky enough to have the Standard or Enterprise CAL already (and you really should if you are looking at System Center) then you may be able to save a fair bit of money by ditching your current antivirus vendor and moving to Endpoint Protection.

Typically in the past I would have used standard Windows deployment from a share or USB volume, or another vendor’s solution such as Ghost, as the Configuration Manager effort wasn’t always worth the reward.  Deployments are now a lot easier, and when tied to a decent collection of drivers and task sequences it is simple to cater quickly for a new situation or model of desktop or server.


Data Protection Manager

In my experience no backup solution is perfect; generally each has its strengths and weaknesses.  With the 2012 iteration of Data Protection Manager the Microsoft offering is looking to be more of the former and less of the latter.

DPM is great at backing up Microsoft’s own products and applications, and I have been using it to back up nearly 2TB of Exchange data as well as a large SharePoint farm.

DPM offers many of the features an enterprise backup solution should, such as continuous protection, differential and incremental backups, and both disk-to-disk and disk-to-tape backups.  I feel that it is only in the scheduling and retention options that DPM starts to fall down.  Typically I like to keep daily data for a month, weekly data for 2-6 months, month-end data for 2 years and year-end data for even longer, but unfortunately the retention and scheduling options don’t really cater for this approach: you simply have a hard limit on how long you can retain backup data, with disk-to-disk used for short term and a second schedule for long-term tape backups.  This leads to me using a different product to perform end-of-month backups simply so that I can keep them for longer than the other tape backups.

Generally DPM performs very well and can complete backups shockingly fast, but it can have a tendency to occasionally mark replicas as bad or fail a snapshot, only for it to succeed later without issue.  Quite possibly a quirk of the environment I have evaluated it in, but something which seems to happen a lot less often with other solutions.  Also, on upgrading to SP1, be prepared to check the consistency of every replica and build the time taken for this into your upgrade plan.

Service Pack 1 is definitely worth the upgrade as it brings a number of feature improvements, such as support for deduplicated volumes, and is the final piece in the puzzle to getting dedupe working on cheap hardware.  With Windows Server 2012 and Data Protection Manager you can use deduped volumes without the need to buy expensive storage solutions and licenses.  The useful extra features don’t end there: Cluster Shared Volumes can now be backed up, and Hyper-V guest machines can be continuously protected even while they are being live migrated.


Virtual Machine Manager

Virtual Machine Manager is to Hyper-V as vCenter Server is to VMware.  VMM is the only real additional software cost you will have to bear if you want to use a full Hyper-V clustered solution (God help anyone who wants to manage a large cluster of Hyper-V hosts as individual servers).  The thinking obviously being that if Microsoft gives you the hypervisor for free you won’t balk at paying for the management tools, and I expect a good number of people will buy the System Center suite simply to be able to run Virtual Machine Manager.  If this sounds like you, I hope you at least try the other parts of the System Center suite as they are worth a good look.


Orchestrator

Orchestrator is the centrepiece of the System Center suite, tying all of the other products together to make an intelligent, workflow-based automation solution.  It is based on software Microsoft acquired when it purchased Opalis back in 2009.

Orchestrator makes sense in larger environments or when a requirement for automation is present, such as in managed hosting.  It will likely be of less use in smaller environments, as the time taken to configure and automate tasks won’t have quite the same payback.

With Orchestrator it is possible to automate almost anything, from deploying VMs through to recovering from an error condition in a service.  You can even find integration packs from various vendors which let you control and automate their products from Orchestrator.


Operations Manager

Operations Manager is Microsoft’s monitoring and alerting system, and in the latest version it does a lot more than peer into event logs and give you a huge list of errors.  As with the rest of the System Center suite, Service Pack 1 introduces support for Linux, which enhances the appeal of Operations Manager a little, and the list of supported applications and devices seems to be constantly growing as well.  No doubt pure Linux environments will be running Nagios or something similar, but for mixed or pure Microsoft environments Operations Manager is definitely one of the best out there.


Service Manager

Service Manager is the System Center component I have spent the least time looking at.  It is hard to get excited about help desk solutions, especially when so many people spend so long logged into them.  Possibly the best feature of Service Manager is the auditing and reporting.  If correctly configured with Orchestrator, Service Manager can help you to identify why problems are occurring, or when changes were made which could have contributed to an issue.  Service Manager doesn’t feel like the kind of product people would buy on its own, but if you have already paid for the full System Center suite you would have to be silly not to at least try it, and as with all of the other solutions mentioned here, generally the longer you use them, the more you come to realise how powerful they are.


Unified installer

There is a unified installer which is great for quickly deploying the whole System Center suite, and you can read all about my experience here.  I also urge you to click through some of the categories above for more System Center related posts.

Windows Server 2012: Thoughts so far

When I first booted up into Windows Server 2012 I genuinely couldn’t believe my eyes.  The user interface formerly known as Metro?  On a server?  Who is going to have a touchscreen on a server?  But slowly, it starts to make sense.  When you open the start menu it is usually because you are looking to start an application or configuration console from a shortcut, and with the old start menu the rest of the screen is somewhat redundant.  Not any more; the menu of icons now fills the entire screen, and the Win key + search term combo still works, so I am happy.  The only thing I miss is the ability to shift-right-click on an item to run it as another user; now I have to pin it to the task bar and go to the desktop to do so.  A minor inconvenience, but an inconvenience nonetheless.

[Screenshot: Windows Server 2012 Metro-style start screen]

Now on to some of the amazing new features of 2012: I love the ability to team network interfaces at the OS level.  Previously, you were at the whim of your network card drivers to achieve any kind of teaming, whereas now you can use whatever network interfaces you like to create a bit of redundancy and/or failover.  I can’t say enough good things about the new Server Manager either; it makes adding roles and features a breeze, particularly when compared to previous versions.  You can quickly and easily add a role or make a change to entire clusters of servers from one Server Manager console.
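As an illustration, creating a team is a one-liner with the new NetLbfo cmdlets, and roles can be installed remotely just as easily. The adapter, team and server names below are hypothetical, and the teaming mode and load-balancing algorithm should be chosen to suit your switches:

# Create a switch-independent team from two physical adapters
New-NetLbfoTeam -Name 'Team1' -TeamMembers 'Ethernet 1','Ethernet 2' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Add a role to a remote server without leaving your desk
Install-WindowsFeature -Name FS-FileServer -ComputerName 'server01'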

[Screenshot: Server Manager in Windows Server 2012]

The new Resilient File System, and in particular the deduplication feature of 2012, look very exciting, and I suggest everyone tries building a test 2012 server and moving their file shares to it, just to see how much space you could save with deduplication.  Actually using it in production could be a little trickier, as it requires backup solutions that are deduplication-aware, or else on a restore you may find yourself rapidly running out of space or encountering other issues.  I don’t imagine it will be long before vendors include support for this feature.  Another great new feature is that you can now run check disk online; never again will you have to restart and wait while check disk trawls tediously through a volume before the operating system starts.  The Resilient File System also does not re-use the same disk blocks during a write, so if there is a power outage or other failure, the original data will still be readable.
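For that test server, enabling deduplication on a spare volume only takes a few lines; a sketch, with E: standing in for whichever volume you test on:

# Install the dedup feature and enable it on the test volume
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "E:"

# Kick off an optimization pass, then check the savings
Start-DedupJob -Volume "E:" -Type Optimization
Get-DedupStatus -Volume "E:" | Format-List SavedSpace, SavingsRate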

PowerShell 3 is touted to have over 2,400 cmdlets, and to be honest I am only starting to scratch the surface of what is now available, but it is safe to say that if you liked PowerShell in 2008 R2, you will love it in 2012.  A useful trick I use to learn more about PowerShell is to first configure something in the GUI and then hunt through the PowerShell logs in Event Viewer to see the actual commands that were run.  Also don’t forget to check out the new PowerShell 3.0 ISE.
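You can even pull those log entries from PowerShell itself rather than clicking through Event Viewer; a quick sketch querying the PowerShell operational log:

# Show the 20 most recent events from the PowerShell operational log
Get-WinEvent -LogName 'Microsoft-Windows-PowerShell/Operational' -MaxEvents 20 | Format-List TimeCreated, Message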

[Screenshot: PowerShell 3.0 ISE]

There are other less tangible improvements, such as boot time; it certainly feels a lot quicker to be up and running than with previous versions.

There are a few gotchas.  For example, while deploying a new Lync 2013 environment I discovered that 2012 has much tougher certificate requirements, and even a single non-self-signed certificate in the “Trusted CA” certificates folder was enough to upset the rest of the certificates in the personal store.  So if you are planning to move to 2012 any time soon, now is a great time to think about cleaning up your certificates and rationalising any you have pushed out via group policy.  Another issue I faced was with a Core edition server which had many updates applied; when I then tried to install the server GUI, I found myself unable to do so.  I would recommend that you build all servers with the GUI, update them and then uninstall the GUI, so that you have the option of re-adding it later should you so desire.  The new Minimal Server Interface offers a reasonable compromise if Core is a little too extreme for you but you still want to realise the benefits of a lighter footprint.
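A quick way to spot offending certificates is to look for anything in the Trusted Root store whose subject and issuer differ, since a genuine root CA certificate is self-signed and the two fields should match:

# List certificates in Trusted Root that are not self-signed
Get-ChildItem Cert:\LocalMachine\Root | Where-Object { $_.Subject -ne $_.Issuer } | Format-List Subject, Issuer, Thumbprint

On the GUI point, the relevant features are Server-Gui-Shell and Server-Gui-Mgmt-Infra; if you uninstall them with Uninstall-WindowsFeature but leave out the -Remove switch, the binaries stay on disk and the GUI can be re-added later with Install-WindowsFeature.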

Hyper-V is now in its 3rd generation, and each new version feels a little more mature and stable.  If you are already paying for datacentre licenses for your hosts, this new version makes it harder than ever to justify paying for a competitor’s hypervisor when Hyper-V is already included in those licenses.  Unfortunately I have not yet built a 2012 Hyper-V cluster, but even running it on single hosts I can see improvements.  Additionally, running native Hyper-V guests means that you can always export them to Azure, either for a bit of extra capacity or as a backup/DR solution.  My only gripe is that the new Hyper-V management tools can’t manage older 2008 R2 Hyper-V hosts, but I guess that is one of the prices of progress.

[Screenshot: Hyper-V Manager in Windows Server 2012]