Wednesday, August 20, 2014




Our whitepaper “Hybrid Cloud with NVGRE (Cloud OS)” has reached an unbelievable milestone today.
The paper has been downloaded more than 10,000 times!

I am truly humbled and honored to know that our effort in writing this whitepaper has been so greatly appreciated by the community worldwide.

The Story

The initial idea, back in October 2013, was to create a comprehensive guide that would help people implement Network Virtualization (NVGRE) using System Center Virtual Machine Manager 2012 R2 and Hyper-V in Windows Server 2012 R2.
I started the job together with Flemming Riis, who had a real-world fabric available for us to test, crash and build on. Learn, apply, and repeat.

The first release was based on the Preview bits of the Cloud OS, and we decided to update the content with the RTM builds as soon as possible in order to address the many questions we knew would arise in the different TechNet forums.

Our reviewers were Daniel Neumann and Stanislav Zhelyazkov.

We now had a new version.

In addition, we added a comprehensive “FAQ” chapter to the paper based on experience from early adoption, TechNet forums and feedback.
This is when we decided that we had to “hire” Stanislav Zhelyazkov. He provided us with unique details that greatly improved the quality of the whitepaper, especially in this section.

This was the third version.

Our fourth version added Windows Azure Pack to the paper, where we ended up putting NVGRE into context. We highlighted how to leverage the multi-tenant IaaS platform we had been building with VMM, together with the service management API in Azure Pack. This was a big update, which included several other elements such as gallery items, remote console and much more.

Our fifth version is where we added a “FAQ” chapter for the Azure Pack part, and hence the “hiring” of Marc van Eijk, who gave us deeper insight and a better perspective based on his experience. This is still the current version, which we know is helping people on a day-to-day basis.

We know that this whitepaper is greatly appreciated by Microsoft Support and is widely used by their customers when they are facing challenges regarding these technologies. That is truly a confirmation that we really did something useful this time :-)

My promise to you:

Instead of putting too much effort into books, with all their heavy publishing processes, I will continue to write fresh, up-to-date and deeply technical whitepapers that can make your life easier.
That means you can expect more to come from this side as we see new releases of this stack.

I also know that my team is with me, and on their behalf I can only say that we are very grateful and appreciate all the feedback we have received along the way.

A big thank you from me, Flemming, Stanislav, Daniel and Marc!

Monday, August 18, 2014

VM Cloud is missing in Windows Azure Pack

Recently, I’ve encountered a bug when working with WAP and VM Cloud as the resource provider.


You have connected the service management API to your SPF endpoint and added a VMM management stamp together with a Remote Desktop Gateway.

If you decide to change the FQDN of the Remote Desktop Gateway registered with your VMM management stamp, you will end up with a blank VM Cloud in the admin portal.
The connection to the SPF endpoint is still present, but the VMM management stamp with its cloud is missing.

This also causes the VMs and the virtual networks for the tenants to appear as missing in the tenant portal.

On the SPF server you will find the following event logged for ManagementODataServices:

On the server where the admin API is installed, you will find the following in the event viewer:

When you make changes to the FQDN of the Remote Desktop Gateway in WAP, you will end up with another SCSPFServer record in SPF, together with an SCSPFSetting that has the same ID as the previous records.

As you can see from the screenshot below, we now have two records of the ServerType “RDGateway”.

If we dig deeper, the following screenshot shows that we have two entries with the same ID, both registered to the VMM management stamp.

In short, the VMM management stamp is registered again, which generates a duplicate ID that results in this behavior.


In order to clean up, we have to work directly on the SPF server using the SPFAdmin module with PowerShell.

Note: when done correctly, this will not delete, lose, or harm anything in your production environment, but pay attention.

1. Log on to your SPF server and import the SPFAdmin module.

2. Run the following cmdlets to identify and remove your RDGateway servers. In our case, we have two records and have to remove both of them before we add the RDGateway we want later.
The reason is that when you try to add the RDGateway in WAP afterwards, you will see that this column is empty although the record still exists in SPF, and if you try to add the RDGateway again, you will end up in the exact same situation. Therefore we must remove both servers in SPF.
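A sketch of what steps 1 and 2 can look like with the SPF admin cmdlets (verify the output in your own environment before removing anything):

```powershell
### Step 1: import the SPF admin module on the SPF server
Import-Module spfadmin

### List all RDGateway records - in our case both the old and the new FQDN show up
Get-SCSpfServer | Where-Object { $_.ServerType -eq "RDGateway" }

### Remove every RDGateway record
Get-SCSpfServer | Where-Object { $_.ServerType -eq "RDGateway" } | Remove-SCSpfServer
```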

3. Remove the duplicate SCSpfSetting with the following cmdlets. The SCSpfSetting at the top is the one with the duplicate ID that you want to remove.
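As a sketch, this removal can be done as follows (the index assumes the duplicate is listed on top, as in our case; check the IDs first):

```powershell
### List all settings and note which one shares the ID of the removed server record
Get-SCSpfSetting

### Grab the top entry (the duplicate) and remove it
$setting = @(Get-SCSpfSetting)[0]
Remove-SCSpfSetting -Setting $setting
```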

4. Next, we want to register the RDGateway directly with our stamp in SPF to avoid creating duplicate IDs.
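A sketch of the registration, assuming a placeholder gateway FQDN of rdgw01.contoso.com (replace it with your own):

```powershell
### Get the VMM management stamp and register the RD Gateway directly against it
$stamp = Get-SCSpfStamp
$server = New-SCSpfServer -Name "rdgw01.contoso.com" -ServerType RDGateway -Stamps $stamp

### Register the endpoint setting for the gateway against the same server object
New-SCSpfSetting -Name "rdgw01.contoso.com" -Value "https://rdgw01.contoso.com" -SettingType EndPoint -Server $server
```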

Once this is done, you can perform a refresh in both the admin portal and the tenant portal, and your VMM management stamp should be present again.
Also edit the connection to verify that the RDGW is registered with the correct values.

Please note: if you register your VM Cloud resource provider in WAP with all the settings at once, you will not run into this issue. It only occurs if you decide to add the RDGateway afterwards, or make changes to the existing one.

Tuesday, August 12, 2014

Applied UR3 for VMM? Update your VMM DHCP Server Extension now!

Update your SCVMM DHCP Server Extension now!

From the KB:

“When using System Center 2012 R2 Virtual Machine Manager (VMM 2012 R2), you may discover that some virtual machines that are deployed on Hyper-V Network Virtualization networks with dynamic IP address allocation may not get an IP address for a few minutes after a reboot of the VM. Eventually the VM gets the IP address and otherwise functions normally.

The behavior can occur if the host has an older version of the VMM DHCP server extension. In order to verify this, find the version of “Microsoft System Center Virtual Machine Manager DHCP Server (x64)” installed on the host by running the following PowerShell command:

Get-WmiObject -Class win32_product -Filter 'Name = "Microsoft System Center Virtual Machine Manager DHCP Server (x64)"'

The resolution is to first uninstall the old version of the DHCP extension manually, and then install the updated version from the VMM installation folder (\SwExtn\DHCPExtn.msi).

Default path is: C:\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\SwExtn\DHCPExtn.msi
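If you prefer to script the manual uninstall/reinstall on a host, a minimal sketch (assuming the default installation path above):

```powershell
### Find the installed DHCP extension (the product name is quoted exactly as registered)
$app = Get-WmiObject -Class win32_product -Filter 'Name = "Microsoft System Center Virtual Machine Manager DHCP Server (x64)"'

### Uninstall the old version
$app.Uninstall()

### Install the updated version silently from the default path
$msi = "C:\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\SwExtn\DHCPExtn.msi"
Start-Process msiexec.exe -ArgumentList "/i `"$msi`" /qn" -Wait
```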

Once this is done, the VMs should no longer experience a delay in acquiring an IP address after a reboot.


Windows Server 2012 R2 Hyper-V introduced several enhancements related to NVGRE.
One of these is “Dynamic IP address learning”.

What is Dynamic IP address learning?

Feedback from customers told Microsoft that it was important to enable highly available services to run in a VM network. To support that, dynamic IP address learning was brought to the table.
In other words, services such as DHCP, DNS and AD are supported in an NVGRE-based network on Hyper-V.
First, for broadcast or multicast packets in a VM network, we will use a PA multicast IP address if configured. However, the typical data center operator does not enable multicast in their environments. As a result, when a PA multicast address is not available, we use intelligent PA unicast replication. What this means is that we unicast packets only to PA addresses that are configured for the particular virtual subnet the packet is on. In addition, we only send one unicast packet per host, no matter how many relevant VMs are on the host.

Finally, once a host learns a new IP address, it notifies SCVMM. At this point, the learned IP address becomes part of the centralized policy that SCVMM pushes out. This allows for both rapid dissemination of HNV routing policy and limits the network overhead for disseminating this HNV routing policy.”

Second, what is the SCVMM DHCP Extension?

In order to leverage NVGRE without VMM in place, you would have to manage your hosts entirely with PowerShell.

When VMM is in place (and it really should be, when using NVGRE), VMM acts as the complete management layer, also for the NVGRE part. Since NVGRE is basically a policy-driven technology, VMM needs to keep track of every IP address used with NVGRE. During deployment of virtual machines connected to a VM network with NVGRE (these addresses are often referred to as customer addresses), VMM is able to configure static IP addresses on these VMs using this agent. This was introduced in VMM 2012 SP1 and is present in the R2 release.
To summarize: it is a Hyper-V switch extension that is required on every Windows Server Hyper-V host for DHCP to work correctly with NVGRE.

Great. But can I deploy the updated agents to all my Hyper-V hosts in a single operation?

From your VMM server, you can run the following script (adjust the computer names to match your fabric before you run it):

$setup = "\\vmm01\c$\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\SwExtn\DHCPExtn.msi"

### Install the extension silently on every host; $using: passes the local path variable into the remote sessions
Invoke-Command -ComputerName hv03, hv04, hv01, hv02, hvfm01, hvfm02, hvgw01, hvgw02 -ScriptBlock { Start-Process msiexec.exe -ArgumentList "/i `"$using:setup`" /qn" -Wait }

Monday, August 11, 2014

Free Webinars - Azure Technologies in the Private Cloud

This is just an announcement that, in the upcoming weeks, I will be holding a presentation related to my latest whitepaper, published by Savision.

During this session, I will walk through the importance of a private cloud and how you can make it real with technologies from Microsoft.
Especially interesting is the focus on Windows Azure Pack, which has gotten a lot of attention during the last months.

Dive in to see what Azure Pack is all about and what the benefits are.

I encourage you all to ask questions during these webcasts, as long as they are related to the content or Led Zeppelin ;-)

Tuesday, July 29, 2014

New whitepaper - Azure Technologies in the Private Cloud


Together with Savision, I am glad to announce that a new whitepaper has been published.

The content focuses on the Cloud OS – and especially the private cloud enabled by Windows Azure Pack.

If you find this interesting, I suggest that you download it and sign up for one of our free webinars in the near future.

Feedback is highly appreciated.

The whitepaper can be downloaded by following this link:

Monday, July 28, 2014

Workaround for VM Roles and Storage Classifications

Solving classification for your VM Roles with VMM

Since you are reading this blog post, and hopefully this blog now and then, you are most likely familiar with the concept of Gallery Items in Windows Azure Pack.

Useful Resources

If not, I suggest that you read the following resources:

If you want all the details around everything, please download our “Hybrid Cloud with NVGRE (Cloud OS)” whitepaper, which puts everything into context.

My good friend and fellow MVP, Marc van Eijk, will publish a couple of blog posts on the “Building Clouds” blog, where he dives into the nasty details around VM Roles. Here’s the link to his first contribution:

Gallery Items and VM Roles

Before we proceed: Gallery Items bring “VM Roles” into Azure Pack. VM Roles remind you a lot of service templates in VMM, in the way they are able to climb up the stack and embrace applications and services during deployment. However, a VM Role is not completely similar to a service template in VMM, as it has no knowledge of any of the profiles (Hardware Profile, Application Profile, SQL Profile and Guest OS Profile).

This is where it gets tricky.

Gallery Items are designed for Azure and bring consistency to the Cloud OS vision by letting you create VM Roles through the VMRoleAuthoringTool from CodePlex, for both Azure Pack (private cloud) and Microsoft Azure (public cloud).

The components of a VM Role are:

·         Resource Definition file (required – and imported in Azure Pack)
·         View Definition file (required – presents the GUI/wizard to the tenants)
·         Resource Extension (optional – but required when you want to deploy applications, server roles/features and more to your VM Role)

The tool lets you create, alter and update all of these components, and you can read more about the news in this blog post:

So far in 2014, I have visited many customers who are trying to adopt Azure Pack and Gallery Items with VM Roles. They want to provide their tenants with brilliant solutions that are easy to understand and deploy, and that can be serviced easily by the cloud administrators.
However, there are some important things to note before embracing VM Roles in Azure Pack, especially when it comes to storage.

·         VM Roles only use differencing disks
·         You can’t benefit from the storage classifications associated with your VMM clouds – and so can’t determine where the VHDXs will be stored

Why are we using differencing disks for VM Roles?

This is a frequently asked question. In the VMM world, we are familiar with the BITS operation during VM deployment. Luckily, fast file copy was introduced with VMM 2012 R2, and we can also leverage ODX for deployments now, so hopefully BITS is not something you see very often when deploying VMs anymore.
However, in order to speed things up a bit more, we are using differencing disks for VM Roles, to reduce deployment time and improve performance. Shared bits from the parent VHDX are served from cache in most scenarios, and a new VM simply creates a new differencing disk and boots up. This goes for both the OS disk and the data disks of the VM Role. No file copy needs to occur (except for the first VM to require the disk). When you then decide to scale out a VM Role in Azure Pack, the new instance can boot almost immediately and start walking through its setup.

Ok, I understand the decision around differencing disks now, but what about storage classification, and where do these disks end up?

Since VM Roles in Azure Pack are only linked to the disks in the VMM library (by using tags), we can’t map them to any of the storage classifications.
Out of the box, there is no way to modify this prior to deployment.

Tip 1 – Default parent disks path on the hosts

In VMM, navigate to Fabric and open the properties of your hosts in the host groups associated with the cloud used by hosting plans in Azure Pack.

Here you can specify the default parent disk paths to be used for the virtual machines (VM Roles).
If you have dedicated shares or CSVs, this can be helpful and can streamline where the VM Roles live.
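The same default parent disk paths can also be set from PowerShell with Set-SCVMHost; a minimal sketch, assuming a hypothetical host name and CSV path:

```powershell
### Hypothetical host and parent disk path - adjust to your own fabric
$vmHost = Get-SCVMHost -ComputerName "hv01"
Set-SCVMHost -VMHost $vmHost -BaseDiskPaths "C:\ClusterStorage\CSV01\ParentDisks"
```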

Tip 2 – Live Storage Migration post deployment

At a customer site earlier this year, we ended up using PowerShell to move the disks around after deployment.

This is something we added to SMA afterwards, automatically triggered after the create operation of every new VM Role.

Here’s the script:

### Get every VM Role in a specific VMM Cloud used by Azure Pack

$vms = Get-SCVirtualMachine | Where-Object {$_.AvailabilitySetNames -cnotcontains $null -and $_.Cloud -like "Service Provider Cloud"}

### Move the storage to the preferred/dedicated directory
foreach ($vm in $vms)
{
    Move-SCVirtualMachine -VM $vm -Path "C:\ClusterStorage\CSV01\" -UseLAN -RunAsynchronously
}

As you can see, we are querying virtual machines that have an availability set associated. Every time you deploy a VM Role with Azure Pack, the underlying cloud resource in VMM gets an availability set to ensure that, when you scale out the VM Role, the workloads are spread across different Hyper-V nodes in a cluster (assuming you are using a Hyper-V cluster for your workloads).

That’s it, and hopefully this gave you some ideas and more information around VM Roles in Azure Pack.

Monday, July 14, 2014

Creating multiple VMs in WAP - Tenant Public API

I received a question this morning about whether it is possible to create many VMs at once using the tenant public API for Azure Pack.

The short answer is: yes.

If you want to know how to expose and configure the tenant public API in Azure Pack, you can read a blog post I wrote a couple of weeks ago:

Here are the cmdlets for creating five virtual machines as a tenant, using the tenant public API in Azure Pack with PowerShell ISE:

### Import the PublishSettingsFile you have downloaded from the tenant portal

Import-WAPackPublishSettingsFile "C:\mvp.publishsettings"

### Get your VM Template and store it in a variable

$template = Get-WAPackVMTemplate -Name "GEN1 Template"

### Get your Virtual Network and store it in a variable

$vnet = Get-WAPackVNet -Name "LabNetwork"

### Get the credentials required for the local admin account for your VMs

$creds = Get-Credential

### Define an array. We will create 5 VMs in this example

$vms = @(1..5)

foreach ($vm in $vms)
{
    $name = "wapvm" + $vm
    New-WAPackVM -Name $name -Template $template -VMCredential $creds -VNet $vnet -Windows
}


Once the cmdlets have completed, you should have five virtual machines in the tenant portal: wapvm1, wapvm2, wapvm3 and so on.
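As a quick check from the same PowerShell session, you can also list the new VMs (assuming the wapvm naming prefix from the script above):

```powershell
### List the freshly created tenant VMs by their naming prefix
Get-WAPackVM | Where-Object { $_.Name -like "wapvm*" }
```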