Tuesday, July 29, 2014

New whitepaper - Azure Technologies in the Private Cloud


Together with Savision, I am glad to announce that a new whitepaper has been published.

The content focuses on the Cloud OS – and especially the private cloud enabled by Windows Azure Pack.

If you find this interesting, I suggest that you download it and sign up for one of our free webinars in the near future.

Feedback is highly appreciated.


The whitepaper can be downloaded by following this link:

http://www.savision.com/resources/news/free-whitepaper-azure-technologies-private-cloud-mvp-kristian-nese



Monday, July 28, 2014

Workaround for VM Roles and Storage Classifications

Solving classification for your VM Roles with VMM

Since you are reading this blog post, and hopefully this blog now and then, you are most likely familiar with the concept of Gallery Items in Windows Azure Pack.

Useful Resources

If not, I suggest that you read the following resource: http://kristiannese.blogspot.no/2013/10/windows-azure-gallery-items-getting.html

If you want all the details around everything, please download our “Hybrid Cloud with NVGRE (Cloud OS)” whitepaper, which puts everything into context.

My good friend and fellow MVP – Marc van Eijk – will publish a couple of blog posts at the “Building Clouds” blog, where he dives into the nasty details around VM Roles. Here’s the link to his first contribution: http://blogs.technet.com/b/privatecloud/archive/2014/07/17/the-windows-azure-pack-vm-role-introduction.aspx

Gallery Items and VM Roles

Before we proceed: Gallery Items bring “VM Roles” into Azure Pack. These remind you a lot of service templates in VMM, in the way that they are able to climb up the stack and embrace applications and services during deployment. However, a VM Role is not completely similar to a service template in VMM, as it has no knowledge of any of the profiles (Hardware Profile, Application Profile, SQL Profile and Guest OS Profile).

This is where it gets tricky.

Gallery Items are designed for Azure and bring consistency to the Cloud OS vision, by letting you create VM Roles through the VMRoleAuthoringTool from CodePlex for both Azure Pack (Private Cloud) and Microsoft Azure (Public Cloud).

The components of a VM Role are:

·         Resource Definition file (required – and imported in Azure Pack)
·         View Definition file (required – presents the GUI/wizard to the tenants)
·         Resource Extension (optional – but required when you want to deploy applications, server roles/features and more to your VM Role)

The tool lets you create, alter and update all these components, and you can read more about the news in this blog post: http://blogs.technet.com/b/scvmm/archive/2014/04/22/update-now-available-for-the-virtual-machine-role-authoring-tool.aspx

So far, in 2014, I have been visiting many customers who are trying to adopt Azure Pack and Gallery Items with VM Roles. They want to provide their tenants with brilliant solutions, that are easy to understand and deploy, and can be serviced easily by the cloud administrators.
However, there are some important things to note prior to embracing the VM Roles in Azure Pack, especially when it comes to storage.

·         VM Roles only use differential disks
·         You can’t benefit from the storage classifications associated with your VMM clouds – and determine where the VHDXs will be stored

Why are we using Differential disks for VM Roles?

This is a frequently asked question. In the VMM world, we are familiar with the BITS operation during VM deployment. Luckily, fast file copy was introduced with VMM 2012 R2 and we can also leverage ODX for deployment now, so hopefully BITS is not something you see very often when deploying VMs anymore.
However, in order to speed things up a bit more, we are using diff disks for VM Roles. The intent is to reduce deployment time and improve performance. Shared bits from the parent VHDX are served up from cache in most scenarios, and new VMs simply create a new diff disk and boot up. This goes for both the OS disk and the data disks of the VM Role. No file copy needs to occur (except for the first VM to require the disk). When you then decide to scale out a VM Role in Azure Pack, the new instance can boot almost immediately and start walking through the setup.


Ok, I understand the decision around Differential disks now, but what about storage classification and where to put these disks?

Since VM Roles in Azure Pack are only linked to the disks in the VMM library (by using tags), we can’t map them to any of the storage classifications.
Out of the box, there is no way to modify this prior to deployment.

Tip 1 – Default parent disks path on the hosts

In VMM, navigate to Fabric and click on properties on your hosts in the host groups associated with the cloud used by Hosting Plans in Azure Pack.



Here you can specify the default parent disk paths to be used for the virtual machines (VM Roles).
If you have dedicated shares or CSVs, this might be helpful and can streamline where the VM Roles live.
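Setting this per host in the console gets tedious for larger host groups, so the same change can be scripted. A sketch with the VMM cmdlets – the host group name and path are examples, and the -BaseDiskPaths parameter is assumed from VMM 2012 R2:

```powershell
### Set the default parent (base) disk path on every host in the host group
### used by the Azure Pack hosting plan. Host group name and path are examples.
Import-Module virtualmachinemanager

$hosts = Get-SCVMHost | Where-Object { $_.VMHostGroup -like "WAP Hosts" }

foreach ($vmHost in $hosts)
{
    ### -BaseDiskPaths controls where the parent VHDXs for the diff disks land
    Set-SCVMHost -VMHost $vmHost -BaseDiskPaths "C:\ClusterStorage\CSV01\BaseDisks"
}
```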

Tip 2 – Live Storage Migration post deployment

At a customer site earlier this year, we ended up using PowerShell to move the disks around after deployment.

This is something we added to SMA afterwards, automatically triggered after the create operation of every new VM Role.

Here’s the script:

### Get every VM Role in a specific VMM Cloud used by Azure Pack

$vms = Get-SCVirtualMachine | Where-Object {$_.AvailabilitySetNames -cnotcontains $null -and $_.Cloud -like "Service Provider Cloud"}

### Move the storage to the preferred/dedicated directory

foreach ($vm in $vms)
{
    Move-SCVirtualMachine -VM $vm -Path "C:\ClusterStorage\CSV01\" -UseLAN -RunAsynchronously
}

As you can see, we are querying virtual machines that have an availability set associated. Every time you deploy a VM Role with Azure Pack, the underlying cloud resource in VMM gets an availability set, to ensure that when you scale out the VM Role, the workloads are spread across different Hyper-V nodes in a cluster (assuming you are using a Hyper-V cluster for your workloads).
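If you want to see which VMs the query picks up before wiring the script into SMA, a quick check along these lines helps (a sketch; the cloud name is an example):

```powershell
### List VM Role instances (VMs carrying an availability set) in the cloud,
### together with their current storage location
Get-SCVirtualMachine |
    Where-Object { $_.AvailabilitySetNames -cnotcontains $null -and $_.Cloud -like "Service Provider Cloud" } |
    Select-Object Name, AvailabilitySetNames, Location
```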

That’s it – hopefully this gave you some ideas and more information about VM Roles in Azure Pack.



Monday, July 14, 2014

Creating multiple VMs in WAP - Tenant Public API

I received a question this morning asking whether it is possible to create many VMs at once using the tenant public API in Azure Pack.

The short answer is: yes.

If you want to know how to expose and configure the tenant public API in Azure Pack, you can read a blog post I wrote a couple of weeks ago: http://kristiannese.blogspot.no/2014/06/azure-pack-working-with-tenant-public.html

Here are the cmdlets for creating 5 virtual machines as a tenant, using the tenant public API in Azure Pack with PowerShell ISE:

### Import the PublishSettingsFile you have downloaded from the tenant portal

Import-WAPackPublishSettingsFile "C:\mvp.publishsettings"

### Get your VM Template and store it in a variable

$template = Get-WAPackVMTemplate -Name "GEN1 Template"

### Get your Virtual Network and store it in a variable

$vnet = Get-WAPackVNet -Name "LabNetwork"

### Get the credentials required for the local admin account for your VMs

$creds = Get-Credential

### Define an array. We will create 5 VMs in this example

$vms = @(1..5)

foreach ($vm in $vms)
{
    $name = "wapvm" + $vm
    New-WAPackVM -Name $name -Template $template -VMCredential $creds -VNet $vnet -Windows
}


Once the cmdlets have completed, you should have 5 virtual machines in the tenant portal: wapvm1, wapvm2, wapvm3 and so on.
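If you prefer to verify the result from PowerShell rather than the portal, something like this should do (a sketch using the same WAPack cmdlets):

```powershell
### List the newly created tenant VMs through the tenant public API
Get-WAPackVM | Where-Object { $_.Name -like "wapvm*" } | Select-Object Name
```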




Monday, July 7, 2014

Windows Azure Pack - Infrastructure as a Service Jump-start

If you are interested in Azure Pack and especially the VM Clouds offering (Infrastructure as a Service), then you should mark the date and time so that you are able to join us this week.

We will be arranging an MVA Jump-Start: Windows Azure Pack – Infrastructure as a Service Jump-Start.


“IT Pros, you know that enterprises desire the flexibility and affordability of the cloud, and service providers want the ability to support more enterprise customers. Join us for an exploration of Windows Azure Pack's (WAP's) infrastructure services (IaaS), which bring Microsoft Azure technologies to your data center (on your hardware) and build on the power of Windows Server and System Center to deliver an enterprise-class, cost-effective solution for self-service, multitenant cloud infrastructure and application services. 

Join Microsoft’s leading experts as they focus on the infrastructure services from WAP, including self-service and automation of virtual machine roles, virtual networking, clouds, plans, and more. See helpful demos, and hear examples that will help speed up your journey to the cloud. Bring your questions for the live Q&A!”

To get a solid background and learn more about what we are going to cover, I highly recommend downloading and reading the whitepaper we created on the subject earlier this year.


Together with some of the industry experts, I will be answering questions during the event – so please use this opportunity to embrace and adopt Azure Pack.


Thursday, July 3, 2014

Azure Site Recovery - On Demand

Recently, I wrote a blog post where I explained the setup of Azure Site Recovery so that you could use Microsoft Azure as your DR site. Here's a link to the blog post: http://kristiannese.blogspot.no/2014/06/microsoft-azure-site-recovery.html

One week after, I had a webinar on the subject and you can now watch it on demand following this link:


Hopefully you will find it useful, and there is still a lot to cover to explore all the goodies in this solution. If you have specific things you would like to see on this blog, please leave a comment.


(Oh, and I was lucky to be renewed as a MVP this month as well :-) )

Monday, June 30, 2014

Azure Pack - Working with the Tenant Public API

These days, you are most likely looking for solutions where you can leverage PowerShell to gain some level of automation, whether it’s on premises or in the cloud.
I have written before about the common service management API in the Cloud OS vision, where Microsoft Azure and Azure Pack share the exact same management API.

In this blog post, we will have a look at the tenant public API in Azure Pack, see how to make it available for your tenants, and learn how to do some basic tasks through PowerShell.

Azure Pack can either be installed with the express setup (all portals, sites and APIs on the same machine) or distributed, where you have dedicated virtual machines for each portal, site and component. Looking at the APIs alone, you can see that we have the following:

Windows Azure Pack and its service management API includes three separate components.

·         Windows Azure Pack: Admin API (Not publicly accessible). The Admin API exposes functionality to complete administrative tasks from the management portal for administrators or through the use of PowerShell cmdlets. (Blog post: http://kristiannese.blogspot.no/2014/06/working-with-admin-api-in-windows-azure.html )

·         Windows Azure Pack: Tenant API (Not publicly accessible). The Tenant API enables users, or tenants, to manage and configure cloud services that are included in the plans that they subscribe to.

·         Windows Azure Pack: Tenant Public API (publicly accessible). The Tenant Public API enables end users to manage and configure cloud services that are included in the plans that they subscribe to. It is designed to serve all the requirements of end users that subscribe to the various services that a hosting service provider provides.

Making the Tenant Public API available and accessible for your tenants

By default, the Tenant Public API is installed on port 30006 – which means it is not very firewall friendly.
We have already made the tenant portal and the authentication site available on port 443 (described by Flemming in this blog post: http://flemmingriis.com/windows-azure-pack-publishing-using-sni/ ), and now we need to configure the tenant public API as well.

1)      Create a DNS record for your tenant public API endpoint.
We will need to have a DNS registration for the API. In our case, we have registered “api.systemcenter365.com” and are ready to go.

2)      Log on to your virtual machine running the tenant public API.
In our case, this is the same virtual machine that runs the rest of the internet facing parts, like tenant site and tenant authentication site. This means that we have already registered cloud.systemcenter365.com and cloudauth.systemcenter365.com to this particular server, and now also api.systemcenter365.com.

3)      Change the bindings on the tenant public API in IIS
Navigate to IIS and locate the tenant public API site. Click Bindings, change the port to 443, register your certificate, and type the correct hostname that the tenants will use when accessing this API.



4)      Reconfigure the tenant public API with Powershell
Next, we need to update the configuration for Azure Pack using PowerShell (accessing the admin API).
The following cmdlet will change the tenant public API to use port 443 and host name “api.systemcenter365.com”.

Set-MgmtSvcFqdn –Namespace TenantPublicAPI –FQDN “api.systemcenter365.com” –Connectionstring “Data Source=sqlwap;Initial Catalog=Microsoft.MgmtSvc.Store;User Id=sa;Password=*” –Port 443

That’s it! You are done, and have now made the tenant public API publicly accessible.
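To confirm the change took effect, you can read the setting back from the configuration store with the corresponding Get cmdlet (a sketch; I am assuming Get-MgmtSvcFqdn accepts the same namespace and connection string as the Set cmdlet above):

```powershell
### Read back the FQDN and port registered for the Tenant Public API
Get-MgmtSvcFqdn -Namespace TenantPublicAPI -ConnectionString "Data Source=sqlwap;Initial Catalog=Microsoft.MgmtSvc.Store;User Id=sa;Password=*"
```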

Before we proceed, we need to ensure that we have the right tools in place for accessing the API as a tenant.
It might be quite obvious for some, but not for everyone: to be able to manage Azure Pack subscriptions through PowerShell, we basically need the PowerShell module for Microsoft Azure. That is right – a bunch of cmdlets in the Azure module for PowerShell are directly related to Azure Pack.



You can read more about the Azure module and download it by following this link: http://azure.microsoft.com/en-us/documentation/articles/install-configure-powershell/
Or simply search for it if you have Web Platform Installer in place on your machine.

Deploying a virtual machine through the Tenant Public API

Again, if you are familiar with Microsoft Azure and the powershell module, you have probably been hitting the “publishsettings” file a couple of times.

Normally when logging into Azure or Azure Pack, you reach for the portal, get redirected to some authentication site (can also be ADFS if not using the default authentication site in Azure Pack) and then sent back to the portal again which in our case is cloud.systemcenter365.com.

The same process takes place if you are trying to access the “publishsettings”. Typing https://cloud.systemcenter365.com/publishsettings in Internet Explorer will first require you to log on, and then you will have access to your publish settings. This downloads a file that contains your secure credentials and additional information about your subscription for use in your WAP environment.



Once downloaded, we can open the file to explore the content and verify the changes we made when making the tenant public API publicly accessible at the beginning of this blog post.



Next, we will head over to PowerShell to start exploring the capabilities.

1)      Import the publish settings file using Powershell

Import-WAPackPublishSettingsFile “C:\MVP.Publishsettings”



Make sure the cmdlet fits your environment and points to the file you have downloaded.

2)      Check to see the active subscriptions for the tenant

Get-WAPackSubscription | select SubscriptionName, ServiceEndpoint



3)      Deploy a new virtual machine

To create a new virtual machine, we first need to have some variables that stores information about the template we will use and the virtual network we will connect to, and then proceed to create the virtual machine.
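In script form, this step looks roughly like the following (a sketch; the template and network names are examples from this environment):

```powershell
### Store the VM template and virtual network in variables
$template = Get-WAPackVMTemplate -Name "GEN1 Template"
$vnet = Get-WAPackVNet -Name "LabNetwork"

### Credentials for the local administrator account in the new VM
$creds = Get-Credential

### Create the virtual machine through the tenant public API
New-WAPackVM -Name "wapvm01" -Template $template -VMCredential $creds -VNet $vnet -Windows
```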




4)      Going back to the tenant portal, we can see that we are currently provisioning a new virtual machine that we initiated through the tenant public API



Sunday, June 22, 2014

Microsoft Azure Site Recovery

In January, we had a new and interesting service available in Microsoft Azure, called “Hyper-V Recovery Manager”. I blogged about it and explained how to configure this on-premises using a single VMM management server. For details you can read this blog post: http://kristiannese.blogspot.no/2013/12/how-to-setup-hyper-v-recovery-manager.html

Hyper-V Recovery Manager provided organizations using Hyper-V and System Center with automated protection and orchestrated recovery of virtualized workloads between private clouds, leveraging the asynchronous replication engine in Hyper-V: Hyper-V Replica.

In other words, no data was sent to Azure except the metadata from the VMM clouds.
This has now changed: the service has been renamed Microsoft Azure Site Recovery and finally lets you replicate between private clouds and the public cloud (Microsoft Azure).

This means that we are still able to utilize the automatic protection of workloads that we are familiar with through the service, but now we can use Azure as the target in addition to private clouds.
This also opens the door for migration scenarios, where organizations considering moving VMs to the cloud can easily do so with almost no downtime using Azure Site Recovery.

Topology

In our environment, we will use a dedicated Hyper-V cluster with Hyper-V Replica. This means we have added the Hyper-V Replica Broker role to the cluster. This cluster is located in its own host group in VMM and the only host group we have added to a cloud called “E2A”. Microsoft Azure Site Recovery requires System Center Virtual Machine Manager, which will be responsible for the communication and aggregation of the desired instructions made by the administrator in the Azure portal.


Pre-reqs

-          You must have an Azure account with Recovery Services added to your subscription
-          Certificate (.cer) that you upload to the management portal and register to the vault. Each vault has a single .cer certificate associated with it, and it’s used when registering VMM servers in the vault
-          Certificate (.pfx) that you import on each VMM server. When you install the Azure Site Recovery Provider on the VMM server, you must use this .pfx certificate
-          Azure Storage account, where you will store the replicas replicated to Azure. The storage account needs geo-replication enabled and should be in the same region as the Azure Site Recovery service and associated with the same subscription
-          VMM cloud(s). A cloud must be created in VMM that contains Hyper-V hosts in a host group enabled with Hyper-V Replica
-          Azure Site Recovery Provider must be installed on the VMM management server(s). In our case, we had already implemented “Hyper-V Recovery Manager”, so we were able to do an in-place upgrade of the ASR Provider
-          Azure Recovery Services agent must be installed on every Hyper-V host that will replicate to Microsoft Azure. Make sure you install this agent on all hosts located in the host group that you are using in your VMM cloud

Once we had enabled all of this in our environment, we were ready to proceed to the configuration of our site recovery setup.

Configuration


Log in to the Azure management portal and navigate to Recovery Services to get the details about your vault, and see the instructions on how to get started.

We will jump to “Configure cloud for protection” as the fabric in VMM is already configured and ready to go.
The provider installed on the VMM management server is exposing the details of our VMM clouds to Azure, so we can easily pick “E2A” – which is the dedicated cloud for this setup. This is where we will configure our site recovery to target Microsoft Azure.



Click on the cloud and configure protection settings.



On target, select Microsoft Azure. Also note that you are able to setup protection and recovery using another VMM Cloud or VMM management server.



For the configuration part, we are able to specify some options when Azure is the target.

Target: Azure. We are now replicating from our private cloud to Microsoft Azure’s public cloud.
Storage Account: If none is present, then you need to create a storage account before you are able to proceed. If you have several storage accounts, then choose the accounts that are in the same region as your recovery vault.
Encrypt stored data: This is set to “on” by default, and not possible to change in the preview.
Copy frequency: Since we are using Hyper-V 2012 R2 in our fabric – which introduced additional copy frequencies – we can select 30 seconds, 5 minutes or 15 minutes. We will use the default of 5 minutes in this setup.
Retain recovery points: Hyper-V Replica is able to create additional recovery points (crash-consistent snapshots) so that you have a more flexible recovery option for your virtual workloads. We don’t need any additional recovery points for our workloads, so we will leave this at 0.
Frequency of application consistent snapshots: If you want app-consistent snapshots (ideal for SQL Server, which will create VSS snapshots), you can enable and specify that here.
Replication settings: This is set to “immediately”, which means that every new VM deployed to our “E2A” cloud in VMM with protection enabled will automatically start the initial replication from on-premises to Microsoft Azure. For large deployments, we would normally recommend scheduling this.

Once you are happy with the configuration, you can click ‘save’.



Now, Azure Site Recovery will configure this for your VMM cloud. This means that – through the provider – the hosts/clusters are configured with these settings automatically from Azure:
-          Firewall rules used by Azure Site Recovery are configured so that ports for replication traffic are opened
-          Certificates required for replication are installed
-          Hyper-V Replica settings are configured
Cool!

You will have a job view in Azure that shows every step during the actions you perform. We can see that protection has been successfully enabled for our VMM Cloud.




If we look at the cloud in VMM, we also see that protection is enabled and Microsoft Azure is the target.



Configuring resources

In Azure, you have had the option to create virtual networks for many years now. We can of course use them in this context, to map them to the VM networks present in VMM.
To ensure business continuity, it is important that the VMs that fail over to Azure can be reached over the network – and that RDP is enabled within the guest. We are mapping our management VM network to a corresponding network in Azure.



VM Deployment

Important things to note:
In preview, there are some requirements for using Site Recovery with your virtual machines in the private cloud.

Only support for Gen1 virtual machines!
This means that the virtual machines must have their OS partition attached to an IDE controller. The disk can be VHD or VHDX, and you can even attach data disks that you want to replicate. Please note that Microsoft Azure does not support the VHDX format (introduced in Hyper-V 2012), but will convert the VHDX to VHD during initial replication to Azure. In other words, virtual machines using VHDX on-premises will run on VHDs when you fail over to Azure. If you fail back to on-premises, VHDX will be used as expected.

Next, we will deploy a new VM in VMM. When we enable protection on the hardware profile and want to deploy to a Cloud, intelligent placement will kick in and find the appropriate cloud that contains Hyper-V hosts/clusters that meet the requirements for replica.



After the deployment, the virtual machine should immediately start with an initial replication to Microsoft Azure, as we configured this on the protection settings for our cloud in Azure. We can see the details of the job in the portal and monitor the process. Once it is done, we can see – at a lower level that we are actually replicating to Microsoft Azure directly on the VM level.




After a while (depending on available bandwidth), we have finally replicated to Azure and the VM is protected.





Enabling protection on already existing VMs in the VMM cloud

Also note that you can enable this directly from Azure. If a virtual machine is running in a VMM cloud enabled for protection, but the VM itself is not enabled in VMM, Azure can pick this up so you can configure it directly from the portal.



If you prefer to achieve this by using VMM, it is easy: open the properties of the VM and enable it for protection.




One last option is to use the VMM powershell module to enable this on many VMs at once.

Set-SCVirtualMachine –VM “VMName” –DRProtectionRequired $true –RecoveryPointObjective 300
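To apply this to every VM in a cloud at once, you can wrap the cmdlet in a loop (a sketch; the cloud name is an example, and 300 seconds matches the 5-minute copy frequency used elsewhere in this environment):

```powershell
### Enable Azure Site Recovery protection on all VMs in the "E2A" cloud
$vms = Get-SCVirtualMachine | Where-Object { $_.Cloud -like "E2A" }

foreach ($vm in $vms)
{
    Set-SCVirtualMachine -VM $vm -DRProtectionRequired $true -RecoveryPointObjective 300
}
```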

Test Failover

One of the best things about Hyper-V Replica is that complex workflows, such as test failovers, planned failovers and unplanned failovers, are integrated into the solution. This is also exposed and made available in the Azure portal, so that you can easily perform a test failover on your workloads. Once a VM is protected – meaning that the VM has successfully completed the initial replication to Azure – we can perform a test failover. This will create a copy based on the recovery point you select and boot that virtual machine in Microsoft Azure.







Once you are satisfied with the test, you can complete the test failover from the portal.
This will power off the test virtual machine and delete it from Azure. Please note that this process will not interfere with the ongoing replication from private cloud to Azure.



Planned failover

You can use planned failover in Azure Site Recovery for more than just failover. Consider a migration scenario where you actually want to move your existing on-premises workload to Azure; planned failover will be the preferred option here. This ensures minimal downtime during the process and starts the virtual machine in Azure afterwards.
In our case, we wanted to simulate planned maintenance in our private cloud, and therefore perform a planned failover to Azure.



Click on the virtual machine you want to failover, and click planned failover in the portal.
Note that if the virtual machine has not been through a test failover, we recommend that you perform one before an actual failover.
Since this is a test, we are ready to proceed with the planned failover.



When the job has started, we are drilling down to the lowest level again, Hyper-V Replica, to see what’s going on. We can see that the VM is preparing for planned failover where Azure is the target.



In the management portal, we can see the details for the planned failover job.



Once done, we have a running virtual machine in Microsoft Azure, that appears in the Virtual Machine list.



If we go back to the protected clouds in Azure, we see that our virtual machine “Azure01” has “Microsoft Azure” as its active location.



If we click on the VMs and drill into the details, we can see that we are able to change the name and the size of the virtual machine in Azure.



We have now successfully performed a planned failover from our private cloud to Microsoft Azure!

Failback from Microsoft Azure

When we were done with our planned maintenance in our fabric, it was time to failback the running virtual machine in Azure to our VMM Cloud.
Click on the virtual machine that is running in Azure that is protected, and click planned failover.
We have two options for the data synchronization. We can either use “Synchronize data before failover”, which performs something similar to “re-initializing replication” to our private cloud. This means synchronization is performed without shutting down the virtual machine, leading to minimal downtime during the process.
The other option, “Synchronize data during failover only”, minimizes the synchronization data but incurs more downtime, as the shutdown begins immediately; synchronization starts after shutdown to complete the failover.
We are aiming for minimal downtime, so option 1 is preferred.



When the job is started, you can monitor the process in Azure portal.



Once the sync is complete, we must complete the failover from the portal so that this will go ahead and start the VM in our private cloud.



Checking Hyper-V Replica again, we can see that the state is set to “failback in progress” and that we currently have no primary server.



The job has now completed all the required steps in Azure.



Moving back to Hyper-V Replica, we can see that the VM is again replicating to Microsoft Azure, and that the primary server is one of our Hyper-V nodes.



In VMM, our virtual machine “Azure01” is running again in the “E2A” cloud.



In the Azure management portal in the virtual machines list, our VM is still present but stopped.

Thanks for joining us on this guided tour on how to work with Azure Site Recovery.
Next time we will explore the scenarios we can achieve by using recovery plans in Azure Site Recovery, to streamline failover of multi-tier applications, LOB applications and much more.