Monday, February 1, 2016

Free book! Cloud Consistency with Azure Resource Manager

Finally!

I was able to spend some wife points this weekend to finalize the new book “Cloud Consistency with Azure Resource Manager”.


This book aims to get you started with Azure Resource Manager and covers many examples of how to author templates and use functions, as well as exploring many of the other aspects of Azure Resource Manager.



Here’s a snapshot of the content:

Acknowledgements
About the authors
Kristian Nese | @KristianNese
Flemming Riis | @FlemmingRiis
Background
Introduction
Microsoft Azure
Microsoft Azure Stack
Cloud Computing and Modern Application Modeling
Step 1 – Service Templates
Step 2 – VM Roles
Step 3 – Azure Resource Manager
Summary
IaaS v2 – Azure Resource Manager API replaces Service Management API
Consistent Management Layer
Azure PowerShell
Azure CLI
Azure Resource Manager REST API
Azure Portal
Azure Resource Manager Templates
Deploying with Azure Resource Manager
Where can we deploy our Azure Resource Manager Templates
Explaining the template format
Authoring the first Azure Resource Manager Template
Adding parameter file
Visual Studio
PowerShell
Azure Portal
Idempotency with Azure Resource Manager
Resource Explorer
Imperative Deployment with Azure Resource Manager
Advanced Azure Resource Manager Templates
Functions
Extensions
Write once, deploy anywhere

Instead of jumping right into the authoring experience and learning how an ARM template is constructed, we wanted to give you enough context to know what’s going on in the industry, what is changing, and how you should prepare yourself to take advantage of this new way of managing your cloud resources.

If you have been playing around with Azure already, you are probably very familiar with some of the content. If you are new, and especially interested in Microsoft Azure Stack, you should be glad to know that everything you learn in this book can be applied there as well.

It has been a great experience writing this book, covering some of the most interesting stuff we have available right now, and I have to emphasize that this book will also be updated as we move forward to keep up with all the great things that are happening in the Microsoft Cloud.


I really hope you enjoy it.

Tuesday, January 19, 2016

Azure Site Recovery and Azure Resource Manager

Recently, I was working with the new Azure Site Recovery Resource Provider in Azure Resource Manager.
Since we now have support for this through PowerShell, I wanted to create a solution that would automatically add VMs to the protection group.

Getting VMs protected is quite straightforward, but you will want to plan a bit more carefully when designing for real-world scenarios.

Planning and Considerations

·         Resource Groups
Everything you create in ARM will belong to a Resource Group. This should be common knowledge by now, but it is worth a friendly reminder to avoid any potential confusion.

·         Storage Accounts
To use ASR with Azure as the recovery site, you must also create a storage account that can receive and hold the VHDs for the protected virtual machines. When you power up a virtual machine – either as part of a DR drill (test failover) or more permanently using planned/unplanned failover – remember that this is where the disks will be located. As a friendly reminder, the storage account must also belong to a Resource Group, and it is important that it is created in the same region as the ASR resource itself.
If you choose to use a storage account created in the classic model (Azure Service Management API), the VMs will be visible in the classic portal afterwards. If you use a storage account in the ARM model, you are good to go in the new portal.
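As a quick, hedged sketch (the names here are made up), creating such a storage account with the AzureRM module could look like this:

# A minimal sketch, assuming hypothetical names: the storage account that will
# hold the replicated VHDs is created in the same region as the ASR resource
New-AzureRmStorageAccount -ResourceGroupName "ASRDemoRG" -Name "asrdemostore01" -Type "Standard_GRS" -Location "West Europe"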

·         Network
You want to be able to connect to your virtual machines post failover. This requires network connectivity – among other things. Where you place the virtual network isn’t important as long as it is within the same region.

·         Virtual Machines
During test/planned/unplanned failover, the virtual machines will have their storage located on the storage account you have created for your ASR configuration. The virtual networks might be in a different resource group as well. This is important to know, as every VM (regardless of test/planned/unplanned failover) will be instantiated in its own, new Resource Group, containing only the virtual machine object and the virtual network interface. All other resources live in different Resource Group(s).

What you need to be aware of

In addition to the design considerations for Resource Groups, storage and networking, you must remember a couple of things. To be able to access virtual machines after a failover, you need to ensure that RDP is enabled within the guest (Windows). Next, you must either have a jump-host on the target virtual network where the recovered VM is running, or simply create a Network Security Group with the required rules, associated with either the subnet or the vNICs themselves.
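As a hedged example – the names, priority and prefixes below are made up – such an NSG could be created with the AzureRM module like this:

# A minimal sketch, assuming hypothetical names: an NSG with a single inbound
# rule allowing RDP (TCP/3389) to the recovered VMs
$rdpRule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-RDP" -Description "Allow inbound RDP" -Protocol Tcp -Direction Inbound -Priority 100 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 3389 -Access Allow

$nsg = New-AzureRmNetworkSecurityGroup -Name "ASR-NSG" -ResourceGroupName "ASRDemoRG" -Location "West Europe" -SecurityRules $rdpRule

You can then associate $nsg with the target subnet or with the individual network interfaces.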

I have created a PowerShell script that is currently being polished before publishing, where I will share my findings on this topic to enable an efficient DR process for your virtual machines.
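In the meantime, here is a very rough sketch of the shape such a script takes. Treat the cmdlet parameters as assumptions – they come from the AzureRM.SiteRecovery module of that era and should be verified against the module’s help:

# A rough sketch only - verify cmdlet names and parameters against the
# AzureRM.SiteRecovery module before use; all names here are hypothetical
$vault = Get-AzureRmSiteRecoveryVault -Name "ASRDemoVault" -ResourceGroupName "ASRDemoRG"
Set-AzureRmSiteRecoveryVaultSettings -ARSVault $vault

# Pick the protection container and the VM to protect
$container = Get-AzureRmSiteRecoveryProtectionContainer | Select-Object -First 1
$entity = Get-AzureRmSiteRecoveryProtectionEntity -ProtectionContainer $container -FriendlyName "VM01"

# Enable protection for the VM (profile/policy parameters vary between module versions)
Set-AzureRmSiteRecoveryProtectionEntity -ProtectionEntity $entity -Protection Enable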



Monday, January 11, 2016

2016 - The year of Microservices and Containers

This is the first blog post I am writing this year.
I was planning to publish this before Christmas, but I figured it would be better to wait and reflect even more on the trends currently taking place in this industry.
So what better way to start the new year than with something I really think will be one of the big bets for the coming year(s)?

I drink a lot of coffee. In fact, I suspect it might kill me someday. On a positive note, at least I was the one controlling it. Jokes aside, I like to drink coffee when I’m thinking out loud about technologies and reflecting on the steps we’ve made so far.

Going back to 2009-10, when I was entering the world of virtualization with Windows Server 2008 R2 and Hyper-V, I couldn’t possibly imagine how things would change in the future.
To this very day, I realize that the things we were doing back then were just the foundation for what we are seeing today.

The same arguments are being used throughout the different layers of the stack.
We need to optimize our resources, increase density and flexibility, and provide fault-tolerant, resilient and highly available solutions to bring our business forward.

That was the approach back then – and that’s also the approach right now.

We have constantly been focusing on the infrastructure layer, trying to solve whatever issues might occur. We have believed that if we put our effort into the infrastructure layer, then the applications we put on top of it will be smiling from ear to ear.

But things change.
The infrastructure is changing, and the applications are changing.

Azure made its debut in 2007-08, as I remember. Back then it was all about Platform as a Service offerings.
The offerings were a bit limited, giving us cloud services (web role and worker role), caching and messaging systems such as Service Bus, together with SQL and other storage options such as blob, table and queue.

Many organizations were really struggling back then to get a good grasp of this approach. It was complex. It was a new way of developing and delivering services, and in almost all cases the application had to be rewritten to be fully functional using the PaaS components in Azure.

People were just getting used to virtual machines and had started to use them frequently, also as part of test and development of new applications. Many customers went deep into virtualization in production as well, and the result was a great demand for the opportunity to host virtual machines in Azure too.
This would simplify any migration of “legacy” applications to the cloud, and more or less solve the well-known challenges we were aware of back then.

During the summer of 2011 (if my memory serves me well), Microsoft announced their support for Infrastructure as a Service in Azure. Finally they were able to hit the high note!
Now what?
An increased consumption of Azure was the natural result, and the cloud came a bit closer to most of the customers out there. Finally there was a service model that people could really understand. They were used to virtual machines. The only difference now was the runtime environment, which was now hosted in Azure datacenters instead of their own. At the same time, the PaaS offerings in Azure had evolved and grown to become even more sophisticated.

It is common knowledge now, and it was common knowledge back then, that PaaS was the optimal service model for applications living in the cloud, compared to IaaS.

At the end of the day, each and every developer and business around the globe would prefer to host and provide their applications to customers as SaaS instead of anything else, such as traditional client/server applications.

So where are we now?

You might wonder where the heck I am going with this.
And trust me, I also wondered at some point. I had to get another cup of coffee before I was able to do a further breakdown.

Looking at Microsoft Azure and the services we have there, it is clear to me that the ideal goal for the IaaS platform is to get as near as possible to the PaaS components with regard to scalability, flexibility, automation, resiliency, self-healing and much more.
Those who have been deep into Azure with Azure Resource Manager know that there are some really huge opportunities now to leverage the platform to deliver IaaS that you ideally don’t have to touch.

With features such as VM Scale Sets (preview), Azure Container Service (also preview), and a growing list of extensions to use together with your compute resources, you can potentially instantiate a state-of-the-art infrastructure hosted in Azure without having to touch the infrastructure (of course you can’t touch the Azure infrastructure itself – I am talking about the virtual infrastructure, the one you are basically responsible for).

The IaaS building blocks in Azure are separated in a way that lets you look at them as individual scale-units. Compute, Storage and Networking are all combined to bring you virtual machines. With this loosely coupled approach, we can also see that these building blocks empower many of the PaaS components in Azure that live on top of the IaaS.

The following graphic shows how the architecture is layered.
Once Microsoft Azure Stack becomes available on-prem, we will have one consistent platform that brings the same capabilities to your own datacenter as you can use in Azure already.

Starting at the bottom, IaaS is on the left side while PaaS is on the right-hand side.
Climbing up, you can see that both Azure Stack and the Azure public cloud – which will be consistent – take the same approach. VMs and VM Scale Sets cover both IaaS and PaaS, but VM Scale Sets is placed more to the right-hand side than VMs. This is because VM Scale Sets is considered the powering backbone for the other PaaS services on top of it.

VM Extensions also lean more to the right, as they give us the opportunity to do more than traditional IaaS. We can extend our virtual machines to perform advanced in-guest operations using extensions, so anything from provisioning of complex applications to configuration management and more can be handled automatically by the Azure platform.
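As a small, hedged example (the names and script URI are placeholders), pushing an in-guest operation through the Custom Script extension with the AzureRM module looks roughly like this:

# A minimal sketch, assuming hypothetical names and a publicly reachable script
Set-AzureRmVMCustomScriptExtension -ResourceGroupName "DemoRG" -VMName "DemoVM" -Name "InGuestConfig" -FileUri "https://example.com/configure.ps1" -Run "configure.ps1" -Location "West Europe"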

On the left-hand side, on top of VM Extensions, we find cluster orchestration tools such as Scalr, RightScale, Mesos and Swarm. These again deal with a lot of infrastructure, but also provide orchestration on top of it.
Batch is a service powered by Azure compute – a compute job scheduling service that will start a pool of virtual machines for you, installing applications and staging data, running jobs with as many tasks as you have.

Going further to the right, we are seeing two very interesting things – which are also the main driver for this entire blog post. Containers and Service Fabric lean more to the PaaS side, and it is not by coincidence that Service Fabric sits to the right-hand side of containers.

Let us try to do a breakdown of containers and Service Fabric.

Comparing Containers and Service Fabric

Right now in Azure, we have a new preview service that I encourage everyone who’s interested in container technology to look into. The ACS Resource Provider basically provides you with a very efficient and low-cost solution to instantiate a complete container environment using a single Azure Resource Manager API call to the underlying resource provider. After the deployment completes, you will be surprised to find 23 resources within a single resource group, containing all the components you need to have a complete container environment up and running.
One important thing to note at this point is that ACS is Linux first and containers first, in comparison to Service Fabric – which is Windows first and also microservices first rather than containers first.
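To make the “single API call” concrete, here is a hedged sketch with the AzureRM module – the template URI is a placeholder for whichever ACS template you use:

# A minimal sketch: one template deployment instantiates the whole environment.
# The template URI is a placeholder - point it at an ACS ARM template
$templateUri = "<URI to an Azure Container Service ARM template>"
New-AzureRmResourceGroup -Name "ACSDemoRG" -Location "West Europe"
New-AzureRmResourceGroupDeployment -ResourceGroupName "ACSDemoRG" -TemplateUri $templateUri -Verbose

# Count the resources the deployment created in the resource group
(Get-AzureRmResource | Where-Object { $_.ResourceGroupName -eq "ACSDemoRG" }).Count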

At this time it is OK to be confused. And perhaps this is a good time for me to explain the difficulty of putting this on paper.

I am now consuming the third cup of coffee.

Azure explains it all

Let us take some steps back to get some more context into the discussion we are entering.
If you want to keep up with everything that comes to Azure nowadays, that is more or less a full-time job. The rapid pace of innovation, releases and new features is next to crazy.
Have you ever wondered how the engineering teams are able to ship solutions this fast – and with this level of quality?

Many of the services we are using today in Azure are actually running on Service Fabric as microservices. This is a new way of doing development and is also the true implementation of DevOps, both as a culture and from a tooling point of view.
Meeting customer expectations isn’t easy. But it is possible when you have a platform that supports and enables it.
As I stated earlier in this blog post, the end goal for any developer would be to deliver their solutions using the SaaS service model.
That is the desired model, which implies continuous delivery, automation through DevOps, and the adoption of automatable, elastic and scalable microservices.

Wait a moment. What exactly is Service Fabric?

Service Fabric provides the complete runtime management for microservices and deals with the things we have been fighting against for decades. Out of the box, we get hyper scale, partitioning, rolling upgrades, rollbacks, health monitoring, load balancing, failover and replication. All of these capabilities are built in, so we can focus on building the applications we want to be scalable, reliable, consistent and available microservices.

Service Fabric provides a model where you wrap the code for a collection of related microservices and their related configuration manifests into an application package. The package is then deployed to a Service Fabric cluster (this is actually a cluster that can run on anything from one to many thousands of Windows virtual machines – yes, hyper scale). We have two defined programming models in Service Fabric: ‘Reliable Actors’ and ‘Reliable Services’. Both of these models make it possible to write both stateless and stateful applications. This is breaking news.
You can go ahead and develop stateless applications in more or less the same way you have been doing for years, trusting externalized state in some queuing system or other data store, but then still handling the complexity of a distributed application at scale. Personally I think the stateful approach in Service Fabric is what makes this so exciting. Being able to write stateful applications that are constantly available, with a primary/replica relationship between their members, is very tempting. We trust Service Fabric itself to deal with all the complexity we have been trying to handle in the infrastructure layer for years, while the stateful microservices keep the logic and data close so we don’t need queues and caches.
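As a hedged illustration of the packaging flow described above (the cluster endpoint, paths and names are made up), deploying an application package with the Service Fabric PowerShell module looks roughly like this:

# A rough sketch, assuming the Service Fabric SDK's PowerShell module and
# hypothetical package/application names
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westeurope.cloudapp.azure.com:19000"
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath "C:\MyAppPkg" -ApplicationPackagePathInImageStore "MyAppPkg"
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "MyAppPkg"
New-ServiceFabricApplication -ApplicationName "fabric:/MyApp" -ApplicationTypeName "MyAppType" -ApplicationTypeVersion "1.0.0"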

Ok, but what about the container stuff you mentioned?

So Service Fabric provides everything out of the box. You can think of it as a complete way to handle everything from beginning to end, including a defined programming model that even brings an easy way of handling stateful applications.
ACS, on the other hand, provides a core infrastructure with significant flexibility, but this comes at a cost when trying to implement stateful services. However, the applications themselves are more portable, since we can run them wherever Docker containers can run, while microservices on Service Fabric can only run on Service Fabric.

The focus for ACS right now is around open source technologies that can be taken in whole or in part. The orchestration layer and the application layer bring a great level of portability as a result, since you can leverage open source components and deploy them wherever you want.

At the end of the day, Service Fabric has a more restrictive nature but also gives you a more rapid development experience, while ACS provides the most flexibility.

So how exactly do containers and microservices with Service Fabric compare at this point?

What they do have in common is that they are another layer of abstraction on top of the things we are already dealing with. Forget what you know about virtual machines for a moment. Containers and microservices are exactly what engineers and developers are demanding to unlock new business scenarios, especially in a time where IoT, Big Data, insight and analytics are becoming more and more important for businesses worldwide. The cloud itself is the foundation that enables all of this, but the great flexibility that both containers and Service Fabric provide is really speeding up the innovation we’re seeing.

Organizations that have truly been able to adopt the DevOps mindset are harnessing that investment and are capable of shipping quality code at a much more frequent cadence than ever before.

Coffee number 4 and closing notes

First I want to thank you for spending these minutes reading my thoughts around Azure, containers, microservices, Service Fabric and where we’re heading.

2016 is a very exciting year and things are changing very fast in this industry. We are seeing customers who are making big bets in certain areas, while others are taking a potential risk by not making any bets at all. I know, at least from my point of view, what the important focus is moving forward. And I will do my best to guide people along the way.

While writing these closing notes, I can only use the opportunity to point to the tenderloin in this blog post:

My background is all about ensuring that the Infrastructure is providing whatever the applications need.
That skillset is far from obsolete; however, I know that the true value belongs to the upper layers.

We are hopefully now realizing that even the infrastructure we have been ever so careful about is turning into a commodity, handled more through an ‘infrastructure as code’ approach than ever before. We trust that it works and empowers the PaaS components – which in turn bring the world forward while powering SaaS applications.

Container technologies, and microservices as part of Service Fabric, take that for granted – and from now on, I am doing the same.




Monday, December 21, 2015

Azure Windows Server Container with IIS

A couple of months ago, Microsoft announced their plans for Azure and containers, where they would provide you with a first-class citizen resource provider in Azure so that you could build, run and manage scalable clusters of host machines onto which containerized applications would run.

What you probably also have noticed is that Microsoft is taking an open approach to container management. In fact, the container service is currently based on and pre-configured with Docker and Apache Mesos, so any tools you would prefer for management “should just work”.
This is a new game for me to play, so I am learning a lot. :)

In the meantime, I am also working a lot with Windows Server Containers in Windows Server Technical Preview 4 – which is an image that is available in the Azure gallery.
However, I wanted to extend the experience a bit and decided to create my own ARM template that will ‘mimic’ some of the functionality in the Azure Container Service resource provider, to actually instantiate a new container running an IIS web server and make it available for requests.

The template will deploy:

·         A vNet
·         Network interface
·         Public IP address with DNS (the DNS will be based on the hostname.region.cloudapp.azure.com and provided as output once the deployment has completed)
·         Storage account
·         Network Security Group to allow RDP to the host – as well as http
·         Virtual machine (based on the TP4 image)
o   Custom Extension that will (a rough sketch of these steps follows this list):
§  Spin up a new Windows Server Container based on the existing image (server core)
§  Install Web-Server within the newly created container
§  Stop the container – and create a new container image
§  Deploy a new container based on the newly created container image
§  Create a static NAT rule and a firewall rule to allow traffic on port 80 to the container from the host
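
The sketch below shows roughly what the extension script does on the host. The cmdlet names come from the Windows Server TP4 Containers module, but treat the exact parameters as assumptions and verify them against the TP4 documentation:

# A rough sketch of the host-side steps - parameter names are assumptions
# based on the TP4 Containers module; all names here are hypothetical
$container = New-Container -Name "iisbase" -ContainerImageName "WindowsServerCore" -SwitchName "Virtual Switch"
Start-Container -Name "iisbase"

# Install IIS inside the running container (PowerShell Direct; assumption)
Invoke-Command -ContainerName "iisbase" -RunAsAdministrator { Install-WindowsFeature -Name Web-Server }

# Capture a new image from the stopped container, then deploy from that image
Stop-Container -Name "iisbase"
New-ContainerImage -ContainerName "iisbase" -Name "iisimage" -Publisher "Demo" -Version "1.0"
New-Container -Name "web01" -ContainerImageName "iisimage" -SwitchName "Virtual Switch" | Start-Container

# NAT port 80 from the host to the container, and open the host firewall
Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP -ExternalIPAddress "0.0.0.0" -ExternalPort 80 -InternalIPAddress "172.16.0.2" -InternalPort 80
New-NetFirewallRule -DisplayName "HTTP 80" -Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow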


This is a working experiment and I am planning to extend the template with more applicable tasks as we move forward.

The template can be explored and deployed from this GitHub repo: 

https://github.com/krnese/AzureDeploy/tree/master/AzureContainerWeb 


Thursday, December 3, 2015

Getting started with Containers in Azure

Recently, I gave a presentation/workshop in Norway at a Docker conference (http://www.code-conf.com/day-of-docker-osl15/program/#knese).


This was quite a new audience for me and it was great to be the person who showed them what Microsoft is doing in the era of container technologies, using both Microsoft Azure and Windows Server 2016 Technical Preview 4.

The big picture 

One of the key things to point out is that containers are “just” a part of the big picture that we are seeing these days.
The following graphic shows where we are coming from – and also where we’re heading.


Starting at the bottom, the early generations in this industry used to run their business on a lot of physical machines. We all know that having workloads and applications on physical machines is not where we want to be today, because it is not flexible or scalable, and for sure won’t do any good for our demand for utilization.

Above physical machines we find machine virtualization. This should all be quite common now, and we have been very good at virtualizing servers for quite some time. In fact, we are now not only virtualizing servers but also the other infrastructure components, such as networks and storage.
Machine virtualization in this context shows that we are abstracting the compute resources from the underlying physical machine – which introduces the first stepping stones towards flexibility, scalability and increased utilization.

Further up, we have infrastructure hosting, which can be seen as the early days of cloud, although the exact service model here is not defined. This means that “someone” would make the investment and ensure the required amount of capacity for you as a customer, and you could have your workloads and applications hosted in the hosting datacenter. This was machine virtualization at scale.

The next step is the more familiar service models we can consume from a cloud, such as Infrastructure as a Service, Platform as a Service and Software as a Service. Although these service models are different, they share the same set of attributes, such as elasticity, self-servicing, broad network access, chargeback/usage and resource pooling. Elasticity and resource pooling especially are a way to describe the level of flexibility, scalability and utilization we can achieve. I expect you as the reader to be quite comfortable with cloud computing in general, so I won’t dive deeper into the definition at this point.

Next, we are now facing an era where containers are lit up – regardless of whether you are a developer or an IT pro. Containers build on many of the same principles as machine virtualization, where abstraction is key. A container can easily be lifted and shifted to other deployment environments without the same cost, baggage and complexity as a virtual machine, by comparison.

In the Microsoft world we have two different runtimes for containers.
Windows Server Containers share the kernel with the container host, which is ideal for scalability, performance and resource utilization.
Hyper-V Containers give you the exact same experience, except that the kernel in this case isn’t shared among the containers. This is something you need to specify at deployment time. Hyper-V Containers give you the level of isolation you require and are ideal when the containers don’t trust each other or the container host.
Microsoft has also announced that they will come with their own Azure Container Service in the future, as a first-class citizen resource provider managed by ARM.

Last but not least, we have something called “microservices” at the top of this graphic. In the context of Microsoft we are talking about Service Fabric – currently a preview feature in Microsoft Azure.
Service Fabric is a distributed systems platform where you can build scalable, reliable and easily managed applications for the cloud. This is where we are really seeing that redundancy, high availability, resiliency and flexibility aren’t built into the infrastructure – but handled at the application level instead.
Service Fabric represents the next-generation middleware platform for building and managing these enterprise-class, tier-1, cloud-scale services.

From a Microsoft Azure standpoint, it is also important to know that “VM Scale Sets” (http://kristiannese.blogspot.no/2015/11/getting-started-with-vm-scale-sets-with.html) are the IaaS that enables these PaaS services (Azure Container Service + Service Fabric).
Also, as part of Windows Server 2016 Technical Preview 4, we will be able to leverage Nano Server for containers too, so you can get the optimal experience for your born-in-the-cloud applications.

So, that was me trying to put things into context – and why I spent some time that day on a workshop about containers using Azure.

Getting started with Containers in Microsoft Azure

The material I used for this workshop can be found in this public GitHub repo: https://github.com/krnese/AzureDeploy/tree/master/AzureContainer


I created an ARM template that will:

·         Create a new storage account
·         Create a new Network Security Group
o   Create a new vNet and associate the new subnet with the NSG
·         Create a new network interface
o   Associate the vNic with a public IP address
o   Associate the vNic with the vNet
·         Create a new virtual machine
o   Associate the VM with the storage account
o   Associate the VM with the network interface
o   Use Custom Script Extension that will create x amount of Windows Server Containers based on the parameter (count) input
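
To run it, here is a hedged deployment sketch – the template URI and the parameter name are assumptions, so check the repo for the exact names:

# A minimal sketch: deploy the workshop template and pass the container count.
# The URI is a placeholder - use the raw URL of the template in the repo above
$templateUri = "<raw URL of the template in the AzureContainer repo>"
New-AzureRmResourceGroup -Name "ContainerDemoRG" -Location "West Europe"
New-AzureRmResourceGroupDeployment -ResourceGroupName "ContainerDemoRG" -TemplateUri $templateUri -TemplateParameterObject @{ count = 3 } -Verbose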
If you deploy this from GitHub and follow the ps1 examples you should be able to simulate the life-cycle of containers in Windows Server 2016 TP4.



Friday, November 20, 2015

Getting started with VM Scale Sets with Azure Resource Manager

Recently, Microsoft announced the public preview of ‘VM Scale Sets’ which is a new Azure Compute resource (Microsoft.Compute/virtualMachineScaleSets) that lets customers deploy and manage a set of virtual machines that are identical.

Sounds familiar?

Yes, but at the same time, this is new. Let me explain why.

Azure Compute, Network and Storage serve as the backend for many familiar Azure services that we are already using today, such as Web Apps, Batch, Azure Automation and much more.

You have probably also heard about the newly announced public preview of Service Fabric – the ideal platform for microservices and containerized workloads to ensure business continuity and reliable applications, completely changing the way customers can develop applications at hyper-scale.
But did you know that Service Fabric is also a service that runs on top of virtual machines, requires network connectivity using a “normal” network in Azure, and can use some of the storage features in the backend?

That is also where VM Scale Sets come into play, serving as the perfect foundation for the kinds of services and applications you want to build.
VM Scale Sets are designed to support autoscale and don’t require any pre-provisioning of the virtual machines.

Network and storage features are of course incorporated as you would expect, so that you can easily leverage VM Scale Sets as a first class citizen in Azure, following the Resource Group structure you prefer.

I encourage you to take a closer look at VM Scale Sets, which is a new resource type within the Microsoft.Compute namespace for the CRP.

I have already created an Azure Resource Manager template that lets you deploy VM Scale Sets – using the DSC VM extension to configure a web server (IIS), as well as deploying a virtual machine you can use for management.
There are also additional details such as load balancing, network security groups and more that you can explore.
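
Since template deployments are idempotent, a hedged sketch of scaling out is simply redeploying with a higher instance count – the URI and parameter name below are assumptions, so check the template itself:

# A minimal sketch: redeploy the same template with a higher capacity to scale out
$templateUri = "<raw URL of the VM Scale Set template>"
New-AzureRmResourceGroupDeployment -ResourceGroupName "VMSSDemoRG" -TemplateUri $templateUri -TemplateParameterObject @{ instanceCount = 5 } -Verbose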
Have fun – and happy scaling!



Thursday, October 29, 2015

Azure Resource Manager – Deployment options

Hi all,

This is just a quick blog post to demonstrate how to provision an IaaS environment in Azure with the VM DSC extension to instantiate a new Web Server (IIS).

Say what?

You have probably seen many examples of this already, so I won’t try to sell you something new here.
However, I want to point out the difference between using an Azure Resource Manager template (.json, declarative) and using PowerShell in an imperative way.

The reason for this blog post is the newly released AzureRM PowerShell module which introduces us to a new set of cmdlets (the downside here is that I am now forced to update the whitepaper… https://gallery.technet.microsoft.com/Cloud-Consistency-with-0b79b775 ).

Where we are coming from

Previously with the Service Management API, we normally created our virtual machines in a similar way to this:

# Select a gallery image, grab the existing vNet, and build the VM configuration
$image = Get-AzureVMImage -ImageName "a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201412.01-en.us-127GB.vhd"
$vnet = Get-AzureVNetSite
$vm = New-AzureVMConfig -Name $VMName -InstanceSize "Basic_A2" -ImageName $image.ImageName

### Deploy a new domain joined VM

$vm = Add-AzureProvisioningConfig -VM $vm -AdminUsername $username -Password $pwd -WindowsDomain -JoinDomain "azure.systemcenter365.com" -Domain "azure" -DomainUserName "knadm" -DomainPassword "superPWD" |
    Set-AzureSubnet -SubnetNames $vnet.subnets.name |
    Set-AzureStaticVNetIP -IPAddress "10.0.40.52"

New-AzureVM -VM $vm -Location "North Europe" -VNetName $vnet.name -ServiceName $ServiceName -Verbose -WaitForBoot

Also, if I wanted to add DSC to my VM using the Service Management API, I would have to do something like this:

# Fire and forget some DSC

$dscvm = Get-AzureVM -ServiceName $ServiceName -Name $VMName

Set-AzureVMDSCExtension -VM $dscvm -ConfigurationArchive "azureDSCDemo.ps1.zip" -ConfigurationName "tester" | Update-AzureVM

This has drastically changed with Azure Resource Manager, which introduces us to a new world with a lot more opportunities (some would also say more complexity).

Where we are going

To show you where we are heading with this, I would like to point you to my GitHub repo, where you can find some learning examples of how this looks using Azure Resource Manager templates – but also the new AzureRM PowerShell module.

ARM Template with a single-button deployment + PowerShell cmdlet for deployment


PowerShell script using the new AzureRM Module to create IaaS environment with DSC


Here’s the example using PowerShell:

# Connect to your Azure subscription

Add-AzureRmAccount -Credential (Get-Credential)

# Add some variables that you will use as you move forward

# Global

$RGname = "KNRGTest01"
$Location = "west europe"

# Storage

$StorageName = "knstor5050" # storage account names must be lowercase and globally unique
$StorageType = "Standard_LRS"

# Network

$vnicName = "vmvNic"
$Subnet1Name = "Subnet1"
$vNetName = "KNVnet01"
$vNetAddressPrefix = "192.168.0.0/16"
$vNetSubnetAddressPrefix = "192.168.0.0/24"

# Compute

$VMName = "KNVM01"
$ComputerName = $VMName
$VMSize = "Standard_A2"
$OSDiskName = $VMName + "osDisk"

# Create a new Azure Resource Group

$RG = New-AzureRmResourceGroup -Name $RGname -Location $location -Verbose

# Create Storage

$StorageAccount = New-AzureRmStorageAccount -ResourceGroupName $RGname -Name $StorageName -Type $StorageType -Location $Location

# Create Network

$PIP = New-AzureRmPublicIpAddress -Name $vnicName -ResourceGroupName $RGname -Location $Location -AllocationMethod Dynamic
$SubnetConfig = New-AzureRmVirtualNetworkSubnetConfig -Name $Subnet1Name -AddressPrefix $vNetSubnetAddressPrefix
$vNET = New-AzureRmVirtualNetwork -Name $vNetName -ResourceGroupName $RGname -Location $Location -AddressPrefix $vNetAddressPrefix -Subnet $SubnetConfig
$Interface = New-AzureRmNetworkInterface -Name $vnicName -ResourceGroupName $RGname -Location $Location -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id

# Create Compute

# Setup local VM object

$Credential = Get-Credential
$VirtualMachine = New-AzureRmVMConfig -VMName $VMName -VMSize $VMSize
$VirtualMachine = Set-AzureRmVMOperatingSystem -VM $VirtualMachine -Windows -ComputerName $ComputerName -Credential $credential -ProvisionVMAgent -EnableAutoUpdate
$VirtualMachine = Set-AzureRmVMSourceImage -VM $VirtualMachine -PublisherName MicrosoftWindowsServer -Offer WindowsServer -Skus 2012-R2-Datacenter -Version "latest"
$VirtualMachine = Add-AzureRmVMNetworkInterface -VM $VirtualMachine -Id $interface.Id
$OSDiskUri = $StorageAccount.PrimaryEndpoints.Blob.ToString() + "vhds/" + $OSDiskName + ".vhd"
$VirtualMachine = Set-AzureRmVMOSDisk -VM $VirtualMachine -Name $OSDiskName -VhdUri $OSDiskUri -CreateOption fromImage

# Deploy the VM in Azure

New-AzureRmVM -ResourceGroupName $RGname -Location $Location -VM $VirtualMachine

# Publish DSC config to your newly created storage account

Publish-AzureRmVMDscConfiguration -ResourceGroupName $RGname -ConfigurationPath .\webdsc.ps1 -StorageAccountName $StorageName

# Add DSC Extension with config to the newly created VM

Set-AzureRmVMDscExtension -ResourceGroupName $RGname -VMName $VirtualMachine.Name -ArchiveBlobName webdsc.ps1.zip -ArchiveStorageAccountName $StorageName -ConfigurationName webdsc -Version 2.7 -Location $Location

# Good night

Please have a look at these examples, and I encourage you to explore the new opportunities with the AzureRM module.

Happy ARMing!