
Microsoft Azure – managed disks and port 8443 issue


Managed disks are new, and maybe you have already run into a deployment error with VMs using managed disks. The reason is mentioned in a note right at the beginning of the documentation.

VMs with Managed Disks require outbound traffic on port 8443 to report the status of the installed VM extensions to the Azure platform. Provisioning a VM with extensions will fail without the availability of this port. Also, the deployment status of an extension will be unknown if it is installed on a running VM. If you cannot unblock port 8443, you must use unmanaged disks. We are actively working to fix this issue. Please refer to the FAQ for IaaS VM Disks for more details.

https://docs.microsoft.com/en-us/azure/storage/storage-managed-disks-overview

The solution right now is to unblock outbound traffic on port 8443 or to use unmanaged disks. Regarding the FAQ: Microsoft will fix this issue by the end of May 2017.

Is there an estimated date for this issue to be fixed so I no longer have to unblock port 8443?

Yes, by the end of May 2017.

https://docs.microsoft.com/en-us/azure/storage/storage-faq-for-disks#managed-disks-and-port-8443
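If your deployment sits behind a network security group that blocks outbound traffic, you can open port 8443 with PowerShell. This is only a rough sketch; the NSG name, resource group and rule priority are assumptions you have to adapt to your environment.

# Hypothetical NSG and resource group names - adjust them to your environment
$nsg=Get-AzureRmNetworkSecurityGroup -Name "vm-nsg" -ResourceGroupName "manageddisks-rg"
# Allow outbound TCP 8443 so the VM extensions can report their status to the Azure platform
Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Allow-Outbound-8443" -Direction Outbound -Access Allow -Protocol Tcp -SourceAddressPrefix "*" -SourcePortRange "*" -DestinationAddressPrefix "Internet" -DestinationPortRange "8443" -Priority 1000
# Persist the change on the NSG
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg -Verbose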

Want to know why you should use managed disks on Microsoft Azure? Have a look at my previous blog post.

-> http://www.danielstechblog.info/use-managed-disks-microsoft-azure/



Microsoft Azure Stack Technical Preview 3 on lower hardware specifications


New technical preview, new challenges. After getting TP1 and TP2 running on lower hardware specifications, I have tried the same for TP3 and want to share the results with you.

First of all we have the same hardware requirements for TP3 as for TP1 and TP2.

Component | Minimum | Recommended
CPU | Dual-Socket: 12 Physical Cores | Dual-Socket: 16 Physical Cores
Memory | 96 GB RAM | 128 GB RAM
BIOS | Hyper-V Enabled (with SLAT support) | Hyper-V Enabled (with SLAT support)
NIC | Windows Server 2012 R2 Certification required for NIC, no specialized features required | Windows Server 2012 R2 Certification required for NIC, no specialized features required
OS | 1 OS disk with minimum of 200 GB available for system partition (SSD or HDD) | 1 OS disk with minimum of 200 GB available for system partition (SSD or HDD)
DATA | 4 disks. Each disk provides a minimum of 140 GB of capacity (SSD or HDD). | 4 disks. Each disk provides a minimum of 250 GB of capacity (SSD or HDD).

-> https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-deploy

My lab server at home has the following specifications.

Component | My lab server
CPU | Single-Socket: 4 Physical Cores
Memory | 32 GB RAM
BIOS | Hyper-V Enabled (with SLAT support)
NIC | Windows Server 2012 R2 Certification required for NIC, no specialized features required
OS | 1 OS disk with 1 TB available for system partition (HDD)
DATA | 3 disks. Each disk provides 1 TB of capacity (HDD).

The issue with the setup is that it checks the amount of memory and the number of CPU cores installed in the server, while the Azure Stack Technical Preview 3 VMs have fixed configurations during the setup process.

Before you can start the PowerShell deployment script you have to get the following file out of the CloudBuilder.vhdx: Microsoft.AzureStack.Solution.Deploy.CloudDeployment.1.0.381.0.nupkg.

Copy the file Microsoft.AzureStack.Solution.Deploy.CloudDeployment.1.0.381.0.nupkg to a folder on your workstation, rename the .nupkg file to .zip, and extract it.
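If you prefer PowerShell over the GUI for this step, a sketch like the following works as well. The drive letters and the path of the .nupkg inside the mounted CloudBuilder.vhdx are assumptions on my side, so verify them before you run it.

# Mount the CloudBuilder.vhdx - requires the Hyper-V PowerShell module
Mount-VHD -Path "D:\CloudBuilder.vhdx" -Passthru
# Copy the package to the workstation - the source path inside the mounted image may differ
Copy-Item -Path "F:\CloudDeployment\NuGetStore\Microsoft.AzureStack.Solution.Deploy.CloudDeployment.1.0.381.0.nupkg" -Destination "C:\Temp" -Verbose
# Rename the .nupkg to .zip and extract it
Rename-Item -Path "C:\Temp\Microsoft.AzureStack.Solution.Deploy.CloudDeployment.1.0.381.0.nupkg" -NewName "Microsoft.AzureStack.Solution.Deploy.CloudDeployment.1.0.381.0.zip"
Expand-Archive -Path "C:\Temp\Microsoft.AzureStack.Solution.Deploy.CloudDeployment.1.0.381.0.zip" -DestinationPath "C:\Temp\CloudDeployment" -Verbose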

[Screenshots: AzureStackTP301, AzureStackTP302]

Now have a look at the content of the extracted directory.

[Screenshot: AzureStackTP303]

The folder structure should be familiar to you if you have done the same configuration adjustments in Azure Stack TP2. The following two files must be edited.

-> .\Configuration\Roles\Infrastructure\BareMetal\OneNodeRole.xml
-> .\Configuration\Roles\Fabric\VirtualMachines\OneNodeRole.xml

For the memory and CPU core check you have to edit the XML file in the ValidationRequirements section at the position marked in yellow.

[Screenshot: AzureStackTP304]

For my system I have entered 4 for the CPU cores and 32 for the memory.

For the VMs I have changed vCPU and vRAM settings in the xml file to the following ones and also disabled Dynamic Memory for the VMs ADFS01, CA01 and BGPNAT01. The other VMs are using static memory.

[Screenshot: AzureStackTP305]

VM name | vCPU | vRAM in GB
ACS01 | 2 | 3
ADFS01 | 2 | 1
BGPNAT01 | 2 | 2
CA01 | 2 | 1
Con01 | 2 | 1
DC01 | 2 | 1
ERCS01 | 2 | 1
Gwy01 | 2 | 2
NC01 | 2 | 2
SLB01 | 2 | 1
Sql01 | 2 | 2
SUS01 | 2 | 1
WAS01 | 2 | 2
WASP01 | 2 | 1
Xrp01 | 4 | 4

After you have edited the files, copy them back into Microsoft.AzureStack.Solution.Deploy.CloudDeployment.1.0.381.0.zip with Windows Explorer: just double-click the .zip file, place the edited files into the target folders, and rename the .zip file back to .nupkg. Then copy it back into the CloudBuilder.vhdx.

[Screenshot: AzureStackTP306]

Keep in mind that TP3 has one more VM than TP2. That means the resource consumption is higher, and the values have to be adjusted after the installation is completed. Otherwise you are not able to deploy any VMs when your system has only 32 GB of memory.
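To free up memory after the installation you can lower the startup memory of the infrastructure VMs with the Hyper-V cmdlets. A minimal sketch for a single VM; the VM names on the host may carry a prefix, so check them with Get-VM first.

# Shut the VM down before changing its memory configuration
Stop-VM -Name "Xrp01" -Verbose
# Reduce the startup memory to fit a 32 GB host and start the VM again
Set-VMMemory -VMName "Xrp01" -StartupBytes 3GB
Start-VM -Name "Xrp01" -Verbose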

[Screenshot: AzureStackTP307]

Conclusion:

Yes, it is possible to run the Azure Stack TP3 on lower hardware specifications. Would I really recommend it? No! You should at least have hardware that meets the minimum requirements for the Azure Stack TP3. Enough CPU power and memory is key during the deployment process.

If you do not have the minimum required hardware, then make sure your system has at least 64 GB of memory and more than 4 physical cores; I would recommend at least 8.

Happy testing!


Using managed disks with Azure DevTest Lab


Currently the Azure DevTest Labs service does not support managed disks, so you cannot use them during the VM deployment.

If you need managed disks in an Azure DevTest Lab, you can use the environments capability with Azure Resource Manager templates.

Create an Azure Resource Manager template for a VM deployment with managed disks and upload it to a GitHub repository. The next step is to configure the DevTest Lab to use the GitHub repository for the ARM templates.

-> https://azure.microsoft.com/en-us/blog/announcing-azure-devtest-labs-support-for-creating-environment-with-arm-templates/
-> https://blogs.msdn.microsoft.com/devtestlab/2016/11/16/connect-2016-news-for-azure-devtest-labs-azure-resource-manager-template-based-environments-vm-auto-shutdown-and-more/

After that DevTest Lab users are able to deploy VMs with managed disks.

[Screenshots: DTLMD01, DTLMD02, DTLMD03, DTLMD04]



Get notified with OMS on Azure service incidents


One of my colleagues has already posted a blog article about using OMS for Azure service incident notifications.

-> https://blogs.msdn.microsoft.com/nicole_welch/2017/03/using-oms-to-alert-on-azure-service-outages/

Read her blog post first before continuing with mine, because I am describing how to fine-tune the notifications.

In her blog post, she creates only one alert in OMS for Azure service incidents. That is really nice for the beginning, but you will recognize that the messages have different activity statuses. The available activity statuses are: active, in progress, and resolved.

I recommend creating three alerts, one for each activity status. The search queries are the following ones.

Active:

Type=AzureActivity Category=ServiceHealth Active

[Screenshot: AzureServiceIncident1]

In progress:

Type=AzureActivity Category=ServiceHealth "In progress"

[Screenshot: AzureServiceIncident2]

Resolved:

Type=AzureActivity Category=ServiceHealth Resolved

[Screenshot: AzureServiceIncident3]

As you can see, having three different alerts is an advantage to keep track of the different activity statuses of an Azure service incident.


Demystifying Azure VMs bandwidth specification – F-series


As you may know, Microsoft specifies the bandwidth of Azure VMs as low, moderate, high, very high, and extremely high. As Yousef Khalidi, CVP Azure Networking, wrote in his blog post in March, Microsoft will provide specific numbers for each Azure VM size in April.

When our world-wide deployment completes in April, we’ll update our VM Sizes table so you can see the expected networking throughput performance numbers for our virtual machines.

https://azure.microsoft.com/en-us/blog/networking-innovations-that-drive-the-cloud-disruption/

I have run some network performance tests on each F-series VM size to get the numbers for it. For my tests I have used the NTttcp utility by Microsoft.

-> https://gallery.technet.microsoft.com/NTttcp-Version-528-Now-f8b12769

My test setup was the following:

  • send-crp1 as the sender VM with the internal IP address 10.0.0.4
  • receive-crp1 as the receiver VM with the internal IP address 10.0.0.5

Both VMs were running Windows Server 2016 and were initially deployed as F1 to start the tests.

NTttcp option on sender VM:

ntttcp.exe -s -m 8,*,10.0.0.5 -l 128k -a 2 -t 15

NTttcp option on receiver VM:

ntttcp.exe -r -m 8,*,10.0.0.5 -rb 2M -a 16 -t 15

Before running the tests it is advisable to disable the Windows firewall on the systems.
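On Windows Server 2016 a single cmdlet takes care of that on both VMs; do not forget to re-enable the firewall after the test run.

# Disable all Windows firewall profiles for the duration of the test
Set-NetFirewallProfile -Profile Domain,Private,Public -Enabled False -Verbose
# Re-enable them once the measurements are done
Set-NetFirewallProfile -Profile Domain,Private,Public -Enabled True -Verbose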

[Screenshot: NetworkPerformance1]

Here are the results for all F-series VM sizes, so you get an idea of what to expect when you read that the network bandwidth is high. But keep in mind that the network bandwidth varies depending on the number of CPU cores.

Size | CPU cores | Memory: GiB | Network bandwidth | Measured network bandwidth
F1 | 1 | 2 | Moderate | 750 Mbit/s
F2 | 2 | 4 | High | 1.5 Gbit/s
F4 | 4 | 8 | High | 3 Gbit/s
F8 | 8 | 16 | High | 6 Gbit/s
F16 | 16 | 32 | Extremely high | 12 Gbit/s

The most powerful VM size regarding network bandwidth without the use of RDMA is the D15_v2 with the accelerated networking option.

[Screenshot: NetworkPerformance2]

Size | CPU cores | Memory: GiB | Network bandwidth | Measured network bandwidth
D15_v2 | 20 | 140 | Extremely high | 24 Gbit/s

I am looking forward to the specific numbers being published in the Azure documentation, as mentioned and quoted at the beginning of this blog post.

-> https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes


Using Veeam FastSCP with Azure VMs and self-signed certificates


When working with Azure VMs you have several options to copy files into your VMs. One tool I really like is Veeam FastSCP, because my Azure VMs are just dev/test machines and are neither part of an Active Directory nor do I have a VPN connection to my Azure Virtual Network.

-> https://www.veeam.com/fastscp-azure-vm.html

So I want a secure way to get files into my Azure VMs, and this is where Veeam FastSCP comes into play. It uses WinRM over HTTPS, and therefore you need a certificate on the VM to configure WinRM for HTTPS use. As already mentioned, the VMs do not belong to an Active Directory, which is the reason why I have to use self-signed certificates.

Before creating the self-signed certificate we have to configure the NSG and public IP in Azure first.

The NSG should allow inbound traffic from the Internet to the VM on TCP port 5986.
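If you want to script the rule instead of using the portal, a sketch like this one does the job; the NSG and resource group names are placeholders.

$nsg=Get-AzureRmNetworkSecurityGroup -Name "azst-crp3-nsg" -ResourceGroupName "devtest-rg"
# Allow WinRM over HTTPS (TCP 5986) from the Internet
Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Allow-WinRM-HTTPS" -Direction Inbound -Access Allow -Protocol Tcp -SourceAddressPrefix "Internet" -SourcePortRange "*" -DestinationAddressPrefix "*" -DestinationPortRange "5986" -Priority 1100
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg -Verbose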

[Screenshot: AzureFASTSCP01]

The next step is the public IP and its configuration. You should specify the DNS label, because we will need it in the PowerShell script in the next step.

[Screenshot: AzureFASTSCP02]

Now we can connect to the VM via RDP and run the following PowerShell script to create the self-signed certificate. Make sure the DNS label of the Azure VM is placed into the dnsName variable.

# DNS label of the Azure VM's public IP
$dnsName="azst-crp3.northeurope.cloudapp.azure.com"
# Create the self-signed certificate in the local machine store
$cert=New-SelfSignedCertificate -CertStoreLocation Cert:\LocalMachine\My\ -DnsName $dnsName
# Create the WinRM HTTPS listener with the new certificate
$command="winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="+'"'+$cert.DnsNameList.Unicode+'"'+"; CertificateThumbprint="+'"'+$cert.Thumbprint+'"'+"}"
cmd.exe /c $command
# Open TCP port 5986 in the Windows firewall
New-NetFirewallRule -Name "Windows Remote Management (HTTPS-In) (Azure)" -DisplayName "Windows Remote Management (HTTPS-In)" -Protocol TCP -LocalPort 5986 -Direction Inbound -Profile Any -Action Allow -Verbose

The next step is to add the Azure VM to Veeam FastSCP.

[Screenshot: AzureFASTSCP03]

Enter the DNS name, leave the port at its default value, make sure the options use SSL and skip certificates trusted authority verification are checked, and finally enter the username and password for the Azure VM.

[Screenshot: AzureFASTSCP04]

Now we can upload or download files to our Azure VMs.


Azure Backup and Azure Site Recovery available in Azure Germany


As of today, the Recovery Services vault for Azure Backup and Azure Site Recovery is available in Azure Germany.

-> https://blogs.msdn.microsoft.com/azuregermany/2017/05/04/azure-backup-and-site-recovery-verfugbar-available-in-azure-germany/

Just go to the Azure Marketplace in Azure Germany, select the Monitoring + Management section, and select Backup and Site Recovery (OMS).

[Screenshot: MCDASRAB01]

Select your preferred region, Germany Central or Germany Northeast, and deploy the Recovery Services vault.

[Screenshot: MCDASRAB02]

After the successful deployment you are ready to leverage the Azure Backup or Azure Site Recovery capabilities in Azure Germany.

[Screenshot: MCDASRAB03]



Using Azure Backup with ADE protected VMs in Azure Germany


Yesterday I wrote a blog post about the availability of ASR and Azure Backup in Azure Germany.

-> http://www.danielstechblog.info/azure-backup-azure-site-recovery-available-azure-germany/

Today I would like to share some information with you about using Azure Backup with Azure Disk Encryption protected VMs. If you start right away with deploying a Recovery Services vault and protecting your VMs, you will run into an error.

[Screenshots: MCD00, MCD01]

{
  "status": "Failed",
  "error": {
    "code": "ResourceDeploymentFailure",
    "message": "The resource operation completed with terminal provisioning state 'Failed'.",
    "details": [
      {
        "code": "UserErrorKeyVaultPermissionsNotConfigured",
        "message": "Azure Backup Service does not have sufficient permissions to Key Vault for Backup of Encrypted Virtual Machines."
      }
    ]
  }
}

The error message is clear: you have to assign the required permissions to the Backup Management Service so it can access keys and secrets in your deployed Azure Key Vault. Otherwise Azure Backup is not able to back up your ADE protected VMs.

[Screenshot: MCD05]

In Azure Germany you currently cannot modify the Azure Key Vault access policies through the portal; you have to do it via PowerShell.

$ResourceGroupName="Security"
$RecoveryVaultName="azurestackrecoverygermanycentral"
$KeyVaultName="azurestackkeyvault"
# Service principal of the Backup Management Service
$ServicePrincipalName="262044b1-e2ce-469f-a196-69ab7ada62d3"
Login-AzureRmAccount -EnvironmentName AzureGermanCloud
Get-AzureRmSubscription|Out-GridView -PassThru|Select-AzureRmSubscription
# Grant the Backup Management Service access to keys and secrets in the Key Vault
Set-AzureRmKeyVaultAccessPolicy -VaultName $KeyVaultName -ResourceGroupName $ResourceGroupName -PermissionsToKeys backup,get,list -PermissionsToSecrets get,list -ServicePrincipalName $ServicePrincipalName -Verbose

After you have assigned the permissions to the Backup Management Service, start again to protect your VMs and you will see that this time the deployment succeeds.

[Screenshot: MCD03]

Afterwards kick off the initial backup and come back later to check whether it was successful.

[Screenshot: MCD04]


Does ADE or SSE have a performance impact on Azure IaaS VMs?


Before I begin to write about this topic, I want to clarify that the results are not an official statement by Microsoft.

The opinions expressed herein are my own personal opinions and do not represent my employer’s view in any way.

Now that we have clarified that, let us briefly cover what ADE and SSE are. ADE stands for Azure Disk Encryption and is the volume-based encryption option for Azure IaaS VMs, leveraging BitLocker or dm-crypt inside the operating system.
SSE stands for Storage Service Encryption and is the option to enable encryption on the storage account level. Both ADE and SSE work with AES 256-bit encryption.

For the performance test setup we need to look at the performance values the different disk types provide and what the different VM sizes support.

Managed disk type | Throughput | IOPS
S4, S6, S10, S20, S30 | 60 MBps | 500
P10 | 100 MBps | 500
P20 | 150 MBps | 2300
P30 | 200 MBps | 5000

The best fitting VM sizes for the tests are Standard_D4_v2 and Standard_DS4_v2. I had to select the Standard_DS4_v2 size so that the VM size itself does not limit the disk throughput. Have a look at the details in the Azure documentation.

-> https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-general#dsv2-series

I am also using the same VM size for the standard managed disks as for the premium managed disks. This makes sure that different VM sizes do not influence the test results, even though a smaller size would be possible for standard managed disks looking at the supported performance values.

For running the storage performance tests I am using the Microsoft PerfInsights tool.

-> https://www.microsoft.com/en-us/download/details.aspx?id=54915

But I had to modify the test settings in the PerfInsights_Settings.xml file to get appropriate results for the throughput tests, following the Azure documentation for testing premium storage-based disks.

If you are using an application, which allows you to change the IO size, use this rule of thumb for the IO size in addition to other performance guidelines,

  • Smaller IO size to get higher IOPS. For example, 8 KB for an OLTP application.
  • Larger IO size to get higher Bandwidth/Throughput. For example, 1024 KB for a data warehouse application.

Here is an example on how you can calculate the IOPS and Throughput/Bandwidth for your application. Consider an application using a P30 disk. The maximum IOPS and Throughput/Bandwidth a P30 disk can achieve is 5000 IOPS and 200 MB per second respectively. Now, if your application requires the maximum IOPS from the P30 disk and you use a smaller IO size like 8 KB, the resulting Bandwidth you will be able to get is 40 MB per second. However, if your application requires the maximum Throughput/Bandwidth from P30 disk, and you use a larger IO size like 1024 KB, the resulting IOPS will be less, 200 IOPS.

-> https://docs.microsoft.com/en-us/azure/storage/storage-premium-storage-performance#nature-of-io-requests

Have a look at the following screenshots; the changes are marked in yellow.

[Screenshots: ADESSETest01, ADESSETest02]

The changes are the IO size for the throughput test and adding a throughput test for the OS disk as well.

Before we jump into the test results I will share the settings for the standard managed disks and premium managed disks test setup with you, so you can run the tests on your own.

Standard managed disks:

  • Azure region: North Europe
  • Azure VM size: Standard_D4_v2
  • OS: Windows Server 2016
  • 100% write
  • 1 GB test file
  • 30 seconds warm-up
  • 90 seconds test duration
  • 3 runs each for IOPS and throughput testing for each disk
  • OS disk: S10 | NTFS 4 KB | read/write cache enabled
  • Data disk: S30 | NTFS 64 KB | none

Premium managed disks:

  • Azure region: North Europe
  • Azure VM size: Standard_DS4_v2
  • OS: Windows Server 2016
  • 100% write
  • 1 GB test file
  • 30 seconds warm-up
  • 90 seconds test duration
  • 3 runs each for IOPS and throughput testing for each disk
  • OS disk: P10 | NTFS 4 KB | read/write cache enabled
  • Data disks: P10, P20, P30 | NTFS 64 KB | none
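If you do not want to use PerfInsights, a roughly comparable throughput run against a data disk can be done with diskspd directly. This is just a sketch; diskspd is a separate download, and the file path, thread count and queue depth are my own assumptions, not the values PerfInsights uses internally.

# 1 GB test file, 1024 KB IO size, 100% write, 30 s warm-up, 90 s duration, caching disabled
.\diskspd.exe -c1G -b1024K -w100 -W30 -d90 -t4 -o8 -Sh -L F:\perftest.dat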

Test results – standard managed disks – ADE:

Metric | Standard_D4_v2 w/o ADE | Standard_D4_v2 w/ ADE
OS disk IOPS (500) | 494.24 (99%) IOPS | 494.83 (99%) IOPS
OS disk throughput (60 MB) | 60.00 (100%) MB/sec | 59.96 (99%) MB/sec
Data disk IOPS (500) | 495.89 (99%) IOPS | 495.85 (99%) IOPS
Data disk throughput (60 MB) | 60.00 (100%) MB/sec | 60.00 (100%) MB/sec
CPU average in % when creating 20 GB fixed VHD on data disk | 0.234% | 3.200%

[Screenshots: Standard, Standard_ADE]

Test results – standard managed disks – SSE:

Metric | Standard_D4_v2 w/o SSE | Standard_D4_v2 w/ SSE
OS disk IOPS (500) | 494.24 (99%) IOPS | 494.32 (99%) IOPS
OS disk throughput (60 MB) | 60.00 (100%) MB/sec | 59.99 (99%) MB/sec
Data disk IOPS (500) | 495.89 (99%) IOPS | 496.13 (99%) IOPS
Data disk throughput (60 MB) | 60.00 (100%) MB/sec | 60.00 (100%) MB/sec
CPU average in % when creating 20 GB fixed VHD on data disk | 0.234% | 0.215%

[Screenshots: Standard, Standard_SSE]

Test results – premium managed disks – ADE:

Metric | Standard_DS4_v2 w/o ADE | Standard_DS4_v2 w/ ADE
P10 OS disk IOPS (500) | 508.15 (102%) IOPS | 508.24 (102%) IOPS
P10 OS disk throughput (100 MB) | 95.20 (95%) MB/sec | 70.15 (70%) MB/sec
P10 data disk IOPS (500) | 509.94 (102%) IOPS | 509.97 (102%) IOPS
P10 data disk throughput (100 MB) | 97.28 (97%) MB/sec | 72.55 (73%) MB/sec
P20 data disk IOPS (2300) | 2345.75 (102%) IOPS | 2345.83 (102%) IOPS
P20 data disk throughput (150 MB) | 145.90 (97%) MB/sec | 145.91 (97%) MB/sec
P30 data disk IOPS (5000) | 5099.80 (102%) IOPS | 5099.58 (102%) IOPS
P30 data disk throughput (200 MB) | 192.07 (96%) MB/sec | 194.50 (97%) MB/sec
CPU average in % when creating 20 GB fixed VHD on data disk | 0.272% | 2.934%

[Screenshots: Premium, Premium_ADE]

Test results – premium managed disks – SSE:

Metric | Standard_DS4_v2 w/o SSE | Standard_DS4_v2 w/ SSE
P10 OS disk IOPS (500) | 508.15 (102%) IOPS | 498.29 (100%) IOPS
P10 OS disk throughput (100 MB) | 95.20 (95%) MB/sec | 97.20 (97%) MB/sec
P10 data disk IOPS (500) | 509.94 (102%) IOPS | 509.97 (102%) IOPS
P10 data disk throughput (100 MB) | 97.28 (97%) MB/sec | 97.27 (97%) MB/sec
P20 data disk IOPS (2300) | 2345.75 (102%) IOPS | 2342.84 (102%) IOPS
P20 data disk throughput (150 MB) | 145.90 (97%) MB/sec | 145.91 (97%) MB/sec
P30 data disk IOPS (5000) | 5099.80 (102%) IOPS | 5100.67 (102%) IOPS
P30 data disk throughput (200 MB) | 192.07 (96%) MB/sec | 194.54 (97%) MB/sec
CPU average in % when creating 20 GB fixed VHD on data disk | 0.272% | 0.221%

[Screenshots: Premium, Premium_SSE]

Conclusion / Takeaways:

Looking at the test results, unsurprisingly, SSE does not have a performance impact on Azure IaaS VMs. Because SSE runs on the Azure platform itself and not in the VM, you get full performance without giving up a necessary security option.

The results for ADE differ a bit when comparing VMs with standard managed disks and VMs with premium managed disks. Starting with VMs with standard managed disks, ADE adds up to 3% more CPU load. Further explanation should not be necessary: we are using BitLocker in the VM itself, so the CPU has to deal with encrypting and decrypting data, which adds the additional CPU load. IOPS and throughput values of the disks are not affected by ADE.

Using ADE on VMs with premium managed disks also adds up to 3% more CPU load. IOPS and throughput values of the P20 and P30 disk sizes are not affected by ADE. Surprisingly, the throughput values for P10 disks with ADE are significantly lower compared to the result without ADE. Looking at the results, the impact is nearly 30%, and I have neither a clue nor an explanation why this happens with a P10 running a 100% write test. The IOPS values of the P10 are fine.

Those test results led me to run even more tests for the P10 with different settings for the write test. Comparing those results, I recommend a P20 or P30 disk for write-intensive workloads that depend on throughput rather than IOPS. For IOPS-intensive workloads you can also use a P10 besides the other ones. In general, looking at the results, a P10 works best for general workloads with 30% write and 70% read operations. For all other workloads with a higher percentage of write operations it is better to use a P20 or P30, when throughput is required rather than IOPS.

I hope this blog post is helpful and gives you the necessary information about how encryption impacts the performance of an Azure IaaS VM. Do not forget, the results are not an official statement by Microsoft.


Develop solutions for Azure Stack on Azure


If you want to develop solutions for Azure Stack in your Azure subscription, then have a look at the Azure Stack Policy Module.

-> https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-policy-module

What does the Azure Stack Policy Module do with your Azure subscription?

The policy module deploys and enables a set of Azure policies, so that only Azure services and API versions supported by Azure Stack are available in your subscription. The good thing about the Azure policies is that they can be enabled on the subscription level as well as on the resource group level.

In the end the Azure Stack Policy Module provides you with the necessary environment in Azure to be able to develop compatible solutions for Azure Stack and Azure. You do not need an Azure Stack environment on-premises to develop solutions for Azure Stack. Just get an Azure subscription, download and apply the Azure Stack Policy Module and you are ready to go.
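Applying the module boils down to creating a policy definition from it and assigning that definition to a scope. A hedged sketch of what this looks like; Get-AzureStackRmPolicy is the cmdlet shipped with the downloaded policy module and the resource group name is just an example, so check the linked documentation for the exact usage.

Import-Module .\AzureStack.Policy.psm1
Login-AzureRmAccount
Get-AzureRmSubscription|Out-GridView -PassThru|Select-AzureRmSubscription
# Create the policy definition from the module output
$policy=New-AzureRmPolicyDefinition -Name "AzureStackPolicyDefinition" -Policy (Get-AzureStackRmPolicy)
# Assign the policy to a resource group used for Azure Stack development
$resourceGroup=Get-AzureRmResourceGroup -Name "AzureStackDev"
New-AzureRmPolicyAssignment -Name "AzureStackPolicyAssignment" -PolicyDefinition $policy -Scope $resourceGroup.ResourceId -Verbose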


Deletion of Azure Recovery Services Vault used for Azure SQL Database backups fails


Maybe you have stumbled over this behavior: you are using a Recovery Services Vault to store Azure SQL Database backups for long-term retention.

-> https://docs.microsoft.com/en-us/azure/sql-database/sql-database-long-term-retention

If you have deleted the Azure SQL Database before you disabled the backup and are now trying to delete the Recovery Services Vault, you will run into the following error.

[Screenshot: RecoveryServicesVault01]

Currently, you cannot use the Azure Portal to delete the backup item or even see it. You have to use PowerShell to get rid of it to be able to delete the Recovery Services Vault.

Have a look at the following PowerShell script.

$vault=Get-AzureRmRecoveryServicesVault -ResourceGroupName RecoveryServicesVault -Name backupstoragetest -Verbose
Set-AzureRmRecoveryServicesVaultContext -Vault $vault
$container=Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureSQL -BackupManagementType AzureSQL
$item=Get-AzureRmRecoveryServicesBackupItem -Container $container
Disable-AzureRmRecoveryServicesBackupProtection -Item $item -RemoveRecoveryPoints -Force -Verbose
Unregister-AzureRmRecoveryServicesBackupContainer -Container $container -Verbose
Remove-AzureRmRecoveryServicesVault -Vault $vault -Verbose

The PowerShell script will remove the backup item from the Recovery Services Vault, unregister the backup container and finally delete the Recovery Services Vault.


Monitoring HDInsight Spark cluster on Azure with OMS


Today I would like to share with you how to monitor an HDInsight Spark cluster on Azure with OMS.

[Screenshots: HDInsightOMS01, HDInsightOMS02]

For this I just created an HDInsight Spark cluster with default settings and no further customization in my Azure subscription. After the successful creation, I looked into the solution gallery of my OMS workspace and noticed that only HDInsight HBase Monitoring (Preview) showed up.

[Screenshot: HDInsightOMS03]

A quick search on the Internet pointed me to the following GitHub repository.

-> https://github.com/hdinsight/HDInsightOMS

There you can download the Spark view and import it through the view designer into the OMS workspace.

Nevertheless, there is an HDInsight Spark Monitoring solution available in the solution gallery, but you do not see it in the portal. You have to use the following PowerShell script to enable the solution for your OMS workspace.

Login-AzureRmAccount
$subscription=Get-AzureRmSubscription|Out-GridView -PassThru -Title "Select Azure subscription"
Select-AzureRmSubscription -SubscriptionId $subscription.Id -Verbose
$OMSworkspace=Get-AzureRmOperationalInsightsWorkspace|Out-GridView -PassThru -Title "Select OMS workspace"
$tempIP=Get-AzureRmOperationalInsightsIntelligencePacks -ResourceGroupName $OMSworkspace.ResourceGroupName -WorkspaceName $OMSworkspace.Name|Out-GridView -PassThru -Title "Select OMS solution"
Set-AzureRmOperationalInsightsIntelligencePack -ResourceGroupName $OMSworkspace.ResourceGroupName -WorkspaceName $OMSworkspace.Name -IntelligencePackName $tempIP.Name -Enabled $true -Verbose

[Screenshots: HDInsightOMS04, HDInsightOMS05]

The next step is to connect to the HDInsight Spark cluster via SSH and run the following commands to install the OMS agent onto it.

[Screenshots: HDInsightOMS07, HDInsightOMS08]

wget https://raw.githubusercontent.com/hdinsight/HDInsightOMS/master/monitoring/scriptspark.sh
chmod 777 scriptspark.sh
./scriptspark.sh workspaceid workspacekey

Do not forget to provide the OMS workspace ID and the primary or secondary key when you execute the shell script.

[Screenshot: HDInsightOMS09]

Afterwards you have to wait a couple of minutes, so the OMS agent can send the first data sets to the OMS workspace. But then the data should show up in the HDInsight Spark Monitoring solution view.

[Screenshots: HDInsightOMS10, HDInsightOMS11, HDInsightOMS12]


Troubleshoot Azure VPN gateways with the Azure Network Watcher


Earlier this year Microsoft launched a new Azure service for network diagnostics and troubleshooting called Network Watcher.

-> https://azure.microsoft.com/en-us/services/network-watcher/

The Network Watcher offers a range of tools, like VPN diagnostics and packet capturing, to mention two of them. In this blog post I would like to talk about the VPN diagnostics capability.

Before we can use the VPN diagnostics, we have to enable the Network Watcher for the specific region.

[Screenshot: NetworkWatcherVPN01]

There is only one Network Watcher instance per Azure region in a subscription.

For the next step we jump into the VPN Diagnostics section and select our desired VPN gateway with the corresponding connection. We also have to select a storage account to store the generated log files.

[Screenshot: NetworkWatcherVPN02]

Before we kick off the diagnostic run, we have to make sure that the VPN gateway type is supported by the Network Watcher! Currently, only route-based VPN gateway types are supported.

-> https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-troubleshoot-overview#supported-gateway-types

Now we can start the diagnostic run with a click on Start troubleshooting.
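The same diagnostic run can also be started with PowerShell. A sketch, assuming the automatically created NetworkWatcherRG resource group and placeholder names for the connection and the storage account:

$networkWatcher=Get-AzureRmNetworkWatcher -Name "NetworkWatcher_westeurope" -ResourceGroupName "NetworkWatcherRG"
$connection=Get-AzureRmVirtualNetworkGatewayConnection -Name "S2SConnection" -ResourceGroupName "Network"
$storageAccount=Get-AzureRmStorageAccount -Name "vpndiagnostics" -ResourceGroupName "Network"
# Start the VPN diagnostics run and store the log files in the selected storage account
Start-AzureRmNetworkWatcherResourceTroubleshooting -NetworkWatcher $networkWatcher -TargetResourceId $connection.Id -StorageId $storageAccount.Id -StoragePath "$($storageAccount.PrimaryEndpoints.Blob)vpnlogs"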

[Screenshots: NetworkWatcherVPN03, NetworkWatcherVPN04]

For downloading the log files, I am using the Azure Storage Explorer.

-> http://storageexplorer.com/

[Screenshot: NetworkWatcherVPN06]

The log files are sorted by date and time of the latest run and will be placed in a .zip file.

Our first run was a healthy one, so the .zip file contains two files.

ConnectionStats.txt

Connectivity State : Connected
Remote Tunnel Endpoint :
Ingress Bytes (since last connected) : 10944 B
Egress Bytes (Since last connected) : 10944 B
Connected Since : 6/28/2017 7:01:49 AM

CPUStat.txt

Current CPU Usage : 0 %
Current Memory Available : 595 MBs

To force a run that shows the VPN gateway in an unhealthy state, I edited the PSK on one side so it does not match anymore.

[Screenshot: NetworkWatcherVPN05]

Now we get additional files in the .zip file. Beside ConnectionStats.txt and CPUStat.txt, we get IKEErrors.txt, Scrubbed-wfpdiag.txt, wfpdiag.txt.sum and wfpdiag.xml. The most important ones are IKEErrors.txt and Scrubbed-wfpdiag.txt.

IKEErrors.txt

Error: Authenticated failed. Check keys and auth type offers.
based on log : Peer sent AUTHENTICATION_FAILED notify
Error: Authentication failed. Check shared key. Check crypto. Check lifetimes.
based on log : Peer failed with Windows error 13801(ERROR_IPSEC_IKE_AUTH_FAIL)
Error: On-prem device sent invalid payload.
based on log : IkeFindPayloadInPacket failed with Windows error 13843(ERROR_IPSEC_IKE_INVALID_PAYLOAD)

Scrubbed-wfpdiag.txt


[0]0368.0D7C::06/28/2017-11:46:45.651 [ikeext] 13|51.5.240.234|Failure type: IKE/Authip Main Mode Failure
[0]0368.0D7C::06/28/2017-11:46:45.651 [ikeext] 13|51.5.240.234|Type specific info:
[0]0368.0D7C::06/28/2017-11:46:45.651 [ikeext] 13|51.5.240.234|  Failure error code:0x000035e9
[0]0368.0D7C::06/28/2017-11:46:45.651 [ikeext] 13|51.5.240.234|    IKE authentication credentials are unacceptable

The IKEErrors.txt file gives us an overview of what may be wrong, so we can start checking those settings. For more detailed troubleshooting we have to take a look into the Scrubbed-wfpdiag.txt file. As quoted from the file, we get the exact information that something is wrong with the provided credentials, also known as our PSK.

As you can see, the Network Watcher is an easy-to-use SaaS service providing you with the necessary tool set to diagnose and troubleshoot network issues and misconfigurations in your Azure environment.


Speaking at Experts Live Europe


I am honored to be able to speak at Experts Live Europe, formerly System Center Universe Europe, for the fourth time in a row.

The conference takes place in Berlin from August 23rd till August 25th. I will be on-site for the whole conference and will have the following sessions.

I am looking forward to seeing you in Berlin and to having a chat about Microsoft Azure.

If you do not have a ticket yet, go ahead and visit the Experts Live Europe homepage.

-> http://www.expertslive.eu/



Running the Azure Stack Development Kit on Azure


After I posted a picture on Twitter showing the Azure Stack Development Kit running on Azure, I got several questions about how I did it.

First of all, it was just a demonstration of our new nested virtualization capability available with the Dv3 and Ev3 VM sizes. I recommend deploying the ASDK on-premises on hardware.

-> https://azure.microsoft.com/en-us/blog/introducing-the-new-dv3-and-ev3-vm-sizes/
-> https://azure.microsoft.com/en-us/blog/nested-virtualization-in-azure/

I started with creating a new Azure VM with Windows Server 2016 Datacenter as OS and the Standard_E16s_v3 as size on premium storage.

I added a 256 GB data disk (P20) for the CloudBuilder.vhdx and a 512 GB data disk (P20) for the four 128 GB data disks I created later for the Azure Stack deployment.

[Screenshot: ASDKonAzure05]

After the VM deployment finished, I took the following steps to get the Azure Stack Development Kit up and running.

  • Added the Hyper-V role to the Azure VM
  • Created a NAT Virtual Switch with the following PowerShell script
$SwitchTest=Get-VMSwitch -Name "NATSwitch" -ErrorAction SilentlyContinue
if($SwitchTest -eq $null){
    New-VMSwitch -Name "NATSwitch" -SwitchType Internal -Verbose
    $NIC=Get-NetAdapter|Out-GridView -PassThru
    New-NetIPAddress -IPAddress 172.16.0.1 -PrefixLength 24 -InterfaceIndex $NIC.ifIndex
    New-NetNat -Name "NATSwitch" -InternalIPInterfaceAddressPrefix "172.16.0.0/24" -Verbose
}
  • Created a VM with 12 cores and 96 GB memory and added the OS disk later.
  • Downloaded the Azure Stack Development Kit
  • Copied the CloudBuilder.vhdx onto the drive with 256 GB storage and attached it as OS disk to the newly created VM.
  • Created four 128 GB data disks on the drive with 512 GB storage and attached them to the VM (see the PowerShell sketch after this list).
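A minimal Hyper-V PowerShell sketch for the last two steps could look like this. The VM name MASPOC matches the nested virtualization command later in this post; the drive letters and paths are assumptions for the two data drives.

# Attach the CloudBuilder.vhdx as OS disk to the nested VM
Add-VMHardDiskDrive -VMName "MASPOC" -Path "F:\CloudBuilder.vhdx" -Verbose
# Create the four 128 GB data disks and attach them to the VM
1..4 | ForEach-Object {
    $disk=New-VHD -Path "G:\MASPOC-Data$_.vhdx" -SizeBytes 128GB -Dynamic
    Add-VMHardDiskDrive -VMName "MASPOC" -Path $disk.Path -Verbose
}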

[Screenshot: ASDKonAzure02]

Before starting the VM, mount the CloudBuilder.vhdx and edit the following files.

  • X:\CloudDeployment\Roles\PhysicalMachines\Tests\BareMetal.Tests.ps1
  • X:\CloudDeployment\Configuration\Roles\Infrastructure\BareMetal\OneNodeRole.xml

In the BareMetal.Tests.ps1 search for the term $isVirtualizedDeployment and remove the -not in the if statement. Otherwise the deployment will fail.

[Screenshot: ASDKonAzure03]

In the OneNodeRole.xml edit the following lines.

<MinimumSizeOfDataDisksGB>100</MinimumSizeOfDataDisksGB>

<MinimumSizeOfSystemDiskGB>100</MinimumSizeOfSystemDiskGB>

I have set them to 100 to be able to deploy with smaller disk sizes.

The last step is enabling nested virtualization for the VM.

Set-VMProcessor -VMName MASPOC -ExposeVirtualizationExtensions $true -Verbose

Now the VM can be started. After logging in, set the VM’s IP address to 172.16.0.2 with subnet mask 255.255.255.0, gateway 172.16.0.1 and a DNS server, for example 8.8.8.8.
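Inside the VM this can be done with a few cmdlets; the interface alias Ethernet is an assumption, so check it with Get-NetAdapter first.

# Static IP configuration for the nested VM behind the NAT switch
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 172.16.0.2 -PrefixLength 24 -DefaultGateway 172.16.0.1
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 8.8.8.8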

The next step is to start the Azure Stack deployment from C:\CloudDeployment\Setup.

Use the following command to kick it off.

.\InstallAzureStackPOC.ps1 -InfraAzureDirectoryTenantName xxx.onmicrosoft.com -NATIPv4Subnet 172.16.0.0/24 -NATIPv4Address 172.16.0.3 -NATIPv4DefaultGateway 172.16.0.1 -Verbose

The deployment will fail at the step that tries to register the necessary Azure AD applications. The reason for this is that the BGPNAT VM cannot access the Internet. I have worked around it by creating another NAT Virtual Switch on the nested VM.

$SwitchTest=Get-VMSwitch -Name "NATSwitch" -ErrorAction SilentlyContinue
if($SwitchTest -eq $null){
    New-VMSwitch -Name "NATSwitch" -SwitchType Internal -Verbose
    $NIC=Get-NetAdapter|Out-GridView -PassThru
    New-NetIPAddress -IPAddress 192.168.0.1 -PrefixLength 24 -InterfaceIndex $NIC.ifIndex
    New-NetNat -Name "NATSwitch" -InternalIPInterfaceAddressPrefix "192.168.0.0/24" -Verbose
}

I changed the Virtual Switch on the secondary NIC of the BGPNAT VM, logged into the BGPNAT VM, and set the IP address to 192.168.0.2 with subnet mask 255.255.255.0 and gateway 192.168.0.1.

[Screenshot: ASDKonAzure04]

Afterwards run the following command to rerun the deployment.

.\InstallAzureStackPOC.ps1 -Rerun -Verbose

The deployment should complete successfully and you are now able to log in to the Azure Stack Administration Portal and User Portal from the nested VM.

[Screenshots: ASDKonAzure06, ASDKonAzure07]


Running Azure Functions on Azure Germany with the Functions Runtime


Azure Functions is currently not available in Azure Germany. If you want to start with Functions in Azure Germany, then you have to use the Functions Runtime as a workaround.

-> https://docs.microsoft.com/en-us/azure/azure-functions/functions-runtime-overview

-> https://www.microsoft.com/en-us/download/details.aspx?id=55239

The Functions Runtime can be installed on Windows Server 2016 or on a Windows 10 client machine with the Creators Update. We will install the Functions Runtime on an IaaS VM in Azure Germany.

Because the Runtime requires a SQL database, I have used the “Free License: SQL Server 2016 SP1 Express on Windows Server 2016” image and installed the Functions Runtime after the successful VM deployment.

[Screenshot: FunctionsMCD]

You should install both roles, Functions Management and Worker, on the same machine.

[Screenshot: FunctionsMCD2]

After the setup has completed successfully, start the configuration wizard.

[Screenshot: FunctionsMCD3]

The first view is the General section, which says: Configuration required. So you have to go through each of the sections Database, Credential, File Share and IIS step by step to get the Functions Runtime up and running.

In the Database section enter the computer name of the VM as the server name and the SQL credentials you defined during the VM deployment through the Azure portal. Afterwards hit apply to finish the database configuration.

[Screenshot: FunctionsMCD4]

Specify the username and password for the file share owner and user.

[Screenshot: FunctionsMCD5]

Just hit apply to provision the file share on the server.

[Screenshot: FunctionsMCD6]

The last step is to configure IIS. I have used the default inputs and the option to generate a self-signed certificate.

[Screenshot: FunctionsMCD7]

Now, we can open a web browser and enter the server name to access the Functions Runtime portal. Keep in mind that the Functions Runtime portal does not require any authentication and can be accessed anonymously.

[Screenshots: FunctionsMCD8, FunctionsMCD9]

Until Azure Functions becomes available in Azure Germany, the only way to run Functions in Azure Germany is to use the Functions Runtime on an Azure VM as a workaround.


Speaking at Microsoft Ignite in Orlando


Microsoft Ignite in Orlando starts in a few weeks and I am happy to be one of the many speakers attending Microsoft Ignite.

Even though this is my second Ignite, as I attended the first one in 2015, it is my first time being a speaker at Ignite.

[Image: MSIgnite_TechCMU_JoinSession_01_FB]

If you are coming to Orlando, I invite you to join my session on Friday at 9 a.m. about “Azure IaaS design and performance considerations: Best practices and learnings from the field”.

-> https://myignite.microsoft.com/sessions/53416?source=sessions

Any questions about the session before Ignite? Join the Microsoft Tech Community to start a discussion.

-> https://techcommunity.microsoft.com/t5/Microsoft-Ignite-Content-2017/Azure-IaaS-design-and-performance-considerations-Best-practices/m-p/98636

Throughout the conference week you can find me from Monday till Thursday at the Expo at different booths like Windows Server in Azure, Azure Compute and Azure Resource Manager. I am looking forward to getting in touch with you at Ignite and having a chat about Azure.

See you in Orlando!


Integrate auto-shutdown configuration in ARM template deployments for Azure VMs


In November 2016 Microsoft introduced the auto-shutdown feature to Azure VMs which was originally available in Azure DevTest Labs.

-> https://azure.microsoft.com/en-us/updates/set-auto-shutdown-within-a-couple-of-clicks-for-vms-using-azure-resource-manager/

The auto-shutdown feature can be simply enabled through the Azure portal after a VM is deployed.

[Screenshot: auto-shutdown01]

If you are doing a lot of VM deployments, an Azure Resource Manager template is the way to go. Enabling and configuring the auto-shutdown feature through ARM templates is easy.

Just add the following lines to your ARM template for VM deployments and modify them appropriately. You should place the lines of code in the resources section of the VM resource.

[Screenshot: auto-shutdown02]

{
  "apiVersion": "[providers('Microsoft.DevTestLab','labs').apiVersions[0]]",
  "type": "microsoft.devtestlab/schedules",
  "name": "[concat('shutdown-computevm-',parameters('vmName'),copyIndex(parameters('numerationOfVMs')))]",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/',concat(parameters('vmName'),copyIndex(parameters('numerationOfVMs'))))]"
  ],
  "properties": {
    "status": "Enabled",
    "taskType": "ComputeVmShutdownTask",
    "dailyRecurrence": {
      "time": "1900"
    },
    "timeZoneId": "W. Europe Standard Time",
    "notificationSettings": {
      "status": "Disabled",
      "timeInMinutes": 15
    },
    "targetResourceId": "[resourceId('Microsoft.Compute/virtualMachines',concat(parameters('vmName'),copyIndex(parameters('numerationOfVMs'))))]"
  }
}

As you can see, I am always using the latest API version. This approach is a bit risky, because things in the template can break quickly with a new API version released on the Azure platform. The latest API version right now is 2017-04-26-preview and this version works without any issues.

Beside the API version you also have to modify name, dependsOn and targetResourceId to get it running in your ARM template.

Name is the name of the resource as shown in the Azure portal or through API calls after the resource is successfully deployed.

[Screenshot: auto-shutdown03]

Last but not least, we need dependsOn to specify the dependency on the VM, so the VM is deployed first before we apply the configuration. The targetResourceId directly links to the VM we would like to configure with the auto-shutdown feature.
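Deploying the template afterwards works as usual. A small sketch with assumed file and resource group names:

# Deploy the VM template including the auto-shutdown schedule - names are examples
New-AzureRmResourceGroupDeployment -Name "vm-autoshutdown-deployment" -ResourceGroupName "DevVMs" -TemplateFile ".\azuredeploy.json" -TemplateParameterFile ".\azuredeploy.parameters.json" -Verbose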

Happy deploying!


Enabling Azure Disk Encryption on Windows Server 2016 Server Core in Azure


Beside the Windows Server 2016 Datacenter image, Microsoft also provides an image with Windows Server 2016 Datacenter – Server Core in Azure.

[Screenshot: ServerCoreADE1]

If you are using the Server Core image and want to enable Azure Disk Encryption for the VM, you will see the following error message.

New-AzureRmResourceGroupDeployment : 14:27:53 - Resource Microsoft.Compute/virtualMachines/extensions 'azst-crp4/BitLocker' failed with message '{
"status": "Failed",
"error": {
"code": "ResourceDeploymentFailure",
"message": "The resource operation completed with terminal provisioning state 'Failed'.",
"details": [
{
"code": "VMExtensionProvisioningError",
"message": "VM has reported a failure when processing extension 'BitLocker'. Error message: \"Failed to configure bitlocker as expected. Exception: The system cannot find the file
specified, InnerException: , stack trace:    at System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo)\r\n   at System.Diagnostics.Process.Start(ProcessStartInfo
startInfo)\r\n   at Microsoft.Cis.Security.BitLocker.BitlockerIaasVMExtension.BitlockerPrep.RunCommand(String cmd, String args)\r\n   at
Microsoft.Cis.Security.BitLocker.BitlockerIaasVMExtension.BitlockerPrep.SplitOSVolumeForBitlocker(Boolean& rebootRequired)\r\n   at
Microsoft.Cis.Security.BitLocker.BitlockerIaasVMExtension.BitlockerOperations.PrepareMachineForBitlocker(Boolean& rebootInitiated)\r\n   at
Microsoft.Cis.Security.BitLocker.BitlockerIaasVMExtension.BitlockerExtension.PrepareMachineForBitlocker(Boolean& rebootInitiated)\r\n   at
Microsoft.Cis.Security.BitLocker.BitlockerIaasVMExtension.BitlockerExtension.HandleEncryptionOperations()\r\n   at
Microsoft.Cis.Security.BitLocker.BitlockerIaasVMExtension.BitlockerExtension.OnEnable()\"."
}
]
}
}'
At C:\Volume\OneDrive\Sync\Azure\ARM\Azure_Global\setupADE.ps1:31 char:13
+             New-AzureRmResourceGroupDeployment -Name $deploymentGUID. ...
+             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : NotSpecified: (:) [New-AzureRmResourceGroupDeployment], Exception
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.NewAzureResourceGroupDeploymentCmdlet

The official solution is described in the Azure documentation.

-> https://docs.microsoft.com/en-us/azure/security/azure-security-disk-encryption-tsg#troubleshooting-windows-server-2016-server-core

You do not need to take steps 1 to 3 first. You only need to copy the four files from a 2016 Datacenter installation onto the 2016 Datacenter – Server Core installation. Afterwards you can follow steps 1 to 3 as stated in the documentation, or directly enable ADE for the VM via PowerShell or an ARM template.

