You are here

Feed aggregator

SQLCAT at Microsoft Ignite 2016

MSDN Blogs - 3 hours 54 min ago

Hi all, we are looking forward, as we are sure you are, to a great Microsoft Ignite 2016! Three members of the SQLCAT team will be in Atlanta (September 26th-30th) and of course we would love to see everyone and talk about SQL Server, Azure SQL DB, and SQL DW.

We have 3 sessions where we will share some great customer experiences and learnings with SQL Server 2016. Mike Weiner (@MikeW_SQLCAT) will co-present with early adopters from Attom Data Solutions and ChannelAdvisor:

BRK2231 Understand how ChannelAdvisor is using SQL Server 2016 to improve their business (Wednesday, September 28th from 2-2:45PM EST)

BRK3223 Understand how Attom Data Solutions is using SQL Server 2016 to accelerate their business (Thursday, September 29th from 9-9:45AM EST)

Then, be sure to keep your Friday open for Arvind Shyamsundar (@arvisam) and Denzil Ribeiro’s (@DenzilRibeiro) presentation:

BRK3094 Accelerate SQL Server 2016 to the max: lessons learned from customer engagements (Friday, September 30th from 12:30-1:45PM EST)

When we are not presenting we’ll primarily be at the Expo in Hall B at the Microsoft Showcase – Data Platform and IoT, eager to talk to you! Look forward to seeing everyone there!

Arvind, Denzil and Mike

[Windows/Mac] Setting up a Xamarin environment (and how to check whether it is already installed)

MSDN Blogs - 14 hours 6 min ago


You can develop native apps with Xamarin on both Windows and Mac.

Xamarin was originally a product of Xamarin Inc., and a license used to be expensive at roughly 250,000 yen per developer per year, but after Microsoft acquired the company in spring 2016 it became free.

↑ iOS/Android/Windows apps

This article explains how to set up a Xamarin environment on each platform.

  • IDE: Visual Studio 2015 or later (Windows) / Xamarin Studio (Mac)
  • Android app development: possible on both Windows and Mac
  • iOS app development: possible on Windows if you have a remotely connected Mac (*); possible on Mac
  • Windows app development: possible on Windows (UWP/Win 8.1); not possible on Mac

(*) What “possible if you have a remotely connected Mac” means: only Xcode ships the iOS SDK, and Xcode is a Mac application. A “remotely connected Mac” means using a Mac as a “build host.” When you create an iOS project in VS, the Mac agent starts up and tries to discover nearby Macs that allow remote connections.

Setting up the environment (Mac)

Starting on a Mac (Xamarin Studio).

On a Mac it is simple: if the Xamarin Studio IDE is installed, Xamarin is installed. For iOS app development, Xcode (which includes the iOS SDK) must also be installed.

For how to install Xamarin Studio, please read:
Trying Xamarin on Mac! From installation to first run (completely free) [Getting Started Xamarin on Mac]

Setting up the environment (Windows)

Starting on Windows (Visual Studio).

  • If Visual Studio itself is not installed yet:
    • Click “Download VS” under “Visual Studio Community”. (It’s free.)
  • If you already have Visual Studio installed:
    • Check whether Xamarin is already installed in your copy of VS. The steps follow below.
Checking whether Xamarin is already installed in your copy of VS


Under “Templates” → “C#” → “Cross-Platform”, the Xamarin templates (such as “Blank App”) appear.

If they are not there, Xamarin is not yet installed in your environment.

Below is how to install Xamarin into your copy of VS. To repeat: it’s free.

Installing Xamarin into your copy of VS


“Apps & features” → “Visual Studio {edition, e.g. Community} 2015 with Update 2/3” → “Modify”


When you check the Xamarin box here, some other items (such as a specific version of the UWP SDK) get checked automatically. They are required, so leave them checked and proceed with the installation.

Reference links on setting up the environment / Once your environment is ready


Azure News on Friday (KW38/16)

MSDN Blogs - 15 hours 30 min ago

This week again brought plenty of news about the Microsoft Azure platform. Here is more information on each item…

Latest news:

  • 23.09. Service Fabric SDK and Runtime for version 5.2 released
    The new Azure Service Fabric SDK, version 5.2, is available
  • 22.09. Service Bus client 3.4.0 is now live
    New Service Bus client library, version 3.4.0 – important especially for Event Hubs users
  • 22.09. Microsoft Azure Storage samples – cross platform quick starts and more
    Sample code and samples for programming against Azure Storage
  • 22.09. Umbraco uses Azure SQL Database Elastic Pools for thousands of CMS tenants in the cloud
    Azure SQL Database Elastic Pools as the foundation for Umbraco as a Service on Microsoft Azure
  • 21.09. Azure ML, as Part of the IoT Suite, Now Available in Azure Germany
    Azure ML is now also available in the Azure cloud in Germany
  • 21.09. Azure Stream Analytics support for IoT Hub Operations Monitoring
    Azure Stream Analytics now also supports analyzing the operation of an IoT Hub (device telemetry, device identity, etc.)
  • 21.09. Microsoft Azure Germany now available via first-of-its-kind cloud for Europe
    Microsoft Cloud Deutschland has finally arrived! Two Azure data centers in Germany under data trusteeship
  • 20.09. Project Bletchley – Blockchain infrastructure made easy
    Project Bletchley – provisioning blockchain infrastructure on Microsoft Azure with Azure Resource Manager
  • 20.09. Announcing the release of Azure Mobile Apps Node SDK v3.0.0
    Azure Mobile Apps Node SDK v3.0.0 available

New videos:

  • 23.09. Episode 214: Hockey App and Azure App Insights with Evgeny Ternovsky and Josh Weber
    HockeyApp and Azure App Insights introduced – analytics and app telemetry for client and cloud apps
  • 23.09. What is Microsoft Azure Stack?
    A short portrait of Azure Stack
  • 22.09. Azure Functions and the evolution of web jobs
    Everything worth knowing about Azure Functions – serverless computing on Microsoft Azure – in just under 10 minutes
  • 22.09. PowerShell on Linux – Azure Demo
    Azure PowerShell on Linux
  • 22.09. Get Started with Azure Portal
    First steps with the Azure Portal
  • 22.09. Create a Linux Virtual Machine
    Setting up a Linux VM with Azure Virtual Machines – a brief overview in 4 minutes
  • 21.09. PowerShell Tools for Visual Studio 2015
    Using Azure PowerShell with Visual Studio
  • 21.09. Unified application model
    The unified application model with Azure Resource Manager – deploying apps to the Azure cloud or Azure Stack
  • 20.09. Tuesdays with Corey: More Azure Portal stuff with Vlad
    Corey Sanders with news about the Azure Portal

How to update Xamarin in Visual Studio

MSDN Blogs - Fri, 09/23/2016 - 23:49

This post describes how to update Xamarin for Visual Studio.

  1. Open Visual Studio.
  2. In the menu bar at the top, open “Tools”.
  3. Select “Options”.
  4. Select “Xamarin” (near the bottom).
  5. Select “Other”.
  6. Click “Check Now”.
  7. Click “Download”.

This launches the Xamarin installer. Close Visual Studio and follow the installer’s instructions. (Basically, it is just “accept” and repeated clicks on OK.)

Integrate Azure logs to QRadar

MSDN Blogs - Fri, 09/23/2016 - 23:12

Please read Azure log integration. It covers the high-level architecture of the integration.

This blog is for anyone who has Azure resources and wants their logs integrated into QRadar SIEM (Security Information and Event Management).

By following the steps outlined here, you will be able to integrate the following logs into QRadar:

  1. Azure Activity logs
  2. Azure Security Center Alerts

At the time of this blog post, there are about 200 events from Azure Activity logs that will successfully map to categorized events in QRadar. The number of supported events will increase as we work closely with IBM to add more events to the DSM (Device Support Module).

Step 1 – Install the Azure DSM released from the IBM QRadar 7.2 download


Step 2 – Uninstall previous version of Azure log integration

If you have a previous version of Azure log integration installed, you will need to uninstall it first. Uninstalling it will remove all sources that are registered.

Steps to uninstall:

  1. Open the command prompt as administrator and cd into c:\Program Files\Microsoft Azure Log Integration.
  2. Run the command

            Azlog removeazureid

  3. In Control Panel –> Add/Remove Programs –> Microsoft Azure log integration –> Uninstall

Install Azure log integration
  1. Download Azure log integration and follow the install instructions.
  2. Open a command prompt as administrator and cd into c:\Program Files\Microsoft Azure Log Integration.
  3. Run “azlog.exe powershell”. This will open a PowerShell window.
  4. In the PowerShell window, run the following:

              Add-AzLogEventDestination -Name QRadarConsole1 -SyslogServer <QRadar console IP> -SyslogFormat LEEF

             Name is a friendly name for the destination.

             SyslogServer is the IP address of the QRadar console; <QRadar console IP> above is a placeholder for that address (you can also specify a syslog port if necessary).

  5. Run the command

           .\azlog.exe createazureid

  6. Run the command

           .\azlog authorize <SubscriptionID>

  7. QRadar will autodiscover the source. You can verify this on the Log Activity tab of your QRadar console. Provide the SubscriptionID in the Quick Filter, or if you want to search across all subscriptions, provide ‘azure’ as the text in the quick filter. Note that only about 200 events are currently categorized.








[Sample Of Sept. 24] How to create a Hello World 3D holographic app with Unity

MSDN Blogs - Fri, 09/23/2016 - 18:27

Sample :

This sample demonstrates how to create a Hello World 3D holographic app with Unity.

You can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension. They give you the flexibility to search for samples, download samples on demand, manage the downloaded samples in a centralized place, and be automatically notified about sample updates. If this is the first time you have heard of the Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage.

Angular 2 with @types declaration files

MSDN Blogs - Fri, 09/23/2016 - 14:11

– To install TypeScript declaration files, you originally had to install DefinitelyTyped packages using NuGet
– Then the node tool TSD followed and after that came Typings
– The next step in the evolution of type acquisition is @types

– @types requires only npm to be managed
– In enterprise environments, this means one less tool to manage and one less proxy setting to worry about (Typings doesn’t use the node proxy settings)

– Currently, as of 2016-09-23, the official Angular 2 tutorials use a Typings file to install the declaration files

– To use @types instead:
1. Make sure TypeScript 2 is installed in Visual Studio (if you are using it)
2. Make sure you are using "typescript": "^2.0.3" (or newer) in the package.json file
3. Remove "typings" from "devDependencies" in the package.json file
4. Add the following to "devDependencies":
4a. "@types/core-js": "*"
4b. "@types/node": "*"
4c. If you want to run Jasmine tests, add: "@types/jasmine": "*"
5. Delete the “typings.json” file
6. Run npm install
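
Putting steps 2–4 together, the relevant portion of package.json would end up looking something like the sketch below (include "@types/jasmine" only if you run Jasmine tests; any other entries in your devDependencies stay as they are):

```json
{
  "devDependencies": {
    "typescript": "^2.0.3",
    "@types/core-js": "*",
    "@types/node": "*",
    "@types/jasmine": "*"
  }
}
```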

Machine Learning Workshop in Charlotte, NC

MSDN Blogs - Fri, 09/23/2016 - 14:08

Hello everyone,

Just two weeks ago in Charlotte, NC, the Microsoft Dynamics and Cortana Intelligence team held a two-day workshop for our Industry Partners at the Microsoft campus off Arrowood Road.  In attendance was an array of partners who provide Dynamics AX and CRM customers with actionable business intelligence, forecasting, and integration support.  We had a busy two-day schedule in which the class learned about and participated hands-on in the integration of Cortana Intelligence Cognitive APIs into Dynamics AX.  We also built and operationalized machine learning models using Azure Machine Learning; we wired up data from AX through the entity store and Azure Data Factory to make a data pipeline capable of retraining that model (as purchases and catalogs change, so should the model); and we had a fun time playing with Event Hub, Stream Analytics, and Power BI to show how streaming IoT data can be filtered and processed using Azure.

It was exciting to see the participation, the questions and the overall engagement from the class, and I wanted to thank folks for making the trip from far-off places (or not so far) to attend.

Our Workshop Class in Charlotte, NC

Thanks to all who participated and made this a fun and engaging two days!

Materials from the course are publicly available here:

Non-public materials from the course (such as source code and sample applications) can be requested.


Failover Clustering @ Ignite 2016

MSDN Blogs - Fri, 09/23/2016 - 13:35

I am packing my bags getting ready for Ignite 2016 in Atlanta, and I thought I would post all the cluster and related sessions you might want to check out.  See you there!
If you couldn’t make it to Ignite this year, don’t worry: you can stream all these sessions online.

  • BRK3196 – Keep the lights on with Windows Server 2016 Failover Clustering
  • BRK2169 – Explore Windows Server 2016 Software Defined Datacenter
Storage Spaces Direct for clusters with no shared storage:
  • BRK3088 – Discover Storage Spaces Direct, the ultimate software-defined storage for Hyper-V
  • BRK2189 – Discover Hyper-converged infrastructure with Windows Server 2016
  • BRK3085 – Optimize your software-defined storage investment with Windows Server 2016
  • BRK2167 – Enterprise-grade Building Blocks for Windows Server 2016 SDDC: Partner Offers
Storage Replica for stretched clusters:
  • BRK3072 – Drill into Storage Replica in Windows Server 2016
SQL Clusters
  • BRK3187 – Learn how SQL Server 2016 on Windows Server 2016 are better together
  • BRK3286 – Design a Private and Hybrid Cloud for High Availability and Disaster Recovery with SQL Server 2016

Elden Christensen
Principal PM Manager
High Availability & Storage

New project type has XAML forms by default

MSDN Blogs - Fri, 09/23/2016 - 13:08

Xamarin has some great documentation. The article at [ ] shows how to get started with Xaml by using the Blank Xaml App ( Xamarin.Forms Portable ) template.

Xamarin New Forms Portable

The Blank Xaml App template sets up a project without any Xaml, just C# classes for creating a Xamarin Forms project. You must manually add a Xaml Page to this template, as this article shows [].

In my article at [] I show how to add an App.xaml to the Blank Xaml App template as well.  App.xaml, using XML, gives a developer XAML-based resources and style definitions, imho a much cleaner and more manageable approach than C#-based repositories.

Xamarin New Forms PCL App

Manually adding in XAML pages works very well.  Functional? Yes! Tedious? Absolutely. Wish there was an easier way?  Of course.

Good news. XAML templates are here.

Update your Xamarin to at least Xamarin 4.1, and new templates are available. My very favorit-est is the Blank Xaml App ( Xamarin.Forms Portable ). Blank Xaml App versus Blank App? Blank Xaml App gives us App.xaml and App.xaml.cs, as well as MainPage.xaml and MainPage.xaml.cs files.

Developers no longer have to manually add in the XAML page, or an App.xaml. Great news for those of us who prefer the pointy thing approach < xaml /> to form layout versus C#-driven layout.

Prior to Xamarin 4.1, we were forced to use the Blank App template and manually add XAML forms if we wanted to use XAML instead of code.

Note this is also available in Xamarin Studio on the Mac, via File > New > Solution > Xamarin.Forms > Forms App.   Keep a sharp watch, as the default page in Xamarin Studio for Mac will match the name of the project instead of being “MainPage.xaml”.

Xamarin Forms keeps getting better. I can’t wait to see where it goes. But in the meantime, the new default layout with Xaml based pages gives us pointy types a much better launchpad for our projects than the previous C# only options.

Happy coding and tight lines!

2016-09-23 Release Update

MSDN Blogs - Fri, 09/23/2016 - 10:59

Release Notes:

  • Service Bus long-polling trigger (for recurrence intervals under 30 seconds, the trigger fires instantly when a queue or topic message arrives)
  • Service Bus peek/lock and complete (rolling out over next week)
  • Terminate action
  • Can now replace the body object inputs with a full reference (e.g. “body”: { “name”: “foo” } => “body”: “@triggerBody()” )
  • Updated icon for Logic Apps
  • Visual Studio Tools General Availability (9/26)
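
In other words, the body-replacement item above lets an action's inputs reference the trigger output wholesale instead of spelling the object out inline. A minimal sketch of the two forms, using the payload from the note (the surrounding keys here are illustrative only, not part of the Logic Apps schema):

```json
{
  "inlineBody":    { "body": { "name": "foo" } },
  "referenceBody": { "body": "@triggerBody()" }
}
```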

Bug Fixes:

  • SQL Stored procedure data would sometimes be lost on reload
  • Case-insensitive references to dynamic properties
  • Action palette dropdown would get cut off for larger flows
  • Run monitoring view had issues rendering some actions
  • Recurrence values weren’t showing in monitoring view

Karachiites to witness First Official Xamarin Dev Day Tomorrow

MSDN Blogs - Fri, 09/23/2016 - 10:48

Karachi’s First Official Xamarin Dev Day provides attendees an introduction to cross-platform app development using Xamarin, with a hands-on learning experience. The meetup is unique in that it is the first official Xamarin Dev Day in Karachi, Pakistan, and that it brings the best speakers together on one platform, both online and offline. The meetup is officially organized under the Karachi Mobile Developers group.

The need for such a meetup was identified by Mr. Ahad Khan, a Xamarin Certified Mobile Developer, who initiated the conversation with Xamarin officials. As the local Microsoft field team, when we learned of the activity we facilitated it by introducing Microsoft MVP Daniyal Saleem to bring the flavor of Microsoft Azure to the meetup.

Here’s a review of the agenda with speaker profiles:

  • 9:00am – 9:58am: Welcome and marking attendance
  • 10:00am – 10:10am: Keynote speech by Andy Larkin – CEO at Grappetite
  • 10:10am – 10:40am: Intro to Xamarin by Ahad Khan – Xamarin Certified Mobile Developer
  • 10:40am – 11:00am: Tea break
  • 11:00am – 11:30am: Xamarin Forms by Arif Imran – Xamarin Certified Mobile App Developer
  • 11:30am – 1:00pm: Azure Cloud Integration by Daniyal Saleem – Microsoft Most Valuable Professional on Visual Studio & Development Technologies
  • 1:00pm – 1:30pm: Azure App demo by Syed Asad Ali – Senior Software Engineer
  • 1:30pm – 2:50pm: Lunch and prayers break
  • 2:50pm – 4:50pm: Group activity / Collaborative Programming


Getting Started with Docker and Container Services

MSDN Blogs - Fri, 09/23/2016 - 10:46

Docker is the world’s most popular containerization platform.

Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.

Containers are similar to virtual machines (VMs) in that they provide a predictable and isolated environment in which software can run. Because containers are smaller than VMs, they start almost instantly and use less RAM. Moreover, multiple containers running on a single machine share the same operating system kernel.

Docker is based on open standards, enabling Docker containers to run on all major Linux distributions as well as Windows Server 2016. To simplify the use of Docker containers, Azure offers the Azure Container Service (ACS), which hosts Docker containers in the cloud and provides an optimized configuration of popular open-source scheduling and orchestration tools, including DC/OS and Docker Swarm.

In the following tutorial, you will package a Python app and a set of color images in a Docker container. Then you will run the container in Azure and run the Python app inside it to convert the color images to grayscale. You will get hands-on experience creating Azure Container Services and remoting into them to execute Docker commands and manipulate Docker containers.

In this blog, you will learn how to:

  • Create an Azure Container Service
  • Remote into an Azure Container Service using SSH
  • Create Docker images and run Docker containers in Azure
  • Run jobs in containers created from Docker images
  • Stop Docker containers that are running
  • Delete a container service


The following are required to complete this hands-on lab:

  • An active Microsoft Azure subscription.
  • PuTTY (Windows users only). You can either install the full package using the MSI installer, or install just two binaries: putty.exe and puttygen.exe.
  • Docker client (also known as the Docker Engine CLI) for Windows, macOS, or Linux

To install the Docker client for Windows, open and copy the executable file named “docker.exe” from the “docker” subdirectory to a local folder. To install the Docker client for macOS or Linux, open and copy the executable file named “docker” from the “docker” subdirectory to a local folder. (You can ignore the other files in the “docker” subdirectory.)

You do not need to install the Docker client if you already have Docker (or Docker Toolbox) installed on your machine.


Task 1: Create an SSH key pair

Before you can deploy Docker images to Azure, you must create an Azure Container Service. And in order to create an Azure Container Service, you need a public/private key pair for authenticating with that service over SSH. In this exercise, you will create an SSH key pair. If you are using macOS or Linux, you will create the key pair with ssh-keygen. If you are running Windows instead, you will use a third-party tool named PuTTYGen.

Unlike macOS and Linux, Windows doesn’t have an SSH key generator built in. PuTTYGen is a free key generator that is popular in the Windows community. It is part of an open-source toolset called PuTTY, which provides the SSH support that Windows lacks.

  1. If you are running Windows, skip to Step 6. Otherwise, proceed to Step 2.

  2. On your Mac or Linux machine, launch a terminal window.

  3. Execute the following command in the terminal window:

    ssh-keygen

    Press Enter three times to accept the default output file name and create a key pair without a passphrase. The output will look something like this:

    Generating a public/private key pair

  4. Use the following commands to navigate to the hidden “.ssh” subdirectory created by ssh-keygen and list the contents of that subdirectory:

    cd ~/.ssh
    ls

    Confirm that the “.ssh” subdirectory contains a pair of files named id_rsa and The former contains the private key, and the latter contains the public key. Remember where these files are located, because you will need them in subsequent exercises.

  5. Proceed to Exercise 2. The remaining steps in this exercise are for Windows users only.

  6. Launch PuTTYGen and click the Generate button. For the next few seconds, move your cursor around in the empty space in the “Key” box to help randomize the keys that are generated.

    Generating a public/private key pair

  7. Once the keys are generated, click Save public key and save the public key to a text file named public.txt. Then click Save private key and save the private key to a file named private.ppk. When prompted to confirm that you want to save the private key without a passphrase, click Yes.

    Saving the public and private keys

You now have a pair of files containing a public key and a private key. Remember where these files are located, because you will need them in subsequent steps.

Task 2: Create an Azure Container Service

Now that you have an SSH key pair, you can create and configure an Azure Container Service. In this exercise, you will use the Azure Portal to create an Azure Container Service for running Docker containers.

  1. Open the Azure Portal in your browser. Select + New -> Containers -> Azure Container Service.

    Creating a container service

  2. Click the Create button in the “Azure Container Service” blade. In the “Basics” blade, enter “dockeruser” (without quotation marks) for User name and the public key that you generated in Exercise 1 for SSH public key. Select Create new under Resource group and enter the resource-group name “ACSLabResourceGroup” (without quotation marks). Select the location nearest you under Location, and then click the OK button.

    Basic settings

  3. In the “Framework configuration” blade, select Swarm as the orchestrator configuration. Then click OK.

    DC/OS and Swarm are popular open-source orchestration tools that enable you to deploy clusters containing thousands or even tens of thousands of containers. (Think of a compute cluster consisting of containers rather than physical servers, all sharing a load and running code in parallel.) DC/OS is a distributed operating system based on the Apache Mesos distributed systems kernel. Swarm is Docker’s own native clustering tool. Both are preinstalled in Azure Container Service, with the goal being that you can use the one you are most familiar with rather than have to learn a new tool.

    Framework configuration settings

  4. In the “Azure Container service settings” blade, set Agent count to 2, Master count to 1, and enter a DNS name in the DNS prefix for container service box. (The DNS name doesn’t have to be unique across Azure, but it does have to be unique to a data center. To ensure uniqueness, include birth dates or other personal information that is unlikely to be used by others working through this lab. Otherwise, you may see a green check mark in the DNS prefix box but still suffer a deployment failure.) Then click OK.

    When you create an Azure Container Service, one or more master VMs are created to orchestrate the workload. In addition, an Azure Virtual Machine Scale Set is created to provide VMs for the “agents,” or VMs that the master VMs delegate work to. Docker container instances are hosted in the agent VMs. By default, Azure uses a standard D2 virtual machine for each agent. These are dual-core machines with 7 GB of RAM. Agent VMs are created as needed to handle the workload. In this example, there will be one master VM and up to two agent VMs.

    Service settings

  5. In the “Summary” blade, review the settings you selected. Then click OK.

    Settings summary

  6. In the ensuing “Purchase” blade, click the Purchase button to begin deploying a new container service.

  7. Deployment typically takes about 10 minutes. To monitor the deployment, click Resource groups on the left side of the portal to display a list of all the resource groups associated with your account. Then select the ACSLabResourceGroup resource group created for the container service to open a resource-group blade. When “Succeeded” appears under “Last Deployment,” the deployment has completed successfully.

    Click the browser’s Refresh button every few minutes to update the deployment status. Clicking the Refresh button in the resource-group blade refreshes the list of resources in the resource group, but does not reliably update the deployment status.

    Successful deployment

Task 3: Connect to the Azure Container Service

In this step you will establish an SSH connection to the container service you deployed in Exercise 2 so you can use the Docker client to create Docker containers and run them in Azure.

  1. After the container service finishes deploying, return to the blade for the “ACSLabResourceGroup” resource group. Then click the resource named swarm-master-lb-xxxxxxxx. This is the master load balancer for the swarm.

    Opening the master load balancer

  2. Click the IP address under “Public IP Address.”

    The master load balancer’s public IP

  3. Hover over the DNS name under “DNS Name.” Wait for a Copy button to appear, and then click it to copy the master load balancer’s DNS name to the clipboard.

    Copying the DNS name

  4. If you are running Windows, skip to Step 9. Otherwise, proceed to Step 5.

  5. On your Mac or Linux machine, launch a terminal window (or return to the one you opened in Exercise 1 if it’s still open).

  6. Execute the following command to SSH in to the Azure Container Service, replacing dnsname with the DNS name on the clipboard:

    ssh dockeruser@dnsname -p 2200 -L 22375:

    The purpose of the -L switch is to forward traffic transmitted through port 22375 on the local machine (that’s the port used by the docker command you will be using shortly) to port 2375 at the other end. Docker Swarm listens on port 2375. The -p switch instructs SSH to use port 2200 rather than the default 22. The load balancer you’re connecting to listens on port 2200 and forwards the SSH messages it receives to port 22 on the master VM.

  7. If asked to confirm that you wish to connect, answer yes. Once connected, you’ll see a screen that resembles the one below.

    Observe that you didn’t have to enter a password. That’s because the connection was authenticated using the public/private key pair you generated in Exercise 1. Key pairs tend to be much more secure than passwords because they are cryptographically strong.

    Successful connection

  8. Leave the terminal window open and proceed to Exercise 4. The remaining steps in this exercise are for Windows users only.

  9. Launch PuTTY and paste the DNS name on the clipboard into the Host Name (or IP address) box. Set the port number to 2200 and type “ACS” (without quotation marks) into the Saved Sessions box. Click the Save button to save these settings under that name.

    Why port 2200 instead of port 22, which is the default for SSH? Because the load balancer you’re connecting to listens on port 2200 and forwards the SSH messages it receives to port 22 on the master VM.

    Configuring a PuTTY session

  10. In the treeview on the left, click the + sign next to SSH, and then click Auth. Click the Browse button and select the private-key file that you created in Exercise 1.

    Entering the private key

  11. Select Tunnels in the treeview. Then set Source port to 22375 and Destination to, and click the Add button.

    The purpose of this is to forward traffic transmitted through port 22375 on the local machine (that’s the port used by the docker command you will be using shortly) to port 2375 at the other end. Docker Swarm listens on port 2375.

    Configuring the SSH tunnel

  12. Click Session at the top of the treeview. Click the Save button to save your configuration changes, and then click Open to create a secure SSH connection to the container service. If you are warned that the server’s host key isn’t cached in the registry and asked to confirm that you want to connect anyway, click Yes.

    Opening a connection to the container service

  13. An SSH window will open and prompt you to log in. Enter the user name (“dockeruser”) that you specified in Exercise 2, Step 2. Then press the Enter key. Once connected, you’ll see a screen that resembles the one below.

    Observe that you didn’t have to enter a password. That’s because the connection was authenticated using the public/private key pair you generated in Exercise 1. Key pairs tend to be much more secure than passwords because they are cryptographically strong.

    Successful connection

Now that you’re connected, you can run the Docker client on your local machine and use port forwarding to execute commands in the Azure Container Service. Leave the SSH window open while you work through the next exercise.

Task 4: Create a Docker image and run it in a container

Now comes the fun part: creating a Docker image and running it inside a container in Azure. If you haven’t already installed the Docker client, refer to the instructions at the beginning of this lab to download and install the Docker client for your operating system.

  1. Open a terminal window (macOS or Linux) or a Command Prompt window (Windows) and navigate to the “resources” subdirectory of this lab. It contains the files that you will build into a container image.

    Take a moment to examine the contents of the “resources” subdirectory. It contains a file named Dockerfile, which contains the commands Docker will use to build a container image. It also contains a Python script named, a subdirectory named “input,” and a subdirectory named “output.” The latter subdirectory is empty. The “input” subdirectory contains several color JPG images. The Python script enumerates the files in the “input” subdirectory, converts them to grayscale, and writes the grayscale images to the “output” subdirectory.

  2. If you are running macOS or Linux, execute the following command in the terminal window:

    export DOCKER_HOST=tcp://localhost:22375

    If you are running Windows instead, execute this command in the Command Prompt window:

    set DOCKER_HOST=tcp://localhost:22375

    This command directs the Docker client to send output to localhost port 22375, which you redirected to port 2375 in the Azure Container Service in the previous exercise. Remember that port 2375 is the one Docker Swarm listens on. The commands that you execute in the next few steps are typed into a local terminal window, but they are executed in the container service you deployed to the cloud using the SSH tunnel that you established in the previous exercise.

  3. Be sure you’re in the “resources” subdirectory. Then execute the following command to create a container image named “ubuntu-convert” containing the Python script as well as the “input” and “output” subdirectories and their contents. Be sure to include the period at the end of the command:

    docker build --no-cache --tag ubuntu-convert .
  4. Wait for the command to finish executing. (It will take a few minutes for Docker to build the container image.) Then execute the following command to list the images that are present, and confirm that the list contains an image named “ubuntu-convert:”

    docker images
  5. Now execute the following command to start the container image running and name the container “acslab:”

    docker run -dit --name acslab ubuntu-convert /bin/bash
  6. The container is now running. The next task is to execute the Python script in the root of the file system in the running container. To do that, execute the following command:

    docker exec -it acslab /
  7. If the Python script ran successfully, the “output” subdirectory in the container should contain grayscale versions of the JPG files in the “input” subdirectory. Use the following command to copy the files from the “output” subdirectory in the container to the “output” subdirectory on the local machine:

    docker cp acslab:/output .

    Because you are still in the lab’s “resources” subdirectory, this command will copy the grayscale images to the “resources” subdirectory’s “output” subdirectory.

  8. Stop the running container by executing the following command:

    docker stop acslab
  9. Type the following command to delete the “acslab” container:

    docker rm acslab
  10. List the contents of the “output” subdirectory under the “resources” subdirectory that you are currently in. Confirm that it contains eight JPG files copied from the container.

  11. Open one of the JPG files and confirm that it contains a grayscale image like the one pictured below.

    Grayscale image copied from the container
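
    The conversion the lab’s Python script performs is not shown above. As an illustration only, the grayscale step can be sketched in pure Python using the standard ITU-R 601 luma weights (the real script presumably uses an imaging library, and the function name here is hypothetical):

    ```python
    def rgb_to_grayscale(pixels):
        """Convert a sequence of (R, G, B) tuples to grayscale intensities
        using the ITU-R 601 luma weights: Y = 0.299*R + 0.587*G + 0.114*B."""
        return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]

    # A pure red, a pure green, and a pure blue pixel:
    print(rgb_to_grayscale([(255, 0, 0), (0, 255, 0), (0, 0, 255)]))  # → [76, 150, 29]
    ```

    An imaging library applies the same per-pixel weighting when converting an RGB image to a single-channel grayscale image.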

Congratulations! You just created a Docker container image and ran it in a Docker container hosted in Azure. You can close the SSH window now if you’d like because you are finished using the SSH connection.

Task 5: Suspend the master VM

When virtual machines are running, you are being charged — even if the VMs are idle. Therefore, it’s advisable to stop virtual machines when they are not in use. You will still be charged for storage, but that cost is typically insignificant compared to the cost of an active VM.

Your container service contains a master VM that needs to be stopped when you’re not running containers. The Azure Portal makes it easy to stop virtual machines. VMs that you stop are easily started again later so you can pick up right where you left off. In this exercise, you will stop the master VM to avoid incurring charges for it.

  1. In the Azure Portal, open the blade for the “ACSLabResourceGroup” resource group. Click the virtual machine whose name begins with swarm-master to open a blade for the master VM.

    Opening a blade for the master VM

  2. Click the Stop button to stop the master VM. Answer Yes when prompted to verify that you wish to stop it.

    Stopping the master VM

There is no need to stop the agent VMs. They are part of an Azure Virtual Machine Scale Set and are automatically spun up and down as needed by the master VM. Note that if you wish to run containers again in this container service, you will need to restart the master VM.

Task 6: Delete the resource group

Resource groups are a useful feature of Azure because they simplify the task of managing related resources. One of the most practical reasons to use resource groups is that deleting a resource group deletes all the resources it contains. Rather than delete those resources one by one, you can delete them all at once.

In this exercise, you will delete the resource group created in Task 2 when you created the container service. Deleting the resource group deletes everything in it and prevents any further charges from being incurred for it.

  1. In the Azure Portal, open the blade for the “ACSLabResourceGroup” resource group. Then click the Delete button at the top of the blade.

    Deleting a resource group

  2. Because deleting a resource group is a permanent action that can’t be undone, you must confirm that you want to delete it. Do so by typing the name of the resource group into the box labeled TYPE THE RESOURCE GROUP NAME. Then click Delete to delete the resource group and everything inside it.

    Confirming resource-group deletion

After a few minutes, you will be notified that the resource group was deleted. If the deleted resource group still appears in the “Resource groups” blade, click that blade’s Refresh button to update the list of resource groups. The deleted resource group should go away.


The Azure Container Service makes it easy to run apps packaged in Docker containers in the cloud without having to manage servers or install a container stack yourself. Container images are smaller than VM images, they start faster, and they typically cost less since a single VM can host multiple container instances. More importantly, Docker containers can be hosted in other cloud platforms such as Amazon Web Services (AWS). If you want to avoid being tied to a single cloud platform, containers are a great way to achieve that independence.

Additional labs and resources for Academics and Students

What does #HackTheClassroom mean to educators?

MSDN Blogs - Fri, 09/23/2016 - 09:00

If you’re reading this before 4pm on Saturday 24th September, there is still time to register for #HackTheClassroom!

#HackTheClassroom is very nearly upon us! We’re all looking forward to tomorrow’s live event hosted by Anthony Salcito, and featuring a fantastic line-up of guest speakers. [Anthony has written a post of his own, ahead of the event – check it out here]. As you should hopefully know by now, the theme of the day is ‘small steps to big impact’, and we already have thousands of educators from all over the world who have registered for the session, to share their own stories and learn new ideas from others.

For many of them, this is not their first #HackTheClassroom session! In February, during the E2 Educator Exchange in Budapest, Hungary, many of our UK MIEEs took part in another #HackTheClassroom event. Kristy Griffin (MIEE) wrote a report of the whole exchange for the May 2016 edition of #TheFeed, which can be found here:

#MIEExpert Report: E2 – Educator Exchange

However, #HackTheClassroom is all about connecting and sharing ideas and experiences with educators from all over the world, and learning from them in the process. In keeping with that theme, we’d like to share some posts from the Microsoft in Education blog, written by educators from the USA, which put into words what #HackTheClassroom means to them:

Guest post by Jim Pedrech: What Hack the Classroom Means to Me

Guest post by Tammy Dunbar: What Hack the Classroom Means to Me 

Guest post by Jeanne Parent: What Hack the Classroom Means to Me

Guest post by Aaron Maurer: What Does Hack the Classroom Means to Me?

Be sure to join the conversation on Twitter tomorrow using the hashtag #HackTheClassroom!

How do I start learning or using Azure?

MSDN Blogs - Fri, 09/23/2016 - 08:27

A question that gets asked very often…and you are not the first or only one!

There are several options to get started; a 30-day trial is just one, but perhaps you are looking for more. The options are constrained by the authentication protocols required by an organization, i.e. whether a work identity or a personal/Microsoft account is used to access the Azure subscription. If work authentication is of prime importance, #3 is the way to go. Outlined below are a few options that could be very helpful in your Azure journey.

(1)   Monthly Azure credits for Visual Studio subscribers: Up to $150 a month per user (most popular for starters, I used this immensely)

(2)   Enterprise Dev/Test: If the organization already has an Enterprise agreement, it can use Dev/Test services at subsidized rates.

(3)   Pay-As-You-Go Dev/Test: If you have been using MSDN/Visual Studio free credits and need to start a team project at subsidized rates, without owning an Enterprise subscription yet.

(4)   Azure in Education: Using Azure in your research or teaching an advanced course at the university? Read more details and specs here

(5)   Azure4Research: (Click Apply) Azure for Research is an award program to get your research started, with grants available up to $20K for 12 months.

Happy learning!

How can I connect my premises or data center to my Azure subscription?

MSDN Blogs - Fri, 09/23/2016 - 08:15

Network architecture between your institution’s premises or data center and your Azure subscription

There are two key elements to the network architecture; however, it is imperative to understand that private peering is primarily required for IaaS-based services and for strict compliance reasons around data security. Most PaaS and SaaS services provide data access over a RESTful interface, where data is encrypted via HTTPS/TLS.

  • Connection between university premises and Cloud subscription: At a high level, there are 3 options to connect between your premises/data center and your Azure subscription, consider these as good-better-best (in that sequence) for bandwidth and performance aspects.
    1. Site to Site VPN (Good): An IPsec-encrypted S2S VPN is a good starter option for establishing secure connectivity between university premises or a data center and Azure, and several gateway options are available for deploying the encrypted tunnel (Basic, Standard, and High Performance). For a select few institutions (e.g. an R1 institution) this tunnel is built through Internet2 (which is more protected than the public internet but still shared with other schools), so network performance is subject to fluctuations depending on Internet2 usage/bandwidth. There is no QoS SLA provided by Microsoft. The key limitation is that network capacity is capped at 100 Mbps for Basic and Standard gateways and 200 Mbps for the High Performance gateway, so this works well as a starter option. You can have up to 10 Basic/Standard tunnels and up to 30 High Performance tunnels under one Azure subscription; a few universities have deployed as many as 12-15 tunnels before considering more advanced options.
    2. Third-party network appliance from the Azure Marketplace (Better): The Azure Marketplace offers network appliances from Barracuda, Cisco, Palo Alto, and many others, and these go beyond the 200 Mbps Azure gateway limit depending on the specific model you choose to deploy (some can go up to 1.4 Gbps). These appliances provide several additional features such as firewalling and redundant connections, but come at a premium charge payable to the marketplace vendor. The connection is still an encrypted VPN tunnel built over Internet2 or the public internet, and is primarily used by customers that have outgrown S2S VPN capabilities or need larger bandwidth but are not ready to move to ExpressRoute just yet. There is no QoS SLA provided by Microsoft on this option either.
    3. ExpressRoute (Best): Azure ExpressRoute is a private connection between Azure data centers and infrastructure on your premises or in a colocation environment. ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a dedicated private connection facilitated by a connectivity provider, with bandwidth options ranging from 50 Mbps to 10 Gbps. With ExpressRoute you can establish connections to all Azure services, Office 365, and CRM Online. Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through a connectivity provider at a colocation facility. ExpressRoute is like a toll road: you pay an additional cost and get reliable, predictable performance for enterprise-level applications that are latency sensitive, along with a secure passage for data that does not traverse the shared internet. It is an add-on to Internet2/the public internet and does not use either of those resources. This option is backed by a QoS SLA from Microsoft and is highly recommended for VoIP (Skype), video streaming, and research use cases. Read more details here, FAQs here, and pricing.
  • Network design of the cloud subscription (considering security, policies, firewalls, resiliency, and performance): This is a very important element of your cloud architecture, and Microsoft has provided various reference architectures that can be readily deployed using JSON templates or PowerShell with little or no modification. These range from single-VM, multiple-VM, N-tier, and reliable N-tier architectures to high-availability (multiple-region) architectures. Read all the reference architectures and detailed considerations here.


How are other institutions setting up their cloud connectivity? How has it been accomplished?

It differs from case to case and depends on the purpose of leveraging Azure in the short to medium term. Institutions leveraging Azure with a research focus (among other services) to advance their ongoing research, or institutions that have a large infrastructure footprint and plan to run their operations in hybrid mode (on-premises and cloud), have decided to install ExpressRoute up front and scale it up and down based on their needs/patterns to manage cost. Other institutions have commenced their cloud journey using Site to Site VPN and added additional gateways as their demands grew, architecting their VPN connections for resilience by building a full-mesh topology or redundant connections between their cloud subscriptions (multiple regions) and data centers (multiple locations). One such example of ExpressRoute usage is here, and other S2S options of full-mesh, daisy-chain, and hub-spoke models are here.

In almost all cases, the network design (both aspects above) is a team exercise between institution’s infrastructure team and Microsoft Azure specialist team white-boarding several scenarios, discussing pros and cons and then finalizing an architecture. In certain cases, customers have requested Azure network specialists to help perform an independent review of the end state architecture.

Please read more specific details on these services, limits and prices on Azure documentation.

Unable to launch Visual Studio 2015 IDE after removing Update3

MSDN Blogs - Fri, 09/23/2016 - 08:04

The Visual Studio 2015 IDE can’t be launched after removing Visual Studio 2015 Update 3. Task Manager shows devenv.exe, but no UI is available on the screen. If the IDE is launched again, the behavior remains the same and Task Manager shows two instances of devenv.exe. Even repairing VS 2015 doesn’t make any difference.

Investigation found that the issue occurs due to mismatched binaries.

       Microsoft.VisualStudio.Imaging.dll C:\windows\Microsoft.Net\assembly\GAC_MSIL\Microsoft.VisualStudio.Imaging\v4.0_14.0.0.0__b03f5f7f11d50a3a\Microsoft.VisualStudio.Imaging.dll          Yes         N/A        Loading disabled by Include/Exclude setting.    30     14.00.25125.3       6A7A0000-6A83C000          [0xF70] devenv.exe          [1] DefaultDomain

Microsoft.VisualStudio.Utilities.dll C:\windows\Microsoft.Net\assembly\GAC_MSIL\Microsoft.VisualStudio.Utilities\v4.0_14.0.0.0__b03f5f7f11d50a3a\Microsoft.VisualStudio.Utilities.dll              Yes         N/A        Loading disabled by Include/Exclude setting.     19     14.00.23107.0        6B310000-6B3FE000    [0xF70] devenv.exe   [1] DefaultDomain

Here the component Microsoft.VisualStudio.Imaging.dll didn’t get reverted back to RTW. As per the VS logs (using VS collect tool):

• Both VSUpdate and MicroUpdate patches are now absent in the “view installed updates” section of ARP
• But, I still see those two patches applied to a small number of MSIs (full list of MSIs they’re applied to is below)

As per the MSIINV output,

{030A6785-C3A9-37DA-8530-444C320629FA} Microsoft Visual Studio 2015 Shell (Minimum)

   Package code: {19E2D78C-2716-44EF-B992-E383D631C8F3}

   Install date: 2016.09.02

        Version: 14.0.23107

      Publisher: Microsoft Corporation

     Assignment: Per-Machine

       Language: 1033

        Package: vs_minshellcore.msi

  Local package: C:\Windows\Installer\87217e.msi

Installed from: C:\ProgramData\Package Cache\{030A6785-C3A9-37DA-8530-444C320629FA}v14.0.23107\packages\vs_minshellcore

    Last source: n;1;C:\ProgramData\Package Cache\{030A6785-C3A9-37DA-8530-444C320629FA}v14.0.23107\packages\vs_minshellcore

       Features: Provider, vs_minshellcore, PID_Validation, PIDGenX_DLL, System.Threading.Tasks.Dataflowx86enu, Servicing_Key

Total features: 6

              6: Local

          Patch: {D174F2C0-A894-495F-B276-C28B52D4DBB4} KB3151378 (Applied)

          Patch: {B9041113-67E7-46A3-BC24-A977D1FD13A1} KB3165756 (Applied)

  Total patches: 2



{14D1CABE-2B5A-3AED-B3A7-42315D062965} Microsoft Visual Studio Enterprise 2015

   Package code: {FE7684DC-63FA-401C-A1BA-A879B3C57FAE}

   Install date: 2016.09.02

        Version: 14.0.23107

      Publisher: Microsoft Corporation

     Assignment: Per-Machine

       Language: 1033

        Package: vs_enterprisecore.msi

  Local package: C:\Windows\Installer\872219.msi

Installed from: C:\ProgramData\Package Cache\{14D1CABE-2B5A-3AED-B3A7-42315D062965}v14.0.23107\packages\enterprisecore

    Last source: n;1;C:\ProgramData\Package Cache\{14D1CABE-2B5A-3AED-B3A7-42315D062965}v14.0.23107\packages\enterprisecore

      Help link:

       Features: Provider, Visual_Studio_Ultimate_x86_enu, Team_Developer_and_Test_tools_x86_enu, Testing_Tools_12153_x86_enu, VsttliteSpecific_Feature, VsttLite_Update1_Feature, VSU_TestTools_Update1_Feature, VSU_TestTools_Update2_Feature, AgileTestWindow_net, ProductRegKeyVSTS_12211_x86_enu, PID_Validation, PIDGenX_DLL, Servicing_Key, Detection_Keys

Total features: 14

             14: Local

          Patch: {B9041113-67E7-46A3-BC24-A977D1FD13A1} KB3165756 (Applied)

  Total patches: 1


{DE064F60-6522-3310-9665-B5E3E78B3638} Microsoft Visual Studio Community 2015

   Package code: {4CCB2524-FCB2-461B-9530-CF2737B95732}

   Install date: 2016.09.02

        Version: 14.0.23107

      Publisher: Microsoft Corporation

     Assignment: Per-Machine

       Language: 1033

        Package: vs_communitycore.msi

  Local package: C:\Windows\Installer\8721d1.msi

Installed from: C:\ProgramData\Package Cache\{DE064F60-6522-3310-9665-B5E3E78B3638}v14.0.23107\packages\communitycore\Setup

    Last source: n;1;C:\ProgramData\Package Cache\{DE064F60-6522-3310-9665-B5E3E78B3638}v14.0.23107\packages\communitycore\Setup

       Features: Visual_Studio_Community_x86_enu, VB_for_VS_7_Pro_11320_x86_enu, Provider, VCsh_for_VS_7_Pro_810_x86_enu, VWD_for_VS_Pro_11324_x86_enu, Testing_Tools_for_Pro_x86_enu, Code_Analysis_Tools_11987_x86_enu, Performance_Tools_11988_x86_enu, TSDevPkg_12650_x86_enu, WinSDK_EULA, VS_Remote_Debugging_x86_enu, PID_Validation, PIDGenX_DLL, SilverlightSL4_Reg, Servicing_Key, Detection_Keys, UnitTest_Agent_12142

Total features: 17

             17: Local

          Patch: {B9041113-67E7-46A3-BC24-A977D1FD13A1} KB3165756 (Applied)

  Total patches: 1


{DF32E41C-24AD-4A87-B43A-B38553B1806E} Visual Studio 2015 Prerequisites

   Package code: {470C3138-DE41-4A82-AF51-621DA2F70582}

   Install date: 2016.09.02

        Version: 14.0.23107

      Publisher: Microsoft Corporation

     Assignment: Per-Machine

       Language: 1033

        Package: VS_Prerequisites_x64_neutral.msi

  Local package: C:\Windows\Installer\872106.msi

Installed from: C:\ProgramData\Package Cache\{DF32E41C-24AD-4A87-B43A-B38553B1806E}v14.0.23107\packages\64bitPrereq\x64

    Last source: n;1;C:\ProgramData\Package Cache\{DF32E41C-24AD-4A87-B43A-B38553B1806E}v14.0.23107\packages\64bitPrereq\x64

      Help link:

       Features: VS_BSLN_enu_amd64_sfx_SETUP, Provider, Visual_Studio_A64_Prereqs_amd64_enu, WinSDK_Registry_x64, Servicing_Key, Detection_Keys

Total features: 6

              6: Local

          Patch: {B9041113-67E7-46A3-BC24-A977D1FD13A1} KB3165756 (Applied)

  Total patches: 1


{66D86CBC-EFCD-3502-A249-F91F775427F8} Microsoft Visual Studio Premium 2015

   Package code: {B5F4BF5C-5086-4A0C-BF4E-C821E0401A7B}

   Install date: 2016.09.02

        Version: 14.0.23107

      Publisher: Microsoft Corporation

     Assignment: Per-Machine

       Language: 1033

        Package: vs_premiumcore.msi

  Local package: C:\Windows\Installer\8721fa.msi

Installed from: C:\ProgramData\Package Cache\{66D86CBC-EFCD-3502-A249-F91F775427F8}v14.0.23107\packages\premiumcore

    Last source: n;1;C:\ProgramData\Package Cache\{66D86CBC-EFCD-3502-A249-F91F775427F8}v14.0.23107\packages\premiumcore

      Help link:

       Features: Provider, Visual_Studio_Premium_x86_enu, VB_for_VS_7_Ent_28_x86_enu, VCsh_for_VS_7_Ent_670_x86_enu, Team_Developer_Tools_11986_x86_enu, Testing_Tools_for_Dev_11989_x86_enu, VSU_UITest_Components_Update1_Feature, TSDevPkg_12650_x86_enu, VsttliteSpecific_Feature, VsttLite_Update1_Feature, SilverlightSL4_Reg, Servicing_Key, Detection_Keys

Total features: 13

             13: Local

          Patch: {B9041113-67E7-46A3-BC24-A977D1FD13A1} KB3165756 (Applied)

  Total patches: 1


What happened:

When VSU3 is uninstalled, it also uninstalls MU3.x (MicroUpdate 3).  That is okay. However, the MU3.x uninstall didn’t remove the MU MSP (patch) that targets many VS MSIs. 

From dd_patch_KB3165756_2060902195618.log:


[1914:1DB8][2016-09-02T19:56:20]i201: Planned package: kb3165756_enu, state: Present, default requested: None, ba requested: None, execute: None, rollback: None, cache: No, uncache: No, dependency: Unregister

[1914:1DB8][2016-09-02T19:56:20]i201: Planned package: kb3165756, state: Present, default requested: None, ba requested: None, execute: None, rollback: None, cache: No, uncache: No, dependency: Unregister


Why did it happen:

Unfortunately, this is how bundles are designed to work for performance reasons.  It is assumed that if a patch bundle (like VS Update) is being removed because it is a related bundle and the related parent bundle is also being removed, then the parent bundle will be uninstalling the product (MSI) so the patch bundle doesn’t need to remove the patch (MSP).  If it didn’t do this, it would take twice as long to uninstall VS when VS Update is also present.  So this performance design works great for the VS & VS Update scenario.


However, this design does not work with Micro Updates which are a patch to a patch.  Since it is assumed that if a patch bundle (like MU3.x) is being removed because it is a related bundle and the related parent bundle (VSUpdate) is also being removed, then the parent bundle will remove the product.  But, VS Update isn’t a product, it is a patch.  So what happens here is that the VS U3 patch gets removed but the MU3.x patch does not get removed since the product (VS RTM) is still present.



Find all the Micro Updates on your machine. Even though the Micro Updates are already uninstalled when VS Update is uninstalled, there is still a copy of the MU setup exes in the SecondaryInstaller cache. Note: the paths will vary depending on the versions of MUs you previously had installed. You can find them by running this command:   dir "%ProgramData%\VS14-KB*.exe" /s


Next, run all of these exes with “/uninstall” to force it to uninstall the MUs again using the below commands.  This time, they will not run in “related bundle” mode so they will actually remove the patch (MSP) and VS files that are patched will go back to their RTM version.


C:\ProgramData\Microsoft\VisualStudio\SecondaryInstaller\14.0\installers\MicroUpdate\2.1\enu\vs14-kb3151378.exe /uninstall


C:\ProgramData\Microsoft\VisualStudio\SecondaryInstaller\14.0\installers\MicroUpdate\3.4\enu\vs14-kb3165756.exe /uninstall


If you find more than one MU, uninstall all of them just to be sure they are all removed.  On my machine, I previously had MU2.1 and MU3.4.  That is why even after uninstalling VS U3 some MSIs still had 2 patches applied. Now the VSU3 and MU3.4 and MU2.1 patches are all removed and Visual Studio IDE can be launched successfully.
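
The search step above can also be scripted. The following Python sketch walks a directory tree and collects the cached Micro Update setup exes (the default root path is an assumption; the /uninstall commands themselves must still be run on each result):

```python
import os

def find_micro_update_installers(root=r"C:\ProgramData"):
    r"""Recursively collect cached Micro Update setup exes (vs14-kb*.exe),
    mirroring: dir "%ProgramData%\VS14-KB*.exe" /s"""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            lowered = name.lower()
            if lowered.startswith("vs14-kb") and lowered.endswith(".exe"):
                matches.append(os.path.join(dirpath, name))
    return matches
```

Each path returned can then be run with the /uninstall switch, as described above.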

Minecraft Education Edition available starting November 1

MSDN Blogs - Fri, 09/23/2016 - 07:37

Learning through play made easy: starting November 1, Minecraft: Education Edition will be available to teachers and educators. The Education Edition of the popular open-world game Minecraft was developed specifically for the classroom and fosters creativity, collaboration, and problem-solving skills.

To get an impression right away, Minecraft: Education Edition is already available to teachers and educators as an early-access release. All the details are here. Since the start of the early-access phase in June, more than 15,000 students and teachers worldwide have played Minecraft: Education Edition. Based on their feedback, not only were further features from Minecraft integrated, but new features were also developed:

• Classroom Mode: This companion app includes a map view of the Minecraft world, a list of all students currently in the world, and a chat window for communication. The app is intended for teachers who want to observe student activity without being in the game themselves.
• Easy Classroom Collaboration: Students can plan projects and solve problems in groups, in pairs, or as a whole class in a virtual classroom. Up to 30 students can play together in one world.
• Non-Player Characters: Teachers can create NPCs (non-player characters) to act as guides for students.
• Camera and Portfolio: These features let students track their progress, collect evidence of their learning achievements, document the development of their projects, and take screenshots of their work.
• Chalkboards: Teachers can use chalkboards in various sizes to communicate learning goals, provide additional information, and give concrete instructions.
• Sign-in: Individual Office 365 logins for students and teachers ensure data security.
• Tutorial World: Helps players navigate the game.
• New features: Minecraft: Education Edition is continuously being expanded, and new features are delivered through updates.

More information is available on the Minecraft: Education Edition website.

Join us for a meetup on “The Pursuit of a Right Career Path in Software Industry” by MVP Arslan Pervaiz

MSDN Blogs - Fri, 09/23/2016 - 07:10

This meetup is free and brought to you by Uninama in collaboration with Microsoft on 24th September 2016 at the 3rd Floor Auditorium, Arfa Software Technology Park, Ferozepur Road, Lahore, from 12:00 noon – 1:00 pm. No prior registration is required, so please walk in.

We’ll be joined by Mr. Arslan Pervaiz, a Microsoft Most Valuable Professional (MVP) to share his journey of becoming a community speaker. Here’s a quick overview of his talk,

  • Pursuit of career path in software industry
  • Microsoft Innovation Center’s role in career counselling
  • Career path dilemma: Management vs Development
  • On-ground, technical conflicts (Open-source vs Closed source)

Speaker’s Profile – Arslan Pervaiz

Arslan Pervaiz works as a Principal Software Engineer at Vopium and is a Microsoft Most Valuable Professional (MVP) in Visual Studio and Development Technologies. He has over five years of experience in the IT industry in software development, covering analysis, design, development, debugging, documentation, and maintenance of Windows, web, and mobile applications using object-oriented concepts, as well as portfolio management.

IIS web-servers running in Windows Azure may reveal their private IP for certain requests.

MSDN Blogs - Fri, 09/23/2016 - 07:09

Internet Information Services (the handy web-server from Microsoft) runs on Windows Server OS but also in the Microsoft Azure cloud. If you are building virtual machines and deploying them to the cloud (IaaS – Infrastructure as a Service) or using Cloud Services from Windows Azure (PaaS – Platform as a Service), you will basically be using an IIS web-server behind the scenes to host your service.

When deployed inside Windows Azure, the virtual machines (IaaS or PaaS) that are running your IIS server are allocated private IP addresses. Windows Azure does the job of forwarding traffic from the public IP address and port that you are using to the private IP address and port combination of the virtual machine(s) that you are hosting in the cloud. The scenario diagram looks a bit like the one shown below:

You request a resource from your service in Azure, and the request is routed to the public IP address the service is hosted on. The request is then routed further by Windows Azure to the private IP address of (one of) the server(s) that host the service. The details of how this is done are beyond the scope of this blog.

What is interesting to note is that we can send some requests to the IIS server which will make it respond with the internal (private IP) that the server has in the Cloud. Some may consider this a disclosure of information that is not intended for the end client, so we may wish to mitigate against this disclosure, but let’s first try to understand what happens.

HTTP (Hypertext Transfer Protocol) currently comes in three versions:

  • Version 1.0 of the protocol specification (the original version)
  • Version 1.1 of the protocol (which is the most widely used)
  • Version 2.0 which is starting to gain traction in today’s web-server world.

The requests that are problematic for this scenario are all sent using the HTTP 1.0 version of the protocol. There are a couple of differences between HTTP 1.0 and HTTP 1.1 but the one that we are interested in here is the fact that in HTTP 1.0 we did not have to specify a ‘host’ http-header in the request.

When sending http requests to a server, we usually type in the name of the site / service we are trying to reach. You would type in
should you be trying to reach my bookmarking service. The ‘’ is the host name, which will be resolved by the browser to the server’s public IP address. Hence the request to the site would look something like this:

GET / HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Encoding: gzip, deflate
Accept-Language: fr-FR,en-US;q=0.5
Connection: Keep-Alive
User-Agent: Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko

Note that Host http-header in the sample above is set to However, to open the TCP connection, the browser also needs to resolve this name into the IP address of the server – which it will do under the covers.

In HTTP 1.0, it is possible to instruct a client (browser or other) to open a TCP connection to a web-server and send a request without sending the Host header, transmitting only the name of the requested resource (which is / (slash) in the above example). If the response from the server contains a redirection to another page or resource (HTTP status codes 301 or 302), the server will specify where the client should redirect to via a Location http-header. This header will contain the internal IP of the IIS server if no Host http-header is provided in the request.

Why is this happening?

When performing a redirect, IIS needs to tell the connecting client where to look for the resource. It builds the URL for the redirection based on the information it has: the incoming request and the settings of the IIS web-site. Since the incoming request does not contain a Host header (as in the case of the HTTP 1.0 request), and the IIS website does not have any host-header mappings set up (and is basically listening for all HTTP traffic on an IP and port combination), the only thing it can be sure of is the IP address and port combination the site is bound to.

It cannot do any reverse DNS lookups, since traffic is forwarded to the private address of the server by Windows Azure (and IIS does not perform reverse DNS queries anyway). Hence, to build the URL for the redirect it needs to perform, it will use the private IP address of the server inside Windows Azure when it issues the response.

You can see this issue happening when using a tool like curl to issue an HTTP 1.0 request. Consider the following command in curl:

curl --http1.0 --header "Accept:" --header "Connection:" --header "Host:" -i -v http://[public-ip]/somepage.aspx

Here, the request is sent to the public IP address of the service running in Azure, and /somepage.aspx is a page that results in a 301 or 302 redirect status code from the server. Since we are suppressing the Host header and indicating HTTP 1.0, we send no information about the name of the site to the IIS server.
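The behaviour can also be reproduced end to end with a small self-contained sketch: a toy local HTTP server that, like IIS in this scenario, falls back to its own address when building the Location header. This is an illustrative stand-in written in Python, not IIS itself, and the paths are placeholders:

```python
import http.server
import socket
import threading

class RedirectHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Mimic the problematic behaviour: with no Host header to go on,
        # fall back to the server's own address, much like IIS falls back
        # to the site's IP binding.
        host = self.headers.get("Host") or ("%s:%d" % self.server.server_address)
        self.send_response(302)
        self.send_header("Location", "http://%s/other.aspx" % host)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Send a bare HTTP 1.0 request with no Host header, as curl does when
# --http1.0 is combined with a suppressed Host header.
with socket.create_connection(server.server_address) as sock:
    sock.sendall(b"GET /somepage.aspx HTTP/1.0\r\n\r\n")
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)
server.shutdown()

response = b"".join(chunks).decode("latin-1")
location = next(l for l in response.splitlines() if l.startswith("Location:"))
print(location)  # the redirect target echoes the server's own IP and port
```

Against a real IIS site bound only to an IP and port, the equivalent request draws out the private IP in exactly the same way.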

How can we work around this issue?

There are a couple of ways to work around the problem. The first is to define an alternateHostName for the server, or for the site you wish to protect. Here are the appcmd and PowerShell commands you can use to set this parameter:

For a website:

appcmd.exe set config "[SiteName]" -section:system.webServer/serverRuntime /alternateHostName:"[AltHostName]" /commit:apphost

Set-WebConfigurationProperty -pspath 'MACHINE/WEBROOT/APPHOST' -location '[SiteName]' -filter "system.webServer/serverRuntime" -name "alternateHostName" -value "[AltHostName]"

For the entire server:

appcmd.exe set config -section:system.webServer/serverRuntime /alternateHostName:"[AltHostName]" /commit:apphost

Set-WebConfigurationProperty -pspath 'MACHINE/WEBROOT/APPHOST' -filter "system.webServer/serverRuntime" -name "alternateHostName" -value "[AltHostName]"
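Whichever tool you use, the setting is persisted in the serverRuntime section of the configuration. The resulting fragment should look roughly like the following (www.example.com is a placeholder for your actual host name):

```xml
<system.webServer>
  <serverRuntime alternateHostName="www.example.com" />
</system.webServer>
```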

You may also use the Configuration Editor in IIS Manager: navigate to the system.webServer/serverRuntime section and set the alternateHostName value there.

This gives the server an extra piece of information for building the redirect URL. Instead of relying only on the IP address of the site's bindings, it will use the alternateHostName value if one is provided.
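Putting the pieces together, the host that ends up in the redirect URL can be sketched as follows. This is an illustrative simplification in Python, not actual IIS code, and the IP shown is a placeholder:

```python
def redirect_host(request_host, alternate_host_name, binding_ip):
    """Pick the host portion of a redirect URL, roughly as described above."""
    if request_host:                # HTTP 1.1 requests always carry a Host header
        return request_host
    if alternate_host_name:         # the serverRuntime workaround, if configured
        return alternate_host_name
    return binding_ip               # fallback: the site's (private) IP binding

# No Host header and no alternateHostName: the private IP leaks.
print(redirect_host(None, None, "10.0.0.4"))
# With the workaround in place, the configured name is used instead.
print(redirect_host(None, "www.example.com", "10.0.0.4"))
```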

The other way to work around this issue is to use URL Rewrite to deny HTTP 1.0 traffic to your site. URL Rewrite is available as a free module from the official IIS downloads site.

You will need to configure a new 'Inbound Rule' in URL Rewrite. Name the rule something meaningful, such as 'Block HTTP 1.0 traffic', and set the match type to 'Match URL'. The 'Requested URL' should be 'Matches the Pattern', the pattern type should be 'Wildcards', and the pattern should be '*' to trap all incoming requests.

In the conditions part of the rule configuration, you need to add a condition to match the {SERVER_PROTOCOL} variable to the ‘HTTP/1.0’ pattern. The SERVER_PROTOCOL variable will be populated by the IIS server based on the HTTP version specified in the incoming request.
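The effect of the condition can be illustrated with a small sketch (Python used for illustration only; the dictionary stands in for the server variables IIS exposes to the rule):

```python
def block_http_10(server_vars):
    """Return True when the request should be aborted, mirroring the
    rewrite rule's condition on the SERVER_PROTOCOL server variable."""
    return server_vars.get("SERVER_PROTOCOL") == "HTTP/1.0"

print(block_http_10({"SERVER_PROTOCOL": "HTTP/1.0"}))  # True: abort
print(block_http_10({"SERVER_PROTOCOL": "HTTP/1.1"}))  # False: allow
```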

Finally, the action that should be taken when such a request is detected is 'Abort Request', which closes the TCP connection to the client by sending back a TCP RST (reset) on the connection.

When looking into the web.config of the site, the resulting Url Rewrite rule is the following:

<rule name="Block HTTP 1.0 Rule" patternSyntax="Wildcard" stopProcessing="true">
  <match url="*" />
  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
    <add input="{SERVER_PROTOCOL}" pattern="HTTP/1.0" />
  </conditions>
  <action type="AbortRequest" />
</rule>

For this rule to work, you will need to make sure that it is the first rule listed in the inbound rules section for your website or web application in the URL Rewrite feature.

If this is not the case, you can use the ‘Move Up’ action button on the right hand side of the IIS console to make sure the rule is the first one to be interpreted on any incoming request, essentially blocking all HTTP 1.0 traffic to the site.

By Paul Cociuba

