MSDN Blogs

Using the new Keyword search option to create Sales Orders in Dynamics AX 2012 R3

Thu, 08/07/2014 - 14:25

In AX 2012 R3 there is a new search feature that you can use to create sales orders. With this new feature you can search based on multiple options such as Sales order, Purchase order, and Phone number, just to name a few. The option that will be discussed in this post is Keyword.

In order to use Keywords in your search you first need to define criteria to be included as keywords.  I will use the Customer type in this example since we are working with creating Sales Orders.

  1. Go to Sales and Marketing > Setup > Search and click on Define criteria.
  2. In the Type dropdown list select Customer.
  3. Click New in the menu bar.  This will populate a new line where you can select the field you want to enable for keyword search.
    1. Some common fields that could be used to search on with keywords are:
      1. AccountNum
      2. City
      3. Name
      4. Address
      5. State
      6. Street
      7. ZipCode
      8. CustGroup
  4. Once the fields are defined click the Refresh button in the menu bar.  This essentially saves and activates the criteria you have defined.  If you add additional criteria at a later time remember to click the Refresh button.

Now that the search criteria are set up, they can be used when creating Sales Orders.

For example, let's say that you have criteria set up for Name and you need to create a Sales Order for a school, but you cannot remember the actual name or the account number. You do, however, know that School is part of the name.

When you click to create the Sales Order, select Keyword as the Search by option and enter school in the search box. The search will return results that have school in any of the fields you have defined in the search criteria. In my case Address is also set as search criteria, and since one of my Customers has School as part of the address, 2 results are displayed.

At this point you can mark the record you want, click Select Customer in the menu bar and continue to create the Sales Order.

 

Dave B.

 

Visual Studio 2013 Goodies

Thu, 08/07/2014 - 12:05

Just a quick post to remind everyone that Visual Studio 2013 Update 3 has now been released.  It has too many features to cover in detail here.  However, in addition to a number of fixes, it also allows you to get rid of the ALL CAPS menus.

 

This:

becomes this:

 

New goodies include better Web tooling (MVC, WebAPI, JSON, CSS) and vastly increased debugging capabilities.  This is a free update, and it is well worth the download time.

 

The REAL reason for this post, however, is to make sure that people know about these add-ins:

I have been doing more development lately, and these tools are really useful. 

 

Productivity Power Tools provides about 25 features useful for general development.  My favorites are:

  • Peek Help (Alt + F1) - invoke context-sensitive help from MSDN and display it in the "Peek Definition" window.
  • Solution Explorer Errors - Solution Explorer now places "squiggles" beneath files, folders, projects, and solutions that contain errors.  Now it is obvious where my errors lie.  Hovering the mouse pointer over a squiggle causes the error messages to pop up in a little window.
  • Structure Visualizer - Vertical lines connect the beginning and end of block constructs, such as if statements.  This is really nice when there are a lot of nested structures or one structure is larger than one screen height.
  • Document Well - Customize many of the colors and behaviors of the tabs in the source editor window.

All of the features are configurable, and can be enabled or disabled as you see fit.  The full feature list is available on the Visual Studio Gallery page.

 

Web Essentials adds enhanced support for Web page development, including CSS (more Intellisense), JavaScript (Minification, Auto-complete braces), and more.  It also enhances BrowserLink so that it is even more useful than before. Check out the web site for a full list of features.

 

 



 

Learn how to create a queue, place and read a message using Azure Service Bus Queues in 5 minutes

Thu, 08/07/2014 - 12:04

This is the first of two posts that will illustrate how to write and read messages from Azure Service Bus queues. The second post will illustrate how to read these messages using Node.js.

What I like about this post is that it only took a few minutes to complete.

This post is here to support an MSDN article that I am writing with Steven Edouard.

Creating the Service Bus Namespace and Queue

The assumption here of course is that you have opened an account with Windows Azure.

The first order of business is to actually create the service. You will need to go to the Azure portal, click on Service Bus, and then New in the lower left corner.

Figure 1: Creating a new service bus queue

Select Queue, then Custom Create.

Figure 2: Selecting Custom Create

You will need to provide a queue name and create a namespace name.

Figure 3: Configuring the service bus queue

We will accept the default values here.

Figure 4: Specifying the queue name and namespace

You can see the created namespace in the portal once it is complete.

Figure 5: Validating the creation of a service bus queue

In the figure above you can click on connection information. Specifically, you will need the connection string. The connection string will be pasted into the app.config file of a Visual Studio project we are about to download.

Figure 6: Viewing connection information

Downloading the Sample

Here is the sample that illustrates how to write to and read from Azure Service Bus Queues. This is the download link:

http://code.msdn.microsoft.com/windowsazure/Getting-Started-Brokered-aa7a0ac3

Figure 7: Downloading the Service Bus Sample

Modifying the Sample

The modifications are quite simple. The only thing that we will need to do is modify the connection string in app.config.

Figure 8: Viewing the solution

App.config

You can see below that we have modified the connection string. You will need to paste in the connection string you obtained from the portal.

Figure 9: Viewing and modifying App.config
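
For reference, here is a minimal sketch of what the relevant appSettings entry in App.config might look like. The key name below is the one the parameterless NamespaceManager.Create() and QueueClient.Create() calls conventionally read; the value is a placeholder for the connection string you copied from the portal:

<appSettings>
    <!-- Placeholder: paste the connection string copied from the Azure portal -->
    <add key="Microsoft.ServiceBus.ConnectionString"
         value="Endpoint=sb://[your-namespace].servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=[your-key]" />
</appSettings>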

Viewing the Code

Let's take a brief look at the actual code that reads and writes to the message queue.

Creating, Writing to, and Reading from the message queue


I appreciate that you took the time to read this post. I look forward to your comments.

Service Bus Queue Sample
using System;
using System.Collections.Generic;
using System.Threading;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public class Program
{

    private static QueueClient queueClient;
    private static string QueueName = "SampleQueue";
    const Int16 maxTrials = 4;

    static void Main(string[] args)
    {

        Console.WriteLine("Creating a Queue");             
        CreateQueue();
        Console.WriteLine("Press anykey to start sending messages ...");
        Console.ReadKey();
        SendMessages();
        Console.WriteLine("Press anykey to start receiving messages that you just sent ...");
        Console.ReadKey();
        ReceiveMessages();
        Console.WriteLine("\nEnd of scenario, press anykey to exit.");
        Console.ReadKey();
    }

    private static void CreateQueue()
    {
        NamespaceManager namespaceManager = NamespaceManager.Create();

        Console.WriteLine("\nCreating Queue '{0}'...", QueueName);

        // Delete if exists
        if (namespaceManager.QueueExists(QueueName))
        {
            namespaceManager.DeleteQueue(QueueName);
        }

        namespaceManager.CreateQueue(QueueName);
    }

    private static void SendMessages()
    {
        queueClient = QueueClient.Create(QueueName);

        List<BrokeredMessage> messageList = new List<BrokeredMessage>();

        messageList.Add(CreateSampleMessage("1", "First message information"));
        messageList.Add(CreateSampleMessage("2", "Second message information"));
        messageList.Add(CreateSampleMessage("3", "Third message information"));

        Console.WriteLine("\nSending messages to Queue...");

        foreach (BrokeredMessage message in messageList)
        {
            while (true)
            {
                try
                {
                    queueClient.Send(message);
                }
                catch (MessagingException e)
                {
                    if (!e.IsTransient)
                    {
                        Console.WriteLine(e.Message);
                        throw;
                    }
                    else
                    {
                        HandleTransientErrors(e);
                    }
                }
                Console.WriteLine(string.Format("Message sent: Id = {0}, Body = {1}", message.MessageId, message.GetBody<string>()));
                break;
            }
        }

    }

    private static void ReceiveMessages()
    {
        Console.WriteLine("\nReceiving message from Queue...");
        BrokeredMessage message = null;

        NamespaceManager namespaceManager = NamespaceManager.Create();
        queueClient = QueueClient.Create(QueueName);
        while (true)
        {
            try
            {
                //receive messages from Queue
                message = queueClient.Receive(TimeSpan.FromSeconds(5));
                if (message != null)
                {
                    Console.WriteLine(string.Format("Message received: Id = {0}, Body = {1}", message.MessageId, message.GetBody<string>()));
                    // Further custom message processing could go here...
                    message.Complete();
                }
                else
                {
                    //no more messages in the queue
                    break;
                }
            }
            catch (MessagingException e)
            {
                if (!e.IsTransient)
                {
                    Console.WriteLine(e.Message);
                    throw;
                }
                else
                {
                    HandleTransientErrors(e);
                }
            }
        }
        queueClient.Close();
    }

    private static BrokeredMessage CreateSampleMessage(string messageId, string messageBody)
    {
        BrokeredMessage message = new BrokeredMessage(messageBody);
        message.MessageId = messageId;
        return message;
    }

    private static void HandleTransientErrors(MessagingException e)
    {
        //If transient error/exception, let's back-off for 2 seconds and retry
        Console.WriteLine(e.Message);
        Console.WriteLine("Will retry sending the message in 2 seconds");
        Thread.Sleep(2000);
    }
}

Running the Sample

The screen below shows messages being written to the message queue.

Figure 10: Viewing queue creation and message sending

As the program progresses, it will display the messages that are read from the message queue. In a future post we will show how to do this from Node.js.

Figure 11: Viewing messages in the queue

When messages are read from the message queue, they are removed. Because we want these messages to remain in the queue so that our Node.js program can read them, you should consider commenting out the code that reads messages from the queue. Alternatively, you can use the Peek() method to read messages without removing them from the queue.
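
If you go the Peek() route, a minimal sketch along the lines of the sample's other methods might look like this. Peek() reads without locking or deleting, so the messages stay available for the Node.js reader:

    private static void PeekMessages()
    {
        queueClient = QueueClient.Create(QueueName);

        // Peek returns the next message without removing it; repeated calls
        // move an internal cursor forward. A null result means nothing is left.
        BrokeredMessage message = queueClient.Peek();
        while (message != null)
        {
            Console.WriteLine(string.Format("Message peeked: Id = {0}, Body = {1}", message.MessageId, message.GetBody<string>()));
            message = queueClient.Peek();
        }

        queueClient.Close();
    }

I appreciate that you took the time to read this post. I look forward to your comments.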

Why can’t I create an app package? Windows Phone Xap vs. Appx

Thu, 08/07/2014 - 11:59

Differences between Windows Phone Silverlight apps and Windows Phone Store apps are a recurring source of confusion for Windows Phone developers.

A frequently asked question on the Windows Phone forums is “Why can’t I create an app package? When I go to the store menu it has only a ‘Launch Windows App Certification Kit…’ option”.

The reason is that the developer is writing a Windows Phone Silverlight app, and app packages are used only by Windows Runtime apps:

Visual Studio Project Type           Package Type   Package Built
--------------------------           ------------   -------------
Silverlight
    Windows Phone Silverlight 8.0    Xap            With project
    Windows Phone Silverlight 8.1    Xap            With project
Universal
    Windows 8.1                      Appx           By Store menu
    Windows Phone 8.1                Appx           By Store menu
    Shared                           N/A            N/A

 

Silverlight apps use xap files which are generated as part of the normal build process. They don’t have a separate packaging step. Build the app and look in the project's bin\<configuration> folder and you’ll find the xap. Build for Release and the bin\Release\<app>.xap file is ready to upload.

Universal apps (both Windows Store and Windows Phone Store apps) need to be built into app packages to be uploaded to the store, and this menu is the way to do that. When you build a Universal app for debugging, the project files are built but not packaged for installation. The app doesn't need to be packaged until you are ready to upload it to the store or to send it to another machine for testing. If you deploy a Universal app's Windows project locally you can examine the staged Appx in the project directory's bin\<configuration>\Appx folder. Windows Phone projects deploy this directly to the phone. The compressed appx isn't needed until you want to deploy the app to another system (e.g. for testing or release), and that file will be created in the project's AppPackages folder by the “Create App Packages…” menu.
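
As a rough map of where each output lands (paths illustrative; <configuration> is typically Debug or Release):

Silverlight:  <project>\bin\<configuration>\<app>.xap    (built with the project)
Universal:    <project>\bin\<configuration>\Appx\        (staged on local deploy of the Windows project)
              <project>\AppPackages\                     (created by the "Create App Packages…" menu)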

With a Windows Store or Windows Phone Store project selected the menu looks like this:

If the solution or Shared project is selected the create options will be disabled:

 

- Rob

Follow the Windows Store Developer Solutions team on Twitter @wsdevsol.

Get advance notice about August 2014 security updates

Thu, 08/07/2014 - 11:33

Today, the Microsoft Security Response Center (MSRC) posted details about the August security updates.

If you have automatic updating turned on, most of these updates will download and install on their own. Sometimes you may need to provide input for Windows Update during an installation. In this case, you'll see an alert in the notification area at the far right of the taskbar—be sure to click it.

In Windows 8, Windows will turn on automatic updating during setup unless you choose to turn it off. To check this setting and turn on automatic updating, open the Search charm, enter Turn automatic updating on or off, and tap or click Settings to find it. 

Learn how to install Windows Updates in Windows 7.

If you are a technical professional

The Microsoft Security Bulletin Advance Notification Service offers details about security updates approximately three business days before they are released. We do this to enable customers (especially IT professionals) to plan for effective deployment of security updates.

Sign up for security notifications

Creating animations with Bing Maps (JavaScript)

Thu, 08/07/2014 - 10:32

Bing Maps is a very powerful mapping platform that is often used for creating engaging user experiences. The fluid interactive maps make for a great canvas when visualizing location based data. In this blog post we are going to take a look at how to make the user experience a little more engaging by adding custom animations that can be used in both web and Windows Store apps.

Full source code for this blog post can be found in the MSDN Code Samples here.

Setting up the base project

In this blog you have the choice of creating a web or a Windows Store app. The web based app will make use of the Bing Maps V7 AJAX control while the Windows Store app will make use of the Bing Maps Windows Store SDK. Both APIs are nearly identical, but there are a couple of minor differences. The main difference is that we will be using different JavaScript references to load the map control into the page. The Windows Store app will reference a local copy of the Bing Maps API while the web app will load a cloud hosted version of the Bing Maps API. Since the APIs are nearly identical, the only differences between the Windows Store and web app will be in the main HTML page that loads the script references needed for the app.

Creating a Web based Project

If you would like to create a web based app open Visual Studio and create a new ASP.NET Web Application called AnimatedMaps.

Next, select the Empty template. We will be creating a basic web app to keep things simple.

Next create a new HTML page called default.html by right clicking on the project and selecting Add → New Item. Create two folders called js and css. In the css folder create a style sheet called default.css. We will use this file to store all our styles for laying out the web app. In the js folder create two JavaScript files. The first JavaScript file will be called AnimatedMap.js and will be used to store our application logic. The second JavaScript file will be called AnimationModule.js and will be used to store the code for an animation module that can be used with Bing Maps. At this point your project should look like this:

Creating a Windows Store Project

If you would like to create a Windows Store app then create a new JavaScript Windows Store project. Select the Blank Template and call the project AnimatedMaps.

Add a reference to the Bing Maps SDK. To do this, right click on the References folder and press Add Reference. Select Windows -> Extensions and then select Bing Maps for JavaScript. If you do not see this option ensure that you have installed the Bing Maps SDK for Windows Store apps.

Next right click on the js folder and select Add -> New Item. Create two new JavaScript files called AnimatedMaps.js and AnimationModule.js. At this point your project should look like this:

Setting up the UI

The JavaScript and CSS styles that we will use in this app will be identical for both the web and Windows Store app. The default.html file, however, will have some differences. The main difference will be with the JavaScript script files being referenced in the header. We will also make use of the onload event on the body of the page in the web app to load the map. The page itself will consist of a map and a panel above it that contains a number of buttons for testing out the animations.

If you are creating a web based application update the default.html file with the following HTML:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<title></title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

<link href="/css/default.css" rel="stylesheet" />

<!-- Reference To Bing Maps API -->
<script type="text/javascript" src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0"></script>

<!-- Our Bing Maps JavaScript Code -->
<script src="/js/AnimatedMaps.js"></script>

<!-- Use script tag to register the Animation module -->
<script>Microsoft.Maps.registerModule('AnimationModule');</script>
<script type="text/javascript" src="/js/AnimationModule.js"></script>
</head>
<body onload="GetMap();">
<div id='myMap'></div>

<div class="sidePanel">
<input type="button" value="Clear Map" onclick="ClearMap();" /><br /><br />

<span>CSS Pushpin Animations</span><br />
<input type="button" value="Scale on Hover" onclick="ScalingPin();" /><br /><br />

<span>Pushpin Animations</span><br/>
<input type="button" value="Drop Pin" onclick="DropPin();" /><br />
<input type="button" value="Bounce Pin" onclick="BouncePin();" /><br />
<input type="button" value="Bounce 4 Pins After Each Other" onclick="Bounce4Pins();" /><br /><br />

<span>Path Animations</span><br />
<input type="button" value="Move Pin Along Path" onclick="MovePinOnPath();" /><br />
<input type="button" value="Move Pin Along Geodesic Path" onclick="MovePinOnPath(true);" /><br />
<input type="button" value="Move Map Along Path" onclick="MoveMapOnPath();" /><br />
<input type="button" value="Move Map Along Geodesic Path" onclick="MoveMapOnPath(true);" /><br />
<input type="button" value="Draw Path" onclick="DrawPath();" /><br />
<input type="button" value="Draw Geodesic Path" onclick="DrawPath(true);" />
</div>
</body>
</html>

If you are creating a Windows Store app update the default.html file with the following HTML:

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>AnimatedMaps</title>

<!-- WinJS references -->
<link href="//Microsoft.WinJS.2.0/css/ui-dark.css" rel="stylesheet" />
<script src="//Microsoft.WinJS.2.0/js/base.js"></script>
<script src="//Microsoft.WinJS.2.0/js/ui.js"></script>

<!-- AnimatedMaps references -->
<link href="/css/default.css" rel="stylesheet" />
<script src="/js/default.js"></script>

<!-- Bing Maps references -->
<script type="text/javascript" src="ms-appx:///Bing.Maps.JavaScript//js/veapicore.js"></script>
<script type="text/javascript" src="ms-appx:///Bing.Maps.JavaScript//js/veapimodules.js"></script>

<!-- Our Bing Maps JavaScript Code -->
<script src="/js/AnimatedMaps.js"></script>

<!-- Use script tag to register the Animation module -->
<script>Microsoft.Maps.registerModule('AnimationModule');</script>
<script type="text/javascript" src="/js/AnimationModule.js"></script>
</head>
<body>
<div id='myMap'></div>

<div class="sidePanel">
<input type="button" value="Clear Map" onclick="ClearMap();" /><br /><br />

<span>CSS Pushpin Animations</span><br />
<input type="button" value="Scale on Hover" onclick="ScalingPin();" /><br /><br />

<span>Pushpin Animations</span><br />
<input type="button" value="Drop Pin" onclick="DropPin();" /><br />
<input type="button" value="Bounce Pin" onclick="BouncePin();" /><br />
<input type="button" value="Bounce 4 Pins After Each Other" onclick="Bounce4Pins();" /><br /><br />

<span>Path Animations</span><br />
<input type="button" value="Move Pin Along Path" onclick="MovePinOnPath();" /><br />
<input type="button" value="Move Pin Along Geodesic Path" onclick="MovePinOnPath(true);" /><br />
<input type="button" value="Move Map Along Path" onclick="MoveMapOnPath();" /><br />
<input type="button" value="Move Map Along Geodesic Path" onclick="MoveMapOnPath(true);" /><br />
<input type="button" value="Draw Path" onclick="DrawPath();" /><br />
<input type="button" value="Draw Geodesic Path" onclick="DrawPath(true);" />
</div>
</body>
</html>

Next open the default.css file and update it with the following styles:

html, body {
width:100%;
height:100%;
margin:0;
padding:0;
}

#myMap {
position:relative;
width:100%;
height:100%;
}

.sidePanel {
position:absolute;
right:10px;
top:200px;
top:calc(50% - 250px);
margin:10px;
width:250px;
height:500px;
border-radius:10px;
background-color:#000;
background-color:rgba(0,0,0,0.8);
color:#fff;
padding:10px;
}

.sidePanel input {
margin:5px 0;
}

If you try to run the application at this point you won’t see too much as the map hasn’t been loaded. We will add the code to load the map to the AnimatedMap.js file. In addition to this we will also include a couple of global variables that we will use later and event handlers for all of the buttons. We will add the logic for clearing the map as well. When we clear the map we will also stop any current animations that might be running. Update the AnimatedMap.js file with the following code:

var map, currentAnimation;

var path = [
new Microsoft.Maps.Location(42.8, 12.49), //Italy
new Microsoft.Maps.Location(51.5, 0), //London
new Microsoft.Maps.Location(40.8, -73.8), //New York
new Microsoft.Maps.Location(47.6, -122.3) //Seattle
];

function GetMap() {
map = new Microsoft.Maps.Map(document.getElementById("myMap"), {
credentials: "YOUR_BING_MAPS_KEY"
});

//Load the Animation Module
Microsoft.Maps.loadModule("AnimationModule");
}

function ClearMap() {
map.entities.clear();

if (currentAnimation != null) {
currentAnimation.stop();
currentAnimation = null;
}
}

function ScalingPin() {
}

function DropPin() {
}

function BouncePin() {
}

function Bounce4Pins() {
}

function MovePinOnPath(isGeodesic) {
}

function MoveMapOnPath(isGeodesic) {
}

function DrawPath(isGeodesic) {
}

//Initialization logic for loading the map control
(function () {
function initialize() {
Microsoft.Maps.loadModule('Microsoft.Maps.Map', { callback: GetMap });
}

document.addEventListener("DOMContentLoaded", initialize, false);
})();

If you run the application you will see the map load up and a bunch of buttons appearing in a panel like this:

Creating Animations using CSS

You can create fairly complex animations in web based apps using CSS3 animations and transitions. However, there are some limitations. The first is that only modern browsers support CSS3; older browsers will ignore these CSS styles. The second limitation is that we can only animate CSS properties.

There are two ways you can make use of CSS3 to animate pushpins in Bing Maps. The first method consists of creating a standard pushpin and setting the typeName property to the name of a CSS class. By doing this Bing Maps will use the typeName value to set the class property of the generated pushpin HTML. For example, if we had a CSS style called “scaleStyle” we could assign this to a pushpin like so:

var pin = new Microsoft.Maps.Pushpin(map.getCenter(), { typeName: 'scaleStyle' });

The second method is to create a custom HTML pushpin and set the CSS class on one of its elements. For example:

var pin = new Microsoft.Maps.Pushpin(map.getCenter(), {
htmlContent: "<div class='scaleStyle'>Custom Pushpin</div>"
});

To try this out, open the default.css file and add the following CSS style. This CSS style is designed to scale an HTML element to twice its size when the mouse hovers over it.

.scaleStyle:hover {
-webkit-transition: 0.2s ease-in-out;
-moz-transition: 0.2s ease-in-out;
-o-transition: 0.2s ease-in-out;
transition: 0.2s ease-in-out;

-webkit-transform: scale(2);
-moz-transform: scale(2);
-o-transform: scale(2);
-ms-transform: scale(2);
transform: scale(2);
}

Next open the AnimatedMaps.js file and update the ScalingPin function with the following code:

function ScalingPin() {
ClearMap();

var pin = new Microsoft.Maps.Pushpin(map.getCenter(), { typeName: 'scaleStyle' });
map.entities.push(pin);
}

If you run the app and press the “Scale on Hover” button a pushpin will appear in the center of the map. If you then hover your mouse over the pushpin you will notice that it grows to be twice its size. When you hover off the pushpin it goes back to its original size. Here is an animated gif that demonstrates this animation:

As we will see later in this blog post, CSS3 is only one way we can animate data on our map. We can create even more powerful animations using JavaScript, which will work in nearly any web browser.

Creating an Animation Module

Bing Maps has a large set of features and functionality. So many, in fact, that it’s unusual for a single application to make use of all of them. With this in mind, if your app doesn’t require certain features, why download all that extra code? The Bing Maps JavaScript APIs use what is called a modular framework. This framework consists of a main API which contains the core set of features, such as the interactive map and pushpins, along with several modules which contain additional features such as directions and venue maps. One benefit of using modules is that it allows you to pick and choose which features you want to load into your application. It also allows you to delay the loading of certain features until they are needed. Take directions, for example: you really don’t need to download that module until the user has pressed a button to calculate directions. In addition to the modules that are built into the Bing Maps platform you can create your own reusable modules as well. In fact, several developers have created modules for Bing Maps and made them available through the open source Bing Maps V7 Modules CodePlex project.

In this section of the blog we are going to create the base of a module that will contain all our animation functionality. Since animations are common and there are a lot of different animation libraries out there, we will give this module a namespace of Bing.Maps.Animations so as to limit the chances of it clashing with other animation libraries that you might want to use in your app. A Bing Maps module at its core is nothing more than a self-contained JavaScript file that makes use of Bing Maps and triggers the Microsoft.Maps.moduleLoaded event on the last line of the file. While we are creating the base module we will also add two local variables to it. The first one will be called _delay and is the amount of time in milliseconds between each frame of the animation. A delay of 30ms is roughly equivalent to 33 frames per second. The second variable will be the radius of the earth in kilometers. We will make use of this constant later in this blog when we look at animations along a path. To create the base module open the AnimationModule.js file and add the following code:

window.Bing = window.Bing || {};
window.Bing.Maps = window.Bing.Maps || {};
window.Bing.Maps.Animations = new function () {
var _delay = 30, //Time in ms between each frame of the animation
EARTH_RADIUS_KM = 6378.1;

};

// Call the Module Loaded method
Microsoft.Maps.moduleLoaded('AnimationModule');

Now if you looked through all the HTML in the default.html file we created earlier you may have noticed that we registered the module and loaded in the code using a script reference like this:

<!-- Use script tag to register the Animation module -->
<script>Microsoft.Maps.registerModule('AnimationModule');</script>
<script type="text/javascript" src="/js/AnimationModule.js"></script>

You may have also noticed in the GetMap function in the AnimatedMap.js file the following code:

//Load the Animation Module
Microsoft.Maps.loadModule("AnimationModule");

This code loads the module if it wasn’t already loaded. We can also pass an options object with a callback that will be fired after the module is loaded. Since we don’t need to worry about our code running before the module is loaded, we don’t need a callback here. At this point we have the base module created; however, it doesn’t do much at the moment as we haven’t exposed any public functions yet. We will add a number of different functions to this module as we take a look at different types of animations.
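
If we did need to run code only once the module is ready, the callback option could be used like this (a small sketch; the anonymous callback body is ours):

//Load the Animation Module and run code once it is ready
Microsoft.Maps.loadModule("AnimationModule", {
callback: function () {
//Safe to use Bing.Maps.Animations from here on
}
});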

Simple Animations using JavaScript

As I mentioned earlier, there are a lot of animation libraries out there that we could make use of, for example the jQuery Effects and WinJS.UI.Animation libraries. However, these are not really related to spatial data and don’t give us all the functionality we will need for some of these animations. For our animations we are going to keep things simple and make use of the setInterval JavaScript function. The setInterval function repeatedly calls a callback function on a set interval specified in milliseconds. In our case we will have an interval time of 30ms. The setInterval function will continue to run for as long as the app is running unless we stop it. For simple animations we will want to run the animation for a specified duration. Since we have a specified duration and a constant delay between each frame, we can easily calculate how many frames will be in the animation. By keeping track of how many frames have been rendered we can calculate the progress of the animation by multiplying the current frame count by the delay and then dividing by the duration. This gives us a decimal number between 0 and 1 for the progress. When the progress is equal to 1 the animation has completed and we can stop it by calling the clearInterval function. Since this is likely to be a common task we can create the following reusable function for simple animations. Add this to the animation module.

function simpleAnimation(renderFrameCallback, duration) {
var _timerId,
_frame = 0;

duration = (duration && duration > 0) ? duration : 150;

_timerId = setInterval(function () {
var progress = (_frame * _delay) / duration;

if (progress > 1) {
progress = 1;
}

renderFrameCallback(progress);

if (progress == 1) {
clearInterval(_timerId);
}

_frame++;
}, _delay);
}

Now that we have a nice reusable function to help us create simple animations, it’s time to create an actual animation. To start off we will look at simple pushpin animations. The first animation will drop the pushpin from a specified pixel height above the map to its location on the map. Here is an animated gif of how this animation will look.

When using Bing Maps the HTML that is generated for the map and pushpins is not directly accessible through the API. It might be tempting to use a trick or two to grab the pushpin DOM element, but it’s not needed. The Pushpin class has an anchor property which is used to specify the offset that aligns the point of the pushpin with the proper location on the map. To create a drop animation we simply need to animate the y value of the anchor. Since this is a linear animation we can easily decrease the y value as the progress of the animation increases.

The second animation will be very similar to the first, but instead of just dropping the pushpin we will have it bounce a couple of times before coming to rest. To accomplish this we need to calculate different values for the height as the progress increases to create this bounce effect. After a bit of playing around with a graphing calculator I came up with the following formula, where progress runs from 0 to 1:

delta = |cos(2.5 · π · progress)| / e^(3 · progress)

The pushpin’s vertical offset is then height · (1 - delta): the cosine term creates the bounces, and the exponential term dampens them so the pin settles at its resting position.

Using this formula to animate the pushpin’s position results in a nice bounce effect as demonstrated in this animated gif:

To help keep things clean we will create the Bing.Maps.Animations.PushpinAnimations namespace for these animations. To do this, add the following code to the animation module.

this.PushpinAnimations = {
Drop: function (pin, height, duration) {
height = (height && height > 0) ? height : 150;
duration = (duration && duration > 0) ? duration : 150;

var anchor = pin.getAnchor();
var from = anchor.y + height;

pin.setOptions({ anchor: new Microsoft.Maps.Point(anchor.x, anchor.y + height) });

simpleAnimation(
function (progress) {
var y = from - height * progress;
pin.setOptions({ anchor: new Microsoft.Maps.Point(anchor.x, y) });
},
duration
);
},

Bounce: function (pin, height, duration) {
height = (height && height > 0) ? height : 150;
duration = (duration && duration > 0) ? duration : 1000;

var anchor = pin.getAnchor();
var from = anchor.y + height;

pin.setOptions({ anchor: new Microsoft.Maps.Point(anchor.x, anchor.y + height) });

simpleAnimation(
function (progress) {
var delta = Math.abs(Math.cos(progress * 2.5 * Math.PI)) / Math.exp(3 * progress);
var y = from - height * (1 - delta);
pin.setOptions({ anchor: new Microsoft.Maps.Point(anchor.x, y) });
},
duration
);
}
};

We can now update our button handlers to make use of these new animations. Open the AnimatedMap.js file and update the DropPin, BouncePin, and Bounce4Pins functions with the following code.

function DropPin() {
ClearMap();

var pin = new Microsoft.Maps.Pushpin(map.getCenter());
map.entities.push(pin);

Bing.Maps.Animations.PushpinAnimations.Drop(pin);
}

function BouncePin() {
ClearMap();

var pin = new Microsoft.Maps.Pushpin(map.getCenter());
map.entities.push(pin);

Bing.Maps.Animations.PushpinAnimations.Bounce(pin);
}

function Bounce4Pins() {
ClearMap();

var idx = 0;

for (var i = 0; i < path.length; i++) {
setTimeout(function () {
var pin = new Microsoft.Maps.Pushpin(path[idx]);
map.entities.push(pin);

Bing.Maps.Animations.PushpinAnimations.Bounce(pin);
idx++;
}, i * 500);
}
}

If you run the application and press the buttons to drop or bounce a pushpin, you will see a pushpin that falls to the center of the map just like the animated gifs we saw before. If you press the button to animate 4 pushpins you will see 4 pushpins added to the map, one after another with a 500ms delay between them, as shown in this animated gif.

Creating Path Animations

The animations we have seen so far have been fairly simple and only run once. Before we dive into path animations it would be useful if we could not only play the animation but also pause or stop it. With a little work we can create a modified version of our simpleAnimation function that supports play, pause, and stop. The following code shows how to do this. Add this to the animation module.

this.BaseAnimation = function (renderFrameCallback, duration) {
var _timerId,
frameIdx = 0,
_isPaused = false;

//Verify value
duration = (duration && duration > 0) ? duration : 1000;

this.play = function () {
if (renderFrameCallback) {
if (_timerId) {
_isPaused = false;
} else {
_timerId = setInterval(function () {
if (!_isPaused) {
var progress = (frameIdx * _delay) / duration;

renderFrameCallback(progress, frameIdx);

if (progress >= 1) {
reset();
}

frameIdx++;
}
}, _delay);
}
}
};

this.pause = function () {
_isPaused = true;
};

this.stop = function () {
reset();
};

function reset() {
if (_timerId != null) {
clearInterval(_timerId);
_timerId = null;
}

frameIdx = 0;
_isPaused = false;
}
};
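
As a quick illustration of the class on its own, here is a hedged sketch that fades a polygon’s fill in over two seconds (it assumes a polygon variable that has already been added to the map):

//Sketch: use BaseAnimation to fade a polygon's fill from transparent to blue
var fadeIn = new Bing.Maps.Animations.BaseAnimation(function (progress) {
var alpha = Math.round(150 * progress);
polygon.setOptions({ fillColor: new Microsoft.Maps.Color(alpha, 0, 100, 200) });
}, 2000);

fadeIn.play();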

We can now use this BaseAnimation class to power our more complex animations. One common type of animation I see developers struggle with when working with maps is animating along a path. To get a sense of the complexity involved, consider the path between two locations on the map. If I asked you to draw the shortest path between these two locations, your first instinct might be to draw a straight line, and visually you would be correct. However, the world is not flat; it is actually an ellipsoid, yet most online maps show the world as a flat 2D rectangle. To accomplish this the map projects the 3D ellipsoid onto the 2D map using what is called a Mercator projection, which stretches the map out at the poles. What this all means is that the shortest distance between two locations on the map is rarely a straight line; it is actually a curved path, commonly referred to as a geodesic path. Here is an image with a path connecting Seattle, New York, London and Italy. The red line connects these locations using straight lines while the purple line shows the equivalent geodesic path.

So which type of line do you want to animate with? Straight line paths are great for generic animations where you want to move things across the screen and only really care about the start and end point. Geodesic lines, on the other hand, are great for when you want the path to be spatially accurate, such as when animating the path of an airplane. It’s worth noting that when you are working with short distances the differences between the two are very minor.

Animating along a straight path is fairly easy. One method is to calculate the latitude and longitude differences between two locations and then divide these values by the number of frames in the animation to get per-frame offset values for latitude and longitude. Then, as each frame is animated, we take the last calculated coordinate and add these offsets to its latitude and longitude properties to get the new coordinate to advance the animation to, as sketched below.
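
In code form, the straight-line approach for a single segment looks something like this sketch (from, to, duration, and delay are assumed values):

//Sketch: per-frame offsets for a straight line between two locations
var frameCount = Math.ceil(duration / delay);
var dLat = (to.latitude - from.latitude) / frameCount;
var dLon = (to.longitude - from.longitude) / frameCount;

//Each frame, advance the last calculated coordinate by the offsets:
//current = new Microsoft.Maps.Location(current.latitude + dLat, current.longitude + dLon);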

Animating along a geodesic path is a bit more difficult. One of our Bing Maps MVPs, Alastair Aitchison, wrote a great blog post on creating geodesic lines in Bing Maps. The process of creating a geodesic line consists of calculating several midpoint locations between two points. This can be done by calculating the distance and bearing between the two locations. Once you have these, you can divide the distance by the number of midpoints you want, and then use the distance to each midpoint along with the bearing between the two end points to calculate the coordinate of each midpoint location. To help us with this type of animation we will create some helper functions to do these calculations. Add the following code to the animation module. These functions allow you to calculate the haversine distance between two locations (the distance along the curvature of the earth), the bearing between two locations, and a destination coordinate given an origin, bearing, and distance.

function degToRad(x) {
return x * Math.PI / 180;
}

function radToDeg(x) {
return x * 180 / Math.PI;
}

function haversineDistance(origin, dest) {
var lat1 = degToRad(origin.latitude),
lon1 = degToRad(origin.longitude),
lat2 = degToRad(dest.latitude),
lon2 = degToRad(dest.longitude);

var dLat = lat2 - lat1,
dLon = lon2 - lon1,
cordLength = Math.pow(Math.sin(dLat / 2), 2) + Math.cos(lat1) * Math.cos(lat2) * Math.pow(Math.sin(dLon / 2), 2),
centralAngle = 2 * Math.atan2(Math.sqrt(cordLength), Math.sqrt(1 - cordLength));

return EARTH_RADIUS_KM * centralAngle;
}

function calculateBearing(origin, dest) {
var lat1 = degToRad(origin.latitude);
var lon1 = origin.longitude;
var lat2 = degToRad(dest.latitude);
var lon2 = dest.longitude;
var dLon = degToRad(lon2 - lon1);
var y = Math.sin(dLon) * Math.cos(lat2);
var x = Math.cos(lat1) * Math.sin(lat2) - Math.sin(lat1) * Math.cos(lat2) * Math.cos(dLon);
return (radToDeg(Math.atan2(y, x)) + 360) % 360;
}

function calculateCoord(origin, brng, arcLength) {
var lat1 = degToRad(origin.latitude),
lon1 = degToRad(origin.longitude),
centralAngle = arcLength / EARTH_RADIUS_KM;

var lat2 = Math.asin(Math.sin(lat1) * Math.cos(centralAngle) + Math.cos(lat1) * Math.sin(centralAngle) * Math.cos(degToRad(brng)));
var lon2 = lon1 + Math.atan2(Math.sin(degToRad(brng)) * Math.sin(centralAngle) * Math.cos(lat1), Math.cos(centralAngle) - Math.sin(lat1) * Math.sin(lat2));

return new Microsoft.Maps.Location(radToDeg(lat2), radToDeg(lon2));
}
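
As a quick sanity check (these helpers are private to the module, so this would have to run inside it), the Seattle to New York leg of our path works out to roughly 3,900 km:

//Roughly 3,900 km along the curvature of the earth
var d = haversineDistance(
new Microsoft.Maps.Location(47.6, -122.3), //Seattle
new Microsoft.Maps.Location(40.8, -73.8)); //New York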

For the path animation we will create a class that extends the base animation class we created earlier. The path animation class takes four parameters:

  • path - The path to animate along, consisting of an array of Microsoft.Maps.Location objects.
  • intervalCallback - A callback function that will be triggered on each frame interval. The interval callback function will receive three parameters: a midpoint location, the index of the last path location that has been passed, and the frame index of the animation.
  • isGeodesic - A Boolean value that indicates whether the path animation should follow the geodesic path or a straight path.
  • duration - The length of time the animation should take to complete.

When the path animation is created it will pre-calculate all the midpoint locations that the animation passes through. As a result, little to no calculation needs to be performed when the animation advances a frame, which should produce a smooth animation. Add the following code for the path animation class to the animation module.

this.PathAnimation = function (path, intervalCallback, isGeodesic, duration) {
var _totalDistance = 0,
_intervalLocs = [path[0]],
_intervalIdx = [0],
_frameCount = Math.ceil(duration / _delay), idx;

var progress, dlat, dlon;

if (isGeodesic) {
//Calculate the total distance along the path in km.
for (var i = 0; i < path.length - 1; i++) {
_totalDistance += haversineDistance(path[i], path[i + 1]);
}
}else{
//Calculate the total distance along the path in degrees.
for (var i = 0; i < path.length - 1; i++) {
dlat = (path[i + 1].latitude - path[i].latitude);
dlon = (path[i + 1].longitude - path[i].longitude);

_totalDistance += Math.sqrt(dlat*dlat + dlon*dlon);
}
}

//Pre-calculate midpoint locations for smoother rendering.
for (var f = 0; f < _frameCount; f++) {
progress = (f * _delay) / duration;

var travel = progress * _totalDistance;
var alpha;
var dist = 0;
var dx = travel;

for (var i = 0; i < path.length - 1; i++) {

if(isGeodesic){
dist += haversineDistance(path[i], path[i + 1]);
}else {
dlat = (path[i + 1].latitude - path[i].latitude);
dlon = (path[i + 1].longitude - path[i].longitude);
alpha = Math.atan2(dlat * Math.PI / 180, dlon * Math.PI / 180);
dist += Math.sqrt(dlat * dlat + dlon * dlon);
}

if (dist >= travel) {
idx = i;
break;
}

dx = travel - dist;
}

if (dx != 0 && idx < path.length - 1) {
if (isGeodesic) {
var bearing = calculateBearing(path[idx], path[idx + 1]);
_intervalLocs.push(calculateCoord(path[idx], bearing, dx));
}else{
dlat = dx * Math.sin(alpha);
dlon = dx * Math.cos(alpha);

_intervalLocs.push(new Microsoft.Maps.Location(path[idx].latitude + dlat, path[idx].longitude + dlon));
}

_intervalIdx.push(idx);
}
}

//Ensure the last location is the last coordinate in the path.
_intervalLocs.push(path[path.length - 1]);
_intervalIdx.push(path.length - 1);

return new Bing.Maps.Animations.BaseAnimation(
function (progress, frameIdx) {

if (intervalCallback) {
intervalCallback(_intervalLocs[frameIdx], _intervalIdx[frameIdx], frameIdx);
}
}, duration);
};

Now that the path animation class is created we can start implementing it. The first animation will move a pushpin along either a straight or geodesic path. Update the MovePinOnPath function in the AnimatedMap.js file with the following code.

function MovePinOnPath(isGeodesic) {
ClearMap();

var pin = new Microsoft.Maps.Pushpin(path[0]);
map.entities.push(pin);

currentAnimation = new Bing.Maps.Animations.PathAnimation(path, function (coord) {
pin.setLocation(coord);
}, isGeodesic, 40000);

currentAnimation.play();
}

If you run the application and press the "Move Pin Along Path" button you will see a pushpin follow a straight line between the path locations. The following animated gif shows what this animation will look like. I’ve added in a red line as a reference of the straight line path.

If you press the "Move Pin Along Geodesic Path" button you will see a pushpin follow a geodesic path between the locations as you can see in the following animated gif. I’ve also included the straight line between the locations as a reference.

The next path animation we will implement will move the map along either a straight or geodesic path. Update the MoveMapOnPath function in the AnimatedMap.js file with the following code.

function MoveMapOnPath(isGeodesic) {
ClearMap();

//Change zooms levels as map reaches points along path.
var zooms = [5, 4, 6, 5];

map.setView({ center: path[0], zoom: zooms[0] });

currentAnimation = new Bing.Maps.Animations.PathAnimation(path, function (coord, idx) {
map.setView({ center: coord, zoom: zooms[idx] });
}, isGeodesic, 100000);

currentAnimation.play();
}

Pressing the "Move Map Along Path" or "Move Map Along Geodesic Path" buttons, you will see the map pan from one location to another, changing zoom levels as it passes each of the path points. I have not included animated gifs for this animation as they ended up being several megabytes in size.

The final path animation we will implement will animate the drawing of the path line. Update the DrawPath function in the AnimatedMap.js file with the following code.

function DrawPath(isGeodesic) {
ClearMap();

var line;

currentAnimation = new Bing.Maps.Animations.PathAnimation(path, function (coord, idx, frameIdx) {
if (frameIdx == 1) {
//Create the line after the first frame so that we have two points to work with.
line = new Microsoft.Maps.Polyline([path[0], coord]);
map.entities.push(line);
}
else if (frameIdx > 1) {
var points = line.getLocations();
points.push(coord);
line.setLocations(points);
}
}, isGeodesic, 40000);

currentAnimation.play();
}

If you run the application and press the "Draw Path" button you will see a pushpin follow a straight line between the path locations. The following animated gif shows what this animation will look like.

If you run the application and press the "Draw Geodesic Path" button you will see a pushpin follow a straight line between the path locations. The following animated gif shows what this animation will look like.

Wrapping Up

In this blog we have seen a number of different ways to animate data on Bing Maps. Let your imagination go wild and create some cool animations. As mentioned at the beginning of this blog post the full source code can be found in the MSDN Code Samples here.

Introducing the Azure PowerShell DSC (Desired State Configuration) extension

Thu, 08/07/2014 - 10:31

Earlier this year Microsoft released the Azure VM Agent and Extensions as part of the Windows Azure Infrastructure Services. VM Extensions are software components that extend the VM functionality and simplify various VM management operations; for example, the VMAccess extension can be used to reset a VM’s password, or the Custom Script extension can be used to execute a script on the VM.

Today, we are introducing the PowerShell Desired State Configuration (DSC) Extension for Azure VMs as part of the Azure PowerShell SDK. You can use the new cmdlets to upload and apply a PowerShell DSC configuration to an Azure VM enabled with the PowerShell DSC extension. The PowerShell DSC extension will call into PowerShell DSC to enact the received DSC configuration on the VM.

If you already have the Azure PowerShell SDK installed, you will need to update to version 0.8.6 or later.

Once you have installed and configured Azure PowerShell and authenticated to Azure, you can use the Get-AzureVMAvailableExtension cmdlet to see the PowerShell DSC extension.

PS C:\> Get-AzureVMAvailableExtension -Publisher Microsoft.PowerShell

Publisher                  : Microsoft.Powershell
ExtensionName              : DSC
Version                    : 1.0
PublicConfigurationSchema  :
PrivateConfigurationSchema :
SampleConfig               :
ReplicationCompleted       : True
Eula                       : http://azure.microsoft.com/en-us/support/legal/
PrivacyUri                 : http://www.microsoft.com/
HomepageUri                : http://blogs.msdn.com/b/powershell/
IsJsonExtension            : True

Executing a simple scenario

One scenario in which this new extension can be used is the automation of software installation and configuration upon a machine’s initial boot-up.

As a simple example, let’s say you need to create a new VM and install IIS on it. For this, you would first create a PowerShell script that defines the configuration (NOTE: I saved this script as C:\examples\IISInstall.ps1):

configuration IISInstall
{
    node ("localhost")
    {
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name = "Web-Server"                      
        }
    }
}

Then you would use Publish-AzureVMDscConfiguration to upload your configuration to Azure storage. Publish-AzureVMDscConfiguration is one of the new cmdlets in the Azure PowerShell SDK. The example below uses all the default values, but later in this post we’ll go over more details of how this works.

PS C:\> Publish-AzureVMDscConfiguration -ConfigurationPath C:\examples\IISInstall.ps1

This cmdlet creates a ZIP package that follows a predefined format that the PowerShell Desired State Configuration Extension can understand and then uploads it as a blob to Azure storage. The ZIP package in the above example was uploaded to

https://examples.blob.core.windows.net/windows-powershell-dsc/IISInstall.ps1.zip

“examples” in this URI is the name of my default Azure storage account, “windows-powershell-dsc” is the default storage container used by the cmdlet, and “IISInstall.ps1.zip” is the name of the blob for the file I just published.

Now my sample configuration is available for VMs to use, so let’s write a script that creates a VM that uses our sample configuration (NOTE: I saved this script as C:\examples\example-1.ps1):

$vm = New-AzureVMConfig -Name "example-1" -InstanceSize Small -ImageName "a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201407.01-en.us-127GB.vhd"

$vm = Add-AzureProvisioningConfig -VM $vm -Windows -AdminUsername "admin_account" -Password "Bull_dog1"

$vm = Set-AzureVMDSCExtension -VM $vm -ConfigurationArchive "IISInstall.ps1.zip" -ConfigurationName "IISInstall" 

New-AzureVM -VM $vm -Location "West US" -ServiceName "example-1-svc" -WaitForBoot

New-AzureVMConfig, Add-AzureProvisioningConfig, and New-AzureVM are the existing Azure cmdlets used to create a VM. The new kid on the block is Set-AzureVMDscExtension:

 

 

$vm = Set-AzureVMDSCExtension -VM $vm -ConfigurationArchive "IISInstall.ps1.zip" -ConfigurationName "IISInstall"

This cmdlet injects a DSC configuration into the VM configuration object ($vm in the example). When the VM boots, the Azure VM agent will install the PowerShell DSC Extension, which in turn will download the ZIP package that we published previously (IISInstall.ps1.zip), execute the “IISInstall” configuration that we included as part of IISInstall.ps1, and then invoke PowerShell DSC by calling the Start-DscConfiguration cmdlet.

Now, let’s go ahead and execute the sample script (NOTE: if you get an error telling you that the VM vhd is not available or you don’t have access to it, that likely means that the image referenced on line 1 of the script has been updated, and you will need to find the new image name. You can do so by enumerating the available images with Get-AzureVMImage and picking the image that you wish to use. See Azure SDK documentation for more details on this. In my case, I will use a 2012-R2 machine).

PS C:\> C:\examples\example-1.ps1

OperationDescription    OperationId                             OperationStatus
--------------------    -----------                             ---------------
New-AzureVM             9cfb922d-db5b-cdd0-9c74-1a4e34b91e28    Succeeded
New-AzureVM             17acca22-c6ff-cb5a-8116-a41ff9764d35    Succeeded

Our sample configuration was very simple: it just installed IIS. As a quick verification that it executed properly, we can log on to the VM and verify that IIS is installed by visiting the default web site (http://localhost):
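
If you would rather verify from a PowerShell prompt on the VM instead of the browser, a quick check along these lines should also work (Get-WindowsFeature ships with Server Manager on Windows Server 2012 R2):

# Run on the VM: InstallState should report Installed
Get-WindowsFeature -Name Web-Server | Select-Object Name, InstallState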

That is the PowerShell DSC Extension in a nutshell.

And now for the gory details…

Publish-AzureVMDscConfiguration

As the previous example illustrated, the first step in using the PowerShell Desired State Configuration Extension is publishing. In this context, publishing is the process of creating a ZIP package that the extension can understand and uploading that package to Azure blob storage. This is accomplished using the Publish-AzureVMDscConfiguration cmdlet.

Why use a ZIP package for publishing? Publish-AzureVMDscConfiguration will parse your configuration looking for Import-DscResource statements and will include a copy of the corresponding modules along with the script that contains your configuration. For example, let’s take a look at the ZIP package produced by a configuration that creates an actual website instead of just installing IIS. This new example is the FourthCoffee website (which you may have already seen in other DSC blog posts or demos). The FourthCoffee demo has a dependency on the DSC resource xWebAdministration, which is included in the DSC Resource Kit Wave 5.

(NOTE: I saved this script as C:\examples\FourthCoffee.ps1)

configuration FourthCoffee
{
    Import-DscResource -Module xWebAdministration            

    # Install the IIS role
    WindowsFeature IIS 
    { 
        Ensure          = "Present" 
        Name            = "Web-Server" 
    } 
 
    # Install the ASP .NET 4.5 role
    WindowsFeature AspNet45 
    { 
        Ensure          = "Present" 
        Name            = "Web-Asp-Net45" 
    } 
 
    # Stop the default website
    xWebsite DefaultSite 
    { 
        Ensure          = "Present" 
        Name            = "Default Web Site" 
        State           = "Stopped" 
        PhysicalPath    = "C:\inetpub\wwwroot" 
        DependsOn       = "[WindowsFeature]IIS" 
    } 
 
    # Copy the website content
    File WebContent 
    { 
        Ensure          = "Present" 
        SourcePath      = "C:\Program Files\WindowsPowerShell\Modules\xWebAdministration\BakeryWebsite"
        DestinationPath = "C:\inetpub\FourthCoffee"
        Recurse         = $true 
        Type            = "Directory" 
        DependsOn       = "[WindowsFeature]AspNet45" 
    } 

    # Create a new website
    xWebsite BakeryWebSite 
    { 
        Ensure          = "Present" 
        Name            = "FourthCoffee"
        State           = "Started" 
        PhysicalPath    = "C:\inetpub\FourthCoffee" 
        DependsOn       = "[File]WebContent" 
    } 
}

To inspect the ZIP package created by the publish cmdlet I used the -ConfigurationArchivePath parameter, which saves the package to a local file instead of uploading it to Azure storage (NOTE: I typed the command below in two separate lines using the ` character; the >>> characters are PowerShell’s prompt):

PS C:\> Publish-AzureVMDscConfiguration C:\examples\FourthCoffee.ps1 `
>>> -ConfigurationArchivePath C:\examples\FourthCoffee.ps1.zip

When I look at the ZIP package using the File Explorer I can see that it contains my configuration script and a copy of the xWebAdministration module:

That copy comes from the xWebAdministration module that I already installed on my machine under “C:\Program Files\WindowsPowerShell\Modules”. The publish cmdlet requires that the imported modules are installed on your machine and located somewhere in $env:PSModulePath.

(NOTE: To simplify the example, I slightly altered the xWebAdministration module so that it includes the files needed for the website, in a “BakeryWebsite” directory.)

The two previous examples use a PowerShell script file (.ps1) to define the configuration that will be published. You can also define the configuration in a PowerShell module file (.psm1). If the configuration you want to publish is part of a larger module, you can create the ZIP package manually: copy into it the directory of the module that defines your configuration, plus the directories of any modules your configuration references. For example, if the configuration of our example were defined within a PowerShell module named FourthCoffee, the ZIP package would include two directories: the FourthCoffee module folder and the folder for the dependent DSC resource module, xWebAdministration.
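As a sketch of the manual route (assuming PowerShell 5’s Compress-Archive is available; on older versions any ZIP tool works), packaging a module-based configuration might look like this:

# Hypothetical paths: the ZIP root must hold the module folder that defines
# the configuration plus any DSC resource modules it imports.
$paths = "C:\examples\FourthCoffee",
         "C:\Program Files\WindowsPowerShell\Modules\xWebAdministration"
Compress-Archive -Path $paths -DestinationPath C:\examples\FourthCoffee.psm1.zip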

Once you have a local ZIP package (either created manually, or using the publish cmdlet), you can upload it to Azure storage with the publish cmdlet:

PS C:\> Publish-AzureVMDscConfiguration C:\examples\FourthCoffee.ps1.zip

ContainerName and StorageContext parameters

By default Publish-AzureVMDscConfiguration will upload the ZIP package to Azure blob storage using “windows-powershell-dsc” as the container and picking up the default storage account from the settings of your Azure subscription.

You can change the container using the –ContainerName parameter:

PS C:\> Publish-AzureVMDscConfiguration C:\examples\FourthCoffee.ps1.zip `
>>> -ContainerName mycontainer

And you can change the storage account (and authentication settings) using the –StorageContext parameter (you can use the New-AzureStorageContext cmdlet to create the storage context).
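For example (a sketch: the account name is a placeholder, and the key retrieval is illustrative):

# Build a storage context for an alternative account and publish there.
$key = (Get-AzureStorageKey -StorageAccountName "mystorage").Primary
$ctx = New-AzureStorageContext -StorageAccountName "mystorage" `
                               -StorageAccountKey $key
Publish-AzureVMDscConfiguration C:\examples\FourthCoffee.ps1 `
    -ContainerName mycontainer -StorageContext $ctx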

Set-AzureVMDSCExtension

Once a configuration has been published, you can apply it to any Azure virtual machine using the Set-AzureVMDSCExtension cmdlet. This cmdlet injects the settings needed by the PowerShell DSC extension into a VM configuration object, which can then be applied to a new VM, as in our first example, or to an existing VM. Let’s use this cmdlet again to update the VM we created previously. (NOTE: the first example used the configuration defined in C:\examples\IISInstall.ps1; now we will update this machine with the configuration defined in C:\examples\FourthCoffee.ps1. The script that we will use was saved as C:\examples\example-2.ps1.)

$vm = Get-AzureVM -Name "example-1" -ServiceName "example-1-svc"

$vm = Set-AzureVMDSCExtension -VM $vm -ConfigurationArchive "FourthCoffee.ps1.zip" -ConfigurationName "FourthCoffee" 

$vm | Update-AzureVM

PS C:\> C:\examples\example-2.ps1

OperationDescription    OperationId                             OperationStatus
--------------------    -----------                             ---------------
Update-AzureVM          afa38e1a-5717-cac6-a6e7-6f72d0af51d2    Succeeded

In our first example we were working with a new VM, so the Azure VM agent first installed the PowerShell DSC Extension and then it invoked it using the information provided by the Set-AzureVMDSCExtension cmdlet. In this second example we are working on an existing VM on which the extension is already installed so the Azure VM agent will skip the installation part and just invoke the PowerShell DSC Extension with the new information provided by the set cmdlet.

The extension will then:

  • download the ZIP package specified by the –ConfigurationArchive parameter and expand it to a temporary directory
  • remove the .zip extension from the value given by –ConfigurationArchive and look for a PowerShell script or module with that name and execute it (in our second example, it will look for FourthCoffee.ps1)
  • look for and execute the configuration named by the -ConfigurationName parameter (in this case "FourthCoffee")
  • invoke Start-DscConfiguration with the output produced by that configuration (a rough local approximation of these steps is sketched below)
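Roughly speaking, the sequence is equivalent to the following local steps (an illustration only, not the extension’s actual code; Expand-Archive assumes PowerShell 5, and the xWebAdministration module must be installed locally for the configuration to compile):

# Expand the package and load the configuration it contains.
Expand-Archive C:\examples\FourthCoffee.ps1.zip -DestinationPath C:\dsc-work
. C:\dsc-work\FourthCoffee.ps1

# Compile the configuration to a MOF document and apply it.
FourthCoffee -OutputPath C:\dsc-work\mof
Start-DscConfiguration -Path C:\dsc-work\mof -Wait -Verbose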

To verify that our second configuration was applied successfully we can again browse to the VM, which now serves the FourthCoffee website:

Configuration Arguments

DSC configurations are very similar to PowerShell advanced functions and can be parameterized for greater flexibility. The PowerShell DSC extension provides support for configuration arguments via the –ConfigurationArgument parameter of Set-AzureVMDSCExtension.

As a very simple example, let’s change our last script so that the name of the website is a parameter to the FourthCoffee configuration. The updated configuration has been saved as C:\examples\FourthCoffeeWithArguments.ps1; notice the new $WebSiteName parameter, which is used as the Name property of the BakeryWebSite resource.

configuration FourthCoffee
{
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true, Position=0)]
        [string] 
        $WebSiteName
    )

    Import-DscResource -Module xWebAdministration            

    # Install the IIS role
    WindowsFeature IIS 
    { 
        Ensure          = "Present" 
        Name            = "Web-Server" 
    } 
 
    # Install the ASP .NET 4.5 role
    WindowsFeature AspNet45 
    { 
        Ensure          = "Present" 
        Name            = "Web-Asp-Net45" 
    } 
 
    # Stop the default website
    xWebsite DefaultSite 
    { 
        Ensure          = "Present" 
        Name            = "Default Web Site" 
        State           = "Stopped" 
        PhysicalPath    = "C:\inetpub\wwwroot" 
        DependsOn       = "[WindowsFeature]IIS" 
    } 
 
     # Copy the website content
    File WebContent 
    { 
        Ensure          = "Present" 
        SourcePath      = "C:\Program Files\WindowsPowerShell\Modules\xWebAdministration\BakeryWebsite"
        DestinationPath = "C:\inetpub\FourthCoffee"
        Recurse         = $true 
        Type            = "Directory" 
        DependsOn       = "[WindowsFeature]AspNet45" 
    } 

    # Create a new website
    xWebsite BakeryWebSite 
    { 
        Ensure          = "Present" 
        Name            = $WebSiteName
        State           = "Started" 
        PhysicalPath    = "C:\inetpub\FourthCoffee" 
        DependsOn       = "[File]WebContent" 
    } 
}

Our third example publishes the new configuration script and updates the VM that we created previously (I saved this script as C:\examples\example-3.ps1):

Publish-AzureVMDscConfiguration C:\examples\FourthCoffeeWithArguments.ps1

$vm = Get-AzureVM -Name "example-1" -ServiceName "example-1-svc"

$vm = Set-AzureVMDscExtension -VM $vm `
        -ConfigurationArchive "FourthCoffeeWithArguments.ps1.zip" `
        -ConfigurationName "FourthCoffee" `
        -ConfigurationArgument @{ WebSiteName = "FourthCoffee" }

$vm | Update-AzureVM

PS C:\> C:\examples\example-3.ps1

OperationDescription    OperationId                             OperationStatus
--------------------    -----------                             ---------------
Update-AzureVM          2b6f18e7-42f2-c216-8199-edfa06b52e33    Succeeded

The value of the –ConfigurationArgument parameter in C:\examples\example-3.ps1 is a hashtable that specifies the arguments to the FourthCoffee configuration, i.e. a string giving the name of the website (it corresponds to the $WebSiteName parameter of C:\examples\FourthCoffeeWithArguments.ps1).

Configuration Data

Configuration data can be used to separate structural configuration from environmental configuration (see this blog post for an introduction to those concepts). The PowerShell DSC extension provides support for configuration data via the –ConfigurationDataPath parameter of Set-AzureVMDSCExtension.

Let’s create another variation of the FourthCoffee configuration: IIS and ASP.NET will always be installed by the configuration, but the FourthCoffee website will be set up only if the role of the VM is “WebServer”. The updated configuration has been saved as C:\examples\FourthCoffeeWithData.ps1; the check for the VM’s role is done by the Node block that filters $AllNodes on the Role property:

configuration FourthCoffee
{
    Import-DscResource -Module xWebAdministration            

    # Install the IIS role
    WindowsFeature IIS 
    { 
        Ensure          = "Present" 
        Name            = "Web-Server" 
    } 
 
    # Install the ASP .NET 4.5 role
    WindowsFeature AspNet45 
    { 
        Ensure          = "Present" 
        Name            = "Web-Asp-Net45" 
    } 
 
   # Setup the website only if the role is "WebServer"
    Node $AllNodes.Where{$_.Role -eq "WebServer"}.NodeName
    {
        # Stop the default website
        xWebsite DefaultSite 
        { 
            Ensure          = "Present" 
            Name            = "Default Web Site" 
            State           = "Stopped" 
            PhysicalPath    = "C:\inetpub\wwwroot" 
            DependsOn       = "[WindowsFeature]IIS" 
        } 
 
        # Copy the website content
        File WebContent 
        { 
            Ensure          = "Present" 
            SourcePath      = "C:\Program Files\WindowsPowerShell\Modules\xWebAdministration\BakeryWebsite"
            DestinationPath = "C:\inetpub\FourthCoffee"
            Recurse         = $true 
            Type            = "Directory" 
            DependsOn       = "[WindowsFeature]AspNet45" 
        } 

        # Create a new website
        xWebsite BakeryWebSite 
        { 
            Ensure          = "Present" 
            Name            = "FourthCoffee"
            State           = "Started" 
            PhysicalPath    = "C:\inetpub\FourthCoffee" 
            DependsOn       = "[File]WebContent" 
        } 
    }
}

The configuration data has been saved as C:\examples\FourthCoffeeData.psd1:

@{
    AllNodes = @(
        @{
            NodeName = "localhost";
            Role     = "WebServer"
        }
    );
}

And the script that publishes and applies this new configuration is C:\examples\example-4.ps1:

Publish-AzureVMDscConfiguration C:\examples\FourthCoffeeWithData.ps1

$vm = Get-AzureVM -Name "example-1" -ServiceName "example-1-svc"

$vm = Set-AzureVMDscExtension -VM $vm `
        -ConfigurationArchive "FourthCoffeeWithData.ps1.zip" `
        -ConfigurationName "FourthCoffee" `
        -ConfigurationDataPath C:\examples\FourthCoffeeData.psd1

$vm | Update-AzureVM

PS C:\> C:\examples\example-4.ps1

OperationDescription    OperationId                             OperationStatus
--------------------    -----------                             ---------------
Update-AzureVM          fa6a525b-c411-c213-8f57-69dc2a09df1c    Succeeded

The value of the –ConfigurationDataPath parameter in C:\examples\example-4.ps1 is the path to a local .psd1 file containing the configuration data. A copy of this file will be uploaded to Azure blob storage, then downloaded to the VM by the PowerShell DSC Extension and passed along to the FourthCoffee configuration. The file is uploaded to the default container (“windows-powershell-dsc”) and storage account; like the Publish-AzureVmDscConfiguration cmdlet, the Set-AzureVMDscExtension cmdlet includes –ContainerName and –StorageContext parameters that can be used to override those defaults.
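As a sketch (placeholder account and container names again), overriding both on the set cmdlet looks like this:

# Placeholder storage account/container; mirrors the publish-side overrides.
$key = (Get-AzureStorageKey -StorageAccountName "mystorage").Primary
$ctx = New-AzureStorageContext -StorageAccountName "mystorage" -StorageAccountKey $key
$vm = Set-AzureVMDscExtension -VM $vm `
        -ConfigurationArchive "FourthCoffeeWithData.ps1.zip" `
        -ConfigurationName "FourthCoffee" `
        -ConfigurationDataPath C:\examples\FourthCoffeeData.psd1 `
        -ContainerName mycontainer -StorageContext $ctx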

ACQUIRING REMOTE ACCESS TO OUR VM

Since we know the name of our VM we can simply use Azure SDK cmdlets to get the RDP endpoint and kick off a remote session. See the Azure PowerShell SDK documentation for more information.

$vm = Get-AzureVM –ServiceName "example-1-svc" –Name "example-1"
$rdp = Get-AzureEndpoint -Name "RDP" -VM $vm
$hostdns = (New-Object "System.Uri" $vm.DNSName).Authority 
$port = $rdp.Port 
Start-Process "mstsc" -ArgumentList "/V:$hostdns`:$port /w:1024 /h:768"

READING LOGS

Let’s say that I wish to check in detail that everything went well on my VM. How would I do that? I can log in to the VM and check the local logs. The locations of interest are the following two folders on the VM’s hard drive:

C:\Packages\Plugins\Microsoft.Powershell.DSC\1.0.0.0

C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC\1.0.0.0

You may find that your VM has a newer version of the PowerShell DSC extension, in which case the version number at the end of the path might be slightly different.

“C:\Packages\Plugins\Microsoft.Powershell.DSC\1.0.0.0” contains the actual extension files. You generally don’t need to worry about this location. However, if an extension failed to install for some reason and this folder isn’t present, that is a critical issue.

Now let’s start digging into the logs, beginning with C:\WindowsAzure\Logs. This folder contains general Azure logs that were captured for us. If for some reason the DSC extension failed to deploy, or there was some general infrastructure error, it would appear here in the “WaAppAgent.*.log” files.

The lines of interest in these files are as follows. Note that your log may look slightly different.

  • [00000003] [07/28/2014 23:57:33.02] [INFO]  Beginning installation of plugin Microsoft.Powershell.DSC.
  • [00000003] [07/28/2014 23:59:47.25] [INFO]  Successfully installed plugin Microsoft.Powershell.DSC.
  • [00000009] [07/29/2014 00:02:51.02] [INFO]  Successfully enabled plugin Microsoft.Powershell.DSC.
  • [00000009] [07/29/2014 00:02:51.03] [INFO]  Setting the install state of the handler Microsoft.Powershell.DSC_1.0.0.0 to Enabled

Now we know that the DSC extension was successfully installed and enabled. We continue our analysis with the DSC extension logs: “C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC\1.0.0.0” contains various logs from the DSC extension itself.

PS C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC\1.0.0.0> dir

    Directory: C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC\1.0.0.0

Mode         LastWriteTime      Length Name
----         -------------      ------ ----
-a---   7/29/2014  12:28 AM       1613 CommandExecution.log
-a---   7/28/2014  11:59 PM       1429 CommandExecution_0.log
-a---   7/29/2014  12:01 AM       2113 CommandExecution_1.log
-a---   7/29/2014  12:02 AM       1613 CommandExecution_2.log
-a---   7/29/2014  12:28 AM      13744 DSCBOOT_script_20140729-002759.log
-a---   7/29/2014  12:03 AM     473528 DSCLOG_metaconf__20140729-000322.json
-a---   7/29/2014  12:28 AM     713196 DSCLOG_metaconf__20140729-002823.json
-a---   7/29/2014  12:03 AM     608050 DSCLOG__20140729-000311.json
-a---   7/29/2014  12:28 AM     713196 DSCLOG__20140729-002826.json

As you can see, a number of different logs are present.

“CommandExecution*.log” are logs written by Azure infrastructure as it enabled the DSC extension.

“DSCBOOT_script*.log” is the high-level log written as the extension applied our configuration, as mentioned previously. It is fairly concise. If everything went well, towards the end of the log you should see a line such as this:

VERBOSE: [EXAMPLE-1] Configuration application complete.
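If you would rather not hunt through the folder by hand, here is a quick way to pull the tail of the most recent script log (a sketch; the extension version folder may differ on your VM):

# Find the newest DSCBOOT script log under any extension version folder
# and show its last 20 lines.
$logRoot = "C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC"
Get-ChildItem $logRoot -Recurse -Filter "DSCBOOT_script*.log" |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1 |
    Get-Content -Tail 20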

If we wish to dig deeper, the rest of the logs tell a much more detailed story. The “DSCLOG_*.json” logs are DSC ETL logs converted to JSON format. If the configuration completed successfully you should see an event like this one:

    {
        "EventType":  4,
        "TimeCreated":  "\/Date(1406593703182)\/",
        "Message":  "[EXAMPLE-1]: LCM:  [ End    Set      ]    in  14.1745 seconds.",
“DSCLOG_metaconf*.json” are the corresponding logs for the meta-configuration, produced if your PowerShell DSC configuration included a meta-config that modified Local Configuration Manager properties, such as this:

    LocalConfigurationManager
    {
        ConfigurationID = "646e48cb-3082-4a12-9fd9-f71b9a562d4e"
        RefreshFrequencyMins = 23
    }

You will see a similar event if the meta-configuration was applied successfully:

    {
        "EventType":  4,
        "TimeCreated":  "\/Date(1406593703182)\/",
        "Message":  "[EXAMPLE-1]: LCM:  [ End    Set      ]    in  14.1745 seconds.",

More info

Here are some additional resources about PowerShell DSC, Azure VM agent and extensions:

Desired State Configuration Blog Series – Part 1, Information about DSC (by Michael Green, Senior Program Manager, Microsoft)

VM Agent and Extensions - part 1 (by Kundana Palagiri, Senior Program Manager, Windows Azure)

VM Agent and Extensions - Part 2 (by Kundana Palagiri, Senior Program Manager, Windows Azure)

Automating VM Customization tasks using Custom Script Extension (by Kundana Palagiri, Senior Program Manager, Windows Azure) 

Moving to the .NET Framework 4.5.2

Thu, 08/07/2014 - 10:10

A few months ago we announced the availability of the .NET Framework 4.5.2, a highly compatible, in-place update to the .NET 4.x family (.NET 4, 4.5, and 4.5.1). The .NET Framework 4.5.2 was released only a few short months after .NET 4.5.1 and gives you the benefits of greater stability, reliability, security, and performance without any action beyond installing the update; that is, there is no need to recompile your application to get these benefits.

The quick pace at which we’re evolving and shipping means the latest fixes, features, and innovations are available in the latest version and not in legacy versions. To that end, we are making it easier than ever before for customers to stay current on the .NET Framework 4.x family of products with highly compatible, in-place updates for the .NET 4.x family.

We will continue to fully support .NET 4, .NET 4.5, .NET 4.5.1, and .NET 4.5.2 until January 12, 2016; this includes security updates as well as non-security technical support and hotfixes. Beginning January 12, 2016, only .NET Framework 4.5.2 will continue to receive technical support and security updates. There is no change to the support timelines for any other .NET Framework version, including .NET 3.5 SP1, which will continue to be supported for the duration of the operating system lifecycle.

We will continue to focus on .NET and as we outlined at both TechEd NA and Build earlier in 2014, we are working on a significant set of technologies, features and scenarios that will be part of .NET vNext, our next major release of the .NET Framework coming in 2015.

For more details on the .NET Framework support lifecycle, visit the Microsoft Support Lifecycle site.

If you have any questions regarding compatibility of the .NET Framework, you may want to review the .NET Application Compatibility page. Should you have any questions that remain unanswered, we’re here to help: engage with Microsoft Support through your regular channels for a resolution. Alternatively, you can also write to us at netfxcompat_at_microsoft.com.


We have outlined a few Q&A below to help address any questions you may have.

Will I need to recompile/rebuild my applications to make use of .NET 4.5.2?

No, .NET 4.5.2 is a compatible, in-place update on top of .NET 4, .NET 4.5, and .NET 4.5.1. This means that applications built to target any of these previous .NET 4.x versions will continue running on .NET 4.5.2 without change. No recompiling of apps is necessary.
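If you want to confirm which 4.x version a machine ended up with, one way is to read the Release value the installer writes to the registry (a sketch; 379893 is the documented Release number for .NET 4.5.2):

# The Release DWORD identifies which .NET 4.x in-place update is installed.
$release = (Get-ItemProperty `
    "HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full").Release
if ($release -ge 379893) { ".NET Framework 4.5.2 (or later) is installed" }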

Are there any breaking changes in .NET 4.5.2? Why do you include these changes?

There are a very small number of changes in .NET 4.5.2 that are not fully compatible with earlier .NET versions. We call these runtime changes. We include these changes only when absolutely necessary in the interests of security, in order to comply with industry-wide standards, or in order to correct a previous incompatibility within .NET. Additionally, there are a small number of changes included in .NET 4.5.2 that will only be enabled if you choose to recompile your application against .NET 4.5.2; we call these changes retargeting changes.

More information about application compatibility including both .NET runtime and retargeting changes across the various versions in the .NET 4.x family can be found here.

Microsoft products such as Exchange Server, SQL Server, Dynamics CRM, SharePoint, and Lync are built on top of .NET. Do I need to make any updates to these products if they are using .NET 4, 4.5 or 4.5.1?

Newer versions of products such as Exchange, SQL Server, Dynamics CRM, SharePoint, and Lync are based on .NET 4 or .NET 4.5. Since .NET 4.5.2 is a compatible, in-place update on top of .NET 4, 4.5, and 4.5.1, even a large software application such as Exchange that was built using .NET 4 will continue to run without any changes when the .NET runtime is updated to .NET 4.5.2. That said, we recommend you validate your deployment by updating the .NET runtime to .NET 4.5.2 in a QA/pre-production environment before rolling it out to production.

What about .NET 3.5 SP1? Is that no longer available?

No, this announcement does not affect versions prior to .NET 4. The .NET 3.5 SP1 version is installed side-by-side with the .NET 4.x versions, so updates to one have no impact on the other. You can continue to use .NET 3.5 SP1 beyond January 12, 2016.

Stay up-to-date with Internet Explorer

Thu, 08/07/2014 - 10:07

As we shared in May, Microsoft is prioritizing helping users stay up-to-date with the latest version of Internet Explorer. Today we would like to share important information on migration resources, upgrade guidance, and details on support timelines to help you plan for moving to the latest Internet Explorer browser for your operating system.

Microsoft offers innovative and transformational services for a mobile-first and cloud-first world, so you can do more and achieve more; Internet Explorer is core to this vision.  In today’s digital world, billions of people use Internet-connected devices, powered by cloud service-based applications, spanning both work and life experiences.  Running a modern browser is more important than ever for the fastest, most secure experience on the latest Web sites and services, connecting anytime, anywhere, on any device.

Developer and User Benefits

Developers benefit when users stay current on the latest Web browser. Older browsers may not support modern Web standards, so browser fragmentation is a problem for Web site developers. Web app developers, too, can work more efficiently and create better products and product roadmaps if their customers are using modern browsers. Upgrading benefits the developer ecosystem.

Users also benefit from a modern browser that enables the latest digital work and life experiences while decreasing online risks. Internet Explorer 11, our latest modern browser, delivers many benefits:

  • Improved Security – Outdated browsers represent a major challenge in keeping the Web ecosystem safer and more secure, as modern Web browsers have better security protection. Internet Explorer 11 includes features like Enhanced Protected Mode to help keep customers safer. Microsoft proactively fixes many potential vulnerabilities in Internet Explorer, and our work to help protect customers is delivering results: According to NSS Labs, protection against malicious software increased from 69% on Internet Explorer 8 in 2009 to over 99% on Internet Explorer 11. It should come as no surprise that the most recent, fully-patched version of Internet Explorer is more secure than older versions.
  • Productivity – The latest Internet Explorer is faster, supports more modern Web standards, and has better compatibility with existing Web apps. Users benefit by being able to run today’s Web sites and services, such as Office 365, alongside legacy Web apps.
  • Unlock the future — Upgrading and staying current on the latest version of Internet Explorer can ease the migration to Windows 8.1 Update and the latest Windows tablets and other devices, unlocking the next generation of technology and productivity.
Browser Migration Guidance

Microsoft recommends enabling automatic updates to ensure an up-to-date computing experience—including the latest version of Internet Explorer—and most consumers use automatic updates today. Commercial customers are encouraged to test and accept updates quickly, especially security updates. Regular updates provide significant benefits, such as decreased security risk and increased reliability, and Windows Update can automatically install updates for Internet Explorer and Windows.

For customers not yet running the latest browser available for your operating system, we encourage you to upgrade and stay up-to-date for a faster, more secure browsing experience. Beginning January 12, 2016, the following operating systems and browser version combinations will be supported:

Windows Platform              Internet Explorer Version
Windows Vista SP2             Internet Explorer 9
Windows Server 2008 SP2       Internet Explorer 9
Windows 7 SP1                 Internet Explorer 11
Windows Server 2008 R2 SP1    Internet Explorer 11
Windows 8.1                   Internet Explorer 11
Windows Server 2012           Internet Explorer 10
Windows Server 2012 R2        Internet Explorer 11

After January 12, 2016, only the most recent version of Internet Explorer available for a supported operating system will receive technical support and security updates. For example, customers using Internet Explorer 8, Internet Explorer 9, or Internet Explorer 10 on Windows 7 SP1 should migrate to Internet Explorer 11 to continue receiving security updates and technical support. For more details regarding support timelines on Windows and Windows Embedded, see the Microsoft Support Lifecycle site.

As some commercial customers have standardized on earlier versions of Internet Explorer, Microsoft is introducing new features and resources to help customers upgrade and stay current on the latest browser. Customers should plan for upgrading to modern standards—to benefit from the additional performance, security, and productivity of modern Web apps—but in the short term, backward compatibility with legacy Web apps may be a cost-effective, if temporary, path. Enterprise Mode for Internet Explorer 11, released in April 2014, offers enhanced backward compatibility and enables you to run many legacy Web apps during your transition to modern Web standards. 

Today we are announcing that Enterprise Mode will be supported through the duration of the operating system lifecycle, to help customers extend their existing Web app investments while staying current on the latest version of Internet Explorer. On Windows 7, Enterprise Mode will be supported through January 14, 2020. Microsoft will continue to improve Enterprise Mode backward compatibility, and to invest in tools and other resources to help customers upgrade and stay up-to-date on the latest version of Internet Explorer.

Browser Migration Resources

Microsoft offers numerous online support resources for customers and partners who wish to migrate to the latest version of Internet Explorer.

  1. Modern.IE – For developers updating sites to modern standards, Modern.IE provides a set of tools, best practices, and prescriptive guidance. An intranet scanner is available for download, for assessing Web apps within corporate networks.
  2. Internet Explorer TechCenter – The Internet Explorer TechNet site includes technical resources to deploy, maintain and support Internet Explorer. Enterprise Mode for Internet Explorer 11 is covered in detail, to help customers extend Web app investments by leveraging this new backward compatibility feature.
  3. Internet Explorer Developer Center – The MSDN developer site includes resources related to application development for Internet Explorer.
  4. Microsoft Assessment and Planning (MAP) Toolkit – This is an agentless inventory and planning tool that can assess your current browser install base.

For customers and partners who want hands-on guidance, Microsoft has a number of deployment and compatibility services available to assist with migrations. These services include:

  1. Microsoft Services Premier Support – Gain the most benefit from your IT infrastructure by pairing your business with Microsoft Services Premier Support. Our dedicated support teams provide continuous hands-on assistance and immediate escalation for urgent issues, which speeds resolution and helps you keep your mission-critical systems up and running.
  2. Microsoft Consulting Services – Fast and effective deployment of your Microsoft technologies shortens the time it takes to see value from your investments; and when your people use those technologies to their fullest extent, they help grow their skills and your business. Microsoft Services consultants work with your organization to deploy and adopt Microsoft technologies efficiently and cost-effectively, and we can help you minimize risk in your most complex initiatives. Our expertise on the Microsoft platform and collaboration with our global network of partners and technical communities fuel our ability to help you consider just what else is possible through your innovation and Microsoft technologies and solutions.
  3. Internet Explorer Migration Workshop – The Microsoft Services Internet Explorer Migration Workshop helps customers understand the migration process to the latest version of Internet Explorer, using a structured workshop targeted towards IT professionals and developers. Your subject matter experts will quickly learn how to evaluate compatibility issues and remediation techniques. For more information, contact your Microsoft Services representative or visit www.microsoft.com/services.
  4. Find a Microsoft partner on Pinpoint – Connect with a certified IT specialist in your area who knows how to help you upgrade to the most current version of Internet Explorer [and the .NET Framework], with minimal disruption to your business and applications.

By offering better backward compatibility and resources to help customers upgrade, Microsoft is making it easier than ever before for commercial customers to stay current on the latest version of Internet Explorer. In addition to modern Web standards, improved performance, increased security, and greater reliability, migrating to Internet Explorer 11 also helps unlock upgrades to Windows 8.1 Update, services like Office 365, and the latest Windows devices.

— Roger Capriotti, Director, Internet Explorer

Bring Your Maps to Life: Creating animations with Bing Maps (JavaScript)

Thu, 08/07/2014 - 10:00

Bing Maps is a very powerful mapping platform that is often used for creating engaging user experiences. The fluid interactive maps make for a great canvas when visualizing location based data. In this blog post we are going to take a look at how to make the user experience a little more engaging by adding custom animations that can be used in both web and Windows Store apps…READ MORE

The Principles of Modern Management

Thu, 08/07/2014 - 09:12

Are your management practices long in the tooth?

I think I was lucky that early on, I worked in environments that shook things up and rattled the cage in pursuit of more customer impact, employee engagement, and better organizational performance.

In one of the environments, a manufacturing plant, the management team flipped the typical pyramid of the management hierarchy upside down to reflect that the management team is there to empower and support the production line.

And when I was on the Microsoft patterns & practices team, we had an interesting mix of venture capitalist type management coupled with some early grandmasters of the Agile movement.   More than just Agile teams, we had an Agile management culture that encouraged a customer-connected approach to product development, complete with self-organizing, multi-disciplinary teams, empowered people, a focus on execution excellence, and a fierce focus on being a rapid learning machine. 

We thrived on change.

We also had a relentless focus on innovation.  Not just in our product, but in our process.  If we didn’t innovate in our process, then we got pushed out of market by becoming too slow, too expensive, or by lacking the quality experience that customers have come to expect.

But not everybody knows what a great environment for helping people thrive and do great things for the world looks like.

While a lot of people in software or in manufacturing have gotten a taste of Agile and Lean practices, there are many more businesses that don’t know what a modern learning machine of people and processes operating at a higher level looks like.

Many, many businesses and people are still operating and looking at the world through the lens of old world management principles.

In the book The Future of Management, Gary Hamel walks through the principles upon which modern management is based.

The Principles of Modern Management

Hamel gives us a nice way to frame looking at the modern management principles, by looking at their application, and their intended goal.

Via The Future of Management:

Principle: Standardization
Application: Minimize variances from standards around inputs, outputs, and work methods.
Goal: Cultivate economies of scale, manufacturing efficiency, reliability, and quality.

Principle: Specialization (of tasks and functions)
Application: Group like activities together in modular organizational units.
Goal: Reduce complexity and accelerate learning.

Principle: Goal alignment
Application: Establish clear objectives through a cascade of subsidiary goals and supporting metrics.
Goal: Ensure that individual efforts are congruent with top-down goals.

Principle: Hierarchy
Application: Create a pyramid of authority based on a limited span of control.
Goal: Maintain control over a broad scope of operations.

Principle: Planning and control
Application: Forecast demand, budget resources, and schedule tasks, then track and correct deviations from plan.
Goal: Establish regularity and predictability in operations; conformance to plans.

Principle: Extrinsic rewards
Application: Provide financial rewards to individuals and teams for achieving specified outcomes.
Goal: Motivate effort and ensure compliance with policies and standards.

What are the Principles Upon Which Your Management Beliefs are Based?

Most people aren’t aware of the principles behind the management beliefs that they practice or preach.  But before coming up with new ones, it helps to know what current management thinking is rooted in.

Via The Future of Management:

“Have you ever asked yourself, what are the deepest principles upon which your management beliefs are based? Probably not.  Few executives, in my experience, have given much thought to the foundational principles that underlie their views on how to organize and manage.  In that sense, they are as unaware of their management DNA as they are of their biological DNA.  So before we set off in search of new management principles, we need to take a moment to understand the principles that comprise our current management genome, and how those tenets may limit organizational performance.”

A Small Nucleus of Core Principles

It really comes down to a handful of core principles.  These principles serve as the backbone for much of today’s management philosophy.

Via The Future of Management:

“These practices and processes of modern management have been built around a small nucleus of core principles: standardization, specialization, hierarchy, alignment, planning, and control, and the use of extrinsic rewards to shape human behavior.”

How To Maximize Operational Efficiency and Reliability in Large-Scale Organizations

It’s not by chance that the early management thinkers came to the same conclusions.  They were working on the same problems in a similar context.  Of course, the challenge now is that the context has changed, and the early management principles are often like fish out of water.

Via The Future of Management:

“These principles were elucidated early in the 20th century by a small band of pioneering management thinkers -- individuals like Henri Fayol, Lyndall Urwick, Luther Gullick, and Max Weber. While each of these theorists had a slightly different take on the philosophical foundations of modern management, they all agreed on the principles just enumerated. This concordance is hardly surprising, since they were all focusing on the same problem: how to maximize operational efficiency and reliability in large-scale organizations. Nearly 100 years on, this is still the only problem that modern management is fully competent to address.”

If your management philosophy and guiding principles are nothing more than a set of hand me downs from previous generations, it might be time for a re-think.

You Might Also Like

Elizabeth Edersheim on Management Lessons of a Lifelong Student

How Employees Lost Empathy for their Work, for the Customer, and for the Final Product

No Slack = No Innovation

The Drag of Old Mental Models on Innovation and Change

The New Competitive Landscape

The New Realities that Call for New Organizational and Management Capabilities

Who’s Managing Your Company

Seizing the Opportunity – A One Scotland - One Education approach to 21st Century Learning

Thu, 08/07/2014 - 09:09

Guest post from Melvyn Ingleson - MJI Business Solutions.

I recently organised a very involved conference call between Scottish Government and a member of the Microsoft Corporation software engineering group. A fascinating call for a non techie like me. As with much of my work at the moment, the call was to get a better understanding from everyone involved about how to continuously improve the experience of Glow users, the Scottish schools technology platform. We need to drive maximum collaboration whilst being mindful of appropriate child protection policies, a delicate balancing act.

However it made me think about how you deliver a One Scotland, One Education approach to learning. In other words how do you use the power of collaboration across multiple audiences and geographies to deliver the finest education experience for everyone. Let me explain…

You quickly find yourself looking at the old ways of managing technology across public services, with a really heavy emphasis on security, on limiting identification of pupils and limiting access from one group to another. And yet, whether we like it or not, older children and adults use Facebook, WhatsApp, Skype and other social media and collaboration tools, and frankly set their own identities, sometimes with parental involvement, often without. Some may feel that such ubiquitous identity and access, when allowed, is highly dangerous. In truth the scale of abuse of identity and access to individuals of any age is minimal when compared to the sheer global usage of such social engagement tools. Checks and balances are built in by the technology providers, and you can choose to be very private, limiting access to photos, for example, to a few close friends. In truth, few restrict access to the fullest extent permitted.

So let’s turn the issue around. Let’s examine the case for a One Scotland access and identity policy, where individuals must provide, at minimum, a substantial amount of recognition data. Why? So that if anyone abuses the access rights at any level, the individual can quickly be traced, either as an online persona or as a physical individual. Exemplary action is then taken against any individual who steps outside the boundaries of common sense or the legal framework. Yes, but more importantly, because it is then easy to find your collaborator, wherever or whoever they may be.

On that basis we create an absolute de minimis number of security groups within what is termed, technically, a national tenancy. Something that is now made possible by huge advances in Cloud computing.

  • We encourage teachers in any school to communicate with teachers in any other school in Scotland.
  • We encourage pupils to collaborate with pupils in other schools where there is a common project interest.
  • We ignore local authority boundaries for the purpose of collaboration
  • We invite any parent or guardian to engage with the school online, to engage directly with teachers, and to create parent interest groups at any level, local, regional or national
  • We invite business to engage directly with individual schools and with pupils groups to inform educational choices as we build a twenty first century workforce
  • We encourage pupils, teachers, parents to reach out to schools and pupils in other countries where there is common interest
  • We encourage any device form in schools including mobile as a “learning platform”
  • A platform that is extended to all Higher and further Education

In reality that world is already there in the business world which some enquiring minds may join! Yes there are rules and protocols but in truth we live in an Open Age. We reach out through email, but that is really now old school. Increasingly Yammer groups are the preferred route for businesses to address issues, or seize opportunities. Sometimes partners or supply chains are invited in, sometimes not. We use Skype or Lync to hold video conference calls, share documents, etc, on a global scale.

The one remaining barrier to truly global collaboration has been language, and we in the UK have been very fortunate that English is very widely taught and spoken. Now that barrier is being removed. At Microsoft’s Worldwide Partner Conference last month, the new CEO demonstrated Skype Translator, a brave live demo in front of 14,000 partners. And yes, it will be formally launched within a few months. Spontaneous audio translation in real time to enable communication in two languages. Truly awesome!

So let’s get a real debate going across Scotland, from every interested party. For too long there has been an unhealthy focus on current structures, protocols and governance. The Scottish Government and Microsoft have been investing heavily to make the Glow 365 platform do the business for Scottish teaching and learning. However, in an era of Cloud First, Mobile First, I would argue that we need to have the courage to take a brave decision. Let’s create a truly collaborative environment, a One Scotland Education & Learning Experience. One that drives collaboration. One that demonstrates that Scotland is a true innovator when it comes to enabling the use of technology by highly committed educators and enquiring minds.

I would love to hear your views…

All she wanted was to see her baby – luckily the hospital found a way to bring them together

Thu, 08/07/2014 - 09:00

Yeilin wanted nothing more than to see her new-born daughter Kaitlyn. But medical complications were keeping them apart.

Luckily, the hospital found another way to bring them together.

Learn more about how technology is helping patients connect

Currently seeking a new role

Thu, 08/07/2014 - 08:38
After more than two and a half years at Tribal I took the decision to leave, to give myself time to hunt for a new role. You'll find a good deal about me on LinkedIn and I can send my CV/resume on request. With 20 years of professional selling and business development experience I am looking for a similar role. I wanted a fresh and free perspective for the hunt, which is why it was simplest to leave Tribal and move forward from an unencumbered position. My LinkedIn profile is here http://uk...(read more)

The Road from Web to App (Part 3): Using WAT (Web App Template)

Thu, 08/07/2014 - 08:13

WAT (Web App Template) is a free Visual Studio extension developed by Microsoft and designed specifically to turn an existing website into an app. It was introduced in an earlier post: WAT - a free tool for quickly turning an existing website into a Windows 8.1 or Windows Phone app. A Universal Apps v2.0 version was also released just last month (http://wat.codeplex.com/).

WAT has a clearly defined goal: let businesses fully reuse the investment in their existing website, while implementing platform features in the app (such as Windows Live Tiles and the App Bar) through a simple, intuitive config approach, and without having to recompile every time the website is updated.

Implementing WAT

Open the Web App Template for Universal Apps - JavaScript template in Visual Studio 2013.

You will see that the solution already contains Universal Apps projects for both Windows 8.1 and Windows Phone 8.1.

To quickly turn a website into an app, only one file needs to be configured: the config.json file in the Shared project.

After changing its homeURL to Bill Gates' site (http://www.gatesnotes.com/), running in the Windows Phone and Windows 8.1 emulators at the same time produces the screens below. A site that already follows Responsive Web Design (RWD) guidelines lays itself out appropriately for the different screen sizes once turned into an app:
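For reference, the edit is roughly this (a sketch: homeURL is the value from the walkthrough above, and the real template file carries many more sections for the features described below):

{
    "homeURL": "http://www.gatesnotes.com/"
}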

Customizing the App through config.json

All of the app's features are implemented by editing this config.json file, including the all-important offline support for when there is no Internet connection, Live Tiles driven by a given RSS feed, a Search charm built from a given search URL, and an App Bar and Navigation Bar built from external links.

The full documentation is here: http://wat-docs.azurewebsites.net/JsonWindows

The app features that can currently be enabled purely through config are listed below, and more will be added over time.

Beyond features, WAT also lets you modify a CSS file to change the app's layout at render time (without affecting the layout of the original website).

Real-World Examples

More and more sites are using WAT to build an app to meet mobile demand while continuing to maintain the website. Besides Lowe's and Zoopla Property, covered in the previous post, let's take the well-known travel-planning site Expedia.com as an example.

Below you can see how it renders on the website and in the Windows Phone app, respectively:

Feedback and Discussion

Using WAT to turn an existing website into an app is about as simple as it gets: the config approach is easy to work with, and the bar for maintenance and debugging is low.

Most important of all, any update or change made to the website shows up in the app immediately. In other words, a business's normal website maintenance process needs almost no adjustment; you maximize the existing investment while opening another market with an app.

Of course, to handle mobile devices of all sizes, it is strongly recommended that sites taking this route follow Responsive Web Design (RWD) guidelines for the best user experience.

The Future of WAT

The original idea behind WAT is to make the best possible use of the Web experience. The short-term goal is to let customers reuse their existing investment in people, skills, and maintenance while meeting the market's demand for mobile; in the longer term, Microsoft will invest further in the Web experience!

Wrapping Up the Road from Web to App

Simply put, whether or not a site plans to become an app, implementing "pinned site" support first is recommended, since it is simple and easy to test.

On the road to an app, if cross-platform support is an urgent requirement today, Apache Cordova is the way to go, though its customization and maintenance costs are relatively higher. If there is no urgent cross-platform need, starting with WAT is recommended, as it should deliver the best ROI for overall maintenance going forward.

Further reading:

The Road from Web to App (Part 1): Pinned Sites

The Road from Web to App (Part 2): Using Apache Cordova (PhoneGap)

The latest technologies in plaintext editing: NotepadConf

Thu, 08/07/2014 - 07:00

On November 13, 2014, Saint Paul, Minnesota will be home to NotepadConf, which bills itself as "the premier technology conference for Notepad.exe users and text enthusiasts."

I'm still not sure whom Microsoft will send to the conference, but maybe that person could give a talk on how you can use Notepad to take down the entire Internet.

YOMER Yet One More Error Reporting (Method)

Thu, 08/07/2014 - 07:00
I wrote about creating an Out-Error function before to avoid outputting the stack dump that a normal Write-Error displays.  Many newcomers to PSH take one look at the huge blob of angry red text and throw up their hands in frustration. Write-Error is useful in populating $Error, which is very useful for exploring why a script went wrong.  One of the nice bits of data for debugging is $Error[0].InvocationInfo.ScriptLineNumber, which says where the script died. Unless you use an Out-Error...(read more)
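As a minimal illustration of that property:

# Force a terminating error, then read the line number where it occurred.
try { Get-Item "C:\does\not\exist" -ErrorAction Stop } catch {}
$Error[0].InvocationInfo.ScriptLineNumber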

Since clean-up functions can't fail, you have to soldier on

Thu, 08/07/2014 - 07:00

Clean-up functions can't fail, so what do you do if you encounter a failure in your clean-up function?

You have to keep cleaning up.

Some people like to follow this pattern for error checking:

HRESULT Function()
{
    hr = SomeFunction();
    if (FAILED(hr)) goto Exit;

    hr = AnotherFunction();
    if (FAILED(hr)) goto Exit;

    ... and so on until ...

    hr = S_OK;
Exit:
    return hr;
}

And some like to put it inside a cute flow control macro like

#define CHECK_HRESULT(hr) if (FAILED(hr)) goto Exit;

or even

#define CHECK_HRESULT(f) if (FAILED(hr = (f))) goto Exit;

Whatever floats your boat.

But you have to be careful if using this pattern in a clean-up function, because you might end up not actually cleaning up. For example:

HRESULT Widget::Close()
{
    HRESULT hr;

    CHECK_HRESULT(DisconnectDoodad(m_hDoodad));
    m_hDoodad = nullptr;

    for (int i = 0; i < ARRAYSIZE(m_rghGadget); i++) {
        CHECK_HRESULT(DestroyGadget(m_rghGadget[i]));
        m_rghGadget[i] = nullptr;
    }

    hr = S_OK;
Exit:
    return hr;
}

What if there is an error disconnecting the doodad? (Maybe you got RPC_E_SERVER_DIED because the doodad lives on a remote server which crashed.) The cleanup code treats this as an error and skips destroying the gadget. But what can the caller do about this? Nothing, that's what. Eventually you get a bug that says, "On an unreliable network, we leak gadgets like crazy."

Or worse, what if you're doing this in your destructor? You have nowhere to report the error. The caller simply expects that when the object is destroyed, all its resources are released.

So release as much as you can. If something goes wrong with one of them, keep going, because there's still other stuff to clean up.
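One way to keep going while still reporting something useful is to remember only the first failure (a sketch reusing the names from the example above):

HRESULT Widget::Close()
{
    // Record the first failure, but clean up everything regardless.
    HRESULT hr = S_OK;

    HRESULT hrTemp = DisconnectDoodad(m_hDoodad);
    if (FAILED(hrTemp) && SUCCEEDED(hr)) hr = hrTemp;
    m_hDoodad = nullptr;

    for (int i = 0; i < ARRAYSIZE(m_rghGadget); i++) {
        hrTemp = DestroyGadget(m_rghGadget[i]);
        if (FAILED(hrTemp) && SUCCEEDED(hr)) hr = hrTemp;
        m_rghGadget[i] = nullptr;
    }

    return hr; // the first error, if any, but everything was released
}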

Related: Never throw an exception from a destructor.

Bonus chatter: Yes, I know that you can avoid this problem by wrapping the Doodad and Gadget handles inside a class which disconnects/destroys on destruction. That's not my point.

Getting traces for .NET apps when you cannot modify the application and don't want to use more than one machine.

Thu, 08/07/2014 - 06:53

One of the biggest pains with .NET applications going against Exchange is getting detailed logs. Using a tracing tool like Fiddler is great, since it will decode SSL traffic; however, it won't capture traffic from a .NET application running on the same box unless that application can point its proxy settings at Fiddler. Having a client app modified to allow setting proxy settings is time-consuming and often not possible. However, you don't necessarily need to go through that: a config file can be created in the same folder as the application's .NET executable and used to have the traffic logged to a file, and even to provide proxy setting defaults.

There are two fairly easy solutions:

    1) Use an application config file to log off traffic to a file.
    2) Use an application config file to force the application traffic to a proxy server – ie Fiddler.

The methods I'm describing only work for .NET applications which use .NET code to make their calls. Some .NET applications use non-.NET APIs for their calls, so these methods won't work for those applications. If the application sets a proxy override, then the .config file won't work; however, if the application accounts for these settings, there is usually a way through the application to set them.

With .NET applications you can modify the application's config file, or its web.config, so that traffic is logged. .NET classes have default trace listeners built in, and you can switch them on and direct their logging to a file. Doing this provides logging in situations where it would otherwise be difficult or impossible.

Do NOT do this in machine.config, as that would cause every application on the system to write to the log file. Also watch out for web.configs further down the subfolder paths, as you could inappropriately override their settings.

Every .NET application can have a config file. With the exception of web apps, which use web.config, the config file takes the application's executable name plus a .config extension; so bob.exe would have a config file named bob.exe.config. If a config file already exists, you can splice in what you need; be very careful. If an application does not have a config file, you can just create one; again be careful, as overriding default settings can change the application's behavior in ways that may not be appropriate. Note that modifying a web.config causes the application to reset. Finally, ALWAYS make a backup copy of a config file before modifying it.

The example below has the default listeners of several .NET namespaces log to a file called network.log. Be sure to set the .NET version. I advise researching the settings before use, as they may log more than you wish. Remove the logging once you have resolved the problem, since the log is cumulative and can easily grow dramatically. Between runs of a repro, consider closing the application and deleting the log file so you can easily find the start of the repro entries.

<?xml version="1.0"?>
<configuration>
  <system.diagnostics>
    <trace autoflush="true" indentsize="4"/>
    <sources>
      <source name="System.Net"> 
        <listeners>
          <add name="System.Net"/>
        </listeners>
      </source>
      <source name="System.Net.HttpListener">
        <listeners>
          <add name="System.Net"/>
        </listeners>
      </source>
      <source name="System.Net.Sockets">
        <listeners>
          <add name="System.Net"/>
        </listeners>
      </source>
      <source name="System.Net.Cache">
        <listeners>
          <add name="System.Net"/>
        </listeners>
      </source>
      <source name="System.Web.Cache">
        <listeners>
          <add name="System.Web"/>
        </listeners>
      </source>
    </sources>
 
    <switches>
      <add name="System.Net" value="Verbose"/>
      <add name="System.Net.Sockets" value="Verbose"/>
      <add name="System.Net.Cache" value="Verbose"/>
      <add name="System.Net.HttpListener" value="Verbose"/>
    </switches>

    <sharedListeners>
      <add name="System.Net" type="System.Diagnostics.TextWriterTraceListener" initializeData="network.log"/>
    </sharedListeners>

  </system.diagnostics>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
  </startup>
</configuration>

Alternatively, you can set a default for the proxy address and port. If you set them to Fiddler's (127.0.0.1, port 8888), Fiddler will do the tracing for you. This is possible because Fiddler is itself a proxy.

Consider the following for Fiddler:

<configuration>
    <system.net>
      <defaultProxy>
        <proxy
              autoDetect="false"
              proxyaddress="http://127.0.0.1:8888"
              bypassonlocal="false"
              usesystemdefault="false"
        />
      </defaultProxy>
    </system.net>
</configuration>

See: 

<defaultProxy> Element (Network Settings)
http://msdn.microsoft.com/en-us/library/kd3cf2ex(v=vs.110).aspx

Configure .NET Applications (This is on Fiddler’s site).
http://docs.telerik.com/fiddler/configure-fiddler/tasks/configuredotnetapp

 
