
MSDN Blogs


Configure Lab Management for TFS 2013

Thu, 09/25/2014 - 14:11
This blog post covers installation of the SCVMM console on the TFS application tier, along with the TFS server-level and team project collection-level configurations required to enable lab management. You should have SCVMM 2012 R2 set up before proceeding with these steps. To install and configure SCVMM for use with lab management, please refer to this article. Once the TFS server configurations are done, please refer to this article to configure test controllers and create lab environments...(read more)

WCF User and Password Authentication

Thu, 09/25/2014 - 13:15

Part 3: Get started with Python: Functions and File Handling

Thu, 09/25/2014 - 13:00

This is a tutorial series which will teach you how to code in Python. We will start with the absolute basics: installing Python on Windows and Python Tools in Visual Studio. We will then go through basic constructs in Python and write a couple of programs to summarize what we have learned. We will end with an object-oriented approach using Python and a specific feature of Python Tools in Visual Studio: Mixed Mode C/C++/Python Debugging.

Part 1: Get Started with Python summarized the steps involved in setting up Python and Visual Studio for Python development. We essentially learned how to install Python on Windows, along with Visual Studio and Python Tools for Visual Studio.

Part 2: Get Started with Python took you through basic programming constructs such as output, input, variables and control flow statements, including conditionals and the while and for loops. These tools were enough to get started with coding basic applications in Python.

Part 3: Functions and File Handling

Welcome to this week's edition of Get Started with Python! In this section, let us take a deeper dive: we will be looking into functions and file handling. After going through these, we will develop an application using functions and files to demonstrate the ease of use and utility of these constructs.

Function Handling

Functions are blocks of code which are logically grouped to perform some action. In Python, a function is defined using the ‘def’ keyword, followed by the name of the function, followed by the list of parameters in parentheses, followed by ‘:’. The distinctive feature of Python is that the block of code is not delimited by brackets or braces, but by indentation. The first line of the body may optionally be a string in quotes, called a ‘docstring’, which can be used by third-party tools for documentation purposes. The code snippet below gives a brief description of the syntax and the usage.

def <function_name>(<parameter list>):
    "<docstring>"
    ...
    ...
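For instance, a concrete function following this syntax might look like the following (the function name and parameters here are purely illustrative):

```python
def greet(name, greeting="Hello"):
    "Return a greeting for the given name."  # this first string is the docstring
    return greeting + ", " + name + "!"

print(greet("Python"))  # prints: Hello, Python!
```

The docstring is available at runtime as greet.__doc__, which is how documentation tools pick it up.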

File Handling

As in most other procedural languages, the most important object in file handling is the file object handle; the difference is that there is no need to import or include any additional libraries. We get the file object handle using the ‘open’ construct. The file can be opened in read, write or append mode. We use the ‘write’ construct to write contents to the file and the ‘read’ construct to read contents from it. After we finish our operations, we close the file using the ‘close’ construct. The following code snippet shows the syntax and the usage.

FileHandle = open("<file_name>", '<mode>')  # file_name is the name of the file
                                            # <mode> is either r, w or a
FileHandle.write("<text>\n")  # <text> will be the content of the file
# reads the content of the file
FileHandle.readline()  # reads the content line by line
FileHandle.close()  # closes the file
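Putting these constructs together, a minimal round trip (using an arbitrary file name) might look like:

```python
# write two lines to a file, then read them back
FileHandle = open("demo.txt", "w")   # open in write mode
FileHandle.write("first line\n")
FileHandle.write("second line\n")
FileHandle.close()

FileHandle = open("demo.txt", "r")   # reopen in read mode
print(FileHandle.readline())         # prints the first line
print(FileHandle.readline())         # prints the second line
FileHandle.close()
```

Each call to readline advances the file position, so successive calls return successive lines.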

ReadTextMessage Application

Now that we have gone through function and file handling, let us look into how both of these can be used to build an application. We will first define the objective of the application, then explain the design, which will be followed by the code.


Expand regularly used short forms in text-message jargon. For example, lol, rofl and lmao will be expanded to “laughing out loud”, “rolling on the floor laughing” and “laughing my a** off” respectively.


  • def mainFunction(): This will make the calls to all the other functions
  • def fileWrite(): This will write contents to the file
  • def fileCheck(): Will check that the file exists and can be opened
  • def separateWordsList(): separates the contents of the file into words
  • def dictionaryList(): creates 2 lists, one containing the words and the other containing the expansions
  • def compare(): compares the message words against the dictionary words
  • def printMessage(): prints the final message

Here are the details of the code of the app:

import os

# This is the main function
def mainFunction():
    print("Please enter the message (no punctuation please):")
    message = input(">")  # will assign message with the user's message
    fileWrite(message)  # will write the message to a file
    fileCheck("TextFile2.txt")  # will check if the dictionary file exists or not
    messageWords = separateWordsList("TextFile1.txt")  # will create a list of all words in the message
    AllWords, AllDefs = dictionaryList("TextFile2.txt")  # will create two lists - one containing the words, the other containing the definitions
    finalWords = compare(messageWords, AllWords, AllDefs)  # the final list of words for the message
    printMessage(finalWords)  # will print the message

# This will write the message to a file
def fileWrite(message):
    fileObj = open("TextFile1.txt", "w")  # creates the file object with the name "TextFile1.txt"
    fileObj.write(message)  # writes the message to the file
    fileObj.close()  # closes the file object

# will check if the file exists or not
def fileCheck(fileName):
    try:
        fileObj = open(fileName)  # will try to open the file
        fileObj.close()
    except IOError:  # will handle the exception
        print("The file could not be opened.")
        print("Either the file does not exist, or you have entered the wrong name.")
        os.system("pause")
        os.system("cls")
        mainFunction()

# will separate words and return a list of words
def separateWordsList(fileName):
    fileObj = open(fileName)
    fileContents = fileObj.read()
    fileObj.close()
    AllWords = fileContents.split()  # will split the entire file contents into words
    return AllWords

# This function returns two lists - one containing all the short forms, the other containing the definitions
def dictionaryList(fileName):
    fileObj = open(fileName)
    AllWords = []
    AllDefs = []
    for line in iter(fileObj):  # this for loop will read the file line by line
        words = line.split()  # this will split the line into a list of words
        AllWords.append(words[0])  # appends the short form to this list
        s = ""
        # note: this assumes each dictionary line looks like "LOL = laughing out loud",
        # i.e. the definition starts at the third word
        for x in range(2, len(words)):
            s = s + words[x] + " "
        AllDefs.append(s[0:len(s) - 1])  # appends the definition to this list
    fileObj.close()
    return (AllWords, AllDefs)

# this function will compare message words with those in the dictionary
def compare(messageWords, AllWords, AllDefs):
    for x in range(0, len(messageWords)):
        word = messageWords[x]
        for y in range(0, len(AllWords)):
            if word.upper() == AllWords[y]:
                messageWords[x] = AllDefs[y]  # replaces the word with the dictionary definition on a match
    return messageWords

# will print the message based on the list finalWords
def printMessage(finalWords):
    message = ""
    for word in finalWords:
        message = message + " " + word
    print(message[1:])  # removes the initial space

mainFunction()
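As an aside, the nested loops in compare can be replaced with a dictionary lookup, which is the more idiomatic (and faster) Python approach. This sketch assumes the same uppercase short forms as the code above; compare_with_dict is a hypothetical alternative, not part of the original app:

```python
def compare_with_dict(messageWords, AllWords, AllDefs):
    # build a short form -> definition mapping once
    expansions = dict(zip(AllWords, AllDefs))
    # replace each word with its expansion when one exists
    return [expansions.get(word.upper(), word) for word in messageWords]

print(" ".join(compare_with_dict(["lol", "that", "is", "funny"],
                                 ["LOL"], ["laughing out loud"])))
# prints: laughing out loud that is funny
```

With a dict, each lookup is constant time, so the cost grows with the message length rather than with message length times dictionary size.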

The output of this application on execution looks as follows:



Understanding and using files and functions is key to the development of any application. Now you are definitely equipped with enough arrows in your quiver to start developing your applications in Python. In the next part, I will cover an object-oriented approach to programming in Python, which will include classes and objects. So stay tuned and hope to see you there!


Continuous Delivery in Minutes with Node.js, Grunt, Mocha and Git

Thu, 09/25/2014 - 12:53

Modern application development nowadays demands a rigorous continuous delivery mechanism. This is especially true when it comes to the cloud.

You want to be able to get through the Build -> Measure -> Learn cycle as fast as possible. With cloud services, you can quickly create mechanisms to build, test and deploy your development team's code. You can even maintain development and production branches.

Continue reading for the full walk-through.

Issue with Application Insights Data Stream Services – 9/25 – Mitigated

Thu, 09/25/2014 - 12:28

Final update: 9/25/2014 19:25 UTC

We have mitigated an issue in the data stream services that caused data loss for up to 24 hours. The issue was caused by time pointers in the data processing services being set to the current time. Customers would not see data from 9/24/2014 13:00 UTC until 9/25/2014 13:00 UTC. At present, data is current and processing is normal. We continue to monitor our services closely for any recurrence or other impact. There will be no further updates to this blog unless we see a recurrence.

We apologize for any inconvenience this might have caused.

-Application Insights Service Delivery Team

A Simple BackgroundDownloader driven User Control Implementation for Windows Store Apps

Thu, 09/25/2014 - 12:09

This post demonstrates a very simple user control implementation that displays a ringtone name and a download button; upon clicking the button, the corresponding MP3 file begins downloading. As downloading begins, the button is disabled and a progress bar is shown using a custom dependency property. The control makes use of the BackgroundDownloader class to download the file in the background. The downloaded file is saved in the Music library by default, and as the download progresses, the progress bar updates using a custom exposed dependency property. Here's how the user control, bound in a GridView, looks in the sample app implementation:

To begin with, here's our POCO representing a ring tone,

public class Ringtone
{
    public string Title { get; set; }

    public string Path { get; set; }
}


In the user control (Downloader.xaml.cs) you can find two dependency properties: IsDownloadInProgress, which is used to control the visibility of the progress bar, and DownloadProgress, a double value representing the download percentage as bytes are received:

public Visibility IsDownloadInProgress
{
    get { return (Visibility)GetValue(IsDownloadInProgressProperty); }
    set { SetValue(IsDownloadInProgressProperty, value); }
}

// Using a DependencyProperty as the backing store for IsDownloadInProgress. This enables animation, styling, binding, etc...
public static readonly DependencyProperty IsDownloadInProgressProperty =
    DependencyProperty.Register("IsDownloadInProgress", typeof(Visibility), typeof(Downloader), new PropertyMetadata(Visibility.Collapsed));

public double DownloadProgress
{
    get { return (double)GetValue(DownloadProgressProperty); }
    set { SetValue(DownloadProgressProperty, value); }
}

// Using a DependencyProperty as the backing store for DownloadProgress. This enables animation, styling, binding, etc...
public static readonly DependencyProperty DownloadProgressProperty =
    DependencyProperty.Register("DownloadProgress", typeof(double), typeof(Downloader), new PropertyMetadata(0d));

The two dependency properties above are bound to the progress bar control as follows:

<TextBlock Margin="20" Style="{StaticResource SubheaderTextBlockStyle}" Text="{Binding Title}"></TextBlock>
<Button Margin="20" Tag="{Binding Path}" Name="Button1" Click="Button_Click">Download</Button>
<ProgressBar Name="Progressbar1" Margin="20" Visibility="{Binding IsDownloadInProgress}" Minimum="0" Maximum="100" Value="{Binding DownloadProgress}"></ProgressBar>

Note that in the constructor of the user control (Downloader.xaml.cs), we've explicitly set the data context of the progress bar to this, so that its bound properties are resolved within the user control itself:

public Downloader()
{
    this.InitializeComponent();

    // This ensures that only the progress bar uses the dependency properties;
    // otherwise the Ringtone Title wouldn't be displayed in the TextBlock.
    Progressbar1.DataContext = this;
}

Here’s how downloading is performed once the button is clicked (the file is stored in the MusicLibrary folder by default, and the corresponding capability is therefore explicitly declared in the manifest file). Also note that an existing file is overwritten:

private async void Button_Click(object sender, RoutedEventArgs e)
{
    Button button = (Button)sender;
    string path = button.Tag.ToString();
    string name = path.Substring(path.LastIndexOf('/') + 1);

    IsDownloadInProgress = Visibility.Visible;
    button.IsEnabled = false;

    BackgroundDownloader downloader = new BackgroundDownloader();
    StorageFile file = await KnownFolders.MusicLibrary.CreateFileAsync(name, CreationCollisionOption.ReplaceExisting);
    DownloadOperation operation = downloader.CreateDownload(new Uri(path, UriKind.Absolute), file);
    Progress<DownloadOperation> progressCallback = new Progress<DownloadOperation>();
    progressCallback.ProgressChanged += progressCallback_ProgressChanged;

    await operation.StartAsync().AsTask(progressCallback);
}

And here’s the progress callback handler that sets the progress bar value and hides the progress bar when downloading is complete. Note that had we not used dependency properties (DownloadProgress & IsDownloadInProgress), we would have had to make use of INotifyPropertyChanged. That’s one of the many advantages of DependencyProperty: it avoids the need for INotifyPropertyChanged.

void progressCallback_ProgressChanged(object sender, DownloadOperation e)
{
    try
    {
        double bytesReceived = Convert.ToDouble(e.Progress.BytesReceived);
        double totalBytesToReceive = Convert.ToDouble(e.Progress.TotalBytesToReceive);
        DownloadProgress = (bytesReceived / totalBytesToReceive) * 100;

        if (DownloadProgress == 100)
        {
            IsDownloadInProgress = Windows.UI.Xaml.Visibility.Collapsed;
            Button1.IsEnabled = true;
        }
    }
    catch (Exception)
    {
    }
}

The sample implementation is attached. For simplicity, the URLs of the ringtones are hardcoded in the app. Note that if the progress bar doesn’t yield progress for you, try changing the MP3 path to a larger file.

Happy Coding :)

Go faster on Azure with new D-Series virtual machines

Thu, 09/25/2014 - 11:46

Faster processors, more memory and SSD hard drives... The new D-Series virtual machines on Azure are ideal for many compute and data-intensive research applications. They offer up to 112GB RAM, 800 GB SSD temporary hard drives, and 60% more speed. Find out more details at

5 Minute FIM Hacks: Changing FIM Portal Time Zone

Thu, 09/25/2014 - 11:46

This is the first in a series of posts we’ll be calling “5 Minute FIM Hacks”. The purpose of these posts will be to provide quick and simple tips and tricks for customizing FIM to make it perform better or be easier to use.


Today’s 5 Minute FIM Hack is about changing the internal time zone the FIM portal uses. You may have noticed (while searching your Search Requests) that the time stamp is incorrect (even though the system time of your server is set correctly). This is because FIM actually has its own internal time configuration. Most likely, your FIM implementation is set to the default (GMT) time zone. To change this, start by navigating to your FIM portal. In the bottom left-hand corner, click on “Administration”:

From the “Administration” menu, select “Portal Configuration”:

From the “Portal Configuration” dialogue window, click on “Extended Attributes”:

Scroll down until you see the “Time Zone” attribute. Notice (in this case) it is set incorrectly. You may clear this by simply clicking in the box and deleting the value. To find your correct time zone, click on the “Browse” button on the far right (the button that looks like several sheets of paper):

In the top right-hand corner, click on “Search within:” and select “All Resources”:

In the “Search for:” box, enter “(GMT” and click on the magnifying glass. This will display all available time zone resources. Find your desired time zone in the list and check the box to select it. Click “OK”.

Here we see the pending change (remove incorrect time zone and add correct time zone). When finished, click “Submit”.

Now, from this point forward, all internal date/time stamps will be set to the correct time zone.







Getting the most out of MAT’s Microsoft Translator provider

Thu, 09/25/2014 - 11:32

Recently I was contacted by a developer using MAT with a question: “Why are the results from the Microsoft Translator Provider in MAT different from those on the Translator website?” Well, that is a great question. Let me answer it by showing some of the configuration options available for that provider.

The quick answer: 

The release of MAT v3.0 added support for Microsoft Translator’s Hub (See: for specifics). MAT uses the ‘Tech’ category by default, as this is geared more towards software terminology. The Translator website uses the ‘General’ category when processing requests. The good news is that this is configurable if the Tech category does not fit your needs.

Here is how to configure MAT to match that of the Translator website.
  1. Open Notepad as Administrator
  2. Open MAT’s Microsoft Translator configuration file . It is located at "C:\ProgramData\Multilingual App Toolkit\MicrosoftTranslatorProvider\TranslatorSettings.xml"
  3. Change "<Category>Tech</Category>" to "<Category>General</Category>"
  4. Save the file

Please adjust the above path if your %ProgramData% environment variable is different.
Be sure to restart the Editor (or VS) to use the updated configuration.

Since we are here, let’s discuss some of the other options as well…

Here is a sketch of the file's shape (your copy may differ in detail):

<Provider>
  <Category>Tech</Category>
  <Protocol>HTTP</Protocol>
  <!-- one <Language> mapping per supported language -->
</Provider>

<Category> element

As indicated above, the <Category> element is used to add Microsoft Translator’s Hub functionality into MAT’s translation services.  However, this is not limited to the General and Tech categories.  If you have a custom Hub (or your friend does), you can set the <Category> value to their Hub and take advantage of their customized translations directly within MAT.

<Protocol> element

Looking at the configuration file, you probably noticed the <Protocol> element.  The Microsoft Translator APIs allow for translation requests using HTTPS.  By default, this is set to HTTP – as indicated by the <Protocol> value.  Changing this to HTTPS will access the Microsoft Translator Service over the SSL protocol.

<Language> elements

When you look at the <Language> elements, you might be asking yourself, “Why are all the regional languages defined separately?” The answer is that the Microsoft Translator service uses a language-neutral approach to generating translations. For example, French (France) is slightly different from French (Canada), but most words and phrases are common to both. To enable the supported languages and indicators, we map the Microsoft Translator neutral languages (FR) to the language-specific codes (fr-FR, fr-CA, etc.). This allows us (and you) to fine-tune the support to ensure the alignment is as you desire.
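Conceptually, this mapping works like a simple lookup table. Here is an illustrative sketch (the table contents and the fallback rule are assumptions for illustration; the real mapping lives in the XML configuration file):

```python
# map language-specific codes to the Microsoft Translator neutral language
NEUTRAL = {"fr-FR": "fr", "fr-CA": "fr", "en-US": "en", "en-GB": "en"}

def neutral_language(code):
    # fall back to the part before the hyphen if the code is not listed
    return NEUTRAL.get(code, code.split("-")[0])

print(neutral_language("fr-CA"))  # prints: fr
```

The point of keeping each regional code as its own entry is that individual mappings can be adjusted independently when the default alignment is not what you want.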

As you can tell, the configuration file is pretty straightforward.  I hope this helps you understand some of the options that you have when using MAT and the Microsoft Translator Provider. 

Thank you,
The Multilingual App Toolkit team
User voice site:

TFS on Azure (IAAS) Planning and Ops Guide v1.4.2 update published

Thu, 09/25/2014 - 10:48

We are pleased to announce the v1.4.2 update of the TFS Planning and DR Avoidance Guide, which includes revisions based on real-world feedback.

The only artefacts affected by this update are the TFS on Azure IaaS Guide PDF and the everything zip package, which includes all the guides, the planning workbook and the quick reference posters.




  summary of changes
  • New section
    • Azure Network - Planning Authentication
  • Revised sections
    • Azure Network - Planning your Domain
    • Domain Controller walkthrough
    • Data Tier (DT) Server walkthrough
    • Application Tier server walkthrough
    • Build server walkthrough
special thanks

A special THANK YOU to Chris Margraff who made this update happen!

please send candid feedback!

We can’t wait to hear from you, and learn more about your experience using the guidance. Here are some ways to connect with us:

  • Add a comment below.
  • Ask a question on the respective CodePlex discussion forum.
  • Contact me on my blog.

Working with Names and Name Based Attributes

Thu, 09/25/2014 - 10:30

I’d like to take a minute to discuss something that can be a real pain when deploying an identity management solution: names. As anyone who has deployed or managed a large scale IdM solution can attest, names can be a real hassle. Proper casing, uniqueness, length limits and titles/preferred names all make for a real challenge sometimes. So, if we have decided to deploy an IdM solution (such as FIM) to programmatically handle our user management, can we, from a fully autonomous approach, overcome these hurdles without drawing the ire of our user base? The answer is, yes, we can…for the most part. It’s important to remember that this really is a “numbers game”. It’s easy to write logic to make everyone happy in an organization of 500 users. This may not, however, be the case in an organization with 500,000 users. As the size of our user base increases, so does the potential complexity of name/accountname logic. My personal feeling (and what I often convey to customers) is that, if out of an organization of 500,000 users I still have to manually administer 100 users, that means I will never have to touch the other 499,900 ever again. To me, that is a win. The other thing I would urge you to ask yourself is, “is it worth it?”. By that I mean, if I can implement logic that handles 99.99% of all users, does it really make sense to spend hours (if not days) figuring out the logic to automate the management of a handful of people (0.01%)?

With that in mind, let’s start by talking about names. More specifically, let’s talk about first, middle and last names. I’m a big fan of using these to build out other attributes (such as accountName, mailNickName, etc.). So before we even begin to look at those other attributes, let’s first get first, middle and last looking good. We are assuming this data is being fed from somewhere (such as an HR data feed). If this user data is coming from a database, it may well come in as all uppercase. When it comes to proper casing names, we have a few options on how to handle that. Option one is to use a set/workflow/MPR within the FIM portal. For example, you could create a “New User Attribute Builder” workflow that proper cases names, builds accountName, etc. In this case, you might have a couple of activities that look something like:

In many cases, this might be fine. However, there is an issue that exists here. What happens if I have a defined precedence that goes something like this: HR -> FIM -> AD? Under this scenario, if the data coming from HR is always authoritative over the data in FIM, my (now properly cased) names in FIM will be overwritten the next time a sync job runs. This could cause a cycle where a user’s name is proper cased, the HR sync runs and exports to FIM overwriting the name as all uppercase, the workflow proper cases it again, and so on endlessly. Some admins have overcome this by creating a custom attribute that essentially marks the user object as being “managed by FIM”. By doing so, after these values are set initially, they are not modified by HR (even though it has precedence).

 Another method of addressing this is to do the conversion directly in the inbound user synchronization rule. This can be easily done by use of a function on the “source” tab of the inbound attribute flow, as shown:

This method, however, is not without fault. The downside here is that this evaluation will occur every time the sync rule runs. This could theoretically slow down imports and syncs. At the end of the day, the decision here must be made by you based on your own environment.


However we do it, once we have arrived at the point where our first, middle and last names have been proper cased, we can then move on to building account names. The real trick here is to do so in a way that guarantees uniqueness across the organization. To use the example above, this may be relatively easy in an environment of 500 users, but what about 500,000? Please note that for the following scenarios, we are using a custom activity workflow to generate unique values. For smaller environments, an activity such as this may be sufficient:

Here, we are doing a simple first initial + last name. For user John Doe, the resulting AccountName would be jdoe.


With the addition of a “uniqueness key seed”, if jdoe is taken and another user (Jim Doe, for example) comes in, their AccountName would subsequently be jdoe2. For smaller environments, this may be a perfectly acceptable approach. For larger environments, however, this might possibly result in users with AccountName values of jdoe47. Likewise, this also fails to address uniqueness in Active Directory. Fortunately, however, we do have the ability to do LDAP queries directly, as illustrated here:

In this case, we are querying LDAP to determine uniqueness (and not just within FIM). Also, you may notice not only the inclusion of MiddleName, but also the proper casing occurring here (rather than in the sync rule). For user JOHN ADAM DOE, the above three value expressions would result in the following three account names:





In any of these cases, by also using a uniqueness key seed, Jim Doe no longer becomes an issue. The seed would only be used in cases such as:





Specifically, in the third example, a user with the same first name and middle initial (John Allen Doe and John Adam Doe, for example) would have to exist within the organization.
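The uniqueness key seed behavior described above can be sketched in a few lines of Python (a stand-in for the FIM activity, not FIM syntax; the taken set plays the role of the FIM/LDAP uniqueness query):

```python
def unique_account_name(first, last, taken):
    # first initial + last name, lowercased (e.g. John Doe -> jdoe)
    base = (first[0] + last).lower()
    if base not in taken:
        return base
    # append the uniqueness key seed (2, 3, ...) until the name is free
    seed = 2
    while base + str(seed) in taken:
        seed += 1
    return base + str(seed)

taken = {"jdoe"}                                  # John Doe already exists
print(unique_account_name("Jim", "Doe", taken))   # prints: jdoe2
```

In a large environment, the loop illustrates exactly why jdoe47 can happen: the seed just keeps climbing until a free name is found.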


Even with this approach, there is still, however, a potential issue. In examples 2 & 3 above, what happens if the middleName attribute is not present? The resulting AccountName would be:




This can be overcome with the addition of an “IsPresent” check. For example:

Since the entire Value Expression is not visible in the above image, here they are in full:

[//Target/FirstName] + "." + IIF(IsPresent([//Target/MiddleName]), Left([//Target/MiddleName],1), "") + IIF(IsPresent([//Target/MiddleName]), ".", "") + [//Target/LastName]

 [//Target/FirstName] + "." + IIF(IsPresent([//Target/MiddleName]), Left([//Target/MiddleName],1), "") + IIF(IsPresent([//Target/MiddleName]), ".", "") + [//Target/LastName]+[//UniquenessKey]


By doing so, user John A. Doe would receive an Account Name of John.A.Doe, while user John Doe would simply be John.Doe (and not John..Doe). You may also notice the use of “Left” in the above examples. “Left” is a function we can use to take a certain number of characters from the start of a value. In the example of:


We would start at the beginning and count over 1 character (producing the first initial). Technically, there is no limit on the number of characters we can count (up to the full length of the value). For example:



Would return: Bart


There are also functions for “Right” (which counts backwards from the end) and “Mid” (which starts in the middle).
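In Python terms, the IIF(IsPresent(...)) pattern above corresponds to a conditional concatenation like the following (an illustrative translation, not FIM syntax):

```python
def account_name(first, last, middle=""):
    # mirrors IIF(IsPresent(middle), Left(middle, 1) + ".", "") from the value expression
    middle_part = middle[0] + "." if middle else ""
    return first + "." + middle_part + last

print(account_name("John", "Doe", "Adam"))  # prints: John.A.Doe
print(account_name("John", "Doe"))          # prints: John.Doe (not John..Doe)
```

The key point carries over directly: both the middle initial and its trailing dot are emitted only when the middle name is present, which is what avoids the double dot.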

It is also worth noting at this point that this same logic can be used when building values such as DisplayName. In terms of Active Directory, DisplayName must be unique per container, but not forest wide. Also, depending on how your organization handles email addresses, it may be useful to recycle the bits above (since we’ve already determined AccountName to be unique). The activity here may be as simple as:

Finally, there are a few other considerations when it comes to handling names with FIM. The attribute sAMAccountName in Active Directory, for example, has a maximum length of 20 characters. For compliance, we can easily use the function “Trim” (or even “Left”) to grab the first 20, but this may be confusing for users whose name is far longer than 20 characters. Likewise, it may also be worth considering titles (such as “Dr.”) when handling names. Let’s say we’d like our DisplayName to be in the following format:

“LastName, FirstName Middle Initial” (i.e. Doe, John A.)


How do we handle it if Mr. Doe is a doctor? The cleanest solution, in my opinion, is to create a custom attribute in FIM to hold this title. Then, as shown above, we could use an “IIF(IsPresent” statement for the new attribute.

“LastName, Title (if present) FirstName Middle Initial” (i.e. Doe, Dr. John A.)


If the title attribute were not present, it would not be included (and neither would an additional whitespace).


New resources for performance problems

Thu, 09/25/2014 - 10:26

Good afternoon,

Several articles have recently been published on the global Dynamics AX support team blog:

They contain very complete and important information for analyzing and resolving performance problems with Dynamics AX.

"Managing general performance issues in Microsoft Dynamics AX":

"AX Performance Troubleshooting Checklist Part 1A [Introduction and SQL Configuration]":

"AX Performance Troubleshooting Checklist Part 1B [Application and AOS Configuration]":

"AX Performance Troubleshooting Checklist Part 2":

These articles are worth keeping as "favorites" in your browsers.


All About Load Test Planning (Part 5-Load Profile Additional Considerations)

Thu, 09/25/2014 - 09:15

In the previous post, I showed you how to come up with the profiles to use in a test as well as the numbers to plug into each profile. I also showed two fairly simple examples of load profiles that you might generate. In this post, I will show you some more examples of profiles, as well as some of the gotchas from these profiles. All of these are taken from real engagements I have performed, although the data and information are completely sanitized.

Example 1: Too Many Use Cases

This customer had defined eight different use cases to use for the load profile, but they provided FIVE sets of numbers to use for the loads on each use case. The five different sets of numbers represented five different business cycles in their industry, and they felt that it was important to see if the system could handle the load expected in each of the cycles:

Use Case     Profile 1   Profile 2   Profile 3   Profile 4   Profile 5
Read Only    100         30          50          50          50
Active       0           0           20          20          20
Generate     120         50          60          60          60
Regenerate   0           150         20          20          20
Sign Off     0           0           50          200         50
Archive      0           0           25          100         300
Modify       2000        5000        4000        2000        2000
No Change    2000        1500        4000        2000        2000

As we looked at the table, and we started adding up all of the different load tests we would need to execute, we realized that we would not have enough time to complete every one of the desired tests. When I looked at the numbers, I noticed that there wasn’t too much difference between the load in Profile 3 and other profiles except for the last two use cases. I suggested that we build a new profile that used the highest count from each use case and run that. If it passed our criteria, then we knew that all of the individual profiles would pass. We could do this because we were testing specifically to see “If the System can handle the expected peak load.” Below was our final profile. The system could handle this load, so we could easily assume that the system could handle any of the loads specified in the profiles above.

Use Case     Final Profile
Read Only    100
Active       20
Generate     120
Regenerate   150
Sign Off     200
Archive      300
Modify       5000
No Change    4000

Example 2: Too Fast

I was brought into an engagement that was already in progress to help a customer who was trying to figure out why the system was so slow when we pushed the load to the “expected daily amount.” The system was taking as long as 120 seconds for some requests to respond and the maximum allowed time was 60 seconds. They said that they were used to seeing faster times when the system was not under load. I started asking them about the load profile and I learned two things that they had not done properly.

  1. They were using the wrong type of load pattern to drive load. They had chosen the "Based on number of tests" pattern when they should have been using the "Based on user pace" pattern. By selecting "Based on number of tests," they were pushing the load harder than they should have (explanation below).
  2. They were using the wrong numbers for the amount of work that an actual user would be expected to perform.

Because of these two items, the workload they were driving was about six times higher than the expected peak load. No wonder the system was slow. I showed them how to rework the numbers and we switched the test profile to user pace. When we ran the tests again, the system behaved exactly as it should.

Comparing “Number of Tests” to “User Pace”

The reason that using the "Based on the number of tests" (or "Based on the number of virtual users") model is NOT good when trying to drive a specific load is that Visual Studio will not throttle the speed of the tests. When a test iteration completes in either of these modes, Visual Studio will wait for the amount of time defined by the "think time between test iterations" setting and then execute the next test it is assigned. Now, if you assume that you know how long a given iteration of a test should take and you use that number to work backwards to a proper pace, you still may not get the right load. Consider this:

  • A given web test takes 2 minutes to complete.
  • You want to have that web test execute 12,000 times in an hour.
  • If you work it backwards, you would see that you could set the test to use 1,000 vUsers and set a “think time between test iterations” of 3 minutes.

This will give you the user pace you want… until you fire up those 1,000 users and run into one of two things that could throw the pace off:

  • the load slows the test down so that an iteration takes 3 minutes. Each user now completes a cycle every 6 minutes (3 minutes of test plus 3 minutes of think time), so your pace is not 12,000/hour but 10,000/hour.
  • the test runs on a faster system (or something else causes the test to run faster, including performance tuning) and an iteration takes 1 minute. Each cycle now takes 4 minutes, so your pace is 15,000/hour.

If you set the model to "Based on User Pace," Visual Studio will ignore the "think time between test iterations" setting and will create the pacing on the fly. In this case, you set 1,000 vUsers and tell each one to do 12 iterations/hour. Visual Studio will target a total time of 5 minutes for each iteration, including the think time. If the iteration finishes in less than five minutes, Visual Studio waits out the remaining time. If the iteration takes longer than 5 minutes, Visual Studio throws a warning and runs the next iteration with no think time between iterations.

Example 3: Need to use multiple scenarios

Sometimes when you look at the rate that one test needs to execute compared to another test, you may find that you cannot have both tests in the same scenario. For instance, if you have one test that needs to run once/hour and another that needs to run 120/hour, but an iteration of the 120/hour test takes 2 minutes to complete, you can no longer run that test with a single user (120 iterations at 2 minutes each is 240 minutes of work in a 60-minute hour). So you decide to decrease the rate to 30/user/hour and increase the total number of users to 4. Now the once/hour test is running at four times its intended rate. For situations like this, I simply move the tests into two separate scenarios.

You may also find that you have too many tests in a scenario that has “Based on User Pace” to allow a user to complete them all. When you specify the User Pace model, Visual Studio will expect that a single vUser will execute EVERY test in the scenario at the pace defined. Let’s go back to the school test from the previous post. If you look at the scenario for Students, you will see that there are 75 vUsers. Each vUser will have to complete 29 test iterations in an hour to stay on track. Visual Studio does not create separate users for each webtest. Therefore you need to make sure that there is enough time for all of the tests to complete. If not, split them up into separate scenarios.

Example 4: Don’t Count It Twice

This one bites a lot of people. Let’s say I am testing my ecommerce site and I need to drive load as follows:

Use Case      Qty to execute
Browse Site   10,000
Add To Cart   3,000
Checkout      2,000

So you create your three tests and set the pacing up for each. However, you need to remember that *usually* in order to checkout, you have to already have something in the cart, and to add something to the cart, you have to have browsed. If you use the quantities above, you will end up with 15,000 browse requests to the site and 5,000 Add to Cart.

Bottom line: if a test you execute contains requests that fulfill more than one of your target load numbers, account for that in the final mix.

Example 5: Multiple Acceptance Criteria for the Same Item

This is in response to a comment left on my previous post about Scenarios and Use Cases. In this situation, I may have a requirement for the response time for generating a report. Let’s assume that the requirements are:

  • Generation of the report must be < 2 seconds for <500 rows
  • Generation of the report must be < 7 seconds for <10,000 rows

First, I would need to get more info from the business partners.

  • Is the user type the primary reason for the big size difference? (a sales clerk checks the sales he/she has performed today vs. a store manager checking all of the sales by the entire staff?).
    • I would add a new use case to the manager scenario and a separate use case in the sales scenario of the test plan and move forward as normal.
  • Is a parameter passed in, or the query being executed, the primary reason? (same as the first example, but the same person runs both reports)
    • I would ask the business partner what the likelihood of either happening is and then I would devise a set of data to feed into the test that would return results close to each number. I would probably then create two different web tests, one for each query set and give them names that indicate the relative size of the response. Then I could easily see how long each one took.

It is also worth noting that you can have the same request show up multiple times in a webtest and change the way it gets reported by using the "Reporting Name" property on the request to display the relative size.

Example 6: To Think or not To Think

I covered this topic in a separate post, but I am adding a pointer to it here because it applies directly to this topic, and if you have not read my other post, you should. The post (“To Think or not to Think”) is here.

Power BI August Roundup

Thu, 09/25/2014 - 09:00

We are a little bit late with this roundup (just like the Power Query update). August was a great month for Power BI and Excel updates. We have lots of great content to share with you. To start off, we received tons of comments on our blog posts. Here's our favorite from the latest Power Query update:

We love it too! Thank you all for your thoughts. For our August Roundup, we’ve gathered a number of great articles for you to read at your leisure including content on data visualization and Power BI demos:


August Product Updates

08/19/14 - Scheduled Data Refresh Update: New Data Sources

09/02/14 - 7 new updates in Power Query

August Power BI Articles

08/13/14 - Data Visualization for the 2014 World Cup results using Excel and Power BI: Marc Reguera updates his analysis on the World Cup history using information from the 2014 World Cup. Looks like there's a new world order in soccer

08/14/14 - Power BI is changing the way health services are provided: Tom Lawry gives us several examples on how Power BI is being used to change the way health services are provided around the world

08/25/14 - Visualizing the Primetime Emmy History: with the Emmys happening in August, I wanted to get more insights on the history of these awards. This is the result

08/26/14 - Best practices for building hybrid business intelligence environments:  Joseph D'Antoni and Stacia Misner illustrate in their white paper the power of hybrid solutions using Power BI

08/26/14 - Power BI Data Management Gateway 1.2 Changes the Game: John P. White explains why he thinks the new capability of the data management gateway that allows Power BI to connect to on-premises data sources is so important

08/26/14 - Waterfall chart with Power Pivot: Philipp Lenz shows us a very creative way to build waterfall charts using Power Pivot


Hope these articles are useful to you. As always, don't forget to send our way any interesting posts or articles you find about Excel and Power BI! We are always looking to share great Power BI content with our community.

Reach us @MSPowerBI with the hashtag #PowerBIroundup

FREE Game Templates in Construct 2

Thu, 09/25/2014 - 07:48

Hello guys, I have been creating several game templates that should help you build faster, more polished products, or help you learn Construct 2 much faster.

In the following weeks I should find some time to polish these templates and get them all up to date. You will notice that some are more polished in their behaviors and graphics than others. Also, to get them ready for GitHub, I should make certain that I have translated all comments and create a nice description for each project.

In any case, I hope you enjoy the templates; many more should be available soon.

• Roguelike Alien

Top down adventure with randomly generated scenarios. Virtual thumbstick for touch controlled devices. Keyboard support. Polished art from @KenneyWings. Shadow management for Mobs. Full game already published in the store:

• Chili Zombies
Side scrolling shooting. Gamepad implementation. Ready to use mouse, keyboard and touch.

• The Falling
Game that I reserved to explain in around 45 mins to students. Many of the videos in my channel ( ) explain the details of this one.

• Doodle Bombs
Platformer with my own kind of twist. Perfect to explain the bullet behavior.

• Falling Xmas
Xmas themed platformer.

• Flappy In the Storm
Riding the Flappy Bird wave. Using a C2 template I polished and completed a full game.

• Pumpkin Escape
My take on Doodle Jump. Infinite jumper with a few twists like falling zombies.

• Santa Vs Zombies
My kind of Xmas. Infinite Runner.

• S G Runner
One of my first infinite runners. Not the best, but simple to modify.

• S G Storm
Copter like game.

• Super G
My first Infinite Jumper. It has inclinometer support.

• Tainted Love   (Yes, I do like to create stupid links that put weird ideas in your head. I have issues, I know :)
Valentine’s themed platformer.

• Super G Invaders
Not the best graphics, but it was my take on Space Invaders.


You may see that some games are much more polished than others; that is because I was learning C2 at the same time I was publishing the games. So, you should be able to find games for all tastes and expertise levels.


Let me know what you think of them.

Controlled Vocabulary 101 … typed at the most stunning office!

Thu, 09/25/2014 - 07:33

Yesterday I enjoyed listening to Hyper-V, PowerShell and other 933>|m (aka geeky) MVPs sharing their knowledge, experience and passion at the Canadian MVP Days Community Roadshow 2014.

During one of the breaks, I needed a quick reboot break and sat at the harbour, in what must be the most beautiful office I have ever had the pleasure to work in  

While enjoying the tranquil beauty, I decided to answer a question I received from a colleague about CV Tags.

controlled vocabulary 101

We use the Controlled Vocabulary (CV) Outlook Add-In to decorate our emails with a CV tag, which (if consistent) can be used to effectively drive mail rules. Email rules allow you to organise your email into folders, raise triggers and increase your productivity even when having to deal with (lots of) email.

Our project teams are geographically distributed, part-time, volunteer driven and competing with family and job responsibilities. As we cannot simply pick up the phone (we could, but waking Brian at 3AM is not generally a good idea), or fire up a messenger conversation, our core collaboration tool of choice is therefore email, resulting in email, lots of email. To process the "normal" email and the "Rangers" email effectively, we have become reliant on the Controlled Vocabulary and consistent tags.

Peruse FAQ – How can I determine which of the 100’s of ALM Ranger emails is important to “Gregg”? for more details on why we use it.

if you are an alm ranger, where do you find the bits?


  1. Close Outlook
  2. Install Controlled Vocabulary
  3. Download and run this configuration file: MSCommunity
  4. Select the buttons (i.e. champs, rangers) you wish to add and click Add Selected
  5. Start Outlook and use the Controlled Vocab menu to create emails and meeting invites
if you are an alm ranger, how do you use it?

Send a general email or get email usage guidance.
  • Select Controlled Vocab tab.
  • Select ALM Rangers button.
  • Select Email Usage Guide to get guidance or a tag, i.e. Chatter, to send a general chatting-type email.
  • Note that list of addressees and priority of email can be preconfigured.
  • Revise the CV tag style subject [ Chatter Rangers ] PleaseCompleteSubject and replace the PleaseCompleteSubject placeholder with your subject.
  • HINT:
    Create an email rule to delay emails with the PleaseCompleteSubject tag in the subject line to ensure you do not forget to update the subject.
Schedule a meeting.
  • Select Controlled Vocab tab.
  • Select ALM Rangers button.
  • Select Meeting and the type, i.e. kick-off, to create a meeting.
  • Revise the CV tag style subject [ Kick-off Rangers ] PleaseCompleteSubject and replace the PleaseCompleteSubject placeholder with your meeting subject.
    [ Kick-off Rangers ] vsarDevOps – Unicorn
Send an ALM technology email.
  • Select Controlled Vocab tab.
  • Select ALM Rangers button.
  • Select Visual Studio ALM.
  • Select the relevant technology, i.e. Build.
  • Revise the CV tag style subject [ ALM Build Rangers ] PleaseCompleteSubject and replace the PleaseCompleteSubject placeholder with your meeting subject.
    [ ALM Build Rangers ] How about updating the guidance?
Send a Ranger project email.
  • Select Controlled Vocab tab.
  • Select ALM Rangers button.
  • Select Project Collaboration.
  • Revise the CV tag style subject [ vsar@@ ] PleaseCompleteSubject, replace the PleaseCompleteSubject placeholder with your subject and the @@ with the project code.
    [ vsarDevOps ] Unicorn rocks!
    vsar prefix = Visual Studio ALM Rangers

Most importantly, do not forget to create mail rules to filter and/or prioritise emails, based on the CV tag. For example, move all incoming and outgoing emails with the CV Tag [ vsarDevOps ] to the vsarDevOps mailbox folder.

Common questions we get:

  • Where do I find the project code to replace the @@ in the project collaboration vsar@@ placeholder?
    The project codes are shared at the kick-off meetings, are the same as the folder name in source control and, worst case, can be obtained from the project lead or program manager of the team.
  • Why do we not have the project codes in the vocabulary? Why must I replace @@ with every email?
    Simplicity! We have numerous projects, which would result in a long list. We also have a lot of project code churn, which would result in continuous vocabulary maintenance and require users to refresh the vocabulary.
… what if you do not use CV?

Generally not much happens … unless you email someone who has managed to effectively reduce mail inbox maintenance using a platter of mail rules, reliant on CV tags.

Initially, replies may mention "+ CV tag" and get progressively more aggressive. Worse, your emails may get lost in a lo……………………………………………………ng queue of "untagged email", resulting in delayed responses.

Remember … tag it!

… what if you are not an alm ranger?

Download it, evaluate it and enjoy the productivity gain it delivers in high-volume email collaboration environments.

last but not least

Thank you Michael Fourie for this great tool!

If a process crashes while holding a mutex, why is its ownership magically transferred to another process?

Thu, 09/25/2014 - 07:00

A customer was observing strange mutex ownership behavior. They had two processes that used a mutex to coordinate access to some shared resource. When the first process crashed while owning the mutex, they found that the second process somehow magically gained ownership of that mutex. Specifically, when the first process crashed, the second process could take the mutex, but when it released the mutex, the mutex was still not released. They discovered that in order to release the mutex, the second process had to call Release­Mutex twice. It's as if the claim on the mutex from the crashed process was secretly transferred to the second process.

My psychic powers told me that that's not what was happening. I guessed that their code went something like this:

// code in italics is wrong
bool TryToTakeTheMutex()
{
    return WaitForSingleObject(TheMutex, TimeOut) == WAIT_OBJECT_0;
}

The code failed to understand the consequences of WAIT_ABANDONED.

In the case where the mutex was held by the first process when it crashed, the second process will attempt to claim the mutex, and it will succeed, and the return code from Wait­For­Single­Object will be WAIT_ABANDONED. Their code treated that value as a failure code rather than a modified success code.

The second program therefore claimed the mutex without realizing it. That is what led the customer to believe that ownership was being magically transferred to the second program. It wasn't magic. The second program misinterpreted the return code.

The second program saw that Try­To­Take­The­Mutex "failed", and it went off and did something else for a while. Then the next time it called Try­To­Take­The­Mutex, the function succeeded: It was a successful recursive acquisition, but the program thought it was the initial acquisition.

The customer didn't reply back, so we never found out whether that was the actual problem, but I suspect it was.

How to always open the Role Center on startup

Thu, 09/25/2014 - 07:00

Role Centers are role-specific home pages that provide an overview of information that pertains to a user's job function in the business or organization. With Dynamics AX 2012, when a user logs in, the application displays the area page of the last visited module, and not the Home/Role Center page.

Dynamics AX 2012 was designed this way as most of the users prefer to find the application where they left it, and don't want to have to navigate back to the module they were using.

Unfortunately there isn't any parameter to change this behavior and some customers would like to always have the Home/Role Center displayed when they log in.

We thought about various workarounds to make this work so the Role Center always displays on login, and the simplest seems to be the following:

  1. Open a Developer workspace
  2. In the AOT, find the class named Application
  3. Double-click the startup method
  4. Add the following line of code at the end of the method: infolog.navPane().selectedGroup('Home');
  5. Save and compile

I hope this is helpful!




A big Microsoft Student Partners (MSP) Asia-Pacific get-together!

Thu, 09/25/2014 - 04:13


Hello! This is Matsubara from Microsoft Student Partners (MSP).

Can you guess what is happening in this photo?

It is an online meeting that brought together Microsoft Student Partners (MSP) from across Asia-Pacific!

The first MSP Asia-Pacific Meeting was held on Saturday, September 20, and about 60 MSPs joined from all over Japan and around the world.


Drupal 7 Appliance - Powered by TurnKey Linux