MSDN Blogs

Get the latest information, insights, announcements, and news from Microsoft experts and developers in the MSDN blogs.

Data rollback

Wed, 07/06/2016 - 06:23

A major challenge in testing, particularly for data-driven applications like ERP systems, is managing data during test execution. For a test to execute reliably, it needs to start from a known state… every single time. How can we make that happen reliably? Fortunately, in AX7 a significant testability feature was added to the SysTest framework that enables this behavior.

Before describing the new feature, let’s dive into the problem a bit further. A book that I highly recommend, xUnit Test Patterns: Refactoring Test Code, uses the term fixture to describe the setup for a test. There are several different fixture patterns, but I will focus on only a couple.

The fresh fixture pattern creates a new environment for each test method. This is the most common approach for tests at the bottom of the test pyramid, primarily unit tests (see this post on the test pyramid). This typically produces the most reliable tests as there is no possible carryover from previous tests.

The shared fixture pattern creates an environment that is reused by many tests. As tests get broader in scope, as in integration or business cycle tests, more and more data needs to exist for the test to execute in a reasonable amount of time. This is where a shared fixture is frequently used, but it has its downsides. A significant challenge can be inadvertent state carryover that impacts downstream tests. This situation impacts test reliability, can be very frustrating to debug, and results in lost confidence in the test suite.

An ERP system, with its mass of interconnected back-end data, lends itself to a shared fixture for tests broader than unit tests. Internally, we’ve come up with different approaches to address the data sharing and fixture teardown challenges over past releases: things like test suites and various isolation approaches. Most of these were “bolt-ons” to the SysTest framework, and all ultimately had some drawbacks.

For AX7, we focused on reliably rolling back data at different layers directly within the SysTest framework. To accomplish this, we leveraged the transaction savepoint feature in SQL Server. Savepoints are created at the start of each class and each method, enabling nested rollback to these known states as part of the teardown process. Because it leverages native SQL capabilities, this approach is both reliable and fast.
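
To illustrate the underlying SQL mechanism (a conceptual sketch only, not the framework’s actual code; the savepoint names are invented), nested savepoints in T-SQL behave like this:

BEGIN TRANSACTION;

SAVE TRANSACTION ClassFixture;      -- savepoint taken when the test class starts

-- ... class-level setup writes data here ...

SAVE TRANSACTION MethodFixture;     -- savepoint taken when each test method starts

-- ... the test method writes data here ...

ROLLBACK TRANSACTION MethodFixture; -- method teardown: undo only the method's writes
ROLLBACK TRANSACTION ClassFixture;  -- class teardown: undo the class-level writes too

COMMIT TRANSACTION;                 -- close the outer transaction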

Data rollback using SQL transaction savepoints is the default behavior for all SysTest classes. You can choose to disable it at the class level using the SysTestTransaction attribute with the ‘NoOp’ value. This might be helpful while debugging tests, but you should almost always take the default behavior.
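
As an illustration, here is a minimal X++ sketch of opting a test class out of the rollback. The attribute name and the ‘NoOp’ value come from the paragraph above, but the enum type name and the class shape are assumptions:

// Sketch only: the enum type name below is an assumption for illustration.
[SysTestTransaction(SysTestTransactionValue::NoOp)] // disable savepoint-based rollback
class MyDebugScenarioTest extends SysTestCase
{
    public void verifySalesOrderData()
    {
        // With NoOp, data written by this test is not rolled back afterwards,
        // which can make the resulting state easier to inspect while debugging.
    }
}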

Now that we have reliable and fast data rollback, we’ve discouraged the use of legacy test isolation mechanisms. This has resulted in a big reduction in ‘flaky’ tests… and that has been a huge benefit to our engineering efforts!

Automate HDInsight Cluster Creation using the .NET SDK

Wed, 07/06/2016 - 05:57


This blog post shows how to automate HDInsight cluster creation using the .NET SDK. The information is available only in pieces elsewhere, so I thought I would blog about it so that I can refer to it in the future and it may help others.

This blog post shows how it can be done, but when you execute the program it will prompt for authentication, which will not work when you want to automate the entire process.

We want to automate the process, meaning no manual intervention should be required to provide credentials for authentication. In the steps below, I set up an application in Azure AD (Active Directory), which is a must if you want to automate the process, grant it access to the resource group, and then use .NET code to set up the HDInsight cluster.

In this scenario, I have a test resource group and a storage account in it, created from the Azure (Ibiza) portal.

The first step is to create an Azure Active Directory application and service principal to access resources in Azure. Please follow the instructions here to set it up.

Once the Azure Active Directory application and role assignment are configured, it will look like the image below.

Once that’s done, open Visual Studio 2015 and create a new console application.

Add the references mentioned here.

Download the code from here and paste it in.
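
In case the linked sample is unavailable, below is a minimal C# sketch of the unattended flow, assuming the 2016-era Microsoft.Azure.Management.HDInsight and ADAL (Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet packages; all IDs, names, and secrets are placeholders, and property names can differ between SDK versions.

using System;
using Microsoft.Azure;                                 // TokenCloudCredentials
using Microsoft.Azure.Management.HDInsight;
using Microsoft.Azure.Management.HDInsight.Models;
using Microsoft.IdentityModel.Clients.ActiveDirectory; // ADAL

class Program
{
    static void Main()
    {
        // Authenticate as the Azure AD application (service principal):
        // no interactive prompt, so the flow can run fully unattended.
        var authContext = new AuthenticationContext("https://login.microsoftonline.com/<tenant-id>");
        var clientCred = new ClientCredential("<application-id>", "<application-key>");
        AuthenticationResult token = authContext
            .AcquireTokenAsync("https://management.core.windows.net/", clientCred)
            .Result;

        var client = new HDInsightManagementClient(
            new TokenCloudCredentials("<subscription-id>", token.AccessToken));

        // Cluster definition: a small Linux Hadoop cluster backed by the
        // storage account created in the resource group earlier.
        var parameters = new ClusterCreateParameters
        {
            Location = "West US",
            ClusterType = "Hadoop",
            OSType = OSType.Linux,
            Version = "3.4",
            ClusterSizeInNodes = 2,
            DefaultStorageAccountName = "<storage-account>.blob.core.windows.net",
            DefaultStorageAccountKey = "<storage-account-key>",
            DefaultStorageContainer = "<container>",
            UserName = "admin",
            Password = "<cluster-login-password>",
            SshUserName = "sshuser",
            SshPassword = "<ssh-password>"
        };

        // Blocks until provisioning finishes; you can watch progress in the portal meanwhile.
        client.Clusters.Create("<resource-group>", "<cluster-name>", parameters);
        Console.WriteLine("HDInsight cluster created.");
    }
}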

Execute the code.

Switch to the Azure portal to monitor whether the request has been accepted.

After some time, you will get a message that the HDInsight cluster has been created.

Hope it helps.

Eat Healthy, Stay Fit and Keep Learning!

Create your personalized Twitter Analytics Dashboard in Power BI in 10 minutes!

Wed, 07/06/2016 - 05:38

Create your own personalized Twitter Analytics Dashboard in Power BI

When do I Tweet the most? What day and hour do my Tweets receive the most interactions and impressions? With Power BI, these questions are very simple to answer. In this blog, you’ll learn how to export your Twitter Analytics stats to use as a data source in a pre-built Power BI template provided here. Once you import the data into the template, you can explore many other interesting metrics, such as the number of retweets, link clicks, profile clicks, and more.

The first step to creating this simple dashboard is to export your Twitter stats from the Twitter Analytics site:

  1. Navigate to the Twitter Analytics site (analytics.twitter.com)
  2. Click on “Tweets” on the top ribbon
  3. On the right, you’ll see a button that defaults to “Last 28 Days”; change this to the past 90 days (the maximum allowed), and click on “Export Data” to export the .csv

Next, download the Power BI Template from here and continue with the following steps:

  1. Download the attached TwitterAnalyticsDashboardTemplate.pbit file and open it in Power BI Desktop
  2. Click on “Edit Queries” in the top ribbon
  3. On the right side, you’ll see “Applied Steps”. Click on the gear icon next to the first step titled Source.
  4. Change the file path to point to your exported stats and click on OK (see the sketch after this list)
  5. Click “Close & Apply” in the top ribbon bar and your dashboard will be populated with your exported stats
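
For reference, the Source step you edit in step 4 looks roughly like the following Power Query (M) expression; the file path and options shown here are placeholders, and the template’s actual options may differ:

// Hypothetical shape of the template's first "Source" step after you point it
// at your exported Twitter stats .csv
Source = Csv.Document(
    File.Contents("C:\Users\<you>\Downloads\tweet_activity_metrics.csv"),
    [Delimiter = ",", Encoding = 65001, QuoteStyle = QuoteStyle.Csv])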

Sam Lester (MSFT)


How to install Office using a Provisioning Package

Wed, 07/06/2016 - 05:17

Hi everyone,

Sorry it took me quite a while to write this article about how to install Office using a Provisioning Package (PPKG).

As you might know, there are some challenges deploying Win32 apps using a PPKG because of a limitation on the files you can upload inside a PPKG for use with the “ProvisioningCommands” feature. Indeed, it’s not currently possible to have a folder structure within the “CommandFiles” field; only files are allowed, not folders. This makes importing the Office sources a big challenge since Office has quite a deep folder structure.

Fortunately there’s a workaround for that!

Let me show you how to install Office using a PPKG. In the explanation below, I will use the Click-to-Run version of Office 2016, but it would work the same for Office 2016 Professional Plus (you might have to modify the PowerShell script to reflect the correct installation command line).

Creating the ZIP file containing the Office source

The first thing you need to do is download the Office 2016 Click-to-Run source. To do that, you need a free tool called the “Office 2016 Deployment Tool”, which you can download here.

  • Create a folder named “C:\O365” on the computer where you installed Windows ICD
  • Extract the Office Deployment Tool inside the “C:\O365” folder.
  • Edit the “Configuration.xml” file using Notepad. I put an example below.
<Configuration>
  <Add SourcePath="" OfficeClientEdition="32" Branch="Current">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
    </Product>
  </Add>
  <!-- <Updates Enabled="TRUE" Branch="Current" /> -->
  <Display Level="None" AcceptEULA="TRUE" />
  <Logging Level="Standard" Path="C:\Windows\Debug" />
  <!-- <Property Name="AUTOACTIVATE" Value="1" /> -->
</Configuration>
  • Open a Command Prompt and run the following command to download the Office source:
    C:\O365\setup.exe /download configuration.xml
    • That will download the Office source inside the “C:\O365” folder
  • ZIP the “C:\O365” folder into a ZIP file; the installation script below extracts any *.zip placed next to it, so the exact file name is up to you (see the example command right after this step).
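
If you want to script the ZIP step as well, here is a minimal PowerShell sketch (the destination file name is just an example):

# Zip the C:\O365 folder itself (not only its contents) so that extracting the
# archive into $env:TEMP recreates the $env:TEMP\O365 working directory the
# installation script expects. The destination name below is an example.
Compress-Archive -Path 'C:\O365' -DestinationPath 'C:\O365source.zip' -Force
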
Create the PowerShell script to install Office silently
  • Copy the sample code below and save it into a file called “Start-ProvisioningCommands.ps1”
<#
.SYNOPSIS
   Office 2016 installation sample script
.DESCRIPTION
   Extract the ZIP and run the Office setup.exe with the configuration file as a parameter
#>
[CmdletBinding()]
[Alias()]
[OutputType([int])]
Param
(
    [Parameter(Mandatory=$false,
               ValueFromPipelineByPropertyName=$true,
               Position=0)]
    $Log = "$env:windir\debug\Start-ProvisioningCommands.log"
)

Begin
{
    # Start logging
    Start-Transcript -Path $Log -Force -ErrorAction SilentlyContinue

    # Extract every ZIP file that sits next to this script
    Get-ChildItem -Path $PSScriptRoot -Filter *.zip |
        ForEach-Object { Expand-Archive -Path $_.FullName -DestinationPath "$env:TEMP" -Force }
}
Process
{
    # Office 2016 installation
    $WorkingDirectory = "$env:TEMP\O365"
    $Configuration = Get-ChildItem -Path $WorkingDirectory -Filter *.xml | Select-Object -First 1

    # Point the configuration file's SourcePath at the extracted folder
    [XML]$XML = Get-Content -Path $Configuration.FullName
    $XML.Configuration.Add.SourcePath = $WorkingDirectory
    $XML.Save($Configuration.FullName)

    # Run Office 2016 setup.exe
    Start-Process -FilePath "$WorkingDirectory\Setup.exe" -ArgumentList ('/Configure "{0}"' -f $Configuration.FullName) -WorkingDirectory $WorkingDirectory -Wait -WindowStyle Hidden

    # If you want to remove the extracted Office source, uncomment below
    # Remove-Item -Path $WorkingDirectory -Recurse -Force
}
End
{
    # Stop logging
    Stop-Transcript -ErrorAction SilentlyContinue
}

Create the Provisioning Package using Windows ICD
  • Open Windows ICD and navigate to [Runtime Settings]>[ProvisioningCommands]>[DeviceContext]
    • Under [CommandFiles], add your ZIP file and the “Start-ProvisioningCommands.ps1” file
    • Under [CommandLine], type the following command, which will install Office silently: PowerShell.exe -ExecutionPolicy Unrestricted .\Start-ProvisioningCommands.ps1

You should get something that looks like this:

Create the PPKG and voilà!

PS: I want to thank my coworker Ryan Hall, a Senior PFE in Australia, who explained this method of deploying Office using a PPKG to me.

Running the allocation step of Wave processing in parallel

Wed, 07/06/2016 - 05:04

This blog post applies to Microsoft Dynamics AX 2012 R3 CU11 and KB 3153040.

Wave processing is used to generate work for the warehouse. The processing can be time-consuming, and the majority of the processing time is spent in the allocation step and the work creation step.

It is now possible to run the wave allocation step in parallel, which can improve the performance of the wave processing, and allow for a larger throughput of waves in the same warehouse.

Previously, it was only possible to allocate one wave per warehouse at a time. This constraint was enforced by using a SQL application lock that essentially locked on the warehouse ID. This constraint has now been removed. A new constraint has been introduced so that locking is done on the item and the dimensions that are above location in the reservation hierarchy. Dimensions above the location always include the product dimensions, so if an item is configured using Color, variants for Red, Blue, and Yellow could be processed in parallel.

This means that if the same item with the same dimensions above the location is being allocated by one wave, other waves will have to wait to acquire a lock on the same item and dimensions. If the lock cannot be acquired in a timely manner, an error will be thrown and the wave processing will fail.
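
For the curious, the SQL mechanism referred to above is an application lock. Here is a minimal T-SQL sketch of how acquiring such a lock with a timeout behaves; the resource name is purely illustrative, and AX’s internal naming will differ:

DECLARE @result int;
BEGIN TRAN;
-- Wait up to 5 seconds for an exclusive lock on an invented resource name
-- (compare the "Wait for lock (ms)" parameter described below).
EXEC @result = sp_getapplock
    @Resource    = N'Item0042/Red',
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',
    @LockTimeout = 5000;
IF @result < 0
    RAISERROR (N'Lock on the item and dimensions could not be acquired.', 16, 1);
-- ... allocation for this item and its dimensions would happen here ...
COMMIT TRAN;  -- transaction-owned application locks are released on commit/rollback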

In order to utilize parallel processing, the wave needs to run in batch.

Performance improvements – what can be expected

It is hard to predict how much the parallel processing can improve the performance.

The performance benefits fall in two categories:

  1. Improved throughput: The throughput of waves should be improved even if parallel processing is not configured, especially for scenarios where there is no overlap of items within the waves.
  2. Improvement of the allocation for a single wave: Testing on customer data in a 4-core environment using 8 tasks resulted in almost a 50% improvement of the overall processing for larger waves with more than 700 different items and variants. The parallel processing is done per item and dimensions above the location, so the improvement depends on how many different items a wave contains, the infrastructure available, and the duration of the allocation vs. the duration of the work creation.
Configuration

Warehouse management parameters

The following values on the Warehouse management parameters form are relevant. This form is found under Warehouse management > Setup > Warehouse management parameters.

Wave processing batch group: Determines the batch group that the initial processing of the waves should use. The subsequent processing of the allocation can be done using different batch groups.

Process waves in batch: Determines whether the waves are processed in batch. If this is not enabled, parallel processing will not be used.

Create pending allocation log: Determines whether logging should be done during the parallel processing of pending allocations. This should only be enabled if you need insight into the wave allocation, for example to troubleshoot issues, since it adds extra overhead.

Wait for lock (ms): Determines how long the wave processing should wait to acquire a lock on the item and dimensions above location (this is the logical unit that is locked during wave processing). We recommend that you allow waits of at least a few seconds, since that allows allocation of one logical unit to finish. The setting is in milliseconds.

Wave process methods

The actual configuration of the parallel processing is done on the Wave process methods form found under Warehouse Management > Setup > Waves > Wave process methods.

A new button called Task configuration is enabled for the allocateWave method. This button opens the Wave post method task configuration form.

In this form you can configure how many batch tasks should be used for the allocation in a specific warehouse. If the number is set to 8, a maximum of 8 batch tasks will be used to process the allocation.

Note: The optimal number of batch tasks depends on the infrastructure available and what other batch jobs are being processed on the server. Tests done in a 4-core environment that was dedicated to wave processing showed that 8 tasks led to good results.

Specific batch groups can be used for different warehouses in order to allow the allocation processing to scale out per warehouse.

If a configuration record does not exist for a warehouse, parallel processing will not be used.


Since the batch framework is used, errors that occur during wave processing will be captured as part of the batch jobs’ Infologs. The batch jobs related to a wave can be viewed using the Batch jobs button:

This is what a typical set of batch jobs would look like for a wave:

The first batch job is the one that was initially created when the wave processing began. If parallel execution is used, this job prepares data for the next job.

The second job is the one for the allocation. This job can have multiple tasks, depending on the number of batch tasks configured. The third job is for the rest of the wave processing and has information about the first step after the allocation. This is typically the createWork step, or replenishment if that is enabled.

The wave processing is self-correcting so any error that’s detected during the processing should be handled gracefully and reported using the Infolog.

A typical error related to parallel processing could be that two waves try to allocate the same item at the same time and one does not complete in time, so the other wave is unable to acquire a lock within the specified time. If this situation occurs, the batch job’s log will contain information stating that the lock for the item could not be acquired. If this occurs, the wave that failed needs to be processed again.

Since the processing is happening in parallel, data needs to be maintained in different tables to track the state of the processing. This means that the logs for the batch jobs might contain errors such as duplicate key errors. The screenshot below is an example of such errors where 8 tasks were created and all failed the allocation:

The errors from the batch tasks are also part of the batch jobs log. The most important information is typically at the bottom. In this example the log is telling us that the shipment was not fully allocated.

In rare cases, for example if the SQL connection is dropped, it is possible for the wave processing to end up in an inconsistent state where the batch job appears to be running but the processing has stopped. The wave can’t handle errors like this, so an attempt to clean up failed waves is made when the next wave runs. Alternatively, you can use the Clean up wave data button to clean up the current wave if it is in an inconsistent state.

The pending allocation log

If the Pending allocation logging option is enabled, the data can be viewed in the Wave pending allocation log form by clicking the button. A log record is created every time allocation for an item and its dimensions begins and ends.

Logging should only be enabled if you need it, for example, during initial testing or for troubleshooting.

Welcome Summer! Welcome to the New Newsletter for Techies! (French)

Wed, 07/06/2016 - 03:47

In recent years, development and operations have never been closer. Know-how, technologies, and platforms now span the full spectrum, just like our new newsletter.

Our newsletters for IT professionals and developers are now a single newsletter for all technology enthusiasts, published every two weeks in German, French, and English. Your existing subscription is carried over, so you don’t have to do anything to receive our new format. If you would like to change the language of your newsletter, you can manage your subscription at any time here!

Our newsletter has also been given a makeover to serve you better. We look forward to your feedback! Send us your comments by email or on Twitter at @msdev_ch.

If you like the new format, feel free to share it with your friends and colleagues.

We are happy to count you among our subscribers! Your Microsoft Switzerland DX team.

Hello Summer! Here Is the New Newsletter for Techies! (German)

Wed, 07/06/2016 - 03:45

Lately, development and operations have grown together more closely than ever before. Expertise, technologies, and platforms now span the entire spectrum, just like our newsletter.

Our newsletters for developers and IT professionals have been combined into one central newsletter for all technology enthusiasts, published every two weeks in German, French, and English. Your subscription remains valid, so you don’t need to do anything else to receive our new format. If you would like to switch between the available languages, you can still manage your subscription here.

As you can see, our new layout is now even cleaner; we want the newsletter to be even more enjoyable to read. We look forward to your feedback! Share your comments with us by email or at @msdev_ch.

If you like the new format, we would be delighted if you shared it with your friends and colleagues as well.

Thank you very much for subscribing! Your Microsoft Switzerland DX team.

Welcome Summer! Welcome New Newsletter for Techies!

Wed, 07/06/2016 - 03:43

In the recent past, development and operations have grown closer together than ever before. Know-how, technologies, and platforms now span the full spectrum, and so does our new newsletter.

Our developer and IT professional newsletters are now one newsletter for all technology-interested audiences, released bi-weekly in German, French, and English. Your existing subscription applies, so you don’t have to do anything to get our new format. You can continue to manage your subscription here if you want to switch between the available languages!

You will also notice a new, clean look that we think increases the newsletter’s value for you. We are looking forward to your feedback! Send us your comments by email or share them at @msdev_ch.

If you like the new format, feel free to share it with your friends and colleagues.

Thank you very much for being a subscriber! Your Microsoft Switzerland DX Team.

