
Feed aggregator

Dynamics CRM 2015/Online 2015 Update SDK: Working with Business Process Flows in Form Scripts, Part 2

MSDN Blogs - Mon, 03/02/2015 - 19:00


Continuing from the previous post, this article covers working with business process flows in form scripts, one of the new features in the Microsoft Dynamics CRM 2015 and Microsoft Dynamics CRM Online 2015 Update SDK.

Reference: Write scripts for business process flows


Switching the active business process flow

From the list of processes retrieved with getEnabledProcesses, you switch the flow by passing the process you want to use to the setActiveProcess function.



function switchBPF() {
    // Get the user's security roles
    var userRoles = Xrm.Page.context.getUserRoles();

    // Do nothing if the user has more than one role
    if (userRoles.length > 1)
        return;

    // Get the list of enabled business process flows
    Xrm.Page.data.process.getEnabledProcesses(function (processes) {
        for (var processId in processes) {
            // If the security role is Salesperson
            if (userRoles[0] == "b215f8d4-32ae-e411-80e3-c4346badf6d8") {
                if (processes[processId] == "営業担当者向け業務プロセス")
                    // Switch the business process flow
                    Xrm.Page.data.process.setActiveProcess(processId, function () { });
            }
            // If the security role is Marketing Professional
            else if (userRoles[0] == "ac0df8d4-32ae-e411-80e3-c4346badf6d8") {
                if (processes[processId] == "マーケティング担当者向け業務プロセス")
                    // Switch the business process flow
                    Xrm.Page.data.process.setActiveProcess(processId, function () { });
            }
        }
    });
}



Note: when using setActiveStage, the destination stage must belong to the same entity.

In that case, a different function can be used instead.




To expand or collapse the business process flow control, use the Xrm.Page.ui.process.setDisplayState function. The argument is a string, either "expanded" or "collapsed".


To show or hide the business process flow control, use the Xrm.Page.ui.process.setVisible function. The argument is a Boolean.
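Putting the two together, a form script could collapse and then hide the control like this. This is only a sketch: on a real CRM form the Xrm object is supplied by the platform; the tiny stub below just stands in for it so the snippet is self-contained.

```javascript
// Stub only: a real form provides Xrm; the _state/_visible fields here
// exist purely so the sketch can be run outside CRM.
var Xrm = {
    Page: {
        ui: {
            process: {
                _state: "expanded",
                _visible: true,
                setDisplayState: function (state) { this._state = state; },
                setVisible: function (visible) { this._visible = visible; }
            }
        }
    }
};

// Collapse the business process flow control...
Xrm.Page.ui.process.setDisplayState("collapsed");
// ...and hide it entirely.
Xrm.Page.ui.process.setVisible(false);
```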




- 中村 憲一郎

The Case of Azure Websites and Continuous Deployment

MSDN Blogs - Mon, 03/02/2015 - 18:36
So this post has a misleading title… Because I’m not going to explain how this works: Azure Websites continuous deployment is not anything really new, and it’s well documented here and here. Read more >>

Getting Insights from Data in Real Time : Azure Stream Analytics

MSDN Blogs - Mon, 03/02/2015 - 17:47

Data analytics provides insights into data which help businesses take their products to the next level. There are scenarios where you may want to analyze data in real time, even before it is saved into a database. Making decisions quickly in such scenarios gives you an edge over others and takes the experience of your products to the next level. Also, with the Internet of Things gaining momentum and billions of devices and sensors connected to the Internet, there is a need to process these events in real time and perform appropriate actions.


What is the difference between traditional big data analytics and real time analytics?

To understand the difference between traditional big data analysis and real-time analytics, let's explore the concepts of data at rest and data in motion.

This can be understood through the analogy of water: water in a lake represents static data, while water falling through a waterfall is like data in motion. So here we have to consider analytics with reference to time. Another example: suppose you have to count the number of cars in a parking lot; you can simply count the cars across the entire lot. Now suppose you need to count the number of cars passing a crossing; you have to analyze this data within a window of time, and analyze it in real time. The main idea is that the analytics is carried out without storing the data.


One of the biggest challenges in real-time data analytics is the time, effort, and expertise needed to develop complex real-time analytics solutions. Azure Stream Analytics helps overcome this entry barrier and lets you provision a solution for processing millions of events, using familiar SQL queries, with a few clicks.

Azure Stream Analytics is a fully managed Microsoft Azure service which allows you to provision a streaming solution for complex event processing and analytics within minutes, as we will see in the example in the last section of this article.



Architecture of Azure Stream Analytics is given below:

  1. Input can be a data stream, which currently can come from two sources:
  • Event Hub
  • Azure Storage Blob
  2. Alternatively, input can also come from a reference data source, which could be in a blob
  3. You can write SQL-like queries over windows (discussed below) to perform analysis on this data
  4. You can output this data to a SQL database, an event hub, or a blob
  5. From the SQL database you can build solutions for presentation using Power BI dashboards, or for predictive analysis using Machine Learning
  6. Through the event hub, perform actions with the sensors



 Concept of Windows in Azure Stream Analytics Queries

To be able to write queries for Stream Analytics, you need to understand the concept of windows. There are three different kinds of windows which you can define in your SQL queries. Windows are nothing but time intervals within which events are analyzed.

  •  Tumbling Window

            It’s a fixed-length, non-overlapping time interval.

  •  Hopping Window

            Each window overlaps the previous one by a fixed time interval.

  • Sliding Window

          A sliding window moves continuously with time: a new window is considered whenever an event enters or exits, rather than at fixed intervals.
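To make the distinction concrete, here is a small JavaScript sketch (not Stream Analytics itself; the function names and second-based timestamps are just for illustration) of which windows a given event falls into:

```javascript
// A tumbling window of size s is a hopping window whose hop equals its
// size, so each event falls into exactly one tumbling window but into
// size/hop overlapping hopping windows.
function tumblingWindowStart(ts, size) {
    return Math.floor(ts / size) * size;
}

function hoppingWindowStarts(ts, size, hop) {
    // All window start times whose [start, start + size) interval covers ts.
    var starts = [];
    var first = Math.floor(ts / hop) * hop;
    for (var s = first; s > ts - size; s -= hop) {
        starts.unshift(s);
    }
    return starts;
}
```

A sliding window would instead be re-evaluated at every event, so its output is driven by event arrival rather than a fixed grid.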


Getting Started with Azure Stream Analytics

Azure Stream Analytics is currently in Public Preview. To be able to try it, you must have an Azure subscription. In case you don’t have an Azure subscription, you can sign up for a free one-month trial here.

 We will implement a sample toll booth: cars continuously enter and exit the toll. We will assume sensors have been installed at the entry and the exit which continuously send data to event hubs, and that vehicle registration data is available for lookup. Using Stream Analytics, we will calculate the number of cars that pass through this toll in each time window, using the data available from the input stream. Then we will calculate the average time taken by a car at the toll; this analysis can help increase the efficiency of the toll booth.

To get started you would need to provision the following:

  1. Event Hubs: You would need to provision two event hubs, “Entry” and “Exit”
  2. SQL Database: You would provision a SQL database for storing the output results from Stream Analytics jobs
  3. Azure Storage: To store reference data about vehicle registration.
In case you are new to Azure, detailed steps for setting up the above can be found here.

Create Stream Analytics Job

  1. In the Azure Portal, navigate to Stream Analytics and click the “New” button at the bottom to create a new analytics job. Currently the service is in preview and is available in limited regions



2. Choose “Quick Create”, and select either “West Europe” or “Central US” as the region. For the regional monitoring storage account, create a new storage account. Azure Stream Analytics will use this account to store monitoring information for all your future jobs.


 3. Define Input sources.

3.1 We need to define input sources for Stream Analytics; we will be using event hubs for input.

Steps to add Input Sources

  1. Click on the created Stream Analytics job, then click on the Input tab
  2. Select “Data Stream” as the input type.


3. Select “Event Hub” as the input source

4. Add the input alias “EntryStream”. Choose the event hub name you created for this demo from the dropdown

5. Move to the next page and keep the default values for all serialization settings on this page.

6. Repeat the above steps to create an “ExitStream” input, choosing the “exit” event hub this time

 3.2 Adding Vehicle Registration Data as a reference input source

Steps to be followed

  1. Click on Add Input at bottom.


2. Add reference data to your input job


3. Select the storage account you created while setting up the lab environment. The container name should be “tolldata” and the blob name should be “registration.csv”


4. Keep the default serialization settings and click OK


      3.3 Output Data

  1. Go to the “Output” tab and click “Add an output”


  2. Choose “SQL Database”, then from the dropdown choose the SQL database that you created while setting up the lab. Enter the username and password for this server.

Table name would be “TollDataRefJoin”





In the Query tab you can write, in familiar SQL syntax, the query that will perform the transformation over the incoming data streams.


Download the TollData sample and extract it to your local machine. It contains the following files:

1. Entry.json

2. Exit.json

3. Registration.json


 Now we will attempt to answer several business questions related to toll data and construct Stream Analytics queries that can be used in Azure Stream Analytics to provide a relevant answer.


For testing this query we upload sample data representing data from a stream. You can find this sample JSON data file in TollData zip folder located here.


  1. Open the Azure Management portal and navigate to the Stream Analytics job you created. Open the Query tab and copy-paste the query below


SELECT TollId, System.Timestamp AS WindowEnd, COUNT(*) AS Count
FROM EntryStream TIMESTAMP BY EntryTime
GROUP BY TUMBLINGWINDOW(minute, 3), TollId


 To validate this query against sample data, click the Test button. In the dialog that opens, navigate to Entry.json (downloaded to your local system in the Data folder), which contains sample data from the EntryTime event stream.



We want to find the average time required for a car to pass the toll, to assess efficiency and customer experience.


SELECT EntryStream.TollId, EntryStream.EntryTime, ExitStream.ExitTime,
       EntryStream.LicensePlate,
       DATEDIFF(minute, EntryStream.EntryTime, ExitStream.ExitTime) AS DurationInMinutes
FROM EntryStream TIMESTAMP BY EntryTime
JOIN ExitStream TIMESTAMP BY ExitTime
ON (EntryStream.TollId = ExitStream.TollId
    AND EntryStream.LicensePlate = ExitStream.LicensePlate)
    AND DATEDIFF(minute, EntryStream, ExitStream) BETWEEN 0 AND 15


Click test and specify sample input files for EntryTime and ExitTime.

Click the checkbox to test the query and view output:



Azure Stream Analytics can use static snapshots of data to join with temporal data streams. To demonstrate this capability we will use the following sample question.


If a commercial vehicle is registered with the toll company, it can pass through the toll booth without being stopped for inspection. We will use the commercial vehicle registration lookup table to identify all commercial vehicles with expired registrations.

Note that testing a query with Reference Data requires that an input source for the Reference Data is defined.

To test this query, paste the query into the Query tab, click Test, and specify the two input sources.

 The query is as follows:

SELECT EntryStream.EntryTime, EntryStream.LicensePlate, EntryStream.TollId,
       Registration.RegistrationId
FROM EntryStream TIMESTAMP BY EntryTime
JOIN Registration
ON EntryStream.LicensePlate = Registration.LicensePlate
WHERE Registration.Expired = '1'



Now as we have written our first Azure Stream Analytics query, it is time to finish the configuration and start the job.

 Save the query from Question 3, which will produce output that matches the schema of our output table TollDataRefJoin.

 Navigate to the job Dashboard and click Start.



 Starting the job can take a few minutes. You will be able to see the status on the top-level page for Stream Analytics.


 View the table data in the SQL database to see the results of the above query :)




SQL Server Data Tools and Data-Tier Application Framework Update for February 2015

MSDN Blogs - Mon, 03/02/2015 - 16:17

The SQL Server Data Tools team is pleased to announce that an update for SQL Server Data Tools in Visual Studio and the Data-Tier Application Framework (DACFX) is now available.

Get it here:

SQL Server Data Tools:

Data-Tier Application Framework (DACFX):

What’s New?

Support for the latest Azure SQL Database V12 features

Many features were added to Azure SQL Database in V12, and now the developer tools in Visual Studio support V12.

Improved Cross-Platform Schema Comparison

Previously, schema compare required you to select the "allow incompatible platforms" option when comparing a source to a target that supports fewer objects, such as when the source is SQL Server 2014 and the target is SQL Server 2008. Now you can compare anything without selecting that option. If any compatibility issues exist, you'll be notified when you attempt to update the target. Note, though, that incompatible DML, such as procedure bodies, won't be identified. Look for that feature in a future release.

New advanced publish options

We've added new options to increase your control over publishing, including the ability to not drop users. For more details, click here.

Bug fixes to customer-reported issues

This release includes fixes for the following issues:

Contact Us

If you have any questions or feedback, please visit our forum or Microsoft Connect page.  We look forward to hearing from you.

Security descriptors, part3: raw descriptors and PowerShell

MSDN Blogs - Mon, 03/02/2015 - 15:48

<< Part 2

In this part I'll get to the manipulation of the security descriptors with PowerShell. I'll deal with the code a bit differently than in the previous part: the whole code is attached as a file, and in the post I show only examples of its use and highlights of the implementation.

As I've mentioned in Part 1, there are multiple formats used for the descriptors. I needed to deal with the security descriptors for the ETW logger sessions. These descriptors happen to be stored in the serialized binary format, and I wanted to be able to print, manipulate, and put them back. That first of all means converting between the binary format, something human-readable for convenient reading, SDDL, and the .NET classes. If you remember Part 1, there are two kinds of classes, Raw and Common; Common is the more high-level one but chokes on ACLs that are not in the canonical form. Whenever possible (i.e. whenever their APIs are the same) I've tried to support both classes, but where they differ, I went with only the Raw class for the first cut. Since you never know what might be in the descriptors you read, choking on a non-canonical descriptor would be a bad thing.

First, let me show how to get the security descriptor for a particular ETW session defined in the Autologger. It starts with finding the GUID of the session:

PS C:\windows\system32> $regwmi = "hklm:\SYSTEM\CurrentControlSet\Control\WMI"
PS C:\windows\system32> $session = "SetupPlatform"
PS C:\windows\system32> $guid = (Get-ItemProperty -LiteralPath "$regwmi\Autologger\$session" -Name Guid -ErrorAction SilentlyContinue).Guid -replace "^{(.*)}",'$1'
PS C:\windows\system32> $guid

The curly braces around the GUID had to be removed to fit the next step. The session GUIDs seem to be pretty stable for the pre-defined sessions, though if the sessions change much between the Windows versions, the GUIDs would also change. The same approach works for the ETW providers, just get the GUID, and from there on managing the permissions for a provider is the same as for a session.

The next step is to get the descriptor itself; it's stored in the registry in a serialized byte format:

PS C:\windows\system32> $bytes = (Get-Item "$regwmi\Security").GetValue($guid)
PS C:\windows\system32> $bytes
PS C:\windows\system32>

Uh-oh, it's empty. Well, if there is no explicit descriptor for a GUID, a default descriptor is used, one with the GUID 0811c1af-7a07-4a06-82ed-869455cdf713:

PS C:\windows\system32> if ($bytes -eq $null) { $bytes = (Get-Item "$regwmi\Security").GetValue("0811c1af-7a07-4a06-82ed-869455cdf713") }
PS C:\windows\system32> $bytes
...(prints a lot of bytes)...

Got the data. Now let's make sense of it by converting it to a raw descriptor object using a function from the attached module:

PS C:\windows\system32> Import-Module Security.psm1
PS C:\windows\system32> $sd = ConvertTo-RawSd $bytes
PS C:\windows\system32> $sd
ControlFlags           : DiscretionaryAclPresent, SelfRelative
Owner                  : S-1-5-32-544
Group                  : S-1-5-32-544
SystemAcl              :
DiscretionaryAcl       : {System.Security.AccessControl.CommonAce, System.Security.AccessControl.CommonAce,
                         System.Security.AccessControl.CommonAce, System.Security.AccessControl.CommonAce...}
ResourceManagerControl : 0
BinaryLength           : 236

Just like the functions for the principals that I've shown in Part 2, ConvertTo-RawSd takes the descriptor in whatever acceptable format (a byte array, an SDDL string or another security descriptor object) and converts it to the Raw object. Incidentally, it can be used to copy the existing descriptor objects. Under the hood it works like this:

    if ($Sd -is [System.Security.AccessControl.GenericSecurityDescriptor]) {
        $Sd = (ConvertTo-BytesSd $Sd).Bytes
    }

    if ($Sd -is [Byte[]]) {
        New-Object System.Security.AccessControl.RawSecurityDescriptor @($Sd, 0)
    } elseif ($Sd -is [string]) {
        New-Object System.Security.AccessControl.RawSecurityDescriptor @($Sd)
    }

Just writing the $sd to the output gives some idea of the contents. It can also be printed as an SDDL string with another helper function:

PS C:\windows\system32> ConvertTo-Sddl $sd
PS C:\windows\system32> ConvertTo-Sddl $bytes

As you can see, it also accepts the descriptor in whatever format and converts it to SDDL. Under the hood, it works by first converting its argument to the raw descriptor and then getting the SDDL form from it.


To print the ACL information in a more human-readable form, you can use:

PS C:\windows\system32> ConvertTo-PrintSd $sd
Owner: BUILTIN\Administrators
Group: BUILTIN\Administrators
 AccessAllowed Everyone 0x00000800 [None]
 AccessAllowed NT AUTHORITY\SYSTEM 0x00120fff [None]
 AccessAllowed NT AUTHORITY\LOCAL SERVICE 0x00120fff [None]
 AccessAllowed NT AUTHORITY\NETWORK SERVICE 0x00120fff [None]
 AccessAllowed BUILTIN\Administrators 0x00120fff [None]
 AccessAllowed BUILTIN\Performance Log Users 0x00000ee5 [None]
 AccessAllowed BUILTIN\Performance Monitor Users 0x00000004 [None]

PS C:\windows\system32> ConvertTo-PrintSd $sd -Trace
Owner: BUILTIN\Administrators
Group: BUILTIN\Administrators
 AccessAllowed Everyone 0x00000800=RegisterGuids [None]
 AccessAllowed NT AUTHORITY\SYSTEM 0x00120fff=Query, Set, Notification, ReadDescription, Execute, CreateRealtime, CreateOndisk, Enable, AccessKernelLogger, LogEvent, AccessRealtime, RegisterGuids, FullControl [None]
 AccessAllowed NT AUTHORITY\LOCAL SERVICE 0x00120fff=Query, Set, Notification, ReadDescription, Execute, CreateRealtime, CreateOndisk, Enable, AccessKernelLogger, LogEvent, AccessRealtime, RegisterGuids, FullControl [None]
 AccessAllowed NT AUTHORITY\NETWORK SERVICE 0x00120fff=Query, Set, Notification, ReadDescription, Execute, CreateRealtime, CreateOndisk, Enable, AccessKernelLogger, LogEvent, AccessRealtime, RegisterGuids, FullControl [None]
 AccessAllowed BUILTIN\Administrators 0x00120fff=Query, Set, Notification, ReadDescription, Execute, CreateRealtime, CreateOndisk, Enable, AccessKernelLogger, LogEvent, AccessRealtime, RegisterGuids, FullControl [None]
 AccessAllowed BUILTIN\Performance Log Users 0x00000ee5=Query, Notification, CreateRealtime, CreateOndisk, Enable, LogEvent, AccessRealtime, RegisterGuids [None]
 AccessAllowed BUILTIN\Performance Monitor Users 0x00000004=Notification [None]

 As usual, it accepts the descriptors in whatever format. It shows the owner and group, and the discretionary ACL that controls access to the object. Each ACL entry contains the type of the entry (allowed/denied), the name of the principal, the permissions bitmask, and the inheritance flags in square brackets (since there is no inheritance for the ETW stuff, all of them are [None] here). With a bit of extra help, it can print the permission bits in symbolic form as well. Remember, each kind of object that uses ACLs has its own meaning for the same bits in the mask, so to print these bits symbolically, you've got to tell it which kind of object this security descriptor applies to. The switch -Trace selects the ETW objects, and the other supported switches are -File, -Registry, -Crypto, -Event, -Mutex, and -Semaphore.

Suppose now, we want to check the permissions granted to BUILTIN\Administrators. Another helper function to the rescue:

PS C:\windows\system32> $allowMask, $denyMask = Get-SidAccess "BUILTIN\Administrators" $sd
PS C:\windows\system32> "{0:X8}" -f $allowMask
PS C:\windows\system32> "{0:X8}" -f $denyMask

Like the other functions, it takes a principal (user or group) and a descriptor in whatever format. It returns two values: the permission bits that are allowed and those that are denied. Since the denials are usually applied first, if you want to check which bits are really allowed, you need to do a bit more masking:

PS C:\windows\system32> $allowMask = $allowMask -band -bnot $denyMask

Internally, Get-SidAccess works by going through all the entries in the ACL and collecting two masks from all the entries with the matching principal (i.e. SID).
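That accumulation logic is easy to sketch in JavaScript (the ACE shape here is hypothetical, not the module's actual .NET types): walk the ACL, OR together the masks of entries whose SID matches, and then compute the effective access by masking off the denied bits.

```javascript
// Each ACE: { sid: string, type: "Allow" | "Deny", mask: number }
function getSidAccess(acl, sid) {
    var allow = 0, deny = 0;
    for (var i = 0; i < acl.length; i++) {
        if (acl[i].sid !== sid) continue;
        if (acl[i].type === "Allow") allow |= acl[i].mask;
        else deny |= acl[i].mask;
    }
    // Denials win, so the effective mask removes the denied bits.
    return { allow: allow, deny: deny, effective: allow & ~deny };
}
```

The `allow & ~deny` step mirrors the extra masking shown above.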

And you can grant a permission:

PS C:\windows\system32> Grant-To -Sid "NT SERVICE\EventLog" -Mask 0xE3 -Object $sd
PS C:\windows\system32> ConvertTo-PrintSd $sd
Owner: BUILTIN\Administrators
Group: BUILTIN\Administrators
 AccessAllowed Everyone 0x00000800 [None]
 AccessAllowed NT AUTHORITY\SYSTEM 0x00120fff [None]
 AccessAllowed NT AUTHORITY\LOCAL SERVICE 0x00120fff [None]
 AccessAllowed NT AUTHORITY\NETWORK SERVICE 0x00120fff [None]
 AccessAllowed BUILTIN\Administrators 0x00120fff [None]
 AccessAllowed BUILTIN\Performance Log Users 0x00000ee5 [None]
 AccessAllowed BUILTIN\Performance Monitor Users 0x00000004 [None]
 AccessAllowed NT SERVICE\EventLog 0x000000e3 [None]

Grant-To works only with the Raw descriptors or Raw ACLs as the object. Unlike the other functions, the object (or possibly multiple objects in a list) gets modified in place. Well, maybe it will get extended to produce new objects too, but for now it has this limitation. Grant-To is smart enough to add a new ACE if it doesn't find an existing one. And it's smart enough to automatically remove these bits from the Deny ACL if one is present. It can also be used to revoke the permissions:

PS C:\windows\system32> Grant-To -Revoke -Sid "NT SERVICE\EventLog" -Mask 0x3 -Object $sd
PS C:\windows\system32> ConvertTo-PrintSd $sd
Owner: BUILTIN\Administrators
Group: BUILTIN\Administrators
 AccessAllowed Everyone 0x00000800 [None]
 AccessAllowed NT AUTHORITY\SYSTEM 0x00120fff [None]
 AccessAllowed NT AUTHORITY\LOCAL SERVICE 0x00120fff [None]
 AccessAllowed NT AUTHORITY\NETWORK SERVICE 0x00120fff [None]
 AccessAllowed BUILTIN\Administrators 0x00120fff [None]
 AccessAllowed BUILTIN\Performance Log Users 0x00000ee5 [None]
 AccessAllowed BUILTIN\Performance Monitor Users 0x00000004 [None]
 AccessAllowed NT SERVICE\EventLog 0x000000e0 [None]

And to deny the permissions:

PS C:\windows\system32> Grant-To -Deny -Sid "NT SERVICE\EventLog" -Mask 0xc0 -Object $sd
PS C:\windows\system32> ConvertTo-PrintSd $sd
Owner: BUILTIN\Administrators
Group: BUILTIN\Administrators
 AccessDenied NT SERVICE\EventLog 0x000000c0 [None]
 AccessAllowed Everyone 0x00000800 [None]
 AccessAllowed NT AUTHORITY\SYSTEM 0x00120fff [None]
 AccessAllowed NT AUTHORITY\LOCAL SERVICE 0x00120fff [None]
 AccessAllowed NT AUTHORITY\NETWORK SERVICE 0x00120fff [None]
 AccessAllowed BUILTIN\Administrators 0x00120fff [None]
 AccessAllowed BUILTIN\Performance Log Users 0x00000ee5 [None]
 AccessAllowed BUILTIN\Performance Monitor Users 0x00000004 [None]
 AccessAllowed NT SERVICE\EventLog 0x00000020 [None]

The difference between -Revoke and -Deny is that -Revoke simply removes the permissions from the Allow mask, while -Deny also adds them to the Deny mask. Note that the Deny entries get added in the front, to keep the order of the entries canonical. The bits that get set in the Deny mask, get reset in the Allow mask. The bits can be revoked from the Deny mask as well:
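The bookkeeping described above can be sketched as plain bitmask updates (a hypothetical model of one principal's Allow/Deny masks, not the module's real code):

```javascript
// masks = { allow: number, deny: number } for one principal.
function updateMasks(masks, bits, opts) {
    if (opts.revoke) {
        if (opts.deny) masks.deny &= ~bits;   // -Revoke -Deny: clear Deny bits
        else masks.allow &= ~bits;            // -Revoke: clear Allow bits
    } else if (opts.deny) {
        masks.deny |= bits;                   // -Deny sets the Deny bits...
        masks.allow &= ~bits;                 // ...and resets them in the Allow mask
    } else {
        masks.allow |= bits;                  // plain grant
        masks.deny &= ~bits;                  // granted bits leave the Deny mask
    }
    return masks;
}
```

Replaying the walkthrough (grant 0xE3, revoke 0x03, deny 0xC0, then revoke 0xF0 from the Deny mask) reproduces the 0x20 Allow mask and the empty Deny mask shown in the listings.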

PS C:\windows\system32> Grant-To -Revoke -Deny -Sid "NT SERVICE\EventLog" -Mask 0xF0 -Object $sd
PS C:\windows\system32> ConvertTo-PrintSd $sd
Owner: BUILTIN\Administrators
Group: BUILTIN\Administrators
 AccessAllowed Everyone 0x00000800 [None]
 AccessAllowed NT AUTHORITY\SYSTEM 0x00120fff [None]
 AccessAllowed NT AUTHORITY\LOCAL SERVICE 0x00120fff [None]
 AccessAllowed NT AUTHORITY\NETWORK SERVICE 0x00120fff [None]
 AccessAllowed BUILTIN\Administrators 0x00120fff [None]
 AccessAllowed BUILTIN\Performance Log Users 0x00000ee5 [None]
 AccessAllowed BUILTIN\Performance Monitor Users 0x00000004 [None]
 AccessAllowed NT SERVICE\EventLog 0x00000020 [None]

Since the Deny mask became 0, Grant-To was smart enough to remove it altogether. It's also possible to just set the mask instead of manipulating the individual bits:

PS C:\windows\system32> Grant-To -Sid "NT SERVICE\EventLog" -SetMask 0x0F -Object $sd
PS C:\windows\system32> ConvertTo-PrintSd $sd
Owner: BUILTIN\Administrators
Group: BUILTIN\Administrators
 AccessAllowed Everyone 0x00000800 [None]
 AccessAllowed NT AUTHORITY\SYSTEM 0x00120fff [None]
 AccessAllowed NT AUTHORITY\LOCAL SERVICE 0x00120fff [None]
 AccessAllowed NT AUTHORITY\NETWORK SERVICE 0x00120fff [None]
 AccessAllowed BUILTIN\Administrators 0x00120fff [None]
 AccessAllowed BUILTIN\Performance Log Users 0x00000ee5 [None]
 AccessAllowed BUILTIN\Performance Monitor Users 0x00000004 [None]
 AccessAllowed NT SERVICE\EventLog 0x0000000f [None]

When setting the mask, it still updates the opposite mask to be sensible. The switch -Simple can be used with -SetMask (but not with -Mask) to skip this smartness.

As you can see, Grant-To is pretty smart in dealing with many things. One thing it's not smart enough to handle is the inheritance bits in the ACL entries. It has no way to specify them, and just leaves them as-is in the entries it modifies. The use of inheritance also means that there might be multiple Allow entries and multiple Deny entries on the same ACL for the same principal, with the different inheritance flags. Grant-To can't handle this well either. It only knows about the first entry of either type. Since I was really interested in the ETW objects that don't use the inheritance, I didn't care much for now.

It also can't deal with the Common security descriptors (other than through a manual conversion to the Raw descriptors and back). The Common descriptors have their own similar functionality, their ACLs are represented with the class DiscretionaryAcl that doesn't allow the direct messing with it but has the methods SetAccess() and RemoveAccess() that do the similar thing, and do support the inheritance bits, but with the less obvious arguments.

Under the hood, Grant-To iterates over the ACL entries, finding the interesting ones:

if ($acl[$i].SecurityIdentifier -eq $sid) {
    if ($acl[$i].AceQualifier -eq $qual) {
        # ... collect or update this entry's mask ...
    }
}

If the ACE is not found, it gets inserted (yes, it's a CommonAce object even in the Raw ACL):

$ace = New-Object System.Security.AccessControl.CommonAce @(0, $qual[$j], 0, $sid, $false, $null)
$acl.InsertAce($idx, $ace)

If the mask in an ACE becomes 0, the ACE gets deleted.


And the mask in the ACEs gets manipulated directly:

$ace.AccessMask = $ace.AccessMask -bor $Mask

Returning back to the task of modifying the descriptors for the ETW sessions, the last step is to set the modified ACLs:

PS C:\windows\system32> $bytes = (ConvertTo-BytesSd $sd).Bytes
PS C:\windows\system32> Set-ItemProperty -LiteralPath "$regwmi\Security" -Name $guid -Value $bytes -Type BINARY

ConvertTo-BytesSd converts the descriptor to the serialized bytes, from whatever format (including the Common descriptor object). It uses an interesting way to return the value: it needs to return a value of type "Byte[]", but any array returned from a PowerShell function gets disassembled and reassembled as an array of type "Object[]", which then doesn't work right. To work around that, it returns a hashtable with the array of bytes in an entry:

@{ Bytes = $o; }

And then the caller extracts the value from the hashtable unmolested, and the byte array can be set in the registry.

If you wonder what's magical about the bitmask 0xE3: it's the value that allows you to query the information about a running ETW session and modify its modes.

<< Part 2

Predictive models and Azure Machine Learning for EnviroHack 2015

MSDN Blogs - Mon, 03/02/2015 - 15:02

Data, data everywhere, but what’s the use?

There are great efforts across many research disciplines to make data available for reuse, and environmental science is one area in which this is very successful. But how do you make all this data discoverable, usable and actionable? Dozens of scientists, developers, and business people were wowed by the spectacular view at the Digital Catapult Centre in London as they gathered to see if they could figure out how to unleash the potential of data at EnviroHack2015.

There were plenty of datasets to choose from, including those of Ordnance Survey, Met Office, Environment Agency, UK Data Service, British Oceanographic Data Centre, Centre for Ecology and Hydrology, British Atmospheric Data Centre, and National Biodiversity Network - including 100 years of plant and insect species data. We at Microsoft Research provided several datasets through our FetchClimate service, developed by our Computational Science Lab. Our partners Shoothill also provided FloodAlerts and Gaugemap flood information and APIs.

We were very excited to be taking part, with Matthew Smith from our Cambridge (UK) lab getting stuck in with several teams brainstorming how to link up and make best use of data. Our dynamic data science duo from Microsoft UK, Amy Nicholson and Andrew ‘@deepfat’ Fryer, enthralled the audience with a whistle-stop tour of Azure Machine Learning and stimulated lots of discussion on how this could be used in action.

After lots of talking and planning, the hackathon teams got to work in earnest, driven by pizza and adrenalin! The Solar Checker team had a great idea to help people make the most of their solar panel installations. Tom August, from the Centre for Ecology and Hydrology explains, “Our goal was to build an app that helps home owners with solar panels to make the most of the green energy that they produce. The current problem is that you may not know when it is best to switch on your dishwasher or washing machine to make the most of the electricity you are producing. To be able to make this decision users need to be able to predict the power generation hours in advance, something that is not currently possible.”

The team took the bit between their teeth and decided to try to build a predictive model, something that would usually take days to weeks to put together.  Tom continues, “We used simulated data and Azure ML to help solve this problem. Using our simulated data of weather from the Met Office and power output from our solar panel, we were able to quickly build a workflow in Azure ML that read in our data and trained a model to predict the power produced by the solar panel simply using the day of the year, time of day and cloud cover. At first our model performed pretty badly. We used a linear regression model, and since the effect of time of day and day of year on the amount of sun was non-linear (it had a sine-wave form), the model couldn’t estimate our power output well.”
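(A common workaround for exactly this problem, for what it's worth, is to encode cyclic inputs as sine/cosine pairs so that even a linear model can fit a daily cycle. A small illustrative sketch, not part of the team's actual workflow:)

```javascript
// Encode hour-of-day (0-23) as a point on the unit circle, so that
// 23:00 and 00:00 end up close together and a linear model can fit
// a sinusoidal daily pattern from the two resulting columns.
function encodeHour(hour) {
    var angle = 2 * Math.PI * hour / 24;
    return { sin: Math.sin(angle), cos: Math.cos(angle) };
}
```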

Not quite the auspicious start the team hoped for, but they persevered, and Azure ML really started to come into its own. “Next we tried a neural network model, and plugged this into our existing workflow. Using the evaluation module we were able to view the results of the models side by side. Being an R programmer I wanted to plot the results to get a handle on how good the model was. This was really easy: I just dragged over the R module, plugged it in, and within a minute or two I had the plot I wanted.”

“The final model seemed to accurately predict the power output even given the random noise I had added to the simulated data, and we published the model as an API. Our app then used up-to-date weather predictions from the Met Office to predict power output over the next 12 hours. We hope that these sorts of systems will exist in smart homes of the future, allowing smart appliances to schedule their jobs to make the most of the energy the house is producing.”

Kudos to Tom and the team for coming up with such a smart predictive app in such a short time, and it was great to see them win the ‘Advanced Analytics’ and ‘Best R Solution’ Awards! There are great prospects for some of the other teams to take Azure Machine Learning forward, such as recommending scientific linked datasets to users, and the overall winning Jelly Swarm project to predict jellyfish blooms off the coast of the UK and globally. And we hope that you find it as easy as Tom did for your own work - “Azure ML was easy and intuitive to use and allowed us to mock up a work flow, and build a model, in a couple of hours.”



Notes from my 2/27 Office Hours

MSDN Blogs - Mon, 03/02/2015 - 14:16

Thanks to those of you who joined my office hours last Friday!

I’ve noted some of the more direct questions asked during our time, and have included some information below to help answer them.


Q: When will TFS 2015 come out?

A: The first TFS 2015 CTP is out now, actually. It dropped on 2/23. It brings in several new capabilities, as well as some that have been a part of Visual Studio Online for a while now. Some of these include: better license alignment with VSO, REST APIs, service hooks (yeah!), text filtering on backlogs, and folder history.

Q: Can I use Release Management to deploy to an Azure website from Visual Studio Online?

A: Why not? Rather than re-invent the wheel in a more detailed reply, I'll direct you to the great Donovan Brown's blog post on this very topic.

Q: What is a Service Hook?

A: Great question! A service hook is an integration point with VSO (and soon TFS) that allows you to perform tasks in other services or applications when something happens in VSO. A while ago I provided a basic example of a service hook which would allow you to post to a Team Room via email.

There are quite a few services which already leverage service hooks, such as Jenkins, Azure Service Bus, ZenDesk, HipChat, and Campfire (full list here).

Service hooks can also be used in custom applications and services, too! For more information on how to integrate service hooks into your app, start here.


So thanks again for joining me today. I hope to chat with more of you in a couple weeks!

How It Works: MAX DOP Level and Parallel Index Builds

MSDN Blogs - Mon, 03/02/2015 - 13:53

I have been working on an issue where rebuilding an index leads to additional fragmentation.   Using XEvents I debugged the page allocations and writes and was able to narrow in on the behavior.

There are lots of factors to take into account when rebuilding the index.   I was able to break down the behavior to the worst possible case using a single file database, single heap table,  SORT IN TEMPDB and packing of the heap data to the beginning of the database file when create clustered index is issued.
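The worst-case setup described above can be sketched as follows; the table and column names here are hypothetical, since the post does not include its actual repro script:

```sql
-- Hypothetical repro sketch: a heap in a single-file database,
-- then a parallel clustered index build with SORT IN TEMPDB.
CREATE TABLE dbo.tblTest
(
    KeyCol  INT       NOT NULL,
    Payload CHAR(500) NOT NULL
);

-- ... load the heap with data here ...

-- MAX DOP = 2 build: each worker spools its assigned key range
-- through its own CBulkAllocator, producing the leap-frog layout.
CREATE CLUSTERED INDEX cix_tblTest
    ON dbo.tblTest (KeyCol)
    WITH (SORT_IN_TEMPDB = ON, MAXDOP = 2);
```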

When the index is built, a portion of the data (a key range) is assigned to each of the parallel workers.  The diagram below shows a MAX DOP = 2 scenario.

Each parallel worker is assigned its own CBulkAllocator when saving the final index pages.   This means Worker 1 gets an extent and starts to fill pages from TEMPDB for Worker 1’s given key range.   Worker 2 is executing in parallel and has its own CBulkAllocator.  Worker 2 acquires the next extent and starts to spool the assigned key range.

Looking at the database a leap frog behavior of values, across extents occurs as the workers copy the final keys into place.

The diagram below shows the leap frog behavior from a MAX DOP = 4 index creation.   The saw tooth line represents the offsets in the file as read during an index order scan.  The horizontal axis is the event sequence and the vertical axis is the offset in the database file.  As you can see the leap frog behavior places key values all over the file.

Key 1 is at a low offset but Key 2 is at an offset higher than Key 9 as shown in the example above.  Each of the workers spreads 1/4th of the data across the entire file instead of packing the key values together in a specific segment of the file.

In comparison, a serial index build shows the desired layout across the drive.   Smaller offsets hold the first set of keys and larger offsets always hold higher key values.

This mattered to my customer because after a parallel index build an index ordered scan takes longer than a serial index build.  The chart below shows the difference in read size and IOPS requirements.

select count_big(*) from tblTest (NOLOCK)

                 Serial Built    Parallel Built
Avg Read Size    ~508K           ~160K
# Reads          fewer           much higher
SQL Server reads up to 512K in a chunk for read ahead behavior.   When doing an index order scan we read the necessary extents to cover the key range.  Since the key range is leap-frogged during the parallel build, the fragmentation limits SQL Server’s I/O size to 160K instead of 508K and drives the number of I/O requests much higher.  The same data in a serial built index maximizes the read ahead capabilities of SQL Server.

The testing above was conducted using:  select count_big(*) from tblTest with (NOLOCK)

Hint: You don’t have to rebuild the index in serial to determine how much of a performance gain it may provide.   Using WITH(NOLOCK, INDEX=0) forces an allocation order scan, ignoring the key placement and scanning the object from first IAM to last IAM order.  Leveraging the statistics I/O, XEvents and virtual file statistics output you are able to determine the behaviors.
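Using the post’s tblTest example, the two scan types can be compared side by side with statistics I/O enabled:

```sql
SET STATISTICS IO ON;

-- Index order scan: follows key order, so leap-frogged extents
-- cap the read-ahead size and drive the number of reads up.
SELECT COUNT_BIG(*) FROM tblTest WITH (NOLOCK);

-- Allocation order scan: ignores key placement and scans from
-- first IAM to last IAM, approximating the defragmented I/O pattern.
SELECT COUNT_BIG(*) FROM tblTest WITH (NOLOCK, INDEX = 0);
```

Comparing the logical reads and read-ahead reads reported for the two statements shows how much the fragmentation is costing the index order scan.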

The obvious question: a serial index rebuild can take a long time, so what should I do to leverage parallel index builds while reducing the possibility of fragmentation?

1. Partition the table on separate files matching the DOP you are using to build the index.  This allows better alignment of parallel workers to specific partitions, avoiding the leap frog behavior.

2. For a non-partitioned table, aligning the number of files with the DOP may be helpful.   With a reasonably even distribution of free space in each file, the allocation behavior is such that like key values will be placed near each other.

3. For single partition rebuild operations consider serial index building behaviors to minimize fragmentation behaviors.
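A serial rebuild (option 3 above) can be expressed with a MAXDOP hint; the index and table names here are hypothetical:

```sql
-- A single worker fills extents in key order, avoiding the
-- leap-frog allocation pattern at the cost of a longer build.
ALTER INDEX cix_tblTest ON dbo.tblTest
    REBUILD WITH (MAXDOP = 1, SORT_IN_TEMPDB = ON);
```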

I am working with the development team to evaluate the CBulkAllocator behavior.   Testing is needed but it could be that the CBulkAllocator attempts to acquire 9 (64K) extents to align with the read ahead (512K) chunk size.   Something like this idea could reduce the fragmentation by a factor of 8.

Bob Dorr - Principal SQL Server Escalation Engineer

Running SQL Server on Machines with More Than 8 CPUs per NUMA Node May Need Trace Flag 8048

MSDN Blogs - Mon, 03/02/2015 - 11:39

Applies To:  SQL 2008, 2008 R2, 2012 and 2014 releases

Note:  The number of CPUs is the logical count, not sockets.   If more than 8 logical CPUs are presented this post may apply.

The SQL Server developer can elect to partition memory allocations at different levels based on what the memory is used for.   The developer may choose a global, CPU, Node, or even worker partitioning scheme.   Several of the allocation activities within SQL Server use the CMemPartitioned allocator.  This partitions the memory by CPU or NUMA node to increase concurrency and performance.

You can picture CMemPartitioned like a standard heap (it is not a HeapCreate) but the concept is the same.  When you create a heap you can specify if you want synchronized access, default size and other attributes.   When the SQL Server developer creates a memory object they indicate that they want things like thread safe access, the partitioning scheme and other options.

The developer creates the object so when a new allocation occurs the behavior is upheld.  On the left is a request from a worker against a NODE based memory object.  This will use a synchronization object (usually CMEMTHREAD or SOS_SUSPEND_QUEUE type) at the NODE level to allocate memory local to the worker’s assigned NUMA node.   On the right is an allocation against a CPU based memory object.  This will use a synchronization object at the CPU level to allocate memory local to the worker’s CPU.

In most cases the CPU based design reduces synchronization collisions the most because of the way SQL OS handles logical scheduling.  Preemptive and background tasks make collisions possible but CPU level reduces the frequency greatly.  However, going to CPU based partitioning means more overhead to maintain individual CPU access paths and associated memory lists.  

The NODE based scheme reduces the overhead to the # of nodes but can slightly increase the collision possibilities and may impact ultimate performance results for very specific scenarios.  I want to caution you that the scenarios encountered by Microsoft CSS have been limited to very specific scopes and query patterns.


Newer hardware with multi-core CPUs can present more than 8 CPUs within a single NUMA node.  Microsoft has observed that when you approach and exceed 8 CPUs per node the NODE based partitioning may not scale as well for specific query patterns.   However, using trace flag 8048 (startup parameter only requiring restart of the SQL Server process) all NODE based partitioning is upgraded to CPU based partitioning.   Remember this requires more memory overhead but can provide performance increases on these systems.


The issue is commonly identified by looking at the DMVs dm_os_wait_stats and dm_os_spinlock_stats for types (CMEMTHREAD and SOS_SUSPEND_QUEUE).   Microsoft CSS usually sees the spins jump into the trillions and the waits become a hot spot.
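A minimal pair of queries to surface those counters (the interpretation thresholds vary by workload):

```sql
-- Wait statistics: CMEMTHREAD as a hot spot suggests contention
-- on a NODE-partitioned memory object.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CMEMTHREAD';

-- Spinlock statistics: look for CMEMTHREAD / SOS_SUSPEND_QUEUE
-- entries with spins climbing toward the trillions.
SELECT name, collisions, spins, backoffs
FROM sys.dm_os_spinlock_stats
ORDER BY spins DESC;
```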

Caution: Use trace flag 8048 as a startup parameter.   It is possible to use the trace flag dynamically but limited to only memory objects that are yet to be created when the trace flag is enabled.  Memory objects already built are not impacted by the trace flag.
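After adding -T8048 as a startup parameter and restarting SQL Server, you can confirm the flag is active:

```sql
-- Lists globally enabled trace flags; 8048 should appear as
-- globally enabled after the restart.
DBCC TRACESTATUS (8048, -1);
```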




Bob Dorr - Principal SQL Server Escalation Engineer

03/02 - Errata added for [MS-SMB]: Server Message Block (SMB) Protocol

MSDN Blogs - Mon, 03/02/2015 - 10:48

Changes to the sections listed below:

Section, Per SMB Session

Section, Application Requests the Session Key for a Connection

Section, Server Application Queries a User Session Key

Section, Receiving an SMB_COM_SESSION_SETUP_ANDX Request

Making it easier for Enterprise customers to upgrade to Internet Explorer 11 — and Windows 10

MSDN Blogs - Mon, 03/02/2015 - 10:00

As we shared last year, a top priority for Microsoft is helping our Enterprise customers stay up-to-date with the latest version of Internet Explorer. This is particularly important for Windows 7 customers who are upgrading to Internet Explorer 11 by January 12, 2016 to continue receiving security updates and technical support. We understand many customers have Web apps and services that were designed specifically for older versions of Internet Explorer and we provide a set of tools, like Enterprise Mode, the Enterprise Mode Site List, and Enterprise Site Discovery, to help you run these applications in Internet Explorer 11 — and ease the upgrade to Windows 10.

Enterprise Mode helps customers extend their investments in older Web apps through higher compatibility with the IE8 rendering engine in a more modern browser like Internet Explorer 11. In the ten months since we released Enterprise Mode, we’ve heard from our customers that it is very effective at improving legacy app compatibility and that the upgrade to Internet Explorer 11 was easier than ever before. To help us tell this story, Microsoft commissioned Forrester Consulting to interview and survey large customers in the US, UK, Germany, and Japan who have started their IE11 upgrades. Customers found that:

  • Upgrading from IE8 to IE11 was 2.3 times faster than expected, thanks to Enterprise Mode.
  • The effort to rewrite applications to modern browser standards was reduced by 75%.
  • Ongoing browser support and critical applications testing were significantly reduced.
  • Many business users saw improved productivity from using a single browser.

Organizations save money, reduce risk, and experience higher productivity by upgrading to IE11. Best of all, upgrading to Internet Explorer 11 now can help ease your migration to Windows 10. Download The Total Economic Impact of Microsoft Internet Explorer 11.

Better Backward Compatibility with the Enterprise Mode Site List

Enterprise Mode can be very effective in providing backward compatibility for older Web apps. In January, for example, Microsoft held an Enterprise Customer Summit with about 100 large customers, and we found that every broken site brought by a customer to our IE App Compatibility Workshop was fixed by using Enterprise Mode. Depending on your environment, you may not need to use Enterprise Mode for better emulation of the IE8 engine, but may still benefit from using the Enterprise Mode Site List and its new <docMode> functionality.

In November, we expanded the functionality of the Enterprise Mode Site List to include the ability to put any Web app in any document mode, without changing a single line of code on the Web site. This new functionality adds a <docMode> section to the Enterprise Mode XML file, separate from the <emie> section for Enterprise Mode sites. Using this new functionality, Microsoft’s own IT department saw our internal line of business application pass rate go from 93% to 100% with 24 document mode entries in our Enterprise Mode Site List and only a single Enterprise Mode entry.
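As a sketch, a site list that uses both sections might look like the following; the domains are hypothetical, and the exact schema is covered in the Enterprise Mode Site List documentation:

```xml
<rules version="1">
  <!-- Sites that need the higher-fidelity IE8 emulation -->
  <emie>
    <domain exclude="false">legacyapp.contoso.com</domain>
  </emie>
  <!-- Sites pinned to a specific document mode, with no code changes -->
  <docMode>
    <domain docMode="8">oldreports.contoso.com</domain>
    <domain docMode="9">dashboard.contoso.com</domain>
  </docMode>
</rules>
```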

Web paths added to the Enterprise Mode Site List can now be rendered either in Enterprise Mode—which provides higher-fidelity emulation for IE8—or any of the document modes listed below:

Enterprise Mode (in green above), with its higher-fidelity emulation for IE8, was added to Internet Explorer 11 in April, 2014. The blue capabilities were added to the Enterprise Mode Site List in November, 2014, and can particularly help IE9 or IE10 customers upgrade more easily. Enterprise Mode can be further improved by using it in combination with Compatibility View, a mode added in IE8 for better compatibility for sites designed for IE7, as indicated in orange.

Compatibility View is basically a switch: If a web page has no DOCTYPE, the page will be rendered in IE5 mode. If there is a DOCTYPE, the page will be rendered in IE7 mode. You can effectively get Compatibility View by specifying IE7 in the <docMode> section—as this falls back to IE5 automatically if there’s no DOCTYPE—or you can use Enterprise Mode with Compatibility View for even better emulation. See below for details.

Speaking with customers, we found that IT Pros aren’t always the ones doing the app remediation work. Much of this work is often done by IT Developers, who don’t always understand all of the IE backward compatibility offerings. In this post, we want to provide a clearer set of mitigations for IT Pros and IT Developers to help make your upgrade as easy as possible.

Call to action for IT Pros

We know that upgrading to a new browser can be a time-consuming and potentially costly venture. To help reduce these costs, we introduced the Enterprise Site Discovery toolkit to help you prioritize which sites you should be testing based on their usage in your enterprise. For example, if the data shows that no one is visiting a particular legacy Web app anymore, you may not need to test or fix it. This tool also gives you information on what document mode the page runs in your current browser, so you can better understand how to fix that site if it breaks in a newer version of the browser. This tool is currently only supported in IE11, but we are bringing Enterprise Site Discovery support to IE8, IE9, and IE10 very soon.

Once you know which sites to test and fix, the following remediation methods may help fix your app compatibility issues in IE11 and Windows 10.

If you're on IE8 and upgrading to IE11…
  • Use the Enterprise Mode Site List to add sites to IE5, IE7, and IE8 modes.
  • Sites with x-ua-compatible meta tag or HTTP header set to “IE=edge” may break in IE11 and need to be set to IE8 mode. This is because Edge in IE8 meant IE8 mode, but Edge in IE11 means IE11 mode.
  • Sites without a DOCTYPE in zones other than Intranet will default to QME (or “interoperable quirks”) rather than IE5 Quirks and may need to be set to IE5 mode.
  • If you have enabled Turn on Internet Explorer Standards Mode for local intranet group policy setting, sites with a DOCTYPE in the Intranet zone will open in IE11 mode instead of IE7 mode, and may need to be set to IE7 mode. Sites without a DOCTYPE will open in QME and may need to be set to IE5 mode.
  • Some IE5, IE7, and IE8 sites may need to be added to Enterprise Mode to work.
  • Some sites may need to be added to both Enterprise Mode and Compatibility View to work. You can do this by adding the site both to the Enterprise Mode section of the Enterprise Mode Site List and to the Use Policy List of Internet Explorer 7 sites group policy.
If you’re on IE9 and upgrading to IE11…
  • Use the Enterprise Mode Site List to add sites to IE5, IE7, and IE9 modes.
  • Sites with x-ua-compatible meta tag or HTTP header set to “IE=edge” may break in IE11 and need to be set to IE9 mode. This is because Edge in IE9 meant IE9 mode, but Edge in IE11 means IE11 mode.
  • Sites without a DOCTYPE in zones other than Intranet will default to QME rather than IE5 Quirks and may need to be set to IE5 mode.
  • If you have enabled Turn on Internet Explorer Standards Mode for local intranet group policy setting, sites with a DOCTYPE in the Intranet zone will open in IE11 mode instead of IE7 mode, and may need to be set to IE7 mode. Sites without a DOCTYPE will open in QME and may need to be set to IE5 mode.
  • If your sites worked in IE9, you won’t need Enterprise Mode but can still take advantage of the newer <docMode> section of the Enterprise Mode Site List.
If you’re on IE10 and upgrading to IE11…
  • Use Enterprise Mode Site List to add sites to IE5, IE7, and IE10 modes.
  • Sites with x-ua-compatible meta tag or HTTP header set to “IE=edge” may break in IE11 and need to be set to IE10 mode. This is because Edge in IE10 meant IE10 mode, but Edge in IE11 means IE11 mode.
  • If you have enabled Turn on Internet Explorer Standards Mode for local intranet group policy setting, sites with a DOCTYPE in the Intranet zone will open in IE11 mode instead of IE7 mode, and may need to be set to IE7 mode. Sites without a DOCTYPE will open in QME and may need to be set to IE5 mode.
  • If your sites worked in IE10, you won’t need Enterprise Mode but can still take advantage of the newer <docMode> section of the Enterprise Mode Site List.
If you’re on IE11 and upgrading to Windows 10…
  • Use Enterprise Mode Site List to add sites to IE5, IE7, IE8, IE9, IE10, and IE11 modes as needed.
  • The x-ua-compatible meta tag and HTTP header will be ignored for all sites not in the Intranet zone. If you enable Turn on Internet Explorer Standards Mode for local intranet group policy, all sites, including those in the Intranet zone, will ignore x-ua-compatible. Look at your Enterprise Site Discovery data to see which modes your sites loaded in IE11 and add those sites to the same modes using the Enterprise Mode Site List tool.

We recommend that Enterprise customers focus their new development on established, modern Web standards for better performance and interoperability across devices, and avoid developing sites in older IE document modes. We often hear that because the Intranet zone defaults to Compatibility View, IT Developers inadvertently create new sites in IE7 or IE5 modes in the Intranet zone, depending on whether they used a DOCTYPE. As you move your Web apps to modern standards, you can enable the Turn on Internet Explorer Standards Mode for local intranet group policy and add sites that need IE5 or IE7 modes to the Site List. Of course, testing is always a good idea to ensure these settings work for your environment.

Call to action for IT Developers

An IT Pro may ask you to update your site if it worked in an older IE version but no longer works in IE11. Here are the set of steps you should follow to find the right remediation:

Try Document Modes

Try to see if the site works in one of the following document modes: IE5, IE7, IE8, IE9, IE10, or IE11.

  • Open the site in IE11, load the F12 tools by pressing the ‘F12’ key or selecting ‘F12 Developer Tools’ from the ‘Tools’ menu, and select the ‘Emulation’ Tab.

  • Try running the site in each document mode until you find one in which it works. You will need to make sure the user agent string dropdown matches the same browser version as the document mode dropdown. For example, if you were testing if the site works in IE10, you should update the document mode dropdown to “10” and the user agent string drop down to “Internet Explorer 10.”
  • If you find a mode where your site works, inform your IT Pro to add the site domain, sub-domain, or URL to the Enterprise Mode Site List in the document mode where the site works. While you can add the x-ua-compatible meta tag or HTTP header, this approach will only work in Windows 10 for sites in the Intranet zone when the Turn on Internet Explorer Standards Mode for local intranet group policy is not enabled.
Try Enterprise Mode

If a document mode didn’t fix your site, try Enterprise Mode. Enterprise Mode only benefits sites written for IE5, IE7, and IE8 document modes.

  • Enable the Let users turn on and use Enterprise Mode from the Tools menu group policy setting locally on your machine. You can do this by searching and running gpedit.msc, going to ‘Computer Configuration’ > ‘Administrative Templates’ > ‘Windows Components’ > ‘Internet Explorer’, and enabling the Let users turn on and use Enterprise Mode from the Tools menu group policy setting. After making this change, run gpupdate.exe /force to make sure the setting is applied locally. Make sure to disable this setting once you’re done testing. Alternately, you can use a regkey; see Turn on Enterprise Mode and use a site list for more information.
  • Restart IE11 and open the site you’re testing, then go to Emulation Tab in F12 tools and select “Enterprise” from the Browser profile dropdown. If the site works, inform your IT Pro that the site needs to be added to the Enterprise Mode section.
Try Compatibility View with Enterprise Mode

If Enterprise Mode doesn’t work, setting Compatibility View with Enterprise Mode will give you the Compatibility View behavior that shipped with IE8.

  • While browsing in Enterprise Mode, go to the ‘Tools’ menu and select Compatibility View Settings, and add the site to the list.
  • If this works, inform your IT Pro to add the site to both the Enterprise Mode section and the Use Policy List of Internet Explorer 7 sites group policy setting. Please note that adding the same Web path to the Enterprise Mode and docMode sections of the Enterprise Mode Site List will not work, but we are addressing this in a future update.
Update site for modern Web standards

If you have the time and budget, you should update your site for established, modern Web standards, so you don’t need to use compatibility offerings to make your site continue to work.

More Resources

For more information on all of these tools, please see the following resources:

As always, we suggest that consumers upgrade to the latest version and enable automatic updates for more secure browsing. If you use an older version of Internet Explorer at work, encourage your IT department to learn more about the new backward-compatible features of Internet Explorer 11. Like many of our other customers, you may find that upgrading to the latest version of Internet Explorer is easier and less costly than previous upgrades. Best of all, the upgrade to Internet Explorer 11 can help ease your migration to Windows 10.

– Jatinder Mann, Senior Program Manager Lead
– Fred Pullen, Senior Product Marketing Manager

MVP Monday - SQL Server High Availability in Windows Azure IaaS

MSDN Blogs - Mon, 03/02/2015 - 09:30

Editor’s note: The following post was written by Cluster MVP David Bermingham

SQL Server High Availability in Windows Azure IaaS

When deploying SQL Server in Windows Azure you must consider how to minimize both planned and unplanned downtime. Because you have given up control of the physical infrastructure, you cannot always determine when maintenance periods will occur. Also, just because you have given control of your infrastructure to Microsoft, it does not guarantee that you are not susceptible to some of the same types of outages that you might expect in your own data center. To minimize the impact of both planned and unplanned downtime Microsoft provides what are called Fault Domains and Update Domains. By leveraging Update Domains and Fault Domains and deploying either SQL Server AlwaysOn Availability Groups (AOAG) or AlwaysOn Failover Cluster Instances (AOFCI) you can help minimize both planned and unplanned downtime in your SQL Server Windows Azure deployment. Throughout this document when I refer to a SQL Server Cluster, I am referring to both AOAG and AOFCI. When needed, I will refer to AOAG and AOFCI specifically.

Fault Domains are essentially “a rack of servers”, with no common single point of failure between different Fault Domains, including different power supplies and network switches. An Update Domain ensures that when Microsoft is doing planned maintenance, only one Update Domain is worked on at a given time. This eliminates the possibility that Microsoft would accidentally reboot all of your servers at the same time, assuming that each server is in a different Update Domain.

When you provision your Azure VM instances in the same Availability Set, you are ensuring that each VM instance is in a different Update Domain and Fault Domain…to an extent. You probably want to read Manage The Availability of Virtual Machines to completely understand how VMs get provisioned in different Fault Domains and Update Domains. The important part of the availability equation is ensuring that the VMs participating in your SQL Server cluster are isolated from each other, so that the failure of a single Fault Domain or maintenance in an Update Domain does not impact all of your Azure instances at the same time.

So that is all you need to know….right? Well, not exactly. Azure IaaS does not behave exactly like your traditional infrastructure when it comes to clustering. In fact, before July of 2013, you could not even create a workable cluster in Azure IaaS. It wasn’t until then that they released hotfix KB2854082 that made it possible. Even with that hotfix there are still a few considerations and limitations when it comes to highly available SQL Server in Windows Azure.

Before we dive into the considerations and limitations, you need to understand a few basic Azure terms. These are not ALL the possible terms you need to know to be an Azure administrator, but these are the terms we will be discussing that are specific to configuring highly available SQL Server in Azure IaaS.

Virtual Network

Before you begin provisioning any virtual machines, you will need to configure your Virtual Network such that all your SQL Server Cluster VMs can reside in the same Virtual Network and Subnet. There is an easy Virtual Network Wizard that will walk you through the process of creating a Virtual Network. Additional information about Virtual Networking can be found here.

If you are considering a Hybrid Cloud deployment where you stretch your on premise network to the Azure Cloud for disaster recovery purposes, you may want to review my blog post below.

As you will see below, it is required that each SQL Server cluster reside in a dedicated Cloud Service (see the Cloud Service section below) and that clients connect from outside of the Cloud Service. When creating subnets, I would create a small subnet for each cluster I plan to create. These subnets will only hold a handful of VMs and will be used exclusively for the Cloud Services that contain your SQL Server clusters.

Availability Set

As previously mentioned, an Availability Set is used to define Fault Domains and Update Domains. When provisioning your SQL Servers and File Share Witness (more on this later) make sure to put all of your virtual machines in the same Availability Set. Availability Sets are described as follows…

“An availability set is a group of virtual machines that are deployed across fault domains and update domains. An availability set makes sure that your application is not affected by single points of failure, like the network switch or the power unit of a rack of servers.”

Cloud Service

Before you go Bing “Azure Cloud Service”, you need to understand that there is the overall marketing term “Cloud Service”, which is all fine and good, but not what we are talking about here. A Cloud Service in Azure IaaS is a very specific feature that is described as follows…

“A cloud service is a container for one or more virtual machines you create. You can create a cloud service for a single virtual machine, or you can load balance multiple virtual machines by placing them in the same cloud service.”

The other thing about a Cloud Service is that it is addressable by a single public IP address. All virtual machines residing in a Cloud Service can be reached by the Public IP associated with the Cloud Service and the endpoint ports defined when you create the virtual machine. Later in this article we will also learn that it is this public IP address that will be used instead of the traditional Cluster IP Resource for connectivity to the cluster.

When creating highly available SQL Server instances, you will want to place ALL of your SQL instances and the File Share Witness in the same Cloud Service. It is required that you have a different Cloud Service for each additional SQL Server cluster that you create. I also recommend that you reserve that Cloud Service for only the SQL Server cluster nodes and the File Share Witness. You will see later in this article that all SQL Server cluster clients will need to reside outside of the cluster’s Cloud Service, which is just one of the reasons to keep only the SQL cluster nodes and File Share Witness in a dedicated Cloud Service.

You can create a Cloud Service, join an existing Cloud Service, create an Availability Set or join an Availability Set at the time you provision your Virtual Machines as shown in Figure 1 below.


Figure 1 - Cloud Service and Availability Set are defined when creating your virtual machine

Configuration of SQL Cluster

Now that we have a base understanding of some of the Azure terminology, we are ready to begin the cluster installation. Whether you are configuring an AlwaysOn Availability Group Cluster or an AlwaysOn Failover Cluster Instance, you will need to start with a basic cluster configuration. If you are using Windows Server 2012 R2, you are good to go. If you are using Windows Server 2012 or Windows Server 2008 R2, you will first need to install hotfix KB2854082 on each cluster node.

Assuming you have minimally completed the pre-requisites below, you are ready to create your cluster.


1. Create your Azure Virtual Network

2. Provision three VMs. We’ll call these VMs SQL1, SQL2 and DC1 for simplicity through the rest of this document

3. Place all of these VMs in the same Cloud Service and Availability Set

4. Apply hotfix KB2854082 if necessary (pre-Windows 2012 R2)

5. Create a Windows Domain and join all servers to the domain

Creating a cluster is pretty straightforward; I won’t go into great detail here as it is the same as creating an onsite cluster. The one major difference is at the end of the process: you will see that the Cluster Name resource fails to come online. The reason is that Azure VMs get their IP address information from DHCP, and the non-RFC-compliant DHCP service in Azure issues the cluster a duplicate IP address, which causes the Cluster IP Address resource to fail to come online. In order to fix this, you will need to manually specify another address that is not in use in the subnet. Because we have no control over the DHCP scope, I would choose an IP address that is near the end of the scope. This is another reason why I like to limit the Cloud Service to just the cluster nodes, so I don’t accidentally provision another VM that uses an IP address I have already specified for my cluster.

Because there is no shared storage in Azure, you will notice that the quorum configuration defaulted to Node Majority. Node majority for a two-node cluster is certainly not optimal, so you will need to configure a File Share Witness (FSW). In my example configuration, I configured the FSW on DC1. Wherever you configure the FSW, you should ensure that it is in the same Availability Set as the cluster nodes. This ensures that you don’t have a failure of a cluster node and the FSW at the same time.
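A minimal sketch of the quorum change, assuming a share named \\DC1\FSW has already been created and granted read/write access to the cluster computer account (the share path is a placeholder for your own):

```powershell
# Switch the cluster quorum model from Node Majority to
# Node and File Share Majority, using the witness share on DC1.
Set-ClusterQuorum -NodeAndFileShareMajority "\\DC1\FSW"

# Confirm the new quorum configuration.
Get-ClusterQuorum
```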

Now that you have configured the basic cluster, you will need to decide whether you want to deploy an AlwaysOn Availability Group (AOAG) or an AlwaysOn Failover Cluster Instance (AOFCI). To deploy an AlwaysOn Failover Cluster Instance you will need to use a third-party, cluster-integrated replicated volume resource, such as SIOS DataKeeper Cluster Edition, as there is currently no shared storage option in Azure suitable for clustering.


This post assumes that you are familiar with SQL Server AlwaysOn options; if you are not, you should review High Availability Solutions (SQL Server)

While AOAG can meet the needs of many, there are certainly situations where AOAG does not fulfill the requirements. The chart below highlights some of the limitations of AOAG in comparison to AOFCI with SIOS DataKeeper Cluster Edition.


Figure 2 - AOAG vs. AOFCI with DataKeeper

In my experience, the two biggest reasons why people are deploying AOFCI rather than AOAG are the support for SQL Server Standard Edition and the fact that it protects the entire SQL Server instance rather than just the user-defined databases. The latter reason becomes even more important after you discover that Windows Azure only supports one client access point, meaning that with AOAG all of your databases must reside in a single Availability Group. It is also much easier to create one AOFCI and have every database, including System and MSDB, replicated and protected than to manually manage Agent Jobs, SQL user accounts and each database individually as you do with AOAG.

Configuring AOFCI and AOAG

Once again, the basic configuration of AOFCI or AOAG in the Azure Cloud is pretty much identical to how you would configure these services with on-premises servers. (For detailed instructions on deploying a #SANLess cluster with DataKeeper visit my article Creating a SQL Server 2014 AlwaysOn Failover Cluster (FCI) Instance in Windows Azure IaaS). The difference comes when you are configuring the client access point. As we saw with the initial cluster creation process, the Cluster Name resource will fail to come online because the DHCP service will hand out a duplicate IP address. However, instead of simply specifying another address in the same subnet, you will need to set the Client Access Point IP address to be the same as the Cloud Service’s public IP address, with a host-specific subnet mask of 255.255.255.255. Clients will then access this SQL cluster via load-balanced VM endpoints with direct server return. The directions outlined in the Configuring the Client Access Point section below will tell you exactly how to put this all together.
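As an illustrative sketch (the resource name and public IP below are placeholders for your own values), the client access point change can be made with the cluster PowerShell cmdlets; the OverrideAddressMatch private property lets the resource use an address outside the local subnet:

```powershell
# Point the SQL client access point's IP resource at the Cloud Service's
# public IP with a host-specific mask, overriding the subnet match check.
Get-ClusterResource "SQL IP Address 1 (sqlcluster)" |   # placeholder resource name
    Set-ClusterParameter -Multiple @{
        "Address"              = "104.45.21.79"     # placeholder: Cloud Service public IP
        "SubnetMask"           = "255.255.255.255"  # host-specific mask
        "EnableDhcp"           = 0                   # static, not DHCP
        "OverrideAddressMatch" = 1                   # allow an address outside the local subnet
    }
```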

Configuring the Client Access Point

Configuring the client access point and the load balanced endpoint is probably the most confusing or misunderstood part of creating SQL Server clusters in Windows Azure, or at least it was for me. If you are configuring AOAG you are in luck, there is a great article that walks you through this process Step-by-Step.

However, if you want to configure AOFCI, you have to take some of the information supplied in that article and apply it to AOFCI rather than an AOAG. You can follow Steps 1 through 3 as described in that article to create the load balanced endpoints. However, when you get to Step 4 you will have to make adjustments since you will already have configured a client access point as part of your SQL Server Cluster Role configuration. On Step 4, “Create the availability group listener”, you can skip 1-6 and continue with 7 through 10 to change the IP address of the SQL Server Cluster resource. Once the IP address has been changed, you can bring the SQL Server Failover cluster instance online.
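A hedged sketch of what creating the load-balanced endpoint might look like with the classic (service management) Azure PowerShell cmdlets of that era; the service, VM and load-balanced set names and the probe port are assumptions to be replaced with your own, and the same command is repeated for each cluster node so both endpoints join one load-balanced set:

```powershell
# Add a load-balanced SQL endpoint with a health probe and
# direct server return enabled, then commit the change to the VM.
Get-AzureVM -ServiceName "SQLCloudService" -Name "SQL1" |     # placeholder names
    Add-AzureEndpoint -Name "SQL" -Protocol tcp `
        -LocalPort 1433 -PublicPort 1433 `
        -LBSetName "SQLEndpoint" `
        -ProbePort 59999 -ProbeProtocol tcp -ProbeIntervalInSeconds 10 `
        -DirectServerReturn $true |
    Update-AzureVM
```

Run the same pipeline again with `-Name "SQL2"` so that the probe can steer traffic to whichever node currently owns the clustered SQL Server resources.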

Accessing the SQL Cluster in Azure

As previously described, the SQL Server cluster must be accessed from outside of the Cloud Service via the load balanced endpoint. Depending upon which server is active, the load balanced endpoint will redirect all client requests to the active server. At the end of the day, your SQL Server cluster should look something like Figure 3 shown below.


Figure 3 - Clients accessing the SQL Server Cluster
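For example, a client outside the Cloud Service might connect with sqlcmd through the public load-balanced endpoint rather than the cluster name (the DNS name and credentials below are placeholders):

```shell
# Connect through the Cloud Service's public endpoint; the load balancer
# forwards the request to whichever node is currently active.
sqlcmd -S sqlcluster.cloudapp.net,1433 -U sqladmin -P '<password>' -Q "SELECT @@SERVERNAME"
```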

What about Hybrid Cloud?

While this blog post is focused on High Availability in the Azure Cloud, it is certainly possible to build Disaster Recovery configurations which have some SQL cluster nodes in Azure and some nodes on premises. For more information on Hybrid Cloud configurations, read my article Creating a multi-site cluster in Windows Azure for Disaster Recovery. That article describes Hybrid Cloud solutions such as those pictured in Figure 4 below.


Figure 4 - Hybrid Cloud for Disaster Recovery


Windows Azure IaaS is a powerful platform for deploying business critical applications. All of the tools required to build a highly available infrastructure are in place. Knowing how to leverage those tools, especially in regards to providing High Availability for SQL Server, can take a little research and trial and error. I hope that this article has pointed you in the right direction and has reduced the amount of research and trial and error that you will have to do on your own. As with most cloud services, new features become available very rapidly, and the guidance in this article may become outdated, or even wrong in some cases, rather quickly. For the latest guidance, please refer to my blog Clustering for Mere Mortals where I will attempt to update guidance as things in Azure evolve.

About the author

David Bermingham is recognized within the technology community as a high availability expert and has been honored by his peers by being elected a Microsoft MVP in Clustering every year since 2010. David’s work as Director of Technical Evangelism at SIOS has him focused on evangelizing Microsoft high availability and disaster recovery solutions as well as providing hands-on support, training and professional services for cluster implementations. David holds numerous technical certifications and draws from over twenty years of experience in IT, including work in the finance, healthcare and education fields, to help organizations design solutions to meet their high availability and disaster recovery needs. David has recently begun speaking on deploying highly available SQL Servers in the Azure Cloud and deploying Azure Hybrid Cloud for disaster recovery.

About MVP Mondays

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.



eBook deal of the week: Exam Ref 70-413 Designing and Implementing a Server Infrastructure (MCSE)

MSDN Blogs - Mon, 03/02/2015 - 09:05

List price: $31.99  
Sale price: $15.99
You save 50%


Fully updated! Prepare for Microsoft Exam 70-413 - and help demonstrate your real-world mastery of designing and implementing a Windows Server infrastructure in an enterprise environment. Designed for experienced IT professionals ready to advance their status, Exam Ref focuses on the critical-thinking and decision-making acumen needed for success at the MCSE level. Learn more

Terms & conditions

Each week, on Sunday at 12:01 AM PST / 7:01 AM GMT, a new eBook is offered for a one-week period. Check back each week for a new deal.

The products offered as our eBook Deal of the Week are not eligible for any other discounts. The Deal of the Week promotional price cannot be combined with other offers.

Enabling Secure Boot on Linux with the Windows Technical Preview

MSDN Blogs - Mon, 03/02/2015 - 09:00

When we released Windows Server 2012 R2 / Windows 8.1 and introduced the world to Generation 2 virtual machines - we were only able to run Windows guest operating systems.  In the following months we worked with a bunch of folks in the Linux community and were able to get a number of Linux distributions running on Generation 2 virtual machines.

With the Windows Technical Preview released, we have worked to make this even better.  For the first time you can enable Secure Boot on a virtual machine running Linux.  To do this you will need to:

  1. Create a Generation 2 virtual machine
  2. Change the Secure Boot certificate of the virtual machine using the following PowerShell command:
    Set-VMFirmware "VM Name" -SecureBootTemplate MicrosoftUEFICertificateAuthority
  3. Install a version of Linux that supports SecureBoot using this template (presently Ubuntu or SuSE - latest versions)

Once you have done this - you can verify that Secure Boot is present and functional in the system by running:

sudo apt-get install fwts
sudo fwts uefidump - | grep Secure


Microsoft Azure DevCamps in the United States

MSDN Blogs - Mon, 03/02/2015 - 08:55


Learn the latest Microsoft Azure dev tools and technologies: join Microsoft experts at your local Microsoft Cloud DevCamp and leave with code running in the cloud!

Windows 10 and the Windows Platform Developer

MSDN Blogs - Mon, 03/02/2015 - 08:53

Last time around I talked about the advent of Windows 10 and what sorts of the things you should be looking for in your apps that will help you take advantage of this exciting new release coming from Microsoft.  Since then the Technical Preview program has introduced Windows 10 Technical Preview for phones and more than a few of my friends have jumped at the opportunity to get a first early look at what’s coming.

We discussed three things you should begin to do: Education (Build or Build Sessions), Advertising (Be ready for more customers) and Universal Apps (know the latest dev environment).  Well, Windows 10 is coming faster than you think, so I think it’s worthwhile to expand on the things you can do to get ready!

The first thing you should be doing is signing up for the Windows Insider Program, if you haven’t already.  This is going to get you access to the latest builds of the Windows 10 Technical Preview and the Windows 10 Technical Preview for phones, and the latest news on the above along with access to technical and discussion forums I’ll outline below.

Visual Studio 2015

Have you tried out the Visual Studio 2015 Preview?  You might want to get on top of that.  While I don’t yet know what version of Visual Studio will be required to write apps for Windows 10, there are many reasons to be excited about Visual Studio 2015 anyway.  The key word to remember with Visual Studio 2015 is “better”.  Better integration with almost everything.  Better Git version control; better debugging and diagnostics (were you ever interested in Lambda expressions in debugger windows like Watch, or in PerfTips indicating how long each line in your code takes to execute?); better Blend (with improvements too numerous to list here); and many more.

Local Community

Your local community leaders are either planning or have already scheduled information sessions and/or discussion groups on Windows 10 Technical Preview, Visual Studio 2015 preview along with the current releases of Microsoft Azure and all that goes with that.  You want to know more about Microsoft Azure before the next version of Windows is officially released.  Your apps will likely be using Mobile Services, Storage and other capabilities of Windows Azure.

Look for a local user group involved with Microsoft Azure or Windows Platform Development; even .NET user groups will get involved.  The Vancouver Windows Platform Developer Group is planning events around Visual Studio, Windows 10 and Windows Azure and how they all interact (see the Canadian Developer Events Hub for other user groups across the country).  You can also find user groups on LinkedIn, and other places. 

Online Community

In the online community there are a few sources of reliable information.  There is, of course, lots of speculation and “leaks” out there, but if you want to know the difference between rumour and fact then you need to start at the Blogging Windows site.  You’ll find announcements, feature discussions, and lots of other tidbits about Windows 10 Technical Preview.  Once you are part of the Windows Insider Program you’ll get access to special insider forums where lots of discussion is going on about almost every aspect of the Technical Preview, including the one for phones.  Questions on everything from drivers to features to user interface to updates are covered in-depth.

My Favourite Bit So Far

Out of all the stuff in the Windows 10 Technical Preview, the most fun one so far is the Cortana Integration.  My daughter had some homework on capital cities of the world so she just sat in front of our Windows 10 TP machine and never touched anything.  She just said “Hey Cortana, What’s the capital of Wales?” and Cortana told her.  She went through her research list one country at a time and never had to repeat a request and never once did Cortana disappoint her.  How much fun is that?

I want to know how that was done… don’t you?

Service Bus Explorer 2.6 now available!

MSDN Blogs - Mon, 03/02/2015 - 08:46
I just released an improved version of the Service Bus Explorer tool based on the Microsoft.ServiceBus.dll The zip file contains: The source code for the Service Bus Explorer This version of the tool uses the Microsoft.ServiceBus.dll that is compatible with the current version of the Windows Azure Service Bus, but not with the Service Bus 1.1, that is, the current version of the on-premises version of the Service Bus. The Service Bus Explorer This version can be...(read more)

A first look at the Windows 10 universal app platform

MSDN Blogs - Mon, 03/02/2015 - 08:44

We first announced Windows universal apps with Windows 8.1, and the platform convergence story continues and strengthens further today.

Earlier today at Mobile World Congress in Barcelona, Kevin Gallo provided developers a first look at the Windows 10 developer platform strategy and universal app platform. I encourage you to tune in to our Build conference in April for the full story.

Windows 10 represents the culmination of our platform convergence journey with Windows now running on a single, unified Windows core. This convergence enables one app to run on every Windows device – on the phone in your pocket, the tablet or laptop in your bag, the PC on your desk, and the Xbox console in your living room. And that’s not even mentioning all the new devices being added to the Windows family, including the HoloLens, Surface Hub, and IoT devices like the Raspberry Pi 2. All these Windows devices will now access one Store for app acquisition, distribution and update.

Today we’ll briefly touch on how this new platform delivers on:

  1. Driving scale through reach across device type
  2. Delivering unique experiences
  3. Maximizing developer investments

You can expect us to go into all of the universal platform technical details at Build.


Driving scale through reach across device types with mobile experiences

To understand why we converged Windows into one core and one developer platform, it’s worth examining how the customers’ relationship with their devices and the experience they expect has changed. The explosive growth in mobile devices over the last decade has led to the creation of totally new app experiences and has driven an extension of existing web experiences to enable developers to reach customers in innovative and unique ways. Until now, mobile experiences have largely meant app and web experiences built for mobile devices – most often defined by the phone you carry with you.

But this is increasingly too narrow a definition for a growing number of customers who want their experiences to be mobile across ALL their devices and to use whatever device is most convenient or productive for the task at hand.

We see this preference for mobile experiences manifest itself most profoundly in what customers search for in the Store. Just a year ago, the experiences customers sought on Windows phones were different from tablet, which were different again from laptops and PCs, and different from the game console. This has changed – rapidly. Today, the top Store searches for each device type overlap significantly, both across and within app categories.

Building a platform that supports this new world of mobile experiences requires not only supporting a number of screen sizes, but also providing flexibility in interaction models, whether it be touch, mouse & keyboard, a game controller or a pen. As a customer flows across their devices, they will often quickly transition from touch gestures (e.g. selecting a song or playlist, reading a news feed or document or viewing pictures from a trip) to keyboard & mouse for productivity (e.g. managing their playlist, writing a new blog post, or touching up that video or photo for sharing). To bridge the device gap (how many devices does a customer really want to carry with them?), the industry is seeing the emerging trend of multi-modal devices, like the 2-in-1 Surface Pro 3. Within app experiences, an increasing number of apps handle this exact scenario – except developers are bridging this gap by building one or more mobile apps, a desktop application, and a website. We believe this can and should be easier.

With Windows 10, we are leading a new path forward for mobile experiences – breaking out of the limited box of just mobile devices and empowering customers to take full advantage of all of the screens in their life. For Windows, these mobile experiences are powered by our one Windows core and the universal app platform.


As we built the universal app platform, we set out to ensure that all Windows developers would equally benefit from this one core. The platform enables a new class of Windows universal apps – apps that are truly written once, with one set of business logic and one UI. Apps that are delivered to one Store within one package. Apps that are able to reach every Windows 10 device the developer wants to reach. Apps that feel consistent and familiar to the customer on all devices, while also contextually appropriate to each device’s input model and screen size. The new universal app platform completes our developer platform convergence by providing you with the ability to finally create one app that can run on mobile, desktop, console, holographic, and even IoT devices.


Delivering unique and personal experiences

The universal app platform is designed to help you quickly build these new mobile experiences that are both consistent yet flexible, enabling you to deliver a unique, highly-personalized experience to delight and engage your customers across each device family you target. We do this by providing a number of platform capabilities that do most of the runtime adaptation work for you, and doing so intelligently, allowing you to focus on delighting the customer:

  • Adaptive UX: enables your app’s user interface to fluidly adapt at runtime based on how the customer is interacting with your app and the available device capabilities – rendering an experience that is contextually appropriate.
    • Screen layout: In addition to base app model improvements, we have improved the ViewStateManager to make it easier to create more adaptive experiences. This means that your universal app projects no longer require separate project heads or UI definitions for small and large screens, although we will still provide the option of separate UI definitions should you prefer it.
    • User controls: Windows 10 will determine, at runtime, how the customer is interacting with your app and render the appropriate user experience (e.g. on a laptop with a touch-screen, an app fly-out control will provide larger touch-targets if tapped with touch, as opposed to clicked with a mouse).
  • Natural user inputs: Windows 10 helps you build an app experience that is more personal and more human, by making it easy to incorporate natural user inputs into your app, such as natural speech, inking, gestures, and user gaze. Because Windows handles all of these inputs, we free you from needing to worry about how to parse the input for meaning – you only need to worry about which inputs are appropriate for your app and we’ll determine if they are present and parse the intent for you.
  • Cloud-based Services: Windows provides a number of services for use in your apps, such as Windows Notification Services (WNS), Windows roaming data and the Windows Credential Locker. With Windows 10, we are making more Windows services available to developers, including an expanded Cortana AI, OneDrive, and Application Insights. Beyond Windows, we continue to make it easier to take advantage of Microsoft Azure using services like Azure Mobile Services and the Azure Notification Hub.

But we know that your mobile experience doesn’t end when the customer closes your app. There are a number of Windows shell advances that are enabled by universal platform advances, making it easier to keep your customers engaged and getting your apps launched more often. Examples include:

  • Cortana integration: Apps now appear (and can be launched) directly in Cortana search results, with installed apps given highest priority in the search results.
  • Action Center: Windows 10 brings a more consistent and actionable notification experience to all Windows devices.

Lastly, I’d like to call out that the universal app platform is at the heart of Windows 10 itself with much of the shell running on the platform, in addition to a number of our key Windows experiences (e.g. a number of in-box apps, the Windows Store, and the ‘Project Spartan’ browser, to name a few). And the same animations, APIs, and controls used by these app experiences are available to you. You can feel confident that this platform has been ‘battle-tested’ and is ready for you to build mobile experiences that delight your customers, just as we are.


Maximizing investments in your app and web code

Windows 10 is about making it easier for you and your code to do more and go further with a new platform built to maximize and extend your existing investments, both in your code and your skills.

We’ve designed Windows 10 to continue to support existing Windows apps and desktop applications on the devices for which they were developed. And we’re working to make it as easy as possible for you to bring those investments forward to the new universal app platform.

For our HTML developers, Windows 10 provides a number of advances for the modern web:

  • New rendering engine: The new engine frees you from having to do platform-specific work to deliver a consistent mobile experience and is included in Internet Explorer 11, in our new ‘Project Spartan’ browser, and will be used by the WebView control.
  • ‘Project Spartan’: The ‘Project Spartan’ browser itself is a Windows universal app and updated via the Store – helping ensure it is always kept up-to-date.
  • Web Apps: Windows 10 will make it easy for you to create a Windows app that packages your website for publishing to the Store. Once installed, your website can update and call Universal APIs from JavaScript, creating a more engaging user experience.

Additionally, I’m pleased to announce that we will be delivering our first prototype of the Windows 10 Cordova platform in an Apache branch later this month – giving developers a preview of the update, and to get their feedback.

Getting ready for Windows 10

This is only a first look at the Windows universal app platform. We’ll have much more to share at the Build conference in April. If you’re not planning to attend the event in person, please save the date and plan to attend online – you can watch the keynotes streamed live or the recorded sessions the next day. Check out the Build 2015 website for more information.

In the meantime, we encourage you to get ready for Windows 10.

We look forward to sharing more with you at Build.

