Feed aggregator

CppRestSDK 2.9.0 is available on GitHub

MSDN Blogs - 1 hour 38 min ago

We are delighted to announce CppRestSDK (Casablanca) 2.9.0. This new version, available on GitHub, introduces new features and fixes issues reported against version 2.8.0. The C++ REST SDK is a Microsoft project for cloud-based client-server communication in native code, using a modern asynchronous C++ API design. The project aims to help C++ developers connect to and interact with services.
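As a quick illustration of that design (a minimal sketch, not part of the release notes; the URI is a placeholder), a GET request chains continuations on the task returned by the client:

#include <cpprest/http_client.h>
#include <iostream>

int main()
{
    // Build a client against a placeholder endpoint.
    web::http::client::http_client client(U("http://example.com"));

    // Issue an asynchronous GET and chain continuations on the result.
    client.request(web::http::methods::GET)
        .then([](web::http::http_response response)
        {
            // extract_string() itself returns a task, so the chain stays asynchronous.
            return response.extract_string();
        })
        .then([](utility::string_t body)
        {
            ucout << body << std::endl;
        })
        .wait();  // Block here only for the sake of this small example.

    return 0;
}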
We added:

  • support for basic authentication on Linux
  • static library support for Windows XP
  • a project for compiling as a static lib on Windows
  • a websocket_client_config option for SSL verify mode
  • a host-based connection pool map on non-Windows http_clients

We fixed issues in the Linux, OSX, and Android versions. Here is the set of changes going into this release:

  • Merged #70 & #65 which should fix building on CentOS/RedHat.
  • #143 Work around SSL compression methods memory leak in ASIO.
  • #82 Fixed ambiguous call to begin when using with boost library.
  • #117 Fix header reading on linux listener using HTTPS.
  • #97 Add support for basic authentication.
  • #206 remove warnings-errors for system-headers under linux; honour http_proxy env-variable.
  • #114 Removed redundant std::move() that was causing errors on Xcode 7.3 gcc.
  • #140 Fix returning std::move causing build failure on osx.
  • #137 Fix android build script for linux, remove libiconv dependency.
  • Use Nuget packages built with Clang 3.8 (VS 2015 Update3) and Android NDK 11rc. Update built scripts for the same.
  • #150 Add static library for windows xp.
  • #115 Added projects which target v140_xp to resolve Issue#113.
  • #71 Add a project for compiling as a static lib.
  • #102 Added websocket_client_config option for ssl verify mode.
  • #217 Fixed race condition in Casablanca WinRT Websocket client.
  • #131 Update to include access control allow origin.
  • #156 add host based connection pool map on non windows http_clients.
  • #161 Header parsing assumes whitespace after colon.
  • #146 Fix ambiguous reference to ‘credentials’
  • #149 Some perf improvements for uri related code.
  • #86 Fix obtaining raw string_t pointer from temporary.
  • #96 Fix typo hexidecimal/hexadecimal.
  • #116 Fixing latin1 to UTF-16 conversion.
  • #47 Fixing .then to work with movable-only types.

As always, we trust the community to inform our next steps. Let us know what you need and how we can improve Casablanca by creating an issue or a pull request on GitHub.

10/27 Webinar: Managing data and applications just got easier with PowerApps by James Oleinik

MSDN Blogs - 1 hour 54 min ago

Back by popular demand, in next week's webinar James Oleinik will show how creating and managing PowerApps applications just got easier. He will introduce some exciting new enhancements that make your applications both easier to manage and more performant, drill into how they can simplify your lifecycle management, and walk through the new PowerApps administration experience that will make managing your PowerApps development efforts a breeze.

When: October 27, 2016, 10:00 AM – 11:00 AM


About the presenter:

James Oleinik

I’m a PM on the Microsoft PowerApps team and will be presenting. Check out the PowerApps preview today:

Introducing Microsoft Certification Badges

MSDN Blogs - 5 hours 4 min ago

Beginning October 21, 2016, Microsoft Certified Professionals (MCPs) who achieve certain certifications or pass select exams will be awarded Microsoft Certification badges. These web-enabled badges are trusted and verifiable representations of the Microsoft Certifications you’ve achieved. They allow you to easily share the details of your skills with employers or clients, and broadly across social networks. To find out which exams and certifications are included during the launch phase, and for detailed instructions on claiming your badge, please visit the Microsoft Certification Badge webpage.

Build new web and mobile solutions for your customers with Microsoft PowerApps and Microsoft Flow

MSDN Blogs - 6 hours 10 min ago

To complement the upcoming launch of Dynamics 365, including PowerApps and Flow, we are happy to announce a free Partner Roadshow with instructors from the PowerApps and Flow product team.  Please use the attached Partner Invitation to share with your Partner community!


Dates and Locations:

  • Chevy Chase, MD – December 5, 2016
  • Boston, MA – December 7, 2016
  • Irvine, CA – December 7, 2016
  • San Francisco, CA – December 8, 2016
  • Bellevue, WA – December 9, 2016
  • New York, NY – December 9, 2016
  • Minneapolis, MN – January 16, 2017
  • Dallas, TX – January 17, 2017
  • Chicago, IL – January 18, 2017




Training Topics:

  • Overview of Microsoft PowerApps & Microsoft Flow
  • SharePoint Online Integration
  • Common Data Model
  • Hands-on Exercises
  • Product Roadmap
  • Partner Solutions and Business Opportunities
  • Question and Answer Session



Who should attend: Technical consultants, application consultants, presales, and similar roles. Also open to marketing and sales roles.

What’s the Difference Between an Azure Security Center Alert and an MSRC Notification?

MSDN Blogs - 6 hours 17 min ago

This week someone wrote to me and asked about an email he received from Microsoft regarding a possible security incident.

Of course, since I always have Azure Security Center in mind, my first question was “did the email come from an Azure Security Center alert?”

The reason why I asked about it being an alert is that it’s possible to configure Azure Security Center to forward alerts to one or more email addresses.

You can see how to do that in the figure below. Just click Email notifications in the Policy components section. That will bring up the Email notifications pane, where you enter your Security contact emails (BTW, there’s no practical limit on the number of emails, but don’t enter so many that it looks like spam; not that you would). And although this pane is intended for email notification, we also give you the opportunity to provide a phone number that we can use if we need to contact you about high security alerts.

Also note in the Send me emails section that you can turn off/on Send me emails about alerts (currently in Preview) and Send email also to subscription owners (also in Preview). Note that “me” can actually be many “me’s” (but not mini-me’s), because the alert will go to all email addresses listed in the Security contact emails text box.

When you do get an alert, you can check it out as seen in the figure below. Just for fun (since we’re here), I highlighted an interesting alert that was generated because of a modified system binary that was found by a crash dump analysis performed by Azure Security Center.


When we look at the details of the alert, we see that Azure Security Center detected an image mismatch on a module loaded in memory during the analysis of a crash dump, and that the process name was lync.exe. If the presence of this module is unexpected, it may indicate a system compromise; it isn’t good to have this kind of mismatch on what would otherwise be a trusted application. Notice that we provide some remediation steps too.

Anyhow, back to the story.

The email my friend received wasn’t generated by Azure Security Center; it came directly from the Microsoft Security Response Center (MSRC). These emails are different from those you get from Azure Security Center: the Security Center emails are automated, while emails coming from the MSRC are manually driven by the MSRC. The MSRC performs continuous security monitoring of the Azure fabric and receives threat intelligence feeds from multiple sources. When the MSRC determines that your data has been accessed by unauthorized entities (i.e., attackers), or that you’re doing something in Azure that you shouldn’t be doing, an incident response manager will contact you via email (or phone, or maybe even both).

So now you know – there’s a difference between emails you get from Azure Security Center and the MSRC – they’re both important – but if you get one from the MSRC, make sure you read it right away!

BTW – if you want to learn more about Azure Security Center alerts, check out Managing and Responding to Security Alerts in Azure Security Center.



Tom Shinder
Program Manager, Azure Security
@tshinder | Facebook | LinkedIn | Email | Web | Bing me! | GOOG me

Clippers basketball is back and battling in NWAC West

SPSCC Posts & Announcements - 6 hours 53 min ago

Clippers Athletics today announced season outlooks for its Men’s and Women’s Basketball teams for the 2016-17 year. Last season, both teams achieved their best-ever finishes in league standings, and several student athletes were honored for their achievements on and off the court. Coaches Landon (men’s team) and Moore (women’s team) radiated excitement, high expectations, and confidence in their balanced teams of Clippers rookies and returners.


C++ / VBA – How to send a COM object from VBA to a C++ DLL via PInvoke

MSDN Blogs - 7 hours 37 min ago

Today I would like to present a quite uncommon scenario, which involves requesting a COM object from VBA and forwarding it through PInvoke to another C++ DLL.

The puzzling part is that if we work with managed COM DLLs, everything runs properly, but if we’re using C++ DLLs, Office will crash with an Access Violation!

Here’s some background info about the components involved …

COM object originates from a simple C++ COM DLL

This is the interface from which VBA gets its COM object. I created it based on an MSDN sample: In-process C++ COM server (CppDllComServer).

Besides the usual COM interfaces (IUnknown, IDispatch), it exposes an interface named ISimpleObject and a COM class that implements it.

/****************************** Module Header ******************************
Module Name:  SimpleObject.h
Project:      CppDllCOMServer
Copyright (c) Microsoft Corporation.

This source is subject to the Microsoft Public License.
All other rights reserved.

THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND,
EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.
***************************************************************************/

#pragma once

#include <windows.h>   // the include target was lost in the original formatting; windows.h is assumed
#include "CppDllCOMServer_h.h"

class SimpleObject : public ISimpleObject
{
public:
    // IUnknown
    IFACEMETHODIMP QueryInterface(REFIID riid, void **ppv);
    IFACEMETHODIMP_(ULONG) AddRef();
    IFACEMETHODIMP_(ULONG) Release();

    // IDispatch
    IFACEMETHODIMP GetTypeInfoCount(UINT *pctinfo);
    IFACEMETHODIMP GetTypeInfo(UINT itinfo, LCID lcid, ITypeInfo **pptinfo);
    IFACEMETHODIMP GetIDsOfNames(REFIID riid, LPOLESTR *rgszNames, UINT cNames, LCID lcid, DISPID *rgdispid);
    IFACEMETHODIMP Invoke(DISPID dispidMember, REFIID riid, LCID lcid, WORD wFlags, DISPPARAMS *pdispParams, VARIANT *pvarResult, EXCEPINFO *pExcepInfo, UINT *puArgErr);

    // ISimpleObject
    IFACEMETHODIMP get_FloatProperty(FLOAT *pVal);
    IFACEMETHODIMP put_FloatProperty(FLOAT newVal);
    IFACEMETHODIMP HelloWorld(BSTR *pRet);
    IFACEMETHODIMP GetProcessThreadID(LONG *pdwProcessId, LONG *pdwThreadId);

    SimpleObject();



VBA accesses the COM DLL via Tools > References

Sub test()
    Dim comObj As CppDllCOMServerLib.SimpleObject
    Set comObj = New SimpleObject
    Debug.Print comObj.HelloWorld
End Sub

Finally, VBA forwards the COM Object to a different C++ DLL via PInvoke

It’s just a simple C++ DLL which exposes a function that takes in an ISimpleObject COM pointer.


#include <windows.h>   // the original include targets were lost in formatting; windows.h is assumed

namespace SimplePInvokeDLL
{
    #import "C:\..\C++ COM DLLDebug\CppDllCOMServer.dll"
    using namespace CppDllCOMServerLib;

    extern "C"
    {
        __declspec(dllexport) int __stdcall API_DummyCOMCall(ISimpleObject* iso);

        // Returns random number
        __declspec(dllexport) long API_DummyCall();
    }
}


#include "stdafx.h" #include "Pinvoke_CppDllCOMServer.h" #include <stdlib.h> #include <time.h> using namespace std; namespace SimplePInvokeDLL { __declspec(dllexport) long SimplePInvokeDLL::API_DummyCall() { /* initialize random seed: */ srand(GetTickCount()); /* generate a random number*/ return rand(); } __declspec(dllexport) int __stdcall API_DummyCOMCall(ISimpleObject* iso) { //_bstr_t Class /* A _bstr_t object encapsulates the BSTR data type. The class manages resource allocation and deallocation through function calls to SysAllocString and SysFreeString and other BSTR APIs when appropriate. The _bstr_t class uses reference counting to avoid excessive overhead. */ _bstr_t strComResult = iso->HelloWorld(); _bstr_t strLocal = _bstr_t(L"HelloWorld"); return strComResult == strLocal; } }

As I wrote before, the goal of our exercise is to obtain a COM pointer from the first DLL and forward it to the second DLL via VBA …

Declare Function API_DummyCOMCall Lib "C:...DebugPinvoke_CppDllCOMServer.dll" _
    (simpleObj As CppDllCOMServerLib.SimpleObject) As Integer

Sub test()
    Dim comObj As CppDllCOMServerLib.SimpleObject
    Set comObj = New SimpleObject
    Debug.Print comObj.HelloWorld
    Debug.Print API_DummyCOMCall(comObj)
End Sub


… the first part works, and we’re getting a working COM object from CppDllCOMServerLib:


Now, if we attempt to send this COM object via a PInvoke call…


Let’s attach to the PInvoke DLL and see why we crash

First, we need to restart Excel, and pause the code just before we make the PInvoke call.

Then we have to open the Visual Studio PInvoke C++ DLL project, and attach to the running instance of Excel. You’ll also need to add a breakpoint on the PInvoke function which is used by VBA to send the COM pointer.


Finally, we switch back to Excel and resume executing the code. We’ll soon hit the PInvoke C++ callback function, and if we take a closer look at the input parameter, we’ll see it points at an IUnknown VTable.

The only problem here is that this VTable has a couple of NULL pointers inside… now I don’t know enough about COM to say for sure that those bad addresses are the cause, but if we step through the code, we’ll see that when trying to execute the COM call, we end up trying to execute code from address zero, which is not very nice.



My idea was to find a proper way of sending COM pointers from VBA and avoid NULL VTable pointers inside the PInvoke DLL callback function. After asking around, I was told that VBA has a special operator which returns the address of an object: How To Get the Address of Variables in Visual Basic.



ObjPtr takes an object variable name as a parameter and obtains the address of the interface referenced by this object variable. One scenario for using this function is when you need to keep a collection of objects. By indexing an object using its address as the key, you can get faster access to it than by walking the collection and using the Is operator. In many cases, the address of an object is the only reliable thing to use as a key.

Example:

objCollection.Add MyObj1, CStr(ObjPtr(MyObj1))
objCollection.Remove CStr(ObjPtr(MyObj1))



So, we will have to modify our PInvoke declaration and the way we send the COM object, like this. Notice that since ObjPtr returns an address, we need to change the type of the PInvoke method’s input parameter from SimpleObject to LongPtr.

Declare Function API_DummyCOMCall Lib "C:...DebugPinvoke_CppDllCOMServer.dll" _
    (ByVal simpleObj As LongPtr) As Integer

Sub test()
    Dim comObj As CppDllCOMServerLib.SimpleObject
    Set comObj = New SimpleObject
    Debug.Print comObj.HelloWorld
    Debug.Print API_DummyCOMCall(ObjPtr(comObj))
End Sub


What’s that? VBA complains about the calling convention … hmmm.


After some research I found that PInvoke C++ DLL functions which take parameters must use the “__stdcall” calling convention, so that cleaning up the stack is done by the callee. In the process we lose our nicely formatted function names and get decorated names instead …


Let’s fix the VBA code to reflect that change in calling conventions.

Declare Function API_DummyCOMCall Lib "C:...DebugPinvoke_CppDllCOMServer.dll" _
    Alias "_API_DummyCOMCall@4" (ByVal simpleObj As LongPtr) As Integer

Sub test()
    Dim comObj As CppDllCOMServerLib.SimpleObject
    Set comObj = New SimpleObject
    Debug.Print comObj.HelloWorld
    Debug.Print API_DummyCOMCall(ObjPtr(comObj))
End Sub
Success! We can now send COM objects from VBA to C++ DLLs and not crash Office in the process :).

Source code download link: Example – C++ COM DLL send COM pointer from VBA with PInvoke



Thank you for reading my article! If you liked it, please use the rating button.

P.S. I can’t always manage to reply to your comments as fast as I’d like. Just drop me an email at cristib-at-microsoft-dot-com should you need help with understanding or getting something in my blog to work.

DISCLAIMER: Please note that the code I have offered is just a proof of concept and should not be put into production without thorough testing! Microsoft is not responsible if your users lose data because of this programming solution. It’s your responsibility to test it before deployment in your organization. THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE. We grant You a nonexclusive, royalty-free right to use and modify the Sample Code and to reproduce and distribute the object code form of the Sample Code, provided that You agree: (i) to not use Our name, logo, or trademarks to market Your software product in which the Sample Code is embedded; (ii) to include a valid copyright notice on Your software product in which the Sample Code is embedded; and (iii) to indemnify, hold harmless, and defend Us and Our suppliers from and against any claims or lawsuits, including attorneys’ fees, that arise or result from the use or distribution of the Sample Code.

Integrating PolyBase with Cloudera using Active Directory Authentication

MSDN Blogs - 8 hours 56 min ago

This article outlines the steps to use PolyBase in SQL Server 2016 (including R Services) with a Cloudera cluster, and to set up authentication using Active Directory in both SQL Server 2016 and Cloudera.

Prerequisites:

  1. Cloudera Cluster
  2. Active Directory with Domain Controller
  3. SQL Server 2016 with PolyBase and R-Services installed

NOTE: We have tested the configuration using the Cloudera Cluster 5.5 running on CentOS 6.6, SQL Server 2016 running on Windows Server 2012 R2 and Active Directory with Domain Controller running on Windows Server 2012 R2. Other Windows Server and CentOS operating systems might also work in this configuration.

All the prerequisites above must be in the same network and domain, say CORP.CONTOSO.COM. After the prerequisites are completed, we will follow these steps in order:

  1. Connecting SQL to AD
  2. Connecting Cloudera to AD
  3. Connecting PolyBase to Cloudera

Connecting SQL 2016 with AD

Since SQL Server 2016 and the Domain Controller are in the same domain, CORP.CONTOSO.COM, you should be able to create a new login in SQL Server from an existing user in CORP.CONTOSO.COM.
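For example, a minimal T-SQL sketch (the account name below is illustrative, not from the original article):

-- Create a SQL Server login from an existing Active Directory user.
CREATE LOGIN [CORP\SqlUser1] FROM WINDOWS;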

Connecting Cloudera with AD

For all usernames and principals, we will use suffixes like Cluster14 so that names scale across clusters.

Active Directory setup:

  1. Install OpenLDAP utilities (openldap-clients on RHEL/CentOS) on the host of the Cloudera Manager server, and install the Kerberos client (krb5-workstation on RHEL/CentOS) on all hosts of the cluster. This step requires an internet connection on the Hadoop server; if there is no internet connection, you can download the RPMs and install them manually.

     sudo yum -y install openldap-clients krb5-workstation
     sudo yum -y install krb5-workstation

  2. Apply the JCE Unlimited Strength Jurisdiction Policy Files (a minimal shell sketch follows this list). Download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files from Oracle; be sure to download the correct policy file updates for your version of Java 7 or Java 8. Uncompress and extract the downloaded file. The download includes a Readme.txt and two .jar files with the same names as the existing policy files: local_policy.jar and US_export_policy.jar. Look in JAVA_HOME/lib/security/ and replace the existing policy files with the unlimited strength policy files you extracted.
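A minimal shell sketch of step 2, assuming JAVA_HOME is set and the archive was extracted into ./UnlimitedJCEPolicy (the extraction folder name varies by Java version):

# Back up the existing policy jars, then copy in the unlimited strength versions.
# (./UnlimitedJCEPolicy is an assumed extraction folder; adjust for your download.)
cp "$JAVA_HOME/lib/security/local_policy.jar" "$JAVA_HOME/lib/security/local_policy.jar.bak"
cp "$JAVA_HOME/lib/security/US_export_policy.jar" "$JAVA_HOME/lib/security/US_export_policy.jar.bak"
cp ./UnlimitedJCEPolicy/local_policy.jar "$JAVA_HOME/lib/security/"
cp ./UnlimitedJCEPolicy/US_export_policy.jar "$JAVA_HOME/lib/security/"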

We will use the wizard in Cloudera Manager to enable Active Directory Authentication. The 9 steps involved in the “Enable Kerberos” Wizard are provided through the following screenshots (use relevant values for your own cluster and AD):

You can view the credentials generated by Cloudera in Active Directory under OU=Hadoop, OU=CORP, DC=CONTOSO, DC=COM (we gave the prefix “cluster14” in step 2).

Once Kerberos is successfully enabled, let us use kinit to obtain a ticket in the cache and then list the directories in HDFS:

kinit hdfsCluster14@CORP.CONTOSO.COM
hadoop fs -ls /

If the above commands succeed, we have configured AD Authentication for Cloudera!

Create a folder in HDFS for PolyBase tables (say, cdh):

hadoop fs -mkdir /cdh

NOTE: Make sure the RPC protection setting in HDFS (hadoop.rpc.protection) is set to authentication.

Currently there is a known issue where setting this to “integrity” or “privacy” results in failures to connect from PolyBase to HDFS. You will see an error message like the following:

EXTERNAL TABLE access failed due to internal error: 'Java exception raised on call to HdfsBridge_IsDirExist: Error [Failed on local exception: Couldn't setup connection] occurred while accessing external file.'

Connecting PolyBase to Cloudera

Run the following command to confirm that PolyBase has been successfully installed; it returns 1 if PolyBase is installed, and 0 otherwise.

SELECT SERVERPROPERTY ('IsPolybaseInstalled') AS IsPolybaseInstalled;

Run sp_configure (Transact-SQL) with ‘hadoop connectivity’ and set an appropriate value. To find the value, see PolyBase Connectivity Configuration (Transact-SQL).

sp_configure 'hadoop connectivity', 6;
sp_configure 'allow polybase export', 1;
RECONFIGURE;

You must restart SQL Server using services.msc. Restarting SQL Server restarts these services:

  • SQL Server PolyBase Data Movement Service
  • SQL Server PolyBase Engine

In the following location, set the appropriate values in the configuration files from the Cloudera cluster settings:

C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Binn\Polybase\Hadoop\conf

core-site.xml

<property>
  <name>polybase.kerberos.realm</name>
  <value>CORP.CONTOSO.COM</value>
</property>
<property>
  <name>polybase.kerberos.kdchost</name>
  <value>ACTIVEDIRECTORY.CORP.CONTOSO.COM</value>
</property>
<property>
  <name>hadoop.security.authentication</name>  <!-- the property name was missing in the original; this is the standard setting for the KERBEROS value -->
  <value>KERBEROS</value>
</property>

hdfs-site.xml

<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hdfsCluster14/_HOST@CORP.CONTOSO.COM</value>
</property>

mapred-site.xml

<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>mapred/_HOST@CORP.CONTOSO.COM</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value><HOSTNAME and port of YARN JobHistory Server></value>
</property>

yarn-site.xml

<property>
  <name>yarn.application.classpath</name>
  <value>$HADOOP_CLIENT_CONF_DIR,$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>yarnCluster14/_HOST@CORP.CONTOSO.COM</value>
</property>

Now, we are ready to use PolyBase – let’s try creating an external table:

-- 1: Create a master key on the database.
--    This is required to encrypt the credential secret.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Pas5w0rd_';

-- 2: Create a database scoped credential for Kerberos-secured Hadoop clusters.
--    IDENTITY: the Kerberos user name.
--    SECRET:   the Kerberos password.
CREATE DATABASE SCOPED CREDENTIAL myCredObject
WITH IDENTITY = 'myHdfsUser', Secret = 'P455w0rd!#';

-- 3: Create an external data source.
--    LOCATION (required): Hadoop Name Node IP address and port.
--    RESOURCE_MANAGER_LOCATION (optional): Hadoop Resource Manager location to enable pushdown computation.
--    CREDENTIAL (optional): the database scoped credential created above.
CREATE EXTERNAL DATA SOURCE clouderaCluster14 WITH (
    TYPE = HADOOP,
    LOCATION = 'hdfs://CLUSTER14.CORP.CONTOSO.COM:8020',
    RESOURCE_MANAGER_LOCATION = 'CLUSTER14.CORP.CONTOSO.COM:8032',
    CREDENTIAL = myCredObject
);

-- 4: Create an external file format.
CREATE EXTERNAL FILE FORMAT CsvFileFormat WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',', USE_TYPE_DEFAULT = TRUE)
);

-- 5: Create an external table pointing to data stored in Hadoop.
--    LOCATION: path to a file or directory that contains the data (relative to the HDFS root).
CREATE EXTERNAL TABLE [dbo].[CarSensor_Data] (
    [SensorKey] int NOT NULL,
    [CustomerKey] int NOT NULL,
    [GeographyKey] int NULL,
    [Speed] float NOT NULL,
    [YearMeasured] int NOT NULL
)
WITH (
    LOCATION = '/cdh/',
    DATA_SOURCE = clouderaCluster14,
    FILE_FORMAT = CsvFileFormat
);

-- 6: Insert some data into the external table and view the data.
INSERT INTO [dbo].[CarSensor_Data] VALUES (1,1,1,40,2011)
SELECT * FROM [dbo].[CarSensor_Data]

The above data will be stored in CSV format in HDFS; you can browse the /cdh folder in HDFS to find the contents.
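For a quick check from a cluster shell, you can list the folder and print the files PolyBase wrote:

hadoop fs -ls /cdh
hadoop fs -cat /cdh/*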

Now you can work with the table [dbo].[CarSensor_Data] as a normal table in SQL, but the data storage will be in HDFS.

You can also import existing data in HDFS as a SQL table to leverage awesome SQL features like Columnstore Indexes and R Services (In-Database).

Here is a simple example of using rxSummary on the external table [dbo].[CarSensor_Data]:

exec sp_execute_external_script
    @language = N'R',
    @script = N'print(rxSummary(~., InputDataSet))',
    @input_data_1 = N'select * from [dbo].[CarSensor_Data]'

REFERENCES

Enabling Kerberos Authentication Using the Wizard

Create the HDFS Superuser

Direct Active Directory Integration for Kerberos Authentication

Quickly Configure Kerberos for Your Apache Hadoop Cluster

Integrating Cloudera cluster with Active Directory

PolyBase Guide

PolyBase Installation

Get Started with PolyBase

PolyBase Connectivity Configuration

PolyBase Configuration

PolyBase Setup Errors and Possible Solutions

Evergreen joins Jazz Band for Duke Ellington music Dec. 2

SPSCC Posts & Announcements - 8 hours 58 min ago

South Puget Sound Community College (SPSCC) today announced a joint performance by the SPSCC Jazz Band and The Evergreen Singers featuring the music of Duke Ellington and His Orchestra.

The SPSCC Jazz Band is joined by The Evergreen Singers, led by Marla Elliott from The Evergreen State College.


ArcGIS for Server on Microsoft Azure Government

MSDN Blogs - 9 hours 42 min ago

Esri is excited to announce the availability of ArcGIS for Server on the Microsoft Azure Government Cloud. ArcGIS for Server will allow you to deploy leading-edge GIS technology on Microsoft Azure virtual machines.

As the world’s leading enterprise mapping and spatial analytics platform, ArcGIS for Server provides a complete Web GIS environment for mapping and spatial analytics, with ready-to-use maps and apps that can be shared and used by everyone in the organization. ArcGIS for Server dovetails easily with other enterprise systems, including Microsoft Azure SQL, and supports Azure security and compliance standards. Mapping, analysis, and geodata products can be readily used in apps for office and field workers, and for engaging and crowdsourcing communities.

Deploying on Microsoft Azure means you don’t have to maintain hardware infrastructure, and you can create or remove sites based on demand. With this new offering, customers can deploy ArcGIS for Server on Microsoft Azure sites from images on Microsoft Azure Marketplace.

For more information, a few resources are listed below.

We welcome your comments and suggestions to help us continually improve your Azure Government experience. To stay up to date on all things Azure Government, be sure to subscribe to our RSS feed and to receive emails, click “Subscribe by Email!” on the Azure Government Blog. To experience the power of Azure Government for your organization, sign up for an Azure Government Trial.

Happy #Friday Five!

MSDN Blogs - 11 hours 41 min ago

Changing Domain Users’ “User Logon Names” and UPNs

Pete Long is a Technical Consultant working in the North East of England. Previously he’s worked in IT project management and as a consultant for solution providers and channel partners. Pete is an IT pro with 15 years of both infrastructure and networking experience. One week he may be fitting a firewall for a small SMB, and the following week doing major Domain and Exchange migrations for a multi-thousand-seat network.

Follow him on Twitter @PeteNetLive



Microsoft Advanced Threat Analytics (ATA) – Attack Simulation and Demo

Santhosh Sivarajan is a recognized expert in Microsoft technologies. He has extensive experience working on enterprise and cloud security, identity and access management, and privileged access and account management projects and solutions. He is the author of two books, Windows Server Security and Migration from Windows Server 2008 to Windows Server, and has published hundreds of articles on various technology sites. Microsoft has recognized Santhosh with the MVP award multiple times for his exceptional contributions to the technical community.

Follow him on Twitter @Santhosh_Sivara


How to set up Angular 2 and Webpack in Visual Studio with ASP.NET Core

Fabian Gosebrink is a professional software engineer, Microsoft MVP, and Microsoft Technology Ambassador in Switzerland. He is also a Microsoft Certified Specialist in web application development and a regular speaker at Microsoft events in Switzerland. He helps companies and projects build web applications with AngularJS, Angular 2, ASP.NET, ASP.NET Core, and all the build tools around them. Fabian is very into new technologies and helps grow his community by leading the biggest German-speaking C# forum “

Follow him on Twitter @FabianGosebrink


How I use Azure Logic App, API App and Function App in my life

Frank Boucher is a trusted Microsoft Azure professional with over 15 years of experience in the IT industry. He’s leveraged his expertise in Microsoft Azure in a development role at Lixar IT, an Ottawa-based software company. At work, he leads a dedicated team of developers to advance technology in the mobile, air, and telecommunication industries. Outside of work, he is a sought-after speaker, author, and trusted collaborator on Microsoft Azure.

Follow him on Twitter @fboucheros




Channel 9 Video: .NET Conf UY v2016 Event Summary

Fabian Fernandez is CEO & Co-Founder of Kaizen Softworks and Organizer & Co-Founder of .NET Conf UY. The 28-year-old is an Agile practitioner and loves to stay up to date on tech news and startups. In his spare time, he plays guitar and is an extreme sports fanatic. Fabian has been a Microsoft MVP since 2015. He is based in Uruguay.


    Follow him on Twitter @kzfabi


    Is there anything better than GetThreadTimes for obtaining per-thread CPU usage information?

    MSDN Blogs - 11 hours 42 min ago

A customer was using the GetThreadTimes function for high-resolution profiling of performance-sensitive code, but found that its accuracy is rather poor. They were hoping there would be something more along the lines of a QueryPerformanceCounter that reported only CPU time consumed by a particular thread, rather than by the system in general.

Fortunately, there is.

QueryThreadCycleTime gives you the CPU cycles consumed by a particular thread. This includes time spent both in user mode and in kernel mode.

Note, however, that these values are reported directly from the CPU using mechanisms like RDTSC or the performance monitor control register. This means that the actual results are at the mercy of whatever the CPU manufacturer decides the CPU cycle counter means. Maybe they correspond to wall clock time; maybe they don’t.
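As a minimal sketch (my illustration, not from the original post), bracketing a region of interest with the function looks like this; per the caveat above, the result is in CPU-defined cycle units:

#include <windows.h>
#include <stdio.h>

int main()
{
    ULONG64 start = 0, end = 0;

    // Cycles consumed so far by the current thread (user mode + kernel mode).
    QueryThreadCycleTime(GetCurrentThread(), &start);

    // ... the code being profiled goes here ...

    QueryThreadCycleTime(GetCurrentThread(), &end);

    // The difference is in CPU-defined cycle units, not wall-clock time.
    printf("Cycles consumed: %llu\n", end - start);
    return 0;
}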

    Free data sets for Azure Machine Learning

    MSDN Blogs - 15 hours 22 min ago


One of the key things students need when learning how to use Microsoft Azure Machine Learning is access to sample data sets and experiments.

At Microsoft we have made a number of sample data sets available. These data sets are used by the sample models in the Azure Cortana Intelligence Gallery.

Some of these data sets are available in Azure Blob storage, so they can be linked directly to Azure ML experiments, while others are available in CSV format.

For those data sets, the list below provides a direct link.

You can use these data sets in your experiments by using the Import Data module.
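Outside ML Studio, the CSV-format sets can also be pulled down for local exploration. A minimal R sketch (the storage URL below is a hypothetical placeholder, not the real path):

# Hypothetical blob URL; substitute the actual link for the data set you want.
dataUrl <- "https://example.blob.core.windows.net/datasets/network_intrusion_detection.csv"
networkData <- read.csv(dataUrl, header = TRUE, stringsAsFactors = FALSE)
summary(networkData)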

    The rest of these sample data sets are listed under Saved Datasets in the module palette to the left of the experiment canvas when you open or create a new experiment in ML Studio. You can use any of these data sets in your own experiment by dragging it to your experiment canvas.

    Here are some of the FREE Data sets available to use

    Adult Census Income Binary Classification dataset
    A subset of the 1994 Census database, using working adults over the age of 16 with an adjusted income index of > 100.

    Usage: Classify people using demographics to predict whether a person earns over 50K a year.

    Related Research: Kohavi, R., Becker, B., (1996). UCI Machine Learning Repository Irvine, CA: University of California, School of Information and Computer Science

    Airport Codes Dataset
    U.S. airport codes.

    This dataset contains one row for each U.S. airport, providing the airport ID number and name along with the location city and state.

    Automobile price data (Raw)
    Information about automobiles by make and model, including the price, features such as the number of cylinders and MPG, as well as an insurance risk score.

    The risk score is initially associated with auto price and then adjusted for actual risk in a process known to actuaries as symboling. A value of +3 indicates that the auto is risky, and a value of -3 that it is probably pretty safe.

    Usage: Predict the risk score by features, using regression or multivariate classification.

    Related Research: Schlimmer, J.C. (1987). UCI Machine Learning Repository Irvine, CA: University of California, School of Information and Computer Science

    Bike Rental UCI dataset
    UCI Bike Rental dataset that is based on real data from Capital Bikeshare company that maintains a bike rental network in Washington DC.

    The dataset has one row for each hour of each day in 2011 and 2012, for a total of 17,379 rows. The range of hourly bike rentals is from 1 to 977.

    Bill Gates RGB Image
    Publicly-available image file converted to CSV data.

    The code for converting the image is provided in the Color quantization using K-Means clustering model detail page.

    Blood donation data
    A subset of data from the blood donor database of the Blood Transfusion Service Center of Hsin-Chu City, Taiwan.

Donor data includes the months since the last donation (recency), the total number of donations (frequency), the time since the first donation, and the total amount of blood donated.

    Usage: The goal is to predict via classification whether the donor donated blood in March 2007, where 1 indicates a donor during the target period, and 0 a non-donor.

    Related Research: Yeh, I.C., (2008). UCI Machine Learning Repository Irvine, CA: University of California, School of Information and Computer Science

Yeh, I-Cheng, Yang, King-Jang, and Ting, Tao-Ming, “Knowledge discovery on RFM model using Bernoulli sequence,” Expert Systems with Applications, 2008.

    Book Reviews from Amazon
    Reviews of books in Amazon, taken from the website by University of Pennsylvania researchers (sentiment). See the research paper, “Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification” by John Blitzer, Mark Dredze, and Fernando Pereira; Association of Computational Linguistics (ACL), 2007.

    The original dataset has 975K reviews with rankings 1, 2, 3, 4, or 5. The reviews were written in English and are from the time period 1997-2007. This dataset has been down-sampled to 10K reviews.

    Breast cancer data
    One of three cancer-related datasets provided by the Oncology Institute that appears frequently in machine learning literature. Combines diagnostic information with features from laboratory analysis of about 300 tissue samples.

    Usage: Classify the type of cancer, based on 9 attributes, some of which are linear and some are categorical.

    Related Research: Wohlberg, W.H., Street, W.N., & Mangasarian, O.L. (1995). UCI Machine Learning Repository Irvine, CA: University of California, School of Information and Computer Science

    Breast Cancer Features
    The dataset contains information for 102K suspicious regions (candidates) of X-ray images, each described by 117 features. The features are proprietary and their meaning is not revealed by the dataset creators (Siemens Healthcare).

    Breast Cancer Info
The dataset contains additional information for each suspicious region of the X-ray image. Each example provides information (e.g., label, patient ID, coordinates of the patch relative to the whole image) about the corresponding row number in the Breast Cancer Features dataset. Each patient has a number of examples. For patients who have cancer, some examples are positive and some are negative. For patients who don’t have cancer, all examples are negative. The dataset has 102K examples and is biased: 0.6% of the points are positive, the rest are negative. The dataset was made available by Siemens Healthcare.

    CRM Appetency Labels Shared
    Labels from the KDD Cup 2009 customer relationship prediction challenge (orange_small_train_appetency.labels).

    CRM Churn Labels Shared
    Labels from the KDD Cup 2009 customer relationship prediction challenge (orange_small_train_churn.labels).

    CRM Dataset Shared
This data comes from the KDD Cup 2009 customer relationship prediction challenge.

    The dataset contains 50K customers from the French Telecom company Orange. Each customer has 230 anonymized features, 190 of which are numeric and 40 are categorical. The features are very sparse.

    CRM Upselling Labels Shared
    Labels from the KDD Cup 2009 customer relationship prediction challenge (orange_large_train_upselling.labels).

    Energy Efficiency Regression data
A collection of simulated energy profiles, based on 12 different building shapes. The buildings are differentiated by 8 features, such as glazing area, glazing area distribution, and orientation.

Usage: Use either regression or classification to predict the energy efficiency rating, given as one of two real-valued responses. For multi-class classification, round the response variable to the nearest integer.

    Related Research: Xifara, A. & Tsanas, A. (2012). UCI Machine Learning Repository Irvine, CA: University of California, School of Information and Computer Science

    Flight Delays Data
    Passenger flight on-time performance data taken from the TranStats data collection of the U.S. Department of Transportation (On-Time).

    The dataset covers the time period April-October 2013. Before uploading to Azure ML Studio, the dataset was processed as follows:

    • The dataset was filtered to cover only the 70 busiest airports in the continental US
    • Cancelled flights were labeled as delayed by more than 15 minutes
    • Diverted flights were filtered out
    • The following columns were selected: Year, Month, DayofMonth, DayOfWeek, Carrier, OriginAirportID, DestAirportID, CRSDepTime, DepDelay, DepDel15, CRSArrTime, ArrDelay, ArrDel15, Cancelled

    Flight on-time performance (Raw)
Records of airplane flight arrivals and departures within the United States from October 2011.

    Usage: Predict flight delays.

    Related Research: From US Dept. of Transportation

    Forest fires data
    Contains weather data, such as temperature and humidity indices and wind speed, from an area of northeast Portugal, combined with records of forest fires.

    Usage: This is a difficult regression task, where the aim is to predict the burned area of forest fires.

    Related Research: Cortez, P., & Morais, A. (2008). UCI Machine Learning Repository Irvine, CA: University of California, School of Information and Computer Science

    [Cortez and Morais, 2007] P. Cortez and A. Morais. A Data Mining Approach to Predict Forest Fires using Meteorological Data. In J. Neves, M. F. Santos and J. Machado Eds., New Trends in Artificial Intelligence, Proceedings of the 13th EPIA 2007 – Portuguese Conference on Artificial Intelligence, December, Guimarães, Portugal, pp. 512-523, 2007. APPIA, ISBN-13 978-989-95618-0-9. Available at:

    German Credit Card UCI dataset
    The UCI Statlog (German Credit Card) dataset (Statlog+German+Credit+Data), using the file.

    The dataset classifies people, described by a set of attributes, as low or high credit risks. Each example represents a person. There are 20 features, both numerical and categorical, and a binary label (the credit risk value). High credit risk entries have label = 2, low credit risk entries have label = 1. The cost of misclassifying a low risk example as high is 1, whereas the cost of misclassifying a high risk example as low is 5.

    IMDB Movie Titles
    The dataset contains information about movies that were rated in Twitter tweets: IMDB movie ID, movie name and genre, production year. There are 17K movies in the dataset. The dataset was introduced in the paper “S. Dooms, T. De Pessemier and L. Martens. MovieTweetings: a Movie Rating Dataset Collected From Twitter. Workshop on Crowdsourcing and Human Computation for Recommender Systems, CrowdRec at RecSys 2013.”

    Iris two class data
    This is perhaps the best known database to be found in the pattern recognition literature. The data set is relatively small, containing 50 examples each of petal measurements from three iris varieties.

    Usage: Predict the iris type from the measurements.

    Related Research: Fisher, R.A. (1988). UCI Machine Learning Repository Irvine, CA: University of California, School of Information and Computer Science

    Movie Tweets
The dataset is an extended version of the Movie Tweetings dataset. The dataset has 170K ratings for movies, extracted from well-structured tweets on Twitter. Each instance represents a tweet and is a tuple: user ID, IMDB movie ID, rating, timestamp, number of favorites for this tweet, and number of retweets of this tweet. The dataset was made available by A. Said, S. Dooms, B. Loni and D. Tikk for Recommender Systems Challenge 2014.

    MPG data for various automobiles
    This dataset is a slightly modified version of the dataset provided by the StatLib library of Carnegie Mellon University. The dataset was used in the 1983 American Statistical Association Exposition.

The data lists fuel consumption for various automobiles in miles per gallon, along with information such as the number of cylinders, engine displacement, horsepower, total weight, and acceleration.

    Usage: Predict fuel economy based on 3 multivalued discrete attributes and 5 continuous attributes.

    Related Research: StatLib, Carnegie Mellon University, (1993). UCI Machine Learning Repository Irvine, CA: University of California, School of Information and Computer Science

    Pima Indians Diabetes Binary Classification dataset
    A subset of data from the National Institute of Diabetes and Digestive and Kidney Diseases database. The dataset was filtered to focus on female patients of Pima Indian heritage. The data includes medical data such as glucose and insulin levels, as well as lifestyle factors.

    Usage: Predict whether the subject has diabetes (binary classification).

Related Research: Sigillito, V. (1990). UCI Machine Learning Repository. Irvine, CA: University of California, School of Information and Computer Science

    Restaurant customer data
    A set of metadata about customers, including demographics and preferences.

    Usage: Use this dataset, in combination with the other two restaurant data sets, to train and test a recommender system.

    Related Research: Bache, K. and Lichman, M. (2013). UCI Machine Learning Repository Irvine, CA: University of California, School of Information and Computer Science.

    Restaurant feature data
    A set of metadata about restaurants and their features, such as food type, dining style, and location.

    Usage: Use this dataset, in combination with the other two restaurant data sets, to train and test a recommender system.

    Related Research: Bache, K. and Lichman, M. (2013). UCI Machine Learning Repository Irvine, CA: University of California, School of Information and Computer Science.

    Restaurant ratings
    Contains ratings given by users to restaurants on a scale from 0 to 2.

    Usage: Use this dataset, in combination with the other two restaurant data sets, to train and test a recommender system.

    Related Research: Bache, K. and Lichman, M. (2013). UCI Machine Learning Repository Irvine, CA: University of California, School of Information and Computer Science.

    Steel Annealing multi-class dataset
This dataset contains a series of records from steel annealing trials, with the physical attributes (width, thickness, and type, such as coil or sheet) of the resulting steel types.

Usage: Predict either of two numeric class attributes, hardness or strength. You might also analyze correlations among attributes.

    Steel grades follow a set standard, defined by SAE and other organizations. You are looking for a specific ‘grade’ (the class variable) and want to understand the values needed.

    Related Research: Sterling, D. & Buntine, W., (NA). UCI Machine Learning Repository Irvine, CA: University of California, School of Information and Computer Science

    A useful guide to steel grades can be found here:

    Telescope data
    Records of high energy gamma particle bursts along with background noise, both simulated using a Monte Carlo process.

    The intent of the simulation was to improve the accuracy of ground-based atmospheric Cherenkov gamma telescopes, using statistical methods to differentiate between the desired signal (Cherenkov radiation showers) and background noise (hadronic showers initiated by cosmic rays in the upper atmosphere).

The data has been pre-processed to create an elongated cluster with the long axis oriented towards the camera center. The characteristics of this ellipse (often called Hillas parameters) are among the image parameters that can be used for discrimination.

Usage: Predict whether the image of a shower represents signal or background noise.

    Notes: Simple classification accuracy is not meaningful for this data, since classifying a background event as signal is worse than classifying a signal event as background. For comparison of different classifiers the ROC graph should be used. The probability of accepting a background event as signal must be below one of the following thresholds: 0.01 , 0.02 , 0.05 , 0.1 , or 0.2.

    Also, note that the number of background events (h, for hadronic showers) is underestimated, whereas in real measurements, the h or noise class represents the majority of events.

    Related Research: Bock, R.K. (1995). UCI Machine Learning Repository Irvine, CA: University of California, School of Information

    Weather Dataset
    Hourly land-based weather observations from NOAA (merged data from 201304 to 201310).

    The weather data covers observations made from airport weather stations, covering the time period April-October 2013. Before uploading to Azure ML Studio, the dataset was processed as follows:

    • Weather station IDs were mapped to corresponding airport IDs
    • Weather stations not associated with the 70 busiest airports were filtered out
    • The Date column was split into separate Year, Month, and Day columns
    • The following columns were selected: AirportID, Year, Month, Day, Time, TimeZone, SkyCondition, Visibility, WeatherType, DryBulbFarenheit, DryBulbCelsius, WetBulbFarenheit, WetBulbCelsius, DewPointFarenheit, DewPointCelsius, RelativeHumidity, WindSpeed, WindDirection, ValueForWindCharacter, StationPressure, PressureTendency, PressureChange, SeaLevelPressure, RecordType, HourlyPrecip, Altimeter

    Wikipedia SP 500 Dataset
Data is derived from Wikipedia, based on articles about each S&P 500 company, stored as XML data.

    Before uploading to Azure ML Studio, the dataset was processed as follows:

    • Extract text content for each specific company
    • Remove wiki formatting
    • Remove non-alphanumeric characters
    • Convert all text to lowercase
    • Known company categories were added

    Note that for some companies an article could not be found, so the number of records is less than 500.

    Downloadable Data Sets in CSV Format

    The dataset contains customer data and indications about their response to a direct mailing campaign. Each row represents a customer. The dataset contains 9 features about user demographics and past behavior, and 3 label columns (visit, conversion, and spend). Visit is a binary column that indicates that a customer visited after the marketing campaign, conversion indicates a customer purchased something, and spend is the amount that was spent. The dataset was made available by Kevin Hillstrom for MineThatData E-Mail Analytics And Data Mining Challenge.

Features of test examples in the RCV1-V2 Reuters news dataset. The dataset has 781K news articles along with their IDs (first column of the dataset). Each article is tokenized, stopworded, and stemmed. The dataset was made available by David D. Lewis.

Features of training examples in the RCV1-V2 Reuters news dataset. The dataset has 23K news articles along with their IDs (first column of the dataset). Each article is tokenized, stopworded, and stemmed. The dataset was made available by David D. Lewis.

    Dataset from the KDD Cup 1999 Knowledge Discovery and Data Mining Tools Competition (kddcup99.html).

    The dataset was downloaded and stored in Azure Blob storage (network_intrusion_detection.csv) and includes both training and testing datasets. The training dataset has approximately 126K rows and 43 columns, including the labels; 3 columns are part of the label information, and 40 columns, consisting of numeric and string/categorical features, are available for training the model. The test data has approximately 22.5K test examples with the same 43 columns as in the training data.

Topic assignments for news articles in the RCV1-V2 Reuters news dataset. A news article can be assigned to several topics. The format of each row is ” 1″. The dataset contains 2.6M topic assignments. The dataset was made available by David D. Lewis.

    This data comes from the KDD Cup 2010 Student performance evaluation challenge (student performance evaluation). The data used is the Algebra_2008_2009 training set (Stamper, J., Niculescu-Mizil, A., Ritter, S., Gordon, G.J., & Koedinger, K.R. (2010). Algebra I 2008-2009. Challenge data set from KDD Cup 2010 Educational Data Mining Challenge. Find it at downloads.jsp or

    The dataset was downloaded and stored in Azure Blob storage (student_performance.txt) and contains log files from a student tutoring system. The supplied features include problem ID and its brief description, student ID, timestamp, and how many attempts the student made before solving the problem in the right way. The original dataset has 8.9M records, this dataset has been down-sampled to the first 100K rows. The dataset has 23 tab-separated columns of various types: numeric, categorical, and timestamp.

    Patterns & Practices 17 November

    MSDN Blogs - 16 hours 56 min ago

We are repeating last year’s success and have once again invited the Patterns and Practices team from the USA to Norway.

Saga Kino, Oslo, November 17, from 08:30 to 15:30.

This is our chance to spend the day with a couple of the brightest minds on our Patterns and Practices team. We have the pleasure of hosting them in Oslo, and on that occasion we invite architects and developers to a rare event.
Registration and coffee start at 08:30 at Saga Kino, Hall 1.

REGISTRATION – Patterns and Practices Architecture Summit in Oslo

The Microsoft Patterns and Practices (P&P) team was created in 2000 to meet the need for guidance for architects and software developers. P&P is a set of patterns and recommendations, harvested from experience, for designing, developing, implementing, and operating sound programming practices on the Microsoft platform.

Agenda for the day:
    • Modern cloud fundamentals
    • Resiliency guidance
    • Azure reference architecture (IaaS, PaaS, Hybrid Networking, Identity)
    • Microservices architecture

Lunch is served at 12:00 and we finish at 15:30.
Each session contains a great deal of deep technical and architectural insight.
If you are responsible for software development, you should not miss this event.

*The event is free and is organized in connection with the LEAP 2017 conference. P&P will also be an important part of LEAP 2017, so if this sounds interesting, you have a lot to look forward to.
More information about LEAP,

Useful information:

Address: Saga Kino, Hall 1, Stortingsgata 28, 0161 Oslo
Inquiries: Hanne Wulff, by email or by phone at +47 913 17 273

REGISTRATION – Patterns and Practices Architecture Summit in Oslo

Azure Resource Manager and Templates in Practice

    MSDN Blogs - 17 hours 5 min ago

Azure Resource Manager (ARM) has for some time been the primary way to work with compute, data, and platform services in Azure, and it is gradually displacing the classic model. If you use the Azure web portal, you have surely come across the “Resource Manager” vs. “Classic” choice, but beyond each presenting a slightly different form, they do not look very different at first glance. The power of ARM shows fully once you start using deployment templates.

If the Resource Manager concept is foreign to you, go through the official documentation first to get an idea of how ARM works.

A deployment template lets you define a complete Azure infrastructure in JSON format. What is that good for? With this blueprint you can automate deployment to different environments: development, testing, production… Deployment is repeatable and parameterizable at will, so every developer can create their own copy of the entire backend for testing. The template source code can be kept in a source repository and versioned like the rest of the application. And the icing on the cake: you can plug the template into a Continuous Integration (CI) cycle as one of the steps after building and testing the application.

In this article we look at templates in practice and show, based on experience from real projects, how they are created, what to watch out for, and what to take advantage of.

The template size limit is 1 MB after all variables and functions are expanded; the simplest template, however, looks like this:

    { "$schema": "", "contentVersion": "", "parameters": { }, "variables": { }, "resources": [ ], "outputs": { } }

    Je to opravdu soubor JSON rozdělený na klíčové oblasti:

• parameters holds the values (parameters) that come into the template from outside – the desired number of web instances, the name of a Storage Account, or the administrator password of a Linux server. Using parameters is not mandatory, but it is advisable. If you omit them, resources will deploy with the same names in the same locations every time and conflicts are likely.
• variables is where values are composed dynamically so they can be used further down in the template. They are built from parameters, functions, and constants. They are used, for example, to create a unique name from a parameter or to add prefixes/suffixes to names.
• resources is typically the largest section, because it contains the definitions of all the resources to be deployed – web apps, virtual machines, network adapters, databases… everything goes here.
• Finally, in outputs we state which values we want back from the template. These can be generated unique names, connection strings, or server addresses.

Although the template is a JSON text file, it does not have to be static – using expressions enclosed in square brackets [ ] you can fill in values at runtime based on context. Expressions can appear anywhere, the only condition being that they must return a valid JSON value.

Expressions can contain predefined functions. Frequently used ones include:

• parameters() – [parameters('webTier')] – Returns the value of the "webTier" parameter.
• variables() – [variables('saltedPass')] – Returns the value of the "saltedPass" variable.
• concat() – [concat('ABCD', base64(parameters('password')))] – Joins the string "ABCD" with the value of the "password" parameter encoded as Base64.
• resourceGroup() – [resourceGroup().location] – Returns the "location" value from the JSON object of the Resource Group currently being deployed to (e.g. "North Europe").
• uniqueString() – [uniqueString(subscription().subscriptionId)] – Returns a 13-character hash, which is not globally unique. The argument determines the scope of uniqueness – in this case unique within the subscription. With [uniqueString(resourceGroup().id)] the 13-character hash is unique within the Resource Group.
• reference() – [reference(concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountName')), '2016-01-01').primaryEndpoints.blob] – Returns the primary URI for Blob Storage. reference() obtains its value at runtime, so it cannot be used in variables, but it can be used in outputs. It is not evaluated until the resource exists.

Function and parameter names are not case-sensitive.

To get an idea of what a complete simple template looks like, you can browse GitHub and look at one of the examples. Two files are often used for the definition: azuredeploy.json and azuredeploy.parameters.json (these names are customary, but they can be anything).

• azuredeploy.json contains the template itself – parameter declarations, definitions of variables and resources, etc. On its own it is enough for a deployment, as long as you supply the parameter values.
• azuredeploy.parameters.json contains the parameter values that go into the template, so you do not have to fill them in by hand for every deployment. When deploying from the command line, it is passed as an argument, as sketched below.
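For illustration, this is how the parameters file is typically passed when deploying with Azure PowerShell (a minimal sketch, assuming the AzureRM module; the resource group name and file paths are placeholders):

# Deploy the template, taking parameter values from the parameters file.
New-AzureRmResourceGroupDeployment -ResourceGroupName "MyResourceGroup" `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json

Values passed directly on the command line take precedence over those in the parameters file.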

Working with templates in Visual Studio is made easier by code completion and formatting; an Azure Resource Group project template and the JSON Outline pane are also available.


There are several ways to start building templates:

1. Create a new Azure Resource Group project in Visual Studio and gradually assemble everything you need through the UI.
2. Find a ready-made template on GitHub that matches your needs and adapt it, then add further resources to it.
3. Build the whole infrastructure in Azure first and then run the Automation Script command on the Resource Group, which generates a complete template reflecting exactly the resources you are currently running.

In our case we chose the third option, because the environment grew gradually during development. The resulting JSON looks like a complete template and is even parameterized to some extent. Don't be fooled, though – edits will be needed.
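If you prefer scripting, the same export that the Automation Script button performs should also be achievable from PowerShell (a sketch, assuming the AzureRM module; the resource group name and output path are placeholders):

# Generate a template from an existing resource group, like the portal's Automation Script.
Export-AzureRmResourceGroup -ResourceGroupName "MyResourceGroup" -Path .\exported-template.json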

Editing the template

Names and parameters

The first thing to focus on is names. At the start it is good to decide which ones you will generate automatically and which will be chosen as template input. Azure typically generates an individual parameter for every component you deploy.

We went the opposite way and reduced name entry to a single parameter: nameRoot. The rest is generated automatically. Because the resource names are dynamic, they live in the variables section:

    "variables": { "backendName": "[concat(parameters('nameRoot'), '-backend', uniqueString(subscription().subscriptionId))]", "hostingPlanName": "[concat(parameters('nameRoot'), '-plan')]", "documentDbName": "[concat(toLower(parameters('nameRoot')), uniqueString(subscription().subscriptionId))]", "storageAccountName": "[concat(toLower(parameters('nameRoot')), uniqueString(subscription().subscriptionId))]", "iotHubName": "[concat(parameters('nameRoot'), '-hub', uniqueString(subscription().subscriptionId))]", "serviceBusNamespaceName": "[concat(parameters('nameRoot'), '-ns', uniqueString(subscription().subscriptionId))]", "faceApiName": "[concat(parameters('nameRoot'), '-face')]", "emotionApiName": "[concat(parameters('nameRoot'), '-emotion')]" }

The way they are composed is mostly the same – the concat() function joins the text entered in the nameRoot parameter with a suffix based on the resource type, and in some cases the uniqueString() function appends a hash that is unique to the given account (thanks to subscription().subscriptionId).

We append the unique string for those services whose name will form a URI. For Service Bus, for example, the result for nameRoot = "jmeno" looks like this:


The template is meant for developers, so we were not entirely strict about enforcing input constraints. In any case, it is good to keep the following rules in mind, especially when debugging:

• Storage Account – unique name: yes – 3-24 characters – digits and lowercase letters only
• Web App – unique name: yes – 2-60 characters – digits, letters, hyphen
• DocumentDB – unique name: yes – 3-50 characters – digits, lowercase letters, hyphen
• IoT Hub – unique name: yes – 3-50 characters – digits, lowercase letters, hyphen
• Service Bus – unique name: yes – 6-50 characters – digits, letters, hyphen; must start with a letter and end with a letter or digit

Some resources have names that must remain constant. Our application, for example, uses Service Bus with a queue named "Events". Because an automated job refers to it, we wanted to keep this name, so we removed the parameter and entered it as plain text.

Default values

Once the names are sorted out and the parameters reduced, it is time to clean up the individual resource definitions. In the generated template, Azure tends to state the values of various properties very explicitly, even though they often match the defaults or come from the runtime environment and are irrelevant to the definition. For Service Bus, for example:

    "properties": { "provisioningState": "Succeeded", "status": "Active", "createdAt": "2016-09-22T09:59:23.153Z", "serviceBusEndpoint": "[concat('https://', parameters('namespaces_sbus_name'),'')]", "enabled": true, "updatedAt": "2016-09-22T09:59:48.983Z" }

All of these values can be removed from the template.


Dependencies

Alongside cleaning up the template, you can also review dependencies. Some resources depend on other resources existing first. For example:

• A Service Bus Queue cannot be created until the Service Bus Namespace exists.
• A Virtual Machine cannot be created until its network adapter and Storage Account exist.
• A Web App cannot be created until the App Service Plan exists.
• We placed the queue as a nested resource under the Service Bus namespace and additionally declared in its dependsOn section that it must not be created before the Service Bus itself:

      "type": "Microsoft.ServiceBus/namespaces", "sku": { "name": "Basic", "tier": "Basic" }, "kind": "Messaging", "name": "[variables('serviceBusNamespaceName')]", "apiVersion": "2015-08-01", "location": "[resourceGroup().location]", "tags": {}, "resources": [ { "type": "queues", "name": "Events", "apiVersion": "2015-08-01", "properties": {}, "resources": [], "dependsOn": [ "[resourceId('Microsoft.ServiceBus/namespaces', variables('serviceBusNamespaceName'))]" ] }

If you omit the correct dependencies, Azure will alert you with an error message.


Outputs

The goal of the template was to prepare a complete environment for deploying and running the application without manual intervention. Equally important are the connection strings and keys for the individual services, which must be added to the configuration before the code is deployed. We therefore prepared a set of outputs, so there is no need to go through the configuration and copy keys from there.

Obtaining them is trivial in most cases and comes down to calling the listKeys() function, but some values have to be composed and derived in more involved ways.

• Storage Connection String is used in C# code to connect to Azure Storage. The value is composed of two pieces: the storageAccountName variable and the first key returned by the newly created Storage Account. We use the concat() and listKeys() functions:

  "StorageConnectionString": {
    "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2016-01-01').keys[0].value)]",
    "type": "string"
  },
• Storage Keys lists both Azure Storage keys as a JSON object:

  "StorageKeys": {
    "value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2016-01-01')]",
    "type": "object"
  },
• Service Bus Keys is again a JSON object, in which we find the connection string. We access it directly via the name RootManageSharedAccessKey, because it is constant and we set it in an earlier part of the template:

  "ServiceBusKeys": {
    "value": "[listKeys(resourceId('Microsoft.ServiceBus/namespaces/authorizationRules', variables('serviceBusNamespaceName'), 'RootManageSharedAccessKey'), '2015-08-01')]",
    "type": "object"
  },
• IoT Hub Keys works the same way as Service Bus; again we use the key name:

  "IotHubKeys": {
    "value": "[listKeys(resourceId('Microsoft.Devices/IotHubs/Iothubkeys', variables('iotHubName'), 'iothubowner'), '2016-02-03')]",
    "type": "object"
  },
• DocumentDB Endpoint is created simply by inserting the name of the DocumentDB account into the well-known URL:

  "DocumentDbEndpoint": {
    "value": "[concat('https://', variables('documentDbName'), '')]",
    "type": "string"
  },
• The remaining keys are again obtained with the listKeys() function:

  "DocumentDbKeys": {
    "value": "[listKeys(resourceId('Microsoft.DocumentDB/databaseAccounts', variables('documentDbName')), '2015-04-08')]",
    "type": "object"
  },
  "FaceApiKeys": {
    "value": "[listKeys(resourceId('Microsoft.CognitiveServices/accounts', variables('faceApiName')), '2016-02-01-preview')]",
    "type": "object"
  },
  "EmotionApiKeys": {
    "value": "[listKeys(resourceId('Microsoft.CognitiveServices/accounts', variables('emotionApiName')), '2016-02-01-preview')]",
    "type": "object"
  }

Deployment

You can deploy the template using PowerShell, the command-line tools (CLI), Visual Studio Team Services, or the Microsoft Azure web portal. For development and iterative trial-and-error, the portal proved to be a good fit.
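Before the portal walkthrough below, it is worth noting that the template can also be validated from PowerShell before deploying (a minimal sketch, assuming the AzureRM module; the resource group name, location, and file paths are placeholders, and the actual deployment then uses New-AzureRmResourceGroupDeployment as shown earlier):

# Create the target resource group and dry-run the template against it first.
New-AzureRmResourceGroup -Name "MyResourceGroup" -Location "North Europe"
Test-AzureRmResourceGroupDeployment -ResourceGroupName "MyResourceGroup" `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json

In the portal, the process looks like this: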

1. On the main screen (Dashboard), click "New".

2. Search for "Template deployment".

3. Click "Edit".

4. Copy your template from Visual Studio and replace the entire contents of the editor with it.

5. Confirm with "Save".

6. Create a new Resource Group or select an existing one, and check the parameters that were filled in from the template. Finally, agree to the terms and click "Purchase".

7. Azure checks whether the template is valid and alerts you to any problems. For small tweaks and fast iteration it proved useful to make fixes in the web interface first and only then carry them back to Visual Studio. After validation, the deployment itself begins.

8. You can watch the deployment's progress by clicking the notification that appears in the upper right.

9. You will find the outputs in the Resource Group, in the "Overview" section, under "Essentials", via the "Last Deployment" link.

We will cover the other deployment options (the command line and VSTS) in another article.

In closing

The finished template is safely stored in source control, versioned, and ready for repeated deployment. Thanks to the dynamic names there is no risk of conflicts, and any developer on the team can take the template and create their own version of the whole environment for development and testing. The connection details for the individual services are available as outputs, so there is no need to hunt for them in the portal.
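Those outputs can also be pulled from the deployment history with PowerShell instead of the portal (a sketch, assuming the AzureRM module; the resource group name is a placeholder):

# Show the outputs of the most recent deployment in the resource group.
$deployment = Get-AzureRmResourceGroupDeployment -ResourceGroupName "MyResourceGroup" |
    Sort-Object Timestamp -Descending | Select-Object -First 1
$deployment.Outputs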

    Sample code to save as MSMQ messages from outgoing queues to files

    MSDN Blogs - 17 hours 29 min ago

The following code reads messages from outgoing queues and creates a folder structure of the form <domain, server name, or IP>\private$ (or public)\<queue name>, in which it saves the messages as .xml files.

It is based on this sample from CodeProject:

// Requires: using System; using System.IO; using System.Text;
// plus a COM reference to the Microsoft Message Queue Object Library (namespace MSMQ).
private void SaveMessages(string queueFormatName, int numOfMessagesToRead)
{
    MSMQ.MSMQQueueInfo info = new MSMQ.MSMQQueueInfo();
    info.FormatName = queueFormatName;

    // Open the queue with peek access so messages are read without being removed.
    MSMQ.MSMQQueue mq = info.Open((int)MSMQ.MQACCESS.MQ_PEEK_ACCESS, (int)MSMQ.MQSHARE.MQ_DENY_NONE);

    for (int i = 0; i < numOfMessagesToRead; i++)
    {
        object wantDestinationQueue = false;
        object wantBody = true;
        object receiveTimeout = 0;
        object wantConnectorType = false;

        MSMQ.MSMQMessage msg = null;

        try
        {
            // PeekCurrent positions the cursor on the first message; PeekNext advances it.
            if (i == 0)
                msg = mq.PeekCurrent(ref wantDestinationQueue, ref wantBody, ref receiveTimeout, ref wantConnectorType);
            else
                msg = mq.PeekNext(ref wantDestinationQueue, ref wantBody, ref receiveTimeout, ref wantConnectorType);
        }
        catch (Exception ee)
        {
            Console.WriteLine("Peek failed: " + ee.Message);
        }

        if (msg == null)
        {
            Console.WriteLine("Number of Messages to read:" + (numOfMessagesToRead - i).ToString() + " msg is null.");
            continue;
        }

        UTF8Encoding utf8encoding = new UTF8Encoding();

        // Everything after the first ':' in the format name (e.g. "DIRECT=OS:server\private$\queue")
        // becomes the subfolder structure under SavedMessages.
        string queuePathName = queueFormatName.Substring(queueFormatName.IndexOf(":") + 1);

        string path = Directory.GetCurrentDirectory();
        path += "\\SavedMessages\\" + queuePathName;

        if (!Directory.Exists(path))
            Directory.CreateDirectory(path);

        // The 20-byte message Id is a 16-byte queue GUID followed by a 4-byte sequence number.
        byte[] guid = new byte[16];
        byte[] seq = new byte[4];

        Array.Copy((byte[])msg.Id, 0, guid, 0, 16);
        Array.Copy((byte[])msg.Id, 16, seq, 0, 4);

        int seqNumber = BitConverter.ToInt32(seq, 0);
        Guid msgId = new Guid(guid);

        path += "\\" + msgId.ToString() + "_" + seqNumber.ToString() + ".xml";

        try
        {
            // Most bodies arrive as byte arrays; decode them as UTF-8.
            File.WriteAllText(path, utf8encoding.GetString((byte[])msg.Body));
        }
        catch (Exception ee)
        {
            // Fall back to a string body if the byte[] cast fails.
            File.WriteAllText(path, (string)msg.Body);
        }
    }
}

    Best regards,
    WenJun Zhang

    Azure News on Friday (KW42/16)

    MSDN Blogs - 18 hours 23 min ago

This week again brought plenty of news about the Microsoft Azure platform. Here are the details…

Recent news

20.10. Release preview of Microsoft System Center and Application Insights integration
Preview of the integration of System Center Operations Manager (SCOM) with Application Insights

20.10. Data Science with Microsoft SQL Server 2016
Free eBook: Data Science with Microsoft SQL Server 2016

17.10. Azure Data Lake U-SQL October 2016 Updates: Deprecations turn into errors, sampling is live, sharing catalog objects across ADLA accounts, outputting headers and more!
What's new in Azure Data Lake U-SQL in October 2016

14.10. General availability: UltraPerformance Gateway tier for Azure ExpressRoute
Azure ExpressRoute now with Ultra-Performance gateways – five times as fast as High-Performance gateways

14.10. Service Fabric Community Q&A (Oct 20)
Q&A session on Azure Service Fabric on October 20 – just drop in, ask questions, get answers

New videos

19.10. Triggering with Azure Functions
Triggering Azure Functions.

19.10. Azure functions integration
Integrating Azure Functions into larger cloud apps

19.10. Azure App Service Companion
Everything important about Azure App Service Companion in 10 minutes

19.10. Intro Azure Active Directory – Manage Multiple Directories and Tenants
Managing multiple directories and tenants in Azure AD

19.10. Intro Azure Active Directory – Create Users and Groups
Creating users and groups in Azure AD

18.10. Tuesdays with Corey: Website speed with Azure Storage and Azure CDN
Corey Sanders on using Azure Storage and CDN to boost website performance

18.10. Optimizing Hive in Azure HDInsight
Optimizing the performance of Hive queries in HDInsight – the essentials in 15 minutes

14.10. Episode 216: Azure News Recap with Chris and Haishi October 2016
A recap of all the Azure news from the past month

    Code FTW. Sign up for Imagine Cup today!

    MSDN Blogs - Thu, 10/20/2016 - 23:00

    Imagine Cup is all about innovation, transformation and what comes next. Through Imagine Cup, Microsoft provides student developers with the opportunity to mold the world of tomorrow in their eyes by combining their ideas with the power of technology.

As the competition celebrates its 15th anniversary, I want to share some exciting new changes we're making to Microsoft's marquee global student technology competition in the spirit of empowering students to tap into their passion and creativity.

    Microsoft Chief Evangelist Steven “Guggs” Guggenheimer announced that we’ll be doubling the competition’s top prize from $50,000 to $100,000 for the team that is crowned the next Imagine Cup World Champion. Student commitment to Imagine Cup has been incredibly inspiring since the competition’s inception, and we felt strongly that their enthusiasm should be met with equal commitment on our part.

    We wanted to do more, however, to make this 15th anniversary of Imagine Cup special, and so, true to the ever-evolving nature of our industry, we’re making a few more innovations to Imagine Cup.

    First, as cloud technologies become more and more pervasive, we feel it’s paramount that Imagine Cup help student developers prepare for their futures by building skills using today’s premier cloud platforms. To that end, we highly encourage all Imagine Cup projects to showcase one or more Microsoft Azure services within their solutions.

    Also, we’re changing the way we judge projects. For the 2017 Imagine Cup competition, we’re placing greater emphasis on each project’s underlying technology. What that means is that Imagine Cup will be on the lookout for deep technology integration in fully formed apps, services and games with the potential to be viable in the commercial marketplace.

    Finally, we’ve decided to forego categories in favor of opening the competition up to apps and solutions that might not have neatly fit into one of those categories in past years . . . in other words, all big ideas are welcome in Imagine Cup.

I hope you're as excited about the upcoming Imagine Cup season as I am. The innovations students come up with year in and year out never cease to amaze and inspire me, and I'm eager to see what the next generation of developers comes up with this year!

    Take your first steps into a larger and thrilling world and register for Imagine Cup today!

    Good luck to you all,

    Pablo Veramendi
    Imagine Cup Competition Manager


    Custom Azure RBAC roles and how to extend existing role definition’s scope

    MSDN Blogs - Thu, 10/20/2016 - 21:54

    Azure Role-Based Access Control (RBAC) enables fine-grained access management for Azure resources. Using RBAC, you can grant only the amount of access that users need to perform their jobs. This is considered a best practice for managing resources in Azure.

**Please note: this post includes PowerShell scripts. To learn more about how to use Azure PowerShell, check this post here.

Sometimes you will find that none of the built-in roles meets your specific access needs. That's when you can create a custom role. Custom roles can be created using Azure PowerShell, the Azure Command-Line Interface (CLI), and the REST API. Unfortunately, there is no UI option in the portal at the time of writing.

    Just like built-in roles, custom roles can be assigned to users, groups, and applications at subscription, resource group, and resource scopes. Custom roles are stored in an Azure AD tenant and can be shared across all subscriptions that use that tenant as the Azure AD directory for the subscription.
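To see which custom roles already exist in the tenant, one option is to filter all role definitions by their IsCustom flag (a minimal sketch, assuming the AzureRM PowerShell module):

# List only the custom (non-built-in) role definitions visible from the current subscription.
Get-AzureRmRoleDefinition | Where-Object { $_.IsCustom }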

    To create a custom role, you need to define the role in JSON format.

The definition consists of three main sections:
    • Role identifying information (Name, description …etc.)
    • Included/Excluded actions
    • Scope
Here is an example of a role definition in JSON:

{
  "Name": "Virtual Machine Manager",
  "Description": "Can restart virtual machines that would otherwise be view only.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/restart/action"
  ],
  "NotActions": [ ],
  "AssignableScopes": [
    "/subscriptions/FFBBA206-76CF-42CE-B97E-0CD948BACB69",
    "/subscriptions/FFBBA206-76CF-42CE-B97E-0CD948BACB69"
  ]
}

In the above role definition, we allow the actions of starting and restarting a VM. Note that this custom role will not be useful to the assigned user/group unless that user/group already has read access to VM resources via a different role. We intentionally chose to handle that permission in a separate role.

    The next step is to save the above definition in a file ‘MycustomRole.json’ then execute the following PowerShell command to add the custom role definition:

New-AzureRmRoleDefinition -InputFile C:\MycustomRole.json

Now you can go to the portal and assign this custom role to users or security groups.
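The assignment can be scripted as well; here is a minimal sketch using New-AzureRmRoleAssignment (the sign-in name, role name, and resource group are placeholders):

# Assign the custom role to a user at resource group scope.
New-AzureRmRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Virtual Machine Manager" `
    -ResourceGroupName "MyResourceGroup"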

    To delete an existing custom role, run the following command:

Remove-AzureRmRoleDefinition -Id ROLE_DEFINITION_ID

or

Remove-AzureRmRoleDefinition -Name "ROLE_DEFINITION_Name"

** You need to make sure that no users/groups are assigned to that role before you attempt to delete it, or you will get an error.
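One way to check for remaining assignments before deleting the role is to filter the current assignments by role name (a sketch; the role name is a placeholder):

# Any output here means the role is still assigned and deletion will fail.
Get-AzureRmRoleAssignment | Where-Object { $_.RoleDefinitionName -eq "ROLE_DEFINITION_NAME" }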

Sometimes you create a custom role and later find that you need to extend its scope to include more subscriptions or resource groups that were created after the custom role itself.

To achieve that, follow these steps:

First, load the desired role definition into a variable, either by Name or by ID.

Add the new scopes you would like to include (repeat this step as needed).

Update the role definition in the cloud with the changes you just made (submit the changes):

$role = Get-AzureRmRoleDefinition -Name "ROLE_DEFINITION_NAME"
$role.AssignableScopes.Add("/subscriptions/NEW_SUBSCRIPTION_ID_GOES_HERE")   ## Repeat this step to add all the subscriptions you want to add
Set-AzureRmRoleDefinition -Role $role

Additional resources

    Investigating issues with Visual Studio Team Services – 10/21 – Investigating

    MSDN Blogs - Thu, 10/20/2016 - 18:52

    Initial Update: Friday, 21 October 2016 01:18 UTC

We are actively investigating issues with Visual Studio Team Services where some users whose accounts were imported from Team Foundation Server, or converted from an MSA account to an AAD account, might face access issues with various VSTS artifacts. We have identified a possible fix and are in the process of deploying the hotfix.


    • Next Update: 10/21/2016 03:00 UTC

    We are working to resolve this issue and apologize for any inconvenience.


