
MSDN Blogs

Get the latest information, insights, announcements, and news from Microsoft experts and developers in the MSDN blogs.

A new release of ODBC for Modern Data Stores

Tue, 06/07/2016 - 13:54

 

More than 15 years after the last release, Microsoft is looking at updating the Open Database Connectivity (ODBC) specification.

 

ODBC was first released in September of 1992 as a C-based call-level interface for applications to connect to, describe, query, and update a relational store. Since its introduction, ODBC has become the most widely adopted standard interface for relational data, fostering an ecosystem of first-party data producers as well as third-party vendors that build and sell ODBC drivers for a variety of data sources.

 

ODBC was designed for relational databases conforming to the ISO SQL-92 standard, and has not undergone a significant revision since the release of ODBC 3.5 in 1997. Since that time, not only have relational data sources evolved to support new data types, syntax, and functionality, but a variety of new sources of data have emerged, particularly in the cloud, that don’t conform to the constraints of a relational database.

 

The prevalence of tools, applications, and development environments that consume ODBC has led a number of vendors to create ODBC drivers for cloud-based and non-relational data sources. While this provides connectivity to existing tools, having to flatten the data into normalized relational views loses fidelity, functionality, semantics, and performance compared with a more natural representation of the data.

 

So we started to consider what it would look like to extend ODBC to support modern relational, document-oriented, and other NoSQL stores more naturally. Borrowing and extending syntax and semantics from structured types in SQL-99, we identified a relatively small set of extensions necessary to support first-class representation of these modern data sources, as well as a set of general enhancements to improve the ability for applications to write interoperable code across ODBC drivers.

 

To be part of ODBC, these extensions would have to be backward compatible with existing drivers and applications, and extend ODBC in ways natural and consistent with the existing API.

 

From these efforts a new version of ODBC slowly took shape. We started to discuss these extensions with a few ODBC vendors to flesh out requirements and vet designs, and have reached the point that we’re ready to start talking more broadly about ODBC 4.0 – the first significant update to ODBC in over 15 years.

 

Thoughts/comments/ideas? Please share! And stay tuned here for more updates…

 

Michael Pizzo
Principal Architect, Microsoft Data Group

Java Tools Challenge results

Tue, 06/07/2016 - 13:28

As I mentioned in a previous blog post, we are committed to making VS Team Services a great solution for all developers – for any application on any platform. Three months ago we kicked off the Java Tools Challenge, sponsored by the Visual Studio Partner Program and the Visual Studio Marketplace: a longer-term virtual hackathon designed to raise awareness of our Java solutions and engage the Java community in building working, published tools and apps.

The Challenge was an opportunity for Java developers to explore the extensibility of Visual Studio Team Services as well as make use of some of our solutions for Java developers, such as our Eclipse plugin, cross-platform command line, IntelliJ plugin, our new cross-platform build agent, and more. Participants were challenged to develop either a Visual Studio Team Services (VSTS) extension that helps developers create, test, deploy, and otherwise manage Java apps, or a Java app built using Visual Studio Team Explorer Everywhere (our Eclipse plugin) or the JetBrains IntelliJ plugin.

It is my pleasure to announce the following winners of the Java Tools Challenge!

 

Best Overall: jSnippy, VSTS extension

Prize: $10,000 cash, VSPP Premier Membership, Visual Studio Enterprise 2015 with MSDN (#3), Featured product listing on Visual Studio Marketplace.

jSnippy is a code snippet manager that stores code snippets for your team securely and with an easy-to-search interface. As a team member, you can contribute by creating code snippets in the repository. jSnippy provides an editor to write your code in and includes built-in Java syntax highlighting.

Team quote: “With jSnippy team members can easily search for code snippets using powerful tag and full text search capabilities within their modules. For Java-based Teams, jSnippy also provides an option to import commonly used code snippets on first time load.”

Best Extension: Allure, VSTS extension

Prize: $1,500 cash, Surface Pro 4, Visual Studio Enterprise 2015 with MSDN.

To make tests relevant, you need to know when a failure occurs and what caused it. The report must be in a readable format that helps you diagnose test failures quickly.

 

Team quote: “By creating the Allure VSTS extension, a general approach is used to create an HTML report with test results and failure screenshots intertwined. However, this was easier said than done since there is no proven standard – leaving us to code our own approach based on the test framework we are using. Leveraging a language agnostic open-source library like the Allure Framework, we were able to generate a robust, human-readable HTML report from each of our test runs.”

 

Best Mobile App: WiChat, Mobile App

Prize: $1,500 cash, Surface Pro 4, Visual Studio Enterprise 2015 with MSDN.

WiChat is a messaging application that creates a chat-room to connect nearby users using WiFi Hotspot. Need to message nearby friends in an environment where silence is mandatory? Why spend money on texting when instead you can stay connected via WiChat?

 

Team quote: “The connection is handled by Java’s ServerSocket and Socket Class and VSTS was used to keep track of team progress and enhance the private Git repository experience.”

 

Best App: Molecula Add-in for Outlook, App

Prize: $1,500 cash, Surface Pro 4, Visual Studio Enterprise 2015 with MSDN.

Molecula indexes all emails in your Inbox and groups all the people you communicated with using special criteria. Groups are visualized with cool bubbles—bigger ones represent groups of people you communicate with more frequently. You can start a new thread by clicking a bubble and rearranging the recipients in the To, CC, and BCC fields, and you are done!

 

Team quote: “We used IntelliJ IDEA 15 and the VSTS plugin as our primary IDE. Server-side is built using Java and hosted in Microsoft Azure using Docker containers. On the client side we use JavaScript API for Office, Office UI Fabric as a framework for UI and Microsoft Graph API to access user emails.”

 

Best Runner-up Extension: Documentation (Doxygen), VSTS extension

Prize: $1,500 cash, Surface Pro 4, Visual Studio Enterprise 2015 with MSDN.

VSTS being a great ALM platform and Doxygen being a great documentation tool, I wanted to integrate both and get the benefit of always having up-to-date documentation that can be viewed directly in VSTS projects.

 

Team quote: “Documentation (Doxygen) is a VSTS extension that contains two parts: A Build task to generate documentation as part of Build; and a Documentation hub under Code hub group to render the documentation from the selected Build.”

 

Best Runner-up Mobile App: My Cellars and Tastes, Mobile App

Prize: Oculus Rift, Visual Studio Enterprise 2015 with MSDN.

My Cellars and Tastes is a free app in 6 languages to manage cellars of wines, champagnes, beers, olive oils, etc.

 

Team quote: “Originally, the app was built using Eclipse locally, then I added a local SVN repository and I recently configured a dedicated Visual Studio Team Services account for this challenge to manage my backlog, my code with Git and also automate Build and Release.”

 

Congratulations to all of our Java Tools Challenge participants who submitted their projects, and especially to those who won prizes! And a big thanks to our other judges, Paul Barham, David Staheli, and Brian Benz, as well as DevPost for hosting the Challenge!

 

You can view all of the Java Tools Challenge entries at http://javachallenge.devpost.com/submissions.

 

Brian

Readiness check fails on Application Tier with the error, “Cannot create a binding on port ‘XXX’ because the specified certificate could not be found. Select a different certificate and then re-run the readiness checks”

Tue, 06/07/2016 - 12:17

TFS 2015 Update 2 gives the option of editing the site bindings on IIS as part of the configuration itself. This way, when you upgrade or restore TFS, you get an option to change the various settings, such as the port and virtual directory.

Recently, we came across a unique issue with a customer where TFS 2015.2.1 was configured with SSL. Due to unforeseen circumstances, the server had crashed and the customer was trying to restore TFS back to its former glory. But they ran into an issue at the 'Readiness Checks' step of the Application Tier Only wizard:

“Cannot create a binding on port xxx because the specified certificate could not be found. Select a different certificate and then re-run the readiness checks”

This was preventing the configuration from passing, but there is an option to fix it.
Going back to the Application Tier option on the left, you will notice an 'Edit Site Settings' option.

Clicking it presents several options: you can select the right SSL certificate or remove the binding altogether.

Now, we store the certificate hash in the configuration database, which means TFS will want the same certificate.
If you still have that certificate, you can import it and select it there.

If you do not have the original certificate, you can remove the binding for now, re-configure TFS, and then manually add the binding back. A newly created certificate will have a new hash that will not match the value stored in the tables, so you can also go back and re-select the new certificate to 'update' the settings.

That's it! You can re-run the readiness checks and bring TFS back online!

Cheers!

Content: Manigandan B
Review: Chandra Sekhar Viswanadha

The week in .NET – 06/07/2016

Tue, 06/07/2016 - 11:33

To read last week’s post, see The week in .NET – 5/31/2016.

DotNetConf 7-9 June

Are you ready to rediscover .NET? Well, dotnetConf is back!

Immerse yourself in the world of .NET and join our live stream for three days of free online content, June 7 – 9, featuring speakers from the .NET community and Microsoft product teams. Watch, then ask questions in the live Q&A after each session. The live stream will be broadcast on Channel 9.

There’s never been a better time to be a .NET developer. Learn to develop for web, mobile, desktop, games, services, libraries and more for a variety of platforms and devices all with .NET! We’ll have presentations on .NET Core and ASP.NET Core, C#, F#, Roslyn, Visual Studio, Xamarin, and much more. Take a look at our lineup of great speakers and sessions. We’ll have keynotes from Miguel de Icaza, Scott Hunter, and Scott Hanselman and a lot of great content from our community.

For more information, check out our website and stay tuned to #dotnetconf & @dotnet on Twitter.

See you on the live stream!

On.NET

Last week on the show, we had Lucian Wischik, Program Manager on the Managed Languages team at Microsoft and a concurrency expert.

This week, we speak with Daniel Egloff about using the GPU in .NET.

Package of the week: Markdig

There are quite a few markdown libraries for .NET. Markdig is a fully extensible implementation of the CommonMark standard with excellent performance, both in terms of speed and GC pressure.

var result = Markdown.ToHtml("This is a text with some *emphasis*");

Xamarin App of the week: Expensya

Expensya is a leading expense reporting app in Europe, Africa, and Australia. Expensya is implemented in C# and runs on Azure, so when building the mobile app, they naturally decided to use Xamarin, which allows a high rate of code sharing, reuse of the team's skills, cross-platform and client-server unit tests that validate the business logic end to end, and native-level performance. Thanks to Xamarin, Expensya's mobile apps shipped on the three main platforms in just a few months.

User group meeting of the week: HoloLens Development in Florida

The Florida .NET user group hosts a meeting on HoloLens Development with Unity and .NET on Wednesday, June 8 at 6:30PM at Octagon Technology Staffing@AXIS in Ft Lauderdale, FL.

Game of the Week: McDroid

McDroid is a tower defense game where players take on the role of McDroid, an adorable android who is in charge of protecting the talkative space ship it crash landed with while it repairs the planet. Players build up their base defenses while completing missions that often require harvesting resources and fending off several waves of aliens. Excellent time management becomes key to progressing to the next level. McDroid offers a story campaign, survival levels, and a multiplayer survival arena.

McDroid was created by Grip Digital using Unity and C#. It is currently available on the PlayStation 4, Xbox One and Windows (via Steam). More information can be found on their Made With Unity page.


And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts:

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, on ASP.NET Weekly, and on Chris Alcock’s The Morning Brew.

Windows Docs Updates

Tue, 06/07/2016 - 10:17

Some highlights:

Windows 10 TechNet Guides – Use the Set up School PCs app (Preview)

Tue, 06/07/2016 - 09:30

Doesn’t time fly? It seems like only yesterday that we were all celebrating the launch of Windows 10, but that was almost a year ago! Way back on the official release day of July 29th 2015, it was announced that qualified Windows 7 or Windows 8.1 users could upgrade to Windows 10 for free before July 29th 2016.

Well, that day is now less than two months away(!), and with the summer holidays around the corner there’s no reason why your students shouldn’t return to a fully operational Windows 10 classroom in September.

To help you with the move to Windows 10, there is a handy “Windows 10 Upgrade Guide for Schools” that can be found on our SlideShare page, and also elsewhere on this blog.

In addition to this, there are some other helpful guides in the Windows 10 for Education section of the TechNet blog aimed at anyone seeking to get their classroom technology up to date with Windows 10.

Use the Set up School PCs app (Preview)

Teachers and IT administrators can use the Set up School PCs app to quickly set up computers for students. A computer set up using the app is tailored to provide students with the tools they need for learning while removing apps and features that they don’t need.

The full article covers the following:

  • What does this app do?
  • Tips for success
  • Set up School PCs app step-by-step
  • Create the setup file in the app
  • Apply the setup file to PCs
  • Learn more
Read the full post at TechNet:

Use the Set up School PCs app (Preview)


How to use the new REST API Data Source

Tue, 06/07/2016 - 09:00

Hi everyone!  As you can probably tell, we’re pretty excited about the REST API data source. If you haven’t seen our post about the release on the Building Apps for Windows blog, you can check it out here. Since this new feature is a bit more complicated than the average data source, we thought it would be best if we posted a short little tutorial on how to use it.  For this tutorial, we’ll focus on the most basic use case, which is to pull a bit of information from a REST API that is open and does not require an API key.  Once you understand how to do this, you’ll know enough to jump into creating more complex apps using this REST API data source.

Getting Started

The first step is to know what REST API you want to use and what data you want to get out of it.  Today we’re going to be making an app that displays information from WordPress.  So to get started, open up http://appstudio.windows.com and log into your account.  Once you do that, start a new project and begin with a blank template.  When you’re on the main content editing area, add the REST API data source as a section.

Leave the REST API URL blank since we will use this one for the demo.  However, if you wanted to use your own, just enter the URL in that area and click on the “Check API” button to confirm everything is correct.  If the API you’re using requires the use of headers, you can enter that information below.  When you are ready, click the arrow and move to the next step.

Setting the Root

The next section is called “Root”, and this is where you tell the tool what group of data you want to pull.  Generally, the response contains the content you want plus something often labeled “meta”.  Since we’re looking to pull in posts from WordPress, we’ll select the posts collection, as in the trimmed sample below.
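
For illustration only, here is a hypothetical, trimmed response in the shape a WordPress-style API might return (field names vary by API). The "posts" array is what you would pick as the root, "found" and "meta" are metadata you typically ignore, and each item's "ID" field is the identifier the next step asks about:

// Hypothetical, trimmed response shape; the real fields depend on the API you use.
const sampleResponse = {
  found: 1024,                      // metadata about the result set
  meta: { links: {} },              // more metadata, usually not mapped
  posts: [                          // <-- the "root": the collection you want
    {
      ID: 42,                       // <-- the unique identifier for each item
      title: "Hello world",
      author: { name: "Jane Doe" },
      URL: "https://example.com/hello-world",
      featured_image: "https://example.com/hello.jpg",
      content: "<p>Post body</p>"
    }
  ]
};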

Once you’ve selected the root of the data you want to pull, click the right-facing arrow to move to the next step.

Selecting the Identifier/ID

In this step, the tool needs you to tell it what the identification field is called.  For any object or data you’re pulling, there is a unique ID associated with it that the service uses to keep things organized and operate properly.  Windows App Studio needs to be told what that field is called, since it can vary across different APIs.

In this example, it’s simply called “ID”, so select that and confirm it shows in the top box.  When it does, click on the right-facing arrow to go to the next step.

Mapping the Response

This is the main step, where you map the data from the API to the data objects in your app.  In this step, you should look through the properties and decide what you want to show.  For this example, we’ll select Title, Author Name, URL, Featured Image, and Content.  So browse through the properties to find them, single-click the ones you want, and then define the Column Type for each.

Once you have all the data you want and it’s properly organized and set to the right Column Type, click the right-facing arrow to go to the next step.

Defining Pagination

Pagination is a complicated word that can simply be thought of as “How many records am I pulling per batch?”.  This section will ask you whether the REST API supports pagination.  Some do and some don’t, so look up the details in the given service’s API documentation.  If it doesn’t, skip this step.  If it does, select Yes and new options will appear on the page.  With the new options, you’ll be able to define the pagination type, the pagination parameter name (look this up in the API documentation for the given service, as it will differ across APIs), the initial value, and whether you want to configure the page size.  If you do configure the page size, you will need to enter the Page Size Parameter name as well as the value you want.
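
As a purely illustrative sketch (the endpoint and parameter names below are hypothetical; always take the real ones from the service's API documentation), page-number pagination usually just means appending the pagination and page-size parameters to the request URL:

// Hypothetical example of page-number pagination; parameter names vary by API.
const endpoint = "https://example.com/api/posts";
const paginationParameter = "page";    // pagination parameter name
const initialValue = 1;                // initial value
const pageSizeParameter = "per_page";  // page size parameter name
const pageSize = 20;                   // page size value

const url = `${endpoint}?${paginationParameter}=${initialValue}&${pageSizeParameter}=${pageSize}`;
// e.g. https://example.com/api/posts?page=1&per_page=20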

This step is a bit more confusing than the others, but the good news is that if you leave it with the defaults, things will probably work.  If you’re struggling with this at all, just play around with it and it will likely click and begin to make sense.  You can’t break anything here, so have some fun with it.  When you’re ready to move to the next step, click the right-facing arrow.

Defining the Order

If the API supports it, you can select a custom order for the data you’re pulling.  This step is very similar to the pagination step.  For this, you will look at the API documentation to find the properties for the “Order By” parameter and the “Order Direction” parameter.  When you have those, just enter the values of your choosing.

Once you do that, you’re all done with the configuration.  Click the blue Confirm button and you’ll have a REST API data source in your app.  At this point, you can manage your section bindings, stylings, and everything else just like you would with any other data source.

Conclusion

This is an extremely powerful tool and enables you to bring the wealth of the web’s data into your apps.  We’ll be working on building out this feature over future releases as well, so expect this to become much more robust and guided in the future.  If you have any questions, concerns, or recommendations, we’d love to hear from you, so leave us a comment below or find us on the forums and User Voice.

Thanks for reading and happy app building!

Using OneDrive and Excel APIs in the Microsoft Graph for App Storage

Tue, 06/07/2016 - 08:53

The Microsoft Graph is constantly evolving with new and powerful endpoints. If you want a glimpse into current engineering investments and the future of the Graph, take a look at the latest developments on the beta branch. One of the beta endpoints I’m particularly excited about is the new Excel APIs. The Excel APIs allow you to perform advanced manipulations of Excel remotely via REST calls into the Microsoft Graph. I recently recorded an Office Dev Show on Channel 9 discussing these APIs. I found it incredibly easy to manipulate worksheet data using the Excel APIs. So much so that I thought I would try to use Excel and OneDrive as the data layer for a mobile application. In this post, I’ll illustrate how to perform CRUD operations on worksheet data using the Excel APIs in the Microsoft Graph. I’ll also discuss a few patterns for working with files in OneDrive for Business and provisioning application assets at run time.

NOTE: The sample used in this post is built with Ionic2/Angular2/TypeScript, but the patterns and API end-points apply to any language platform.

Video showcase of solution:

Ensuring App Resources

If OneDrive for Business and Excel will serve as the data layer for my application, I need to ensure the appropriate files and folders are configured each time a user launches the application. I decided to provision a folder specific to my application called “MyExpenses” (the sample application is an expense/receipt tracking application). Taking a pattern from Dropbox, I decided to provision my application folder in an “Apps” folder at the root of my OneDrive. I also need a specific Excel template provisioned in my application folder that will serve as the data store for expense metadata. If you are keeping count, I need to ensure three things are in place when a user launches the application: an “Expenses.xlsx” file in a “MyExpenses” folder that is nested in an “Apps” folder. Below is the TypeScript I use to make this check. Each of the ensure functions checks for existence and creates the item if it does not exist.

ensureConfig ensures all app resources are provisioned:

//ensures all the configuration information is in place for the app
ensureConfig() {
    //ensure all the folders and files are setup
    let helper = this;
    return new Promise((resolve, reject) => {
        let uri = 'https://graph.microsoft.com/v1.0/me/drive/root/children';
        helper.ensureFolder(uri, 'Apps').then(function(appFolderId: string) {
            uri = 'https://graph.microsoft.com/v1.0/me/drive/items/' + appFolderId + '/children';
            helper.ensureFolder(uri, 'MyExpenses').then(function(myExpensesFolderId: string) {
                helper.expensesFolderId = myExpensesFolderId;
                window.localStorage.setItem('CACHE_KEY_FOLDER', myExpensesFolderId);
                helper.ensureWorkbook(myExpensesFolderId).then(function(datasourceId: string) {
                    helper.workbookItemId = datasourceId;
                    window.localStorage.setItem('CACHE_KEY_WORKBOOK', datasourceId);
                    resolve(true);
                }, function(err) { reject(err); });
            }, function(err) { reject(err); });
        }, function(err) { reject(err); });
    });
}

Ensuring Folders

Ensuring folders in OneDrive is relatively easy. Just keep in mind that regardless of how nested a folder is, it can still be accessed directly by id off the drive (ex: drive/items/folder_id). This is nice, especially for complex folder structures.

ensureFolder ensures a folder is provisioned of specific name at specific location:

//ensures a folder of a specific name and path exists ensureFolder(uri: string, folderName: string) { let helper = this; return new Promise((resolve, reject) => { helper.getItems(uri).then(function(response: Array) { //loop through items and look for the folder var folderId = null; for (var i = 0; i < response.length; i++) { if (response[i].name === folderName) { folderId = response[i].id; break; } } //check if folder was found if (folderId != null) { //resolve the folder id resolve(folderId); } else { //create the folder helper.createFolder(uri, folderName).then(function(id: string) { resolve(id); }, function(err) { reject(err); }); } }, function (err) { reject(err); }); }); }

createFolder creates a folder of specific name at specific location:

//creates a folder at the specified URI with the specified name
createFolder(uri: string, name: string) {
    //adds a folder to a specific path and name
    let helper = this;
    return new Promise((resolve, reject) => {
        helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) {
            //configure headers for API call
            let headers = new Headers();
            headers.append('Authorization', 'Bearer ' + token.accessToken);
            headers.append('Content-Type', 'application/json');
            let data = JSON.stringify({ name: name, folder: { } });
            //perform the HTTP POST
            helper.http.post(uri, data, { headers: headers })
                .subscribe(res => {
                    // Check the response status
                    if (res.status === 201)
                        resolve(res.json().id);
                    else
                        reject('Create folder failed');
                });
        }, function(err) {
            reject(err); //error getting token for MS Graph
        });
    });
}

Ensuring Workbook

For provisioning the workbook, I decided to store the Excel workbook template in the Cordova mobile application itself. If it is determined that the Expenses.xlsx file doesn’t exist, the application reads this template and provisions it in OneDrive for Business. The function below illustrates this provisioning. One important note: the Angular2 documentation for http.put indicates that the body can be any object. However, my testing determined it only supports a string body right now, which is not appropriate for the binary content of an upload. For this reason, I’m using an XMLHttpRequest instead. Angular2 is still in beta, so hopefully this will be fixed before the final release.

ensureWorkbook ensures the Expenses.xlsx file exists at a specific location:

//ensures the "Expenses.xslx" file exists in the "MyExpenses" folder specified ensureWorkbook(myExpensesFolderId: string) { //ensure the Excel Workbookexists let helper = this; return new Promise((resolve, reject) => { //check for Expenses.xlsx files helper.getItems('https://graph.microsoft.com/v1.0/me/drive/items/' + myExpensesFolderId + '/children').then(function(response: Array) { //loop through the results and look for the Expenses.xlsx workbook var datasourceId = null; for (var i = 0; i < response.length; i++) { if (response[i].name === 'Expenses.xlsx') { datasourceId = response[i].id; break; } } //check if workbook was found if (datasourceId != null) { //resolve the id resolve(datasourceId); } else { //create the files helper.createWorkbook(myExpensesFolderId).then(function(datasourceId: String) { resolve(datasourceId); }, function(err) { reject(err); }); } }, function (err) { reject(err); }); }); }

createWorkbook provisions the Expenses.xlsx file when it does not exist in OneDrive:

//creates the "Expenses.xslx" workbook in the "MyExpenses" folder specified createWorkbook(folderId: string) { //adds a the workbook to OneDrive let helper = this; return new Promise((resolve, reject) => { //get token for resource helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) { //reference the Excel document template at the root application www directory window.resolveLocalFileSystemURL(cordova.file.applicationDirectory + 'www/Expenses.xlsx', function (fileEntry) { fileEntry.file(function (file) { //open the file with a FileReader var reader = new FileReader(); reader.onloadend = function(evt: ProgressEvent) { //read base64 file and convert to binary let base64 = evt.target.result; base64 = base64.substring(base64.indexOf(',') + 1); //perform the PUT helper.uploadFile(base64, 'Expenses.xlsx').then(function(id: string) { resolve(id); }, function(err) { reject(err); }); }; //catch read errors reader.onerror = function(err) { reject('Error loading file'); }; //read the file as an ArrayBuffer reader.readAsDataURL(file); }, function(err) { reject('Error opening file'); }); }, function(err) { reject('Error resolving file on file system'); }); }, function(err) { reject(err); //error getting token for MS Graph }); }); }

 

CRUD Operations with Excel

CRUD operations with the Excel APIs are relatively easy if you understand the data format Excel expects (multi-dimensional array of values). Retrieving data is accomplished by performing a GET on the rows of a specific table (ex: /drive/items/workbook_id/workbook/worksheets(‘worksheet_id’)/tables(‘table_id’)/rows)

getRows function retrieves rows from the Excel workbook:

//gets rows from the Expenses.xslx workbook getRows() { let helper = this; return new Promise((resolve, reject) => { helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) { helper.http.get('https://graph.microsoft.com/beta/me/drive/items/' + helper.workbookItemId + '/workbook/worksheets('Sheet1')/tables('Table1')/rows', { headers: new Headers({ 'Authorization': 'Bearer ' + token.accessToken }) }) .subscribe(res => { // Check the response status before trying to resolve if (res.status === 200) resolve(res.json().value); else reject('Get rows failed'); }); }, function(err) { reject(err); //error getting token for MS Graph }); }); }

Adding data to the worksheet uses the exact same endpoint as above, but with a POST. Also, the row data to add must be included in the body of the POST and formatted as a multi-dimensional array (ex: { “values”: [[“col1Value”, “col2Value”, “col3Value”]]}).

addRow function adds a row to the Excel workbook:

//adds a row to the Excel datasource addRow(rowData: any) { let helper = this; return new Promise((resolve, reject) => { helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) { helper.http.post('https://graph.microsoft.com/beta/me/drive/items/' + helper.workbookItemId + '/workbook/worksheets('Sheet1')/tables('Table1')/rows', JSON.stringify(rowData), { headers: new Headers({ 'Authorization': 'Bearer ' + token.accessToken }) }) .subscribe(res => { // Check the response status before trying to resolve if (res.status === 201) resolve(); else reject('Get rows failed'); }); }, function(err) { reject(err); //error getting token for MS Graph }); }); }

Updating a row in a worksheet is a little different than you might expect. Instead of referencing the table in Excel, you PATCH a specified range with new values (in the same multi-dimensional array format as the add). Because updates (and deletes, as you will see later) are performed against ranges, it is important to keep track of the row index of the data you work with.

updateRow function updates a row in the Excel workbook (via Range):

//updates a row in the Excel datasource updateRow(index:number, rowData:any) { let helper = this; return new Promise((resolve, reject) => { helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) { let address = 'Sheet1!A' + (index + 2) + ':D' + (index + 2); helper.http.patch('https://graph.microsoft.com/beta/me/drive/items/' + helper.workbookItemId + '/workbook/worksheets('Sheet1')/range(address='' + address + '')', JSON.stringify(rowData), { headers: new Headers({ 'Authorization': 'Bearer ' + token.accessToken }) }) .subscribe(res => { // Check the response status before trying to resolve if (res.status === 200) resolve(); else reject('Get rows failed'); }); }, function(err) { reject(err); //error getting token for MS Graph }); }); }

Deleting a row is accomplished by performing a POST to a specific range with /delete tacked on to the end. You can also include an instruction in the body telling Excel how to treat the delete. For CRUD operations, we want deleting to shift rows up, so {“shift”: “Up”} is in the body of the POST.

deleteRow function deletes a row in the Excel workbook (via Range):

//deletes a row in the Excel datasource
deleteRow(index: number) {
    let helper = this;
    return new Promise((resolve, reject) => {
        helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) {
            let address = 'Sheet1!A' + (index + 2) + ':D' + (index + 2);
            helper.http.post('https://graph.microsoft.com/beta/me/drive/items/' + helper.workbookItemId + '/workbook/worksheets(\'Sheet1\')/range(address=\'' + address + '\')/delete', JSON.stringify({ 'shift': 'Up' }), {
                headers: new Headers({ 'Authorization': 'Bearer ' + token.accessToken })
            })
                .subscribe(res => {
                    // Check the response status before trying to resolve
                    if (res.status === 204)
                        resolve();
                    else
                        reject('Delete row failed');
                });
        }, function(err) {
            reject(err); //error getting token for MS Graph
        });
    });
}

CRUD Operations with OneDrive

Nothing too special about working with files and OneDrive. However, I found the OneDrive documentation to be a little unrealistic and unhelpful, as it shows operations on text files rather than binary data. Binary data (e.g. images and Office documents) introduces additional complexity (especially for a client-side application), so I thought I would document some of the utilities I wrote to work with files. Thanks to Waldek Mastykarz, Stefan Bauer, and Sahil Malik for their advice on upload; it wasn’t working at first (it turned out to be an Angular2 bug), and they were very helpful. The getBinaryFileContents function is directly from Waldek’s blog HERE.

uploadFile uploads a binary file to a specific location in OneDrive using PUT

//uploads a file to the MyExpenses folder uploadFile(base64: string, name: string) { //adds a folder to a specific path and name let helper = this; return new Promise((resolve, reject) => { helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) { //convert base64 string to binary let binary = helper.getBinaryFileContents(base64); //prepare the request let req = new XMLHttpRequest(); req.open('PUT', 'https://graph.microsoft.com/v1.0/me/drive/items/' + helper.expensesFolderId + '/children/' + name + '/content', false); req.setRequestHeader('Content-type', 'application/octet-stream'); req.setRequestHeader('Content-length', binary.length.toString()); req.setRequestHeader('Authorization', 'Bearer ' + token.accessToken); req.setRequestHeader('Accept', 'application/json;odata.metadata=full'); req.send(binary); //check response if (req.status === 201) resolve(JSON.parse(req.responseText).id); //resolve id of new file else reject('Failed to upload file'); }, function(err) { reject(err); //error getting token for MS Graph }); }); }

deleteFile deletes a specific file using DELETE

//deletes a file from OneDrive for business deleteFile(id:string) { let helper = this; return new Promise((resolve, reject) => { helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) { helper.http.delete('https://graph.microsoft.com/beta/me/drive/items/' + id, { headers: new Headers({ 'Authorization': 'Bearer ' + token.accessToken }) }) .subscribe(res => { // Check the response status before trying to resolve if (res.status === 204) resolve(); else reject('Delete row failed'); }); }, function(err) { reject(err); //error getting token for MS Graph }); }); }

For images, I decided to take advantage of another beta API in the Microsoft Graph: thumbnails. The thumbnails API allows you to download smaller renditions of an image in OneDrive; it generates Small, Medium, and Large thumbnails for all images. I decided a medium thumbnail would look fine in my mobile application and be MUCH more performant.

loadPhoto loads the medium thumbnail for a specified image in OneDrive

//loads a photo from OneDrive for Business
loadPhoto(id: string) {
    //loads a photo for display
    let helper = this;
    return new Promise((resolve, reject) => {
        helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) {
            //first get the thumbnails
            helper.http.get('https://graph.microsoft.com/beta/me/drive/items/' + id + '/thumbnails', {
                headers: new Headers({ 'Authorization': 'Bearer ' + token.accessToken })
            })
                .subscribe(res => {
                    // Check the response status before trying to resolve
                    if (res.status === 200) {
                        var data = res.json().value;
                        var resource = data[0].medium.url.substring(8);
                        resource = "https://" + resource.substring(0, resource.indexOf('/'));
                        helper.authHelper.getTokenForResource(resource).then(function(thumbtoken: Microsoft.ADAL.AuthenticationResult) {
                            //prepare the content request
                            let req = new XMLHttpRequest();
                            req.open('GET', data[0].medium.url, true);
                            req.responseType = 'blob';
                            req.setRequestHeader('Authorization', 'Bearer ' + thumbtoken.accessToken);
                            req.setRequestHeader('Accept', 'application/json;odata=verbose');
                            req.onload = function(e) {
                                //check response
                                if (this.status === 200) {
                                    //get the blob and convert to base64 using FileReader
                                    var blob = req.response;
                                    var reader = new FileReader();
                                    reader.onload = function(evt) {
                                        var base64 = evt.target.result;
                                        base64 = base64.substring(base64.indexOf(',') + 1);
                                        resolve(base64);
                                    };
                                    reader.readAsDataURL(blob);
                                } else reject('Failed to read image');
                            };
                            req.onerror = function(e) { reject('Failed to download image'); };
                            req.send();
                        }, function(err) { reject('Error getting token for thumbnail'); });
                    } else reject('Thumbnail load failed');
                });
        }, function(err) {
            reject(err); //error getting token for MS Graph
        });
    });
}

Final Thoughts

The Excel APIs in the Microsoft Graph have capabilities well beyond CRUD operations for an app, but I thought this was an interesting pattern (especially for a guy who is always going over his monthly Azure quota). You can grab the MyExpenses solution, written with Ionic2/Angular2/TypeScript, from the GitHub repo listed below:

https://github.com/richdizz/Ionic2-Angular2-ExcelAPI-Expense-App

 

Planning Ahead for Shared Budget APIs

Tue, 06/07/2016 - 08:23

As you may have already read, Bing Ads will begin piloting shared campaign budgets in September. You will be able to set a single daily budget that can be used by any campaign within the same account. This will enable you to efficiently distribute a single daily budget across all campaigns or across a defined group of campaigns within your Bing Ads account.

Say you have a budget of $20 to be used uniformly between two campaigns every day. On a given day Campaign A spends only $8 (of its $10 budget) because it got fewer impressions and clicks than normal. With a Shared Budget, if Campaign B is performing well, Bing Ads will automatically take the unutilized $2 and allocate it to Campaign B. This increases the chances of that unutilized budget being used to send you more traffic.

 

Using Shared Budgets

You can add, get, update, and delete the following shared budget properties.

 

You can get and associate a shared budget with a campaign using the following campaign properties.

 

Avoiding Breaking Changes with Shared Budgets

It is important to note that whether or not your application uses shared budgets, you will probably need to make some code changes to continue managing campaigns. Specifically, if a campaign uses a shared budget, you cannot change the budget amount using the Campaign record (Bulk API) or the Campaign object (Campaign Management API). If you try to update the budget amount of a campaign that uses a shared budget, the service operation will return the CampaignServiceCannotUpdateSharedBudget error code. If you make other updates to the campaign without changing the budget amount, this error will not be returned.

Before you try to update the budget of a campaign, you should check the value of the shared budget ID. If the shared budget ID is greater than zero, the campaign is using a shared budget; if it is null or empty, the campaign is not using a shared budget. A minimal sketch of this check follows the list below.

  • Bulk API: Check the Shared Budget Id field of the Campaign record.
  • Campaign Management API: Check the BudgetId element of the Campaign object. The BudgetId element is not returned in the Campaign object by default, so you will need to include BudgetId in the ReturnAdditionalFields flag of the GetCampaignsByAccountId and GetCampaignsByIds operations.
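
The following is a minimal sketch of that check, written against a hypothetical campaign shape rather than the actual Bing Ads SDK types (in the Campaign Management API the relevant field is BudgetId; in the Bulk API it is the "Shared Budget Id" field of the Campaign record):

// Hypothetical shape used only to illustrate the shared-budget check described above.
interface Campaign {
  id: number;
  budgetId?: number | null;   // shared budget id; null/empty when the budget is unshared
  dailyBudget?: number;
}

function usesSharedBudget(campaign: Campaign): boolean {
  // A shared budget ID greater than zero means the campaign uses a shared budget.
  return campaign.budgetId != null && campaign.budgetId > 0;
}

function prepareBudgetUpdate(campaign: Campaign, newAmount: number): Campaign {
  if (usesSharedBudget(campaign)) {
    // Changing the amount here would return CampaignServiceCannotUpdateSharedBudget;
    // update the shared budget entity itself instead.
    throw new Error("Campaign uses a shared budget; update the shared budget instead.");
  }
  return { ...campaign, dailyBudget: newAmount };
}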

 

Getting Started

We plan to release sandbox support for shared budgets later this summer. We’ll be sure to make an announcement on the API blog once MSDN documentation and SDKs are available. You will then be able to write code to manage shared budgets, and also to check whether or not a campaign uses a shared budget before attempting to update the campaign budget amount. Even today, without the updated proxies, you can prepare your application to handle the CampaignServiceCannotUpdateSharedBudget error.

 

Deprecation of Monthly Budgets

Separately, we want to provide early notification that we plan to stop supporting monthly budgets in calendar year 2017. As mentioned above, shared budgets will not support the monthly budget type at all. There will be a forced migration next calendar year in which all campaign-level unshared budgets will be rationalized to daily budgets, in parallel with the monthly budget deprecation. Moving your campaign budgets to daily budgets before monthly budgets are deprecated is encouraged. The formula that will be used to convert monthly to daily budgets is: monthly budget amount / 30.4 (for example, a $608 monthly budget becomes a $20 daily budget). If the result of the division falls below the minimum campaign budget, the budget will be set to the minimum.
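
A tiny sketch of that conversion (the minimum value below is a placeholder for illustration only, not the actual Bing Ads minimum campaign budget):

// Monthly-to-daily conversion as described above: monthly amount / 30.4,
// clamped to the minimum campaign daily budget.
// MINIMUM_DAILY_BUDGET is a placeholder value, not the real Bing Ads minimum.
const MINIMUM_DAILY_BUDGET = 1;

function toDailyBudget(monthlyAmount: number): number {
  const daily = monthlyAmount / 30.4;
  return Math.max(daily, MINIMUM_DAILY_BUDGET);
}

// Example: a $608 monthly budget becomes a $20 daily budget.
console.log(toDailyBudget(608)); // 20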

 

STRIDE, CIA and the Modern Adversary

Tue, 06/07/2016 - 07:54

The modern adversary isn’t a script kiddie. He or she is a sophisticated professional hired by a nation state or a criminal organization. The adversary’s organization will have done an ROI (Return on Investment) calculation to justify the cost of a targeted attack against a specific entity. This adversary also doesn’t attack linearly against traditional network-based defenses.

The modern adversary uses graph thinking, going through multiple hops to get to the desired target, such as credit card numbers or lists of employees. This can even involve more than one company: first breach a trusted vendor, then abuse that trust to attack the actual target.

This new reality made me think about the relationship between the goals of security known as CIA (Confidentiality, Integrity and Availability) and the threat modeling categorization approach known as STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service and Elevation of Privilege).

Meaning of each Letter

STRIDE is what an attacker can do. TID is the attack version of CIA:

  • Defenders want Confidentiality – attackers use Information Disclosure
  • Defenders want Integrity – attackers use Tampering
  • Defenders want Availability – attackers use Denial of Service.

But, what about Spoofing, Repudiation and Elevation of Privilege? Enter the (modern) non-linear attack:

  • Spoofing and Elevation of Privilege are the entry points to pry open the doors.
  • Repudiation is covering the adversary’s tracks during the initial compromise and the breach. This also happens when you cannot distinguish the attacker’s actions from those of a legitimate user. Note: sometimes repudiation is an attempt by a legitimate user (turned adversary) to dispute transactions (Dear Gullible Bank, I never transferred that money from my account; you owe me $20,000 in damages).
Non-Linear Attack

Graph thinking does not (primarily) worry about traditional network defenses such as firewalls, DMZs, VPNs, and the like. Note: these defenses are still useful to some extent.

Let’s construct a fictional scenario (any resemblance to actual attacks is coincidental):

  • Plan the attack (can take months for intelligence gathering).
  • Initial compromise to gain command and control (STIE)
    • Social engineering attack (Spoofing)
    • Deploy malware (Tampering the OS to facilitate Information Disclosure)
    • Steal credentials (Information Disclosure in preparation of Spoofing)
    • Attempt lateral traversal and escalating to Domain Administrator (Elevation of Privilege)
    • Add additional high-privilege user accounts such as domain administrators (Elevation of Privilege and Spoofing)
  • Execute the attacker’s mission (Violate CIA through TID)
    • Retrieve the data (Information Disclosure)
    • Execute fraudulent transactions (Tampering)
    • Encrypt for ransom demand (Denial of Service)
    • Destroy data (Denial of Service and/or Tampering)
    • Persist compromise for future exploits
  • Hide the tracks and hide Command and Control capability (Repudiation)
    • Encrypt stolen data during exfiltration to avoid detection
    • Erase logs
    • Tamper with anti-malware and intrusion detection systems
    • Use additional created credentials instead of the original high privilege account(s)
Prioritizing CIA and the Relative Importance of TID

STRIDE mitigations need to be prioritized. Therefore, you must consider the importance of C, I and A for your scenario:

  • Many industries (e.g. banking, health care insurance, …) value Confidentiality and Integrity almost equally and Availability much lower (but not at zero).
  • In other industries, such as SCADA (Supervisory Control and Data Acquisition) systems or any IoT, Availability trumps all else, followed closely by Integrity with Confidentiality lower. If you wonder why, just think about a nuclear power plant or a connected car: unavailability leads to a catastrophic failure.
Summary

Each letter of STRIDE maps to an adversary’s and/or a defender’s goal. Of course, the primary goals can also be useful during the initial compromise to open the door.

Goal             Defender               Attacker        Threat Category
Open the Door    Prevent door opening   Compromise      Spoofing
Integrity        Preserve               Violate         Tampering
Hide Activity    Preserve visibility    Hide Activity   Repudiation
Confidentiality  Preserve               Violate         Information Disclosure
Availability     Preserve               Deny            Denial of Service
Open the Door    Prevent door opening   Compromise      Elevation of Privilege

 

Links

Acknowledgements

A special thank you goes to my colleagues Michael Howard, John Rodriguez and Walter Dominguez for their valuable feedback.

Use Power BI to trace SQL Server RPC Calls and correlate them with Wait_info

Tue, 06/07/2016 - 07:38

 

The following code creates an Extended Events session that correlates wait_info events with stored procedure (RPC) calls and lets you analyze the data using Power BI.

 

Make sure to configure the correct log file path (where the Extended Events file is written).

Also enable xp_cmdshell to allow the script to clean up the log files after the trace is done.

 

Update 7.6.2016: The ID of the database to be traced can be passed as a parameter, and the log file path can be set as a variable in the script. For example, EXEC [dbo].[StpandWaitTrace] '00:00:30', 'Baseline', 5 traces database ID 5 for 30 seconds under the scenario name 'Baseline'.

USE [master]
GO
Alter Database PowerBITrace set single_user with rollback immediate

Drop Database [PowerBITrace]

CREATE DATABASE [PowerBITrace]
GO
USE [PowerBITrace]
GO

CREATE TABLE [dbo].[ProcedureExecutionTime](
       [timestamp] [datetime2](7) NULL,
       [cpu_time] [int] NULL,
       [duration_in_microseconds] [int] NULL,
       [physical_reads] [int] NULL,
       [logical_reads] [int] NULL,
       [writes] [int] NULL,
       [row_count] [int] NULL,
       [SqlText] [nvarchar](512) NULL,
       [Correlation] [uniqueidentifier] NULL,
       [Sequence] [int] NULL,
       [Minute] [int] NULL,
       [Second] [int] NULL,
       [Millisecond] [int] NULL,
       [Szenario] [varchar](50) NULL,
       TransactionID int
) ON [PRIMARY]
 
GO
CREATE TABLE [dbo].[WaitInfo](
       [timestamp] [datetime2](7) NULL,
       [wait_type] [varchar](25) NULL,
       [wait_type_duration_ms] [int] NULL,
       [wait_type_signal_duration_ms] [int] NULL,
       [Correlation] [uniqueidentifier] NULL,
       [Sequence] [int] NULL,
       [Minute] [int] NULL,
       [Second] [int] NULL,
       [Millisecond] [int] NULL,
       [Szenario] [varchar](50) NULL,
       TransactionID int
) ON [PRIMARY]

GO
CREATE TABLE [dbo].[Transactions](
    [timestamp] [datetime2](7) NULL,
    [duration_in_microseconds] [int] NULL,
    [Correlation] [varchar](36) NULL,
    [Sequence] [varchar](8000) NULL,
    [object_name] [varchar](512) NULL,
    [transaction_state] [varchar](512) NULL,
    [transaction_type] [varchar](512) NULL,
    [transaction_id] int NULL,
    [Minute] [int] NULL,
    [Second] [int] NULL,
    [Millisecond] [int] NULL,
    [Szenario] [varchar](50) NULL
) ON [PRIMARY]

GO
CREATE TABLE [dbo].[commit_tran_completed](
    [timestamp] [datetime2](7) NULL,
    [Correlation] [varchar](36) NULL,
    [Sequence] [varchar](8000) NULL,
    [transaction_id] [int] NULL,
    [Minute] [int] NULL,
    [Second] [int] NULL,
    [Millisecond] [int] NULL,
    [Szenario] [varchar](255) NULL
) ON [PRIMARY]

GO

CREATE TABLE [dbo].[commit_tran_starting](
    [timestamp] [datetime2](7) NULL,
    [Correlation] [varchar](36) NULL,
    [Sequence] [varchar](8000) NULL,
    [transaction_id] [int] NULL,
    [Minute] [int] NULL,
    [Second] [int] NULL,
    [Millisecond] [int] NULL,
    [Szenario] [varchar](255) NULL
) ON [PRIMARY]

CREATE TABLE [dbo].[batch_completed](
       [timestamp] [datetime2](7) NULL,
       [cpu_time] [int] NULL,
       [duration_in_microseconds] [int] NULL,
       [physical_reads] [int] NULL,
       [logical_reads] [int] NULL,
       [writes] [int] NULL,
       [row_count] [int] NULL,
       [SqlText] [nvarchar](max) NULL,
       [Correlation] [varchar](512) NULL,
       [transaction_id] [int] NULL,
       [SzenarioID] [varchar](255) NOT NULL,
       [Minute] [int] NULL,
       [Second] [int] NULL,
       [Millisecond] [int] NULL
) ON [PRIMARY]

 

CREATE TABLE [dbo].[capture_stpandwaits_data](
    [event_data] [xml] NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

GO

CREATE Procedure [dbo].[StpandWaitTrace] (@duration varchar(10), @szenario varchar(50), @databaseid int)
as
begin
--Author: Lukas Steindl
--Script to run a wait_info trace
 
        declare @logfilepath varchar(128) = 'C:\Program Files\Microsoft SQL Server\MSSQL13.SQL2016\MSSQL\Log'
 
       --Step 1) CREATE THE TRACE:
       IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name = 'StoredProcedureAndWaitstats')
       DROP EVENT SESSION [StoredProcedureAndWaitstats] ON SERVER
 
       waitfor delay '00:00:02'
       --CLEANUP FIRST:
       --Truncate Table [dbo].[ProcedureExecutionTime]
       --Truncate Table [dbo].[WaitInfo]
      
      
       --Adjust if needed:
       --sp_configure 'show advanced options',1
       --reconfigure
       --sp_configure 'xp_cmdshell',1
       --reconfigure
       --DB ID and log path (important: SQL Server needs access to the log drive!)

       declare @deletecmd nvarchar(4000)= CONCAT('DEL /F "',@logfilepath, '\StoredProcedureAndWaitstats*"')
       exec xp_cmdshell @deletecmd
 
   
        declare @cmd nvarchar(4000) = CONCAT('
           CREATE EVENT SESSION [StoredProcedureAndWaitstats] ON SERVER
               ADD EVENT sqlos.wait_info(
                     ACTION(sqlserver.transaction_id)
                     WHERE ([package0].[equal_uint64]([sqlserver].[database_id],(',@databaseid,')) AND [duration]>(0) AND [package0].[not_equal_uint64]([wait_type],(109)) AND [package0].[not_equal_uint64]([wait_type],(96)) AND [package0].[not_equal_uint64]([wait_type],(798)))),
               ADD EVENT sqlserver.commit_tran_completed(
                     ACTION(sqlserver.transaction_id)
                     WHERE ([sqlserver].[database_id]=(',@databaseid,'))),
               ADD EVENT sqlserver.commit_tran_starting(
                     ACTION(sqlserver.transaction_id)
                     WHERE ([sqlserver].[database_id]=(',@databaseid,'))),
               ADD EVENT sqlserver.rpc_completed(SET collect_statement=(1)
                     ACTION(sqlserver.transaction_id)
                     WHERE ([package0].[equal_uint64]([sqlserver].[database_id],(',@databaseid,')) AND [package0].[not_equal_unicode_string]([statement],N''exec sp_reset_connection''))),
               ADD EVENT sqlserver.sql_batch_completed(SET collect_batch_text=(1)
                     WHERE ([sqlserver].[database_id]=(',@databaseid,'))),
               ADD EVENT sqlserver.sql_transaction(
                     ACTION(sqlserver.transaction_id)
                     WHERE ([sqlserver].[database_id]=(',@databaseid,')))
               ADD TARGET package0.event_file(SET filename=N''',@logfilepath,'\StoredProcedureAndWaitstats.xel'')
               WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=ON,STARTUP_STATE=OFF)')

        exec sp_executesql @cmd

 

       --Step 2) START THE TRACE:
 
       ALTER EVENT SESSION StoredProcedureAndWaitstats
       ON SERVER
       STATE = start;
 
       waitfor delay @duration --'00:00:02'
 
 
       --Step 3) STOP THE TRACE:
       ALTER EVENT SESSION StoredProcedureAndWaitstats
       ON SERVER
       STATE = stop;
 
       waitfor delay '00:00:10' --let the session flush!
       --copy into the staging table:
       Truncate table [dbo].[capture_stpandwaits_data]
 
    declare @parsecmd nvarchar(4000) = CONCAT('Insert Into [capture_stpandwaits_data] Select CAST(event_data as xml) AS event_data
       from sys.fn_xe_file_target_read_file(''', @logfilepath ,'\StoredProcedureAndWaitstats*.xel'',null,null,null)')
 
  --print @parsecmd
    exec sp_executesql @parsecmd

  --Step 5) Parse and analyze the captured events
  --Parse the transaction commit events:
    --Select * from #capture_stpandwaits_data
--    declare @szenario varchar(255) = 'ABC'
    Insert Into commit_tran_completed
    Select [timestamp],
       LEFT(Correlation,36) as 'Correlation',
       REPLACE(RIGHT(Correlation,2),'-','') as 'Sequence', transaction_id
       ,datepart(Minute,timestamp) as [Minute],datepart(second,timestamp) as [Second],datepart(MILLISECOND,timestamp) as [Millisecond],@szenario as Szenario
    --   into commit_tran_completed
       from (
              SELECT
                    n.value('(@timestamp)[1]', 'datetime2') AS [timestamp],
                     n.value('(action[@name="attach_activity_id"]/value)[1]', 'varchar(512)') AS Correlation,
                            n.value('(action[@name="transaction_id"]/value)[1]', 'int') AS transaction_id
              FROM
              capture_stpandwaits_data
              CROSS APPLY event_data.nodes('event') as q(n)
              where n.value('(@name)[1]', 'varchar(64)') = 'commit_tran_completed'
              )x

   
--    declare @szenario varchar(255) = 'ABC'
     Insert Into commit_tran_starting
     Select [timestamp],
       LEFT(Correlation,36) as 'Correlation',
       REPLACE(RIGHT(Correlation,2),'-','') as 'Sequence', transaction_id
       ,datepart(Minute,timestamp) as [Minute],datepart(second,timestamp) as [Second],datepart(MILLISECOND,timestamp) as [Millisecond],@szenario as Szenario
     --  into commit_tran_starting
       from (
              SELECT
                    n.value('(@timestamp)[1]', 'datetime2') AS [timestamp],
                     n.value('(action[@name="attach_activity_id"]/value)[1]', 'varchar(512)') AS Correlation,
                            n.value('(action[@name="transaction_id"]/value)[1]', 'int') AS transaction_id
              FROM
              capture_stpandwaits_data
              CROSS APPLY event_data.nodes('event') as q(n)
              where n.value('(@name)[1]', 'varchar(64)') = 'commit_tran_starting'
              )x

-------------------

       Insert into ProcedureExecutionTime
       Select [timestamp], cpu_time, duration_in_microseconds, physical_reads, logical_reads, writes, row_count, SqlText,
       LEFT(Correlation,36) as 'Correlation',
       REPLACE(RIGHT(Correlation,2),'-','') as 'Sequence'
       ,datepart(Minute,timestamp) as [Minute],datepart(second,timestamp) as [Second],datepart(MILLISECOND,timestamp) as [Millisecond],@szenario
          ,transaction_id
       from (
              SELECT
              n.value('(@timestamp)[1]', 'datetime2') AS [timestamp],
                     n.value('(data[@name="cpu_time"]/value)[1]', 'int') AS cpu_time,
                     n.value('(data[@name="duration"]/value)[1]', 'int') AS duration_in_microseconds,
                     n.value('(data[@name="physical_reads"]/value)[1]', 'int') AS physical_reads,
                     n.value('(data[@name="logical_reads"]/value)[1]', 'int') AS logical_reads,
                     n.value('(data[@name="writes"]/value)[1]', 'int') AS writes,
                     n.value('(data[@name="row_count"]/value)[1]', 'int') AS row_count,
                     n.value('(data[@name="statement"]/value)[1]', 'nvarchar(512)') AS SqlText,
                     n.value('(action[@name="attach_activity_id"]/value)[1]', 'varchar(512)') AS Correlation,
                      n.value('(action[@name="transaction_id"]/value)[1]', 'int') AS transaction_id
              FROM
              capture_stpandwaits_data
              CROSS APPLY event_data.nodes('event') as q(n)
              where n.value('(@name)[1]', 'varchar(64)') = 'rpc_completed'
              )x

--Batch Completed Parser
INSERT INTO batch_completed Select  [timestamp], cpu_time, duration_in_microseconds, physical_reads, logical_reads, writes, row_count, SqlText,
       LEFT(Correlation,36) as 'Correlation',transaction_id,@szenario
       ,datepart(Minute,timestamp) as [Minute],datepart(second,timestamp) as [Second],datepart(MILLISECOND,timestamp) as [Millisecond]
            from (
                       SELECT
              n.value('(@timestamp)[1]', 'datetime2') AS [timestamp],
                     n.value('(data[@name="cpu_time"]/value)[1]', 'int') AS cpu_time,
                     n.value('(data[@name="duration"]/value)[1]', 'int') AS duration_in_microseconds,
                     n.value('(data[@name="physical_reads"]/value)[1]', 'int') AS physical_reads,
                     n.value('(data[@name="logical_reads"]/value)[1]', 'int') AS logical_reads,
                     n.value('(data[@name="writes"]/value)[1]', 'int') AS writes,
                     n.value('(data[@name="row_count"]/value)[1]', 'int') AS row_count,
                     n.value('(data[@name="batch_text"]/value)[1]', 'nvarchar(512)') AS SqlText,
                     n.value('(action[@name="attach_activity_id"]/value)[1]', 'varchar(512)') AS Correlation,
                      n.value('(action[@name="transaction_id"]/value)[1]', 'int') AS transaction_id, @szenario as SzenarioID
              FROM
              capture_stpandwaits_data
              CROSS APPLY event_data.nodes('event') as q(n)
              where n.value('(@name)[1]', 'varchar(64)') = 'sql_batch_completed'
              )x

 

-------------------

  --declare @szenario varchar(255) = 'ABC'
       Insert Into WaitInfo
       Select
       [timestamp], wait_type, wait_type_duration_ms, wait_type_signal_duration_ms, LEFT(Correlation,36) as 'Correlation',
       REPLACE(RIGHT(Correlation,2),'-','') as 'Sequence',
       datepart(Minute,timestamp) as [Minute],datepart(second,timestamp) as [Second],
       datepart(MILLISECOND,timestamp) as [Millisecond],@szenario,transaction_id
       from (
       SELECT n.value('(@timestamp)[1]', 'datetime2') AS [timestamp],
                     n.value('(data[@name="wait_type"]/text)[1]', 'varchar(25)') AS wait_type,
                     n.value('(data[@name="duration"]/value)[1]', 'int') AS wait_type_duration_ms,
                     n.value('(data[@name="signal_duration"]/value)[1]', 'int') AS wait_type_signal_duration_ms,
                     n.value('(action[@name="attach_activity_id"]/value)[1]', 'varchar(512)') AS Correlation,
                     n.value('(action[@name="transaction_id"]/value)[1]', 'int') AS transaction_id
              FROM
              capture_stpandwaits_data
              CROSS APPLY event_data.nodes('event') as q(n)
              where n.value('(@name)[1]', 'varchar(64)') = 'wait_info'
              )x
end
 
GO
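Once the procedure is in place, it can be useful to confirm that the event session actually gets created and started while it runs. A minimal check against the standard Extended Events catalog view and DMV; nothing here is specific to this script beyond the session name:

Select name from sys.server_event_sessions where name = 'StoredProcedureAndWaitstats'   -- session definition exists
Select name, create_time from sys.dm_xe_sessions where name = 'StoredProcedureAndWaitstats'   -- session is currently running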
 

--Run the trace for the duration specified:
use [PowerBITrace]
declare @db_id int = DB_ID('Master')
print @db_id
exec [StpandWaitTrace] '00:00:05', 'Beladung 3', @db_id

-- Use Power BI to analyze the stored procedures and the waits now.

--Use these statements:

Select * from PowerBITrace.dbo.ProcedureExecutionTime
Select * from PowerBITrace.dbo.WaitInfo
Select * from PowerBITrace.dbo.Transactions
Select * from PowerBITrace.[dbo].[commit_tran_completed]
Select * from PowerBITrace.[dbo].[commit_tran_starting]
Select * from PowerBITrace.dbo.batch_completed
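If you want a quick roll-up inside Management Studio before switching to Power BI, a query along these lines correlates each captured call with its waits. This is only a sketch; it assumes the target tables expose the columns written by the inserts above (Correlation, SqlText, duration_in_microseconds, wait_type, wait_type_duration_ms):

Select p.Correlation, p.SqlText, p.duration_in_microseconds,
       w.wait_type, SUM(w.wait_type_duration_ms) as total_wait_ms
from PowerBITrace.dbo.ProcedureExecutionTime p
join PowerBITrace.dbo.WaitInfo w on w.Correlation = p.Correlation
group by p.Correlation, p.SqlText, p.duration_in_microseconds, w.wait_type
order by total_wait_ms desc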

 

 

 
 

--Important: don't forget that the session only traces your RPC calls (rpc_completed), so to test the script you could use the following C# program:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace PowerBITraceDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            SqlConnection conn = new SqlConnection(@"Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=AdventureWorks;Data Source=.\SQL2014");

            conn.Open();
            SqlTransaction tran = conn.BeginTransaction("hellow");
            SqlCommand cmd = new SqlCommand("test", conn, tran);
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.ExecuteNonQuery();
            tran.Commit();
            conn.Close();
        }
    }
}

 

It might happen that the RPC trace returns multiple rows with the same correlation ID. In that case you have to run the following cleanup step:

Delete from ProcedureExecutionTime where Correlation in (
Select Correlation from ProcedureExecutionTime group by Correlation having COUNT(*) > 1)
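Note that the statement above removes every row whose correlation ID appears more than once. If you would rather keep a single row per correlation ID, a minimal sketch using ROW_NUMBER() could look like this (assuming the row with the earliest timestamp is the one worth keeping):

;WITH ranked AS (
    Select ROW_NUMBER() OVER (PARTITION BY Correlation ORDER BY [timestamp]) AS rn
    from ProcedureExecutionTime
)
DELETE FROM ranked WHERE rn > 1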

Microsoft Innovative Educator Experts - STEM specialists

Tue, 06/07/2016 - 06:40

The Microsoft Innovative Educator (MIE) Expert program is an exclusive program created to recognize global educator visionaries who are using technology to pave the way for their peers in the effective use of technology for better learning and student outcomes.

MIE Experts help Microsoft to lead innovation in education. They advocate and share their thoughts on effective use of technology in education with peers and policy makers. They provide insight for Microsoft on new products and tools for education, and they exchange best practices as they work together to promote innovation in teaching and learning.

As an MIEE you will:

  • Access professional and career development opportunities and certifications
  • Share your expertise with world-renowned educators and specialists to scale their innovations
  • Present in Microsoft’s global webinar calls
  • Participate in focus groups giving feedback to development teams on Microsoft products
  • Join invitation-only special events from Microsoft
  • Share your passion for Microsoft with peers and policymakers, and through social media, blogs and videos
  • Test new products while in beta form and participate in pre-release programs for certain education-related tools.
  • Represent Microsoft through product demonstrations and by attending events
  • Build educator capacity in your community (school, district or at training events) by speaking, training and/or coaching colleagues and inviting them to participate in the online Microsoft Educator Community
  • Collaborate with innovative educators across the globe
  • Host regional events showcasing your use of Microsoft technology in the classroom
  • Achieve eligibility to attend the Microsoft Global Educator Exchange Event (E2), Feb/March/April 2017

 

This year, as part of the wider programme, I am looking in particular to identify and support teachers with a STEM specialism in all phases who would like to become MIEEs and take advantage of joining this global community of teachers. So if you have been working with Kodu, TouchDevelop, the BBC micro:bit, Minecraft, Imagine Academy, or even doing great things with Excel in Maths, then you should think about applying. If you are still not sure and have questions, get in touch with me on Twitter @innovativeteach

How do you apply to become an MIEE? It's simple:

 

  • Be sure that you have joined the Microsoft Educator Community and completed your profile. You will need to submit the URL to your public profile as a part of the nomination process. You can find your URL by going into Edit Profile and looking under “basic information.”

 

  • Create a 2-minute Office Mix, video or Sway that answers the following questions in a manner that creatively expresses what makes you a Microsoft Innovative Educator Expert. To share the Mix/video/Sway in your nomination, you will need to post it somewhere that allows you to create a URL to share it.

o   Why do you consider yourself to be a Microsoft Innovative Educator Expert?

o   Describe how you have incorporated Microsoft technologies in innovative ways in your classroom. Include artifacts that demonstrate your innovation.

o   If you become a MIE-Expert, how do you hope it will impact your current role?

  • Once you have completed those two tasks, you will be ready to fill out the self-nomination form (open May 15 – July 15, 2016).

 

 

 

 

A Script for Replicating Database Backup Files to the Azure Cloud

Tue, 06/07/2016 - 06:28

Let's say we have a backup process that creates backup files on a file server.  For disaster recovery purposes, I need to get these files offsite in a timely manner.  I have decent bandwidth out of my data centers, so I have decided to put these files into low-cost Azure storage.

NOTE This script is provided for educational purposes only. No warranties or guarantees are provided.

This PowerShell script does just that. It recursively loops through a target directory and copies files that match a particular naming pattern and are flagged for archival to Azure Blob Storage.  The copy is handled through an asynchronous job so that I can push multiple files simultaneously.  A variable in the master script controls how many asynchronous jobs can run at one time, and a variable in the secondary script controls the number of threads each job can employ. (To tune the routine for my machine, I first copy a large backup file using various numbers of threads to see which provides the best individual file throughput.  I then vary the number of jobs to maximize the overall throughput of the routine.)

I am using Azure Blob Storage as it is the cheapest option available in Azure.  That said, I need to keep in mind that Azure Blob Storage limits individual block blobs to 200 GB max.  I will want to ensure that my backups are compressed for bandwidth purposes and that any files approaching 200 GB are split into multiple backups.  (If I need to go bigger I could use page blobs or even Azure Files but these are more expensive and I need cheap.)
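If the backups in question come from SQL Server, both the compression and the splitting can be handled at backup time by compressing and striping the backup across several files. A minimal sketch; the database name and file paths are placeholders:

BACKUP DATABASE [MyDatabase]
TO  DISK = N'F:\Backups\MyDatabase_1of2.bak',
    DISK = N'F:\Backups\MyDatabase_2of2.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;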

A few final thoughts:

  • If I were to make use of this script for production purposes, I would want to add logging and error handling.  I might also take a look at using AzCopy instead of the PowerShell cmdlets in the secondary job, just because AzCopy has better job recovery features.
  • As this is for DR, I might not wish to put every backup into the cloud.  Depending on my tolerance for data loss in a DR scenario, I might only put certain types of backups in the cloud with a particular frequency.  Not everything needs to be offsite in every scenario.
  • Over time, my backup files become less and less valuable.  If I were to productionize this routine, I’d probably have another routine that crawls the Azure Storage Account and prunes files meeting some kind of aging criteria.
  • In order to recover from a backup that is in the cloud, I would need a script to download the backup files to local storage. I’d probably have one that allowed me to pull down one or more files meeting various criteria as DR may or may not involve pulling down all files for all platforms. There is a starting point for this as a third script sample in the download reference above.

 

 

Bob is Moving To BOBSQL

Tue, 06/07/2016 - 06:26

Bob Ward and Bob Dorr, long-standing SQL Server support professionals, are among the founding members of PSSSQL.   We are both excited to tell you about our new roles.

  • Bob Ward has joined the SQL Server development team as a Principal Architect focusing on the customer experience in the Tiger Team.  Bob is expanding his passion for customers and the SQL Server product.   Bob will continue to present at conferences and customer interaction events and drive the SQL Server customer experience in his new role.

  • Bob Dorr is not just an expert in supporting SQL OS and core engine technologies. In March, Bob joined the development team as a Principal Software Engineer reporting to Slava Oks.  Bob is bringing his 22+ years of support experience to developing the next generation of SQL Server, backed with that strong customer voice.

Don't worry, the PSSSQL blog remains in great hands and will continue to provide you with detailed SQL Server information.  The support escalation team is committed to maintaining the quality and standard you have become accustomed to on PSSSQL.

Bookmark our new location.  We are moving to a new blog (bobsql) to bring you great content with a developer twist.  We will be providing posts with unique insights that only Bob and Bob can provide, and you will also see our development colleagues posting on bobsql from time to time.

Check us out at: https://blogs.msdn.microsoft.com/bobsql/

Bob Dorr – Principal Software Engineer
Bob Ward – Principal Architect

On change d'adresse (We're changing address)

Tue, 06/07/2016 - 03:56

A quick update on the UX, UI and design blog:
We are moving to another address that is easier to maintain.
https://michelrousseaufr.wordpress.com/

I will still try to post my new articles here as well from time to time.
See you soon on my new blog.

Activity Feeds in Dynamics CRM: Best practices

Tue, 06/07/2016 - 02:32

Hello everyone,

In this post, I am going to talk about the importance of Activity Feeds in Dynamics CRM, a feature introduced with Dynamics CRM 2011. In my current role, I visit customers who use Dynamics CRM, review their application and architecture, provide proactive guidance towards the overall betterment of the application/architecture, and advise them on optimal usage. During the last few visits, I tried to understand whether my customers really know what Activity Feeds are and whether they actually need the feature.

For those who are not very familiar with the feature, let me share a quick overview:

Activity Feeds provide real-time updates to help you stay up to date with the fast-moving sales, customer projects, and marketing campaigns of your organization, including automatic updates on the activities of your colleagues. You can also choose to "follow" updates of the contacts, colleagues, and customer records you care the most about. In short, Activity Feeds help you keep track of the things within the application that matter most to you.

For more information, please refer: https://www.microsoft.com/en-us/dynamics/crm-customer-center/stay-up-to-date-with-customer-news-with-the-activity-feed.aspx

On the other hand, the feature also increases the overall size of the CRM database every time a tracked event happens. For example, if there is an update on any of your customers/contacts, it has to write to certain tables in the database too, e.g. PostBase and PostCommentBase. In a recent customer engagement, I had to clean up around 50 GB of such data, which was quite significant relative to the overall size of the database. That motivated me to share this experience with a wider audience through my blog.
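If you want to gauge how much space the feature is consuming before deciding on a cleanup, a quick, read-only check of the underlying tables in the organization database can help. A minimal sketch; the table names are the ones mentioned above, and remember that modifying CRM tables directly is unsupported, so keep this to reads only:

EXEC sp_spaceused 'dbo.PostBase';
EXEC sp_spaceused 'dbo.PostCommentBase';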

In order to do the cleanup, the steps are very easy:

  1. Check with the business and try to understand the actual need for the feature. Do they really need it?
  2. It may turn out that they only need the activity feed rules activated on a few entities, or that they do not need them at all.
  3. Once you have that understanding, you can follow these quick steps:

 

    1. Log in to the application as a Dynamics CRM System Administrator.
    2. Navigate to Settings -> Activity Feeds Configuration.
    3. Select and deactivate the configurations you don't need (keep in mind that a few configurations exist by default and are active), or deactivate all of the configurations if you don't need them.
    4. Once that is done, remember to publish the customizations, which can be done by clicking More Commands (…) and then Publish Customizations. Until you publish the customizations, the changes won't take effect.
    5. Once you are sure that no new entries are being created (only the case when you decide to deactivate all of the configurations and rules), it's time to clean the database. You can use the Bulk Delete feature to create a rule that deletes all the Posts. (It is always good practice to take a fresh backup of the database before the cleanup, so that you have something to fall back on if you ever need to refer to the data.)

Note: I tested these steps with Dynamics CRM 2013, as I was working on the scenario mentioned above at a customer site. However, I am sure it would work the same way in Dynamics CRM 2011 and later versions. This cleanup can also be very useful when you plan an upgrade or migration.

Thank you for stopping by. If you have any feedback or questions, please do leave your comments below. I shall be happy to answer them.
