Feed aggregator

Great Azure Web Sites Articles in MSDN Magazine

MSDN Blogs - Wed, 07/23/2014 - 10:01

As a follow-up to our recent post featuring the MSDN Magazine on scaling Azure Web Sites, here are some additional articles recently made available that can help you as you develop and deploy your application in Azure Web Sites.

Architect for the Cloud Using Azure Web Sites
Sometimes bad things happen to good apps, and when they do you need to react fast. This article explores the tools and patterns you can adopt to make your Web applications even better and more resilient in a sometimes-hostile cloud environment.

Building a Node.js and MongoDB Web Service
Tejaswi Redkar shows you how to develop a RESTful Web service in Node.js that accesses a MongoDB database in the cloud, and how to deploy it to Azure Web Sites.

Hybrid Connectivity: Connecting Azure Web Sites to LOB Apps Using PortBridge
Hybrid apps bring the power of the cloud to existing software services that run on-premises in your datacenter. This article takes a deep look at connecting a Web site to a line-of-business application running on-premises and shows how to enable seamless connectivity.

Teaching from the Cloud
Using Azure Web Sites, MVC 5, SignalR and Azure SQL, this article explores the creation of an e-learning application that provides course modules, lessons and basic user administration. The result is a Web site that allows an instructor to "push" content to the students' browsers, keeping all students in sync with the current lesson.

Scaling Your Web Application with Azure Web Sites

MSDN Blogs - Wed, 07/23/2014 - 09:50

Yochay Kiriaty, one of our program managers in Azure Web Sites, recently wrote an excellent article for MSDN Magazine that outlines how to design, implement and configure applications for scaling in the cloud. If you are hosting an application in Azure Web Sites (or if you plan to at some point in the future), this is absolutely required reading.

You can read the entire article on the MSDN website.

Visual Studio Toolbox: Load Testing

MSDN Blogs - Wed, 07/23/2014 - 09:48

Continuing with our exploration of testing, Chuck Sterling joins me this episode to discuss load testing. Load testing provides you with the ability to see how well your software responds to various levels of usage. Chuck shows how to create load tests for a Web site and evaluate the data it returns. He shows how to use Application Insights to monitor the Web site's performance. He then shows how to use load testing with unit tests.

Call to survey – Is your EA program valuable?

MSDN Blogs - Wed, 07/23/2014 - 09:42

This is the first time I’ve done this, so I’m hoping that my friends will contribute their opinions: I’ve created a survey asking a few basic questions about how your Enterprise Architecture program is valued, or not valued, by your organization.

KwikSurvey Poll – Does your Enterprise Architecture program deliver value?

Note that this is a free survey tool that doesn’t allow me to collect text responses unless I pay, which I didn’t, so there are no text response fields.  If you want to comment on the survey questions or assumptions, please jump over to LinkedIn and comment in this thread:

LinkedIn Discussion – Do you have an effective or ineffective EA Program

All comments are anonymous.  I will publish the results on this blog.  This is just an informal data collection exercise but one that I think may provide a little insight into how you and your peers measure the value of your Enterprise Architecture program.

Tutorial: Build custom VMs or migrate VMs to Azure with UShareSoft!

MSDN Blogs - Wed, 07/23/2014 - 09:21

For Azure users, our partner UShareSoft has just launched a new solution: "UForge for Microsoft Azure".

UForge lets you build your own VMs for Azure. In a few clicks, you can create complete application templates (including the OS, middleware, your applications, and the configuration logic), ready to be provisioned in Azure.

You can also use UForge to migrate your existing servers to Azure, whether they are physical, virtual, or in another cloud.

They have just made a new tutorial available.

You can also sign up for a free UForge account on the UShareSoft site, at

That's what I did:



Benjamin (@benjguin)

AX Content: Feedback Opportunity: Survey for users who work in the Microsoft Dynamics AX Project management and accounting module

MSDN Blogs - Wed, 07/23/2014 - 09:01

The Information Experience team for Microsoft Dynamics AX invites business users who work in the Project management and accounting module to provide us with feedback about their experience with the module and with our online Help offerings.  We’d like to hear from you no matter which version of Microsoft Dynamics AX you use.

We’re collecting your feedback with a survey that you can use to:

  • Let us know how you use the module in your day-to-day work
  • Tell us about other relevant aspects of how you do your job
  • Tell us about your experiences with our online Help offerings and what you think of them

We’re also offering opportunities for you to provide more feedback in the future, if that sounds like something you’d like to do.

The survey might take five to ten minutes of your time, but your feedback will help us understand how we can improve the information that we provide to help you use Project management and accounting in Microsoft Dynamics AX.

You can begin the survey here:

The New Realities that Call for New Organizational and Management Capabilities

MSDN Blogs - Wed, 07/23/2014 - 08:59

“The only people who can change the world are people who want to. And not everybody does.” -- Hugh MacLeod

Is it just me or is the world changing faster than ever?

I hear from everybody around me (inside and outside of Microsoft) how radically their worlds are changing under their feet, business models are flipped on their heads, and the game of generating new business value for customers is at an all-time competitive high.


Challenge is where growth and greatness come from.  It’s always a chance to test what we’re capable of and respond to whatever gets thrown our way.  But first, it helps to put a finger on what exactly these changes are that are disrupting our world, and what to focus on to survive and thrive.

In the book The Future of Management, Gary Hamel shares some great insight into the key challenges that companies are facing that create even more demand for management innovation.

The New Realities We’re Facing that Call for Management Innovation

I think Hamel describes our new world pretty well …

Via The Future of Management:

  • “As the pace of change accelerates more, more and more companies are finding themselves on the wrong side of the change curve.  Recent research by L.G. Thomas and Richard D'Aveni suggests that industry leadership is changing hands more frequently, and competitive advantage is eroding more rapidly, than ever before.  Today, it's not just the occasional company that gets caught out by the future, but entire industries -- be it traditional airlines, old-line department stores, network television broadcasters, the big drug companies, America's carmakers, or the newspaper and music industries.”
  • “Deregulation, along with the de-scaling effects of new technology, are dramatically reducing the barriers to entry across a wide range of industries, from publishing to telecommunications to banking to airlines.  As a result, long-standing oligopolies are fracturing and competitive 'anarchy' is on the rise.”
  • “Increasingly, companies are finding themselves enmeshed in 'value webs' and 'ecosystems' over which they have only partial control.  As a result, competitive outcomes are becoming less the product of market power, and more the product of artful negotiation.  De-verticalization, disintermediation, and outsource-industry consortia, are leaving firms with less and less control over their own destinies.”
  • “The digitization of anything not nailed down threatens companies that make their living out of creating and selling intellectual property.  Drug companies, film studios, publishers, and fashion designers are all struggling to adapt to a world where information and ideas 'want to be free.'”
  • “The internet is rapidly shifting bargaining power from producers to consumers.  In the past, customer 'loyalty' was often an artifact of high search costs and limited information, and companies frequently profited from customer ignorance.  Today, customers are in control as never before -- and in a world of near-perfect information, there is less and less room for mediocre products and services.”
  • “Strategy cycles are shrinking.  Thanks to plentiful capital, the power of outsourcing, and the global reach of the Web, it's possible to ramp up a new business faster than ever before.  But the more rapidly a business grows, the sooner it fulfills the promise of its original business model, peaks, and enters its dotage.  Today, the parabola of success is often a short, sharp spike.”
  • “Plummeting communication costs and globalization are opening up industries to a horde of new ultra-low-cost competitors.  These new entrants are eager to exploit the legacy costs of the old guard.  While some veterans will join the 'race to the bottom' and move their core activities to the world's lowest-cost locations, many others will find it difficult to reconfigure their global operations.  As Indian companies suck in service jobs and China steadily expands its share of global manufacturing, companies everywhere will struggle to maintain their margins.”
Strategically Adaptive and Operationally Efficient

So how do you respond to these challenges?  Hamel says it takes becoming strategically adaptable and operationally efficient.  What a powerful combo.

Via The Future of Management:

“These new realities call for new organizational and managerial capabilities.  To thrive in an increasingly disruptive world, companies must become as strategically adaptable as they are operationally efficient.  To safeguard their margins, they must become gushers of rule-breaking innovation.  And if they're going to out-invent and outthink a growing mob of upstarts, they must learn how to inspire their employees to give the very best of themselves every day.  These are the challenges that must be addressed by 21st-century management innovations.”

There are plenty of challenges.  It’s time to get your greatness on. 

If there ever was a chance to put to the test what you’re capable of, now is the time.

No matter what, as long as you live and learn, you’ll grow from the process.

You Might Also Like

Simplicity is the Ultimate Enabler

The New Competitive Landscape

Who’s Managing Your Company

How to Authenticate with Microsoft Account in a Chrome Extension

MSDN Blogs - Wed, 07/23/2014 - 08:30

A couple of weeks ago, at our internal OneNote Hackathon, a couple of folks were trying to build a Chrome extension that uses the OneNote API. They needed to be able to authenticate against Microsoft Account (MSA), which is an OAuth 2.0 provider. Authenticating against MSA in a web browser extension isn’t a well-documented process, so I want to provide some help in case this is what you are trying to do.

High-level Steps

Here are the things you need to do at a high-level:

  1. Create a Client ID and make sure the API settings are set correctly.
  2. Set up your Chrome extension to use at least one content script. We will need it in step 4 below.
  3. Create the UI in your Chrome extension to sign in, making sure you set the redirect URL to “” and the response type to “token”.
  4. In your Chrome extension’s content script, watch for the popup window from the Microsoft Account sign-in flow. At the right point in time, we will catch the auth_token, store it, and then close the popup window.

Now, let’s get started.

Client ID and API Settings

First of all, you need to obtain a client ID from the Microsoft Account Developer Center as usual. The API settings need to match my screenshot below.


Setting up the Chrome Extension

Now, let’s talk about what you need in your Chrome extension. I am assuming you are somewhat familiar with the basic anatomy of a Chrome extension, so I am not going to go through that. If you are not, you can start reading about it here.

The manifest.json file describes your Chrome extension. You can see the sample manifest.json file here.

A few things to point out:

  • We included js/script.js as a content script. Content scripts load each time a document is loaded in a window or tab. We need this to perform step 4 above.
  • We also included lib/jquery.min.js as a content script because I wanted to be able to use jQuery in my script.js file.
  • We included “storage” in the permissions set because we will use Chrome storage later to store the auth_token.
  • We included this line: "content_security_policy": "script-src 'self'; object-src 'self'" so the Live SDK JavaScript library can be successfully loaded from popup.html.
  • browser_action.default_popup is set to “./html/popup.html” – this specifies the HTML that will show up when the user clicks the browser extension button. We will use this to show the login UI.
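The link to the sample manifest.json is not preserved here, but a minimal manifest matching the bullets above might look like this sketch (the extension name, version, and the `<all_urls>` match pattern are illustrative assumptions, not taken from the original sample):

```json
{
  "manifest_version": 2,
  "name": "OneNote API Sample",
  "version": "1.0",
  "permissions": ["storage"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["lib/jquery.min.js", "js/script.js"]
    }
  ],
  "content_security_policy": "script-src 'self'; object-src 'self'",
  "browser_action": {
    "default_popup": "./html/popup.html"
  }
}
```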
Popup.html and popup.js

Next, take a look at popup.html. This is the HTML that gets shown when the user clicks your extension’s button near the URL bar. Things to note:

  • We have included popup.js and the Live SDK JavaScript library in this file.
  • We have a <div> with ID=”signin” – in our popup.js script, we handle its click event to start the sign-in flow.

In popup.js, note the following snippet:

$('a#signin').click(function () {
    WL.init({
        client_id: "000000004410CD1A",
        redirect_uri: "",
        response_type: "token"
    });
    WL.login({ scope: "wl.signin" });
    return false;
});

You should obviously replace the client_id with your own here. When the user clicks the sign-in link, a new browser window opens asking the user to sign in with their Microsoft account.

Content Script (script.js)

The last thing we need to do is watch for the sign-in browser window. When the user has successfully signed in, that browser window is redirected back to our redirect URL, and the URL will contain “#access_token”. So in the content script, we have the following code to look for it:

$(window).load(function () {
    if (window.location.origin == "") {
        var hash = window.location.hash;

        // get access token
        var start = hash.indexOf("#access_token=");
        if (start >= 0) {
            start = start + "#access_token=".length;
            var end = hash.indexOf("&token_type");
            var access_token = hash.substring(start, end);

            // Store it
            chrome.storage.local.set({ "access_token": access_token });

            // Close the window
            window.close();
        }
    }
});

That’s basically it. When the sign-in window closes, the access_token will be stored in Chrome local storage. You can then use the access_token in any calls to the OneNote API as you normally would. Take a look at the sendToOneNote function for an example:

function sendToOneNote(access_token, title, text) {
    $.ajax({
        accept: "application/json",
        type: "POST",
        url: "",
        headers: { "Authorization": "Bearer " + access_token },
        data: "<html><head><title>" + title + "</title></head>" +
              "<body><p>" + text + "</p></body></html>",
        contentType: "text/html",
        success: function (data, status, xhr) { alert(status); },
        complete: function (data, status, xhr) { alert(status); },
        error: function (request, status, error) { alert(status); }
    });
}

Hope that was helpful. Please let us know if you have any other questions about this topic!


James (@jmslau)

App Studio Update 7-15-14

MSDN Blogs - Wed, 07/23/2014 - 07:54

Windows App Studio was updated on July 15th with a fresh release that brought some great new features, such as:

New YouTube player

This new release brings a brand-new YouTube player. Previously the player was implemented using the MyToolkit YouTube player from CodePlex. The new player works by loading the YouTube embed URL (<videoid>) into a WebView control.

This change was motivated by several things:

  1. Consistency: In WP8 the videos were displayed in a "YouTube MyToolkit" control, but in Universal the video was loaded in the native MediaElement control; the URL was obtained using the YouTube.GetVideoUriAsync helper method of MyToolkit.
  2. Performance: In Universal, the YouTube.GetVideoUriAsync helper was called for each video result. This method obtains the video URL through HTML scraping, which requires downloading each page.
  3. Playback error experience: YouTube does not allow some videos to be played on certain devices. Even when YouTube.GetVideoUriAsync returned a valid URL, the result was an "unexpected error". With the new mechanism, if a video can't be played, YouTube gives a detailed error with the reason.


New MVVM Pattern

This new release includes a lot of improvements in the generated code for Universal Apps:


The app data state management has been moved from the ViewModels and NavigationService to the AppCache class. There are two cache modes:

Memory: All data is stored in memory for use in the current session.

IsolatedStorage: All data is saved to isolated storage so it can be used after the app is released from memory.


Most of the data-loading behavior has been moved from ViewModelBase to this new class. It acts as the base class for all specific data sources and manages how data is loaded: from the cache or from the specific data source. One relevant improvement is that data is refreshed from the data source only if its content is older than 2 hours, regardless of whether the app is resumed from a suspended or terminated state, optimizing battery and data use.

ViewModel Items:

The way items coming from the data source are added to the observable collection bound to the view has changed. Now only items that are not already in the collection are added. This is done by implementing the IEquatable<T> interface in each of the specific schema classes. This approach improves both the user experience and the app's performance.
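In JavaScript terms, the merge behaves roughly like the sketch below. The generated code is C# and uses IEquatable<T>; the caller-supplied equals function here stands in for that interface and is only illustrative:

```javascript
// Add only the incoming items that are not already in the collection,
// so the view bound to it is not churned by duplicate entries.
function mergeNewItems(collection, incoming, equals) {
  for (const item of incoming) {
    if (!collection.some(existing => equals(existing, item))) {
      collection.push(item);
    }
  }
  return collection;
}
```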


To check out the new and improved App Studio, visit

For questions, you can visit our support forum here.

To request new features for future versions of App Studio, visit the User Voice forum here.

Take the "Teaching with Technology" courses now and become a Microsoft Certified Educator

MSDN Blogs - Wed, 07/23/2014 - 07:44

Teachers and anyone else interested in digital education can now deepen their knowledge of the meaningful use of information and communication technologies (ICT) in the classroom and gain new insights in our free "Teaching with Technology" (TWT) course series. The courses follow the UNESCO ICT Competency Framework for Teachers. If you pass the "Microsoft Certified Educator Exam" at the end of the course, you receive a certificate of successful participation and become a Microsoft Certified Educator.

After a straightforward registration and a self-assessment, in which you gauge your existing skills and current knowledge about using media in the classroom, "Teaching with Technology" offers courses that let you expand your knowledge across a range of topics around the use and benefits of digital media in teaching. After a general introduction to the challenges facing school education in a changing knowledge society, the courses explain why UNESCO calls for a different culture of learning and teaching and pushes for the use of information and communication technologies in the classroom.

The subsequent courses show teachers how to find suitable ICT teaching and assessment tools, evaluate their usefulness, and integrate them correctly and effectively into different learning activities and situations. In the final course, participants learn how information and communication technologies can simplify their lesson preparation. Beyond that, teachers are encouraged to use technology for their own personal and professional development as well.

You can find the complete course here on our international Partners in Learning site.

Load Testing SAML (PING) Based SharePoint 2013 Sites

MSDN Blogs - Wed, 07/23/2014 - 07:25

I have to prove to a customer that their SharePoint infrastructure supports their user base and workload. The logical solution is to use Visual Studio Team System (VSTS) to build web tests and load tests, simulate the load, and see how the farm reacts.

There is one problem: the sites are based on SAML (PingFederate is used as the IdP).

SAML-based sites handle user log-in very differently from sites that use Windows Integrated Authentication methods such as NTLM and Kerberos. The user is redirected from SharePoint to the SAML IdP, presented with a web form to fill in a username and password, authenticated, and then redirected back to SharePoint with a SAML token to be validated, after which they can start browsing SharePoint content.

This works fine in normal user scenarios; the question is how to turn it into web tests and a load test that simulate a large number of users and requests hitting the sites, so we can see how the SharePoint farm performs.

After playing with VSTS and Fiddler and a little research, I found a very useful article online. It is written for ADFS, but the idea stays the same. I modified the test cases and made them work for PING-based SharePoint sites.

To test SAML sites, you need to create a web test that logs a user in with the SAML IdP first, then run subsequent web tests such as opening a page, navigating the sites, searching for something, and so on.

For the user log on web test, there are only two steps you have to take: 

  1. Request the SAML IdP's log in page with the correct query strings and form data, to get the SAML token back
  2. Come to SharePoint’s “/_trust/” to validate the token and get the FedAuth token
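As a rough sketch of what the second request carries (outside VSTS, in plain JavaScript), the form body posted to "/_trust/" follows the WS-Federation conventions. The field names wa, wresult, and wctx are the standard WS-Federation parameters; confirm the exact names your IdP uses in Fiddler, since environments differ:

```javascript
// Hypothetical sketch: build the form body that the "/_trust/" request
// posts back to SharePoint. "wa" is the sign-in action, "wresult" carries
// the SAML token from step 1, and "wctx" is the optional return context.
function buildTrustFormBody(samlToken, returnUrl) {
  const params = new URLSearchParams();
  params.set("wa", "wsignin1.0");   // WS-Federation sign-in action
  params.set("wresult", samlToken); // the SAML token obtained in step 1
  if (returnUrl) params.set("wctx", returnUrl); // where to land afterwards
  return params.toString();
}
```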

After getting the FedAuth token, you can start the real SharePoint performance and load tests.

All the above steps should be manually created in the VSTS and use a few variables to carry around some key factors, like the SAML token.

Below is a screenshot of the web test; you don’t have to write any code:

In the web test above, the SAML token is carried around in a variable called "wresult". I also used a CSV file to store all the test users’ credentials.

The information you have to modify for your own environment is highlighted. Your environment may have different values for those “w*” variables; the best way to find out is to use Fiddler to monitor a full SAML log-on session, where you will see the values in Fiddler’s WebForms window.

After you have the user log-on web test, you can add other recorded SharePoint web test scenarios to your load test. Just make sure you configure the user log-on web test as the initial test to execute before the other tests for each virtual user in your load test’s Test Mix.

One more thing to keep in mind: when you run load tests against your SAML-based SharePoint sites, you will put a lot of pressure on your IdP (PING in my case) as well. Make sure you check with the team that manages the IdP before you run your load tests; you don’t want to interfere with the other users and systems that your IdP supports.

When will GetSystemWindowsDirectory return something different from GetWindowsDirectory?

MSDN Blogs - Wed, 07/23/2014 - 07:00

Most of the time, the GetWindowsDirectory function returns the Windows directory. However, as noted in the documentation for GetSystemWindowsDirectory:

With Terminal Services, the GetSystemWindowsDirectory function retrieves the path of the system Windows directory, while the GetWindowsDirectory function retrieves the path of a Windows directory that is private for each user. On a single-user system, GetSystemWindowsDirectory is the same as GetWindowsDirectory.

What's going on here, and how do I test this scenario?

When Terminal Services support was being added to Windows NT 4.0 in the mid 1990's, the Terminal Services team discovered that a lot of applications assumed that the computer was used by only one person, and that that person was a local administrator. This was the most common system configuration at the time, so a lot of applications simply assumed that it was the only system configuration.

On the other hand, a Terminal Server machine can have a large number of users, including multiple users connected simultaneously, and if the Terminal Services team took no special action, you would have found that most applications didn't work. The situation "most applications didn't work" tends not to bode well for adoption of your technology.

Their solution was to create a whole bunch of compatibility behaviors and disable them if the application says, "Hey, I understand that Terminal Server machines are different from your average consumer machine, and I know what I'm doing." One of those compatibility behaviors is to make the GetWindowsDirectory function return a private writable directory rather than the real Windows directory, because old applications assumed that the Windows directory was writable, and they often dumped their private configuration data there.

The signal to disable compatibility behaviors is the IMAGE_DLLCHARACTERISTICS_TERMINAL_SERVER_AWARE flag in the image attributes of the primary executable. You tell the linker that you want this flag to be set by passing the /TSAWARE:YES parameter on the command line. (At some point, the Visual Studio folks made /TSAWARE:YES the default for all new projects, so you are probably getting this flag set on your files without even realizing it. You can force it off by going to Configuration Properties, and under Linker, then System, change the "Terminal Server" setting to "Not Terminal Server Aware".)

Note that only the flag state on the primary executable has any effect. Setting the flag on a DLL has no effect. (This adds to the collection of flags that are meaningful only on the primary executable.)

The other tricky part is that the Terminal Server compatibility behaviors kick in only on a Terminal Server machine. The way you create a Terminal Server machine has changed a lot over the years, as has the name of the feature.

  • In Windows NT 4.0, it was a special edition of Windows, known as Windows NT 4.0 Terminal Server Edition.
  • In Windows 2000, the feature changed its name from Terminal Server to Terminal Services and became an optional server component rather than a separate product. You add the component from Add/Remove Programs.
  • In Windows Server 2003 and Windows Server 2008, you go to the Configure Your Server Wizard and add the server rôle "Terminal Server."
  • In Windows Server 2008 R2, the feature changed its name again. The instructions are the same as in Windows Server 2008, but the rôle name changed to "Remote Desktop Services".
  • In Windows Server 2012, the feature retained its name but became grouped under the category "Virtual Desktop Infrastructure." This time, you have to enable the rôle server "Remote Desktop (RD) Session Host."

Terminal Services is the Puff Daddy of Windows technologies. It changes its name every few years, and you wonder what it will come up with next.

Querying WMI with a Timeout

MSDN Blogs - Wed, 07/23/2014 - 07:00
This is thanks to my coworker Keith Munson, who is at least as passionate and adept at PSH as I am:

function Get-WmiObjectWithTimeout {
    <#
    Credit to Keith Munson for this.
    #>
    param (
        [string] $Class,
        [string] $ComputerName = $env:COMPUTERNAME,
        [string] $NameSpace = 'root...(read more)

Goodbye, Outlook Tasks!

MSDN Blogs - Wed, 07/23/2014 - 06:30

When I first met Peter Bregman, he was on his book tour speaking at Microsoft Research. In 2012. He tried to convince me to give up Outlook Tasks for managing all the stuff that I wasn’t doing. His logic was sound. He repeatedly pointed out that “It’s a myth. You can’t do it all!”

I wasn’t yet ready to acknowledge that my [many] task lists were really just a pile of things that I wish I had time to do. But don’t.

I bought the book, 18 Minutes, anyway and had Peter sign it for me. He was very gracious.

My wife and kids have read it (and done better than I have at following Peter’s advice).

Sure, I adopted some of his recommended practices:

  • I made a simple, six-box set of priorities. (I even look at them once in a while!) They’re in OneNote and pinned to the start screen on my phone so that I can’t ignore them (all the time).
  • I schedule time on my calendar for the important things, instead of putting them on a list.
  • I don’t put anything on my calendar that doesn’t fall into one of the six boxes.
  • I make myself accountable for the items on my calendar every day, even if they don’t get done during the time slot I originally allocated to them.
  • I celebrate the things I do get done, which is a lot more than I did two years ago.

Even the partial adoption of the 18 Minutes philosophy has made me more productive and successful in the past two years. I highly recommend the book. Peter’s fun to read.

Fast forward 2 years, 2 months, and a fortnight.

I still have hundreds of task list items in 6 different mailboxes (two of them Office 365 mailboxes). Many of them over a year old. Most of them long overdue.

I’ve known that I have a task problem for a long time, but it was never a priority to do anything about it, and I always promise myself that I’ll review the task list someday and do those things.

So why am I saying goodbye to my task lists now?

Windows Phone 8.1 made me do it.

I’ve had the same Windows Phone 8 device since Launch Day™. I stood in line for 8 hours on Microsoft’s main campus with a couple thousand of my closest cow-orkers to get my “free” phone. I’ve never been without it since, not a single day. Until this past Monday. My reliable, beloved HTC 8X has finally gone to the great phone Valhalla in the sky. RIP, little guy. I’ll miss you.

I have a great new device, a Nokia Lumia 635 with Windows Phone 8.1 on it. I love it. It’s so yellow that you can see it from space. I’m not prone to losing devices, but just in case, you know?

Now, I’ve always had the comfort of being able to easily swipe over to my task list on my phone and look at them. Occasionally even complete (or delete) one! But over the years with Windows Phone, as it grows and matures, it is becoming clear that Tasks are an afterthought for the team that maintains the Calendar app.

I can’t swipe over to my task list anymore. I have to perform some alternative action to discover my tasks. As a creature of habit (part of my autism is being pretty change averse), this is painful. It just seems like nobody thinks tasks are important enough to treat like first class citizens the same way that email, contacts, and appointments are. I don’t know if Tasks are going the way of Outlook Notes. (I don’t think ActiveSync has supported Outlook Notes since Windows Phone 7, but I could misremember; maybe it never did. I gave up on Outlook Notes long ago.)

  • I can’t pin Tasks to the start menu or create Tasks from the start menu.
  • I can’t do a lot of the categorization, coloring, etc, on the phone that I can do with tasks in Outlook. I’m not sure the two teams are communicating with one another regularly.
  • I can’t effectively search or do more than a basic sort of tasks on my phone.
  • OneNote is where the task functionality seems to be headed. Which sort of makes sense, although they don’t really have reminders, sortability, etc, either.
  • For now, tasks do still sync with my various clouds, but… I can’t find a Windows Phone app that does what I want AND supports more than one mailbox at a time.
  • The Windows Phone API only allows read-only access to other apps for appointments and contacts, not for tasks, which means I’m really, really too lazy to reinvent the wheel and code to the Office 365 API and manage my own task store on the phone. Ick.
  • Do I really need them after all?

No. I don’t really need them after all.

Fine. I can admit when I’m wrong. I’m sorry, Peter. You were right.

With my grieving done, I’ve made one final task list in OneNote.

Goodbye, Outlook Tasks.

The rest of the story (about sand)

MSDN Blogs - Wed, 07/23/2014 - 04:36

A month ago I wrote about our newly enabled capability to measure quality of service on a customer-by-customer basis.  In that post I mentioned that we had actually identified a customer experiencing issues before they even contacted us about them, and had started working with them to understand the issues.  Well, the rest of that story…

We’ve identified the underlying issue.  The customer had an unusually large number of Team Projects in their account and some of our code paths were not scaling well, resulting in slower than expected response times.  We have debugged it, coded a fix and will be deploying it with our next sprint deployment.

Now that’s cool.  We’ve already started working with a few of the other accounts that have the lowest quality-of-service metrics.  Our plan is to make this a regular part of our sprint rhythm: every sprint, we investigate a few of the top customer accounts on the list and try to deploy fixes within a sprint or two – improving the service every sprint.


What’s that HTDELETE wait type?

MSDN Blogs - Wed, 07/23/2014 - 04:30

One of the many improvements shipped with SQL Server 2014 to the iterators used in batch mode processing – the query execution paradigm that applies to a query using at least one columnstore index (Apollo) – is the enhancement and extension of aggregation.

Among other things, aggregation now uses one shared hash table instead of one copy per thread. This significantly lowers the amount of memory required to persist the hash table but, as you can imagine, the multiple threads depending on that single copy of the hash table must synchronize with each other before, for example, deallocating it. While doing so, those threads wait on the HTDELETE (Hash Table DELETE) wait type.
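To see whether an instance has accumulated any HTDELETE waits, a quick check against the wait-stats DMV looks like the following sketch (this query is my own illustration, not from the original post):

```sql
-- Accumulated HTDELETE waits since the last restart
-- (or since the wait stats were last cleared).
select wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
from sys.dm_os_wait_stats
where wait_type = 'HTDELETE';
```

Some HTDELETE wait time is expected whenever batch mode hash aggregation runs in parallel; on its own it is not a sign of a problem.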

For further details on what the vector-based query execution method known as “batch processing” is, I’d recommend you read Eric N. Hanson’s SQL Server Columnstore Performance Tuning wiki page.


MSDN Blogs - Wed, 07/23/2014 - 03:31


My PFE colleague Sam Mesel posted the following question a few days ago on an internal distribution group:

Applying the following change on it does not give me any error message, but I see no performance improvements.

Is this the expected behavior for system databases?
And another PFE colleague, Tom Stringer, responded:

I just did a test in my environment by setting delayed durability as forced for tempdb:

alter database tempdb
set delayed_durability = forced;

select name, delayed_durability_desc
from sys.databases
where name = 'tempdb';

And then I created an XEvents session capturing the log_flush_requested event.  This event has an event column, is_delayed_durability.  While that session was running, I ran a quick query:

use tempdb;

create table myTempDbTable
(
       id int identity(1, 1) not null
);

begin tran;
insert into myTempDbTable
default values;
commit tran;

Looking at the output of the log_flush_requested event for this duration, I see that is_delayed_durability is false.  So with my quick test it looks like forcing delayed durability is not in fact recognized for tempdb.  But again this is a quick and isolated test.

create event session LogFlushRequested
on server
add event sqlserver.log_flush_requested
(
    where
    (
        database_id = 2      -- tempdb
    )
)
add target package0.event_file
(
    set filename = N'\\<Server>\<Share>\<Folder>\LogFlushRequested.xel'
);

alter event session LogFlushRequested
on server
state = start;

-- run the workload, then stop the session:

alter event session LogFlushRequested
on server
state = stop;

drop event session LogFlushRequested
on server;
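If you want to inspect the captured events after the session is stopped (rather than watching the live data), the .xel file can be read back with sys.fn_xe_file_target_read_file, roughly like this (a sketch of my own, not part of Tom's original test; adjust the path to match the session's filename):

```sql
-- Read the captured log_flush_requested events back from the target file.
select object_name as event_name,
       cast(event_data as xml) as event_data
from sys.fn_xe_file_target_read_file(
       N'\\<Server>\<Share>\<Folder>\LogFlushRequested*.xel',
       null, null, null);
```

The is_delayed_durability column is then visible inside the event_data XML for each event.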



So I decided to have a look at the source code of the product to see whether SQL Server was intentionally coded to treat TEMPDB differently and whether that behavior was written in the feature specifications or not. Here are my findings:

It’s working as per the functional specs. TempDB doesn’t honor the durability settings or commit semantics: TempDB transactions commit without waiting for the log to harden, regardless of either one. For TempDB, LCs are hardened lazily (and only eventually). (LC stands for Log Cache, an in-memory buffer in which log records are formatted. Before a log cache is written to disk, it is converted into a log block.)

Since this special behavior is not officially and publicly documented in the product, I’ve filed a documentation defect so that the Control Transaction Durability topic gets improved with this information in an upcoming documentation refresh.

Exchange: Automating a Welcome Email to New Users

MSDN Blogs - Wed, 07/23/2014 - 03:09

Since Exchange 2010, it has been possible to use scripting agents to perform additional checks/processing when certain cmdlets are run.  This feature can be used to automatically send an email to welcome new users.  An overview of scripting agents and how they work can be found on TechNet.

To send an email when a new mailbox is created, we need to respond to New-Mailbox or Enable-Mailbox (New-Mailbox is used when there is no existing AD account, and Enable-Mailbox is used when an existing AD user is mail-enabled).  Attached you'll find a sample Scripting Agent that will respond to both cmdlets, obtain the related mailbox/AD object, and use this information to send a personalised welcome email to the new mail user.

To create the personalised message, the easiest way is to write it using Outlook, then save it as HTML.  In the sample agent, I've included code that will replace any occurrence of #UserFirstName# in the welcome message with the actual first name of the user (as read from Active Directory); this can be extended as needed to include other fields.  If you have any images in the message, there are some additional steps to take. When the message is saved, you'll find the HTML file, and then a folder of the same name containing any images (and, most likely, a few other files that we can ignore).  Both the script and the welcome message need to be modified to be able to send these images.  The steps are as follows:

In the script, go to the part commented "Add any linked resource (e.g. images)".  In the sample script, you'll just find one image (the Microsoft logo, of course!).  For each image, you need to load it as a linked resource and add it to the list.  To make editing the HTML file simpler, set the ContentId to the filename (no path, just the filename).  The ContentType needs to be set to the correct MIME type for the image (e.g. image/jpeg, image/png, etc.).  The image needs to be available to the Exchange server at the given path when it is loaded, so it is usually best to store it locally on each server.  For every image, you'll have four lines in the script to load it into the message:

        $image = New-Object System.Net.Mail.LinkedResource("C:\WelcomeMessage\image001.png")
        $image.ContentId = "image001.png"
        $image.ContentType = "image/png"
        $linkedResources.Add($image)   # add to the linked resource list (the list variable's name in the attached sample may differ)
The above lines add the image as an attachment to the message, and define it as a linked resource with a specific ContentId.  The next step is to update the message HTML so that the image references point to the correct attachment (which is done by referencing the ContentId of the image).  So, you need to open the html file in your favourite text editor (Notepad is fine for this, though Notepad++ adds syntax highlighting which makes the HTML much easier to follow).  When it is open, do the following for each image in the document:

  • Search for the image filename (e.g. <CTRL><F>, then search image001.png)
  • For each instance of the image (i.e. if the document has the same image repeated, you'll have to do this for each occurrence), you will find two tags.  One is the standard <img> tag, but just before this, in an HTML comment, you'll also find a <v:imagedata> tag (this is used by Word).
  • In both tags (the Word tag and the img tag), replace the src attribute value with cid:contentId (where contentId is the filename, assuming that is what you used).  As an example:
    <v:imagedata src="Welcome_files/image001.png" o:title="MSFT_logo"/>
    is replaced by
    <v:imagedata src="cid:image001.png" o:title="MSFT_logo"/>

    <img width=129 height=43 src="Welcome_files/image001.png" alt="MSFT_logo" v:shapes="Picture_x0020_1">
    is replaced by
    <img width=129 height=43 src="cid:image001.png" alt="MSFT_logo" v:shapes="Picture_x0020_1">
  • Repeat the above process for all images.
  • Save the modified HTML.

Now the HTML is updated, all you need to do is copy it and the images to the folder specified in the scripting agent (for this example, I copied all the files to c:\WelcomeMessage - note that this folder needs to be present on each Exchange server that the agent is deployed on).  The welcome message preparation is then complete.

Additionally, the scripting agent code needs to be updated to work with your environment.  The things you'll need to check are the SMTP server (the script uses System.Net.Mail, so it sends the message via SMTP), the from address, and the message subject.  Once these changes have been made, the ScriptingAgentConfig.xml needs to be placed on each Exchange server in the folder C:\Program Files\Microsoft\Exchange Server\V14\Bin\CmdletExtensionAgents (assuming a default Exchange installation).  If there is already one there, you'll need to combine the scripts.
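For reference, the send step in such a script typically looks like the following sketch. This is my own illustration, not the attached sample's actual code: the server name, addresses, and variable names here are assumptions you would replace with your own values.

```powershell
# Illustrative sketch only - addresses, server name, and variable
# names are assumptions, not the attached sample's actual code.
$html = Get-Content "C:\WelcomeMessage\Welcome.htm" -Raw
$html = $html.Replace("#UserFirstName#", $firstName)   # $firstName read from AD earlier

# Build the HTML view and attach the linked image(s) to it
$view = [System.Net.Mail.AlternateView]::CreateAlternateViewFromString($html, $null, "text/html")
$image = New-Object System.Net.Mail.LinkedResource("C:\WelcomeMessage\image001.png")
$image.ContentId = "image001.png"
$image.ContentType = "image/png"
$view.LinkedResources.Add($image)

# Compose and send the message over SMTP
$message = New-Object System.Net.Mail.MailMessage
$message.From = "helpdesk@contoso.com"           # assumed from address
$message.To.Add($newUserSmtpAddress)             # resolved from the new mailbox
$message.Subject = "Welcome!"
$message.AlternateViews.Add($view)

$smtp = New-Object System.Net.Mail.SmtpClient("smtp.contoso.com")   # assumed SMTP server
$smtp.Send($message)
```

Because the image is added to the view's LinkedResources with a ContentId, the cid: references produced by the HTML edits above resolve to it when the message is rendered.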

[Sample Of July 23] How to create and access session variables in ASP.NET MVC

MSDN Blogs - Wed, 07/23/2014 - 02:42

Sample Download:

This sample demonstrates how to create and access session variables in ASP.NET MVC. In this sample, we demo two ways to achieve this: one is to directly access HttpContext.Current.Session; the other is to create an extension method for HttpContextBase. We type some words on the page and submit them to the controller, which saves the text to session.
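As a rough sketch of the two approaches described, the extension-method variant might look like this (the method and key names below are illustrative, not necessarily those used in the downloadable sample):

```csharp
using System.Web;

// Illustrative extension methods on HttpContextBase; names are assumptions.
public static class SessionExtensions
{
    // Store a value in session state under the given key.
    public static void SetSessionValue(this HttpContextBase context, string key, object value)
    {
        context.Session[key] = value;
    }

    // Read a value back, returning default(T) if the key is absent.
    public static T GetSessionValue<T>(this HttpContextBase context, string key)
    {
        object value = context.Session[key];
        return value == null ? default(T) : (T)value;
    }
}

// Inside a controller action, either approach works:
//   System.Web.HttpContext.Current.Session["UserText"] = text;  // direct access
//   this.HttpContext.SetSessionValue("UserText", text);         // extension method
```

The extension method keeps controllers testable against HttpContextBase, whereas HttpContext.Current ties the code to the static ambient context.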


You can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension. They give you the flexibility to search for samples, download samples on demand, manage the downloaded samples in a centralized place, and be automatically notified about sample updates. If this is the first time you have heard about the Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage.

TF255050 TFS and Reports setup

MSDN Blogs - Wed, 07/23/2014 - 02:04

Recently, whilst helping a customer upgrade to TFS 2013, we hit an issue configuring SQL Server Reporting Services within the TFS Admin console.


In this particular dual-tier configuration, SQL Server Reporting Services was set up on a different server from the TFS databases.  The solution was to ensure that the account TFS was running under was a local admin on the SSRS machine while reporting was being configured.  That permission can be removed once setup is complete.


