Escalation acceleration

MSDN Blogs - 1 hour 39 min ago

 When I recently wrote about the frightening yet fantastic world of DevOps, I discussed how escalations reach the dev team, but I skipped over when the dev team does the escalating. As you move from shipping annually to shipping weekly and daily, you depend on engineering systems and services to work 24x7. However, engineering systems often don’t get the love they deserve, and may not be resilient. As a result, escalations become part of your life.

Ideally, we would transform our engineering systems into production-quality services (at least three nines—99.9% uptime, or under nine total hours of downtime per year). By engineering systems, I mean source control, build, test automation, networking, code signing, release, ingestion, configuration, deployment, and monitoring. While these systems are constantly improving, and are a growing focus at Microsoft, many are far from three nines. That means the right solution is years away, so knowing how to handle escalations well is a skill well worth having.

How do you handle escalations well? You could run around and yell at everyone, but as much as that makes you feel better, it’s juvenile and counterproductive. A more effective approach is to establish relationships with your service teams, know the right things to say and the right data to collect, escalate to the right people, help them resolve the issue, and then constructively drive long-term fixes. Need more details? I’m at your service.

Love thy neighbor

DevOps teams need to know the folks who run all the systems and services they depend upon, including the production systems. If you wait until there’s a crisis, you won’t know who to call, and they won’t know who is calling.

While it’s wonderful for all members of your team and the service teams to be acquainted, at a minimum the leaders should know each other. That way, should an issue get escalated, there is already a basis for trust, cooperation, and understanding.

To help with the “getting to know you” process, DevOps teams should make a table of services and their associated escalation contacts. However, even when you do know who to call, you don’t always know what to say.

The secret knock

Many service teams, like those in Microsoft IT, have a specific escalation procedure. You email their escalation alias or fill out an online form. Then you receive an email from the service team with a ticket number that uniquely identifies your issue and information about priority, response time, and how to escalate further.

Typically, your priority starts fairly low—like priority 3 with a response time of 24 hours. For a serious blocking issue, you’d like to escalate further and get a faster response, but replying, “Are you kidding me? 24 hours? You stink!” doesn’t work. Instead, you need to say the secret words: “work stoppage” and/or “customer-facing release.”

Priority 1 with a 30-minute response time is usually reserved for issues with major business impact. The two issues that qualify most often are work stoppages of entire groups and release blockers for customer releases. If you really are dealing with a work stoppage or blocked release (lying only works once), reply to the ticket-number response saying, “This is a priority 1 issue. My entire organization is suffering a work stoppage,” and/or “Our customer-facing release is blocked. Please escalate this issue to priority 1 immediately.”

Sometimes you can also change the priority directly on the ticketing site (instructions are in the service’s initial response email). Sometimes you need to reach out to the service leader. For each team in your service table, note its escalation procedure. Also indicate what information the service team might need, like job numbers, server names, and IP addresses. The better you know how to escalate and what data to provide, the faster your issue will be resolved.

Eric Aside

Sometimes it’s your team that makes the mistake. Check out I messed up for what to do.

Wait for the Wolf

Once you’ve provided the right data and escalated to the right priority, you’ll start getting communications from the service team’s experts. Depending on the situation, they may start a Lync call, tracking the issue in real time.

The initial people on the call or mail thread will be broken customers, like yourself, and tier 2 support, who know enough to ask the right questions, but not always enough to fix the problem. You want someone who can fix the problem. Ask, “Who are we getting who can fix the problem? Have they been called? Are they on their way?”

You can tell when someone capable of fixing the problem has arrived by how they speak. People who can’t fix the problem say, “It could be this or that. I’m not sure.” People who can fix the problem say, “It could be this or that. Try this. If it doesn’t work, we know it’s that.” They are confident and take control. These people are precious. Follow their instructions, let them work, and send them your thanks when the problem is resolved.

Stay focused at all times

You’ll often have additional people join the call or mail thread while you’re waiting for a resolution. These new folks will mean well, but often will distract people by speculating on causes or asking questions that were answered earlier.

If you want a fast resolution, have a summary ready to copy/paste when each new person arrives, reiterate that you’re in a work stoppage and/or blocked release, and keep the whole crowd focused on getting and keeping a capable person working the problem. That’s the only way these issues get resolved.
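A sketch of such a ready-to-paste summary (every detail here is illustrative):

Issue: Source control has been offline since 6:40 AM—work stoppage for our entire org.
Ticket: #12345, escalated to priority 1 at 7:05 AM.
Symptoms: All checkins fail with timeouts; affected server names and logs are attached.
Status: Tier 2 engaged; waiting on the source control service engineer to join the call.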

Do the right thing

Once the problem is fixed, you want it to never come back. Contact the service lead, and ask to receive the root cause analysis (RCA). If the RCA is incomplete or faulty, constructively point out the weaknesses and push for a more comprehensive RCA.

With a strong RCA in hand, look over the initiatives laid out to resolve the long-term issues. If there are none, suggest some. If the ones listed aren’t sufficient, constructively suggest alternatives. Keep in mind that service teams, and dependencies in general, often have constraints on what they can and can’t fix and what level of service they can provide (another great topic for your team and the service team leaders to discuss).

If money or resources are an issue, seriously consider providing them. If you doubt it’s worth it, calculate the people-hours and money you lost during the incident and reprioritize. Chances are that putting the long-term solution in place is well worth the cost.

If you believe it’s necessary, but don’t feel comfortable committing money or resources to fix a service’s issues, then ask your dev manager to do it. If your dev manager doesn’t feel comfortable, he should go back to being a lead. It’s a dev manager’s job to understand what his business needs and what it costs to run. (More on this in On budget.) Either a working service is worth the commitment, the issue is truly rare, or your team should drop the service. If you think it’s someone else’s problem to fix, remember whose problem it really was.

Eric Aside

If you offer money and resources to a service to fix one of its problems, that’s typically enough for its members to successfully argue that their own management should provide that support. You shouldn’t make the offer without being committed to follow through, but it’s often that very commitment which makes a long-term solution seem worthwhile.

Bad things happen to good people

We’d love to live in a world where everything worked all the time. However, that’s not reality. You will have problems with every service at some point. Come up with a list of services your team uses and create a table. Include columns for the service’s team leader, its escalation contact, its procedure for raising priority, and the data that it will need. Meet with service leaders, help them understand your business and constraints, work to understand theirs, and then fill out your service table with their help.
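As a sketch, a few rows of such a service table might look like this (all names and procedures are illustrative):

Service         Team leader   Escalation contact   Raising priority                      Data they'll need
Source control  A. Lead       (escalation alias)   Reply "work stoppage" to the ticket   Server names, repo paths
Build           B. Lead       (online form)        Edit priority on the ticketing site   Job numbers, build lab name
Code signing    C. Lead       (service leader)     Call the service leader directly      Job numbers, certificate names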

When a serious issue occurs, use your service table, the right words, and the right data to escalate quickly. Push to get a capable person engaged, and then help that person succeed, while preventing newcomers from derailing the process. Once the issue is resolved, push for a long-term solution that truly addresses the root cause.

Yes, escalation is part of life in DevOps. However, you can minimize the impact, maximize your uptime, and reduce future incidents by handling issues with grace and confidence. Don’t wallow in the problem. Be part of the solution.

Spooked by Columnstore? See My Halloween Weekend Preso at #SQLSatOregon

MSDN Blogs - Fri, 10/31/2014 - 22:03

This Halloween weekend I continue my efforts to make the world of SQL DWs a better place—one table at a time—via evangelizing Columnstore at #SQLSaturday337 in Portlandia.

Columnstore Indexes in SQL Server 2014: Flipping the DW /faster Bit

If you’ve not been in a crypt since last All Hallows Eve, you likely know this is one of my staples. 

Register, see the schedule, or see the event home page on the SQL Saturday site.  I’ll look forward to seeing you here:

Mittleman Community Center
6651 SW Capitol Highway
Portland, OR 97219

Kudos to Arnie Rowland (@ArnieRowland), Paul Turley (@Paul_Turley), Theresa Iserman (@TheresaIserman), Vern Rabe, Rob Boek (@robboek), & the rest of the Oregon SQL Users Group (@osqld) Leadership Team for their superb organizational efforts.

SQL Saturday Oregon All-Stars: Speaker Tom Roush (@GEEQL)
flanked by Leadership Team members Theresa Iserman (also a speaker) & MVP Arnie Rowland.

Edit and Query documents through the Azure portal

MSDN Blogs - Fri, 10/31/2014 - 15:24

We’ve released a number of great enhancements to further improve the DocumentDB management experience.  Our October 23rd portal update lights up the following new capabilities through https://portal.azure.com:

  • Document CRUD operations – create, edit and delete documents
  • Document system properties – view system properties for each document
  • Document filtering and refresh – filter and refresh results within Document Explorer
  • Query Explorer – author and run DocumentDB SQL queries

Keep reading for some additional information on each of these new features!

Document CRUD operations

Document Explorer has been enhanced such that you can create, edit and delete documents.  To create a document, simply click the Create Document command and a minimal JSON snippet will be provided:
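The provided snippet is essentially an empty document waiting for an id—something along these lines (the exact placeholder text may differ):

{
    "id": "replace_with_new_document_id"
}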

Note that if you choose to not provide an id, Document Explorer will automatically generate a GUID as the id value. 

Once you click Save, the document is updated to reflect the supplied (or generated) id:

Additionally, the document is added to the bottom of the document list in Document Explorer:

To edit an existing document, simply select it in Document Explorer, edit the document as you see fit, and click the Save command:

Likewise, you can delete a document by selecting it, clicking the Delete command and then confirming the delete (after confirming, the document will immediately be removed from the Document Explorer list):

If you’re editing a document and decide that you want to discard the edits, simply click the discard command, confirm the discard action, and the previous state of the document will be reloaded:

Note that Document Explorer will validate that the document contains valid JSON:

And will also prevent you from saving a document with invalid JSON content:

Document System Properties

We've also added a properties command to Document Explorer which allows you to view the system properties of the currently loaded document:

Note that the timestamp (_ts) property is internally represented as epoch time, but in the portal experience we've converted the display value to a human-readable GMT format.

Document Explorer Filter and Refresh Support

Document Explorer now supports filtering the currently loaded set of documents by the id property. Simply type in the filter box, and the results in the Document Explorer list will be filtered based on your criteria.

 Finally, we've added a Refresh command to allow you to easily refresh the documents displayed in Document Explorer:

Query Explorer

We’re very happy to introduce a new Query Explorer experience, which allows you to author and run DocumentDB SQL queries.

You can launch Query Explorer from any of the DocumentDB Account, Database, or Collection blades. 
A default query of SELECT * FROM c is provided in the Query Editor:

You can accept the default query or construct your own and then click the Run command to view the results:
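For example, a query that projects a couple of properties and filters on one of them might look like the following (the property names are purely illustrative):

SELECT c.id, c.city
FROM c
WHERE c.city = "Seattle"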

By default, Query Explorer will return results in sets of 100. 
If your query produces more than 100 results, simply use the Next page and Previous page commands to navigate through the result set:

Successful queries will provide information such as the request charge, the set of results currently being shown and whether or not there are more results (which can then be accessed via the Next page command in the results pane):

Likewise, if a query completes with errors, the portal will display a list of errors which can help with troubleshooting efforts:

As always, we’d love to hear from you about the DocumentDB features and experiences you would find most valuable within the Azure portal.  Please submit your suggestions on the Microsoft Azure DocumentDB feedback forum.  If you haven’t tried DocumentDB yet, then get started here.

Enjoy!

Stephen Baron
Program Manager

Bypassing Multiple-Authentication Providers in SharePoint 2013

MSDN Blogs - Fri, 10/31/2014 - 15:06

Intro

I haven't researched whether this topic has been heavily documented, but I'd like to put in my two cents and hope this helps you. One of Microsoft's big pushes for SharePoint 2013 on-premises is to consolidate multiple web applications into a single web application. This is easier said than done, and depending on the complexity of one's environment it can take some time to unravel all of the moving parts, like moving from path-based to host-named site collections. This blog focuses on one question: how should you handle the sign-in experience when a web application requires multiple authentication providers? In my test case scenario, I require SAML authentication for my users.

Question: Wait, I thought you mentioned multiple authentication providers?
Answer: Yes, I did and keep reading below!

Because I require SAML Authentication enabled on the web application, I also require Windows Claims authentication enabled on the web application. This is required by Search in order to crawl the site.

For example, here are the authentication providers for my web application in Central Administration:

Traditional sign-in experience

After enabling dual authentication providers in a single web application, a default out of the box login page is presented to the users when they first sign in. It looks like the following:

 

This might be acceptable to smaller SharePoint environments especially if some of the users will leverage Windows Authentication and others SAML Authentication. However, most larger SharePoint Enterprises require that all users leverage SAML for Authentication purposes and leave Windows Claims Authentication enabled for Search. Because of this requirement, users shouldn't be presented with a choice (login page) and should automatically authenticate using SAML authentication.

Options for Resolution

You have a couple of options if you want to force users to always redirect to SAML authentication for initial logon while Search will leverage Windows Claims behind the scenes.

Option 1: Set /_trust/ as the custom sign in page for the default zone

It's super easy to implement this as you perform the following steps:

1. Launch Central Administration and select Application Management

2. Select Manage Web Applications

3. Select desired Web Application and choose Authentication Providers button from the Ribbon

4. Select the appropriate Zone. In my case it's the Default Zone.

5. Scroll down to Sign In Page URL section and select "Custom Sign In Page" and input /_trust/

6. Click Save
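If you'd rather script this setting than click through Central Administration, the same change can be made through the server object model. Here's a minimal farm-side console sketch—the web application URL is an assumption, and ClaimsAuthenticationRedirectionUrl is the property behind the Custom Sign In Page setting:

using System;
using Microsoft.SharePoint.Administration;

class SetSignInPage
{
    static void Main()
    {
        // Assumption: replace with your own web application URL.
        SPWebApplication webApp =
            SPWebApplication.Lookup(new Uri("https://sharepoint.contoso.com"));

        // Point the default zone's sign-in page at the trusted (SAML) provider.
        SPIisSettings settings = webApp.IisSettings[SPUrlZone.Default];
        settings.ClaimsAuthenticationRedirectionUrl =
            new Uri("/_trust/", UriKind.Relative);

        webApp.Update();
    }
}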

Option 2: Use a custom login page

It's possible to build your own custom login page that automatically directs all user-based traffic to the SAML authentication provider, thereby bypassing the out-of-the-box login page. Steve Peschka did an excellent job of outlining exactly how to accomplish this for SharePoint 2010 here. I figured why not reuse what Steve has already provided and validate that it works fine for SharePoint 2013. I had to perform some tweaks for the following reasons:

Reason 1: Steve's article walks you through redirecting to a forms-based authentication provider. I will redirect to a SAML authentication provider.

Reason 2: My code sample is different because we're dealing with redirecting to a different authentication provider and a different SharePoint product.

All of the steps below are identical to Steve's article; however, the code samples in my article differ because they force redirection to the SAML authentication provider.

See below for more details:

    1. Make a backup copy of the default.aspx file in the C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\IDENTITYMODEL\LOGIN folder
    2. In Microsoft Visual Studio, create a Windows Class Library project.
    3. Add references to System.Web, Microsoft.SharePoint.dll, and Microsoft.SharePoint.IdentityModel.dll. The identity model assembly is in the global assembly cache. Therefore, I had to get a copy and place it in the root of my drive to add my references.
    4. Strong name the assembly that you are creating, because you will place it in the global assembly cache later.
    5. Add a new ASPX page to your project. I find the easiest way to do this is to copy a page from an existing ASP.NET web application project; if you do this, you can copy the .aspx, .aspx.cs, and .aspx.designer.cs files all at the same time. Remember, in this case we want a file that is named "default.aspx", and it will be easier if there is no code written in it yet and there is minimal markup in the page.
    6. In the code-behind file (.aspx.cs file), change the namespace to match the namespace of your current project.
      1. Change the class so that it inherits from Microsoft.SharePoint.IdentityModel.Pages.MultiLogonPage.
      2. Override the OnLoad event:

      My Code sample for Default.aspx.cs:

      // Microsoft provides programming examples for illustration only,
      // without warranty either expressed or implied, including, but not
      // limited to, the implied warranties of merchantability and/or
      // fitness for a particular purpose.
      //
      // This sample assumes that you are familiar with the programming
      // language being demonstrated and the tools used to create and debug
      // procedures. Microsoft support professionals can help explain the
      // functionality of a particular procedure, but they will not modify
      // these examples to provide added functionality or construct
      // procedures to meet your specific needs. If you have limited
      // programming experience, you may want to contact a Microsoft
      // Certified Partner or the Microsoft fee-based consulting line at
      //  (800) 936-5200
      // For more information about Microsoft Certified Partners, please
      // visit the following Microsoft Web site:
      // https://partner.microsoft.com/global/30000104

      using System;
      using System.Diagnostics;
      using Microsoft.SharePoint;
      using Microsoft.SharePoint.WebControls;
      using Microsoft.SharePoint.IdentityModel;

      namespace LoveAuth
      {
          public partial class authIsFun : Microsoft.SharePoint.IdentityModel.Pages.MultiLogonPage
          {
              protected override void OnLoad(EventArgs e)
              {
                  base.OnLoad(e);
                  try
                  {
                      // If this is not a post back, the user has not yet selected which
                      // authentication provider they want to use.
                      // In this case, we always send the user to the SAML (trusted
                      // identity provider) sign-in page.
                      if (!this.IsPostBack)
                      {
                          // Build the query string that names the trusted provider
                          // and the URL to return to after authentication.
                          System.Text.StringBuilder qp = new System.Text.StringBuilder("trust=SAMLProvider&ReturnUrl=%2f_layouts%2f15%2fAuthenticate.aspx%3fSource%3d%252F&Source=%2F");
                          // Redirect to the SAML-based authentication login page.
                          this.Response.Redirect("/_trust/default.aspx?" + qp.ToString());
                      }
                  }
                  catch (Exception ex)
                  {
                      Debug.WriteLine(ex.Message);
                  }
              }

              protected void Page_Load(object sender, EventArgs e)
              {
              }
          }
      }

       

    7. Compile the application so that you can get the strong name for it and add it to the markup for default.aspx
    8. Copy the following markup into default.aspx; you just have to change the class from which the page inherits (in this example, "MultiAuthLoginPage._Default,MultiAuthLoginPage, Version=1.0.0.0, Culture=neutral, PublicKeyToken=907bf41ebba93579"). Note that all I did was copy it from /_login/default.aspx and replace the Inherits value with my custom class information.

    My Markup for default.aspx looks like:

    <%@ Assembly Name="LoveAuth, Version=1.0.0.0, Culture=neutral, PublicKeyToken=7a7aa1fb5bbe489a" %>
    < %@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="LoveAuth.authIsFun" DynamicMasterPageFile="~masterurl/default.master" %>
    < %@ Import Namespace="Microsoft.SharePoint.ApplicationPages" %>
    < %@ Assembly Name="Microsoft.SharePoint.IdentityModel, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
    < %@ Register Tagprefix="SharepointIdentity" Namespace="Microsoft.SharePoint.IdentityModel" Assembly="Microsoft.SharePoint.IdentityModel, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
    < %@ Assembly Name="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"%>
    < %@ Import Namespace="Microsoft.SharePoint.WebControls" %>
    < %@ Register Tagprefix="SharePoint" Namespace="Microsoft.SharePoint.WebControls" Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
    < %@ Register Tagprefix="Utilities" Namespace="Microsoft.SharePoint.Utilities" Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
    < %@ Import Namespace="Microsoft.SharePoint" %> <%@ Assembly Name="Microsoft.Web.CommandUI, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
    <%@ Register Tagprefix="SharePoint" Namespace="Microsoft.SharePoint.WebControls" Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
    < %@ Register Tagprefix="Utilities" Namespace="Microsoft.SharePoint.Utilities" Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
    < %@ Register Tagprefix="asp" Namespace="System.Web.UI" Assembly="System.Web.Extensions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" %>
    < %@ Import Namespace="Microsoft.SharePoint" %>
    < %@ Assembly Name="Microsoft.Web.CommandUI, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

    <asp:Content ID="Content1" ContentPlaceHolderId="PlaceHolderPageTitle" runat="server">
        <SharePoint:EncodedLiteral runat="server"  EncodeMethod="HtmlEncode" Id="ClaimsLogonPageTitle" />
    < /asp:Content>
    < asp:Content ID="Content2" ContentPlaceHolderId="PlaceHolderPageTitleInTitleArea" runat="server">
        <SharePoint:EncodedLiteral runat="server"  EncodeMethod="HtmlEncode" Id="ClaimsLogonPageTitleInTitleArea" />
    < /asp:Content>
    < asp:Content ID="Content3" ContentPlaceHolderId="PlaceHolderSiteName" runat="server"/>
    < asp:Content ID="Content4" ContentPlaceHolderId="PlaceHolderMain" runat="server">
    < SharePoint:EncodedLiteral runat="server"  EncodeMethod="HtmlEncode" Id="ClaimsLogonPageMessage" />
    < br />
    < br />
    < SharepointIdentity:LogonSelector ID="ClaimsLogonSelector" runat="server" />
    < /asp:Content>

      Finally, for each WFE: register your assembly in the global assembly cache and copy your new custom default.aspx page into the C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\IDENTITYMODEL\LOGIN folder.
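      For example, from an elevated command prompt on each WFE (a sketch—gacutil ships with the Windows SDK tools, and the assembly name matches this walkthrough):

      gacutil -i LoveAuth.dll
      copy /Y default.aspx "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\IDENTITYMODEL\LOGIN"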

      Resources: http://msdn.microsoft.com/en-us/library/office/hh237665(v=office.14).aspx

       

      Thanks!

      Russ Maxwell, MSFT

      How to convert a jagged array to 2D array

      MSDN Blogs - Fri, 10/31/2014 - 14:00

      If the dimensions of the 2D array are known, you can use this code:

      string[,] arr = new string[5, 4];
      string[] filelines = File.ReadAllLines("file.txt");
      for (int i = 0; i < filelines.Length; i++)
      {
          var parts = filelines[i].Split(',');    // Note: no need for .ToArray()
          for (int j = 0; j < parts.Length; j++)
          {
              arr[i, j] = parts[j];
          }
      }
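
      Note that the snippet above assumes file.txt has at most 5 lines of at most 4 comma-separated fields each; extra data would throw an IndexOutOfRangeException. If the dimensions aren't known up front, a sketch like the following (same hypothetical input file; requires using System.IO and using System.Linq) computes them from the jagged source first:

      // Build the jagged array, then size the 2D array from it.
      string[][] jagged = File.ReadAllLines("file.txt")
                              .Select(line => line.Split(','))
                              .ToArray();
      int rows = jagged.Length;
      int cols = rows == 0 ? 0 : jagged.Max(row => row.Length);   // widest row wins
      string[,] arr2 = new string[rows, cols];
      for (int i = 0; i < rows; i++)
      {
          for (int j = 0; j < jagged[i].Length; j++)
          {
              arr2[i, j] = jagged[i][j];   // shorter rows leave trailing nulls
          }
      }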

      MonoGame for the XNA Developer (#Gamedev w/ @ScruffyFurn)

      MSDN Blogs - Fri, 10/31/2014 - 13:30

      When XNA was put into “stasis,” a huge hole was created in the game development tool space. In this latest episode of #Gamedev w/ @ScruffyFurn, I rant about the heartache I suffered with the retirement of XNA and take a look at my newly budding relationship with MonoGame, the open source implementation of XNA. Special guest Bryan Griffiths shares great insights and offers amazing advice from his years of experience in “AAA” and indie game development.

      Share your feedback!

      Let us know what you think of the show. Your feedback helps shape the show so reach out and lend your voice.

      • Got a great idea or topic for a rant? Let us know. Tweet to @cdndevs “#ScruffyFurnRants” or comment on our Facebook wall.
      • Have feedback? Tweet to @cdndevs with the “#TellScruffyFurn” hashtag or comment on our Facebook wall.
      Sign On ScruffyFurn!

      We determine the success of these endeavors, not only by the amount of positive “noise” that you all make, but more importantly, by the number of people who are interested (the number of people who watch). So if you want more, share this page with everyone who you’d think would enjoy watching!

      November 5th 2014 webcast for delivering applications from the cloud to your workforce with Azure RemoteApp

      MSDN Blogs - Fri, 10/31/2014 - 13:07

      Please join us on November 5th, 2014 for an overview webcast all about Azure RemoteApp.  In this webcast, we will discuss delivering applications from the cloud with Azure RemoteApp. This scalable, agile cloud service will enable your users to stay productive on the go, while safeguarding your sensitive corporate resources.  Demi Albuz and Thomas Willingham will be leading the discussion from Microsoft.

      To prepare for this webcast, we recommend that you view this brief video to learn about the latest Enterprise Mobility Suite features and capabilities. 

      Register here.

      Thank you

      ConvertFrom-String: Example-based text parsing

      MSDN Blogs - Fri, 10/31/2014 - 12:36

       

      Intro

      I’m sure most of you are familiar with the powerful tools for text parsing available in PowerShell. A presentation at the PowerShell Summit a couple of weeks ago provides a good overview of these and mentions a new PowerShell cmdlet, ConvertFrom-String, that was introduced in Windows Management Framework 5.0 Preview September 2014. ConvertFrom-String lets you parse a file by providing a template that contains examples of the desired output data rather than by writing a (potentially complex) script.

      A Simple Example

      The namesAndCities.input.txt attached to this post contains simple names together with cities, and namesAndCities.namesOnly.template.txt copies the first two records and wraps their names in template markup:

      {Name*:Craig Trudeau}

      Buffalo, NY

       

      {Name*:Merle Baldridge}

      Baltimore, MD

       

      This defines the extraction of all names from the file. (In this case a single example would have worked due to the distinct formatting of the lines, but in general it is better to supply two examples to give FlashExtract—the learning engine underneath ConvertFrom-String, described below—a better idea of the context.) Now let’s run it:

       

      gc .\namesAndCities.input.txt | ConvertFrom-String -templateFile .\namesAndCities.namesOnly.template.txt

       

      ExtentText                                Name

      ----------                                ----

      Craig Trudeau ...                         Craig Trudeau

      Merle Baldridge ...                       Merle Baldridge

      Vicente Saul ...                          Vicente Saul

      Lydia Parsons ...                         Lydia Parsons

      Cheryl Booth ...                          Cheryl Booth

      Shannon Holland ...                       Shannon Holland

      Libby Stevens ...                         Libby Stevens

      Thomas Donnelly ...                       Thomas Donnelly      

       

      The rest of this post describes how ConvertFrom-String works and develops a more full-featured address file and templates for these addresses. I’ll also describe some ways to figure out what to change when you don’t get the results you want.

      How ConvertFrom-String Works

      ConvertFrom-String is built on top of FlashExtract, a program-synthesis technology developed by Microsoft Research. FlashExtract uses an improved version of the substring-extraction techniques that were developed in Flash Fill, which ships in Excel 2013.  In Flash Fill, those substrings are extracted from one or more source strings and combined into a target string. FlashExtract learns substrings to perform a top-down partitioning of the file into regions that are either nested or completely non-overlapping, and then to extract the contents of some subset of those regions as the desired output strings. In ConvertFrom-String, the regions are defined by examples in the template markup, and the substrings that are extracted by FlashExtract become the values of properties on a sequence of output objects.

      The program synthesis in FlashExtract is based upon analyzing the substrings surrounding the beginning and ending of each example region and generating programs that are combinations of various primitive string operations such as regular expressions. For each region, it finds the set of these programs that are consistent with all examples for that region and ranks them. The combination of the best-ranking sub-program for each region becomes the final FlashExtract program.

      Defining a Structure and Fields

      Now let’s look at a more realistic address file and template. Here are the first two examples in addresses.PersonInfo.template.txt:

      {PersonInfo*:{Name:Craig Trudeau}

      {Address:{Street:4567 Main St NE}

      {[string]City:Buffalo}, {State:NY} {Zip:98052}}

      {Phone:(425) 555-0100}}

       

      {PersonInfo*:{Name:Merle Baldridge}

      {Address:{Street:1234 First Ave}

      {City:Baltimore}, {State:MD} {Zip:98101}}

      {Phone:(425) 555-0101}}

       

      In this template we define a hierarchy with the PersonInfo structure defined at the highest level and within this the individual fields of the structure, including another structure for Address.  In a bit more detail we are defining:

      • Examples of regions that define a sequence of structures named PersonInfo.  The ‘*’ suffix defines a sequence within its parent region. In this case, the parent region is the entire file.
      • An example of a region that defines a non-sequence (there is no ‘*’ suffix) leaf value Name within the PersonInfo structure.
      • An example of a region that defines a non-sequence structure Address within the PersonInfo structure, along with the fields of Address. If all addresses in the file had exactly the same format, only the first PersonInfo definition would be necessary. However, the second Address.Street does not have the “NE” suffix. If we did not define the Address structure and its Street field in the second example, our extracted addresses would only recognize Streets that had such a suffix. By supplying the second example, we tell FlashExtract to be more flexible in its extraction of Street. (In this particular example, simply defining the second PersonInfo structure, even without its Address.Street definition, results in a correct program. However, it is safest to provide the definition for the field, to ensure that changes in other areas such as ranking will continue to give the desired results.)
      • Notice the [string] type cast on City.  This is the default, so it is merely illustrative here. As in PowerShell, you can specify a type cast to any .NET type.  For example, to use Sort-Object on a field with integers, define it with [int] so it is sorted as integers rather than text.
      Debugging ConvertFrom-String

      Now let’s look at a program that FlashExtract might generate for these two examples.  To do so, pass the -Debug parameter to the ConvertFrom-String cmdlet (“cfs” is the alias for ConvertFrom-String):

      gc .\addresses.input.txt | cfs -templateFile .\addresses.PersonInfo.template.txt -Debug

       

      Important Note: The -Debug output shown here is specific to the current preview version of ConvertFrom-String and will change in later versions. And as always in PowerShell, text output is not a contract.

      Running this gives us 8 programs, one for each field in the template.  Here are the first two:

      DEBUG: Property: PersonInfo

      Program: EndSSL(ESPL((StartsWith(Left parenthesis(\(), Number([0-9]+(\,[0-9]{3})*(\.[0-9]+)?), Right parenthesis(\)))): 0, 1, ...: ε...ε, 0)Line Separator([ \t]*((\r)?\n)

      )...Camel Case(\p{Lu}(\p{Ll})+), WhiteSpace(( )+), Camel Case(\p{Lu}(\p{Ll})+), -1)

      -------------------------------------------------

      Property: Name

      Program: ESSL((EndsWith(Camel Case(\p{Lu}(\p{Ll})+), WhiteSpace(( )+), Camel Case(\p{Lu}(\p{Ll})+))): 0, 1, ...: ε...ε, 1 + ε...ε, 0)

       

       

      Before we dive into the details, here’s a high-level view of what’s happening. FlashExtract first learns how to recognize the start and end positions of the PersonInfo structure examples. Then it evaluates the subfield examples within each of those structure examples to learn those subfields’ boundaries. In this case, we have two examples of the Address structure, one within each of the two PersonInfo examples.  For each Address example, FlashExtract learns programs to recognize the start and end positions of that Address example within its parent PersonInfo example, then combines these to create a single substring-recognition program that satisfies both Address examples. In the same way, we learn a substring-recognition program for each of the fields Street, City, State, and Zip within Address, and for the field Phone within PersonInfo.

      Now let’s look in more detail at the -Debug output, starting with PersonInfo.

      A line sequence is a subset of the lines in the file that match certain criteria. A position may be either a constant or a location in a string where the substrings to either side of that location match certain regular expressions.  In this example, for the multiline PersonInfo region FlashExtract first learned the line sequence that identifies the end positions, then learned a function that, for each line in this sequence, backs up to identify the region start positions. (FlashExtract can also learn it in the other direction, first learning the starting position sequence and then a function that moves forward to find the ending position).  In the above program, EndSSL is the function that drives this process, ESPL defines the ending-position line sequence, and after ESPL is the function that maps an ending position to the start position. So the PersonInfo program breaks out as:

      EndSSL(

         ESPL((StartsWith(/*area code*/)):

                  0, 1, ...: // This represents a filter that accepts all

                             // matching lines (starts at first position

                             // and increments by one.

                  e...e, 0   // The end position is the last position in

                             // the line.

               )

          Line Separator()... // Find the start position by looking for

                             // a line start

              Camel Case(), WhiteSpace(), Camel Case(), // that is followed

                             // by two "names" separated by whitespace;

                             // the start position is at the beginning of

                             // the first "name".

              -1             // Move backward from the end position to

                                   // find the match.

          )

       

      Notice that in the comments above, “name” is in quotes. So far our examples assume that a name consists of an initial uppercase letter followed by lowercase letters (sometimes called “proper case”). We’ll see later that this is not always correct.

      As mentioned above, this output will change, but to help you diagnose problems in the meantime, here is a list of the sequence-generating functions you might see in the current version:

      ESSL: This returns a sequence of (single-line) substrings by finding a sequence of lines and extracting a substring from each line.

      EndSSL: This returns a sequence of (possibly multiline) substrings by finding a sequence of ending positions, and for each ending position, finding the starting position.

      StartSSL: This returns a sequence of (possibly multiline) substrings by finding a sequence of starting positions, and for each starting position, finding the ending position.

      ESPL: This returns a sequence of positions by finding a sequence of lines, and for each line, finding a position within it.

      SPL: This returns a sequence of positions. It takes four parameters: re1, re2, init, incr. SPL finds all positions that match regex re1 on its left and match regex re2 on its right. From this sequence it selects every incr’th item starting at index init.

       

      Why does FlashExtract break the file into lines?  Consider the following:

      $123 one two three four {CapitalLetters*:ABC} five six seven eight

      123 put four words here DEF and another four here

      123 2001 2002 2003 2004 GHI 2005 2006 2007 2008
      $123 eleven twelve thirteen fourteen {CapitalLetters*:JKL} fifteen sixteen seventeen eighteen

       

      Here we want to capture CapitalLetters only if the line starts with $. However, learning the start and end positions of CapitalLetters can be done over a much shorter span (“extract an all-capital sequence that is between two lower-case alphabetical sequences”), which would mistakenly capture the DEF line.  By splitting the file into lines, we can use shorter ranges on both line selection and position selection.

      Now that we’ve got the outer structure, let’s look at the Name field inside it.

      Property: Name

      Program: ESSL((EndsWith(Camel Case(\p{Lu}(\p{Ll})+), WhiteSpace(( )+), Camel Case(\p{Lu}(\p{Ll})+))): 0, 1, ...: ε...ε, 1 + ε...ε, 0)

       

      This breaks out as:

       

      ESSL((EndsWith(/*"name", whitespace, "name")): // find lines that end with this pattern

                  0, 1, ...: // accept all matching lines

                  e...e, 1   // For each matching line, the start position is the first occurrence of an empty string

                  +             // (separate start and end positions)

                  e...e, 0)  // and the end position is the end of the line

       

      Now let’s see this in action.  Add Format-Table to the command line.

      PersonInfo                                                                                                                                                              

      ----------                                                                                                                                                              

      {@{ExtentText=Craig Trudeau ...   

      {@{ExtentText=Merle Baldridge ...                     

      {@{ExtentText=Los Angeles, CA 98102...

      {@{ExtentText=Randolph LaBelle...

      {@{ExtentText=Lydia Parsons ...

      {@{ExtentText=Cheryl Booth ...

      {@{ExtentText=Shannon Holland ...

      {@{ExtentText=San Diego, CA 98107...

      {@{ExtentText=Hannah McStorey...

      {@{ExtentText=Thomas Donnelly ...   

       

      Notice that we have “Los Angeles” and “San Diego” where we expect names.  These city names contain a space, so they match the beginning position program for PersonInfo.  Let’s provide another example:

      {PersonInfo*:Vicente Saul

      {Address:2345 Second Ave SE

      {City:Los Angeles}, CA 98102}

      (425) 555-0102}

       

      Because the field we’re concerned about providing another example for is City, we only need to provide its direct hierarchy; we don’t need Name, Street, State, etc.

      Now the Format-Table output looks good.  Let’s dig in a bit more with Format-List.  Now we see a couple of incorrect names:

      PersonInfo : {@{ExtentText=Randolph LaBelle

                   3456 Third Ave

                   Fargo, ND 98103

                   (425) 555-0183; Name=3456 Third Ave; Phone=(425) 555-0183}}

       

      PersonInfo : {@{ExtentText=Hannah McStorey

                   8901 Pine St

                   Portland, OR 98108

                   (425) 555-0108; Name=8901 Pine St; Address=; Phone=(425) 555-0108}}

       

       

      As mentioned earlier, learning that a name has an uppercase letter only at the beginning is not always correct.  We’ll add one more example, again defining only the fields necessary to resolve the ambiguity:

       

      {PersonInfo*:{Name:Randolph LaBelle}

      3456 Third Ave

      Fargo, ND 98103

      (425) 555-0183}

       

      With this, we can see that the full output with Format-Custom (or your favorite formatting command) is correct.  In fact, we can now remove the example for Merle Baldridge because the example we added for Randolph LaBelle has no suffix on the street.

      Now let’s see how we can write the examples even more easily.

      Defining Implicit Structures

      Above, we defined the PersonInfo structure explicitly, with a name and boundaries.  This is not always necessary.  FlashExtract can often infer the boundaries of a parent structure if the first subfield of that structure is defined as a sequence.  In this case, FlashExtract learns an implicit region for the parent structure that extends from the beginning of one instance of the first subfield to just before the beginning of the next instance.  The attached file addresses.ImplicitStruct.template.txt modifies our example above to illustrate this:

       

      {Name*:Craig Trudeau}

      {Address:{Street:4567 Main St NE}

      {City:Buffalo}, {State:NY} {Zip:98052}}

      {Phone:(425) 555-0100}

       

      {Name*:Vicente Saul}

      2345 Second Ave SE

      {!Name*:Los Angeles}, CA 98102

      (425) 555-0102

       

      {Name*:Randolph LaBelle}

      3456 Third Ave

      Fargo, ND 98103

      (425) 555-0183

       

      Now there is no PersonInfo field defined, and Name has become a sequence (with the ‘*’ suffix). This also illustrates another aspect of field definitions, a negative example. Without the {!Name*:Los Angeles} definition, “Los Angeles” matches a line-starting expression that will extract it as a name. As before, we need Randolph LaBelle as an example of capitalization inside a name.

      The only difference in -Debug output from the final template file with PersonInfo is that we don’t have a PersonInfo property here, and Name has a different program:

      DEBUG: Property: Name

      Program: ESSL((SucceedingStartsWith(Number([0-9]+(\,[0-9]{3})*(\.[0-9]+)?), WhiteSpace(( )+), Camel Case(\p{Lu}(\p{Ll})+))): 0, 1, ...: ε...ε, 1 + ε...ε, 0)

       

      Our negative example for “Los Angeles” resulted in the best-ranked program changing from looking for a line starting with Camel Case to looking for a line followed by a line that starts with a number.

      Although these examples don’t show it, it is possible to define sequences within any parent structure. This includes defining nested sequences within an implicit parent structure:

                      {Name*:…}

                                      {Phone*:…}

                                      {Phone*:…}

                      {Name*:…}

       

      However, it is often easier to remove ambiguity by defining explicit regions. This is particularly important when there may be zero instances of a child sequence within a parent sequence.

      Syntax Summary

      As you can probably piece together from the examples above and the release notes, the syntax of a template field specification is:

                      {[optional-typecast]name sequence-spec:example-value}

      The field specification is enclosed with curly braces. If there are curly braces already in the file they must be escaped by adding a ‘\’ before them (and any ‘\’ characters already in the file must be escaped by doubling them).  Optional-typecast is the usual PowerShell type-cast syntax, a .NET type within square brackets (such as [string] or [int]).  Name is the name of the field.  Sequence-spec is “*” if the field will be a sequence (i.e. will have multiple instances) within its parent, else empty.  All of these are text that is added to the actual data in the template file. “Example-value” is the actual data. This data starts with the character immediately after the “:” following name and ends with the character immediately preceding the field’s closing “}”, including all whitespace.
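      For instance, {[int]Zip:98052} declares a non-sequence integer field named Zip whose example value is the literal text 98052, while {Name*:Craig Trudeau} declares a sequence field named Name; and if the data itself contained a curly brace, the template would show it escaped as \{.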

      As illustrated above, fields may be nested; this is done by creating a field specification within the “example-value” of the enclosing field.

      ConvertFrom-String does not recognize regular expressions in the field value; it interprets them as the literal string.  For example, this LazyWinAdmin post provides a very nice example of using ConvertFrom-String with the output of netstat.  However, the use of “{State:\s}” works not because it is recognized as a regular expression, but because it tries to match against the literal string “\s”, which happens to result in FlashExtract selecting a program that returns a blank field.  Using “{State:\q}” or “{State:#}” works as well.  The latter, because it does not contain an alphabetic character, learns a program that avoids a problem in the template that (as of this writing) converts “TIME_WAIT” to “WAIT”; this could also be fixed by adding examples of State with the “_” character.

      Common Problems

      Many problems are due to over-learning:

      ·         Specific case.  Sometimes a word will begin with lowercase when your examples are all uppercase, or as above, an uppercase letter will be in the middle of a word.  Or there may be special characters like underscore, apostrophe, or hyphen.

      ·         A program may learn to search for only a specific number of characters, or for a specific string, if all the examples are the same in certain areas. For example, I recently parsed a file that used spaces for alignment; both my examples had one digit followed by 9 spaces, so it missed lines where 2 digits were followed by 8 spaces.

       

      Over-learning can be fixed by adding diverse examples to relax the restrictions. In other cases, as with “Los Angeles” above, the learned program may be too lenient and you may need to specify a negative example.

       

      Other things to look for:

      ·         Be sure that spaces in the template file match those in the data file.  Sometimes there may be trailing spaces or spaces between fields in the original file, and these may be relevant in learning.

      ·         If you are having trouble getting the correct region boundaries with implicit regions, consider adding explicit regions.

       

      We would love to know how ConvertFrom-String meets your needs or where it does not do what you expect.  Please give it a workout and send your examples, problems, suggestions, and other feedback to psdmfb--at--microsoft.com. Happy parsing!

       

      Ted Hart [MSFT]

      Microsoft Research

      Download: ConvertFrom-String Examples

      The MVP Global Summit: A Virtuous Cycle

      MSDN Blogs - Fri, 10/31/2014 - 12:30

      The following post was written by MVP Program Event Manager Paulette Suddarth

       

      Widely recognized as one of the most important community events in the world, next week’s MVP Global Summit will be, as it always is, a reflection of the community itself. MVPs are instrumental in every facet of the event, helping us plan and make continual improvements as well as infusing every jam-packed moment with their passion, curiosity and expert insights.

      Core to the mission of the MVP Global Summit is the free exchange of ideas between MVPs and Microsoft teams about Microsoft technologies—how they work now and what’s in the works for the future. It provides all of us at Microsoft the opportunity to receive direct feedback from hands-on experts.

       

      And that’s true of the event itself. Last year, after we significantly improved the Summit feedback tool—based on MVP feedback—we gained a record 75% response rate from attendees. This year, we’re hoping to hear from 80% of Summit participants. From the feedback they provide us at the end of each Summit, to their questions and suggestions this year on a private Summit Yammer group, hearing from this community makes all the difference.

      This year, we’re fortunate to have the opportunity to hear from a lot of MVPs. In what will be the largest MVP Global Summit in at least a decade, we’re expecting well over 1,800 MVPs and other influencers to fly in from nearly 80 countries to meet with more than 300 Microsoft product team members.

       

      In addition to sharing their valuable feedback, a number of MVPs present their innovations in using Microsoft technologies at the MVP Showcase. This year, sixty-seven MVPs from around the world will demonstrate and answer questions on topics ranging from how the Surface can support students with reading and writing disabilities to developing Universal Windows Applications across multiple form factor devices.

      And then the deep dive sessions will begin. MVPs will be invited to participate in nearly 50,000 hours of learning and countless conversations with members of Microsoft’s community and, as part of One Microsoft, sessions will be more broadly available to participants. There also are a number of Microsoft conferences happening in the next week or so, and they will be conveniently located next to the MVP Global Summit so MVPs can make the most of their time here at Microsoft world headquarters.

       

      Throughout the Summit, MVPs can manage their schedule with a cross-platform phone app created by MVPs. This year’s app has been downloaded twice as much as the previous year’s.

      Finally, to highlight Microsoft’s vision for the cloud and the amazing contributions MVPs have made in helping people make the move to the cloud, the community will be welcomed at the closing night attendee party by Captain Cloud and the Community Crew! You can find out more about Captain Cloud and the crew on our Facebook page.

      Episode 2 of #StartupLife: Board Shorts and Neckties

      MSDN Blogs - Fri, 10/31/2014 - 11:18

      This post is part of an ongoing series of videos inspired by the community and brought to you by BizSpark. Share your own #StartupLife experiences on Twitter for a chance to be featured in our next Vine video. Start-up life can be tough; BizSpark is there to help.

       

      Does this scene look familiar to you? The lines between work and personal life become blurred, non-existent really, when you're leading the startup lifestyle. Fortunately, working in the startup world has afforded you some skills that come in handy in a moment like this. Apply a little resourcefulness, sprinkle in a dash of ingenuity and...voilà! The board shorts and necktie look is born.

      This is the second video in our series (check out our first post: Real Start-ups Run on Ramen), and it was inspired by @RHolmes4, who posted the tweet below. Rich knows that at a start-up, duty calls at all hours of the day, and being in your pajamas doesn't have to stop you.

       

       

      Do you have some insight into the start-up lifestyle? Share your anecdotes, stories, and ideas using the #StartupLife hashtag on Twitter for a chance to be featured in our next Vine video. Want to scale your startup? Connect with us on Twitter @BizSpark for resources and tips on growing your business. The start-up lifestyle can be tough; BizSpark is there to make it a little easier.

      Epic Saga Chapter 4: Wherein I Discover Ripple and the Multi-Device Hybrid Apps Extension for Visual Studio

      MSDN Blogs - Fri, 10/31/2014 - 10:27

      This is the fourth post in the series: Uploading Images from PhoneGap/Cordova to Azure Storage using Mobile Services

      In the previous chapter of this saga, I was frustrated at not being able to get the Eclipse IDE to even run on my laptop (although I had it running a few months back). After wasting an entire morning, I remembered that the Multi-Device Hybrid Apps extension for Visual Studio (let’s just call it the Cordova extension for short) included an Android emulator and a bunch of other good stuff. This seemed to me the perfect time to try out this new x-plat offering from Microsoft—and I really liked it. In fact, I did a whole blog post on it: The Multi-Device Hybrid Apps extension for Visual Studio Kinda Rocks. Please read that post, as I won’t go over all the goodness again; this series is about me trying to get my Cordova app, which uploads raw JPEG binaries directly to Azure Blob storage, to run.

      Although I had installed the Cordova extension for Visual Studio just to get a working Android emulator, I thought, “Why not give Ripple, the Chrome-based emulator, a try?” As I mentioned in my other post, Ripple supports both iOS and Android and can emulate several devices on each. Here’s the TodoList app running on Ripple as an Android Nexus Galaxy:

      Pretty cool…there’s geolocation and accelerometer support, but what about camera support? The Android emulator (I thought) had camera support, but would Ripple? One way to find out.

      I deployed my app to Ripple as Android Nexus Galaxy and tried to add a new item (which is supposed to then load the camera to take a capture). Uh oh!

      I learned that I Haz Cheeseburger?!?! is really lolspeak for “Oops, Ripple couldn’t load a resource that you need.” Crud. You’re supposed to be able to pass in some success or response data, but I’ve never tried that. Instead, I clicked Success!, and wouldn’t you know, the app actually ran.

      At this point, I ignored the previous error and was excited about my chances. I tried to upload an image and got this…

      OK, so that previous obscure error must have been Ripple saying (as a lolcat) that it couldn’t load the capture plugin…no camera. But check it out! Ripple was instead letting me choose a local file to upload and pretend that I had just taken the snap. Let’s Use selected file (a picture of a nice latte)…

      Oh no!!!! Ripple also doesn’t support readAsArrayBuffer!!

      Well, Ripple tried its best. Now, back to the Android emulator…

      Stay tuned for the next installment…Chapter 5: Wherein I Learn to Hate the Android Emulator

      Cheers!

      Glenn Gailey

      It's all about being Open!

      MSDN Blogs - Fri, 10/31/2014 - 09:30

       

      Why Open?

One of the coolest things happening at Microsoft is our embrace and support of openness. This is about how we, as a company, collaborate with others in the industry and how we listen to our customers. It's about the choice that we give to our customers and developers. From supporting Linux, Drupal, Java, Hadoop, PHP, NodeJS, HTML5, and Python to extending that support to the cloud through Microsoft Azure, Microsoft has embraced openness. In fact, Microsoft has partnered with 150+ standards bodies and 400+ working groups around the world to ensure that our technology works with everyone else's.

       

      Cloud and Open Technologies

Microsoft Azure is an open, flexible, and scalable platform that is a great choice for app creation. Azure supports virtual machines running several Linux flavors, such as CentOS, Ubuntu, and SUSE. Not only does it support open platforms, it also supports open development tools. As mentioned above, the support for the various development tools is pretty exhaustive. For example, look at what we have in store for Azure and PHP at the PHP Development Centre, a rich resource of tutorials and documentation that will help you get started with development on the cloud. Also look into PHP Tools for Visual Studio, which provides a familiar editor for PHP, HTML/JavaScript/CSS support, and, most importantly, integration with Azure itself.

      How can you get involved with this now?

TrueNorthPHP, one of the biggest conferences for the PHP community in Toronto, takes place on Nov 6th, 7th, and 8th at the Microsoft campus in Mississauga. The conference showcases world-class speakers covering topics such as clean application development, security, and a whole gamut of other relevant and interesting subjects. One of the most important components of the conference is a hackathon, the Azure API Challenge, taking place on Day 2, Nov 7th.

Again, the details of the event are as follows:

      Date: Nov 6th, 7th, 8th

      Venue: Microsoft Canada, 1950 Meadowvale Blvd. Mississauga, ON L5N8L9

Come hack with us, attend some of the best talks, and let us celebrate the movement to openness together. You can connect with Mickey (@ScruffyFurn) or me, Adarsha (@AdarshaDatta), anytime for further details.

      Pumpkinduino Part 4: The final hardware and software put together

      MSDN Blogs - Fri, 10/31/2014 - 09:00
Previous Post: Pumpkinduino Part 3. Ok, I procrastinated, got distracted, then got sick with one of those icky colds that you just can't shake, so the final product is not really all that good. On the other hand, you can pull the parts out of your Arduino pile, since I didn't use the Galileo in the final product. Software: Well, nothing like procrastination to get you into trouble, but I did manage to find some ideas, like "Pimp your Pumpkin" by Matt Makes. I didn't...(read more)

      Using Bing for technical instant answers and automated solutions

      MSDN Blogs - Fri, 10/31/2014 - 08:17

Bing has been providing factual instant answers for some time now, but recently it has added "technical" instant answers for questions about Microsoft products and technical support issues. My previous team worked on the content management system that our internal content delivery teams are now using to add technical instant answers to Bing. Here's an example technical instant answer for the "Cortana" search term:

      Now that I'm working on support diagnostics and automated solutions again, I have been working with the Bing and content delivery teams to get some instant answers created with links to some of our automated solutions. I'm happy to announce that the first one is now live! So you can search Bing for "Windows Update Troubleshooter" (or a variety of related terms and error messages) and the first result will be a technical instant answer with a link to download and run our automated troubleshooter to fix problems with Windows Update.

      When you click the link in step 3, you will be prompted to open (or run) or save the troubleshooter.

      Just click Open (or Run) to launch the troubleshooter.

The content delivery teams will constantly be adding more technical instant answers, and we hope to have more of them live with automated solutions soon!

      Workaround: "An unexpected client error has occured"

      MSDN Blogs - Fri, 10/31/2014 - 08:03

If you receive the below error while using LCS, please try again after clearing the browser cache.

      Sample chapter: The Liskov Substitution Principle

      MSDN Blogs - Fri, 10/31/2014 - 08:00

      The Liskov substitution principle (LSP) is a collection of guidelines for creating inheritance hierarchies in which a client can reliably use any class or subclass without compromising the expected behavior. This chapter from Adaptive Code via C#: Agile coding with design patterns and SOLID principles explains what the LSP is and how to avoid breaking its rules.

      After completing this chapter, you will be able to

      • Understand the importance of the Liskov substitution principle.
      • Avoid breaking the rules of the Liskov substitution principle.
      • Further solidify your single responsibility principle and open/closed principle habits.
      • Create derived classes that honor the contracts of their base classes.
      • Use code contracts to implement preconditions, postconditions, and data invariants.
      • Write correct exception-throwing code.
      • Understand covariance, contravariance, and invariance and where each applies.

      Find the complete chapter here: https://www.microsoftpressstore.com/articles/article.aspx?p=2255313.
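
The chapter's examples are in C#, but the principle itself is language-agnostic. As a quick illustration, here is a minimal C++ sketch of my own (the Account and FeeAccount types are hypothetical, not from the book) showing the most common violation: a derived class that strengthens its base class's precondition, so a client written against the base contract breaks when handed the subclass.

#include <stdexcept>

// Base contract: Withdraw succeeds for any amount in (0, Balance()].
class Account {
public:
    virtual ~Account() = default;
    double Balance() const { return balance_; }
    virtual void Withdraw(double amount) {
        if (amount <= 0 || amount > Balance())
            throw std::invalid_argument("amount out of range");
        balance_ -= amount;
    }
protected:
    double balance_ = 100.0;
};

// LSP violation: the override strengthens the precondition by imposing a
// minimum amount that the base contract never mentioned.
class FeeAccount : public Account {
public:
    void Withdraw(double amount) override {
        if (amount < 10.0)
            throw std::invalid_argument("minimum withdrawal is 10");
        Account::Withdraw(amount);
    }
};

// A client that relies only on the base contract.
void Drain(Account& account) {
    account.Withdraw(5.0); // valid per Account's contract...
}

int main() {
    Account plain;
    Drain(plain); // fine
    FeeAccount fee;
    try {
        Drain(fee); // ...but throws for FeeAccount: substitutability broken
    } catch (const std::invalid_argument&) {
    }
}

A contract-honoring subclass keeps its preconditions no stricter, and its postconditions no weaker, than the base class promises.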

      I'm back!

      MSDN Blogs - Fri, 10/31/2014 - 07:22

      Did anybody miss me? :)

After a long hiatus from this blog, I'm planning to start posting here again. For the past few years I have been working on Microsoft internal content and knowledge management systems, including a KCS-verified knowledge management system used to manage the Knowledge Base. Now I'm working on support and self-help diagnostics and automated solutions again. I'm excited to be back in this space, and I'm looking forward to updating you on some of the new customer-facing stuff we're working on. So watch this space for more information (coming soon)...

      Workaround: "An unexpected client error has occured"

      MSDN Blogs - Fri, 10/31/2014 - 07:18

      If you receive the below error while using LCS please try again after clearing the browser cache. 

      The case of the file that won't copy because of an Invalid Handle error message

      MSDN Blogs - Fri, 10/31/2014 - 07:00

      A customer reported that they had a file that was "haunted" on their machine: Explorer was unable to copy the file. If you did a copy/paste, the copy dialog displayed an error.

[Error dialog: "1 Interrupted Action: Invalid file handle" for the file Contract Proposal, Size: 110 KB, Date modified: 10/31/2013 7:00 AM]

      Okay, time to roll up your sleeves and get to work. This investigation took several hours, but you'll be able to read it in ten minutes because I'm deleting all the dead ends and red herrings, and because I'm skipping over a lot of horrible grunt work, like tracing a variable in memory backward in time to see where it came from.¹

The Invalid file handle error was most likely coming from the error code ERROR_INVALID_HANDLE. Some tracing of handle operations showed that a call to GetFileInformationByHandle was being passed INVALID_HANDLE_VALUE as the file handle, and as you might expect, that results in the invalid handle error code.

      Okay, but why was Explorer's file copying code getting confused and trying to get information from an invalid handle?

      Code inspection showed that the handle in question is normally set to a valid handle during the file copying operation. So the new question is, "Why wasn't this variable set to a valid handle?"

      Debugging why something didn't happen is harder than debugging why it did happen, because you can't set a breakpoint of the form "Break when X doesn't happen." Instead you have to set a breakpoint in the code that you're pretty sure is being executed, then trace forward to see where execution strays from the intended path.

The heavy lifting of the file copy is done by the CopyFile2 function. Explorer uses the CopyFile2ProgressRoutine callback to get information about the copy operation. In particular, it gets a handle to the destination file by making a duplicate of the hDestinationFile in the COPYFILE2_MESSAGE structure. The question is now, "Why wasn't Explorer told about the destination of the file copy?"
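
To make the callback mechanism concrete, here is a minimal sketch of such a progress routine (my own illustration, not Explorer's actual code; CaptureDestination and the usage below are hypothetical names): on the stream-started notification, it duplicates the destination handle into a caller-provided variable.

#include <windows.h>

// Sketch: capture a private duplicate of the destination handle from the
// CopyFile2 progress callback. The handle in the message belongs to
// CopyFile2 and is closed when the copy completes, hence the duplicate.
COPYFILE2_MESSAGE_ACTION CALLBACK CaptureDestination(
    const COPYFILE2_MESSAGE *pMessage, PVOID pvCallbackContext)
{
    HANDLE *phDestination = (HANDLE *)pvCallbackContext;
    if (pMessage->Type == COPYFILE2_CALLBACK_STREAM_STARTED &&
        *phDestination == INVALID_HANDLE_VALUE)
    {
        DuplicateHandle(GetCurrentProcess(),
                        pMessage->Info.StreamStarted.hDestinationFile,
                        GetCurrentProcess(), phDestination,
                        0, FALSE, DUPLICATE_SAME_ACCESS);
    }
    return COPYFILE2_PROGRESS_CONTINUE;
}

// Hypothetical usage:
//   HANDLE hDestination = INVALID_HANDLE_VALUE;
//   COPYFILE2_EXTENDED_PARAMETERS params = { sizeof(params) };
//   params.pProgressRoutine = CaptureDestination;
//   params.pvCallbackContext = &hDestination;
//   CopyFile2(L"source.jpg", L"destination.jpg", &params);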

Tracing through the file copy operation showed that the copy actually failed because the destination file already existed. The failure would normally be reported as ERROR_FILE_EXISTS, and the offending GetFileInformationByHandle call would never have taken place. Somehow the file copy was being treated as having succeeded even though it failed. That's why we were using an invalid handle.

The CopyFile2 function goes roughly like this:

HRESULT CopyFile2()
{
    BOOL fSuccess = FALSE;
    HANDLE hSource = OpenTheSourceFile(); // calls SetLastError() on failure
    if (hSource != INVALID_HANDLE_VALUE) {
        HANDLE hDest = CreateTheDestinationFile(); // calls SetLastError() on failure
        if (hDest != INVALID_HANDLE_VALUE) {
            if (CopyTheStuff(hSource, hDest)) // calls SetLastError() on failure
            {
                fSuccess = TRUE;
            }
            CloseHandle(hDest);
        }
        CloseHandle(hSource);
    }
    return fSuccess ? S_OK : HRESULT_FROM_WIN32(GetLastError());
}

      Note: This is not the actual code, so don't go whining about the coding style or the inefficiencies. But it gets the point across for the purpose of this story.

The CreateTheDestinationFile function failed because the file already existed, and it called SetLastError to set the error code to ERROR_FILE_EXISTS, expecting the error code to be picked up when it returned to the CopyFile2 function.

On the way out, the CopyFile2 function makes two calls to CloseHandle. CloseHandle on a valid handle is not supposed to modify the thread error state, but somehow stepping over the CloseHandle call showed that the error code set by CreateTheDestinationFile was being reset to ERROR_SUCCESS. (Mind you, it was poor design on the part of the CopyFile2 function to leave the error code lying around for an extended period; the error code is highly volatile, and you are best served to get it while it's still there.)

Closer inspection showed that the CloseHandle function had been hooked by some random DLL that had been injected into Explorer.

      The hook function was somewhat complicated (more time spent trying to reverse-engineer the hook function), but in simplified form, it went something like this:

BOOL Hook_CloseHandle(HANDLE h)
{
    HookState *state = (HookState*)TlsGetValue(g_tlsHookState);
    if (!state || !state->someCrazyFlag) {
        return Original_CloseHandle(h);
    }
    ... crazy code that runs if the flag is set ...
}

      Whatever that crazy flag was for, it wasn't set on the current thread, so the intent of the hook was to have no effect in that case.

      But it did have an effect.

The TlsGetValue function modifies the thread error state, even on success. Specifically, if it successfully retrieves the thread local storage, it sets the thread error state to ERROR_SUCCESS.
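
Had the hook simply preserved the caller's error state around that call, it would have been harmless. A corrected version might look something like this (my sketch, following the pseudocode above, not the vendor's actual code):

BOOL Hook_CloseHandle(HANDLE h)
{
    // Preserve the thread error state, because TlsGetValue
    // resets it to ERROR_SUCCESS when it succeeds.
    DWORD dwLastError = GetLastError();
    HookState *state = (HookState*)TlsGetValue(g_tlsHookState);
    if (!state || !state->someCrazyFlag) {
        SetLastError(dwLastError); // restore what the caller last saw
        return Original_CloseHandle(h);
    }
    ... crazy code that runs if the flag is set ...
}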

      Okay, now you can put the pieces together.

• The file copy failed because the destination already existed.
• The CreateTheDestinationFile function called SetLastError(ERROR_FILE_EXISTS).
      • The file copy function did some cleaning up before retrieving the error code.
      • The cleanup functions are not expected to alter the thread error state.
      • But the cleanup function had been patched by a rogue DLL, and the hook function did alter the thread error state.
      • This alteration caused the file copy function to think that the file was successfully copied even though it wasn't.
      • In particular, the caller of the file copy function expects to have received a handle to the copy during one of the copy callbacks, but the callback never occurred because the file was never copied.
      • The variable that holds the handle therefore remains uninitialized.
      • This generates an invalid handle error when the code tries to use that handle.
      • This error is shown to the user.

      An injected DLL that patched a system call resulted in Explorer looking like an idiot. (As Alex and Gaurav well know, Explorer is perfectly capable of looking like an idiot without any help.)

We were quite fortunate that the error manifested itself as a failure to copy the file. Imagine if Explorer didn't use GetFileInformationByHandle to get information about the file that was copied. The CopyFile2 function returns S_OK even though it actually failed and no file was copied. Explorer would have happily reported, "Congratulations, your file was copied successfully!"

      Stop and think about that for a second.

      A rogue DLL injected into Explorer patches a system call incorrectly and ends up causing all calls to Copy­File2 to report success even if they failed. The user then deletes the original, thinking that the file was safely at the destination, then later discovers that, oops, looks like the file was not copied after all. Sorry, it looks like that rogue DLL (which I'm sure had the best of intentions) had a subtle bug that caused you to lose all your data.

      This is why, as a general rule, Windows considers DLL injection and API hooking to be unsupported. If you hook an API, you not only have to emulate all the documented behavior, you also have to emulate all the undocumented behavior that applications unwittingly rely on.

      (Yes, we contacted the vendor of the rogue DLL. Ideally, they would get rid of their crazy DLL injection and API hooking because, y'know, unsupported. But my guess is that they are going to stick with it. At least we can try to get them to fix their bug.)

¹ To do this, you identify the variable and set a breakpoint when that variable is allocated. (This can be tricky if the variable belongs to a class with hundreds of instances; you have to set the breakpoint on the correct instance!) When that breakpoint is hit, you set a write breakpoint on the variable, then resume execution. Then you hope that the breakpoint gets hit. When it does, you can see who set the value. "Oh, the value was copied from that other variable." Now you repeat the exercise with that other variable, and so on. This is very time-consuming but largely uninteresting, so I've skipped over it.

Project Online Reporting and PowerBI

      MSDN Blogs - Fri, 10/31/2014 - 06:59

After this week's TechEd Europe and the many discussions at our Project booth, here is another quick heads-up on the topic.

Anyone working on reporting with Project Online should definitely take a closer look at the features of PowerBI (http://www.microsoft.com/en-us/powerbi/default.aspx). They are very helpful for analyzing data, especially at the complexity of a project portfolio.

Two points are particularly interesting for users of Excel Services/Excel Online for portfolio reporting. With PowerBI, the data sources within a report can be refreshed automatically, so a user no longer has to wait for the current data when opening a report.

You can select in detail which data connections need a regular refresh.

Even more exciting, though, is the future of data analysis: querying the available data in natural language instead of with potentially complex query syntax. PowerBI offers a solution for this as well.

To try it out, the feature can be added in the O365 admin portal.

Currently there is a maximum size of 250 MB for Excel documents. For the data volumes seen so far, this restriction should not matter in practice.

So, read on here: http://www.microsoft.com/en-us/powerbi/default.aspx.
