
Clustered Column Store Index: Concurrency and Isolation Level

MSDN Blogs - Sat, 07/26/2014 - 18:48
Clustered Column Store and Concurrency

The clustered column store index (CCI) has been designed for data warehouse scenarios, which primarily involve

  • Write once and read multiple times – CCI is optimized for query performance. It gives an order of magnitude better query performance by compressing the data in columnar format, processing sets of rows in batches, and by bringing in only the columns that are required by the query.
  • Bulk data import and trickle data load – insert operations

While it supports UPDATE/DELETE operations, it is not optimized for a large number of these operations. In fact, concurrent DELETE/UPDATE can cause blocking in some cases and can lead to multiple delta rowgroups. To support the concurrency model, there is a new lock resource, called ROWGROUP. Let us see how locks are taken in different scenarios. I will walk through concurrency in a series of blog posts, starting with transaction isolation levels.

 

Transaction Isolation Levels Supported
  • Read Uncommitted – This is acceptable for most DW queries; in fact, queries running on the PDW appliance access the CCI under read uncommitted to avoid blocking with concurrent DML operations. This is how the CCI is queried in the Analytics Platform System, a re-branding of PDW. Please refer to http://www.microsoft.com/en-us/server-cloud/products/analytics-platform-system/default.aspx#fbid=CRIMcFvfkD2
  • Read Committed – Only the lock-based implementation of read committed isolation is supported, which can get blocked by concurrent DML transactions.

If read committed snapshot isolation (RCSI) is enabled on a database containing one or more tables with a CCI, all tables other than those with a CCI can be accessed with non-blocking semantics under the read committed isolation level; this does not apply to the CCI.

Example:

select is_read_committed_snapshot_on, snapshot_isolation_state_desc, snapshot_isolation_state
from sys.databases where name = 'AdventureWorksDW2012'

 

CREATE TABLE [dbo].[T_ACCOUNT](

       [accountkey] [int] IDENTITY(1,1) NOT NULL,

       [accountdescription] [nvarchar](50) NULL

) ON [PRIMARY]

 

-- create a CCI
CREATE CLUSTERED COLUMNSTORE INDEX ACCOUNT_CI ON T_ACCOUNT

 Session-1

use AdventureWorksDW2012

go

-- Do a DML transaction on CCI but don't commit

begin tran

insert into T_ACCOUNT (accountdescription )

values ('value-1');

 

 Session-2

-- query the table under read committed in a different session

set transaction isolation level read committed

go

 select * from t_account

You will see that the CCI query is blocked by session-1, as shown by the query below

select
    request_session_id as spid,
    resource_type as rt,
    resource_database_id as rdb,
    (case resource_type
      WHEN 'OBJECT' then object_name(resource_associated_entity_id)
      WHEN 'DATABASE' then ' '
      ELSE (select object_name(object_id)
            from sys.partitions
            where hobt_id = resource_associated_entity_id)
    END) as objname,
    resource_description as rd,
    request_mode as rm,
    request_status as rs
from sys.dm_tran_locks

 

 

Even though the database is using the default non-blocking read committed isolation level based on row versioning, the CCI is accessed using the lock-based implementation of read committed.

  • Snapshot Isolation – It can be enabled on a database containing a CCI. Any disk-based table other than one with a CCI can be accessed under snapshot isolation, but access to the CCI is disallowed and generates the following error:

Msg 35371, Level 16, State 1, Line 26

SNAPSHOT isolation level is not supported on a table which has a clustered columnstore index.
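A minimal sketch to reproduce the error, assuming the AdventureWorksDW2012 database and the T_ACCOUNT table created above, with snapshot isolation not yet enabled:

```sql
-- Enable snapshot isolation on the database (requires ALTER DATABASE permission)
ALTER DATABASE AdventureWorksDW2012 SET ALLOW_SNAPSHOT_ISOLATION ON;
GO

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
GO

BEGIN TRAN;
-- Accessing the CCI table under snapshot isolation is disallowed
-- and should raise error 35371
SELECT * FROM t_account;
ROLLBACK;
```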

  • Repeatable Read – Supported in CCI

set transaction isolation level repeatable read

go

 

begin tran

select * from t_account

go

Here are the locks. Note that it takes an S lock on all rowgroups because we are doing a full table scan
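The rowgroup locks can be observed with a quick variant of the earlier sys.dm_tran_locks query; this is a sketch that assumes the new lock resource surfaces as 'ROWGROUP' in the resource_type column (run it in a separate session while the repeatable read transaction is still open):

```sql
-- Show only the rowgroup-level locks currently held or requested
SELECT request_session_id, resource_type, resource_description,
       request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_type = 'ROWGROUP';
```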

 

  • Serializable – Supported in CCI

set transaction isolation level serializable

go

 

begin tran

select * from t_account

go

Here are the locks. Note that it takes an S lock at the table level to guarantee the serializable isolation level

 

In the next blog, I will discuss the locks taken when inserting rows into a CCI.

Thanks

Sunil Agarwal

 

Living on the edge - testing without mocking (Part 2)

MSDN Blogs - Sat, 07/26/2014 - 13:23
The (micro-) epic continues...

In the first part of this N-part series (where N may or may not equal 2), I thoroughly convinced you that while testing against mock systems is wise and all, it's always cool to also live dangerously and write/run unit tests against live systems at least every once in a while. In this post I'd like to continue this with another pattern that has proved useful to me in this thrilling world.

The secret wild side pattern

So you're a hip start-up programmer writing a super-cool machine learning system to figure out the best coffee blends you can make for any time of day (and you're of course reading this in 2018 when Microsoft has become cool with start-ups again - isn't it great?). C-Blenda relies on a web service that always gives the available coffee beans and their prices/characteristics in any city in the world:

IEnumerable<BeanInfo> GetAllCoffee(Location location)

Now as a conscientious programmer, you abstract this out into an abstract base class (not an interface - a pet peeve of mine that I may expand on some day), with a concrete implementation that actually calls into the web service, and various mock implementations that you use in your tests. You then code up your brilliant implementation with awesome tests to go with it, for example this test that checks that you always give at most three recommendations for the morning (can't tax that caffeine-deprived brain too much in the morning):

[TestMethod]
public void GoodNumberOfMorningRecommendations()
{
    var coffeeInfoProvider = new MockInfo();
    var recommender = new Recommender(coffeeInfoProvider);
    var ideas = recommender.TastiestBlends(Seattle(), Morning());
    Assert.IsLessThan(4, ideas.Count());
}

Your tests pass, you do the obligatory dance-around-the-office celebration, but now you crave more: your tests should also pass against real data. You want your tests to be prim & proper and run against mock implementations in the everyday world, but have the occasional wild night against live implementations.

Introducing two-face

Here is how I usually give a wild side to my tests: first I make the test class an abstract base class, with an abstract method to get the concrete implementation of the dependent system:

protected abstract CoffeeInfoProvider NewCoffeeInfoProvider();

[TestMethod]
public void GoodNumberOfMorningRecommendations()
{
    var coffeeInfoProvider = NewCoffeeInfoProvider();
    var recommender = new Recommender(coffeeInfoProvider);
    var ideas = recommender.TastiestBlends(Seattle(), Morning());
    Assert.IsLessThan(4, ideas.Count());
}

I keep all of the actual tests in the base class, and I then create two thin derived classes that just implement this abstract method differently: the mock one that just returns a mock implementation:

public class MockCBlendaTests : BaseCBlendaTests
{
    protected override CoffeeInfoProvider NewCoffeeInfoProvider()
    {
        return new MockCoffeeInfoProvider();
    }
}

And a live one that either creates a real implementation based on configured information, or skips the tests if not configured for that:

public class LiveCBlendaTests : BaseCBlendaTests
{
    protected override CoffeeInfoProvider NewCoffeeInfoProvider()
    {
        var credentials = TestConfiguration.GetCoffeeInfoProviderCredentials();
        if (credentials == null)
            Assert.Inconclusive("Skipping live tests because we're not configured for it.");
        return new CoffeeInfoProvider(credentials);
    }
}

(See part 1 for my discussion of this TestConfiguration.) And voila: I just need to enter my credentials to the web service in a config file whenever I want to test against a live system, the mock tests always run, and I can keep all the tests that don't care about which implementation they go against in one clear place.

Luckily, all the test systems I've tried this against (NUnit, JUnit and MSTest) have the necessary ingredients to make this work: they don't choke on abstract base test classes, and they have a way of skipping a test in the middle of running it.

This pattern has for the most part served me well. There are a couple of drawbacks though:

  1. It doesn't enforce running the live tests regularly. Ideally it'd be coupled with a continuous build system that runs the tests with live credentials at regular intervals; otherwise we're at the mercy of my and my team's discipline to actually run them when needed (and depending on my discipline is of course always an excellent idea...).
  2. Visual Studio at least doesn't differentiate much in its UI between the two versions of the tests: you sort of see two results for a test called BaseCBlendaTests.GoodNumberOfMorningRecommendations, which makes it hard to tell which one is the mock and which is the live. It's a bit annoying, but it has never hurt me too much.

Issue with Application Insights Data Platform - 7/26 UTC - Mitigating

MSDN Blogs - Sat, 07/26/2014 - 12:19

Initial Update 19:14 7/26 UTC

Application Insights is investigating issues with the data ingestion pieces of the Application Insights data platform.  At approximately 10:30 UTC, Azure started rebooting the data ingestion queue nodes.  From the logs, it appears the machines were rebooted before the queue service shut down cleanly.  As a result, the partition leaders did not transition cleanly, and when the broker came back up it needed to do a full integrity check of the log.  During this time the producers could not add new items to the queue, which resulted in data loss.  The topic partitions started coming up at around 18:40 UTC.

Customer data was permanently lost between 10:30 UTC and 18:40 UTC.

The team is still in the process of verifying the rest of the system has recovered.

We apologize for the inconvenience this may have caused.

-Application Insights Service Delivery Team

SideLoading Guidance

MSDN Blogs - Sat, 07/26/2014 - 12:16

This scenario shows how one can use Side Loading to install a SharePoint Provider Hosted Application to a site collection. 

SharePoint administrators can deploy apps to their tenancy in basically two different ways: from the app catalog (“app stapling”) or via sideloading.

What is sideloading? App sideloading, in a SharePoint context, is the ability to install a SharePoint app directly into a site, explicitly bypassing the regular governance controls of SharePoint. The main reason for blocking sideloading by default on non-developer sites is the risk that faulty apps pose to their host web/host site collection. Apps have the potential to destroy data and, given enough permissions, can even make sites or entire site collections unusable. Therefore, apps should only be sideloaded in dev/test environments, and in production only when deploying from the app catalog does not meet your needs. It is NOT recommended to sideload SharePoint-hosted applications, because of the risk of data loss.

Note:

  • Enabling the app sideloading feature requires tenant admin permissions (in a multi-tenant environment) or farm admin permissions (in a single-tenant environment).
  • You must have a user context when sideloading the application. App-only permission is not available.
  • Sideloading does not suppress the security check or circumvent existing security requirements. It does, however, enable the programmatic installation of an app.
  • You must still register an app principal for SharePoint provider-hosted applications.
  • You should deactivate the sideloading feature immediately once the app is installed. Site collection administrators can install apps using CSOM, which could circumvent your governance practices.
  • You cannot sideload applications using an app-only context or token; the API requires a user context.
Centrally deployed apps vs side loading comparison

 

App Stapling (Deploy from App Catalog) vs. Sideloading

  • Custom actions and app parts are not supported vs. custom actions and app parts are supported
  • App install, uninstall and upgrade event receivers cannot be handled vs. these event receivers do fire and can be handled
  • Site collection administrators cannot uninstall the application vs. site collection administrators can uninstall the application
  • Applied to new and existing site collections vs. custom code must be used to install the application
  • There is metadata about the app and updates are applied vs. there is no metadata about the app and updates have to be managed manually
  • No extra feature is required vs. tenant administrators must enable the sideloading feature prior to installing the application, and it should be disabled after the application is installed

How to use against Office 365 Multi-tenant

Since this solution is based on using a provider-hosted application, the following should be taken into account:

  • The user must be a tenant administrator in order to enable the SideLoading Feature
  • The Provider hosted application has already been registered by the tenant administrator
  • The Provider hosted application has been deployed to your hosting platform

 

Guid _sideloadingFeature = new Guid("AE3A1339-61F5-4f8f-81A7-ABD2DA956A7D");
string _url = GetUserInput("Please Supply the SharePoint Online Site Collection URL: ");

/* Prompt for Credentials */
Console.WriteLine("Enter Credentials for {0}", _url);
string _userName = GetUserInput("SharePoint Username: ");
SecureString _pwd = GetPassword();

ClientContext _ctx = new ClientContext(_url);
_ctx.ApplicationName = "AMS SIDELOADING SAMPLE";
_ctx.AuthenticationMode = ClientAuthenticationMode.Default;

// For SharePoint Online
_ctx.Credentials = new SharePointOnlineCredentials(_userName, _pwd);

string _path = GetUserInput("Please supply path to your app package:");
Site _site = _ctx.Site;
Web _web = _ctx.Web;

try
{
    _ctx.Load(_web);
    _ctx.ExecuteQuery();

    // Make sure we have sideloading enabled. You must be a tenant admin to activate it
    // or you will get an exception! ProcessFeature is an extension method; see the
    // OfficeAMS (https://officeams.codeplex.com/) OfficeAMS.Core sample.
    _site.ProcessFeature(_sideloadingFeature, true);

    try
    {
        var _appstream = System.IO.File.OpenRead(_path);
        AppInstance _app = _web.LoadAndInstallApp(_appstream);
        _ctx.Load(_app);
        _ctx.ExecuteQuery();
    }
    catch
    {
        throw;
    }

    // We should ensure that the sideloading feature is disabled when we are done
    // or if an exception occurs.
    _site.ProcessFeature(_sideloadingFeature, false);
}
catch (Exception _ex)
{
    Console.ForegroundColor = ConsoleColor.Red;
    Console.WriteLine("Exception! {0}", _ex.ToString());
    Console.WriteLine("Press any key to continue.");
    Console.Read();
}

 

References

SharePoint 2013 and SharePoint Online solution pack for branding and site provisioning 

 

Code sample in this blog: see the Core.SideLoading sample.

The emerging intelligent data ecosystem

MSDN Blogs - Sat, 07/26/2014 - 11:12

Last month I was invited by one of our major customers in Europe to keynote their annual innovation event for over 500 of their senior managers on how the rise of cloud computing and the business of big data has impacted and will further impact the electricity industry and power utilities worldwide. During the presentation I painted a picture of the emerging intelligent data ecosystem and how the cloud acts as a hub that pulls together information from cities, utilities, electric vehicles, homes, industrial customers, and so on. As we embed more and more sensors and intelligent devices into our infrastructure, this intelligent data ecosystem evolves into the Internet of your Things (IoyT) to create business models that are far different from today’s largely static information architectures. Some of the examples I used were our smart metering intelligence demonstration, our smart elevator machine learning research, the work we are doing on our CityNext initiative, and our future productivity vision.

It was a terrific meeting and I would encourage all Utilities to follow this lead and set aside time for their business leaders to reflect on how the cloud and big data megatrends are going to reshape their businesses. -  Jon C. Arnold

Fearless Speaking

MSDN Blogs - Sat, 07/26/2014 - 09:58

“Do one thing every day that scares you.” ― Eleanor Roosevelt

I did a deep dive book review.

This time, I reviewed Fearless Speaking.

The book is more than meets the eye.

It’s actually a wealth of personal development skills at your fingertips and it’s a powerful way to grow your personal leadership skills.

In fact, there are almost fifty exercises throughout the book.

Here’s an example of one of the techniques …

Spotlight Technique #1

When you’re overly nervous and anxious as a public speaker, you place yourself in a ‘third degree’ spotlight.  That’s the name for the harsh bright light police detectives used in days gone by to ‘sweat’ a suspect and elicit a confession.  An interrogation room was always otherwise dimly lit, so the source of light trained on the person (who was usually forced to sit in a hard straight-backed chair) was unrelenting.

This spotlight is always harsh, hot, and uncomfortable – and the truth is, you voluntarily train it on yourself by believing your audience is unforgiving.  The larger the audience, the more likely you believe that to be true.

So here’s a technique to get out from under this hot spotlight that you’re imagining so vividly: turn it around! Visualize swiveling the spotlight so it’s aimed at your audience instead of you.  After all, aren’t you supposed to illuminate your listeners? You don’t want to leave them in the dark, do you?

There’s no doubt that it’s cooler and much more comfortable when you’re out from under that harsh light.  The added benefit is that now the light is shining on your listeners – without question the most important people in the room or auditorium!

I like that there are so many exercises and techniques to choose from.   Many of them don’t fit my style, but there were several that exposed me to new ways of thinking and new ideas to try.

And what’s especially great is knowing that these exercises come from professional actors and speakers – it’s like an insider’s guide at your fingertips.

My book review on Fearless Speaking includes a list of all the exercises, the chapters at a glance, key features from the book, and a few of my favorite highlights from the book (sort of like a movie trailer for the book.)

You Might Also Like

7 Habits of Highly Effective People at a Glance

347 Personal Effectiveness Articles to Help You Change Your Game

Effectiveness Blog Post Roundup

Introduction to Containerization - Automated packing process in Microsoft Dynamics AX 2012 R3

MSDN Blogs - Sat, 07/26/2014 - 08:18
Overview In addition to the manual packing functionality that was introduced in my previous blog , the new Warehouse management system also provides a feature, Containerization, which supports an automated or guided packing process. In this process, containerization assists the user by: Automatically calculating required containers for the outbound shipment based on product and container physical dimensions. Optimizing packing structure according to a set of rules configured by the user. Generating...(read more)

The Sharks Cove is now available for Pre-order!

MSDN Blogs - Sat, 07/26/2014 - 08:01

Back in April at the //Build conference, our group sent a couple of guys down to San Francisco to give a preview of some of the cool new stuff we’ve been working on over the past year.  At the end of the presentation, Viraf shared that the coolest of that stuff, the Sharks Cove, was targeted for release in Summer of 2014.  Given that it’s nearing the end of July, we’re still in the year 2014, and the title of this blog post provides a pretty strong hint (spoiler alert?), it appears to be pretty easy to guess what I’ll say next:

 

The Sharks Cove development board is now available for pre-order!

This marks a major milestone in our work, and we’re all pretty excited about it, to say the least.  This board is the product of a lot of collaborative effort amongst various groups from Microsoft, Intel, and the product manufacturer, CircuitCo.  This “Windows compatible hardware development board” is designed to facilitate development of software and drivers for mobile devices that run Windows, such as phones, tablets and similar System on a Chip (SoC) platforms. 

At $299, this is a board that we believe will find a home with Independent Hardware Vendors (IHVs) and hardware enthusiasts alike.  That price not only covers the cost of the hardware, but also includes a Windows 8.1 image and the utilities necessary to apply it to the Sharks Cove.  When you additionally consider that the Windows Driver Kit 8.1 can pair with Visual Studio Express, and both are free with a valid MSDN account, the initial outlay for Windows driver developers is a lot less cost prohibitive than it once was.

We’ve also been busy posting content related to the Sharks Cove, settling into our new MSDN development-board forum, and launching this blog.  Our goal is to ensure information is easily found and that we have multiple ways to interact with
our community:

SharksCove.org is a site we have set up dedicated to the Sharks Cove board, where you’ll find specs and links to other content and our MSDN forums, as well as a link for the pre-order.

Pre-order the Sharks Cove direct link (via Mouser Electronics).

The Hardware Development Boards for Windows forum on MSDN is the new forum we have set up for discussion and support.

The Windows compatible hardware development boards MSDN page will consistently be updated with new information and act as a launch point to the various pages related to Windows driver development using these boards. 

As mentioned above, Peter Wieland and Viraf Gandi introduced and demoed the Sharks Cove at the //BUILD conference in San Francisco this past April – definitely worth the viewing!

Over the next few weeks and months, we’re planning to publish a number of articles related to the Sharks Cove and using it for driver development.  Among our planned posts, we will have a series of posts from our summer college intern describing his introduction to driver development, using the Sharks Cove and the User Mode Driver Framework (UMDF) to develop a sensor driver.  We’ll also give a behind-the-scenes peek at how all of this came together, as well as a variety of other posts that feature the Sharks Cove.

We’re very excited and proud of the work done to make the Sharks Cove a reality.  We are looking forward to seeing the amazing things that can be done with these boards!

Issues with Galleries

MSDN Blogs - Sat, 07/26/2014 - 03:23

The Galleries site is currently unavailable due to a backend failure on the SQL cluster. We are currently working on restoring connectivity to the backend environment.

We apologize for the inconvenience and appreciate your patience.

-MSDN Service Delivery Team

Free book: Building Cloud Apps with Microsoft Azure

MSDN Blogs - Fri, 07/25/2014 - 22:40
If you're interested in cloud development, we have good news for you. This week, the book Building Cloud Apps with Microsoft Azure was released, written by some big names: Scott Guthrie, Mark Simms, Tom Dykstra, Rick Anderson, Mike Wasson. And when we say "released", we mean completely free and available for download. The book is subtitled "Best practices for DevOps, data storage, high availability, and...(read more)

Weekend Links 07/26

MSDN Blogs - Fri, 07/25/2014 - 21:46

Here we are again, the summer moves along and the news and articles keep coming. So, let's get to it.

 

Windows Store App - Using Facebook to authenticate the user

http://blogs.msdn.com/b/mspfe/archive/2014/07/25/windows-store-app-using-facebook-to-authenticate-the-user.aspx

If you want to integrate Facebook into your Windows application, this is the link for you. In a 20-minute video, Gianluca Bertelli gives you the basics of how to accomplish this great social feature.

 

TWC9: New Unified Tech Event, CH9 WP8 & 360 App's, Node.js for Visual Studio and more

http://channel9.msdn.com/Shows/This+Week+On+Channel+9/TWC9-New-Unified-Tech-Event-CH9-WP8-360-App-s-Node-js-for-Visual-Studio-and-more

A new episode from our friends at Channel 9. You know you will like it!

 

Writing sideloaded Windows Store apps for the Enterprise

http://channel9.msdn.com/Blogs/One-Dev-Minute/Writing-sideloaded-Windows-Store-apps-for-the-Enterprise

More Channel 9 goodies, this time with a more Enterprise flavor.

 

 

Six Best Windows Phone Games for This Weekend

http://www.wpxbox.com/six-best-windows-phone-games-weekend/

In case you want some fun during the weekend. I know I will try Vector Runner Remix.  

 

Beta release of VLC player for Windows Phone coming soon

http://www.neowin.net/news/beta-release-of-vlc-player-for-windows-phone-coming-soon

I have to say VLC will always have a sweet spot in my heart, and being able to use it on my Windows Phone will be even sweeter.

 

Microsoft unleashes 'Settlers of Catan' on the web

http://www.engadget.com/2014/07/25/microsoft-unleashes-settlers-of-catan-on-the-web/

Settlers of Catan. Enough Said.

 

Learn a few new tricks in Quick tips for CRM for tablets video

MSDN Blogs - Fri, 07/25/2014 - 21:08

Applies to: CRM for tablets (Windows 8)

Even if you’ve been using CRM for tablets for a while, you might learn something new from the tips in this short video (1:04).

For more information on installing, configuring, and using CRM for tablets, see CRM for tablets and phones.

 

Andrea Bichsel

Technical Writer, Microsoft Dynamics CRM Customer Center

DRIVER_INITIALIZE DriverEntry;

MSDN Blogs - Fri, 07/25/2014 - 17:36

There exists an MSDN page titled “Support and community for hardware developers”, which includes a column of links titled “Ask the community”.  Each of those links will send a current or potential developer somewhere that allows them to interact with other current or potential developers, including a bunch of us who work in hardware and driver development at Microsoft.  While each of these places tends to be active and rich with content, a person or two have asked: “Why is there no link to a blog dedicated to this stuff?”  There are a few reasons for this, but none are particularly exciting or worthy of discussion at the moment.  So, without much further ado…

Welcome to the Windows Hardware and Driver Developer blog! 

As one might infer from the title, we're launching this blog with the intention to produce content focused on the world of Windows driver development, including hardware and software designed for the purpose.  Our intention is to create another (sometimes less "formal") place where a driver developer and/or hardware enthusiast can read about happenings within the world of driver development for Windows.  We will cover a variety of topics that include (and certainly not limited to):

  • Tips-and-Tricks on various aspects of development, debugging, and the available tools
  • Debugging walkthroughs: root-cause analysis on issues we found particularly interesting
  • Product availability and update announcements, overviews, etc.
  • Basically stuff that's considered interesting and relevant!

Our vision for this blog includes our readers suggesting ideas for topics that we’re able to cover.  Similarly, when we observe a trend in our Windows Hardware WDK and Driver Development support forum or other popular development forums, and it’s a subject where a blog post would be beneficial, we’ll create a post that covers it. 

We have compiled a roster of people that will contribute to this blog, most of whom are focused in some way on the Windows Driver Kit (WDK), Windows Driver Framework (WDF), or Windows compatible hardware development boards.  Our group hails from many different functional backgrounds, which has already proven to be useful in terms of content variety: Program Managers, Support Engineers, Quality Engineers, Developers, and Leads (you know, management-types).  The plan is for content to be added to the blog about once per week, though don’t be surprised if posts appear more often than that.  In fact, it’s likely that this post won’t be the only post on this board for too long!

So, what’s coming up in the near-term?  With the impending release of the Sharks Cove development board, we are working on a few posts related to it, including a series of posts from our summer intern, describing his initial journey into the world of hardware and driver development and using this board as part of his driver development.  At some point in the near future, we'll also give a bit of a behind-the-scenes look at some of the work done to make the Sharks Cove a reality.

We will start cranking out posts soon, and over time we’ll look to put an end to the question (for this topic, anyway), “Why isn’t there a blog for that?”

Issues with VSO Services on Email Notification - 7/25 - Investigating

MSDN Blogs - Fri, 07/25/2014 - 16:57

We are currently investigating issues with Visual Studio Online services where customers may experience delay in receiving email notifications. Our DevOps teams are engaged and actively investigating the issues.

We apologize for the inconvenience and appreciate your patience while working on resolving this issue. 

Issues with Visual Studio Online Premium Features - 7/25/2014 22:15 UTC

MSDN Blogs - Fri, 07/25/2014 - 15:15

We are currently investigating an issue with one of the premium features we recently enabled for our customers. Users might intermittently encounter errors when using the test hub functionality on the Visual Studio Online project homepage. Our DevOps teams are engaged and actively investigating the issues.

Additionally we are experiencing issues with email notifications which may delay mail delivery. We apologize for the inconvenience and appreciate your patience while working on resolving this issue.

New Forum - SQL Server Virtual Machines

MSDN Blogs - Fri, 07/25/2014 - 14:44

Please use this forum to post all your questions, issues, and suggestions for the "SQL Server Virtual Machines" forum: http://social.msdn.microsoft.com/Forums/en-US/home?forum=SQLServerVirtualMachines. Our engineering team will be monitoring issues reported on this forum and we will respond as early as possible.

Issues with MSDN Subscription 7/25 -Resolved

MSDN Blogs - Fri, 07/25/2014 - 13:18

Final Update: Friday , 25 July 2014 08:10 PM UTC

There was an issue for MSDN Subscribers while getting Product Keys. Our DevOps Team isolated and resolved the issue quickly. MSDN Subscribers will be able to get keys as usual.

We apologize for the inconvenience and appreciate your patience.

-MSDN Service Delivery Team

 

 

Tool for troubleshooting authentication using CRM SDK

MSDN Blogs - Fri, 07/25/2014 - 12:41
Hello Folks, Finally I was able to spend some time working on shorter code for a CRM connection using the late-bound approach. More than the helper concept, the idea is a universal application which can help you troubleshoot/isolate issues with CRM authentication. The blog has an application attached where you will also see the activity log for the events that take place. Here's the screenshot. Happy Learning. //.NET DLLS using System; using System.Collections.Generic; using...(read more)

How It Works: XEvent Output and Visualization

MSDN Blogs - Fri, 07/25/2014 - 11:54

Each and every day I use XEvent more and more as I uncover the powerful feature set.   I am finding it helpful to understand some of the input and output capabilities in order to leverage the power of XEvent.

Server File Output

When setting up a session to write to a file, use per-CPU partitioning (BY CPU) and increase the memory size to accommodate 0.5 to 4 MB per CPU.  The BY CPU configuration reduces contention on the output buffers because each CPU has its own private buffer(s).
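A session configured this way might look like the following sketch. The session name, event, action list, file path, and sizes are all placeholder assumptions; adjust them for your server.

```sql
-- Hypothetical session: per-CPU partitioned buffers writing to a file target.
CREATE EVENT SESSION [QueryTrace] ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.session_id, package0.event_sequence))
ADD TARGET package0.event_file
    (SET filename = N'C:\Traces\QueryTrace.xel')
WITH (
    MAX_MEMORY = 64MB,                 -- sized for ~0.5 to 4MB per CPU
    MEMORY_PARTITION_MODE = PER_CPU,   -- the "BY CPU" configuration: private buffers per CPU
    MAX_DISPATCH_LATENCY = 30 SECONDS  -- flush partial buffers regularly
);
```

The event_sequence action is included deliberately; as discussed below, it is the most reliable way to restore the true event order after the per-CPU buffers are flushed.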

Without going into all the details of how many active buffers can be achieved, the asynchronous I/O capabilities and such, the following shows the core performance gain when using per CPU partitioned output.

No partitioning: CPUs compete to insert events into the same buffer. CPU partitioning: each CPU has private buffers.

Visualization (Using the event data)

Per-CPU partitioning is used to reduce the impact of tracing on the production server.   However, this divide-and-conquer approach to capture means you have to pay attention to the details when looking at the data.

A buffer is flushed to the file when it becomes full or the data retention period is met.

This all boils down to the fact that you must sort the events by timestamp (better if you use Action: event_sequence) before you attempt to evaluate the events.

Take the following as an example.  You have 2 connections to the SQL Server but the connection assigned to CPU 2 does more work than the connection assigned to CPU 1.  

  • The connection on CPU 1 executes the first command and produces event_sequence = 1 into the local, storage buffer.
  • The connection to CPU 2 executes the second command and produces event_sequence = 2, 3, 4, 5, … into the local storage buffer.
  • CPU 2’s buffer is filled before CPU 1’s buffer and flushed to the XEvent file.

When you open the XEL file in Management Studio (SSMS), the order of events displayed is the physical file order.

    1. Sequence = 2
    2. Sequence = 3
    3. Sequence = 4
    4. Sequence = 1

Unless you have a very keen eye, you might not even notice the ordering of events.    You must sort the data by event_sequence or timestamp to reflect the true nature of the capture.   You will encounter the same need for an ORDER BY clause if you import the trace into a SQL Server database table.
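For example, you can read the XEL files with sys.fn_xe_file_target_read_file and sort on the event_sequence action. The file path and the assumption that the session captured the event_sequence action are both placeholders here; a minimal sketch:

```sql
-- Read the .xel files and order by event_sequence to restore the true capture order.
SELECT
    x.value('(event/@name)[1]', 'nvarchar(128)') AS event_name,
    x.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
    x.value('(event/action[@name="event_sequence"]/value)[1]', 'bigint') AS event_sequence
FROM (
    SELECT CAST(event_data AS xml) AS x
    FROM sys.fn_xe_file_target_read_file(N'C:\Traces\QueryTrace*.xel', NULL, NULL, NULL)
) AS events
ORDER BY event_sequence;
```

The same ORDER BY applies if you first import the rows into a table for repeated analysis.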

Dispatch Latency

Be very careful when setting dispatch latency.   The latency controls how long an unfilled buffer can hold events before sending them to the output stream.   A value of 0 actually means INFINITE, not "flush immediately."

Take the same example from above with a long dispatch latency (minutes, INFINITE, …).    Now assume that CPU 2 keeps generating events but the connection to CPU 1 remains unused.   The buffer for CPU 1 is not flushed until the trace is stopped or the dispatch latency is exceeded.  Here is what I recently encountered when someone used an INFINITE dispatch latency.

File               File time   Lowest sequence in the file
1 (Trace started)  3:00am      100000
2                  3:20am      200000
3                  3:40am      300000
4                  4:00am      400000
5 (Trace stopped)  4:11am      50000

The trace was captured with an INFINITE dispatch latency.  One CPU had very little event activity and didn't flush its partial buffer until the trace was stopped.    Other CPUs had activity and caused events to be flushed to the rollover files.

Looking for a problem at 3:10am, I opened the XEL file covering that time window (file 1 in this example).    However, it was not until I included the 5th file in my research and sorted by event_sequence that I saw the additional queries that occurred within the same time window and were relevant to my investigation.

If I had only opened file 1 and grouped by Query Hash, ordering by CPU I would have missed the completed events in file 5.

CSS uses 5 to 30 seconds as a best practice for the dispatch latency setting, keeping the events near each other across the CPU partitions.
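Applying that best practice to an existing session might look like the sketch below. The session name is an assumption carried over from earlier, and depending on the option being changed the session may need to be stopped and restarted for it to take effect.

```sql
-- Apply the CSS-recommended dispatch latency (5-30 seconds) to a session.
ALTER EVENT SESSION [QueryTrace] ON SERVER
WITH (MAX_DISPATCH_LATENCY = 30 SECONDS);
```

With a bounded latency, even an idle CPU's partial buffer is flushed within 30 seconds, so events from all partitions land in files close to each other in time.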

SSMS runs in the Visual Studio 32 bit Shell

It is true that the SSMS XEvent visualizations run in the Visual Studio 32-bit shell, which limits memory to a maximum of 4GB when running on an x64 WOW64 system.   Does this mean I can only open or merge trace files up to 4GB in size?

The answer is no; you can generally exceed the trace file size by a 4:1 ratio.    The XEvent visualizer is designed around the PublishedEvent's EventLocator and owner-drawn cells.   In some limited testing I observed around 250MB of memory used for the events in a 1GB file.   As you scroll around, filter, group, and search, the EventLocator(s) are used to access the data from disk and properly draw each cell or make filtering, grouping, and search decisions.   You should be able to manipulate ~16GB of trace events using SSMS (your mileage may vary).

Bob Dorr - Principal SQL Server Escalation Engineer

Windows Store App - Using Facebook to authenticate the user

MSDN Blogs - Fri, 07/25/2014 - 11:52

In this video, Gianluca Bertelli, an Escalation Engineer from Italy, shows you how to integrate Facebook with your Windows Store applications, using the WebAuthenticationBroker class to authenticate users with any authentication provider that uses OAuth2.  You will see how to create an application that emulates single sign-on (SSO). The last topic covered is how to deal with the token returned by Facebook, and how to store and reuse it later.

 

For your convenience, here’s the video timeline:

  • [2:00] Oauth2 and WebAuthenticationBroker theory

  • [6:17] Coding the login method

  • [8:01] SSO, Token and PasswordVault

  • [15:18] Coding SSO – how to reuse a token
