MSDN Blogs

from ideas to solutions

Introducing Australia’s newest news and information site: MSN

Sun, 09/07/2014 - 20:58

I’m pleased to introduce Australia’s newest news and information site: MSN. Today we’re unveiling a first look at the all-new MSN which you can experience now at

We’ve designed the new MSN from the ground up for a mobile-first, cloud-first world. The new MSN combines comprehensive content from leading Australian and global media outlets with personal productivity tools that help you do more.

While content is at MSN’s core, Microsoft’s DNA is about empowerment. We’ve completely reimagined the experience to bring together deep content with productivity tools that help people get more done. And because it’s a single platform, once you personalise, you get the same experience everywhere, no matter which devices you use.

In the coming months the new MSN will be available across all major operating systems, including Windows, iOS and Android. You can follow a breaking story on the go across your devices anywhere with the new MSN.  In this way, we’ve designed the new MSN specifically for the changing ways in which we all view and access content.

 Comprehensive and broad content from multiple sources

  • The new MSN curates articles, videos and images from many well-respected and authoritative Australian and global news sources including The New York Times and Wall Street Journal in the US, The Guardian and The Telegraph in the UK and many other global publishers.
  • In Australia the new MSN will include content from:
    • Nine News and Wide World of Sports
    • The Sydney Morning Herald and The Age
    • ABC News and Grandstand
    • Australian Associated Press
    • Ten News
    • 7 News
    • Bauer Media’s Australian Women’s Weekly, Woman’s Day, Gourmet Traveller and Wheels
    • The Guardian

That is along with many more covering other niche subjects. In fact, MSN’s expert editors are able to draw from more than 1,000 sources to curate content for Australians every day.

 Know more, do more

  • The new MSN experience is about content and productivity combined. We want to marry these forces and give people tools for maximum productivity, from shopping lists, flight status and savings calculators to integrated access to popular services like Skype, Facebook, Twitter, OneNote and OneDrive. Our aim is not only to help you learn about the world but also to help you do more in it.
  • At the top of the new MSN homepage there is a personal stripe giving you one-click access to top services so you can stay on top of what’s happening in your life. These integrated tools cover eight categories and empower you to explore, plan, do, and track the activities you care about.

Personalised for you and available on any device

  • You will be able to personalise your MSN experience regardless of which device you use. In the coming months, our Windows apps will also be available on Apple and Android devices. Set up your favourite cities for weather, pick your favourite sports teams, and choose the news topics you want to follow, and those things will be with you at your work PC, on your Windows 8.1 tablet in the living room, or on your iPhone when you are on the go.

With premium content and productivity tools, we’re thrilled to offer a new experience to help people do more. Our daily lives have become increasingly busy and we now have more screens to interact with. MSN streamlines this multi-screen experience and shapes it around your preferences.  

I invite you to try out the new web experience being previewed at


Tony Wilkinson

Director, Microsoft Advertising and Online Division Australia & New Zealand

Integrating IM into OWA in Exchange 2013, Part 1

Sun, 09/07/2014 - 19:23

Good evening. This is Kubo from Lync Support.


Starting today, over five posts, I'd like to walk through integrating Lync Server 2013 with Exchange so that IM can be used in OWA.


In this first post, although the deployment steps have already been published on TechNet and various blogs, I'd like to review them again in a Japanese-language environment.


Lync: Windows Server 2008 R2 SP1 + Lync Server 2013 Std

Exchange: Windows Server 2008 R2 SP1 + Exchange 2013 (One Box)


Configuring Exchange 2013 (One Box)

1. Start the Exchange Management Shell.

2. Check the IM-related parameters on the OWA virtual directory.

Get-OWAVirtualDirectory | FL identity,*instant*

3. Find the thumbprint of the certificate to be set in InstantMessagingCertificateThumbprint.

Get-ExchangeCertificate | fl


4. Set the InstantMessagingType parameter. (The thumbprint confirmed above will be used in the web.config in step 7.)

Get-OWAVirtualDirectory | Set-OWAVirtualDirectory -InstantMessagingType "OCS"

5. Confirm the settings were applied correctly.

Get-OWAVirtualDirectory | FL identity,*instant*


6. Open the OWA web.config file in a text editor.

Path: x:\Program Files\Microsoft\Exchange Server\V15\ClientAccess\Owa

7. Search for "</appSettings>" and add the following two keys just before it:

 <add key="IMCertificateThumbprint" value="<certificate thumbprint>" />
 <add key="IMServerName" value="<Lync pool name>" />

Configuring Lync Server 2013

1. Start Topology Builder and download the topology.

2. Click [Lync Server 2013] > [Trusted Application Servers], then click [New Trusted Application Pool].

3. Enter the Exchange 2013 FQDN and click [Next].

4. Check [Associate next hop pool], select the Lync pool configured on the Exchange side, and click [Finish].

5. Publish the topology.

6. Start the Lync Management Shell and register the trusted application.

New-CsTrustedApplication -ApplicationID "<any ID>" -TrustedApplicationPoolFqdn "<Exchange FQDN>" -Port "<an unused port number>"

7. Run Enable-CsTopology.


Connect with OWA and confirm that presence is displayed.


If presence still does not appear after these settings, recycle the OWA application pool as described on TechNet:

C:\Windows\System32\Inetsrv\Appcmd.exe recycle apppool /"MSExchangeOWAAppPool"


If that still does not resolve the issue, try resetting IIS entirely with IISReset.


We hope you continue to enjoy Lync!

After restarting SQL Server, only 'sa' user can log into Microsoft Dynamics GP

Sun, 09/07/2014 - 18:30

Last week, I resolved a rather bizarre case for a customer.

The Situation

Each time the customer's SQL Server was restarted for whatever reason, only the 'sa' user was able to log into Microsoft Dynamics GP.  Once the 'sa' user had logged into Microsoft Dynamics GP at least once, then all the other "normal" users could log in and everything was fine until the next time the SQL Server was restarted.

I asked for screenshots, the DEXSQL.LOG and the SQL Profile Trace so we could see what was happening.

From the user interface, they received the following error:

Your attempt to log into the server failed because of an unknown error. Attempt to log in again.


The Cause

I did not need the SQL Profiler trace or the SQL logs; the DEXSQL.LOG alone had enough information to understand what was going wrong. Below is the relevant excerpt from the log:

/*  Date: 09/05/2014  Time: 12:33:58
set nocount on
insert into tempdb..DEX_SESSION values (@@spid)
select @@identity
/*  Date: 09/05/2014  Time: 12:33:58
SQLSTATE:(S0002) Native Err:(208) stmt(195184136):*/
[Microsoft][SQL Server Native Client 10.0][SQL Server]Invalid object name 'tempdb..DEX_SESSION'.*/
/*  Date: 09/05/2014  Time: 12:33:58
SQLSTATE:(00000) Native Err:(208) stmt(195184136):*/

The error is that the system could not find the tempdb..DEX_SESSION table. If we looked at the SQL Server after the restart, we would probably find that the tempdb..DEX_LOCK table was also missing.

The DEX_SESSION and DEX_LOCK tables are used by Dexterity's Optimistic Currency Control (OCC) system, which implements a passive locking approach to table updates: two users can update the same record in a table as long as they don't try to change the same field.

The fact that these tables are missing causes the unknown error and prevents users from logging in.

Logging in as 'sa' provides enough privileges to create the tables; Dexterity can create tables automatically when they are missing, as long as the user has sufficient rights.

Once 'sa' has logged in and the tables have been created, other "normal" users can log in.

So why would they have been dropped from the database? Well, the truth is that the entire tempdb database is recreated every time SQL Server restarts.

A better question is: When the SQL Server restarted, why weren't the tempdb..DEX_SESSION and tempdb..DEX_LOCK tables recreated?

Another question is: what mechanism does the system have for recreating the tables when SQL Server is restarted?


The Resolution

If you look on your system under the master database, you should find two stored procedures: dbo.smDEX_Build_Locks and dbo.smDEX_Max_Char. It is the dbo.smDEX_Build_Locks stored procedure that is meant to be executed on startup to create the tables in tempdb. If the stored procedure is missing or is not running on startup, you will see the issue described above.

To fix the issue:

  1. Locate the dex_req.sql file in the application folder under SQL\Util, for example: C:\Program Files (x86)\Microsoft Dynamics\GP2013\SQL\Util\dex_req.sql.
  2. Execute this script in SQL Server Management Studio. Notice it contains the line sp_procoption 'smDEX_Build_Locks','startup','true', which makes the stored procedure run automatically on startup.

The next time you restart SQL Server, the stored procedure should run and all users will be able to log in.
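To verify the fix took, you can also query the startup flag directly in the master database. A hypothetical pair of T-SQL checks (object names as described above):

```sql
USE master;

-- Returns 1 when smDEX_Build_Locks is marked as a startup procedure
SELECT OBJECTPROPERTY(OBJECT_ID('dbo.smDEX_Build_Locks'), 'ExecIsStartUp') AS RunsAtStartup;

-- The line from dex_req.sql that sets the flag, should you ever need to re-run it manually
EXEC sp_procoption 'smDEX_Build_Locks', 'startup', 'true';
```

If the SELECT returns 0 or NULL, either the procedure is missing or the startup option was never set, which matches the symptoms above.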


Hope you find this helpful. 


Deblogged and Lastposted

Sun, 09/07/2014 - 00:57

When I was a kid, my Dad was in the Royal Air Force. When he came home and said he'd got a posting, it meant we were going off to live in a foreign country for a few years. Of course, today, a posting is just something on a blog - rather like this one. However, in this particular case, the double meaning is actually applicable.


OOF Calendar

Sat, 09/06/2014 - 14:18
FY15 vacation, train the trainer, and onsite delivery days.

  • Nov 8-11 (4 days)
  • Nov 22-30 (9 days)
  • Dec 20 - Jan 4 (16 days)
  • Jan 17-21 (4 days)
  • Feb (0 days)
  • Mar 7-15 (9 days)
  • Apr 3-5 (3 days)
  • May 23-25 (3 days)

10 High-Value Activities in the Enterprise

Sat, 09/06/2014 - 13:59

I was flipping back over the past year and reflecting on the high-value activities that I’ve seen across various Enterprise customers.  I think the high-value activities tend to be some variation of the following:

  1. Customer interaction (virtual, remote, connect with experts).
  2. Product innovation and ideation.
  3. Work from anywhere on any device.
  4. Comprehensive and cross-boundary collaboration (employees, customers, and partners).
  5. Connecting with experts.
  6. Operational intelligence (predictive analytics, predictive maintenance).
  7. Cross-sell / up-sell and real-time marketing.
  8. Development and ALM in the Cloud.
  9. Protecting information and assets.
  10. Onboarding and enablement.

At first I was thinking of Porter’s Value Chain (Inbound Logistics, Operations, Outbound Logistics, Marketing & Sales, Services), which does help identify where the action is. Next, I reflected on how, when we drive big changes with a customer, it tends to be around changing the customer experience, the employee experience, or the back-office and systems experience.

You can probably recognize how the mega-trends (Cloud, Mobile, Social, and Big Data) influence the activities above, as well as some popular trends like Consumerization of IT.

High-Value Activities in the Enterprise from the Microsoft “Transforming Our Company” Memo

I also found it helpful to review the original memo from July 11, 2013 titled Transforming Our Company.  Below are some of my favorite sections from the memo:

Via Transforming Our Company:

We will engage enterprise on all sides — investing in more high-value activities for enterprise users to do their jobs; empowering people to be productive independent of their enterprise; and building new and innovative solutions for IT professionals and developers. We will also invest in ways to provide value to businesses for their interactions with their customers, building on our strong Dynamics foundation.

Specifically, we will aim to do the following:

  • Facilitate adoption of our devices and end-user services in enterprise settings. This means embracing consumerization of IT with the vigor we pursued in the initial adoption of PCs by end users and business in the ’90s. Our family of devices must allow people to be more productive, and for them to easily use our devices for work.

  • Extend our family of devices and services for enterprise high-value activities. We have unique expertise and capacity in this space.

  • Information assurance. Going forward this will be an area of critical importance to enterprises. We are their trusted partners in this space, and we must continue to innovate for them against a changing security and compliance landscape.

  • IT management. With more IT delivered as services from the cloud, the function of IT itself will be reimagined. We are best positioned to build the tools and training for that new breed of IT professional.

  • Big data insight. Businesses have new and expanded needs and opportunities to generate, store and use their own data and the data of the Web to better serve customers, make better decisions and design better products. As our customers’ online interactions with their customers accelerate, they generate massive amounts of data, with the cloud now offering the processing power to make sense of it. We are well-positioned to reimagine data platforms for the cloud, and help unlock insight from the data.

  • Customer interaction. Organizations today value most those activities that help them fully understand their customers’ needs and help them interact and communicate with them in more responsive and personalized ways. We are well-positioned to deliver services that will enable our customers to interact as never before — to help them match their prospects to the right products and services, derive the insights so they can successfully engage with them, and even help them find and create brand evangelists.

  • Software development. Finally, developers will continue to write the apps and sites that power the world, and integrate to solve individual problems and challenges. We will support them with the simplest turnkey way to build apps, sites and cloud services, easy integration with our products, and innovation for projects of every size.

A Story of High-Value Activities in Action

If you can’t imagine what high-value activities look like, or what business transformation would look like, then have a look at this video:

Nedbank:  Video Banking with Lync

Nedbank was a brick-and-mortar bank that wanted to go digital and not just catch up to the cloud world, but leapfrog into the future.  According to the video description, “Nedbank initiated a program called the Integrated Channel Strategy, focusing on client centered banking experiences using Microsoft Lync. The client experience is integrated and aligned across all channels and seeks to bring about efficiencies for the bank. Video banking with Microsoft Lync gives Nedbank a competitive advantage.”

The most interesting thing about the video is not just what’s possible, but that it’s real and happening.

They set a new bar for the future of digital banking.

You Might Also Like

Continuous Value Delivery the Agile Way

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

You must forcibly set "Copy Local" to true when referencing SharePoint Client Object Model assemblies

Sat, 09/06/2014 - 11:04

I recently ran into an issue when deploying a provider-hosted app on-premises.  Everything ran perfectly in my dev environment (of course it would, it's all on one machine), so I deployed to the remote web server in my distributed environment: a three-server SharePoint farm and one remote web server.

I published my web site to my dev machine, then copied the installer to my remote web server and deployed the web site.  I then installed my app from the app catalog. The app installed OK, but when forwarded to my landing page I received the infamous “An error occurred while processing your request.”  Now, that error message is just built into the SharePointContext class that comes with the out-of-the-box template for provider-hosted apps; it doesn't really give you the whole story about what happened under the covers.

I was thinking, "Well, it must be some issue with the high-trust cert; maybe some OAuth problem, or the app manifest or web.config isn't right."  So I went off and checked all kinds of things, and then, almost by accident, I looked at the bin directory and noticed that the SharePoint client assemblies were all missing.  I then checked the assembly references for the project in Visual Studio and noticed that "Copy Local" was false for all of the SharePoint client assemblies.

Visual Studio uses some very specific rules to decide the default value of the "Copy Local" flag, and I don't think the SharePoint assemblies meet the criteria, so I'm a bit in the dark about why this would happen in the first place:

  1. If the reference is another project, called a project-to-project reference, then the value is true.
  2. If the assembly is found in the global assembly cache, the value is false.
  3. As a special case, the value for the mscorlib.dll reference is false.
  4. If the assembly is found in the Framework SDK folder, then the value is false.
  5. Otherwise, the value is true.

Anyway, lesson learned: always remember to check and verify that the "Copy Local" flag is set to true on all assemblies you've referenced, unless you know they'll be in the GAC.
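Behind the scenes, the "Copy Local" setting maps to the <Private> metadata on the reference in the .csproj file, so you can also verify it there. A sketch of what a forced-copy reference might look like (the version, token, and exact assembly name here are illustrative of the SharePoint 2013 client assemblies, not copied from my project):

```xml
<Reference Include="Microsoft.SharePoint.Client, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c">
  <!-- Copy Local = true: the assembly is copied to the bin folder on build -->
  <Private>True</Private>
</Reference>
```

When <Private> is missing entirely, Visual Studio applies the default rules listed above, which is how a GAC-registered assembly silently ends up not being copied.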

Tasks 365 SharePoint 2013 Hosted App – Integrating CRM 2013 and SharePoint 2013 online.

Sat, 09/06/2014 - 06:00

Tasks 365 is a SharePoint-hosted app that lets users manage all of their active tasks, both in the SharePoint portal and in CRM 2013 Online, in a single place through a calendar interface.


It demonstrates one possible way we can integrate CRM and SharePoint.

More Details on the feature

It was originally created as an autohosted app; however, Microsoft later discontinued autohosted apps for SharePoint, so it was converted to a SharePoint-hosted app.

The app can be downloaded here

Click below to see the working demo.

username -

password – Secure!1


Shraddha Dhingra

MHacks resources from Microsoft

Sat, 09/06/2014 - 04:29

Kudos to all of the hackers who came to the University of Michigan the weekend of Sept 5-7, 2014 for the MHacks event.

Here are some resources that developers were using this weekend, gathered in one place:

Developer Centers

Windows Azure Developer Center

Windows and Windows Phone Developer Center


DreamSpark – free software for students

BizSpark – free software and Azure cloud services for startups

Microsoft Virtual Academy – free online training


Additionally, if you are a University of Michigan student,

  • Apply for the Software Dev Challenge in Fall 2014 by Mon, Sept 8. In this challenge, taught by Dr. Jeff Ringenburg, teams of four students will develop software apps. Each team will be provided with hardware and software to complete their challenge and will receive mentorship from Microsoft engineers.
  • Your recruiter is David Daniels.
  • Your Microsoft technical evangelist is Jennifer Marsman.


Finally, let us know about the apps you built on Microsoft technology – we love to highlight great stories!

Small Basic - Traffic Light Challenge

Sat, 09/06/2014 - 00:00

One of this month's forum challenges was to create a traffic light system.

Graphics Challenge 2

Draw a traffic light that changes through the correct lighting sequence, perhaps using the Timer object.
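One minimal way to approach the challenge (a sketch only, using the UK red, red+amber, green, amber sequence; this is not either of the submitted answers) might be:

```
' Traffic light sketch in Small Basic
GraphicsWindow.Width = 140
GraphicsWindow.Height = 320
GraphicsWindow.BrushColor = "DimGray"
GraphicsWindow.FillRectangle(40, 20, 60, 200)  ' the housing

state = 0
Timer.Interval = 1500   ' change lights every 1.5 seconds
Timer.Tick = OnTick
OnTick()                ' draw the initial state

Sub OnTick
  ' Decide which lamps are lit for the current state
  red = "Gray"
  amber = "Gray"
  green = "Gray"
  If state = 0 Then
    red = "Red"
  ElseIf state = 1 Then
    red = "Red"
    amber = "Orange"
  ElseIf state = 2 Then
    green = "Lime"
  Else
    amber = "Orange"
  EndIf
  GraphicsWindow.BrushColor = red
  GraphicsWindow.FillEllipse(50, 30, 40, 40)
  GraphicsWindow.BrushColor = amber
  GraphicsWindow.FillEllipse(50, 95, 40, 40)
  GraphicsWindow.BrushColor = green
  GraphicsWindow.FillEllipse(50, 160, 40, 40)
  state = Math.Remainder(state + 1, 4)
EndSub
```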

We have 2 great answers in, the first by Martin from Germany.

And the second by NaochanOn from Japan which uses different coloured lights.

You can view them in action (as well as see the source code) using Silverlight with the following links:

Note that you can host any of your Small Basic programs online in this way once it is published and you have its 6-character ID.

Issues with Codeplex 09/05 - Resolved

Fri, 09/05/2014 - 22:18

Final Update: Friday, Sep 05 2014 05:50 AM UTC

The issue is now resolved and customer impact is fully mitigated.

We sincerely thank you for your patience and apologize for the inconvenience.


Initial Update: Friday, Sep 05 2014 05:19 AM UTC

We are currently experiencing issues with CodePlex: the CodePlex site is not rendering properly.

Our DevOps team is engaged and actively investigating to mitigate the issue.

We apologize for the inconvenience and appreciate your patience.

-MSDN Service Delivery Team

Mysterious Mushrooms Game Demo

Fri, 09/05/2014 - 21:43

Demo of my first published Unity game. You can find it in the Windows Store here

Brighten up everyone's day! Become a shining September Small Basic Guru!

Fri, 09/05/2014 - 17:19
As we in the Northern hemisphere watch the leaves turn brown, and the days grow shorter once more, we mourn the onset of darkness and cold. Winter is coming, and it may be a long one! (heard that somewhere before...)

So, my mighty guru word warriors, light up our hearts and minds with words of wisdom!

Send white hot ideas, spark off imaginations, light the way to a better future!

Let your intellectual outpourings enlighten your readers and lighten their burden, and quench their thirst for knowledge!

Beat back the darkness with laser sharp wit and broad spectrum facts.

Light a fire in our imaginations!

Become a beacon for awesomeness!

Shine so brightly as to become stars, and you shall be worshiped by us, as we bask in your technical glory!


Your time has come!

That time is NOW!


All you have to do is add an article to TechNet Wiki from your own specialist field. Something that fits into one of the categories listed on the submissions page. Copy in your own blog posts, a forum solution, a white paper, or just something you had to solve for your own day's work today.


Drop us some nifty knowledge, or superb snippets, and become MICROSOFT TECHNOLOGY GURU OF THE MONTH!


This is an official Microsoft TechNet recognition, where people such as yourselves can truly get noticed!




1) Please copy over your Microsoft technical solutions and revelations to TechNet Wiki.

2) Add a link to it on THIS WIKI COMPETITION PAGE (so we know you've contributed)

3) Every month, we will highlight your contributions, and select a "Guru of the Month" in each technology.


If you win, we will sing your praises in blogs and forums, similar to the weekly contributor awards. Once "on our radar" and making your mark, you will probably be interviewed for your greatness, and maybe eventually even invited into other inner TechNet/MSDN circles!


Winning this award in your favoured technology will help us identify the active members in each community.


Feel free to ask any questions below.


More about TechNet Guru Awards


Thanks in advance!
Ed Price & Pete Laker

SharePoint 2013 Custom List displaying duplicate(?) modification entries

Fri, 09/05/2014 - 13:38

This article was written to help explain the behavior some users see when the "perfect storm" occurs in their SharePoint Custom List or Document Library. What do I mean by the "perfect storm"? I'm referring to what occurs when a SharePoint site admin applies a particular combination of settings to either a Custom List or a Document Library. If you've experienced this in your own environment, or have had to help someone resolve it, then you know how difficult it is to even explain when it happens. So I'm going to do my best to outline a real-world scenario where this occurred, why it occurred, and how best to get around it.

As a SharePoint Site administrator you've been given the task of finding a solution for the following request:

You've been dubbed the resident SharePoint guru of Contoso University's literature department.  Russell Johnson, one of the professors, has asked you to add a new sub site under the site  Your job is to evaluate Professor Johnson's expectations from the site and provide a solution.

The professor and the rest of his team are planning to host a 'Short Stories' competition for their young authors.  The students are expected to register on your new SharePoint site and upload a copy of their original short story; each teacher will then critique the students' work and elect a winner.

After (not so much) deliberation, you come up with the following sub site settings for Professor Johnson's upcoming competition:

  • First, you create a Custom List under the http://cu/Liter8r/101 master site.
  • Second, you go to the new list's List Settings and enable versioning in order to keep track of any changes made to the items throughout the competition, using just the minimum setting to create a new version each time an item is edited.


  • Then, while you're in the list settings, you add a new column so that every new item has its own comments field. You set it up for multiple lines of text, and since versioning is enabled, the option to append changes to existing text is available, so you enable that too.


  • Add each of the students from the literature classes involved to the site permissions so they may create their own items.
  • At this point you explain to the students and teachers involved with the competition how to use the site:

    • The student goes to the new SharePoint site and selects +New Item.
    • As part of the new item creation process, the student gives the item a name that includes the student's name and the title of the short story, then clicks OK to create it.
    • The student can now click on their new item to open it and add comments to the new multi-line text column you added to the list.
    • From the ribbon at the top of the open item, the student can edit the item further or attach a file. In this case the student attaches a file: they upload their short story in Word format to the item and click OK.

As all the students follow these steps, the professor and his team can open the newly added items, open the attached Word documents, read them, and add their own annotations in edit mode. They can then add a comment to the item and move on to the next student's item.

Make sense so far? Does it sound like a good solution for this project's requirements? Well, this is what actually happens…

Users, both students and teachers in this case, will observe what appear to be duplicate entries on all of these items. The trouble is that they aren't duplicates. This behavior is actually by design: it is the normal result of the versioning and append features each fulfilling their designed roles. When a custom column with the append option is added to the list, any data placed in that column populates both the item's own field and one of the metadata fields of the attached document.

Since appended text is not tracked the way versions are, the column keeps whatever data is in the comments field until it is manually changed during an edit of the item or otherwise removed. So each iteration in the item's modification list appears to duplicate the comments added by the last user; in fact, it is just appending whatever the last update was to the comments field. Add to the confusion by editing the attachment and making changes of any kind, and you'll see what appear to be multiple duplicates. Why? Because opening the item for editing places a new modification entry in the list showing the user's name and the date and time the edit started (appending the latest comment to it, even if that comment was not made by the current user). Another modification entry is added whenever the attachment, while open for editing, is changed and saved; the document doesn't have to be closed for the new entry to appear, only changed and saved, either manually or by AutoSave. So depending on how long the document is open for editing and how often changes are made and saved, you will see what would otherwise look like duplicate modification entries. Now that was a mouthful! See why I thought it best to describe a scenario with a complete implementation of this configuration? Try explaining this in two sentences or less to a support engineer and expect to get assistance that actually addresses your problem.

One other thing to note, in order to debunk the idea that these are bugged duplicate entries, is to look closely at the modification date/time stamps on each entry. If you are testing your theory, you are likely running through the steps quickly to see if it duplicates; if so, then yes, it will look like a duplicate, because the time it took you to run through the steps was probably within a minute. If you go through each step slowly, even waiting a full sixty seconds between steps, you will see that each entry in your modification list takes place at a very different time. Thus: not duplicated, but behavior completely natural for the versioning and append features under these circumstances.

So now what? How do you get around this behavior? In my experience you have a couple of options. You can remove the append option from the custom column you created to hold comments. You won't get a quick and easy modification list simply by opening the Custom List item, but you can open the item and select Version History from the ribbon. This way you can still maintain a list of all changes to the document, including comments, but without the appearance of duplicate entries, because the comments are no longer appended. You will get a list of strictly changed/modified occurrences, by whom and at what date/time.

The other option? You can use any of the SharePoint templates that suit this process; just don't use versioning AND append together to track changes.

I hope this proves helpful to some of you out there struggling with this scenario and tricked into thinking it's a bug, or that something in your configuration is "broken". It's not broken. It's just not a great combination of features.
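As an aside, if you prefer script to the UI, the append option from the first workaround can also be switched off with the SharePoint server object model. This is only a sketch under assumed names (the site URL, list name, and "Comments" column are all hypothetical):

```powershell
# Run in the SharePoint 2013 Management Shell on a farm server.
$web  = Get-SPWeb "http://cu/Liter8r/101/ShortStories"   # hypothetical site URL
$list = $web.Lists["Short Stories"]                      # hypothetical list name
$field = [Microsoft.SharePoint.SPFieldMultiLineText]$list.Fields["Comments"]
$field.AppendOnly = $false   # stop appending; comments become a normal multi-line field
$field.Update()
$web.Dispose()
```

Versioning stays on, so Version History still records every change; only the append behavior that caused the apparent duplicates is removed.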


DirectX Tool Kit: Now with GamePads

Fri, 09/05/2014 - 13:04

The XInput API looks almost trivial to use at first glance: it is basically two simple C APIs with very simple parameters. There are, however, a number of subtleties that have crept in over the years, including the split between Windows 8 and previous releases. There is also a potential performance problem if you naively search every frame for multiple gamepads that are not currently connected, due to the underlying overhead of device enumeration when looking for newly connected gamepads. Also, while XInput was available on Xbox 360, the Xbox One makes use of a WinRT IGamePad API instead.
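The enumeration cost can be amortized in the obvious way: poll connected pads every frame, but retry disconnected slots only every so often. Here is a sketch of the idea (my own illustration, with the expensive connect check stubbed out; DirectX Tool Kit's internals may differ):

```cpp
#include <array>

// One logical gamepad slot with a throttle counter for reconnect attempts.
struct PadSlot
{
    bool connected = false;
    int  framesUntilRetry = 0;   // skip expensive checks while > 0
};

class GamePadPoller
{
public:
    static constexpr int kRetryInterval = 60;  // retry roughly once a second at 60 fps
    int expensiveChecks = 0;                   // instrumentation for this example

    // Poll one slot per frame; returns true if the pad is connected afterwards.
    bool Poll(int slot)
    {
        PadSlot& pad = pads_[slot];
        if (!pad.connected)
        {
            if (pad.framesUntilRetry-- > 0)
                return false;                  // throttled: skip the expensive check
            pad.framesUntilRetry = kRetryInterval;
        }
        ++expensiveChecks;                     // a real version would call XInputGetState here
        pad.connected = CheckConnected(slot);
        return pad.connected;
    }

private:
    bool CheckConnected(int) { return false; } // stub: nothing plugged in
    std::array<PadSlot, 4> pads_{};
};
```

Polling a disconnected slot for 600 frames then performs only 10 expensive checks instead of 600, while a connected pad is still read every frame.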

At this point, it would be useful to have a nice abstraction to handle all this that you could code against while keeping your application's code elegant and simple to use. As usual, when looking for a design to start with, it turns out this problem was already nicely solved for C# developers with the XNA Game Studio GamePad class.

GamePad class

The September 2014 release of DirectX Tool Kit includes a C++ version of the GamePad class. To make it broadly applicable, it makes use of XInput 9.1.0 on Windows Vista or Windows 7, XInput 1.4 on Windows 8.x, and IGamePad on Xbox One. It's a simple class to use, and it takes care of the nuanced issues above. It implements the same thumb stick deadzone handling system as XNA, which is covered in detail by Shawn Hargreaves in his blog entry "Gamepads suck". The usage issue that continues to be the responsibility of the application is ensuring that you poll it fast enough to not miss user input, which mostly means ensuring your game has a good frame rate.
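The radial deadzone style that the class borrows from XNA can be illustrated with a small standalone sketch (my own illustration, not the library's actual code): small stick deflections are ignored entirely, and the remaining range is rescaled so output still spans 0 to 1 while preserving the stick's direction.

```cpp
#include <algorithm>
#include <cmath>

// Radial deadzone: treat (x, y) as a vector, zero it while its magnitude is
// inside the deadzone, then rescale the live range (deadZone..1) onto (0..1).
void ApplyRadialDeadZone(float x, float y, float deadZone,
                         float& outX, float& outY)
{
    float magnitude = std::sqrt(x * x + y * y);
    if (magnitude <= deadZone)
    {
        outX = 0.f;
        outY = 0.f;
        return;
    }
    float normalized = std::min((magnitude - deadZone) / (1.f - deadZone), 1.f);
    outX = x / magnitude * normalized;
    outY = y / magnitude * normalized;
}
```

A per-axis deadzone would snap each axis to zero independently, which distorts diagonal movement; handling the stick as a vector is the main point of the "Gamepads suck" article referenced above.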

See the CodePlex documentation wiki page on the new class for details and code examples.

The headset audio features of XInput are not supported by the GamePad class. Headset audio is not supported by XInput 9.1.0, has some known issues in XInput 1.3 on Windows 7 and below, works a bit differently in XInput 1.4 on Windows 8, and is completely different again on the Xbox One platform.

The GamePad class is supported on all the DirectX Tool Kit platforms: Win32 desktop applications for Windows Vista or later, Windows Store apps for Windows 8.x, and Xbox One. You can create and poll the GamePad class on Windows Phone 8.x as well, but since there's no support for gamepads on that platform it always returns 'no gamepad connected'.

Xbox One Controller

Support for the Xbox One Controller on Windows was announced by Major Nelson in June, and drivers are now hosted on Windows Update, so using it is as simple as plugging it into a Windows PC via a USB cable (see the Xbox One support website). The controller is supported through the XInput API as if it were an Xbox 360 Common Controller, with the View button being reported as XINPUT_GAMEPAD_BACK and the Menu button being reported as XINPUT_GAMEPAD_START. All the other controls map directly, as do the left and right vibration motors. The left and right trigger impulse motors cannot be set via XInput, so they are not currently accessible on Windows.
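For illustration, here is a minimal sketch of checking those bits in an XINPUT_GAMEPAD wButtons mask. The flag values below are the ones defined in XInput.h, redefined here (in Python rather than C) so the snippet runs without the Windows SDK; decode_buttons is a made-up helper name:

```python
# Button flag values as defined in XInput.h (redefined here so this
# illustration runs without the Windows SDK headers).
XINPUT_GAMEPAD_START = 0x0010  # reported for the Xbox One Menu button
XINPUT_GAMEPAD_BACK = 0x0020   # reported for the Xbox One View button

def decode_buttons(wButtons):
    """Return the Xbox One names for the Menu/View bits in a wButtons mask."""
    names = []
    if wButtons & XINPUT_GAMEPAD_START:
        names.append("Menu")
    if wButtons & XINPUT_GAMEPAD_BACK:
        names.append("View")
    return names
```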

The Xbox One Wireless Controller is not compatible with the Xbox 360 Wireless Receiver for Windows, so you have to use a USB cable to use it with Windows. Note also that plugging it in unbinds the controller from any Xbox One it is currently set up with, so you'll need to rebind it when you want to use it with your console again.

Related: XInput and Windows 8, XInput and XAudio2

Notification: System diagnostic “Test permissions” Error

Fri, 09/05/2014 - 12:43

The “Test permissions” button in the System diagnostic on-premises component installer is currently not working. If you click it, you will see the following error message.

We’ll be addressing this error soon; it does not affect the installation of the System diagnostic on-premises component or any functionality.

The ultimate showdown of NoSQL destiny!

Fri, 09/05/2014 - 12:43
Sharks and bees and... fast Italians?!

If you've been following this blog recently, you'll have noticed that I'm having a blast trying out different data products on Azure. I recently managed to get Spark / Spark SQL (Shark's replacement) running on Azure in the same way, but rather than dedicate a post to them, I thought it'd be more fun to pit them all against each other, throwing Hive on HDInsight into the mix, in a friendly competition to analyze some data in my Azure blob storage. So without further ado, I present the ultimate showdown!

The scenario

When I thought this up, I didn't want to design (or even steal) a precise scientific benchmark like TPC-DS, since I wasn't interested in precise performance benchmarking and I wouldn't be able to tune each product sufficiently anyway. (If you want a more scientific benchmark that compares some of the products here, I found this awesome effort to be pretty informative.) Instead I wanted a simple scenario of the kind I see a lot, but artificial enough that I could play with it and release the code so anyone can replicate it. So I came up with the following: a simulated cloud application in which each VM logs operation messages to blob storage in a simple comma-separated format. For this toy example, each log entry is either a) the start of an operation, b) a success or c) a failure. The business questions we want each of our contestants to answer are:

  1. How many failures/successes did each VM get?
  2. How long did the average successful operation take on each VM?

I came up with these two questions as a good starting set for this experiment because the first can be answered with a single scan through the data, while the second requires a self-join to correlate the start and finish messages for each operation. This gives me a good feel for how the different contestants handle at least this first level of complexity.
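As a sanity check on the logic (not anything the contestants actually run), both questions can be sketched over a handful of in-memory rows; the message strings here are invented for illustration, since the post doesn't give the exact wording:

```python
from collections import defaultdict
from datetime import datetime

# Toy log rows in the post's shape: date, instance name, operation id, message.
# The message strings ("Operation started" etc.) are made up for this sketch.
rows = [
    ("2014-09-05 12:00:00", "vm0", "op1", "Operation started"),
    ("2014-09-05 12:00:03", "vm0", "op1", "Operation succeeded"),
    ("2014-09-05 12:00:01", "vm0", "op2", "Operation started"),
    ("2014-09-05 12:00:02", "vm0", "op2", "Operation failed"),
    ("2014-09-05 12:00:00", "vm1", "op3", "Operation started"),
    ("2014-09-05 12:00:05", "vm1", "op3", "Operation succeeded"),
]

# Question 1: failures/successes per VM -- a single scan through the data.
outcomes = defaultdict(lambda: {"succeeded": 0, "failed": 0})
for _, vm, _, message in rows:
    if "succeeded" in message:
        outcomes[vm]["succeeded"] += 1
    elif "failed" in message:
        outcomes[vm]["failed"] += 1

# Question 2: average successful-operation duration per VM -- effectively a
# self-join on operation id, correlating each start row with its success row.
fmt = "%Y-%m-%d %H:%M:%S"
starts = {op: datetime.strptime(d, fmt) for d, _, op, m in rows if "started" in m}
durations = defaultdict(list)
for d, vm, op, m in rows:
    if "succeeded" in m:
        durations[vm].append((datetime.strptime(d, fmt) - starts[op]).total_seconds())
avg_duration = {vm: sum(v) / len(v) for vm, v in durations.items()}
```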

The data

Now that I had a scenario in mind, it was time to cook up some data! I created a simple Azure Compute service where each instance churns out log messages as fast as it can to its own CSV log file in my Azure blob container. Each line in that file has the format "{date},{instance name},{operation id},{message}", where the date is in a Hive-friendly format, the operation ID is a GUID, and the message is a free-form string so I can throw in a little string manipulation later.

I also cooked up the data so that every tenth role instance (0, 10, 20) would have a higher random operation failure rate, and every seventh instance (0, 7, 14, 21, 28) would have a higher operation duration. This way I can double-check myself and the contestants later to see if we can catch these anomalies.
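A sketch of how such log lines with the planted anomalies could be generated; only the CSV shape and the every-10th/every-7th rule come from the text above, while the specific failure rates, durations, and message wording here are made-up illustration values:

```python
import random
import uuid
from datetime import datetime, timedelta

def log_operation(instance_index, start=datetime(2014, 9, 5, 12, 0, 0)):
    """Emit the start and finish CSV lines for one simulated operation.

    Line shape follows the post: "{date},{instance name},{operation id},{message}".
    Planted anomalies: every 10th instance fails more often, and every 7th
    takes longer. The rates/durations/messages are illustrative, not the
    post's actual values.
    """
    failure_rate = 0.30 if instance_index % 10 == 0 else 0.05
    seconds = 10 if instance_index % 7 == 0 else 2
    outcome = "failed" if random.random() < failure_rate else "succeeded"
    op_id = uuid.uuid4()
    name = "instance_%d" % instance_index
    fmt = "%Y-%m-%d %H:%M:%S"
    return [
        "%s,%s,%s,Operation started" % (start.strftime(fmt), name, op_id),
        "%s,%s,%s,Operation %s"
        % ((start + timedelta(seconds=seconds)).strftime(fmt), name, op_id, outcome),
    ]
```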

With that in hand, I deployed it to Azure with 30 (small) instances. They were churning out data pretty fast so I shut it down after about 10 minutes when I had 35.1 GB of data to chew on: this seemed like enough data to make a decent first cut, but not so much that I'd have to worry about sharding across storage accounts to start with.

(Note: all the code used for this experiment is in the public github repo here.)

Enter our first contestant: Hive/HDInsight

So let's start the showdown with Hive/HDInsight: the longest-standing (and fully supported) NoSQL product in this arena. I spun up an 8-node HDInsight cluster backed by a SQL Azure Hive metastore (so I can reuse it later with the other contestants), created an external Hive table pointing to my logs, and started issuing queries using the awesome, relatively new Hive portal on the HDInsight cluster.

I created two queries for the two business questions above: a simple COUNT() query for what I'll call the "Count Errors" scenario, and an aggregate over self-join query for what I'll call the "Average Duration" scenario. The average elapsed times for those scenarios on unadorned Hive were 280 seconds and 489 seconds respectively (and yes, the results did capture the planted anomalies so I have decent confidence in the correctness). When I created 16-node clusters, the times went down to 218 seconds and 434 seconds, which is an OK speedup but not exactly linear scaling.

So far this was unadorned Hive on HDInsight 3.1. But one of the exciting changes that came with HDInsight 3.1 is the introduction of Tez to Hive, which enables a more sophisticated execution engine than plain-old Map-Reduce. It's not on by default, but it's definitely worth trying out. So I changed the execution engine to Tez (SET hive.execution.engine=tez) and tried it out. The results were surprisingly mixed: the best speedup was in the 16-node case on the Average Duration query, which executed in the blazingly fast time of 305 seconds (spoiler alert: this is the fastest among all contestants for this scenario). Interestingly, though, in the 8-node case this scenario was slower, executing in an average of 574 seconds. For the Count Errors scenario the results are more straightforward: Tez sped up the scenario so it executes in about 2/3 the time of Hive-over-Map-Reduce in both the 8-node and 16-node configurations.

Here is the summary of the Hive/MR (default) and Hive/Tez results:

And in comes the youngest challenger: Presto

And next comes Presto: the youngest entrant in this showdown. Since Presto is a memory-hungry beast, in my basic configuration I had 8 A6 workers (which is the memory-intensive VM at 28GB RAM and 4 cores), a Large coordinator and 1 Small Hive Metastore. I used the same backing SQL Azure DB for the metastore as I did for Hive so I had the same table definition right away, then I used the Presto CLI from my box to issue the queries. I modified the Count Errors and Average Duration queries to match the more ANSI SQL dialect of Presto (I got the same results as Hive, so again I feel good about the correctness of my results).

The results were pretty impressive, especially on the Count Errors scenario: on 8 nodes, Presto blew away 8-node HDInsight at 55 seconds vs. the best 8-node Hive result of 182 seconds. But wait: is it fair to compare the 8 A6 nodes Presto got vs. the 8 Large nodes HDInsight uses? I can't give Presto a fair chance on Large nodes, and I can't customize node sizes on HDInsight, so I'm stuck with an apples-to-oranges comparison here. They have the same number of cores, but A6 nodes cost about twice as much as Large, so I think the fair comparison is against 16-node HDInsight. Even with that allowance, Presto strongly wins the Count Errors scenario: the best Hive result there is 133 seconds, still more than twice as slow as Presto's 8-node result.

On the Average Duration scenario, the results are good but less impressive. 8-node Presto executed the query in an average of 462 seconds, which beats 8-node Hive (489 seconds for Hive/MR, 574 seconds for Hive/Tez) but is much slower than 16-node Hive/Tez (305 seconds).

So here is the summary of the rumble of Presto vs. Hive/Tez if we take the same rough price point (8 A6 vs. 16 Large):

And here is the rumble if we take the same number of cores (8 A6 vs. 8 Large):

Note: the results get switched around a bit if it's 16 A6 vs. 16 Large, since at that scale Presto is a bit slower than Hive/Tez on the Average Duration scenario, though it beats it on the Count Errors one.

One more note on Presto: one thing I like about it that doesn't show up much here is that, of the three products I tried, it struck me as the most predictable in its performance at different scale points. You've seen the surprising scale results I got out of Hive/Tez and Hive/MR, but with Presto, when I tried 4 cores vs. 8 cores vs. 16 cores, on both scenarios I got nice, almost (but not quite) linear scaling in the duration: 150->55->38 on Count Errors, and 66->462->366 on Average Duration. Also, if you look at individual query durations (instead of averages) at each scale point, Presto typically has much less variation at any one scale point.

And finally: our electric Shark!

And now comes my newest and most troublesome addition to Blue Coffee: Spark and Spark SQL/Shark (it's really Spark SQL now, but I just like calling it Shark too much). I have to admit: this took me by far the most effort to get up and running, and even now I'm pretty sure I don't have it running that well. My troubles got much less severe once I moved away from the master branch in Spark (which is way too active/unstable) and the standalone, discontinued Shark repository, to the branch-1.1 branch with the packaged Spark SQL that replaced Shark. However, even after that move I still encountered problems. The main performance problem, which I haven't fully gotten to the bottom of, seems to be in the shuffling of data between nodes (BlockFetcher): I had to bump up a couple of connection timeouts to even get the more complex Average Duration scenario working, and it still ran too slowly for what it is, which I think comes down to problems there.

For this post, I'm not using Spark's full power: I'm not loading the data into RDDs first, just querying it from cold storage directly. As a follow-up I hope to experiment with that, but for now, since I'm not worried about loading things into memory, I can get away with using Large instances for the workers (instead of A6), so I can compare against HDInsight directly and fairly.

So I provisioned a Spark cluster: 8 Large workers, 1 Small master node, 1 Large node for the Thrift server and 1 Small Hive metastore node. Since the Hive metastore is backed by the same Azure DB where my table definitions are already stored, and Spark SQL understands HiveQL, I just launched the same queries I used for Hive from BeeLine from within the cluster (launching them from outside fails because the Azure Load Balancer kills my connection after one minute).

The results were pretty bad. On 8 workers, Count Errors finishes in 271 seconds, which is about the same as Hive/MR on 8 nodes but much worse than Hive/Tez or Presto, and Average Duration finishes in 714 seconds, which is about 25% slower than the slowest 8-node contender so far (Hive/Tez). The worse news is that on 16 nodes the times hover about the same (on average slightly faster, but within the variance). I dug into it a tiny bit, and it seems to come back to my mysterious problem with the performance of fetching blocks among the nodes: since that's slow for whatever reason, exchanging blocks among 16 nodes overwhelms any speed gains from distributing the processing among them. It also doesn't help that by default Spark SQL has spark.sql.shuffle.partitions set to 200, which leads to several stages (especially the small ORDER BY parts I put in there) being split among 200 tiny tasks, with pretty big unnecessary overheads.

Since this is the least tuned/understood contender for me, I'll refrain from putting summary graphs in here: I fully acknowledge that this needs more work to get a fair chance in the arena, so I don't want anyone skimming this post to get the wrong impression about it from a graph.

So... now what?

My hope is that what you get out of this post isn't the specific performance numbers, since again I didn't tune any of this and it doesn't qualify as a fair performance benchmark. Instead, what I'm hoping you'll get is:

  1. a feel for these different data analysis products and what they can do, and
  2. how awesome it is that you can right now choose among all those different options (and more) to analyze your data in Azure!

In that spirit, I'm hoping to go on to expand this to other possibilities, e.g.: how about instead of loading the logs into blobs, the workers directly inserted them into an Elastic Search cluster instead? Would that make the end-to-end scenario easier? Or what if the logs went into Event Hub, and from there were consumed by both a Storm cluster that calculated average durations on the fly, and a spooler that placed the data into Azure blob store? What if the data was placed in blob store in a more sophisticated format (e.g. ORC or Parquet) than CSV? The possibilities are endless and awesome!

Another avenue I'd like to explore is higher scale: going beyond my puny 35 GB data volume here into the tens or hundreds of terabytes. It'd be great to see this work, and to learn what I'd need to shard/do to get this scenario working smoothly.

Revisiting FTP Basics

Fri, 09/05/2014 - 12:04

Table of contents

1. Fundamentals

2. How active FTP works

2.1 Flow chart of active FTP

2.2 Flow chart explanation

2.3 Network trace

3. How passive FTP works

3.1 Flow chart of passive FTP

3.2 Flow chart explanation

3.3 Network trace

4. Firewall

4.1 Firewall setting for active FTP

    4.1.1 Server side firewall rule

4.2 Firewall setting for passive FTP

    4.2.1 Server side firewall setting

5. Implicit and explicit FTPS

5.1 How explicit FTPS works

    5.1.1 Output of explicit FTPS in FileZilla

    5.1.2 Output of FTPS logs after running Log Parser tool

5.2 How implicit FTPS works

    5.2.1 Output of implicit FTPS in FileZilla

    5.2.2 Output of FTPS logs after running Log Parser tool

6. Tools used to troubleshoot FTP issues

6.1 NetMon trace

6.2 ETW trace

6.3 DebugView

6.4 Port Query

FTP Overview

1. Fundamentals

FTP stands for File Transfer Protocol. In short, FTP is a protocol for transferring files over the Internet, and it uses TCP/IP to carry out the transfer; there is no UDP component in FTP. The FTP client initiates a connection to a remote computer running the FTP “server”.

Unlike HTTP and other protocols used on the Internet, the FTP protocol uses a minimum of two connections during a session: a 'data' port and a 'command' port (also known as the control port). By default, TCP port 21 is used on the server for the control connection, but the data connection is determined by the method that the client uses to connect to the server.

FTP operates in two modes: active and passive. These two modes differ in the way the client / server chooses the port for data (file) transfer.

2. How Active FTP works

In active FTP, the client machine connects from a random port (port > 1023) to the FTP server's control port (21). The client also sends the server the IP:Port combination on which it wants to listen. The server calculates the port from that IP:Port combination and then transfers the data from its data port (20) to that client port (> 1023). The client then acknowledges this transfer.

2.1 Flow chart of Active FTP

2.2 Flow chart explanation

1. The client sends a request from a random port, along with the IP:Port combination, to the control port (21) of the server. In the figure above, the IP is the IP address of the client, and the Port is the client port that the server should connect back to, on which the client listens.

2. The server acknowledges the client’s request.

3. The server initiates a brand-new session from its data port (20) to the port calculated from the IP:Port combination in step 1.

4. The client then acknowledges the data received from the server.

The following is a typical sequence for an active-mode FTP connection:


Client: USER MyUserName
Client: PASS MyPassword
Server: 250 CWD command successful.
Client: PORT 192,168,4,29,31,255
Server: 200 PORT command successful.
<file listing is transferred>
Server: 226 Transfer complete.

2.3 Network trace

Below is the client side trace in FTP active mode. Apply the filter “ftp”, and then follow the TCP conversation.

The first trace indicates the TCP level handshake between the client and server.

Below, the client sends the request from port 3471 to the server's command port (21) and is prompted for a password. Note that the password is sent in clear text.

After the client is authenticated, it sends the IP:Port combination to the server. The port on which the client will listen is calculated by multiplying the 5th octet by 256 and adding the 6th octet: (13 * 256) + 145 = 3473.
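That PORT-argument arithmetic can be sketched as follows (a hypothetical helper written for illustration, not part of any FTP library):

```python
def parse_port_argument(port_arg):
    """Decode a PORT command argument "h1,h2,h3,h4,p1,p2" into (ip, port).

    The first four octets are the client's IP address; the data port is
    the 5th octet multiplied by 256 plus the 6th octet.
    """
    octets = [int(o) for o in port_arg.split(",")]
    ip = ".".join(str(o) for o in octets[:4])
    port = octets[4] * 256 + octets[5]
    return ip, port
```

For the trace above, parse_port_argument("192,168,4,29,13,145") yields port (13 * 256) + 145 = 3473.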

Again, the TCP handshake happens with the port calculated from the IP:Port combination (3473).

After acknowledgement, the data transfer happens from the server data port 20 to the client port.


3. How Passive FTP works

In passive FTP, the client machine connects from a random port (port > 1023) to the FTP server's control port (21). The server sends its acknowledgement to the client along with the IP:Port combination it will listen on. The client then connects to the server over that port, and the data transfer happens over it. The server then sends the acknowledgement to the client.

3.1 Flow chart of Passive FTP

3.2 Flow chart explanation

1. The client sends a PASV command from a random port to the control port of the server (21).

2. The server acknowledges this and sends the IP:Port combination to the client.

3. The client then initiates a brand-new session to the port (calculated from IP:Port combination in step 2) from a random port.

4. Then, the server sends the acknowledgement to the client.

The following is a typical sequence for a passive-mode FTP connection: 


Client: USER MyUserName
Client: PASS MyPassword
Server: 250 CWD command successful.
Server: 227 Entering Passive Mode (192,168,4,29,9,227).
<file listing is transferred>
Server: 226 Transfer complete.

3.3 Network trace

The first trace below indicates the TCP-level handshake between the client and server. The client sends the PASV command, and the server responds with the IP:Port combination; the data port is calculated the same way as before: (18 * 256) + 232 = 4840.
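The same arithmetic applies when pulling the data port out of a 227 reply; here is a quick illustrative helper (again hypothetical, not part of any FTP library):

```python
import re

def parse_pasv_reply(reply):
    """Extract (ip, data port) from a 227 reply such as
    "227 Entering Passive Mode (192,168,4,29,18,232)".

    As with the PORT command, the data port is the 5th number
    multiplied by 256 plus the 6th.
    """
    match = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if match is None:
        raise ValueError("not a PASV reply: %r" % reply)
    h1, h2, h3, h4, p1, p2 = (int(g) for g in match.groups())
    return ("%d.%d.%d.%d" % (h1, h2, h3, h4), p1 * 256 + p2)
```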

Then the data transfer happens between this port (4840) on the server and the client port.

4. Firewall

A firewall is software or hardware that checks information coming from the Internet or a network, and then either blocks it, or allows it to pass through to your computer, depending on the rules defined in the firewall settings.

FTP commands are sent over a dedicated channel called the control channel; that is the one that typically connects to the well-known FTP port 21. Any data transfer, such as a directory listing, upload or download, happens on a secondary channel called the data channel.
Opening port 21 on a firewall is an easy task. But having port 21 open only means that clients will be able to connect to the FTP server, authenticate successfully, and create and delete directories. It does not mean that clients will be able to see directory listings or upload/download files, because the data connections for the FTP server are not allowed to pass through the firewall.

Many firewalls simplify this challenge with data connections by scanning FTP traffic and dynamically allowing data connections through. Some firewalls enable such filters by default, but this is not always the case. These firewall filters detect which ports are going to be used for data transfers and temporarily open them on the firewall so that clients can open data connections.

Many firewalls do not accept new connections through an external interface; the firewall treats them as unsolicited connection attempts and drops them. Standard (active) mode FTP clients do not work in this environment, because the FTP server must make a new connection request back to the FTP client.
Firewall administrators may also be reluctant to allow passive mode FTP servers, because the FTP server can open any ephemeral port number, and firewall configurations that allow unsolicited connections to the full ephemeral port range may be considered insecure.


4.1 Firewall setting for Active FTP

Information for the client side and server side firewalls is outlined below.

4.1.1 Server side firewall rule:

For the server side firewall rule, we need to make sure that control port 21 and data port 20 are opened in the inbound and outbound rule sections, respectively.

4.2 Firewall setting for Passive FTP

Information for the client side and server side firewall settings is outlined below.

4.2.1 Server side firewall setting

On the server side, the server decides which port to send to the client, so we need to make sure that only that port range is allowed in the inbound and outbound rules.

For example, if you want the server to choose a port in the 5000-10000 range, make sure this range is allowed in both the inbound and outbound rules. Also, specify this range under the FTP Firewall Support section at the server level in IIS Manager.

In the figure below, we made all the ports between 1024 and 65535 available.

5. Implicit and Explicit FTPS

One of the many limitations of FTP is a general lack of security: for example, user names and passwords are transmitted in clear text, and data is transferred with no encryption. To address this, FTP over SSL (FTPS) was introduced.

FTPS (SSL/TLS) has two types:

          1. Implicit

          2. Explicit

In explicit FTPS, the client connects to the control FTP port (port 21) and explicitly switches into secure (SSL/TLS) mode with "AUTH TLS".

In implicit FTPS, on the other hand, SSL/TLS mode is assumed from the start of the connection, and the server normally listens on TCP port 990 rather than 21.

FTPS adds encryption security to FTP connections. Therefore, the benefit is that you can use FTP technology with SSL encryption to transfer data.


5.1 How Explicit FTPS works

The FTP client connects over the control/command channel (port 21), and then the client can negotiate SSL for either the command/control channel or the data channel, using FTP commands such as AUTH, PROT and CCC.


5.1.1 Output of explicit FTPS in FileZilla:


5.1.2 Output of FTPS logs after running Log Parser tool:

5.2 How implicit FTPS works

Implicit FTPS takes SSL one step further than simply requiring that SSL-related commands be sent first, as with explicit SSL. With implicit FTPS, an SSL handshake must be negotiated before any FTP commands can be sent by the client. In addition, whereas explicit FTPS allows the client to decide whether to use SSL, implicit FTPS requires that the entire FTP session be encrypted. Basically, the way implicit FTPS works is that the FTP client connects to the command/control channel, in this case on port 990, and immediately performs an SSL handshake. After SSL has been negotiated, the FTP client can send the remaining FTP commands for the session.


5.2.1 Output of implicit FTPS in FileZilla:

5.2.2 Output of FTPS logs after running Log Parser tool:

6. Tools used to troubleshoot FTP issues

Information on some of the tools used to troubleshoot FTP issues is outlined below.

6.1 NetMon trace

This tool helps you understand which ports are being used, and also find out whether any firewall or networking device is intercepting the client-server requests. This tool will not help with FTP over SSL, because the packets are encrypted.

6.2 ETW Trace

Use this tool to collect an FTP ETW trace. To do that, open a Command Prompt on the server and follow the steps below (for more details, visit Collecting ETW trace for FTP).

1. Type “logman start "ftp" -p "IIS: Ftp Server" 255 5 -ets”.

2. Reproduce the issue.

3. To stop the trace, type “logman stop "ftp" -ets”.

4. Once you have the FTP ETW trace, use LogParser to parse the trace file. The command is:

LogParser.exe "select EventTypeName, UserData from ftp.etl" -e 2 -o:DATAGRID -compactModeSep " | " -rtp 20

6.3 DebugView

DebugView is an application that lets you monitor the debug output on your local system, or any computer on the network that you can reach via TCP/IP. It is capable of displaying both kernel-mode and Win32 debug output, so you don't need a debugger to catch the debug output your applications or device drivers generate. You also do not need to modify your applications or drivers to use non-standard debug output APIs (for more details, visit DebugView).

6.4 Port Query

This tool, also known as PortQry, is a command-line utility that you can use to help troubleshoot TCP/IP connectivity issues. It reports the port status of target TCP and User Datagram Protocol (UDP) ports on a local or remote computer, i.e., whether the ports are blocked or listening. The tool also has a user interface, which can be downloaded here.

If the output shows “Filtered”, this indicates that port 4862 is blocked; otherwise it would be displayed as LISTENING.


Drupal 7 Appliance - Powered by TurnKey Linux