
Feed aggregator

Top Gun Project "Future Creators Camp" Session 2: "Kinect" (7/30)

MSDN Blogs - 9 hours 18 min ago

Hello everyone. This is Watanabe, an evangelist at Microsoft Japan.
The Imagine Cup 2014 world finals are under way in Seattle, and in Japan the students aiming for the next Imagine Cup are already getting started.

The second session of the Top Gun Project "Future Creators Camp" covered Kinect, one of the Microsoft products most popular with science and engineering students in recent years. The instructor was Shinji Chiba, Microsoft Japan's best-known Kinect evangelist.

Chiba began with an overview of Kinect, walking through its features and capabilities with demos, some of which the students helped with. What's great about Kinect is that you can grasp intuitively what it makes possible. Kinect is a sensor with gesture and speech recognition that is dramatically cheaper than comparable earlier sensors and is available in many countries around the world. And because an SDK is provided that lets anyone develop for it easily, its use is spreading across many fields; the video case studies of Kinect in action were probably of real interest to the students as well.

After the overview, the students split into groups to discuss how Kinect could be applied to the solutions they are developing, and each group presented the results of its discussion. Whether or not they can actually be implemented, there were some quite interesting ideas.

In this Top Gun Project, the students are challenged not just to come up with ideas but also to implement (develop) them. We look forward to seeing what solutions ultimately emerge.

CodePlex July 2014 Refresh

MSDN Blogs - 9 hours 23 min ago

In addition to the release of DirectXMesh, I've also updated the other CodePlex projects with July 2014 releases.

DirectX Tool Kit

http://go.microsoft.com/fwlink/?LinkId=248929

The July 2014 release includes some minor fixes to DirectXTK for Audio and some updates related to the latest Xbox One XDK.

The many versions of DirectXTK Simple Sample have been updated on MSDN Code Gallery.

  • SimpleSample - A Win32 desktop sample (no audio)
  • SimpleSample - A Win32 desktop sample that uses XAudio 2.8 (Windows 8.x)
  • SimpleSample - A Win32 desktop sample that uses XAudio 2.7 (Windows Vista or later) from the legacy DirectX SDK
  • SimpleSample - A Windows Store app sample for Windows 8
  • SimpleSample - A Windows Store app sample for Windows 8.1
  • SimpleSample - A Windows Phone 8 sample
  • SimpleSample - A Win32 desktop sample that demonstrates DirectXTK in combination with DXUT.

Related: DirectXTK (March 2012), DirectXTK Update (Jan 2013), CodePlex VS 2013 Refresh, DirectX Tool Kit for Audio

DirectXTex

http://go.microsoft.com/fwlink/?LinkId=248926

The July 2014 release fixes some bugs in the texconv command-line tool and addresses some customer requests, as well as including some updates for the latest Xbox One XDK.

Related: DirectXTex (October 2011), DirectXTex Update (June 2013), DirectXTex and Effects 11 Update (August 2013), CodePlex VS 2013 Refresh

Effects for Direct3D 11

http://go.microsoft.com/fwlink/p/?LinkId=271568

The July 2014 release (11.10) is a minor update with some code review feedback.

Related: Effects for Direct3D 11 Update (October 2012), DirectXTex and Effects 11 Update (August 2013), CodePlex VS 2013 Refresh

Samples: The sample package for this version of Effects was refreshed, along with a new sample (a port of the Direct3D 10 Instancing10 sample) and the publication of the "Effects" tutorial series (Tutorial11-14) on MSDN Code Gallery.

DXUT for Direct3D 11

http://go.microsoft.com/fwlink/?LinkId=320437

The July 2014 release (11.06) has a number of bug fixes for device enumeration, the change device (F2) dialogs, and numerous other code review feedback changes.

Related: DXUT for Win32 Desktop Update (September 2013), CodePlex VS 2013 Refresh

Samples: The samples based on this version of DXUT posted on MSDN Code Gallery were also updated.

New ExpressRoute Locations and Partners

MSDN Blogs - 9 hours 23 min ago

This post is a translation of New ExpressRoute locations and partners, originally published on July 21.

Microsoft recently announced seven new ExpressRoute locations in the US and Asia, along with new partnerships with Orange and IIJ. With these additional locations and partners, even more customers can connect to Azure over ExpressRoute.

New locations

When general availability of ExpressRoute was announced on May 12, 2014, the service was offered through three locations in the US and Europe (Silicon Valley, California; Washington, D.C.; and London, UK). As a reminder, an ExpressRoute location is a facility where an ExpressRoute connection into the Azure data centers can be made.

Seven new locations are now being added:

  • United States: Atlanta, Chicago, Dallas, New York, Seattle
  • Asia: Hong Kong, Singapore

With this announcement, ExpressRoute is available at ten locations across the US, Europe, and Asia. The ExpressRoute locations and the partners serving them are as follows.

United States (locations: Atlanta, Chicago, Dallas, New York, Seattle, Silicon Valley, Washington DC)

  • AT&T
  • Equinix
  • Level 3 (EVPL service)
  • Level 3 (IP VPN service)
  • Verizon

Europe (locations: London, Amsterdam)

  • British Telecom
  • Equinix
  • Level 3 (IP VPN service) - coming soon
  • Orange - coming soon
  • TeleCityGroup - coming soon
  • Verizon - coming soon

Asia (locations: Hong Kong, Singapore)

  • Equinix
  • Internet Initiative Japan (IIJ)
  • SingTel - coming soon

(Translator's note: we have asked the author of the original post to add the Japan East and Japan West regions to the list above; for now it is left as in the original.)

Connecting through a network service provider

Customers who have a contract with one of the network service providers (NSPs) above can connect to Azure from their on-premises facilities through those locations. Choose a location close to the Azure region you primarily plan to serve.

Connecting through other service providers

Customers under contract with other NSPs can also connect to Azure:

  • Check with your NSP whether it has a presence at one of the exchange locations above.
  • Have your NSP extend your network to the exchange location of your choice.
  • Order an ExpressRoute circuit through the exchange provider to connect to Azure.

For the difference between network service providers and exchange providers, see my earlier blog post introducing ExpressRoute.

New partners

Microsoft has announced a new strategic relationship with Orange Business Services. Through this partnership, our mutual customers in Europe will be able to connect to Azure ExpressRoute via the Orange Business VPN Galerie service. Business VPN Galerie is Orange's integrated WAN-to-cloud solution, providing private access to the Microsoft Azure platform without deploying dedicated infrastructure. Microsoft has also announced a new strategic relationship with Internet Initiative Japan (IIJ), through which our mutual customers in Japan will be able to connect to Azure over ExpressRoute. Service for early-adopter customers of these partners will begin in the coming months. Customers who wish to sign up should contact their Microsoft account team or sales representative.

In addition, we are pleased to announce that British Telecom has begun offering ExpressRoute in Europe.

More information on ExpressRoute: http://azure.microsoft.com/ja-jp/services/expressroute/

Keyboard Filter Driver

MSDN Blogs - 9 hours 28 min ago

Is everyone enjoying their debugging life?

This is I-sawa from the WDK support team.

 

I'm writing this month's article as well, but it's not another tool introduction!

This time I'd like to introduce keyboard filter drivers.

 

Just hearing the words "keyboard filter driver" reminds me of a foolish thing I once did.

Back when I was still young, I was implementing a keyboard filter driver. It made so many things so easy that I wondered what would happen if I suppressed the Ctrl + Alt + Del combination so the system could no longer be forcibly terminated... and I installed the driver that suppressed those keys not on a test PC but on my work PC.

You can guess the punch line: I could no longer log in to Windows at all!

That experience taught me to always, without exception, do driver development in a test environment. Please be careful too!

Incidentally, I recovered that time by attaching the affected storage to another PC and deleting the driver binary.

 

■ About keyboard filter drivers

Put simply, a keyboard filter driver filters keyboard input, making it possible to suppress input, replace keys with other keys, or inject additional keystrokes.

Because the filtering happens in kernel mode, user mode sees the filtered keystrokes as if that input had actually occurred.

In practice this is used, for example, to suppress specific actions such as forced termination via shortcut keys, or to implement rapid-fire key features; most things related to keyboard input are possible.

That said, we often hear that implementing a filter driver sounds difficult. For keyboard filters, however, that isn't the case.

An excellent sample program is available, and you can implement a driver that filters key input by changing just a single function.

 

The KeyboardClassServiceCallback routine

Let's jump right into the keyboard filter driver sample program.

The sample can be downloaded from the site below. Extract the downloaded zip file to a convenient location.

 

Keyboard Input WDF Filter Driver (Kbfiltr)

<http://code.msdn.microsoft.com/windowshardware/Kbfiltr-WDF-Version-685ff5c4>

 

Once extracted, take a look at the source code.

Around line 766 of "<extracted folder>\C++\sys\kbfiltr.c" you should find a function named KbFilter_ServiceCallback.

Remarkably, changing just this function is enough to implement basic filtering.

The function's type is defined as the KeyboardClassServiceCallback routine: when the Kbdclass driver sends the filter driver the control code IOCTL_INTERNAL_KEYBOARD_CONNECT, the keyboard filter driver registers a function of this type as a callback.

I'll skip the details here; see the documentation below.

 

◇ How the KeyboardClassServiceCallback function is registered

IOCTL_INTERNAL_KEYBOARD_CONNECT control code

<http://msdn.microsoft.com/en-us/library/windows/hardware/ff541273(v=vs.85).aspx>

 

◇ About the KeyboardClassServiceCallback routine

KeyboardClassServiceCallback Routine

<http://msdn.microsoft.com/en-us/library/windows/hardware/ff542274(v=vs.85).aspx>

 

◇ About the Kbdclass driver

Kbdclass Driver Reference

<http://msdn.microsoft.com/en-us/library/windows/hardware/ff542278(v=vs.85).aspx>

 

 

Implementing the KeyboardClassServiceCallback routine

Let's move on to implementing the function, but first let's look at its interface.

KeyboardClassServiceCallback (
    IN PDEVICE_OBJECT  DeviceObject,
    IN PKEYBOARD_INPUT_DATA InputDataStart,
    IN PKEYBOARD_INPUT_DATA InputDataEnd,
    IN OUT PULONG InputDataConsumed)

 

InputDataStart - pointer to the first key input data record
InputDataEnd - pointer one past the last key input data record
InputDataConsumed - receives the number of input records consumed

Briefly, the key input records are stored from InputDataStart up to InputDataEnd. Normally this data is passed up to the next-higher driver as-is, but by modifying it inside this function you can filter it. First, let's look at the implementation with nothing changed. You can see that it simply passes the arguments it received straight to the upper driver.

 

KbFilter_ServiceCallback(
    IN PDEVICE_OBJECT  DeviceObject,
    IN PKEYBOARD_INPUT_DATA InputDataStart,
    IN PKEYBOARD_INPUT_DATA InputDataEnd,
    IN OUT PULONG InputDataConsumed)
{
    PDEVICE_EXTENSION   devExt;
    WDFDEVICE   hDevice;

    hDevice = WdfWdmDeviceGetWdfDeviceHandle(DeviceObject);

    devExt = FilterGetData(hDevice);

    (*(PSERVICE_CALLBACK_ROUTINE)(ULONG_PTR) devExt->UpperConnectData.ClassService)(
        devExt->UpperConnectData.ClassDeviceObject,
        InputDataStart,
        InputDataEnd,
        InputDataConsumed);
}

 

Now add an implementation like the following:

 

    …
    devExt = FilterGetData(hDevice);

    InputData = InputDataStart;
    while (InputData != InputDataEnd) {
        if (0 == (InputData->Flags & KEY_BREAK)) {
            // Identify the pressed key from its make code, the keyboard's key-down signal
            switch (InputData->MakeCode) {
            case 0x03:   // When the 2 key is pressed, report it as the 3 key instead
                InputData->MakeCode = 0x04;
                break;
            default:
                break;
            }
        }
        ++InputData;
    }

    (*(PSERVICE_CALLBACK_ROUTINE)(ULONG_PTR) devExt->UpperConnectData.ClassService)(
    …

This scans the records from InputDataStart to InputDataEnd for presses of the 2 key and, when one is found, converts it into a press of the 3 key. The scan code of the key that was pressed is stored in InputData->MakeCode, so changing that value is all it takes.

 

Key scan codes

<http://msdn.microsoft.com/en-us/library/aa299374(v=vs.60).aspx>

 

With just this much code we've implemented a key-remapping filter. Next, let's make a key press disappear as if it never happened.

    …
    devExt = FilterGetData(hDevice);

    InputData = InputDataStart;

    while (InputData != InputDataEnd) {
        if (0 == (InputData->Flags & KEY_BREAK)) {
            switch (InputData->MakeCode) {
            case 0x06:    // If the 5 key was pressed, set a flag to discard the input
                bCancel = TRUE;
                break;
            default:
                break;
            }
        }
        ++InputData;
    }

    // If the discard flag is set, consume the key input without passing it to the upper driver
    if (bCancel) {
        *InputDataConsumed = InputDataEnd - InputDataStart;
        return;
    }

    (*(PSERVICE_CALLBACK_ROUTINE)(ULONG_PTR)devExt->UpperConnectData.ClassService)(
    …

This one is simple too. Because the keystroke is never delivered to the upper driver when the 5 key is pressed, a user-mode program sees no input at all. Finally, let's make one press count as two.

 

    …
    devExt = FilterGetData(hDevice);

    InputData = InputDataStart;

    while (InputData != InputDataEnd) {
        if (0 == (InputData->Flags & KEY_BREAK)) {
            switch (InputData->MakeCode) {
            case 0x07:
                // If the 6 key was pressed, pretend it was pressed one more time
                bInsert = TRUE;
                break;
            default:
                break;
            }
        }
        ++InputData;
    }

    if (bInsert) {
        (*(PSERVICE_CALLBACK_ROUTINE)(ULONG_PTR)devExt->UpperConnectData.ClassService)(
            devExt->UpperConnectData.ClassDeviceObject,
            InputDataStart,
            InputDataEnd,
            InputDataConsumed);
    }

    (*(PSERVICE_CALLBACK_ROUTINE)(ULONG_PTR)devExt->UpperConnectData.ClassService)(
    …

 

This time, conversely, by delivering the same key input to the upper driver twice, we make it appear as though the key was pressed twice.

 

After adding an implementation like the above, build and install the driver following the instructions in the "description.html" file in the extracted folder, and you'll feel the power of a keyboard filter right away. Even just turning the 2 key into the 3 key can be fatal for someone whose password contains a 2, so please be very careful when installing it! Note that Microsoft does not guarantee the behavior of the sample code above; before using it, adapt it to your own system and test it thoroughly.

 

That's all for this article. This time I discussed keyboard filter drivers from an implementation standpoint; I hope to cover deeper topics another time, such as how filter drivers actually operate within the Windows OS.

I hope to see you all again next month. Until then!

UnityVS (Visual Studio Tools for Unity) Free Download

MSDN Blogs - 10 hours 38 min ago

Many Unity developers today write their code in the built-in MonoDevelop, but plenty of developers are more at home in Visual Studio. UnityVS was born for exactly that: with UnityVS, you can use Visual Studio as your editing environment for Unity.

In July 2014, Microsoft announced its acquisition of SyntaxTree, the company behind UnityVS, and at the same time announced that after further refinement the tool would be free to download and renamed Visual Studio Tools for Unity. The current version is v1.9; the key updates are:

  • Faster debugger. Attaching and detaching the debugger as well as expanding local variables is now faster.
  • Faster startup. Opening VSTU projects is now faster.
  • Better handling of C# constructs. The local variables window is now properly populated when debugging iterators or when variables are accessed inside closures.
  • Start your game and your debugging session in one click. This feature is one of our most-requested: you can now attach the debugger and start the game by simply changing the debug target. This is only available in Visual Studio 2012 and 2013.

Three editions of the Visual Studio add-in are currently available for developers to download for free:

For more details, see: http://blogs.msdn.com/b/visualstudio/archive/2014/07/29/visual-studio-tools-for-unity-1-9.aspx

 

We have a mural painted on our wall

MSDN Blogs - Wed, 07/30/2014 - 20:21

If you haven’t noticed the various posts on my Facebook feed, we now have a mural on our wall next to the dining room. It’s a picture of the Charles St Bridge in Prague in the Czech Republic.


I really liked the city of Prague – I was intrigued by its multiple types of architecture: Renaissance, Gothic, and Baroque. In the picture above, you can clearly make out Gothic and Baroque.

What’s the difference? Well, Baroque derives from the French term “trompe l’oeil,” which means to “trick the eye.” That style is not reflected above, although it is everywhere in Prague. Another element of the Baroque style, though, is the ice-cream-cone-style roofs on some of the buildings. If you look into the distance you’ll see them.

Gothic, by contrast, is characterized by its long spires and ash-colored roofs. That’s the central focus of the bridge and even the clock tower on the right. I like that style of building.

But I also liked the bridge especially for its religious significance. The Charles St bridge has carvings of Christian saints all along the side of it, and Gothic churches have a history behind them in that their layout is intended to tell the gospel story; it was how it was told to pre-literate societies. For example, the churches are laid out in the shape of a cross, they are oriented a particular direction, and the spires signify being close to God. The Czech Republic has one of the highest rates of atheism in Europe but it was not always this way, as demonstrated by its architecture and carvings.

I’ve wanted a mural on the wall for a couple of years now. I had several ideas in mind but ultimately settled on Prague after I bought a painting on the street but couldn’t find a frame that fit for it. Rather than spending $100 on a frame (no exaggeration), I decided to spend money on the wall.

I don’t regret it at all.

* * * * *

You may be wondering what this has to do with cyber security? Well, in June 2013, MAAWG had a session in Vienna, Austria. After Vienna, my wife and I went to the Czech Republic for a week. We spent time in Cesky Krumlov (highly recommended) and the rest in Prague.

While in the Czech Republic, I started drinking beer for the first time in my life. In all my previous years I didn’t drink it; I had tried it a few times but strongly disliked the taste. But in the Czech Republic I tried it and it was amazing! It was Pilsner Urquell, and that’s the drink that got me into trying various types of beer. Pilsner Urquell doesn’t taste the same in the US as in Europe, but I’m fortunate enough to live in a part of the country where they brew pretty good beer locally. It turns out I didn’t dislike beer; I had only ever tried stuff that wasn’t very good.

So, this post appears on this blog because if it weren’t for MAAWG, I would not have this mural on my wall, nor would I have ever started drinking beer.

That’s a true story.

How to Integrate Cortana with a Windows Phone 8.1 App (Voice Command - Natural Language Recognition)

MSDN Blogs - Wed, 07/30/2014 - 19:06

With the release of Windows Phone 8.1 GDR1 and the Chinese version of Cortana, many users and developers are no doubt having fun teasing Windows Phone's personal voice assistant. (During the World Cup, I personally verified that Cortana predicted the Germany-Argentina match quite accurately. But I digress.) As developers, though, how do we integrate Cortana into our apps? Today I'll take a little time to show you how to use voice commands to integrate with a Windows Phone 8.1 app.

First, two terms need to be clear: Voice command and the Voice Command Definition (VCD) file. If you've done Windows Phone 8.0 development, you'll be familiar with both: by registering a VCD file, a Windows Phone 8.0 app can implement voice command functionality. If you're not familiar with them, please first read my earlier article, Windows Phone 8 语音 - Speech for Windows Phone 8 (I won't repeat the 8.0 voice command material here), for a quick look at the groundwork for developing speech features on Windows Phone.

 

In short, the voice command feature in Windows Phone 8.0 was fairly simple: it mainly matched a handful of commands predefined in the VCD file via the Voice Command Name.

In Windows Phone 8.1 apps, Cortana provides much stronger natural language recognition.

The grammars in the VCD file have been extended accordingly, and the two OS versions are distinguished by schema:

http://schemas.microsoft.com/voicecommands/1.0 for Windows Phone 8.0 voice commands, and Cortana compatible.

http://schemas.microsoft.com/voicecommands/1.1 only for Windows Phone 8.1 Cortana.

For details, see:

Windows Phone 8.0: Voice command element and attribute reference for Windows Phone 8
Windows Phone 8.1: Voice command elements and attributes

Comparing the VCD file attributes supported in 8.0 and 8.1, the most important difference is that the 8.1 VCD supports the PhraseTopic element.

Rather than describe this in the abstract, let me show you the code:

Here I want to call out the ListenFor and PhraseTopic elements in particular. Note that the braces {dictatedSearchTerms} in a ListenFor element refer to the Label attribute of a PhraseTopic element. You can think of a PhraseTopic as matching arbitrary content, all of which Cortana ultimately passes back into our app.

<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.1">
  <!-- The CommandSet Name is used to programmatically access the CommandSet -->
  <CommandSet xml:lang="zh-CN" Name="chineseCommands">
    <!-- The CommandPrefix provides an alternative to your full app name for invocation -->
    <CommandPrefix> 微软 文档 </CommandPrefix>
    <!-- The CommandSet Example appears in the global help alongside your app name -->
    <Example> 搜索 构造 函数 </Example>

    <Command Name="MSDNSearch">
      <!-- The Command example appears in the drill-down help page for your app -->
      <Example> 搜索 构造 函数 </Example>
      <!-- ListenFor elements provide ways to say the command, including references to
           {PhraseLists} and {PhraseTopics} as well as [optional] words -->
      <ListenFor> 查找 {dictatedSearchTerms} </ListenFor>
      <ListenFor> 搜 {dictatedSearchTerms} </ListenFor>
      <ListenFor> 搜索 {dictatedSearchTerms} </ListenFor>
      <ListenFor> 查 {dictatedSearchTerms} </ListenFor>
      <ListenFor> 找 {dictatedSearchTerms} </ListenFor>
      <!-- Feedback provides the displayed and spoken text when your command is triggered -->
      <Feedback> 查找 MSDN... </Feedback>
      <!-- Navigate specifies the desired page or invocation destination for the Command -->
      <Navigate Target="MainPage.xaml" />
    </Command>

    <Command Name="MSDNNaturalLanguage">
      <Example> 我 想 去 Windows 手机 开发 中心 </Example>
      <ListenFor> {naturalLanguage} </ListenFor>
      <Feedback> 启动 MSDN... </Feedback>
      <Navigate Target="MainPage.xaml" />
    </Command>

    <PhraseTopic Label="dictatedSearchTerms" Scenario="Search">
      <Subject> MSDN </Subject>
    </PhraseTopic>
    <PhraseTopic Label="naturalLanguage" Scenario="Natural Language">
      <Subject> MSDN </Subject>
    </PhraseTopic>
  </CommandSet>
</VoiceCommands>

 

Now that we've covered the new VCD file, let me point out that a Windows Phone 8.0 app can also be made compatible with Cortana. In an 8.0 app, we only need to check the operating system version and register the appropriate VCD file with the system.

First, include both versions of the VCD file in the project.

Then, when registering the VCD file, branch on the system version.

/// <summary>
/// Installs the Voice Command Definition (VCD) file associated with the application.
/// Based on OS version, installs a separate document based on version 1.0 of the schema or version 1.1.
/// </summary>
private async void InstallVoiceCommands()
{
    const string wp80vcdPath = "ms-appx:///VoiceCommandDefinition_8.0.xml";
    const string wp81vcdPath = "ms-appx:///VoiceCommandDefinition_8.1.xml";
    const string chineseWp80vcdPath = "ms-appx:///ChineseVoiceCommandDefinition_8.0.xml";
    const string chineseWp81vcdPath = "ms-appx:///ChineseVoiceCommandDefinition_8.1.xml";

    try
    {
        bool using81orAbove = ((Environment.OSVersion.Version.Major >= 8) &&
                               (Environment.OSVersion.Version.Minor >= 10));

        string vcdPath = using81orAbove ? wp81vcdPath : wp80vcdPath;
        if (InstalledSpeechRecognizers.Default.Language.Equals("zh-CN", StringComparison.InvariantCultureIgnoreCase))
        {
            vcdPath = using81orAbove ? chineseWp81vcdPath : chineseWp80vcdPath;
        }

        Uri vcdUri = new Uri(vcdPath);
        await VoiceCommandService.InstallCommandSetsFromFileAsync(vcdUri);
    }
    catch (Exception vcdEx)
    {
        Dispatcher.BeginInvoke(() =>
        {
            MessageBox.Show(String.Format(
                AppResources.VoiceCommandInstallErrorTemplate,
                vcdEx.HResult,
                vcdEx.Message));
        });
    }
}

Finally, here is how the app retrieves the user's voice input. Note that this, too, is keyed off the Label name of the PhraseTopic element.

/// <summary>
/// Takes specific action for a retrieved VoiceCommand name.
/// </summary>
/// <param name="voiceCommandName"> the command name triggered to activate the application </param>
private void HandleVoiceCommand(string voiceCommandName)
{
    // Voice Commands can be typed into Cortana; when this happens, "voiceCommandMode" is populated with the
    // "textInput" value. In these cases, we'll want to behave a little differently by not speaking back.
    bool typedVoiceCommand = (NavigationContext.QueryString.ContainsKey("commandMode")
        && (NavigationContext.QueryString["commandMode"] == "text"));

    string phraseTopicContents = null;
    bool doSearch = false;

    switch (voiceCommandName)
    {
        case "MSDNNaturalLanguage":
            if (NavigationContext.QueryString.TryGetValue("naturalLanguage", out phraseTopicContents)
                && !String.IsNullOrEmpty(phraseTopicContents))
            {
                // We'll try to process the input as a natural language query; if we're successful, we won't
                // fall back into searching, since the query will have already been handled.
                doSearch = TryHandleNlQuery(phraseTopicContents, typedVoiceCommand);
            }
            break;
        case "MSDNSearch":
            // The user explicitly asked to search, so we'll attempt to retrieve the query.
            NavigationContext.QueryString.TryGetValue("dictatedSearchTerms", out phraseTopicContents);
            doSearch = true;
            break;
    }

    if (doSearch)
    {
        HandleSearchQuery(phraseTopicContents, typedVoiceCommand);
    }
}

The whole process is as simple as that. Don't just sit there: go add Cortana support to your app and let your friends have some fun with it.

More references:

Quickstart: Voice commands (XAML)

Speech for Windows Phone 8

快速入门:语音命令 (XAML)

Source code download:

MSDN Voice Search for Windows Phone 8.1

Issues with TechNet Wiki - Investigating

MSDN Blogs - Wed, 07/30/2014 - 18:41

Initial Update: 31 July 2014, 01:24 AM UTC

The TechNet Wiki page is currently experiencing issues. Users will see “Error: Not Found: Resource Not Found” when browsing the articles in the TechNet Wiki page.

Dev ops are engaged and actively investigating to mitigate the issue.

We apologize for the inconvenience and appreciate your patience.

-MSDN Service Delivery Team

 

Lync Server 2013: Majority Voter Behavior and Changes in the Fabric Pool Manager

MSDN Blogs - Wed, 07/30/2014 - 17:16

This is the Japan Lync Support team.

 

Lync Server 2013 has the following issue related to the fabric pool manager.

 

[Symptom]

When the rtcshared database fails over to the mirror of the back-end server and one of the two front-end servers subsequently stops, the pool loses quorum and the service shuts down.

[Cause]

Similar cases have been reported as a known product issue: once the rtcshared database fails over to the mirror of the back-end server, the WinFabricVote table is no longer updated. As a result, when one of the two front-end servers stops, quorum is lost and the service terminates.

This behavior is a product defect.

[Resolution]

There is no resolution at this time.

The Japan support team has sent feedback to the development team that this issue should be fixed.

[Workaround]

There is no workaround at this time.

[Status]

Microsoft has confirmed this to be a problem in the Lync Server 2013 product.

 

We apologize for the inconvenience caused by this product issue, and we appreciate your understanding.

Japan Lync Support Team

Hola Windows Phone: Development Platforms

MSDN Blogs - Wed, 07/30/2014 - 15:00

Building apps for Windows Phone is a fantastic experience, even if the current market situation doesn't back that opinion up. It's an opportunity to leverage existing knowledge of Microsoft technologies, or to learn programming in general with a very gentle learning curve.

With any new development platform, the easiest way to learn it and build a solid foundation is to find good examples and exercises that take you by the hand and show you the way. This article introduces the basic concepts of the platforms Windows Phone offers, and later articles will present implementation exercises for each of them. By the end, you'll have a clear idea of which one best fits what you need to build.

With the arrival of Windows Phone 8.1, many new features came to life in the operating system, but one that particularly benefits developers is the availability of three development platforms. You can choose among Silverlight, Windows Runtime XAML, and JavaScript to build applications.

Silverlight

The initial development platform Windows Phone introduced was based on Silverlight, a technology that emerged from Windows Presentation Foundation (WPF) as an alternative for building web applications. It provided an execution model based on active components in the browser, bringing the experience close to that of locally run applications. As mobile development trends grew, the use of active components came to be questioned in favor of HTML5. That was when this platform found its path and evolved inside Windows Phone 7 as a new way of building fluid app experiences, distancing the operating system from earlier versions of Windows Mobile.

Building a Silverlight application consists of describing the user interface elements with a definition language called XAML, and creating the classes and methods that give them functionality in code-behind files.

XAML uses tags just as XML or HTML does, which makes it familiar and eases the transition for web developers. A tag defines an instance of an object, which may or may not have a visual representation. Each tag can carry a list of attributes that define the instance's properties. These attributes can be expressed inline on the tag, or in tag blocks contained by the main instance.

<!-- inline -->
<Button Content="Hola WP" />

<!-- as a block -->
<Button>
  <Button.Content>
    <Image Source="/images/win.png" />
  </Button.Content>
</Button>

The code-behind files can be written in C# or Visual Basic in most cases. They can also be combined with Visual C++ to create so-called hybrid applications that interact with Direct3D and other gaming platforms that need the performance advantages only native code can deliver. The classes in the code-behind inherit from a base class called PhoneApplicationPage, which lives in the Microsoft.Phone.Controls namespace. This base class enables fundamental services (navigation, orientation changes, data binding, and so on) in every page that makes up the app's views on the phone. Traditional classes that perform specialized tasks can, however, exist independently of the UI code contained in the XAML files.

Just as with Silverlight apps that run in a browser, many of the same programming techniques apply when building apps for the phone. Model-View-ViewModel (MVVM) is one of them. Be careful, though, not to add too many processing layers to the app, since that can hurt performance on low-cost devices.

Likewise, as with Silverlight apps built for the web, a phone application is contained in a package with a .xap extension. This is a compressed file containing the app's main component (.dll) along with its supporting files, metadata, and any third-party components referenced by the project. When deployed, the application runs as a dependent process hosted, in a restricted fashion, in an execution environment provided by an engine called TaskHost.exe.

Windows Phone 8.1 introduced a new version of Silverlight with broader access to the Windows Runtime (WinRT) API libraries. Apps built on Silverlight 8.1 run in a context, or process, called AgHost.exe, which allows extended access to a set of operating system classes and services.

Windows Runtime XAML

Windows Runtime XAML is the development platform that arrived with Windows 8 for building modern apps. With Microsoft's ongoing effort to converge the development platforms across Windows versions, it has now reached Windows Phone 8.1, enabling universal apps that run on both Windows and Windows Phone while sharing a significant amount of code (around 90%), with some device-specific changes.

As the name suggests, this platform is also based on XAML, and at a casual glance its source code is hard to tell apart from Silverlight's. One notable difference is that the base class used by Windows Runtime code-behind classes is called Page and lives in the Windows.UI.Xaml.Controls namespace. In general, most Silverlight classes live under the Microsoft.Phone namespace, whereas Windows Runtime classes live within the Windows.* namespace hierarchy, reflecting the code shared between the two operating systems.

The convergence of the development platforms is not yet complete. Right now there is functionality achievable only in Silverlight and not in Windows Runtime XAML, and vice versa, though the overlap will keep growing. For new projects, the initial recommendation is to build them with WinRT XAML; if you already have an existing Silverlight app, weigh your needs against the available functionality to gauge the impact of transitioning to the new platform.

On the execution side, the differences are quite significant. Apps built with WinRT XAML are packaged as .appx files; like Silverlight's, these are compressed files containing the app's main module (.exe) with its supporting files, metadata, and any third-party components used by the project. The main module is an independent executable, but it still runs in a controlled environment that protects the operating system's stability from malicious code or application errors.

As with Silverlight, the programming languages available for building apps are C# and Visual Basic. Unlike Silverlight, you can now also use Visual C++ on its own to build native-code-plus-XAML apps. This is a tremendous advantage for developers of games and advanced apps that need high-performance components such as photo and audio processing, where C++ is a better fit.

JavaScript

This platform is also known as WWA (Windows Web Application) or WinJS. Building Windows Runtime apps with JavaScript is, for practical purposes, almost the same as building a website. An app consists of html, css, and js files. The platform supports the vast majority of HTML5 and CSS level 3. On top of that, running in the Windows Runtime context adds capabilities: better support for touch gestures, more control over UI layout, access to operating system services and networking capabilities, additional controls, and so on.

Just as a web page is interpreted by the browser, WWA apps run in a process called WWAHost.exe, which carries some restrictions compared with a web page: you cannot open pop-ups or the dialog windows typical of the alert function, you cannot resize the window, and security measures prevent code injection so that malicious code cannot run. Think of these differences the way you would think of loading the same page in a different browser: each browser has its own level of feature support.

WWA applications are packaged into .appx files, just like Windows Runtime XAML applications. However, the contents of these packages are completely different: in this case the package contains the project's source files plus some additional configuration files. For this reason, the execution context prevents code from external sources from being loaded and executed in the application at runtime.

Later, individual articles will present examples of each of these platforms to round out this introduction and give you an initial idea of what developing applications for Windows Phone is like.

Free webinar "Process Industries: Containerized Packaging"

MSDN Blogs - Wed, 07/30/2014 - 13:09

Good day. You are cordially invited to the free webinar "Process Industries: Containerized Packaging", which will be held tomorrow at 12:00 pm (Mexico City time). You can join using the following link:

 

https://training.partner.microsoft.com/learning/app/management/LMS_ActDetails.aspx?UserMode=0&ActivityId=876821

It is an overview of the functionality related to Packaging, Packing Orders, and consolidated batch orders available in the Process Manufacturing and Logistics vertical.

Regards

Hiding Activities Tab from Notes section in CRM form

MSDN Blogs - Wed, 07/30/2014 - 12:21

This article shows you how to hide the Activities tab in the Notes section on an entity form, as shown below:

Notes section

Attach the following JavaScript to the form's onload event:

[code]
onLoad: function () {
    // Find the notes control in the form header.
    var ctrlElement = document.getElementById("header_notescontrol");
    if (ctrlElement == null || ctrlElement.children == null) {
        return;
    }
    for (var ele = 0; ele < ctrlElement.children.length; ele++) {
        var ctrl = ctrlElement.children[ele];
        if (ctrl.title == "ACTIVITIES") {
            // Hide the Activities tab.
            ctrl.style.display = "none";
            // Select a neighboring tab so the section still shows content.
            if (ele + 1 < ctrlElement.children.length) {
                ctrlElement.children[ele + 1].click();
            } else if (ele - 1 >= 0) {
                ctrlElement.children[ele - 1].click();
            }
            return;
        }
    }
}
[/code]

Hope this helps!

How to get the EDMX metadata from a Code First model (and why you need it)

MSDN Blogs - Wed, 07/30/2014 - 12:02

Entity Framework provides a very good experience with its Code First development model. In it you can define classes and use them as POCO entities (Plain Old CLR Objects). For example, consider the following model:

    public class Blog
    {
        public int BlogId { get; set; }
        public string Name { get; set; }
        public virtual ICollection<Post> Posts { get; set; }
    }

    public class Post
    {
        public int PostId { get; set; }
        [MaxLength(200)]
        public string Title { get; set; }
        public int LikeCount { get; set; }
        public Blog Blog { get; set; }
    }

    public class BlogContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }
        public DbSet<Post> Posts { get; set; }
    }

In terms of usability it's a great way to go, since the code is self-explanatory and can be highly customized via the Fluent API. Now, you might be asking yourself: what happened to the EDMX file? Well, you don't really see it, but it still exists.

The way Entity Framework Code First works is by analyzing your assemblies for all the types related to your model and enumerating them into a set of "discovered types". Once EF knows about these types, it explores them looking for special attributes such as [Key], [MaxLength] and [Index] to learn more about the model. Finally, it applies a set of conventions that allow it to discover information that's not explicitly set. One such convention is the discovery of keys, by which the field PostId is discovered as the key of the Post entity. There are a number of these conventions and it's important that you understand them before using Code First.

The resulting fully-loaded model description is stored in memory using the EDM format. Effectively, Code First reverse-engineers an EDMX out of the POCOs, attributes and fluent API calls. From here on, Entity Framework doesn’t care whether you created the model Code First or Model First, it behaves the same. It goes on and generates the views, validates the model and sets up all the metadata to be ready to serve its purpose.

As you might imagine, the process of reverse engineering the EDMX out of the Code First model is costly. It's paid only once, at the startup of your first context, but it can represent a performance annoyance for your application. There's a way to speed up your start up though: use an EDMX in your project.

For this, you’ll want to obtain the EDMX data from your Code First context by calling:

using (var writer = new XmlTextWriter(new StreamWriter("MyModel.edmx")))
{
    EdmxWriter.WriteEdmx(context, writer);
}

Call it with your context instance and you'll get an EDMX file with all the data you require. Add the EDMX to your project and modify your connection string accordingly to make use of the EDMX data.

Large models will benefit the most. I’ve noticed reductions in the hundreds of milliseconds just for the EDMX generation part of the code.

How to Give a Great Demo: Imagine Cup 2014 - MSP Summit Day 1

MSDN Blogs - Wed, 07/30/2014 - 11:01

Hello! This is Matsubara from Microsoft Student Partners (MSP)!


I'm attending the MSP Summit, held in Seattle alongside Imagine Cup 2014.

 

On the first day there was a demo session by Steve Clayton, Microsoft's Story Teller.
Here's what we learned there!


[10 Tips for a Great Demo] Demo Session

 

#1 Tell a story
 Be clear about what you want the audience to take away and which message you want to convey.

#2 Have a backup plan
 Prepare a backup in case the demo fails.
 Have an alternate demo and equipment ready.

#3 Bigger is better
 Whatever you show, show it big!

#4 Rehearse - timing
 Prepare with your time slot in mind.
 Be aware of the allotted time and adjust the demo to fit it. Just changing your speaking speed isn't enough.

#5 Rehearse - words
 Write your words down on paper once and practice them,
 so that the right expression comes out in the right place.

#6 Be wary of dead moments
 Demos sometimes fail. Fight with it for at most one minute; if it still won't work, move on to the next thing.

#7 Have a great opening
 Treat the opening with care.
 Draw people into the presentation and get them interested. To do this, it's important to understand your audience.

#8 Have a great close
 The ending of the presentation matters too!
 Since people can only remember about three things at once, close by summarizing the demo's key message in three concise points.

#9 Have fun
 If the presenter is having fun giving the demo, the audience enjoys it too.
 Presenting with energy and humor also wins the audience over to your side. When they're on your side, they'll cheer you on even if the demo goes wrong!

#10 If in doubt...
 If you've prepared #1 through #9 and are still nervous...

https://twitter.com/stevecla/status/489449883313512448

 Wear orange shoes!!
 Even if the demo fails, it will give people something to talk about.

 

We'll keep posting about the MSP Summit. Stay tuned!

How to use CDO 1.2.1 to connect to Exchange 2013 using a dynamic profile

MSDN Blogs - Wed, 07/30/2014 - 10:05

NOTE: This article only applies to Exchange's MAPI \ CDO download.  It doesn't apply to using CDO 1.2.1 with an Outlook 2007 client.

I was discussing an issue recently with a customer and I asked him to connect to the Exchange server using CDO 1.2.1.  Then I realized that I had never tried that myself.  To that end, I decided to set out to have CDO 1.2.1 create a dynamic profile and connect to Exchange 2013.

First, some things about dynamic profiles. CDO 1.2.1 has a concept of a dynamic profile.  This means that a profile is created on the fly by passing the server name and mailbox name as the last parameter to the Session::Logon() method.

This is different than using a static profile that you configured outside of CDO 1.2.1.

Session::Logon()

http://msdn.microsoft.com/en-us/library/ms526377(v=exchg.10).aspx

One gotcha that I ran into was that the server name and mailbox name need to be delimited by a line feed (character 10). In Visual Basic 6 the call would look like this:

objSession.Logon , , , True, , True, _
    "e9b5d6f1-89f1-4e02-93a1-7b3762cf2c59@contoso.com" & Chr(10) & "admin"

Of course, in Exchange 2013 the server name is the personalized server name of the target mailbox.  The mailbox name is just the alias of the user.  That's the easy part.  The hard part is configuring the registry to make this all work.  The RPCHttpProxyMap registry value is needed to get the dynamic profile created.  I discuss configuring this value in my omniprof article.  The other registry value that needs to be in place is the one which instructs CDO 1.2.1 to proceed even if Public Folders don't exist in the organization.  This setting is discussed in this blog post article by a former member of my team.  Once those are in place it should work.

The reason why these values are needed is that CDO 1.2.1 needs to know how to properly connect to Exchange.  Telling CDO 1.2.1 to "Ignore No PF" instructs it to pass the CONNECT_IGNORE_NO_PF flag when creating the underlying dynamic profile.  Creating the RPCHttpProxyMap registry value tells the underlying MAPI subsystem what RPC Proxy Server to connect to, what authentication to use, and what to do if a non-trusted certificate is encountered.

The two scenarios that I couldn't get working are targeting an Office 365 mailbox or an On-Premises mailbox where the RPC Proxy Server has been configured to accept Basic Authentication.  This is because the username and password must be configured on the profile for Exchange's MAPI to use it. You'll need to use a static profile for those scenarios.

Lastly, I wanted to point out that CDO 1.2.1 is not the recommended API for connecting to Exchange Server 2013.  However, I understand that some customers have existing applications that they may need to get working for Exchange 2013 before they upgrade. If you fall into this category this article may help you until you can migrate your application to a better API.

There’s no business like the healthcare business, like no business I know

MSDN Blogs - Wed, 07/30/2014 - 10:03
Irving Berlin eat your heart out. There’s no business like the healthcare business or so it seems from a recently published info-graphic in the Wall Street Journal. Where are the jobs in America? You guessed it, healthcare. But is that a healthy thing for the economy, or a leading indicator of an insidious illness? First of all, let me apologize to every clinician reading this. As a doctor myself, I know there is nothing more distasteful to a physician, nurse, or anyone else who works in healthcare...(read more)

6 new updates in Power Query - July 2014

MSDN Blogs - Wed, 07/30/2014 - 09:00
In this post

Download Power Query Update

More flexible Load Options

Query Groups

Improved Error Debugging Experience

Additional Query Editor transformations

Options dialog

Update notifications

 

The July 2014 Update for Power Query is now available. You can download it from this page.

This update is packed with lots of new features, so please take a look at the following video and the rest of this blog post. We hope that you like them!

 

 

Here is a summary of the new features included in this release: 

  • More flexible Load Options for your queries.
  • Query Groups.
  • Improved Error Debugging Experience.
  • Additional Query Editor transformations:
    • Replace Errors within a column.
    • UX for defining Math operations based on a single column.
  • Options dialog: Restore Defaults & Tips for Cache Management
  • Update notifications: 3 notifications per update (max.), once per day at most.
More flexible Load Options

One of the most common areas of feedback about Power Query in the past has been the desire for additional options to control how and where to load queries within your workbook. In this update, we're introducing a new Load Options dialog to customize how to load your queries. In addition to controlling whether a query should be loaded to the worksheet or Data Model, we now offer you the option to load to an existing worksheet instead of always loading to a new worksheet. It is also clearer now how to disable the load of a query (or to "only create connection" instead of downloading the results), which until now was only possible by disabling load to worksheet and load to Data Model.

In addition to new options for how to load your queries, another area of feedback has been the need to have access to these options in all places from which users can load queries to their workbook. To address that, the following entry points have been added in this update…

•From Search results:

•From the Navigator pane:

•From the Query Editor:

•From the Workbook Queries pane and the contextual Query ribbon tab (to reconfigure the Load Options of an existing query without having to go back to the Query Editor):

Query Groups

Query Groups is a new concept introduced in this update that will help users better organize their queries within a given workbook, as well as perform bulk operations on all queries within a group (such as Refresh). Up until now, Power Query offered only a few capabilities in order to organize queries in the Workbook Queries pane, primarily moving queries up and down in the list.

With this update, users can now select multiple queries using (CTRL + Click) and move them into their custom groups. Users can define as many groups as they want in the workbook, as well as groups within groups to create more advanced organization layers. This enables them to organize and classify the queries better within their workbook. In addition, users can leverage the context menu for each group entry to apply bulk operations to all queries within that group.

Improved Error Debugging Experience

In previous Power Query updates, we introduced some transformation capabilities within the Query Editor to "Remove Rows With Errors" and to "Keep Rows With Errors" within your queries. These features were helpful in order to discard all error rows or to narrow down to just the rows with errors in the final result, but didn't quite help users see these errors in context, that is, understand which rows in the result introduced them. In addition, after loading queries into the workbook, users would get an indication in the Workbook Queries pane of the total number of rows and the number of errors, but weren't able to easily preview those errors from this pane.

With this update, we've turned the "Number of rows with Errors" indicator into a hyperlink which brings up a preview of the rows with errors that users can explore and interact with. This preview also includes the row index to better understand where these errors appear.

 

Replace Errors

There are cases in which the way to resolve errors within your data is not to ignore the rows with errors but rather to replace the error values with a default value for the column. We're introducing the ability to do this: a new "Replace Errors" operation is available in the Transform tab inside the Query Editor. This option brings up a dialog asking users for the value to replace errors with for the selected columns.

UX for defining Math operations based on a single column

Very frequently, users find the need to add new columns to their queries that reference a different column and apply a Math operation. In previous updates, we introduced the "Add Column" tab which provides several operations that will create new columns based on one or multiple existing columns. One limitation until now was that, for Standard Math operations (Add, Multiply, Subtract, Divide, etc.), users were only allowed to select two columns, representing the two operands.

In this update, we've added the ability to select just one column and use these Math operations. Users will be asked to provide the second operand in a dialog, and the result will be a new column added to the query with the selected Math calculation. This is available for all operations under the Standard dropdown menu in the "From Number" group, part of the "Add Column" tab.

Options Dialog: Restore Defaults & Tips for Cache Management options

If you followed our last two updates, you may already be aware that we have introduced new options for Custom Default Load Settings and Cache Management. This made the number of choices in the Options dialog grow significantly, but there wasn't an easy way to "reset" Power Query to its default behaviors. Well, now there is such a way…  : )

In addition to the new "Restore Defaults" button, we have added a few tooltips to help users better understand the Cache Management options introduced in our previous update.

Update Notifications Improvements

As you may be already aware, Power Query has an Update notification mechanism that tells users about our updates every month. This notification was displayed to the user in the system tray every time that they launched Excel and there was an update available. While this works out great for many users, we also heard from some of them that installing these updates wasn't directly possible for them and they needed to notify their system administrator to perform this update and wait for a few days or weeks. At that point, seeing the Update notification continuously displayed in the system tray every time they launched Excel would become annoying…

With this update, we've limited the number of times that a user will see the Update notification to three times per update (i.e. three times each month). In addition, we have also limited these updates to only be displayed once per day. We believe that this will establish a good balance between making users aware of our updates while also not reminding them too many times. : )

That's it for this update... We hope that you enjoy these new Power Query features. Please don't hesitate to send us a smile/frown or post something in our forums with any questions or feedback about Power Query that you may have.

Follow these links to access more resources about Power Query and Power BI:

The Data Driven Quality Mindset

MSDN Blogs - Wed, 07/30/2014 - 07:38

"Success is not delivering a feature; success is learning how to solve the customer's problem." - Mark Cook, VP of Products at Kodak

I've talked recently about the 4th wave of testing called Data Driven Quality (DDQ). I also elucidated what I believe are the technical prerequisites to achieving DDQ. Getting a fast delivery/rollback system and a telemetry system is not sufficient to achieve the data driven lifestyle. It requires a fundamentally different way of thinking. This is what I call the Data Driven Quality Mindset.

Data driven quality turns on its head much of the value system which is effective in the previous waves of software quality. The data driven quality mindset is about matching form to function. It requires the acceptance of a different risk curve. It requires a new set of metrics. It is about listening, not asserting. Data driven quality is based on embracing failure instead of fearing it. And finally, it is about impact, not shipping.

Quality is the matching of form to function. It is about jobs to be done and the suitability of an object to accomplish those jobs. Traditional testing operates from a view that quality is equivalent to correctness. Verifying correctness is a huge job. It is a combinatorial explosion of potential test cases, all of which must be run to be sure of quality. Data driven quality throws out this notion. It says that correctness is not an aspect of quality. The only thing that matters is whether the software accomplishes the task at hand in an efficient manner. This reduces the test matrix considerably. Instead of testing each possible path through the software, it becomes necessary to test only those paths a user will take. Data tells us which paths these are. The test matrix then drops from something like O(2^n) to closer to O(m), where n is the number of branches in the code and m is the number of action sequences a user will take. Data driven testers must give up the futile task of comprehensive testing in favor of focusing on the golden paths a user will take through the software. If a tree falls in the forest and no one is there to hear it, does it make a noise? Does it matter? Likewise with a bug down a path no user will follow.
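To make that scale difference concrete, here is a small sketch in JavaScript (illustrative numbers and a hypothetical session format, not any particular telemetry system): exhaustive testing grows with the branch combinations, while telemetry shows only a handful of distinct user paths.

```javascript
// Illustrative sketch: exhaustive testing grows with 2^n branch
// combinations, while data driven testing targets only the m distinct
// paths users actually take.
function exhaustiveCaseCount(branches) {
  return Math.pow(2, branches);
}

function observedPathCount(sessions) {
  // Each session is an array of user actions; count distinct sequences.
  var paths = new Set(sessions.map(function (s) { return s.join(">"); }));
  return paths.size;
}

var sessions = [
  ["open", "search", "buy"],
  ["open", "search", "buy"],
  ["open", "browse", "exit"]
];

// 30 independent branches would mean over a billion combinations...
var allCases = exhaustiveCaseCount(30);
// ...but telemetry shows users take only 2 distinct paths.
var userPaths = observedPathCount(sessions);
```

Testing the handful of observed paths well beats testing a billion combinations nobody exercises.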

Success in a data driven quality world demands a different risk curve than the old world. Big up front testing assumes that the cost to fix an issue rises exponentially the further along the process we get. Everyone has seen a chart like the following:

In the world of boxed software, this is true. Most decisions are made early in the process. Changing these decisions late is expensive. Because testing is cumulative and exhaustive, a bug fix late requires re-running a lot of tests which is also expensive. Fixing an issue after release is even more expensive. The massive regression suites have to be run and even then there is little self hosting so the risks are magnified.

Data driven quality changes the dynamics and thus changes the cost curve. This in turn changes the amount of risk appropriate to take at any given time. When a late fix is very expensive, it is imperative to find the issues early, but finding issues early is expensive. When making a fix is quick and cheap, the value in finding a fix early is not high. It is better to lazy-eval the issues. Wait until they become manifested in the real world before a fix is made. In this way, many latent issues will never need to be fixed. The cost of finding issues late may be lower because broad user testing is much cheaper than paid test engineers. It is also more comprehensive and representative of the real world.

Traditional testers refuse to ship anything without exhaustive testing up front. It is the only way to be reasonable sure the product will not have expensive issues later. Data driven quality encourages shipping with minimum viable quality and then fixing issues as they arise. This means foregoing most of the up front testing. It means giving up the security blanket of a comprehensive test pass.

Big up front testing is metrics-driven. It just uses different metrics than data driven quality. The metrics for success in traditional testing are things like pass rates, bug counts, and code coverage. None of these are important in data driven quality world. Pass rates do not indicate quality. This is potentially a whole post by itself, but for now it suffices to say that pass rates are arbitrary. Not all test cases are of equal importance. Additionally, test cases can be factored at many levels. A large number of failing unimportant cases can cause a pass rate to drop precipitously without lowering product quality. Likewise, a large number of passing unimportant cases can overwhelm a single failing important one.

Perhaps bug counts are a better metric. In fact, they are, but they are not sufficiently better. If quality is the fit of form and function, bugs that do not indicate this fit obscure the view of true quality. Latent issues can come to dominate the counts and render invisible those bugs that truly indicate user happiness. Every failing test case may cause a bug to be filed, whether it is an important indicator of the user experience or not. These in turn take up large amounts of investigation and triage time, not to mention time to fix them. In the end, fixing latent issues does not appreciably improve the experience of the end user. It is merely an onanistic exercise.

Code coverage, likewise, says little about code quality. The testing process in Windows Vista stressed high code coverage and yet the quality experienced by users suffered greatly. Code coverage can be useful to find areas that have not been probed, but coverage of an area says nothing about the quality of the code or the experience. Rather than code coverage, user path coverage is a better metric. What are the paths a user will take through the software? Do they work appropriately?

Metrics in data driven quality must reflect what users do with the software and how well they are able to accomplish those tasks. They can be as simple as a few key performance indicators (KPIs). A search engine might measure only repeat use. A storefront might measure only sales numbers. They could be finer grained. What percentage of users are using this feature? Are they getting to the end? If so, how quickly are they doing so? How many resources (memory, cpu, battery, etc.) are they using in doing so? These kind of metrics can be optimized for. Improving them appreciably improves the experience of the user and thus their engagement with the software.
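As an illustration of such metrics, here is a minimal sketch (the event names and shapes are hypothetical, not any particular telemetry pipeline) that computes a feature's usage, completion rate, and average duration from a stream of telemetry events:

```javascript
// Minimal sketch: derive simple KPIs from a list of telemetry events.
// The event shape ({ user, action, durationMs }) is hypothetical.
function computeKpis(events) {
  var started = new Set();
  var completed = new Set();
  var totalDuration = 0;

  events.forEach(function (e) {
    if (e.action === "feature_start") {
      started.add(e.user);
    } else if (e.action === "feature_complete") {
      completed.add(e.user);
      totalDuration += e.durationMs;
    }
  });

  return {
    usage: started.size, // how many users tried the feature
    completionRate: started.size ? completed.size / started.size : 0,
    avgDurationMs: completed.size ? totalDuration / completed.size : 0
  };
}

var kpis = computeKpis([
  { user: "a", action: "feature_start" },
  { user: "a", action: "feature_complete", durationMs: 1200 },
  { user: "b", action: "feature_start" }
]);
```

These are exactly the kinds of numbers that can be optimized for release over release.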

There is a term called HiPPO (highest paid person's opinion) that describes how decisions are too often made on software projects. Someone asserts that users want to have a particular feature. Someone else may disagree. Assertions are bandied about. In the end the tie is usually broken by the highest ranking person present. This applies to bug fixes as well as features. Test finds a bug and argues that it should be fixed. Dev may disagree. Assertions are exchanged. Whether the bug is ultimately fixed or not comes down to the opinion of the relevant manager. Very rarely is the correctness of the decision ever verified. Decisions are made by gut, not data.

In data driven quality, quality decisions must be made with data. Opinions and assertions do not matter. If an issue is in doubt, run an experiment. If adding a feature or fixing a bug improves the KPI, it should be accepted. If it does not, it should be rejected. If the data is not available, sufficient instrumentation should be added and an experiment designed to tease out the data. If the KPIs are correct, there can be no arguing with the results. It is no longer about the HiPPO. Even managers must concede to data.
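A sketch of what such a data-backed ship decision might look like in code (the minimum-lift check here is a simplified stand-in for a proper statistical significance test, and the group shapes are hypothetical):

```javascript
// Simplified sketch: decide whether a change improved a KPI by comparing
// conversion rates between control and treatment groups. A real experiment
// would use a proper significance test; this only checks a minimum lift.
function decideExperiment(control, treatment, minLift) {
  var controlRate = control.conversions / control.users;
  var treatmentRate = treatment.conversions / treatment.users;
  return {
    controlRate: controlRate,
    treatmentRate: treatmentRate,
    ship: (treatmentRate - controlRate) >= minLift
  };
}

var result = decideExperiment(
  { users: 1000, conversions: 100 }, // control: 10% conversion
  { users: 1000, conversions: 130 }, // treatment: 13% conversion
  0.02                               // require at least 2 points of lift
);
```

Whether the feature ships follows from the numbers, not from whoever argues loudest.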

It is important to note that the data is often counter-intuitive. Many times things that would seem obvious turn out not to work and things that seem irrelevant are important. Always run experiments and always listen to them.

Data driven quality requires taking risks. I covered this in my post on Try.Fail.Learn.Improve. Data driven quality is about being agile. About responding to events as they happen. In theory, reality and theory are the same. In reality, they are different. Because of this, it is important to take an empiricist view. Try things. See what works. Follow the bread crumbs wherever they lead. Data driven quality provides tools for experimentation. Use them. Embrace them.

Management must support this effort. If people are punished for failure, they will become risk averse. If they are risk averse, they will not try new things. Without trying new things, progress will grind to a halt. Embrace failure. Managers should encourage their teams to fail fast and fail early. This means supporting those who fail and rewarding attempts, not success.

Finally, data driven quality requires a change in the very nature of what is rewarded. Traditional software processes reward shipping. This is bad. Shipping something users do not want is of no value. In fact, it is arguably of negative value because it complicates the user experience and it adds to the maintenance burden of the software. Instead of rewarding shipping, managers in a data driven quality model must reward impact. Reward the team (not individuals) for improving the KPIs and other metrics. These are, after all, what people use the software for and thus what the company is paid for.

Team is the important denominator here. Individuals will be taking risks which may or may not pay off. One individual may not be able to conduct sufficient experiments to stumble across success. A team should be able to. Rewards at the individual level will distort behavior and reward luck more than proper behavior.

The data driven quality culture is radically different from the big up front testing culture. As Clayton Christensen points out in his books, the values of the organization can impede adoption of a new system. It is important to explicitly adopt not just new processes, but new values. Changing values is never a fast process. The transition may take a while. Don't give up. Instead, learn from failure and improve.

If you want to be notified when your app is uninstalled, you can do that from your uninstaller

MSDN Blogs - Wed, 07/30/2014 - 07:00

A customer had a rather strange request. "Is there a way to be notified when the user uninstalls any program from Programs and Features (formerly known as Add and Remove Programs)?"

They didn't explain what they wanted to do this for, and we immediately got suspicious. It sounds like the customer is trying to do something user-hostile, like seeing that a user uninstalled a program and immediately reinstalling it. (Sort of the reverse of force-uninstalling all your competitors.)

The customer failed to take into account that there are many ways of uninstalling an application that do not involve navigating to the Programs and Features control panel. Therefore, any solution that monitors the activities of Programs and Features may not actually solve the customer's problem.

The customer liaison went back to the customer to get more information about their problem scenario, and the response was, that the customer is developing something like an App Lending Library. The user goes to the Lending Library and installs an application. They want a way to figure out when the user uninstalls the application so that the software can be "checked back in" to the library (available for somebody else to use).

The customer was asking for a question far harder than what they needed. They didn't need to be notified if the user uninstalled any application from the Programs and Features control panel. They merely needed to be notified if the user uninstalled one of their own applications from the Programs and Features control panel.

And that is much easier to solve.

After all, when an application is installed, it registers a command line to execute when the user clicks the Uninstall button. You can set that command line to do anything you want. For example, you can set it to

Uninstall­String = "C:\Program Files\Contoso Lending Library\CheckIn.exe" ⟨identification⟩

where ⟨identification⟩ is something that the Check­In program can use to know what program is being uninstalled, so that it can launch the real uninstaller and update the central database.
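For illustration, such a registration could look like the following registry fragment (the product key, paths, and the /app switch are all hypothetical; only the Uninstall key location and the DisplayName/UninstallString value names are standard):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\ContosoSampleApp]
"DisplayName"="Contoso Sample App"
"UninstallString"="\"C:\\Program Files\\Contoso Lending Library\\CheckIn.exe\" /app ContosoSampleApp"
```

When the user clicks Uninstall in Programs and Features, Windows runs the registered command, so the Check­In program gets its notification for free.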
