
Feed aggregator

Introducing the Web Authentication API (Edge development with Windows Hello)

MSDN Blogs - Tue, 06/07/2016 - 18:20

Development with Windows Hello

Hello.
Sorry for the long gap between posts. (I was on an extended vacation.)

Several initiatives toward unified, lower-friction, and more secure authentication that does not depend on passwords alone are being proposed alongside W3C standardization efforts, such as the Credential Management API implemented in Chrome and the Web Authentication API implemented in Microsoft Edge (Preview Build), which this post introduces.

In this post, I take a detailed look at the W3C Web Authentication API, which conforms to FIDO 2.0, through actual programming in Microsoft Edge.

As introduced many times on this blog, Microsoft ships a framework called Microsoft Passport in Windows 10. This framework combines secure, password-free authentication methods such as multi-factor (device-bound) authentication with biometric authentication, making sign-in more intuitive (reducing user effort such as typing). It conforms to FIDO and is implemented as an extension of existing technologies, so the design also takes future interoperability into account.

In the earlier post "App development with Windows Hello," I showed how to use this from a program via the Windows API (the UWP KeyCredentialManager API). The Web Authentication API introduced here lets you implement the same concept in web applications.

Note that, as also mentioned in the BUILD 2016 sessions, the Web Authentication API currently (as of 2016/05) has the following limitations.

  • It is still an early-draft implementation (namespaces and so on use an MS prefix). It currently works only in Microsoft Edge.
  • External authenticators (authentication via external devices over USB, Bluetooth (BT), etc.) are planned for the future; for now, only embedded authenticators (authentication built into the device, such as a TPM) are supported. Once FIDO devices are supported, this API will also enable authentication with portable devices, including wearables. (Although not an official Microsoft post, see also this article.)

 

Concepts (recap)

First, as covered in "App development with Windows Hello," let's review the overall flow of Microsoft Passport. (This has been covered many times, but here it is again as a refresher.)

Authentication is mutual, using a private/public key pair: the private key is kept on the device and the public key is held by the service it connects to.
First, device-side authentication (a closed authentication in which no information travels over the network), such as a PIN or biometric authentication with Windows Hello, is used to create the key pair, register the private key on the device, and retrieve the public key (which is then registered on the service side). In subsequent sign-ins, the same device-side authentication (PIN, Windows Hello, etc.) unlocks the stored private key, which is used to produce a digital signature over challenge data (a nonce, for example); the application (service) side then verifies that signature with the public key. (Azure AD integrates with this mechanism in the same way.)

For this reason, the Web Authentication API has only two (main) functions.
makeCredential creates the initial key pair, registers the private key (on the device), and returns the public key. (As described above, a PIN entry or Windows Hello biometric authentication is performed at this point.) An application will typically call this function just once, when the user first starts using it.
getAssertion generates a digital signature over challenge data using the registered private key. (Again, a PIN entry or Windows Hello biometric authentication is performed.) For every subsequent sign-in, the application calls getAssertion to generate a digital signature.

 

Programming

The API namespaces defined by the W3C are webauthn.makeCredential and webauthn.getAssertion, but as noted above, the current implementation is an ms-prefixed one based on the draft spec, so in today's Edge (as of 2016/05) you explicitly call msCredentials.makeCredential and msCredentials.getAssertion.

The following is a simple example using these APIs.

<!DOCTYPE html>
<html>
<head>
  <title>Web Authentication API Test</title>
  <script language="javascript">
    function make() {
      var accountInfo = {
        rpDisplayName: 'Contoso', // Name of relying party
        userDisplayName: 'Tsuyoshi Matsuzaki' // Name of user account
      };
      var cryptoParameters = [
        {
          type: 'FIDO_2_0',
          algorithm: 'RSASSA-PKCS1-v1_5'
        }
      ];
      msCredentials.makeCredential(accountInfo, cryptoParameters)
        .then(function (result) {
          // for debugging
          document.getElementById('credID').value = result.id;
          document.getElementById('publicKey').value = JSON.stringify(result.publicKey);
          //alert(result.algorithm)
          //alert(result.attestation)
        }).catch(function (err) {
          alert('err: ' + err.message);
        });
    }

    function sign() {
      var credID = document.getElementById('credID').value;
      var filters = {
        accept: [
          {
            type: 'FIDO_2_0',
            id: credID
          }
        ]
      };
      msCredentials.getAssertion('challenge value', filters)
        .then(function (result) {
          // for debugging
          document.getElementById('signature').value = result.signature.signature;
          document.getElementById('authnrdata').value = result.signature.authnrData;
          document.getElementById('clientdata').value = result.signature.clientData;
        });
    }
  </script>
</head>
<body>
  <button onclick="make()">Make</button>
  <button onclick="sign()">Sign</button>
  <div>
    Credential ID:<input type="text" size="120" id="credID"><br>
    Public Key:<input type="text" size="120" id="publicKey"><br>
    Base64 Encoded Signature:<input type="text" size="120" id="signature"><br>
    Base64 Encoded AuthnrData:<input type="text" size="120" id="authnrdata"><br>
    Base64 Encoded ClientData:<input type="text" size="120" id="clientdata"><br>
  </div>
</body>
</html>

In this web application, pressing the [Make] button calls makeCredential; you are prompted for PIN entry or biometric authentication (Windows Hello), and once device-side authentication succeeds, a key pair is created, the private key is registered in the device's key container, and the public key (a JSON value) is returned.

The returned publicKey is a key in the JSON Web Key format shown below. In this sample the received public key is simply written into a text box, but normally you would store this public key on the application (service) side for use in later authentication (the getAssertion flow).

{ "kty": "RSA", "alg": "RS256", "ext": false, "n": "xCqz2wEsl-3...", "e": "AQAB" }

Note: The id returned by makeCredential (credID in the sample code above) is a key identifier. (It plays the same role as the argument passed to RequestCreateAsync and OpenAsync in the KeyCredentialManager API introduced in "App development with Windows Hello.")
In the future you will be able to specify this key identifier up front as an argument to makeCredential, but that is not possible yet. (Note that calling makeCredential repeatedly with the same key identifier overwrites the key.)
There is no way (no API) to recover this key identifier once it is lost, so be careful: once you create it, make sure your application remembers it.

Pressing the [Sign] button runs getAssertion, so again you are prompted for PIN entry or biometric authentication (Windows Hello). On success, a digital signature is generated over the challenge data passed as the argument to getAssertion (here, the string 'challenge value') and the signature value is returned. (The authnrData and clientData in the sample code above are explained later.)

As we'll see shortly, the application side verifies the received digital signature with the public key it stored earlier. (There are a few tricks involved, described below.)

 

Programming with the polyfill

If you want to program against the standard W3C API instead of the MS-prefixed (vendor-specific) namespace, a polyfill, webauthn.js, is available.
With this polyfill you can write standard code using webauthn.makeCredential, webauthn.getAssertion, and so on, as shown below. (Details such as how return values are retrieved also differ from the code above, so compare the two.)

<!DOCTYPE html>
<html>
<head>
  <title>Web Authentication polyfill test</title>
  <script src="webauthn.js"></script>
  <script language="javascript">
    function make() {
      var accountInfo = {
        rpDisplayName: 'Contoso', // Name of relying party
        userDisplayName: 'Tsuyoshi Matsuzaki' // Name of user account
      };
      var cryptoParameters = [
        {
          type: 'ScopedCred', // also 'FIDO_2_0' is okay !
          algorithm: 'RSASSA-PKCS1-v1_5'
        }
      ];
      webauthn.makeCredential(accountInfo, cryptoParameters)
        .then(function (result) {
          document.getElementById('credID').value = result.credential.id;
          document.getElementById('publicKey').value = JSON.stringify(result.publicKey);
        }).catch(function (err) {
          alert('err: ' + err.message);
        });
    }

    function sign() {
      webauthn.getAssertion('challenge value')
        .then(function (result) {
          document.getElementById('signature').value = result.signature;
          document.getElementById('authnrdata').value = result.authenticatorData;
          document.getElementById('clientdata').value = result.clientData;
        });
    }
  </script>
</head>
<body>
  <button onclick="make()">Make</button>
  <button onclick="sign()">Sign</button>
  <div>
    Credential ID:<input type="text" size="120" id="credID"><br>
    Public Key:<input type="text" size="120" id="publicKey"><br>
    Base64 Encoded Signature:<input type="text" size="120" id="signature"><br>
    Base64 Encoded AuthnrData:<input type="text" size="120" id="authnrdata"><br>
    Base64 Encoded ClientData:<input type="text" size="120" id="clientdata"><br>
  </div>
</body>
</html>

Note that in this sample, as shown above, getAssertion can be called without specifying the Credential Id (the key identifier described earlier) because the polyfill keeps the Credential Id in indexedDB internally. (When this is eventually built into browsers, this will presumably be handled by browser-side functionality.)

 

Verifying the FIDO 2.0 signature

The digital signature obtained above can be verified with standard RSA techniques (for example, the openssl_verify function in PHP), as described in "Service (API) development with Azure AD (verifying the access token)."
However, there are a few FIDO 2.0-specific conventions (points to watch) for handling the signature, so here they are along with code. (For the details of the spec, see "W3C: Web API for accessing FIDO 2.0 credentials.")

First, the digital signature you get back is not a signature directly over the challenge data (the first argument to getAssertion).
It is a digital signature whose signed input is the concatenation of the authnrData (authenticatorData) returned by getAssertion and the hash of the clientData. The clientData is a Base64 URL-encoded string (Base64 encoding with + replaced by -, / replaced by _, and = removed) of a JSON document in the following format built from the challenge data passed to getAssertion, so semantically it is equivalent to the challenge data you specified. (Think of it as simply wrapped in JSON.)

{ "challenge" : "challenge value" }

Accordingly, the digital signature can be verified in PHP as follows. ($sig_enc below is the digital signature to verify.)

As noted above, the public key used here is an RSA public key given as a JSON modulus ("n" in the JWK above) and exponent ("e" in the JWK above), so the code below converts it into a public key that PHP (OpenSSL) can handle by building a PEM encoding, and then verifies with openssl_verify. (Alternatives such as Crypt_RSA would also work.)

<?php
function verify_signature() {
  // input value (see previous)
  $challenge = 'challenge value';
  $sig_enc = 'nF7SxLHfOd...';
  $n = 'xCqz2wEsl-3...';
  $e = 'AQAB';
  $authnrdata_enc = 'AQAAAAA';
  $clientdata_enc = 'ew0KCSJjaG...';

  // return 1, if signature is valid
  // return 0, if signature is invalid
  $res = 0;

  // get signature
  $sig = base64_url_decode($sig_enc);

  // get signed target input (nonce)
  $authnrdata = base64_url_decode($authnrdata_enc);
  $clientdata = base64_url_decode($clientdata_enc);
  $signeddata = $authnrdata . hash('sha256', $clientdata, true);

  // get public key
  $modulus = strtr($n, '-_', '+/');  // "=" erased base64 encode
  $exponent = strtr($e, '-_', '+/'); // "=" erased base64 encode
  $cert_data = 'MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA' . $modulus . 'ID' . $exponent;
  $cert_txt = '-----BEGIN PUBLIC KEY-----' . "\r\n" .
    wordwrap($cert_data, 64, "\r\n", true) .
    "\r\n" . '-----END PUBLIC KEY-----';
  $pkey_obj = openssl_pkey_get_public($cert_txt);
  $pkey_arr = openssl_pkey_get_details($pkey_obj);
  $pkey_txt = $pkey_arr['key'];

  // verify signature
  $res = openssl_verify($signeddata, $sig, $pkey_txt, OPENSSL_ALGO_SHA256);
  // if an error occurred, please check as follows
  // $test = openssl_error_string();

  return $res;
}

// Helper functions
function base64_url_decode($arg) {
  $res = $arg;
  $res = strtr($res, '-_', '+/');
  switch (strlen($res) % 4) {
    case 0:
      break;
    case 2:
      $res .= "==";
      break;
    case 3:
      $res .= "=";
      break;
    default:
      break;
  }
  $res = base64_decode($res);
  return $res;
}
?>

 

Isolation by origin (domain)

As described in "App development with Windows Hello," the UWP KeyCredentialManager API isolates keys per user and per app. (For example, a key could not be shared across apps.)
Similarly, the Web Authentication API isolates keys per user and per domain (origin).

For example, a key created on web site A cannot be used from web site B, even if you specify the same key identifier; this is blocked for security reasons. (If it were possible, the credential could be hijacked by another site.)

In other words, just as with KeyCredentialManager, note that this API cannot be used to authenticate against other services such as Azure AD or Office 365. Think of it strictly as an API for authenticating users of your own web application.

 

Reference

[Windows Blog] A world without passwords: Windows Hello in Microsoft Edge
https://blogs.windows.com/msedgedev/2016/04/12/a-world-without-passwords-windows-hello-in-microsoft-edge/

 

Free MYOB tax time workshop live from Microsoft Store!

MSDN Blogs - Tue, 06/07/2016 - 18:00

BizSpark startups are invited to a free MYOB tax time workshop in the Microsoft Store!!

Join Accounting Expert Debra Anderson who’ll answer all your tax time questions and give you loads of tips & tricks including how to improve your cash flow and key things to do before the EOFY. We’ll also be announcing exclusive joint offers and you’ll have the chance to win the ultimate small business Surface Book & MYOB bundle.

Two chances to tune in: Friday 17 June 10am-12pm or  Thursday 23 June 6pm-8pm.
You can attend in-person or online!

Topics featured:

  1. Tax time tips and tricks for small businesses
  2. Getting SuperStream-ready ahead of EOFY
  3. Top tips to improve your cash flow
  4. Tips for choosing the right book-keeper
  5. Key things to do inside MYOB before EOFY

Register Now!

How do I combine overlapping ranges using U-SQL? Introducing U-SQL Reducer UDOs

MSDN Blogs - Tue, 06/07/2016 - 17:53
The problem statement

A few weeks ago, a customer on stackoverflow asked for a solution for the following problem:

Given a log file that contains time ranges in the form (begin, end) for a user-name, I would like to merge overlapping ranges for each user.

At the time I promised to provide a solution using a custom reducer, but got busy with a lot of other work items and some vacation. But now I will fulfill the promise.

First let’s look at the sample data provided in the stackoverflow post with some augmentation:

Start Time - End Time - User Name
5:00 AM - 6:00 AM - ABC
5:00 AM - 6:00 AM - XYZ
8:00 AM - 9:00 AM - ABC
8:00 AM - 10:00 AM - ABC
10:00 AM - 2:00 PM - ABC
7:00 AM - 11:00 AM - ABC
9:00 AM - 11:00 AM - ABC
11:00 AM - 11:30 AM - ABC
11:40 PM - 11:59 PM - FOO
11:50 PM - 0:40 AM - FOO

After combining the ranges, the expected result should look similar to:

Start Time - End Time - User Name
5:00 AM - 6:00 AM - ABC
5:00 AM - 6:00 AM - XYZ
7:00 AM - 2:00 PM - ABC
11:40 PM - 0:40 AM - FOO

For the purpose of this post, I placed a sample input file /Samples/Blogs/MRys/Ranges/ranges.txt onto our U-SQL github repository.

Analysis of the problem and possible solution

If you look at the problem, you will at first notice that you want to define something like a user-defined aggregation to combine the overlapping time intervals. However, if you look at the input data, you will notice that since the data is not ordered, you will either have to maintain the state for all possible intervals and then merge disjoint intervals as bridging intervals appear, or you need to preorder the intervals for each user name to make the merging of the intervals easier.

The ordered aggregation is simpler to scale out, but U-SQL does not provide ordered user-defined aggregators (UDAGGs) yet. In addition, UDAGGs normally produce one row per group, while in this case, I may have multiple rows per group if the ranges are disjoint.

Luckily, U-SQL provides a scalable user-defined operator called a reducer which gives us the ability to aggregate a set of rows based on a grouping key set using custom code.

So my first script outline would look something like the following code where ReduceSample.RangeReducer is my user-defined reducer (Reducer UDO):

@in = EXTRACT begin DateTime, end DateTime, user string
      FROM "/Samples/Blogs/MRys/Ranges/ranges.txt"
      USING Extractors.Text(delimiter:'-');

@r = REDUCE @in ON user
     PRODUCE begin DateTime, end DateTime, user string
     USING new ReduceSample.RangeReducer();

OUTPUT @r TO "/temp/result.csv" USING Outputters.Csv();

So now the questions are: how can I sort the rows in each group to perform the range reduction, and what does my reducer look like?

First attempt at writing the range reducer

The way to write a reducer is to implement an instance of Microsoft.Analytics.Interfaces.IReducer. In this case, since we do not need to provide any parameters, we only need to overwrite the abstract Reduce method as follows:

using Microsoft.Analytics.Interfaces;
using Microsoft.Analytics.Types.Sql;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ReduceSample
{
    public class RangeReducer : IReducer
    {
        public override IEnumerable<IRow> Reduce(IRowset input, IUpdatableRow output)
        {
            // Insert your code to reduce
        } // Reduce
    } // RangeReducer
} // ReduceSample

The U-SQL REDUCE expression will apply the Reduce method once for each group in parallel. The input parameter thus will only contain the rows for a given group and the implementation can return zero to N rows as output.

Therefore, the implementation of the method can concentrate on just doing the reduction on each group. So how can we get each group sorted? The first thought is, to just take the input rowset and sort it and then iterate over it:

public override IEnumerable<IRow> Reduce(IRowset input, IUpdatableRow output)
{
    foreach (var row in input.Rows.OrderBy(x => x.Get<DateTime>("begin")))
    {
        // perform the range merging
    } // foreach
} // Reduce

However, this approach will run into the following error message:

E_RUNTIME_USER_UNHANDLED_EXCEPTION_FROM_USER_CODE An unhandled exception from user code has been reported. Unhandled exception from user code: "Access an expired row using Get method, column: begin, index: 0. Please clone row when iterating row set"

because the UDO's IRowset is implemented as a forward-only streaming API that does not support accessing a row again after the next row has been accessed, and LINQ's OrderBy needs to see the values of at least two rows to be able to sort them.

The final version of the range reducer

Fortunately, U-SQL’s REDUCE expression provides an option to pre-sort the rows before they are being passed to the Reduce method using the PRESORT clause (which just recently got released and at the time of the writing of this blog has not been documented in the reference documentation yet). When adding the PRESORT clause, the rows in the input rowset are now guaranteed to be ordered.

So now we can focus on implementing the actual range merge logic (refer to the comments in the code for the explanation of the code):

public override IEnumerable<IRow> Reduce(IRowset input, IUpdatableRow output)
{
    // Init aggregation values
    int i = 0;
    var begin = DateTime.MaxValue; // Dummy value to make compiler happy
    var end = DateTime.MinValue;   // Dummy value to make compiler happy

    // requires that the reducer is PRESORTED on begin and READONLY on the reduce key.
    foreach (var row in input.Rows)
    {
        // Initialize the first interval with the first row if i is 0
        if (i == 0)
        {
            i++; // mark that we handled the first row
            begin = row.Get<DateTime>("begin");
            end = row.Get<DateTime>("end");
            // If the end is just a time and not a date, it can be earlier than the begin, indicating it is on the next day.
            // This lets us fix up the end to the next day in that case
            if (end < begin) { end = end.AddDays(1); }
        }
        else // handle the remaining rows
        {
            var b = row.Get<DateTime>("begin");
            var e = row.Get<DateTime>("end");
            // fix up the date if end is earlier than begin
            if (e < b) { e = e.AddDays(1); }
            // if the begin is still inside the interval, increase the interval if it is longer
            if (b <= end)
            {
                // if the new end time is later than the current, extend the interval
                if (e > end) { end = e; }
            }
            else // output the previous interval and start a new one
            {
                output.Set<DateTime>("begin", begin);
                output.Set<DateTime>("end", end);
                yield return output.AsReadOnly();
                begin = b;
                end = e;
            } // if
        } // if
    } // foreach

    // now output the last interval
    output.Set<DateTime>("begin", begin);
    output.Set<DateTime>("end", end);
    yield return output.AsReadOnly();
} // Reduce

You may notice that the code is not doing anything with the grouping key. One of the features of the UDO model is that if you specify in the U-SQL UDO expression that a column is READONLY, the column will be passed through automatically, and you can write your UDO code more generically by focusing just on the columns you want to transform.

Now that we have written the reducer, we have to go back to our U-SQL script and add the PRESORT and READONLY clauses:

@in = EXTRACT begin DateTime, end DateTime, user string
      FROM "/Samples/Blogs/MRys/Ranges/ranges.txt"
      USING Extractors.Text(delimiter:'-');

@r = REDUCE @in PRESORT begin ON user
     PRODUCE begin DateTime, end DateTime, user string
     READONLY user
     USING new ReduceSample.RangeReducer();

OUTPUT @r TO "/temp/result.csv" USING Outputters.Csv();

Final note: Recursive vs non-recursive reducers

If you now apply the above reducer on a large set of data (the customer mentioned that his CSV files were in the GB range), and some of your grouping keys may be much more frequently appearing than others, you will encounter something that is normally called data skew. Such data skew in the best case can lead to some reducers taking much longer than others, and in the worst case can lead to some reducers running out of the available memory and time resources (a vertex will time-out after running for about 5 hours). If the reducer semantics is associative and commutative and its output schema is the same as its input schema, then a reducer can be marked as recursive which allows the query engine to split large groups into smaller sub-groups and recursively apply the reducer on these subgroups to calculate the final result. A reducer is marked as recursive by using the following property annotation:

namespace ReduceSample
{
    [SqlUserDefinedReducer(IsRecursive = true)]
    public class RangeReducer : IReducer
    {
        public override IEnumerable<IRow> Reduce(IRowset input, IUpdatableRow output)
        {
            // Insert your code to reduce
        } // Reduce
    } // RangeReducer
} // ReduceSample

In our case, assuming the processing preserves the sort order among the rows in each recursive invocation, the reducer can be marked as recursive to improve scalability and performance.

You can find a Visual Studio project of the example on our GitHub repository.

de:code 2016 PRD-006 follow-up article

MSDN Blogs - Tue, 06/07/2016 - 17:00

Hello everyone.

At de:code 2016, held last month, I presented how to integrate Dynamics CRM with Azure Machine Learning (AzureML).

The sample solution shown in the session is now available:

https://github.com/takayakawano/decode2016/

Whether or not you attended the session in person, please take a look.

 

Integrating with Azure Machine Learning

Making use of machine learning is becoming more necessary than ever.

Dynamics CRM is also planned to provide out-of-the-box features that integrate with AzureML.

In this session, I showed a sample solution that builds a custom model on AzureML to predict the probability of winning an opportunity, calls it from Dynamics CRM, and displays the result on the form.

Using this approach, you can integrate with a variety of prediction models hosted on AzureML.

 

Sample solution

In this session, I presented a sample solution that predicts the probability of future opportunities from past sales opportunity data.

The user experience is as follows:

1. Open a new opportunity form.

2. Enter the required fields, plus the account, the estimated budget, and whether the decision maker has been identified.

3. Leave the probability blank and click Save.

4. When the save completes, the probability is set automatically.

 

Architecture

When an opportunity is saved, a synchronous workflow is invoked. A step in that workflow calls the custom activity published here, which in turn calls an AzureML Web API created in advance.

This Web API is a simple prediction model that outputs a probability from past opportunity data, based on four inputs: the opportunity's budget amount, whether a decision maker has been identified, the customer's company type, and its revenue.
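To give a feel for the kind of call the custom workflow activity makes, here is a minimal PowerShell sketch of invoking an AzureML request/response web service. The endpoint URL, API key, and input column names below are placeholders, not the ones used in the published sample.

# Placeholders - use the endpoint URL and API key of your own AzureML web service
$endpoint = "https://<region>.services.azureml.net/workspaces/<workspace>/services/<service>/execute?api-version=2.0&details=true"
$apiKey   = "<your API key>"

# The four inputs described above (column names are illustrative)
$body = @{
    Inputs = @{
        input1 = @{
            ColumnNames = @("budget", "decisionmaker", "companytype", "revenue")
            Values      = @(, @("5000000", "1", "3", "100000000"))
        }
    }
    GlobalParameters = @{}
} | ConvertTo-Json -Depth 6

# Call the scoring endpoint and inspect the predicted probability in the response
$response = Invoke-RestMethod -Method Post -Uri $endpoint `
    -Headers @{ Authorization = "Bearer $apiKey" } `
    -ContentType "application/json" -Body $body
$response.Results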

 

 

Detailed setup instructions will be announced once the session recording is published.

 

– Takaya Kawano, Premier Field Engineering

Microsoft Planner: Another look at MsolSettings – and a couple more answers

MSDN Blogs - Tue, 06/07/2016 - 16:18

I had a few questions following yesterday’s blog and also have some more resources to share on the Azure AD PowerShell commands.  Firstly I should make it clear that the required version of the PowerShell commands is 1.1.117.0 – and is at the current date (6/7/2016) in public preview.  Rob’s Groups Blog was posted yesterday and has more examples and talks about the other settings that you can configure.  Some of these others are not directly applicable to Planner in the way that the control of group creation is – but if you are using Planner then you are using Groups – so worth knowing what’s coming down the line.

From yesterday’s blog one improvement to my PowerShell would be to use the Get-MsolAllSettingTemplate and select the template by name rather than GUID as per Rob’s example:

$template = Get-MsolAllSettingTemplate | where-object {$_.displayname -eq “Group.Unified”}

This makes it very clear which template we are getting.  The template is used to pull the available settings, and you then apply these to a new settings object using

$setting = $template.CreateSettingsObject()

From this you can take a look at the default name/value pairs

$setting.Values

We see that EnableGroupCreation and AllowToAddGuests are both true in the template – all other values are blank.

The settings I didn’t cover yesterday were UsageGuidelinesUrl, ClassificationList and AllowToAddGuests.  These are not all implemented yet in Office 365, but are roadmap items that are coming soon.  You can configure the settings – but there is nothing to see just yet.

UsageGuidelinesUrl will be a handy thing to set – and first you would create a page in your SharePoint site that detailed your company usage guidelines for Groups (and other aspects of Office 365 too would make sense) then the ‘Group usage guidelines’ link would display whenever someone created a group so they could check the guidelines.  This will not initially be in the New Plan dialog.

ClassificationList is another ‘coming soon’ option – it would display as an extra drop-down between the Privacy and Language options when editing or creating a group. Again, this would be edited at the group rather than the plan.  AllowToAddGuests is exactly what it says on the tin.

We can configure our settings by setting the values – the Url for our guidelines, and a comma separated list for our different classifications:

$setting["UsageGuidelinesUrl"] = "https://guideline.contoso.com"

$setting["ClassificationList"] = "Red, Green, Blue"

And then we save our settings.

New-MsolSettings -SettingsObject $setting

We can see our existing setting objects using Get-MsolAllSettings, or pull a specific one using Get-MsolSettings -SettingId <GUID>.

We can remove our setting object using Remove-MsolSettings -SettingId <GUID>.  This might be more convenient than getting an existing setting object and updating the various settings – as there aren’t very many values to set.
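Putting the pieces above together, here is a minimal end-to-end sketch (the URL and classification values are just examples):

# Get the Group.Unified settings template by name and create a settings object from it
$template = Get-MsolAllSettingTemplate | Where-Object { $_.DisplayName -eq "Group.Unified" }
$setting = $template.CreateSettingsObject()

# Configure the values discussed in this post
$setting["UsageGuidelinesUrl"] = "https://guideline.contoso.com"
$setting["ClassificationList"] = "Red, Green, Blue"

# Save the settings object to the tenant
New-MsolSettings -SettingsObject $setting

# Review what is there (and remove it again if needed)
Get-MsolAllSettings
# Remove-MsolSettings -SettingId <GUID>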

Hopefully that gives you a good grounding in the MsolSettings and how they apply (or not) to Planner – on to some more Planner Answers!

Some users have noticed that, even without a license applied, users can either navigate to the service URL or respond to an assignment on someone else’s tasks from a plan and can still use Planner.  This was the initial intentional design – as it lowered the barriers to collaboration – particularly for First Release, where not all users may have seen the tile.  We are reviewing this currently so you may see some changes here, but hopefully the MsolSettings do give better control over Group creation.  Licensed users should see the tile on the App Launcher – that is the main difference.

Finally – and I hope to have some samples soon – but I have heard from other teams that license manipulation when you are handling very large numbers of users can be painfully slow when using PowerShell.  Using the Graph API apparently is much faster – handling many thousands of changes per minute.  If anyone has samples or experiences to share I’d love to hear them!
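In the meantime, as a rough illustration of what the Graph approach looks like, license assignment boils down to one POST per user against the assignLicense action. A minimal sketch (assuming you already have an access token for Microsoft Graph and know the SKU id – both are placeholders here) might look like this:

# Placeholders - supply a real access token, user and SKU id for your tenant
$token  = "<access token for https://graph.microsoft.com>"
$userId = "user@contoso.com"
$skuId  = "<GUID of the SKU that includes Planner>"

$body = @{
    addLicenses    = @(@{ skuId = $skuId; disabledPlans = @() })
    removeLicenses = @()
} | ConvertTo-Json -Depth 4

# POST /users/{id}/assignLicense on Microsoft Graph
Invoke-RestMethod -Method Post `
    -Uri "https://graph.microsoft.com/v1.0/users/$userId/assignLicense" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" -Body $body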

The June 2016 update for Skype for Business 2016 has been released

MSDN Blogs - Tue, 06/07/2016 - 16:06

Hello, this is the Japan Lync/Skype support team.
The June 2016 security update for Skype for Business 2016 has been released.

June 7, 2016, update for Skype for Business 2016 (KB3115087)
https://support.microsoft.com/ja-jp/kb/3115087

This update fixes a number of known issues.
They are linked from the KB article above, so please see that article for details.

Please apply the latest update and enjoy a comfortable Lync/Skype experience.

The information in this post (including attachments and links) is current as of the date it was written and is subject to change without notice.

The June 2016 update for Lync 2013 has been released

MSDN Blogs - Tue, 06/07/2016 - 16:01

Hello, this is the Japan Lync/Skype support team.

The June 2016 update for Lync 2013 has been released.

June 7, 2016, update for Lync 2013 (Skype for Business) (KB3115033)
https://support.microsoft.com/ja-jp/kb/3115033

This update fixes a number of known issues; some issues that remain after applying it have also been identified.
They are linked from the KB article above, so please see that article for details.

Please apply the latest update and enjoy a comfortable Lync/Skype experience.

The information in this post (including attachments and links) is current as of the date it was written and is subject to change without notice.

xSharePoint is now SharePointDsc – what you need to know!

MSDN Blogs - Tue, 06/07/2016 - 15:58

For just over 12 months now we have been working hard to grow the xSharePoint DSC module to let SharePoint 2013 and 2016 administrators use PowerShell Desired State Configuration to manage their SharePoint deployments. We’ve come a long way in the last year, and now with the help of my core team we are making an important transition – we are renaming from “xSharePoint” to “SharePointDsc”.

This is important for a number of reasons – the most important of which is removing the “x” element. If you have ever read the home page on GitHub for xSharePoint (and other “x” resources as well), we describe this as a flag that the resource is “experimental”. This made sense when the resources were first built – we needed to test and refine them, collect feedback from sources both internal and external to Microsoft, and add more functionality to make the module something much more than just an experiment. Now the time has come to move forward with a new name that indicates the “Production ready” state of our resources, and that is the main driver behind the change. There is an open RFC from the PowerShell team that touches on some of these issues in terms of how we approach versioning going forward with the PowerShell modules, which is also worth a read for more information on what is happening in this space.

What specifically is changing in SharePointDsc

The items in the module that are changing moving forward are:

  • The name will change from xSharePoint to SharePointDSC. This will mean that xSharePoint will no longer have new functionality published to the PowerShell gallery, version 0.12 will be the last version available under the old name [see note below on hotfixes]
  • All SharePoint DSC resource names will be renamed to begin with just “SP” instead of “xSP”. For example, xSPCreateFarm will become SPCreateFarm. This means updating your configurations to rename the resource types to the new name format. The parameters for each resource remain the same (with the exception of changes to schemas that were made after the 0.12 release – see the release notes for a list of changes in 1.0 to understand these changes)
  • Other internal name changes will take place to ensure that all internal function names go from [verb]-xSharePoint[noun] to [verb]-SPDSC[noun]. These are not designed to be called directly though, but in the event that you have written scripts that do use these, you will need to use the new function names
Hotfixes for xSharePoint

As version 0.12 of xSharePoint is the last version that will be published with that name, we wanted to ensure that we had an approach that would let us provide support and hotfixes to customers who are currently using xSharePoint and are unable (for whatever reason) to move to SharePointDsc. For that reason, a separate branch will be maintained with the code from the 0.12 release that we will be able to push specific hotfixes to in the event that a customer requires it. These will be published as increments to 0.12 (so from 0.12.1.0 to 0.12.2.0, 0.12.3.0 and so on). No new functionality will be added to this branch; SharePointDsc will be the module that receives all the new development effort – so customers should begin planning now for how to transition to the new module.

The release strategy for SharePointDsc

The next steps forward on how SharePointDsc is released are pretty simple. The next release (which is already available on the PowerShell gallery) will be numbered 1.0. Moving forward we will be doing releases every month and will be following the final version number strategy that is decided upon when the RFC from the PowerShell team is closed and formalised. Releases will usually be done at the start of the month, with the next scheduled release to be 1.1 in early July.

How to prepare for the module rename to SharePointDsc

There are a number of steps to be taken when the updated module becomes available. The below list is a high level walkthrough of the primary considerations.

Distribution of the module

Firstly you need to consider how you will deploy the module to your servers. The good news here is that because the names of all of the DSC resources and cmdlets have been changed, there is no reason that you can’t have xSharePoint and SharePointDsc on the same server as you transition. So the usual options for deploying these modules are available – such as deploying through PowerShellGet, adding them to a pull server (or Azure Automation if you are using it as your pull server), or through manually deploying the module to nodes you are pushing configurations to.
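For example, pulling the module down with PowerShellGet is a one-liner on each server, and the old module can stay installed side by side while you transition:

# Install the renamed module from the PowerShell Gallery (requires PowerShellGet)
Install-Module -Name SharePointDsc

# Both modules can coexist on the same server during the transition
Get-Module -ListAvailable -Name xSharePoint, SharePointDsc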

Updating configurations

Once you have the module available you can begin to update your configurations to use the new resource names. This will firstly mean changing the “Import-DscResource” cmdlet at the top of your config to use the “SharePointDsc” module and not “xSharePoint”. After this, change the name of each resource from the module to remove the x from the front (so, as described above, “xSPCreateFarm” becomes “SPCreateFarm”); a skeleton of this edit is sketched below.
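As a rough skeleton (not a complete farm configuration), the change looks like this; only the module reference and the resource type names change, and the body of each resource block carries over unchanged:

Configuration SharePointFarm
{
    # Before: Import-DscResource -ModuleName xSharePoint
    Import-DscResource -ModuleName SharePointDsc

    node "SPServer01"
    {
        # Before: the resource type was xSPCreateFarm; it is now SPCreateFarm.
        # The parameters inside each resource block are unchanged, so the body
        # can be copied over from your existing xSharePoint configuration as-is.
    }
}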

Republish your new configurations

Once you have updated the configurations you can republish them to your existing SharePoint servers. The new resources behave exactly the same as the old resources, so this should not trigger any changes, as the same test and set methods will be used – it’s just the names that have changed. So a configuration element that was in a compliant state before the change will continue to be compliant with the new name.

The future of SharePointDsc

We aren’t finished with SharePointDsc yet, not by a long shot. We keep our backlog of features, fixes and enhancements on GitHub – here you can see what we are working on, expected timelines for releases and you can even raise new issues for us to make any changes to the module here as well. We will be continuing to release updates with changes once a month and we will be constantly listening to feedback from the wider community. So raise an issue on GitHub, or post a comment here, and go out and do great things with SharePointDsc!

List of Lync cumulative updates (CU)

MSDN Blogs - Tue, 06/07/2016 - 15:50

Good evening. This is the Japan Lync Support Team.

1. Skype for Business Server 2015

Version        KB       Date
6.0.9319.235   3061064  2016/03/18
6.0.9319.102   –        2015/11/17
6.0.9319.55    –        2015/06/19

 

2. Skype for Business 2016

Version          KB       Date
16.0.4393.1000   3115087  2016/06/07
16.0.4339.1000   3114846  2016/03/08
16.0.4351.1000   3114696  2016/02/09
16.0.4324.1000   3114516  2016/01/12
16.0.4312.1000   3114372  2015/12/08
16.0.4300.1001   3085634  2015/11/10
16.0.4288.1000   2910994  2015/09/30

 

3. Lync Server 2013

Note: Only the latest version is available for download on the public site. However, in the case of a security patch (Sec), the immediately preceding patch may also still be published.

Version        KB                         Date
5.0.8308.945   3126637, 3126638           2016/01/07
5.0.8308.941   3121213, 3121215, 3120728  2015/12/15
5.0.8308.887   3051951                    2015/05/01
5.0.8308.871   3131061                    2015/03/19
5.0.8308.857   3018232                    2014/12/12
5.0.8308.815   2937305                    2014/09/23
5.0.8308.803   2986072                    2014/09/08
5.0.8308.738   2937310                    2014/08/05
5.0.8308.577   2905048                    2014/01/08
5.0.8308.556   2881684                    2013/10/07
5.0.8308.420   2819565                    2013/07/01
5.0.8308.291   2781547                    2013/02/27

 

4. Lync 2013 client (Skype for Business)

The same versions can also be applied to the Basic client and the VDI plugin.

Version          KB       Date
15.0.4833.1000   3115033  2016/06/07
15.0.4809.1000   3114944  2016/04/12
15.0.4797.1000   3114732  2016/02/09
15.0.4787.1001   3114502  2016/01/07
15.0.4779.1001   3114351  2015/12/08
15.0.4771.1001   3101496  2015/11/10
15.0.4763.1001   3085581  2015/10/13
15.0.4753.1000   3085500  2015/09/08
15.0.4745.1000   3055014  2015/08/14
15.0.4727.1001   3054791  2015/06/09
15.0.4719.1000   3039779  2015/05/12
15.0.4711.1002   2889923  2015/04/14
15.0.4701.1000   2956174  2015/03/10
15.0.4693.1000   2920744  2015/02/10
15.0.4659.1001   2889919  2014/10/29
15.0.4649.1000   2889860  2014/09/09
15.0.4641.1000   2881070  2014/08/12
15.0.4623.1000   2850074  2014/06/10
15.0.4615.1001   2880980  2014/05/13
15.0.4605.1003   2880474  2014/04/11
15.0.4569.1508   2863908  2014/03/11
15.0.4551.1001   2817678  2013/11/12
15.0.4551.1005   2825630  2013/11/07
15.0.4517.1504   2817621  2013/08/13
15.0.4517.1004   2817465  2013/07/09
15.0.4517.1001   2768354  2013/06/11
15.0.4481.1004   2768004  2013/05/20
15.0.4481.1000   2760556  2013/03/20
15.0.4454.1509   2812461  2013/02/27

 

5. Lync Server 2010

Version        KB       Date
4.0.7577.726   2493736  2016/04/18
4.0.7577.713   3057803  2015/05/01
4.0.7577.710   3030726  2015/02/06
4.0.7577.230   2957044  2014/04/24
4.0.7577.225   2909888  2014/01/08
4.0.7577.223   2889610  2013/10/07
4.0.7577.217   2860700  2013/07/12
4.0.7577.216   2791381  2013/03/15
4.0.7577.211   2791665  2013/01/29
4.0.7577.206   2772405  2012/11/06
4.0.7577.203   2737915  2012/10/11
4.0.7577.199   2701585  2012/06/16
4.0.7577.198   2698370  2012/04/20
4.0.7577.197   2689846  2012/03/29
4.0.7577.190   2670352  2012/03/01
4.0.7577.189   2670430  2012/02/07
4.0.7577.188   2658818  2012/01/23
4.0.7577.183   2650982  2011/12/13
4.0.7577.183   2514980  2011/11/19
4.0.7577.170   2616433  2011/09/13
4.0.7577.167   2592292  2011/08/29
4.0.7577.166   2571546  2011/07/25
4.0.7577.137   2500442  2011/04/20

 

6. Lync 2010 client

Version         KB       Date
4.0.7577.4484   3096735  2015/11/10
4.0.7577.4478   3081087  2015/09/08
4.0.7577.4476   3075593  2015/08/11
4.0.7577.4474   3072611  2015/07/07
4.0.7577.4456   3006209  2014/11/11
4.0.7577.4446   2953593  2014/06/10
4.0.7577.4445   2953593  2014/04/17
4.0.7577.4419   2912208  2014/01/08
4.0.7577.4409   2884632  2013/10/07
4.0.7577.4398   2842627  2013/07/12
4.0.7577.4392   2843160  2013/07/09
4.0.7577.4388   2827750  2013/05/14
4.0.7577.4384   2815347  2013/04/09
4.0.7577.4378   2791382  2013/03/14
4.0.7577.4374   2793351  2013/01/29
4.0.7577.4356   2737155  2012/10/11
4.0.7577.4109   2726382  2012/10/09
4.0.7577.4103   2701664  2012/06/16
4.0.7577.4098   2693282  2012/06/12
4.0.7577.4097   2710584  2012/05/14
4.0.7577.4087   2684739  2012/03/28
4.0.7577.4072   2670326  2012/03/01
4.0.7577.4063   2669896  2012/02/07
4.0.7577.4061   2670498  2012/01/28
4.0.7577.4053   2647415  2011/11/21
4.0.7577.4051   2514982  2011/11/19
4.0.7577.336    2630294  2011/10/19
4.0.7577.330    2624907  2011/10/11
4.0.7577.314    2571543  2011/07/25
4.0.7577.280    2551268  2011/05/24
4.0.7577.275    2540951  2011/04/27
4.0.7577.253    2496325  2011/04/04
4.0.7577.108    2467763  2010/01/20

Disclaimer:
The information in this post (including attachments and links) is current as of the date it was written and is subject to change without notice.

 

Modern document libraries in SharePoint

MS Access Blog - Tue, 06/07/2016 - 15:45

Last month, we unveiled our broad vision for the Future of SharePoint, and today we’re delighted to announce that modern document libraries are now rolling out to all Office 365 commercial customers worldwide. You can learn more about how to use modern libraries in the article “What is a document library?”

What’s new

Helping people share files and collaborate on content has always been central to our mission. That’s why we’re creating a better experience for document libraries that’s faster, more intuitive and responsive.

Here’s a look at what’s new:

The new, modern document library experience, showing two documents and a link pinned to the top.

User interface

Modern document libraries combine the power of SharePoint with OneDrive usability—Modern document libraries have an updated user interface that offers an experience similar to OneDrive, so it’s more intuitive to create a new folder and upload files in the browser. The ribbon has been replaced with a trim command bar, which provides intelligent commands relevant to the tasks at hand. If your organization has customized the ribbon with buttons that map to critical business functionality in your enterprise, those buttons will appear in the command bar as well. With this update, each new Office 365 group now gets a full modern document library, replacing the former “Files” page.

Important documents easily highlighted—Click Pin to top to add documents “above the fold” in any onscreen view.

Copy and move files from the command bar—Copying isn’t new, but the copy and move gestures are intelligent about displaying your information architecture and letting you create new folders on the fly.

Copy files from SharePoint command bar.

Import files from other libraries—You may not have to make as many copies any more. Document libraries are also intelligent about remembering other files you’ve been using in SharePoint. That’s why you can import other files from other libraries as links, without having to duplicate files between multiple sites. You still see thumbnails and metadata for native files. And SharePoint shows your list of most recent documents, so you don’t have to cut and paste a link.

Create a link in modern document libraries.

Personalization

Personalized views simplified—The new document libraries let you group files directly in the main page without clicking to a separate admin screen. You can also click and drag to change the size of your columns, as well as sort, filter and group from any column header. To make the view available to everybody else in the library, just click Save View.

Responsive and accessible design—Mobile browsers have the same features as the desktop, making SharePoint productive for every user—whether they interact via mouse, keyboard, touch or screen reader.

Metadata

Document metadata now available inline—You can now edit metadata directly from the main view in the information panel. No more clicking into multiple screens to apply an update! If you’re in a view that groups files by metadata, you can drag and drop files between groups to update the metadata. And if you miss something required, the document is no longer hidden behind enforced checkout—you just receive a reminder to enter the data when you can.

One-stop shopping for everything about your documents—Thanks to Office Online integration, you can navigate a complete document preview at the top of the information panel. The panel offers metadata, including the history of recent activity, updates to the file and who received a share to the file. You can also add more users or immediately stop all sharing. Finally, all other file properties are displayed, in case there’s anything else not already covered.

The document information panel.

Keeping it authentically SharePoint—While we enhanced the document libraries to make them as intuitive and productive as possible, we know that the power of SharePoint has always been in your ability to customize document libraries to work for your team. At the same time, there’s a rich tradition of using content types, check-in/check-out, versioning, records management and workflows in SharePoint. Modern document libraries inherit all of these.

Navigation

Modern libraries come to Office 365 Groups—To bring enhanced content management to group files, libraries belonging to an Office 365 group have a new header control at the top of the page. Unlike the old control, which included links to the group’s conversation, calendar and member management, the new control has a single link to the group’s conversation, from which users can navigate to calendar and member management.

Getting started with modern document libraries

As we roll out modern libraries into production, we know it’s important to focus on several key aspects of managing the overall user experience.

Since usability requires manageability, we keep IT in control of the experience. You may be ready to adopt this across the board or you might want to stay in classic mode until you can prepare your users. We give you full control of using classic or modern looks at the tenant, site collection and library level.

When we bring modern document libraries into production later in June, it will become the new default for all libraries in most cases. However, we will add the tenant and administrative controls in advance of the actual library rollout, so if you choose to opt out, you can do so before users start seeing the new experience. We also included customization detection, so if we see certain features and customizations that don’t work in the modern experience, we automatically drop back to classic mode.

And we’ll keep classic mode running well into 2017 while users and developers adapt and adopt the new capabilities. See the support.office.com article “What is a document library?” for more details.

There’s more to come

First Release customers have been actively using many of these features since April and their feedback has guided our improvements announced today. You can join that conversation on the Office 365 Network on Yammer and weigh in on the improvements that will be part of our general release. For more context on the future of team sites beyond the new, modern document library experience, read “SharePoint—the mobile and intelligent intranet.”

We heard your feedback on extensibility and customization in particular, and we’ll have more to share in a future update. We plan to add support for customizing the page using modern techniques. Until then, customized library pages should stay in classic mode.

In the meantime, learn more about using and supporting libraries in “What is a document library?,” try out the new document libraries in SharePoint Online and give us feedback directly inside the modern document library experience with the Feedback button.

Thanks for using SharePoint.

—Chris McNulty, @cmcnulty2000, senior product manager for the SharePoint team

Frequently asked questions

Q. Will new document libraries support customization?

A. Yes, modern document libraries will continue to support declarative CustomActions that represent menu and command actions. Solutions that are currently deployed that make use of this mechanism should continue to work as before, with actions appearing in the new command surface in addition to the ribbon in classic mode. CustomActions that deploy script, JSLinks and additional web parts on the page are currently not supported. Environments that require these unsupported features should continue using classic mode for the time being.

Q. How long will classic mode be supported?

A. We recognize the need to test and prepare for any disruption to user experiences such as document libraries. We expect to run the two modes in parallel into 2017.

Q. Will these modern experiences come to on-premises servers?

A. Bringing modern experiences to SharePoint Server 2016 is central to our vision and is very much a part of the roadmap. At this time, we have no information to share yet about how or how soon this will happen.

Q. Which versions of Internet Explorer work best with modern libraries?

A. SharePoint Online supports the latest version of the Safari, Firefox, Chrome and Edge browsers, along with Internet Explorer 10 and 11. Older versions of Internet Explorer are already out of support generally. Internet Explorer 8 and 9 were previously noted as a “diminished experience” in SharePoint Online. Users of these older browsers should remain in classic mode or, preferably, upgrade to a currently supported version.

The post Modern document libraries in SharePoint appeared first on Office Blogs.

Hooking up additional spam filters in front of or behind Office 365

MSDN Blogs - Tue, 06/07/2016 - 15:25

Note: This blog post reflects my own recommendations.

Over here in Exchange Online Protection (EOP), people sometimes ask me why we don’t recommend hooking up multiple layers of filtering in front of the solution. That is, instead of doing one of these:

Internet -> EOP -> hosted mailbox
Internet -> EOP -> on-prem mail server

… a customer wants to do something like this:

Internet -> on-prem mail server -> EOP hosted mailbox
Internet -> on-prem mail server -> EOP -> on-prem mail server

… or even this:

Internet -> another cloud filtering solution -> EOP hosted mailbox
Internet -> another cloud filtering solution -> EOP -> on-prem mail server

.

If you read through our Office 365 Mailflow Best Practices, you’ll see that those configurations are listed as not being supported. If you want to put another filter in front of EOP, you should ensure that the other filtering solution is doing your spam filtering. If your on-prem mail server does not have spam filtering, you should install one. In other words, I do not recommend pipelining and double spam filtering your email.

So why do I recommend this?

After all, adding more malware filters gives you better protection. So shouldn’t this result in better spam protection?

No.

EOP makes use of a lot of sending IP reputation. So, suppose the sending IP is 1.2.3.4. In the supported case, it looks like this:

Internet, 1.2.3.4 -> EOP

.

In the unsupported case, it looks like this:

Internet, 1.2.3.4 -> on-prem mail server 12.13.14.15 -> EOP

.

In the first case, EOP sees the original IP. In the second case, it sees the on-prem mail server’s IP. Since the on-prem mail server will never be on an IP reputation list, the email must be spam-filtered instead and not blocked at the network edge. This loss of original IP degrades the overall experience.

But why can’t we simply crawl through the headers of a message, looking for the original IP? After all, some solutions do that.

There are numerous reasons why we don’t do this but here’s the biggest one – our IP throttling doesn’t work.

EOP’s IP throttling is a variant of graylisting. If email comes from a new IP address [1], EOP throttles that IP by issuing a 450 error, instructing the sending mail server to go away and try again later. Most legitimate mail servers retry, whereas most spammers give up and move on to the next message. This is a technique that has been used in spam filtering for years, and when EOP introduced it, we saw a lot of spam from new IPs get blocked (that is, get blocked and not retried).

But if you do something like this:

Internet, 1.2.3.4 -> on-prem mail server 12.13.14.15 -> EOP

Even if EOP crawled through the headers and extracted the original IP address, and then issued 450’s to the connecting mail server, we’d be issuing the 450’s to the on-prem mail server (12.13.14.15) and not the original spammer. This would then force queues to build up on the on-prem mail server and then everything starts breaking. Either the on-prem mail server falls over because it can’t handle the queues building up, or it retries and tries to shove the message through EOP anyhow. We may not yet have enough reputation to make a spam verdict downstream. IP throttling works very well in the service.

But EOP assumes that in order to give you the best experience, we’re using all of the tricks up our sleeve to stop spam – including IP throttling. But putting another filter in front of EOP removes a key piece of that filtering. It isn’t made up elsewhere downstream.

.

IP throttling is a key piece, but the reality is that any throttling, sending-IP based or not, that issues 450 responses and assumes that good senders retry and spammers do not, won’t work properly if you stick something in between the origin and EOP.

And since not all of our filtering is being applied – even if we crawled through headers – you would not get the same filtering experience, because of the behavior differences between spammers (who won’t retry) and the intermediate mail server (which will). We wouldn’t apply throttling at all if we could get the same experience elsewhere.

That‘s why double spam-filtering is not supported, and why we don’t go out of our way to make it work.

.

That’s not to say you can’t put another service in front of EOP. But if you do:

  • IP reputation blocks, and IP throttling, do not work properly
  • DNS checks that use the sending IP do not work properly, such as PTR lookups, SPF checks, DMARC checks, and even DKIM if you route it through a mail server that modifies the message content
  • IP Allow entries do not work properly because the original sending IP has been lost
  • Some rules in the spam filter will not work properly because they look for a particular IP range or sending PTR
  • The Bulk mail filter does not work properly
  • The antispoofing checks do not work properly

.

All of this will result in more spam getting delivered, and more good email being misclassified as junk.

On the other hand, here’s what does still work:

  • Malware filtering
  • Exchange Transport Rules (ETRs) that don’t rely upon the sending IP
  • Safe and blocked senders
  • Safe and blocked domains
  • Advanced Threat Protection (Safe Links and Safe Attachments)
  • Some parts of the spam filter that only look at content still work, e.g., malicious URLs

So, filtering will work but it won’t be as good.

.

If you are going to use a third party to do spam filtering, we recommend you do it this way: Using a third-party cloud service with Office 365. That points your organization’s MX record at EOP so that we are in front and the third party is behind us. Many add-on services recommend you do it this way because they assume you have a spam filter in front of their service. In many cases, you can probably find an equivalent service in Office 365 to replace whatever you were using that other appliance or cloud-filtering service for, so you don’t need to run multiple services or appliances.

If you have to put a third-party in front of EOP such that your MX doesn’t point to EOP, then we recommend that you rely upon this third party to do spam filtering by having it stamp a set of x-headers for spam and non-spam, and then writing ETRs to look for those headers to mark as spam (SCL 5-9) and take the spam action, or non-spam (SCL -1, not 0-4) so it gets delivered to your inbox. It still goes through Malware, ETRs, Safe and blocked senders, and Advanced Threat Protection. Our other services (e.g., Data Leakage Protection, Advanced Security Management) still do, too.
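As a rough sketch of those ETRs in Exchange Online PowerShell (the header name and verdict values here are placeholders; use whatever your third-party filter actually stamps):

# Messages the third-party filter marked as clean: bypass EOP spam filtering (SCL -1)
New-TransportRule -Name "Third-party filter says not spam" `
    -HeaderContainsMessageHeader "X-ThirdParty-Spam-Verdict" `
    -HeaderContainsWords "clean" `
    -SetSCL -1

# Messages the third-party filter marked as spam: treat as spam (SCL 9)
New-TransportRule -Name "Third-party filter says spam" `
    -HeaderContainsMessageHeader "X-ThirdParty-Spam-Verdict" `
    -HeaderContainsWords "spam" `
    -SetSCL 9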

If you have to put a third-party in front of EOP and want double spam-filtering, you will probably notice more misclassified email than if you used either of the above two options.

Hope this helps.

 

[1] There’s much more to IP throttling than simply being from a new IP address without previous history; it’s more complicated than this.

 

Page Compression and the 4038 Length Limitation

MSDN Blogs - Tue, 06/07/2016 - 15:24

While it is well documented that when applying page compression to a table, only in-row data will be compressed, it is not so well known that only strings up to a maximum length of 4038 characters can be compressed with page compression. If you have ever turned on page compression for large blobs of data and wondered why you are not getting a good compression ratio, this post will explain why.

There are several prerequisites to page compression. One well documented stipulation is that compression will only work on in-row data. If a varchar(MAX) is used, the string data will only be held in-row if it is 8000 characters or less in length. Varchar(MAX) data can be compressed with page compression, but it has to be a short enough string. However, although 8000 characters and under will allow a record to fit in-row, the string would need to be 4038 characters or less for page compression.

The following example shows page compression not working for in-row data. Here we create a table, add 1000 records to it with a length of 8000 characters, apply page compression and then return the number of pages associated with that table.

CREATE TABLE MSCompressionDemoTestTable (stringdata varchar(8000))
GO
INSERT INTO MSCompressionDemoTestTable (stringData) VALUES (REPLICATE('a', 8000))
GO 1000
ALTER TABLE MSCompressionDemoTestTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE);
SELECT in_row_data_page_count FROM sys.dm_db_partition_stats WHERE OBJECT_ID = OBJECT_ID('MSCompressionDemoTestTable')
GO
DROP TABLE MSCompressionDemoTestTable

1000 pages are still being used by this table. No compression has been performed.

To investigate why this is not working we can look at the extended event sqlserver.page_compression_attempt_failed.

CREATE EVENT SESSION [PageCompressionErrors] ON SERVER
ADD EVENT sqlserver.page_compression_attempt_failed
WITH (MAX_MEMORY=4096 KB, EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS, MAX_DISPATCH_LATENCY=30 SECONDS, MAX_EVENT_SIZE=0 KB, MEMORY_PARTITION_MODE=NONE, TRACK_CAUSALITY=OFF, STARTUP_STATE=OFF)
GO

When attempting to compress the data the extended event is returning the failure_reason of OnlyOneRecordFound.

The algorithm that performs the page compression will look at each page it is compressing to see if there is more than one record on that page. If there is then it will attempt compression, if not then it will skip over that page. The algorithm sees no value in creating a dictionary and compressing just one record so ignores pages with one record in them.

If we halve the size of the string data we will get two records per page, and so the page compression will be successful.

CREATE TABLE MSCompressionDemoTestTable (stringdata varchar(8000))
GO
INSERT INTO MSCompressionDemoTestTable (stringData) VALUES (REPLICATE('a', 4000))
GO 1000
ALTER TABLE MSCompressionDemoTestTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE);
SELECT in_row_data_page_count FROM sys.dm_db_partition_stats WHERE OBJECT_ID = OBJECT_ID('MSCompressionDemoTestTable')
SELECT sys.fn_PhysLocFormatter (%%PHYSLOC%%) LocationOfTheRecord FROM MSCompressionDemoTestTable
GO
DROP TABLE MSCompressionDemoTestTable

We can see from the results that each page has two slots (or records). After we turned on page compression we have compressed from 1000 records down to 3 pages.

Up until now we have been looking at non-Unicode data. The following example uses an NVarchar (Unicode) type.

CREATE TABLE MSCompressionDemoTestTable (stringdata Nvarchar(4000))
GO
INSERT INTO MSCompressionDemoTestTable (stringData) VALUES (REPLICATE(N'a', 4000))
GO 1000
SELECT sys.fn_PhysLocFormatter (%%PHYSLOC%%) LocationOfTheRecord FROM MSCompressionDemoTestTable
ALTER TABLE MSCompressionDemoTestTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE);
SELECT in_row_data_page_count FROM sys.dm_db_partition_stats WHERE OBJECT_ID = OBJECT_ID('MSCompressionDemoTestTable')
GO
DROP TABLE MSCompressionDemoTestTable

From the results we can see that each record now fills a page on its own (one record per page), yet page compression still compresses the table down to 3 pages.

This is because row compression always runs before page compression. Row compression shrinks each value down to the smallest storage format it can; in this case the nvarchar(4000) data ends up stored as compactly as varchar(4000), and as we have already seen, 4,000-character varchar values allow two records per page, so page compression succeeds.
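If you want to gauge this effect without rebuilding the table twice, sp_estimate_data_compression_savings can preview the savings for both ROW and PAGE compression. This is a minimal sketch of my own against the demo table, assuming it lives in the dbo schema.

EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'MSCompressionDemoTestTable',
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = 'ROW';

EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'MSCompressionDemoTestTable',
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';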

In the following example we are explicitly turning on row compression so the impact can be demonstrated.

CREATE TABLE MSCompressionDemoTestTable (stringdata Nvarchar(4000))
GO
INSERT INTO MSCompressionDemoTestTable (stringData) VALUES (REPLICATE(N'a', 4000))
GO 1000
SELECT sys.fn_PhysLocFormatter(%%PHYSLOC%%) LocationOfTheRecordBeforeRowcompression FROM MSCompressionDemoTestTable
ALTER TABLE MSCompressionDemoTestTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = ROW);
SELECT sys.fn_PhysLocFormatter(%%PHYSLOC%%) LocationOfTheRecordAfterRowCompression FROM MSCompressionDemoTestTable
ALTER TABLE MSCompressionDemoTestTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE);
SELECT in_row_data_page_count FROM sys.dm_db_partition_stats WHERE OBJECT_ID = OBJECT_ID('MSCompressionDemoTestTable')
GO
DROP TABLE MSCompressionDemoTestTable

The results show that after row compression we have two rows per page, so page compression is able to do its work and take us down to 3 pages.

If we run the same example but this time with genuinely non-ASCII Unicode characters, page compression fails, because the initial row compression step cannot shrink the data enough to fit more than one record on a page.

CREATE TABLE MSCompressionDemoTestTable (stringdata Nvarchar(4000))
GO
INSERT INTO MSCompressionDemoTestTable (stringData) VALUES (REPLICATE(NCHAR(1234) + NCHAR(1299), 2000))
GO 1000
ALTER TABLE MSCompressionDemoTestTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE);
SELECT in_row_data_page_count FROM sys.dm_db_partition_stats WHERE OBJECT_ID = OBJECT_ID('MSCompressionDemoTestTable')
GO
DROP TABLE MSCompressionDemoTestTable

The largest string we can compress is 4,038 characters of non-Unicode data when the table is a heap with no other columns. This example shows that limit in action: the row compression stage first shrinks the rows just enough for two records to fit on one page.

CREATE TABLE MSCompressionDemoTestTable (stringdata varchar(8000))
GO
INSERT INTO MSCompressionDemoTestTable (stringData) VALUES (REPLICATE('a', 4038))
GO 1000
ALTER TABLE MSCompressionDemoTestTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = ROW);
SELECT sys.fn_PhysLocFormatter(%%PHYSLOC%%) LocationOfTheRecord FROM MSCompressionDemoTestTable
ALTER TABLE MSCompressionDemoTestTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE);
SELECT in_row_data_page_count FROM sys.dm_db_partition_stats WHERE OBJECT_ID = OBJECT_ID('MSCompressionDemoTestTable')
SELECT sys.fn_PhysLocFormatter(%%PHYSLOC%%) LocationOfTheRecord FROM MSCompressionDemoTestTable
GO
DROP TABLE MSCompressionDemoTestTable

All of these examples have used a heap with a single column, which gives the maximum row size we can compress. Every column added to the table reduces the amount of string data that can be compressed, because it limits how many rows fit on a page.

So when using page compression, you may get much worse compression than expected if you have wide rows and long strings. It is always best to check, before implementing page compression, how many rows fit on each page once row compression has been applied; if that number is one, page compression will have no effect.
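One way to make that check, sketched below as my own addition against the demo table, is to apply row compression and then compare the row count with the in-row page count from sys.dm_db_partition_stats; an average close to one row per page means page compression will be skipped.

ALTER TABLE MSCompressionDemoTestTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = ROW);

SELECT row_count,
       in_row_data_page_count,
       row_count * 1.0 / NULLIF(in_row_data_page_count, 0) AS avg_rows_per_page
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('MSCompressionDemoTestTable');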

New Channel 9 video about Xbox One dev kit mode

MSDN Blogs - Tue, 06/07/2016 - 15:23

A couple of months ago, we announced a UWP Apps on Xbox One developer preview that allows anyone to convert their retail Xbox One to developer mode and deploy and debug UWP apps on it. Yesterday, a new video was posted on Channel 9 where a couple of my colleagues describe Xbox One dev kit mode in more detail. I encourage you to check out the video at https://channel9.msdn.com/Shows/Level-Up/Episode-18-Xbox-Dev-Kit-Mode if you get a chance.

I am back on my MSDN blog talking about Big Data, Azure Data Lake and U-SQL

MSDN Blogs - Tue, 06/07/2016 - 14:14

Hi all

It has been several years since I last blogged on my MSDN blog, partly because I have been working on things I could not really blog about, and partly because the many changes to the blogging infrastructure lost my log-in. Now I have it back, and, as some of you have probably noticed, I am now working on U-SQL, our new highly scalable, easy-to-extend Big Data query language in Azure Data Lake. Here are some links to the VS blog and MSDN Magazine articles I have authored on it:

And you can find my presentations on U-SQL (and some older SQL Server related presentations as well) on slideshare.net.

Other resources are:

So how am I going to use this personal blog? I will blog more personal opinions on the data processing industry and its concepts, write ad-hoc, shorter posts reacting to tweets and to customer questions and requests, and author draft postings that I will later move over to the team blog.

Please let me know if you have questions and suggestions on things you want me to talk about.

A new release of ODBC for Modern Data Stores

MSDN Blogs - Tue, 06/07/2016 - 13:54

 

More than 15 years after the last release, Microsoft is looking at updating the Open Database Connectivity (ODBC) specification.

 

ODBC was first released in September of 1992 as a C-based call-level interface for applications to connect to, describe, query, and update a relational store. Since its introduction, ODBC has become the most widely adopted standard interface for relational data, fostering an ecosystem of first-party data producers as well as third-party vendors that build and sell ODBC drivers for a variety of data sources.

 

ODBC was designed for relational databases conforming to the ISO SQL-92 standard, and has not undergone a significant revision since the release of ODBC 3.5 in 1997. Since that time, not only have relational data sources evolved to support new data types, syntax, and functionality, but a variety of new sources of data have emerged, particularly in the cloud, that don’t conform to the constraints of a relational database.

 

The prevalence of tools, applications, and development environments that consume ODBC has led a number of vendors to create ODBC drivers for cloud-based and non-relational data sources. While this provides connectivity from existing tools, flattening the data into normalized relational views loses fidelity, functionality, semantics, and performance compared with a more natural representation of the data.

 

So we started to consider what it would look like to extend ODBC to more naturally support more modern Relational, Document-oriented, and other NoSQL stores. Borrowing and extending syntax and semantics from structured types in SQL-99, we identified a relatively small set of extensions necessary to support first-class representation of the modern data sources, as well as a set of general enhancements to improve the ability for applications to write interoperable code across ODBC drivers.

 

To be part of ODBC, these extensions would have to be backward compatible with existing drivers and applications, and extend ODBC in ways natural and consistent with the existing API.

 

From these efforts a new version of ODBC slowly took shape. We started to discuss these extensions with a few ODBC vendors to flesh out requirements and vet designs, and have reached the point that we’re ready to start talking more broadly about ODBC 4.0 – the first significant update to ODBC in over 15 years.

 

Thoughts/comments/ideas? Please share! And stay tuned here for more updates…

 

Michael Pizzo
Principal Architect, Microsoft Data Group

Java Tools Challenge results

MSDN Blogs - Tue, 06/07/2016 - 13:28

As I mentioned in a previous blog post, we are committed to making VS Team Services a great solution for all developers, for any application on any platform. Three months ago we kicked off the Java Tools Challenge, sponsored by the Visual Studio Partner Program and the Visual Studio Marketplace: a longer-term virtual hackathon designed to raise awareness of our Java solutions and engage the Java community to build working, published tools and apps.

The Challenge was an opportunity for Java developers to explore the extensibility of Visual Studio Team Services and to make use of some of our solutions for Java developers, such as our Eclipse plugin, cross-platform command line, IntelliJ plugin, our new cross-platform build agent and more. Participants were challenged either to develop a Visual Studio Team Services (VSTS) extension that helps developers create, test, deploy (and so on) Java apps, or to create a Java app using the Visual Studio Team Explorer Everywhere (Eclipse) plugin or the JetBrains IntelliJ plugin.

It is my pleasure to announce the following winners of the Java Tools Challenge!

 

Best Overall: jSnippy, VSTS extension

Prize: $10,000 cash, VSPP Premier Membership, Visual Studio Enterprise 2015 with MSDN (#3), Featured product listing on Visual Studio Marketplace.

jSnippy is a code snippet manager that stores code snippets for your team securely and with an easy-to-search interface. As a team member, you can contribute by creating code snippets in the repository. jSnippy provides an editor to write your code in and includes built-in Java syntax highlighting.

Team quote: “With jSnippy team members can easily search for code snippets using powerful tag and full text search capabilities within their modules. For Java-based Teams, jSnippy also provides an option to import commonly used code snippets on first time load.”

Best Extension: Allure, VSTS extension

Prize: $1,500 cash, Surface Pro 4, Visual Studio Enterprise 2015 with MSDN.

To make tests relevant, you need to know when a failure occurs and what caused it, and the report must be in a readable format that helps diagnose test failures quickly.

 

Team quote: “By creating the Allure VSTS extension, a general approach is used to create an HTML report with test results and failure screenshots intertwined. However, this was easier said than done since there is no proven standard – leaving us to code our own approach based on the test framework we are using. Leveraging a language agnostic open-source library like the Allure Framework, we were able to generate a robust, human-readable HTML report from each of our test runs.”

 

Best Mobile App: WiChat, Mobile App

Prize: $1,500 cash, Surface Pro 4, Visual Studio Enterprise 2015 with MSDN.

WiChat is a messaging application that creates a chat-room to connect nearby users using WiFi Hotspot. Need to message nearby friends in an environment where silence is mandatory? Why spend money on texting when instead you can stay connected via WiChat?

 

Team quote: “The connection is handled by Java’s ServerSocket and Socket Class and VSTS was used to keep track of team progress and enhance the private Git repository experience.”

 

Best App: Molecula Add-in for Outlook, App

Prize: $1,500 cash, Surface Pro 4, Visual Studio Enterprise 2015 with MSDN.

Molecula indexes all emails in your Inbox and groups all the people you communicated with using special criteria. Groups are visualized with cool bubbles—bigger ones represent groups of people you communicate with more frequently. You can start a new thread by clicking a bubble, rearranging the recipients for TO, CC and BCC fields, and you are done!

 

Team quote: “We used IntelliJ IDEA 15 and the VSTS plugin as our primary IDE. Server-side is built using Java and hosted in Microsoft Azure using Docker containers. On the client side we use JavaScript API for Office, Office UI Fabric as a framework for UI and Microsoft Graph API to access user emails.”

 

Best Runner-up Extension: Documentation (Doxygen), VSTS extension

Prize: $1,500 cash, Surface Pro 4, Visual Studio Enterprise 2015 with MSDN.

VSTS being a great ALM platform and Doxygen being a great documentation tool, I wanted to integrate both and get the benefit of always having up-to-date documentation that can be viewed directly in VSTS projects.

 

Team quote: “Documentation (Doxygen) is a VSTS extension that contains two parts: A Build task to generate documentation as part of Build; and a Documentation hub under Code hub group to render the documentation from the selected Build.”

 

Best Runner-up Mobile App: My Cellars and Tastes, Mobile App

Prize: Oculus Rift, Visual Studio Enterprise 2015 with MSDN.

My Cellars and Tastes is a free app in 6 languages to manage cellars of wines, champagnes, beers, olive oils, etc.

 

Team quote: “Originally, the app was built using Eclipse locally, then I added a local SVN repository and I recently configured a dedicated Visual Studio Team Services account for this challenge to manage my backlog, my code with Git and also automate Build and Release.”

 

Congratulations to all of the Java Tools Challenge participants who submitted projects, and especially to those who won prizes! And a big thanks to our other judges, Paul Barham, David Staheli, and Brian Benz, as well as to DevPost for hosting the Challenge!

 

You can view all of the Java Tools Challenge entries at http://javachallenge.devpost.com/submissions.

 

Brian
