In previous posts on universal app development for Windows 10 (Windows 10 – universal app development, and Bring your Android, iOS, Web or Desktop apps to Windows 10), we introduced the news for developers of modern store apps from a theoretical standpoint. In this post we finally get to the practical part and show how to start developing apps built on the Universal Windows Platform (UWP).

Developer tools
To develop Windows 10 apps you need Visual Studio 2015, in any edition (which means you can also use the Community edition, which is available to developers for free). During the Visual Studio installation you need to install the Universal Windows App Development Tools. You will find this component in the list of offered features (see the figure below). If you have already installed Visual Studio but did not install the universal app development tools, you have two options for adding them. The first is to modify the installed Visual Studio components: open Control Panel, go to Programs -> Programs and Features, right-click Visual Studio [your edition] 2015 and choose Change. The Visual Studio setup wizard opens; choose Modify and select the component mentioned above in the feature list. The second option is to download the VSToolsForWindows file from https://dev.windows.com/en-us/downloads. Start the download by clicking the link specific to your Visual Studio version in the Windows developer tooling section. The downloaded file also launches the Visual Studio setup wizard, and the rest of the procedure is the same as in the first approach.
Figure 1: Visual Studio installation - list of offered features
As for the operating system, it is recommended to develop on a machine running Windows 10, because of the support for direct debugging and launching of the app being developed. It is possible to develop UWP apps on Windows 8, or even Windows 7, but debugging options are limited on those systems: on Windows 8 you can run the app only in an emulator or simulator, while on Windows 7 local debugging is not possible at all because of the missing Hyper-V virtualization support.

Creating your first universal app project
We create a Windows 10 universal app project as follows:
Figure 2: Windows Universal templates
As I already mentioned, Template10 is a community effort to create an application template that contains already-implemented classes and functionality which you would otherwise often have to program yourself, allowing you to focus as much as possible on implementing your own application-specific logic. You can download Template10 as an archive or clone it directly from GitHub - https://github.com/Windows-XAML/Template10. After opening the Template10 solution in Visual Studio, you will find that it contains five different projects, namely:
Figure 3: Solution Explorer - Template10
My recommendation until the next post is published is to experiment with and explore the classes in the Template10 template. If you do not have much experience with Windows app development, many of them may seem unnecessary at the moment, but over time you will find that they implement functionality you would otherwise have to implement over and over yourself; eventually, after a few rounds, you would end up building a template of your own. I would also add that the Template10 project is still under development, so after some time the information described in this post may no longer be entirely accurate; before you start programming each application, I recommend checking the current state of the project and downloading it again if it has changed. Likewise, as I already mentioned, Template10 will also be available as a NuGet package, which will make it easier to use in your application.
On August 6th, we released the final version of Team Foundation Server 2015; you can find a list of the feature updates in the release notes.
Below are some of the highlights in TFS 2015:
This article was written by Lucian Wischik, Program Manager on the Managed Languages team.
A few days ago we released the Universal Windows App Development Tools for writing Windows 10 apps in Visual Studio 2015. This release is exciting because you can use the latest .NET technology to build Universal Windows Platform ("UWP") apps that run on every Windows device: the phone in your pocket, the tablet or laptop in your bag, the PC on your desk, the Xbox console in your living room, and the new devices joining the Windows family such as HoloLens, Surface Hub, and IoT devices like the Raspberry Pi 2.

Installing the UWP tools
You can install the free Visual Studio 2015 Community edition, which installs the UWP tools by default. If you install the Professional or Enterprise edition, choose Custom setup during installation and check the Universal Windows App Development Tools option to install the UWP tools.
If you have already installed Visual Studio 2015, there are two ways to get the new tools:
As a .NET developer, you will be delighted to see that UWP offers...
Here are some useful introductions and tutorials on UWP development:
In this article we want to cover what the other tutorials do not: what exactly has improved in UWP development for you as a .NET developer? First, we will introduce the new capabilities Microsoft offers .NET developers in ten pictures.
File > New > C#/VB > Windows > Universal. Start here to create a blank UWP app project. This is now much faster than in VS2015 RC, thanks to the NuGet improvements in the final release. You can now also create Portable Class Libraries shared across UWP, ASP.NET 5 and .NET 4.6.
Solution Explorer > References. NuGet packages are shown with a distinctive icon under the project's References node. The Microsoft.NETCore.UniversalWindowsPlatform package is important: it contains the .NET Core runtime and framework. The new project.json file also replaces packages.config, using NuGet 3.0, which is faster and more flexible than NuGet 2.0.
Adaptive XAML. Developers can build adaptive UI that suits every kind of device. XAML has evolved: there are now ViewState triggers, more device preview sizes, and a live visual XAML tree for debugging, along with technologies like x:Bind that improve data-binding performance.
Adaptive code. A great feature of universal apps is that they share most of their code across devices while still providing the best experience on each one. You can now write adaptive code in .NET that calls platform-specific WinRT APIs, which is much better than using reflection at run time.
Fast graphics: Win2D and System.Numerics.Vectors. If you need fast graphics, use the Win2D library, an excellent DirectX-based library made easy to use from .NET; you can of course still use SharpDX or MonoGame. System.Numerics.Vectors uses the CPU's SIMD instructions to provide faster vector and matrix math. These technologies let me draw a Mandelbrot fractal in just 70 ms on my mid-range Nokia Lumia 635 phone.
WCF, HTTP/2 and Sockets. The .NET Core libraries now include WCF and Add Service Reference, which were unavailable on Windows Phone before. HttpClient has been rewritten, with better performance and HTTP/2 support, and we have also added System.Net.Sockets, something .NET developers have long wanted in Windows Store apps.
.NET Native. When you build your app package in Release mode, the tools compile your program with the ".NET Native" compiler, which converts your code into highly optimized native machine code. As a result, your app starts faster, consumes less battery, and performs better overall.
Publishing to the Store. You will be glad to hear that the Dev Center is now integrated. When you upload your app, the submission wizard uploads your app's MSIL, and the Store recompiles your app into native machine code with .NET Native (which also makes your app as hard to decompile as C++ code) before offering it to users for download.
Application Insights and diagnostic tools. New projects include the Application Insights packages by default, which help you understand detailed analytics about your app (such as crashes and usage); top app publishers know how to use these tools to keep their apps ahead. Richer tracing is also available through ETW (Event Tracing for Windows).
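As a small illustration of the kind of math System.Numerics.Vectors accelerates (mentioned in the fast-graphics item above), the sketch below interpolates 2-D points with Vector2; on supported runtimes these component-wise operations compile down to SIMD instructions. The method and values are my own example:

```csharp
using System;
using System.Numerics;   // System.Numerics.Vectors package

class VectorDemo
{
    // Linearly interpolate between two points using SIMD-accelerated Vector2 math.
    public static Vector2 Lerp(Vector2 a, Vector2 b, float t)
    {
        return a + (b - a) * t;   // component-wise add/subtract/scale, vectorized by the JIT
    }

    static void Main()
    {
        Vector2 mid = Lerp(new Vector2(0, 0), new Vector2(10, 20), 0.5f);
        Console.WriteLine(mid);   // prints the midpoint
    }
}
```

The same pattern scales to Vector3, Vector4 and Matrix4x4, which is what makes tight inner loops like a Mandelbrot renderer so much faster.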
.NET Native uses ahead-of-time (AOT) compilation: your code is converted to native machine code when you compile, unlike traditional .NET apps, which use just-in-time (JIT) compilation after the program first runs. .NET Native works more like a C++ compiler; in fact, part of the .NET Native tool chain uses the Visual C++ compiler, for example when you submit your app to the Windows Store, producing faster, leaner machine code.
.NET Native brings many benefits to end users: apps start about 60% faster on average and use less memory. For some UWP apps I developed, startup on a Nokia Lumia 635 went from 1 second down to 110 ms, thanks to .NET Native and the new performance and diagnostic tools in VS2015.
You can find many articles about the .NET Native preview releases; UWP development is the first area to formally adopt .NET Native. For most of the development cycle developers will not particularly notice .NET Native, since it only kicks in for Release builds. It does make Release builds take longer to compile, and makes them harder to debug or to profile with the Visual Studio diagnostic tools, but otherwise it does not affect how the app behaves. You can still build in Debug mode, where the runtime is CoreCLR, so you keep an excellent debugging experience while tuning your app.
Although .NET Native has been in public preview for over a year, UWP development is the first time many developers will use it. Given developers' curiosity, and our confidence in this technology, I will describe how it works in more detail later in this article.

The .NET Core framework
UWP apps use CoreFX, which is a superset of the Windows Store development APIs.
Next, let us highlight a few .NET Core FX features that UWP developers will find interesting:
Two things about Core FX are exciting. First, it is fully open source. Second, it is not tied to a particular version of Windows or Visual Studio: anyone can contribute code every day, just like the .NET team, which works with the community to keep extending CoreFX and adding more APIs, and UWP apps can use those newly added APIs immediately. Thanks to project.json and the NuGet integration, any UWP developer can get the latest .NET Core FX packages through the "Manage NuGet Packages" dialog.
Note that when you create a new project, the tools bring in the complete, official, tested Microsoft .NET Core build. If you want to use a different version of a library, add it not via "References > Add Reference..." but via "References > Manage NuGet Packages".
If you are developing .NET Core libraries, you can write PCLs (Portable Class Libraries) targeting .NET 4.6, UWP and ASP.NET 5.

Universal projects
A new concept UWP brings is the universal project: one project open in Visual Studio, one shared code base, and a single package uploaded to the Windows Dev Center that runs on multiple "device families" (desktop, mobile, Xbox, ...). You no longer need a separate shared project with #ifdef-style directives to distinguish platforms, which makes the app project much easier to maintain.
The MSDN article "Guide to UWP apps" explains how to make your app look great across different devices; happily, adapting the UI to the window size is usually all it takes to work well on all of them.
From the .NET perspective, the technique to use is adaptive code. Here is an example:
My app works fine on Windows 10 desktop, but Windows 10 mobile devices show a status bar, and for consistency I call StatusBar.HideAsync to hide it. The StatusBar type does not exist on desktop, so the code handles this very simply: it uses the WinRT API Windows.Foundation.Metadata.ApiInformation.IsTypePresent to check whether a given WinRT API exists in the current runtime environment, so the code below runs only on the platforms that have it.
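The original code listing was lost in this copy; a sketch of the check described above (the method name SetupAsync matches the one discussed later in this article) would look like this:

```csharp
using System.Threading.Tasks;
using Windows.Foundation.Metadata;
using Windows.UI.ViewManagement;

// Runs at app startup; hides the status bar only on device families
// (like phones) where the StatusBar type actually exists.
async Task SetupAsync()
{
    if (ApiInformation.IsTypePresent("Windows.UI.ViewManagement.StatusBar"))
    {
        await StatusBar.GetForCurrentView().HideAsync();
    }
    // On desktop the type is absent, so the call is simply skipped.
}
```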
Sometimes it is hard to tell whether an API you want to use needs an IsTypePresent check, so you can use the PlatformSpecific.Analyzer NuGet package: add it to your project and it will analyze whether you forgot a check and show a warning in the IDE.
This part is interesting because, in .NET development, adaptive code is currently possible only on the UWP platform and only for UWP types, and low-level .NET experts will want to know how it is done. In a Debug build, CoreCLR needs to be able to JIT your SetupAsync method, and to do that it must know the metadata of every type and the contents of every method, even ones the code never calls. UWP handles this by packaging an application-local file, "windows.winmd", which contains all the WinRT types and methods of every UWP device version. In a Release build, .NET Native bakes the necessary metadata into the final machine code in the form of COM IIDs and vtables.
Finally, I want to highlight a note about using PCLs in adaptive apps, because it is an important concept when bringing existing code into UWP: if you have built an "8.1 Universal" PCL targeting both Windows 8.1 and Windows Phone 8.1, your UWP app can still reference it, because such a PCL only calls a subset of WinRT that is also a subset of UWP.

NuGet 3.0 and "project.json"
NuGet has become the standard package manager for .NET application development, and we wanted to ship .NET Core as NuGet packages as well. But the existing NuGet 2.0 client and its accompanying packages.config file, great designs in the past, no longer suit a scenario like .NET Core with more than 100 sub-packages: they are slow and inflexible there. NuGet 3.0 fixes these problems; it debuted in ASP.NET 5, and UWP has now adopted the new NuGet too.
It is easy to tell whether a project uses NuGet 3.0: it has a project.json file instead of packages.config. You can use NuGet 3.0 and project.json in any existing .NET project without disrupting your work (though you must unload and reload the project first). The keys to how project.json works are:
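For reference, the project.json of a freshly created UWP project looks roughly like this (the version number is illustrative):

```json
{
  "dependencies": {
    "Microsoft.NETCore.UniversalWindowsPlatform": "5.0.0"
  },
  "frameworks": {
    "uap10.0": {}
  },
  "runtimes": {
    "win10-arm": {},
    "win10-arm-aot": {},
    "win10-x86": {},
    "win10-x86-aot": {},
    "win10-x64": {},
    "win10-x64-aot": {}
  }
}
```

Only the top-level dependency is listed; the 100-plus .NET Core sub-packages are resolved transitively, which is exactly what NuGet 2.0's flat packages.config handled poorly.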
Next, let us look at the benefits that project.json brings:
The following NuGet packages are not consumed in the same way in UWP apps; if you find other problematic packages, please let us know:
Incidentally, project.json is used by default in "modern" PCLs, that is, those targeting .NET 4.6, UWP and ASP.NET 5 Core.

UWP apps use CoreCLR when debugging and .NET Native for release
The chart below shows what happens when you build, debug and submit your app. The VB and C# compilers still produce DLLs in MSIL format as before; what differs is...
Debug builds: CoreCLR. When you build your UWP app in Debug mode, it uses the ".NET Core CLR" as its runtime, the same as ASP.NET 5. This gives a great edit+run+debug experience: fast deployment, full debugging, and Edit and Continue.
Release builds: .NET Native. When you build in Release mode, it spends 30 seconds or more optimizing your MSIL and its references into native machine code (we keep improving this). It uses "tree shaking" to remove code that is never used, and "Marshalling Code Generation" to precompile interop and serialization code so that no reflection is needed at run time. .NET Native optimizes the whole application, not only compiling it into native machine code but also producing a single native DLL, which you can find under bin\x86\Release\ilc.
.NET Core: CoreCLR and .NET Native are both ".NET runtimes", and both use the same .NET Core libraries (CoreFX), so your program behaves the same in Debug and Release mode. On Windows 8/8.1 there was a separate .NET profile built for Windows Store apps; UWP now runs entirely on .NET Core with access to the new CoreFX libraries, which gives identical behavior across debug and release.
Submitting to the Store. When you create an Appx package ready to upload to the Windows Store, the appx contains MSIL, and the Windows Store compiles your package with .NET Native. This eases a deployment concern about apps that use the .NET Core FX: what happens if a security problem is found in .NET? Previously the fix went out through Windows Update to the system-wide .NET runtime; now only the .NET Core inside the appx package needs to be fixed.

Tips for developing with .NET Native
Test your app in Release mode. Make sure you regularly build and test your UWP app in Release mode, since Release mode uses .NET Native. If you test regularly (while developing I test about every four hours), you will catch problems early, such as the different performance characteristics of Expression.Compile. If you hit a problem while testing and want to debug it, note that Release mode is optimized, so you may want to turn off optimizations to get a better debugging experience.
.NET Native analyzer. A few .NET features are not yet supported by .NET Native, such as multidimensional arrays with more than four dimensions. The .NET Native compiler will tell you when you build, but if you would rather find out before a 30-plus-second compile, install the Microsoft.NETNative.Analyzer package, and it will warn you about unsupported constructs as you write your code.
AnyCPU is gone. Because .NET Native converts code into native machine code, AnyCPU no longer makes sense for developers. Just remember to pick x86 when deploying to your local machine or the emulator, and ARM when deploying to a Windows 10 mobile device. When you build the package for Store submission, the packaging wizard helps you produce x86/x64/ARM packages bundled together.
If you are developing a class library or PCL, you should pick "AnyCPU": it keeps things simpler, and you only need to distribute a single DLL that every kind of project can use.
The simplest approach is the Build > Configuration Manager dialog: even if the toolbar shows AnyCPU, you can still configure your UWP app to build and deploy as x86 there.
Debugging under .NET Native. Sometimes you will want to set breakpoints in, or debug, code compiled by .NET Native. It is best avoided, because debugging there is hard, and the heavy optimizations .NET Native applies make it even harder. If you must, it is better to use Debug mode but enable the .NET Native option in the project settings: for C# projects it is under Project > Properties > Compile with .NET Native tool chain; for VB projects it is under My Project > Build > Advanced.
Customizing .NET Native optimization. Sometimes code uses reflection, and .NET Native's optimizer may strip out the code it needs; this is something you can control. See these blog posts for details:
Expression.Compile. I want to discuss this one separately because Newtonsoft's Json.NET uses it heavily, so it affects many developers. On the traditional CLR, an expression tree is compiled into MSIL at run time, and the JIT turns that into native code. That is impossible under .NET Native, which interprets the expression tree instead. With Json.NET you may observe the effect of this change: faster startup (no need to spin up the CLR's expression-tree compilation), but slower serialization of large amounts of data, so do measure how the change affects your app. In my experience, it cut about 200 ms from my app's startup time.
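A minimal illustration of the API in question: on CoreCLR the Compile call below emits MSIL that the JIT turns into native code, while under .NET Native the same call returns a delegate that interprets the tree. The result is identical either way; only the performance profile differs.

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    // Build the expression tree for x => x * x and turn it into a callable delegate.
    public static Func<int, int> MakeSquare()
    {
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        Expression<Func<int, int>> tree =
            Expression.Lambda<Func<int, int>>(Expression.Multiply(x, x), x);
        return tree.Compile();   // JIT-compiled on CoreCLR, interpreted under .NET Native
    }

    static void Main()
    {
        Func<int, int> square = MakeSquare();
        Console.WriteLine(square(7));   // 49
    }
}
```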
F#. F# DLLs cannot be used in UWP apps: they do not yet support .NET Native. We will improve this; if it matters to you, please do not hesitate to let us know.
Getting support. If you run into problems with .NET Native, you are welcome to email firstname.lastname@example.org for help.

Conclusion
This release of the Universal Windows Platform creates an important opportunity for .NET developers: UWP apps let you reach more users, and you can build those apps with the latest .NET technology.
This article was translated from Universal Windows apps in .NET.
The wait is over. Project Islandwood, now renamed Windows Bridge for iOS, has finally been released.
Windows Bridge for iOS
Simply put, it lets you develop Windows apps using Objective-C. Its features include the following:
First, you naturally need Windows 10 as your development environment, and it must have developer mode enabled.
Let's start with an already-converted sample. Open the WOCCatalog sample in the Samples folder inside the extracted Windows Bridge for iOS SDK, then double-click WOCCatalog-WinStore10.sln to open the project in Visual Studio.
あとは、WOCCatalog （Universal Windows) を右クリックして、スタートアッププロジェクトに設定して実行すればOK。ユニバーサルアプリ上で iOSのコントロールが再現されている。
Installing it as a Visual Studio extension
The sample was already converted, so next let's add the Bridge for iOS functionality to Visual Studio as an extension.
The extension is in the Bin folder of the extracted archive, but it was actually built to target Visual Studio 2012 Professional and cannot be installed as-is, so we need to edit its manifest.
After that, double-click objc-syntax-highlighting.vsix to run the installer.
Apologies: I don't have an Xcode project, or for that matter a Mac environment, at hand, so this is as far as I can go for now.
Any organization, architect or technology decision maker that wants to set up a massively scalable, distributed, event-driven messaging platform with multiple producers and consumers needs to know the relative pros and cons of Azure Event Hub and Kafka. This article assumes that you know the basics of both (what they are) and will focus on comparing them, both qualitatively and quantitatively.
The slightly bigger question: PaaS versus IaaS
Before we jump into the relative merits and demerits of these two solutions, it is important to note that Azure Event Hub is a managed service, i.e., PaaS, whereas when you run Kafka on Azure, you must manage your own Linux VMs, making it an IaaS solution. IaaS means more work, more control and more integration. It may well be worth it for your specific situation, but it is important to know when it is not.
Kafka scales very well - LinkedIn handles half a trillion events per day spread across multiple datacenters. Kafka, however, is not a managed service. You can install Kafka and create a cluster of Linux VMs on Azure, essentially choosing IaaS: you have to manage your own servers, upgrades, packages and versions. Though this is more convenient than running your own datacenters - Azure offers low-click solutions for High Availability, Disaster Recovery, Load Balancing, Routing and Scaling Out - you still have to build your own systems to pipe real-time streaming data to offline consumers like Stream Analytics or Hadoop. You still have to manage storage (see how to handle storage when you run Kafka on Azure later in this article) and spend cycles maintaining a system instead of building functionality with it.
On the other hand, PaaS frees you from all such constraints. As a general rule, if a PaaS solution meets your functional needs, and its throughput, scalability and performance meet your current and future SLAs, it is, in my opinion, the more convenient choice. In the case of Azure Event Hub, several other factors can come into play - integration with legacy systems, choice of language/technology for developing producers and consumers, message size, security, protocol support, availability in your region, etc. - and we will look at these later in this article. The PaaS-versus-IaaS debate is bigger and older than Kafka versus Event Hub, so let us not spend much time there, as it can distract us from the comparison at hand.
Kafka and Azure Event Hub are very similar in terms of what they do. Both are designed to handle very large quantities of small, event-driven messages. In fact, almost every common IT use case around us is driven by streams of events, each producing some data which is, in turn, consumed by downstream systems. Event streams in a retail system contain orders, shipments, returns, etc. A bank account processes debits and credits. Financial systems process stock ticks, orders, etc. Web sites have streams of page views, clicks, and so on. Every HTTP request a web server receives is an event. Even an RDBMS database is the result of a series of transactions, which are events. Any business can be thought of as a collection of producer systems generating events (hence data), and a collection of consumer systems which use that data by transforming it (creating derived streams of data), changing their state, or storing the data for offline/batch processing (e.g., reporting or mining). There are also use cases where the data being transmitted must be consumed in real time: error monitoring systems, security and fraud analysis systems, etc. need to consume event data in near real time.
Both Kafka and Azure Event Hub provide a highway for that real-time data. Both provide mechanisms to ingest data from producers (systems generating data, aka publishers) and forward it to consumers (systems interested in that data, aka subscribers). Both have solved the inherent complexities of acting as a messaging broker in elegant ways.
Both have solved the reliability and messaging-semantics problems (what to do with undelivered messages, node failures, network failures, missed acknowledgments, etc.) robustly by introducing non-optional persistence (storage). Consumers read from an offset into a persisted stream of data, which can be thought of as a log file with lines being appended to it. Gone are the complexities of the COMET generation: a consumer can now actually rewind or play back/replay messages easily.
In fact, persistent replicated messaging is such a giant leap in messaging architecture that it may be worthwhile to point out a few side effects:
Per-message acknowledgments have disappeared, and with them the host of complexity that accompanies them (a consumer can still acknowledge a particular offset, stating "I have read all messages up to this offset")
Both have benefited from arguably the most important by-product of this design: ordered delivery (used to be notoriously difficult in COMET days)
The problem of mismatched consumer speed has disappeared. A slow consumer can peacefully co-exist with a fast consumer now
The need for difficult messaging semantics like delayed delivery, re-delivery, etc. has disappeared. Now it is up to the consumer to read whatever message whenever; the onus has shifted from broker to consumer
The holy grail of message delivery guarantees, at-least-once, is the new reality: both Kafka and Azure Event Hub provide it. You still have to make your consumers and downstream systems idempotent, so that recovering from a failure and processing the same message twice does not upset them too much, but that has always been the case
Both had to deal with the potential problem of disk storage slowing performance down, and both came up with the simple but elegant solution of using only sequential file I/O, linear scans and big arrays instead of small bursts - tricks that make disk I/O, in some cases, faster than even random-access memory writes!
Both have solved the scalability problem by using partitions which are dedicated for consumers. By using more partitions, both the number of consumers as well as the throughput can be scaled and concurrency increased. Messages with the same key are sent to the same partition in both Kafka and Event Hub
Both have solved the problem of extensibility on the consumer side by using consumer groups
Both have solved DR by using Replication, but with varying degrees of robustness. Azure Event Hub applies Replication on the Azure Storage Unit (where the messages are stored) - hence we can apply features like Geo-Redundant Storage and make replication across regions a single click solution. Kafka, however, applies Replication only between its cluster nodes and has no easy in-built way to replicate across regions
Both offer easy configuration to adjust retention policy: how long messages will be retained in storage. With Azure Event Hub, multi-tenancy dictates enforcement of limits - with the Standard Tier service, Azure Event Hub allows message retention for up to 30 days (default is 24 hours). With Kafka, you specify these limits in configuration files, and you can specify different retention policies for different topics, with no set maximum
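The shared model described above can be made concrete with a toy sketch (my own illustration, not either product's API): messages hashed by key into partitions, each partition an append-only log that consumers read from their own offsets.

```csharp
using System;
using System.Collections.Generic;

// Toy broker: key-hashed partitions, each an append-only log.
// Consumers track their own offsets, so slow and fast readers coexist.
class TinyLog
{
    private readonly List<string>[] partitions;

    public TinyLog(int partitionCount)
    {
        partitions = new List<string>[partitionCount];
        for (int i = 0; i < partitionCount; i++)
            partitions[i] = new List<string>();
    }

    // Messages with the same key always land in the same partition,
    // which is what gives both Kafka and Event Hub per-key ordering.
    public int Append(string key, string message)
    {
        int p = (key.GetHashCode() & int.MaxValue) % partitions.Length;
        partitions[p].Add(message);
        return p;
    }

    // A consumer reads from any offset; re-reading from an older offset
    // is replay, and the broker keeps no per-consumer state at all.
    public IEnumerable<string> ReadFrom(int partition, int offset)
    {
        for (int i = offset; i < partitions[partition].Count; i++)
            yield return partitions[partition][i];
    }
}
```

Acknowledgment collapses to "remember the offset I have reached", and replay is just calling ReadFrom with an older offset - the side effects listed above all fall out of this one design choice.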
The biggest difference is, of course, that Azure Event Hub is a multi-tenant managed service while Kafka is not. That changes the playing field so much that some people I know argue we should not be comparing them at all! It does not make them apples and oranges - maybe Hilton and Toll Brothers. Both Hilton (the reputed hotel chain) and Toll Brothers (the well-known home builders) can provide you a place to stay with all utilities and appliances, but one of them builds with multi-tenancy in mind and the other does not. That makes a huge difference in optimization focus areas.
As a customer, what would you love about living in a Toll Brothers home? The degree of control you can exercise, of course! You could paint it pink if you wanted (if you can win the battle with the HOA that is). You could get the most advanced refrigerator that money can buy. You could hire as many chefs as you wanted to make you as much food as you want - without fear of being throttled by room service because you are stepping on the resources kept aside for other tenants. You could design your kitchen and its appliances the way you want - like have a water line run to the refrigerator nook so that ice makers and filtered drinking water faucets could work in it.
As a customer, what would you love about Hilton (the multi-tenant solution)? The service, of course! Fresh towels delivered to your doorstep. Rooms cleaned. Dishes washed. Then there are the subtle ones: if for some reason your particular suite is unusable, they can immediately put you up in another, and a visiting friend will have no trouble finding you in either suite because they will ask the front desk first (High Availability built in). If the refrigerator in the suite decides to quit with perishable food in it, they can bring another refrigerator before the food is unusable (admittedly a loose analogy for Disaster Recovery, which is hard to mirror in real life). If you invite 20 guests over, you can pick up the phone and rent 5 more equally or more comfortable suites, then stop paying for them when the weekend is over: scalability built right in (both scale out and scale up).
At the end of the day, you need to ask yourself: what degree of control do I need? The analogy between Hilton and Azure Event Hub is not perfect because Hilton is always more costly than living at your own home. However, Azure Event Hub costs you less than $100 for 2.5 billion 1 KB events per month. Kafka, on the other hand, is open source and free - but the machines it runs on are not. The people you hire to maintain those machines are not. It almost always costs you more than Azure Event Hub.
Enough of that - we got dragged back into the PaaS and IaaS debate. Let us focus on some more technical differences.
On-premises support: Azure Event Hub cannot be installed and used on-premises (unlike its close cousins, the Service Bus Queues and Topics, which can be installed on-premises when you install Azure Pack for Windows Server). Kafka, on the other hand, can be installed on-premises. Though this article is about the differences between Azure Event Hub and Kafka running on Azure, I thought that I should point this one out
Protocol Support: Kafka has HTTP REST based clients, but it does not support AMQP. Azure Event Hub supports AMQP
Coding paradigms differ slightly (as is to be expected). In my experience, only the Java libraries for Kafka are well maintained and exhaustive; support in other languages exists, but is not as good. This page lists all the Kafka clients written in various languages.
Kafka is written in Scala, so whenever you have to track the source code down beneath the Java API wrappers, things can get very difficult if you are not familiar with Scala. If you are building something where you need answers to non-standard questions, such as whether you can cast a given object to a given data type, and you need to look at the Scala code, it can hurt productivity.
Azure Event Hub, apart from being fully supported by C# and .NET, can also use the Java QPID JMS libraries as a client (because of its AMQP support): see here.
As QPID also supports C and Python, building clients in those languages is easy as well.
REST support on both sides means we can build clients in any language, but Kafka prefers Java as its API language.
Disaster Recovery (DR) - Azure Event Hub applies Replication on the Azure Storage Unit (where the messages are stored) - hence we can apply features like Geo-Redundant Storage and make replication across regions a single click solution.
Kafka, however, applies Replication only between its cluster nodes and has no easy in-built way to replicate across regions. There are solutions like the GO Mirror Maker Tool which are additional software layers needed to achieve inter-region replication, adding more complexity and integration points. The advice from Kafka creators for on-premises installation is to keep clusters local to datacenters and mirror between datacenters.
Kafka uses Zookeeper for configuration management, and ZK is known for its lack of proper multi-region support: it performs writes (updates) poorly when there is higher latency between hosts, and being spread across regions means higher latency.
For running Kafka on Azure VMs, adding all Kafka instances to an Availability Set is enough to ensure you do not lose messages, effectively eliminating (to a large degree) the need for Geo-Redundant Storage Units: two VMs in the same Availability Set are guaranteed to be in different fault and update domains, so Kafka messages should be safe from disasters small enough not to impact the entire region.
High Availability (HA) and Fault Tolerance - Until the whole region or site goes down, Kafka is highly available inside a local cluster because it tolerates the failure of multiple nodes in the cluster very well. Again, Kafka's dependency on Zookeeper to manage the configuration of topics and partitions means HA is affected across globally distributed regions.
Azure Event Hub is highly available under the umbrella Azure guarantee of HA. Under the hood, Event Hub servers use replication and Availability Sets to achieve HA and Fault Tolerance.
It may be worth mentioning in this regard that Kafka's in-cluster Fault Tolerance allows zero downtime upgrades where you can rotate deployments and upgrade one node at a time without taking the entire cluster down. Azure Event Hub, it goes without saying, has the same feature with the exception that you do not have to worry about upgrades and versions!
Scalability - Kafka's ability to shard partitions, and to increase both (a) the partition count per topic and (b) the number of downstream consumer threads, provides the flexibility to increase throughput when desired, making it highly scalable.
Also, the Kafka team, as of this article being written, is working on zero downtime scalability - using the Apache Mesos framework. This framework makes Kafka elastic - which means Kafka running on Mesos framework can be expanded (scaled) without downtime. Read more on this initiative here.
Azure Event Hub, being hosted on Azure, scales automatically depending on the number of throughput units you purchase; both storage and concurrency scale as needed. Unlike Event Hub, Kafka will probably never be able to scale on storage - but again, these are exactly the kind of points that come back to Kafka being IaaS and Event Hub being PaaS.
Throttling - With Azure Event Hub, you purchase capacity in terms of TUs (Throughput Units), where 1 TU entitles you to ingest 1000 events per second (or 1 MB per second, whichever is higher) and egress twice that. When you hit your limit, Azure throttles you evenly across all your senders and receivers (.NET clients will receive a ServerBusyException). Remember that room-service analogy? You can purchase up to 20 TUs from the Azure portal; more than 20 can be purchased by opening a support ticket.
If you run Kafka on Azure, there is no question of being throttled. Theoretically, you could reach the limit where the underlying storage account is throttled, but with event messaging, where each message is a few kilobytes in size and retention policies are in place, that is practically impossible to reach.
Kafka itself lacks any throttling mechanism or protection from abuse. It is not multi-tenant by nature, so I guess such features must be pretty low in the pecking order of its roadmap.
Security - Kafka lacks any kind of security mechanism as of today. A ticket to implement basic security features is currently open (as of August 2015: https://issues.apache.org/jira/browse/KAFKA-1682). The goal of this ticket is to encrypt data on the wire (not at rest), authenticate clients when they try to connect to the brokers, and support role-based authorization and ACLs.
Therefore, in IoT scenarios where (a) each publishing device must have its own identity and (b) messages must not be consumed by non-intended receivers, Kafka developers will have to additionally build in complicated security instruments possibly built into the messages themselves. This will result in additional complexity and redundant bandwidth consumption.
Azure Event Hub, on the other hand, is very secure - it uses SAS tokens just like other Service Bus components. Each publishing device (think of an IoT scenario) is assigned a unique token. On the consuming side, a client can create a consumer group if the request to create the consumer group is accompanied by a token that grants manage privileges for either the Event Hub, or for the namespace to which the Event Hub belongs. Also, a client is allowed to consume data from a consumer group if the receive request is accompanied by a token that grants receive rights either on that consumer group, or the Event Hub, or the namespace to which the Event Hub belongs.
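To make the per-device identity model concrete, here is a sketch of a sending client using the 2015-era WindowsAzure.ServiceBus SDK; the namespace, hub name, rule name and key are placeholders, and in a real IoT deployment each device would be handed a narrowly scoped SAS token rather than this shared key:

```csharp
using System;
using System.Text;
using Microsoft.ServiceBus.Messaging;   // NuGet package: WindowsAzure.ServiceBus

class Sender
{
    static void Main()
    {
        // Placeholder connection string; "send-rule" would be a SAS policy
        // granting Send rights only, scoped as narrowly as possible.
        string connectionString =
            "Endpoint=sb://mynamespace.servicebus.windows.net/;" +
            "SharedAccessKeyName=send-rule;SharedAccessKey=<key>;EntityPath=myhub";

        var client = EventHubClient.CreateFromConnectionString(connectionString);
        client.Send(new EventData(Encoding.UTF8.GetBytes("{\"temp\":21.5}")));
    }
}
```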
The current version of Service Bus does not support SAS rules for individual subscriptions. SAS support will be added for this in the future
Integration with Stream Analytics - Stream Analytics is Microsoft's solution for real-time event processing. It can be employed to enable Complex Event Processing (CEP) scenarios (in combination with Event Hubs), allowing multiple inputs to be processed in real time to generate meaningful analytics. Technologies like Esper and Apache Storm provide similar capabilities, but Stream Analytics gives you out-of-the-box integration with Event Hub, SQL Databases and Storage, which makes it very compelling for quick development with all these components. Moreover, it exposes a query language very similar to SQL syntax, so the learning curve is minimal. In fact, once you have created a job, you can simply use the Azure Management portal to develop queries and run jobs, eliminating the need for coding in many use cases.
Once you integrate Azure Event Hub with Azure Stream Analytics, the next logical step is to use Azure Machine Learning to take intelligent decisions based on that Analytics data.
With Kafka, you do not get such ease of integration. It is, of course, possible to build it yourself, but it is time consuming.
Message Size: Azure Event Hub imposes an upper limit of 256 KB on message size, a policy that of course arises from its multi-tenant nature. Kafka has no such limitation, but its performance sweet spot is around a 10 KB message size. Its default maximum is 1 MB.
In this regard, it is worth mentioning that with both Kafka and Azure Event Hub, you can compress the actual message body and reduce its size using standard compression algorithms (Gzip, Snappy etc.).
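Compressing the body before publishing can be sketched in a few lines of Python with gzip from the standard library; the telemetry payload here is invented.

```python
import gzip
import json

# A repetitive telemetry payload - the kind of message that compresses well.
message = json.dumps(
    {"deviceId": "sensor-42", "readings": [21.5] * 200}).encode("utf-8")

compressed = gzip.compress(message)
restored = gzip.decompress(compressed)

assert restored == message             # lossless round trip
assert len(compressed) < len(message)  # smaller on the wire
```

The consumer simply decompresses after receiving; neither broker needs to know the body is compressed, since both treat the payload as opaque bytes.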
Another word of caution in this regard: if you are building an event-driven messaging system, the need to send a massive XML or JSON file as the message usually indicates poor design. Therefore, when comparing these two technologies on message size, we should keep this in mind - a well-designed system is going to exchange small messages, each less than 10 KB in size.
Storage needs: Kafka writes every message to broker disk, necessitating the attachment of large disks to every VM in the Kafka cluster (see here on how to attach a data disk to a Linux VM). See Disaster Recovery above to understand how stored messages can be affected in case of disaster.
For Azure Event Hub, you need to configure Azure Storage explicitly from the publisher before you can send any messages. 1 TU gives you 84 GB of event storage. If you cross that limit, storage is charged at the standard rate for blob storage. This overage does not apply within the first 24-hour period (if your message retention policy is 24 hours, you can store as much as you want without being charged for storage).
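The accounting above can be sketched as a small calculation (assuming, per the figures in this post, 84 GB of included storage per TU; the TU counts are examples):

```python
def included_storage_gb(throughput_units):
    # Each TU includes 84 GB of event storage.
    return throughput_units * 84

def storage_overage_gb(throughput_units, stored_gb):
    # Storage beyond the included amount is billed at standard blob rates.
    return max(0, stored_gb - included_storage_gb(throughput_units))

print(storage_overage_gb(1, 100))  # 16 GB billed at blob storage rates
print(storage_overage_gb(2, 100))  # 0 - within the included 168 GB
```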
Again, a design pointer here: a messaging system should not double as a storage system. The current breed of persistent replicated messaging systems have storage built in for messaging robustness, not for long-term retention of data. We have databases, Hadoop, and NoSQL solutions for that - and all of these could be consumers of the Event Hub you use, letting you erase messages quickly while still retaining their value.
Pricing and Availability: For running Kafka on VMs, you need to know this: http://azure.microsoft.com/en-us/pricing/details/virtual-machines/
For using Azure Event Hub, you need to know this (has link to pricing page at the top): https://azure.microsoft.com/en-us/documentation/articles/event-hubs-availability-and-support-faq/
Performance: Now that we are down to comparing performance, I must go back to my managed multi-tenant service pitch first. You need to know that you are not comparing apples to apples. You can run a massive test on Kafka, but you cannot run it on Azure Event Hub: you will be throttled!
There are so many throughput performance numbers on Apache Kafka out there that I did not want to set up yet another test and offer more of the same thing. However, in order to compare Azure Event Hub and Kafka effectively, it is important to note that if you purchase 20 TUs of Event Hub, you will get 20,000 messages ingested per second (or 20 MB/s of ingress, whichever is larger). With Kafka, this comparison concludes that a single node with a single thread achieves around 2,550 messages/second, and 25 sending/receiving threads on 4 nodes achieve 30K messages/second.
That makes the performance comparable to me, which is very impressive for Event Hub, as it is still a multi-tenant solution, meaning each and every tenant is getting that performance out of it! This, however, needs to be proven, and I have not run simultaneous multi-tenant tests.
A point to note about the famous Jay Kreps test is that the message size used there is 100 bytes. Azure Event Hub guarantees 20,000 messages/second or 20 MB/s of ingress to someone who has purchased 20 TUs: that tells us that the expected message size is about 1 KB. That is a 10x difference in message size.
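The arithmetic behind that implied message size is simple (1,000 messages/s and 1 MB/s of ingress per TU):

```python
tus = 20
messages_per_second = tus * 1000              # 20,000 msg/s
ingress_bytes_per_second = tus * 1024 * 1024  # 20 MB/s

# Bytes of guaranteed ingress divided by guaranteed message rate.
implied_message_size = ingress_bytes_per_second // messages_per_second
print(implied_message_size)  # 1048 bytes - about 1 KB per message
```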
However, I did run a single-tenant test, and I tested something different: how long does a message take to reach the consumer? This approach provides a new angle on performance. Most test results published on the internet deal with ingestion throughput, so I wanted to test something new and different.
Before discussing the results, let me explain an inherent inequality in these tests. The latency numbers here are of end-to-end message publishing and consumption. While in the case of Event Hub the three actors (publisher, highway, and consumer) were separated on the network, in the case of Kafka all three were on the same Linux Azure VM (D4, 8 cores).
With that, let us compare the results:
In spite of the fact that the Azure Event Hub end-to-end test involved multiple network hops, the latency was within a few milliseconds of Kafka (whereas the messages were traveling within the boundaries of the same machine in case of Kafka).
In essence, what I have measured shows no difference between the two. I am more convinced that Event Hub, in spite of being a managed service, provides a similar degree of performance compared to Kafka. If you add all the other factors to the equation, I would choose Azure Event Hub over running Kafka either on Azure or my own hardware.
Kafka and Azure Event Hub are not the only players in the persistent replicated message queues space. Here are a few others:
Mongo - MongoDB has certain features (like simple replication setup and document-level atomic operations) that let you build a persisted, replicated messaging infrastructure on top of it. It is not highly scalable, but if you are already using MongoDB, you can use this without needing to worry about a separate messaging cluster. However, like Kafka, Mongo lacks inter-region replication ease.
SQS - Amazon has a managed queue-as-a-service, SQS (Simple Queue Service), providing an at-least-once delivery guarantee. It scales pretty well, but supports only a handful of messaging operations.
RabbitMQ - Provides very strong persistence guarantees, but performance is mediocre. Rabbit uses replication and partitions; it supports AMQP and is very popular. Like Azure Event Hub, RabbitMQ has a web-based console (in my opinion this is a big contributor to its popularity). It is possible to build a globally distributed system where replication happens across regions. However, replication is synchronous-only, making it slow - but the designers traded some performance for certain guarantees, so this is as designed.
ActiveMQ - Unlike Rabbit, Active has both synchronous and asynchronous replication, and is a good choice if you are married to the JMS APIs. Performance-wise, Active is better than Rabbit, but not by much.
HornetQ - Very good performance, but Hornet has open replication issues, and under certain circumstances (a certain order of node deaths), data may get corrupted across nodes. A very good choice if fault tolerance is not the highest priority. It has a rich messaging interface and set of routing options.
ZeroMQ - ZeroMQ is not a real message queue, nor is it replicated or persistent. Though it does not belong to this elite group at face value, it is possible to build such a system using ZeroMQ - but that would be a lot of work!
In my opinion, it can safely be said that Azure Event Hub provides a better out-of-the-box solution for a durable, fault-tolerant, distributed, persistent, replicating messaging framework for most use cases. Is it practical to build a LinkedIn on top of it? Probably not, as that would be costly: you would need to buy too many TUs. At 5 million events per second, only a specialized data center instance could handle that volume. Technology-wise, the underlying Event Hub implementation would be able to handle it if deployed in a dedicated data center, but Event Hub is not meant for that kind of use case.
I will be happy to update any particulars if you leave comments!
A new ZenFone is out - the Laser. It looks like it will also be offered by Rakuten Mobile and NifMo.
This time the CPU is again a Snapdragon and the screen is a 5-inch 720p panel - a fairly standard spec. Looking at it this way, it is roughly on par with the MADOSMA's spec. Storage and RAM usage differ between operating systems, so those don't concern me much. It is dual SIM, but is that hard to use in Japan? Perhaps one SIM for voice and one for data?

MADOSMA vs. ZenFone 2 Laser:
- Platform: Windows Phone 8.1 / Android 5.0
- Dimensions: 70.4 x 142.8 x 8.4 mm / 71.5 x 143.7 x 3.5-10.5 mm
- Weight: 125 g / 145 g
- CPU: Snapdragon 410 / Snapdragon 410
- Memory: 1 GB / 2 GB
- Storage: 8 GB / 8 GB or 16 GB
- Memory slot: microSD (max 64 GB) / microSD (max 128 GB)
- Wi-Fi: 802.11 b/g/n / 802.11 b/g/n
- Bluetooth: v4.0 / v4.0
- SIM: Micro SIM / Dual Micro SIM
- JP LTE bands: 1, 3, 19 / ?
- Display: 5-inch 1280x720 IPS / 5.0-inch 1280x720 IPS
- Camera (rear/front): 8 MP / 2 MP vs. 13 MP / 5 MP
- Price (incl. tax): from 28,453 yen / 24,624 yen (8 GB model)
Today we're visiting London, UK. In previous posts, I've looked at data from Seattle and Chicago. Now we're headed across the pond to see data from the UK. Specifically, the Greater London Authority has a myriad of data available at the London Datastore. The datasets are generally covered by the UK Open Government Licence (OGL), but some have specific attribution and license requirements. So, let's get this out of the way - the data used in this blog post "Contains public sector information licensed under the Open Government Licence v2.0."
This treasure trove of information invites a lot of visualizations. But where should I begin?
Glad you asked! Turns out there's a catalog of all the data feeds available from the City of London in a handy CSV format. The data set has a really good set of attributes including direct links to datasets, attributions, etc. While there is a search feature on the website, it's somehow not as interactive as I'd like. So let's take a look at how we can sift through London's datasets using Power BI Desktop.
Download: Power BI Desktop if you don't already have it
More data, updated more often
I started by looking just to see how many data sets London publishes and how that has changed over time. Here's a great view that shows London is publishing or updating more data sets today than a year ago. You can see there are 602 datasets we could analyze - a cornucopia of data! You can see that the number of data sets updated in 2015 is already on pace to exceed 2014. So not only is London publishing data, but it's keeping the data updated as the months go by.
Loading the data
Loading the CSV file is very easy in Power BI Desktop. I just clicked Get Data and selected CSV. The basic CSV was loaded correctly with all the data types correctly detected. There were a few things I spotted that I wanted to do to make the data more useful. To start, there's a URL field - I want it to be clickable when used in the report, so I changed the Data Category for the URL field to "Web URL". Then there's the tag field. It's a single comma-separated field, and I wanted to see which were the popular tags. To do this, I needed first to isolate each tag value and then ensure I could slice and dice the data feeds based on the tags. Getting there required two additional tables. The first provided the unique tag values. The second related the tags to the IDs. You'll see in the Power BI Desktop file that I created relationships across the detail table and the two new tables to ensure I could filter the values. Importantly, I used the ID column from the IdToTagsMap table in the main table for tags to force the use of the relationships during filtering. You can see why this is needed in the image below, where the IdToTagsMap table is on the 'many' side of the relationships from the Datasets and Tags tables.
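Outside Power BI, the same reshaping of the comma-separated tag field into two helper tables can be sketched in a few lines of Python; the rows here are invented examples.

```python
# Each dataset row carries a single comma-separated tag field.
datasets = [
    (1, "transport, environment"),
    (2, "transport, demographics"),
]

# Table 1: map each dataset ID to each of its individual tags.
id_to_tag = [
    (dataset_id, tag.strip())
    for dataset_id, tags in datasets
    for tag in tags.split(",")
]

# Table 2: the unique tag values, usable as a slicer/filter.
unique_tags = sorted({tag for _, tag in id_to_tag})

print(unique_tags)  # ['demographics', 'environment', 'transport']
```

In the Power BI model, the mapping table plays the 'many' side of both relationships, exactly as in the image referenced above.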
A searchable index of London datasets
When you have so many datasets, it's hard to find all of them. So the sheet below shows how I created an interactive index of all the datasets published for London. You can filter on Topics, by Tag, or by update frequency. Since it's an interactive sheet, you can use any of the Fields to build additional filters or modify the visualizations. One thing I immediately noticed is there's a ton of 'tags'. Many datasets have unique tag values that are actually compound names combining other tags in a single long string. So I filtered out the tags with few datasets. I set the filter on the tags visualization so I could show all the datasets in the other charts. This single view would make any student doing homework and looking for data from London really happy.
Drilling into Topics and Publishers
The view below shows the top topics by number of datasets. You can clearly see where the UK government and the City of London are spending their 'data dollars' err.. 'data pounds' :). It's clear to see that everyday topics like demographics, employment and skills, environment, and transport are of primary importance. But the 'transparency' topic is an interesting find - it should be a useful area for citizens to dive into. You can also see that the 3 biggest publishers, by number of datasets, are the Greater London Authority (GLA), the Office for National Statistics (ONS), and Transport for London (TfL).
Slicing and Dicing
I built the view below to help me understand the datasets more. I can clearly see which data sets apply to which geographic area - I especially liked the 'smallest geography' attribute since it makes finding relevant data very easy. The 'bounding box' attribute was similar so you can understand the applicability of the dataset at a glance. Lastly, the 'date from' and 'date to' attributes help you find interesting historical reference data. For example, one of the data sets, Global City Population Estimates, goes all the way back to January of 1950! It might be the subject of a future blog post.
Building the initial index of London datasets was accomplished in a couple of minutes. I was able to slice and dice quickly. Once I got into features like analyzing by tag I had to put a little more work into it, mostly because I don't do that kind of thing very often. It was simple to do using the built-in UI. Having this index, that is easily accessible, searchable, and filterable with drill through to each of the pages hosting the data feeds is remarkably useful I think. Would really like to hear from other folks who have compiled data sets like this one.
One common question I get is how to do date filtering in Power BI. In a previous post, I showed how to make filters that show if a values occurred in the last 30 days, in the last month, in the last 12 months, or in the current year.
This post shows how to show the latest date. You can read on below, or grab the data and MaxDateExample.pbix files to see it in action yourself. If you don't have Power BI Desktop yet, you can get it here.
Step 1: Import your data set.
In this case I have a simple table called Table1 with a column called Date. Just click Get Data and select the Excel workbook (Book1.xlsx). Select Table1 and press the Load button. The data will load and you'll see the fields at the right of the screen; you can drag them to the canvas to create a table or chart.
Step 2: Create a measure to return the Latest Date
Select the table in the fields list and press the 'New Measure' button in the ribbon. Here's the formula to use:
MaxDate = CALCULATE(MAX(Table1[Date]), ALL(Table1))
I wanted to mention why I'm using CALCULATE. In Power BI, every time you use data, it is implicitly grouped by the categories that are used. A measure used in a group context computes its value only within that group by default. This makes sense when trying to make measures that work at any level of grouping/hierarchy. In my "latest" case, I want to get the latest value regardless of grouping. To do that I'm using the CALCULATE(..., ALL(...)) pattern. It's a little confusing since ALL(...) is called a 'filter' but it's working on grouping... yikes. You can think of a group as a kind of filter that selects only rows that match the group label... so it all works out.
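The grouping behavior is easier to see outside DAX. Here is the same idea sketched in Python with invented rows: a plain MAX respects the group, while the CALCULATE(MAX(...), ALL(...)) analog ignores it.

```python
from datetime import date

rows = [
    {"category": "A", "date": date(2015, 1, 5)},
    {"category": "A", "date": date(2015, 3, 1)},
    {"category": "B", "date": date(2015, 2, 10)},
]

# Plain MAX in a group context: evaluated per category.
per_group_max = {}
for row in rows:
    cat = row["category"]
    per_group_max[cat] = max(per_group_max.get(cat, row["date"]), row["date"])

# CALCULATE(MAX(...), ALL(...)) analog: ALL removes the grouping filter,
# so every group sees the same global maximum.
global_max = max(row["date"] for row in rows)

print(per_group_max)  # {'A': date(2015, 3, 1), 'B': date(2015, 2, 10)}
print(global_max)     # date(2015, 3, 1)
```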
Step 3: Add a column to the table that tells you which rows have the latest date
Select the table in the fields list and press the "New Column" button in the ribbon. Here's the formula to use:
Is Latest = if(Table1[Date] = Table1[MaxDate], "Latest", "")
This checks if the value of the "Date" column in each row exactly matches the MaxDate value computed for all rows and writes the value "Latest" if it does. Note that if you're using date time, this includes hours, minutes, seconds in the calculation; if you just want to know if it's on a specific day, then you'd adjust the formula.
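To see the difference the time component makes, here is a small Python analogy with made-up timestamps:

```python
from datetime import datetime

max_date = datetime(2015, 8, 6, 17, 30)  # latest value, with a time component
row_date = datetime(2015, 8, 6, 9, 0)    # same day, different time

exact_match = row_date == max_date             # False: hours and minutes differ
same_day = row_date.date() == max_date.date()  # True: compare the day only

print(exact_match, same_day)
```

The DAX formula above behaves like the exact comparison; a day-level version would need to truncate both sides to the date first.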
Step 4: Use the "Is Latest" column in filters or slicers
You can now drag the "Is Latest" column to the filter area at the visualization or page level to show the value you want.
As a bonus, Power BI now supports adding date and text measures to the canvas, so you can even put MaxDate in your report without jumping through hoops. This is useful if you'd like to show the "last updated time" for your queries.
This article features a sample dashboard that is configured to randomly scroll through and select an object from the list of objects available in a State Widget on a preconfigured interval. Together with other contextual widgets, this capability can make a dashboard come to life and can be extended for scenarios where using “automated dashboards” is the more suitable choice to display monitoring data, like in monitoring command centres etc.
Without this feature, a dashboard with a state widget and contextual widgets would require the user to move the selector manually and would probably be more suitable for hands-on analysis or troubleshooting work rather than being used as a command centre dashboard.
Thanks to some guidance from my colleague Ryan Benson, I learned that the State Widget’s DataGrid component exposes the selected items collection, and respects other components changing this property programmatically. If a new component which updates the selected items on an interval is written, the auto scroll and select scenario can be achieved. However the only issue is that the visual order of the items in the grid is sorted internally and not exposed, so if the sorting on any of the columns is enabled, the programmatic selection would jump around.
The sample health dashboard consists of a State Widget scoped to the Windows Server Operating System class, a header Label Widget, a Detail Widget, a contextual Alert Widget, and a contextual Health Widget. The management pack containing this sample dashboard can now be downloaded from the TechNet Gallery.
Here is a logical representation of the sample summary dashboard composition:
Here is the PowerShell script used in the PowerShell Datasource component to randomly select an instance of the Windows Server Operating System class and return the selected object to the SelectedItem field of the State Widget's DataGrid component. This PowerShell script is configured to run at a 15 minute interval:
$class = Get-SCOMClass -Name Microsoft.Windows.Server.OperatingSystem
$serverOSes = Get-SCOMClassInstance -Class $class
$randomobject = Get-Random -InputObject $serverOSes
$dataObject = $ScriptContext.CreateFromObject($randomobject, "Id=Id,State=HealthState,Name=Name", $null)
$ScriptContext.ReturnCollection.Add($dataObject)
When the sample MP is imported into an OpsMgr 2012 environment, the sample health dashboard will appear at the root of the Monitoring workspace with the display name Sample AutoScrolling OS Health Dashboard:
When an object in the State Widget is selected, the Label Widget contextually displays the name of the server hosting the selected object, and the Detail Widget displays its detailed information. The Contextual Alert Widget displays all the New alerts associated with the selected object, and the Contextual Health Widget displays only the unhealthy monitors running against it, as follows:
After 15 seconds, if an object in a Healthy state is automatically selected, the following picture provides an example of what the contextual widgets on the dashboard would display:
After 15 seconds, if an object in a Critical state is automatically selected, the following picture provides an example of what the contextual widgets on the dashboard would display:
After 15 seconds, if an object in Maintenance Mode is automatically selected, the following picture provides an example of what the contextual widgets on the dashboard would display:
Thank you for your support!
Internet-facing websites frequently come under malicious attack - denial-of-service attacks (DoS, DDoS) being a common example. Today I will introduce a proactive defense you can take on Azure IaaS: creating a Network Security Group.
An Azure Network Security Group (NSG) is an Access Control List (ACL) that manages inbound and outbound network traffic. As shown in the figure below, an NSG can be bound to a VM or a VNet and configured to allow or deny inbound/outbound traffic.
Figure 1: What is an NSG
Step 1: Create the NSG

Use New-AzureNetworkSecurityGroup to create an NSG with the specified name and region:

New-AzureNetworkSecurityGroup -Name "IISNSG" -Location "East Asia" -Label "NSG for IIS"
Step 2: Bind the NSG to a VM or VNet

The following example binds the NSG to a VM. Use Get-AzureVM together with Set-AzureNetworkSecurityGroupConfig to create the association; once it completes, your VM (or VNet) is protected by the NSG.

Get-AzureVM -ServiceName "yourCloudServerName" -Name "yourVMName" | Set-AzureNetworkSecurityGroupConfig -NetworkSecurityGroupName "IISNSG" | Update-AzureVM
Step 3: Configure an NSG rule

The following example creates a rule named WEB with priority 100 (the lower the number, the higher the priority) that denies all inbound traffic from the IP address 192.168.1.100:

Get-AzureNetworkSecurityGroup -Name "IISNSG" | Set-AzureNetworkSecurityRule -Name WEB -Type Inbound -Priority 100 -Action Deny -SourceAddressPrefix '192.168.1.100/32' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' -Protocol TCP
Step 4: Verify the NSG rules

List all configured rules and check that the Name, Priority, Action, Source Address Prefix, Port Range, and other settings are correct:

Get-AzureNetworkSecurityGroup -Name iisnsg -Detailed
Figure 2: NSG settings
This new video in the 10 Minutes with Construct 2 series brings you a detailed review of the Touch plugin, which is useful for creating touch-enabled games.
If you are thinking about creating a game that is ready to be played on mobile devices, you had better learn the nuances of this plugin.
In a future video I will cover the motion detection capabilities of this Construct 2 plugin.
Practical steps to make EMS work for you.
This two day Enterprise Mobility Suite (EMS) training will provide level 200-300 instructor-led technical training complete with demos and hands on labs.
This training focuses on Microsoft’s Enterprise Mobility solutions included in the Enterprise Mobility Suite: Mobile Devices, Application & Data Management with Intune, Identity & Access Management with Azure Active Directory, and Information Protection with Azure Rights Management Services.
It will also include deep dives on setup scenarios focusing on Hybrid Identity with the Azure Active Directory Premium service, Information Protection with the Azure Rights Management service, and Unified Device Management with the Windows Intune service.
Agenda Day 1
Agenda Day 2
Cost: The registration fee for attending this training is $199 per person. You are responsible for arranging your own travel and accommodations, including all travel-related costs.
Roles: This training is ideal for individuals in the following roles: Solution Architects, pre-sales technical, and deployment roles.
Sydney Workshop Information:
Melbourne Workshop Information:
Stuart is a contrivedly acronymed Windows 10 photo editing app which I wrote during a recent Win2D app building exercise. I’m posting it here because I’m pleased with how it turned out. XAML + Win2D makes it really easy to do this sort of thing!
Where to get it
Starting with this photo from a dawn hike on Mount Rainier:
Stuart can make the colors more intense:
Or brighten the foreground rocks and trees without changing the sky:
Or we can go retro:
Or completely stylized:
(Published on behalf of Adam Wilson)
Well, that was exciting. In a little over 5 days we managed to get over 2,000 insiders running the Ping Pong app and generating over 26,000 data points per day. The team is still poring over the data, but we have been able to track down some interesting information about how notifications are delivered to your machines. I'd like to personally thank every single person who took the time to install the app on their device; your dedication to making Windows great inspires us, and the data you provided will go a long way toward making notifications more reliable.

Feedback
Before we go into the results there were a few pieces of feedback to be addressed.
It seems that engineers aren't allowed to design interfaces for a good reason, which meant the app was a little more complicated than it could have been. Here is a quick walkthrough of what the app is showing you:

Chat
This pane lists the notifications that have been sent to your device or are pending. The colors are:
Notification delayed, but not lost
There is also some information about each notification in the message box, if you are interested in seeing when the notification was received.

Push
This column lists the number of notifications received and dropped, as well as configuration options. We are aiming for 100% received, but as you can see, that doesn't always happen (in the screenshot you can see that my test machine lost a few).
Interval Type: Lets you set how often notifications are delivered to your device. We'd prefer that you leave it set to the default value of Server. This lets our server schedule follow-up notifications based on how often they are being lost on the way to your device, but there are other options if you'd like more control.

Diagnostics
This panel has some information that we thought the developers in the audience would like to see, or those who are more interested in the internals of notifications on Windows.
ClientID: The ID that the notification service uses to track your device. This is how the Microsoft servers know to send your notifications only to your phone.
Channel: This is the address of the Ping Pong app on your device. Every app that sends notifications gets one of these URIs for each device it is installed on. This URI is how an app like Facebook or WhatsApp communicates that it has a notification to be sent to a device.
Request New WNS Channel: Apps must refresh the channel or it will expire and stop allowing notifications to flow to your device. Ping Pong automatically refreshes the channel on a regular cadence, but this button allows you to force an update if the boxes above are blank.
Clear Chat History: Clears the record of past notifications from the left pane.
Force Watson (Phone Only): This is an experimental button that snuck out of the lab. We are looking at enabling Ping Pong to trigger the error reporting dialog which sends crash information to Microsoft, but currently it doesn’t work. We’ll let you know if there is something that you can do with it in a later quest.
As a side note, the engineering teams really do look at the reports that users send back. I didn’t believe the reports from my machine were doing anything until I started working here and saw how they get treated. Even on a small feature team, we spend time every week sorting through the reports and finding issues to investigate and fix.
Show Notifications: Lets you choose whether you want toasts when a notification is received. Personally, I keep it off unless it is a test machine. If the box is unchecked, the app will check for lost notifications silently in the background without showing any toast messages.

Results
This was a great chance for the team to look at the flow of notifications from a number of different regions and device configurations that are usually hard to get debug data from. A couple of interesting things jump out from glancing at the data:

Caching
You folks hammered the caching code paths on the server. In preliminary tests with Microsoft employees <5% of notifications had to be cached in the server for later delivery because the device wasn’t connected at the time. With insiders we were seeing as many as 20% of notifications being cached, as shown by the larger yellow bars in the graph below:
That is great for us to see, as it lets us verify that a lot of tricky scenarios surrounding devices connecting and disconnecting are working.

Dedication
The quest went live mid-way through the day on July 31st (Pacific Time), and the response was unbelievable. We had expected to leave the quest up for a few weeks before hitting the user cap, but we managed to surpass the target in a single weekend.
What is most amazing to me is that the quest came down on August 3rd, but as of August 6th insiders are still running the app and generating feedback. This data is hugely valuable for tracking the flow over time, and we are going to continue to analyze every single report that comes in.

Errors
This is still under investigation by the team, but we got a couple of new error states that need to be tracked down. Our best development and quality teams (in my humble opinion) are currently investigating the issues and will be pushing fixes out as quickly as possible.

One last thing!
I want to once again thank all the insiders for installing the app and taking the time to help us out. Hopefully you are enjoying Windows 10 and all the new features that it brings.
The notifications team has been working hard to improve notifications on Windows, and hopefully you're going to love the results.
When the language is set to Spanish, Portuguese, or anything other than English (US), the labels in the Retail receipt Designer are not displayed correctly. (See Fig_1 & Fig_2)
2. In the DesignList node, click ActiveX (Properties)
3. Modify the Width & Height values: for example, Width: 1080 and Height: 950
4. Save the changes
Additional steps that do no harm:
5. Compile Incremental CIL
6. Close the AX clients
7. Restart the AOS and try again.
I hope this helps...
As mail flows through Exchange SMTP transport, an Exchange transport agent can inspect the stream and modify it. Note that this only works with on-premises Exchange servers, not Exchange Online. The events it works off of fire before a message arrives in a mailbox and after it is submitted. These agents are built with custom .NET code.

Points of Interest:
Transport agents in Exchange 2013
Reading and modifying messages in the Exchange 2013 transport pipeline
Creating transport agents for Exchange 2013
How to write an Exchange 2013 transport agent
Understanding Transport Agents (2010)
Transport Agents (2010)
View or Configure a Transport Agent (2010)
Process Monitor v3.2 (Procmon)
Transport agent code samples for Exchange 2013
StripIncomingLinkAgent sample Exchange 2010 transport agent
Note that this sample writes to the application event log. Also note that all sample code is just that - a sample. This means that when you use sample code such as this, you are expected to make it your own and understand it. Microsoft does not provide support for sample code.
Any upgrade of TFS, regardless of how basic or complex, whether in-place or including a hardware migration, needs to be planned. That planning should start long before the migration happens and should include all parties in the TFS ecosystem, meaning all the user roles as well as infrastructure roles.

A week or two prior to the upgrade
1 - TfsPreUpgrade Tool
☐ - Done
If your database is large enough to warrant running the TfsPreUpgrade tool prior to the upgrade, it can be done while the server is still in production use. The process is non-destructive, cancelable, and can be undone. That said, it shouldn't be taken lightly or run too far in advance, because it does have a performance impact on the server once it's run.

Weekend of upgrade
1 – Backup
☐ - Done
2 - Backup encryption key
☐ - Done
Refer to rskeymgmt Utility (SSRS) for more information.
3 - Uninstall TFS
☐ - Done
Uninstalling TFS does not alter the databases.
Authored by Mike Abrahamson
Mike is a Principal Consultant based in Minnesota, focusing on Application Lifecycle Management. Mike has enterprise-level experience leading ALM projects that help customers use Team Foundation Server to enable their business and ALM processes. Mike focuses on defining and delivering software solutions with an emphasis on process maturity and quality throughout the software development lifecycle.