
Feed aggregator

Disabling TLS 1.0 on your Windows 2008 R2 server – just because you still have one

MSDN Blogs - Mon, 07/25/2016 - 03:56

Windows Server 2008 R2 is a very popular release of Windows that has been used time and time again to power servers running ASP.NET websites – either on the Internet or on intranets. Although this Windows version, now some 8 years old, is showing its age, I still tend to see quite a lot of these installs around, and happen to have some myself, running my bookmarking service www.linqto.me.

If you have been reading about all the security problems creeping up on the internet lately, you will have come across names like Heartbleed, POODLE and other such vulnerabilities that are problematic when encrypting an HTTP connection between a client and a server. For the record, when it comes to securing a connection between client and server for HTTP-related exchanges, there is an entire list of protocols to choose from (or rather that the client and server have to agree on). The list, with the dates these were released, should give you an idea of how old some of these technologies are:

    SSL (short for Secure Sockets Layer) versions 2 and 3 – released in 1995 and 1996 respectively (version 1 was never publicly released) – that is more than 20 years ago!

    TLS (short for transport layer security) version 1.0 – came out in 1999

    TLS version 1.1 – came out in 2006

    TLS version 1.2 – came out in 2008

SSL protocols should not be used any more, as they are full of known vulnerabilities. TLS 1.0 has had its share of vulnerabilities, and more and more organizations are beginning to turn this off as a choice for negotiation of encryption between client and server. I recommend that you do too, and use more secure versions like TLS 1.1 or 1.2 if possible. If you are already on this blog post, chances are you are trying to do just this – turn off TLS 1.0 on your Windows 2008 R2 server. Which should be easy to do… or not, so keep reading.

Steps to turn off TLS 1.0 on a Windows 2008 R2 server.

There is a Microsoft Support Knowledge Base article that discusses this in some detail and also recommends that you download a ‘Fix it for me’ automated repair tool. The article in question is the following: https://support.microsoft.com/en-us/kb/187498 . However, there are a couple of problems and gaps in that article, so I want to go through them in some detail.

  • The first is that the ‘Fix it for me’ automated installer is no longer available. Microsoft has decided to retire this technology, hence also stopping you from having an automated way to disable TLS 1.0 and leave only TLS 1.1 and 1.2.
  • The manual solution indicates that you should change some registry keys, but I have found this to be somewhat incomplete, because just changing the indicated keys will turn off all TLS communications, including TLS 1.1 and 1.2 – which is not what you want when you are running a site that has HTTPS bindings.
  • The article never mentions that if you are connecting to your server via Remote Terminal Service (or Remote Desktop), you will also be cutting the branch out from under your feet – these methods of communication with a remote server actually rely on TLS 1.0, and once it is disabled you will not be able to connect to your server any more, not via Remote Desktop anyways. If your server is in a remote location or data-center, that can become a serious problem that can cause you much grief and downtime.

To correctly disable TLS 1.0 follow the steps below:

  • Install the Microsoft patch that allows you to continue using Remote Terminal Services or Remote Desktop after TLS 1.0 is disabled: https://support.microsoft.com/en-us/kb/3080079 . This should be the first step on your list, as missing this patch will leave you unable to connect to your server after disabling TLS 1.0.
  • Disable TLS 1.0 from the registry, using the registry editor. This one requires several sub-steps which you have to go through:
    • Open the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols registry key
    • If a TLS 1.0 key is present go inside, if not you will have to create a new Key and name it ‘TLS 1.0’
    • If the TLS 1.0 key exists, you should also have a key called ‘Client’ and one called ‘Server’ underneath, if not you will have to create them as you did in the previous step:

    • The next steps will have to be done both for the ‘Client’ and ‘Server’ keys as we want to disable TLS 1.0 when the OS is acting as a server (typically in the case of a website), but also when it is acting as a client and connecting to other resources that require secure connections. Go into the ‘Client’ key and create a DWORD (32 bit) entry and call this ‘Enabled’ and set its value to 0. Then repeat, and create a new DWORD (32 bit) entry for the ‘Server’ key and call it ‘Enabled’ and set the value to 0. This will disable TLS (all versions) for both client and server.
    • Now we have to enable versions 1.1 and 1.2 of TLS. For this, we need to create new keys called ‘TLS 1.1’ and ‘TLS 1.2’ underneath the ‘Protocols’ key.
    • For each of the TLS 1.1 and TLS 1.2 keys, you should also create a ‘Client’ and a ‘Server’ key, as shown in the screenshot below:

    • Once the key structure is created, you can proceed to creating a DWORD (32 bit) entry called ‘DisabledByDefault’ and set its value to ‘0’ in each of the four keys: TLS 1.1/Client, TLS 1.1/Server, TLS 1.2/Client and TLS 1.2/Server.
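
As a cross-check for the steps above, here is a small Python sketch. It is purely illustrative – it only builds the text of a .reg export mirroring the layout just described, and does not touch the registry itself:

```python
# Illustrative only: build the text of a .reg export that mirrors the steps
# above (disable TLS 1.0, lift DisabledByDefault for TLS 1.1/1.2, for both
# the Client and Server roles). It does not modify the registry.
SCHANNEL = (r"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet"
            r"\Control\SecurityProviders\SCHANNEL\Protocols")

def build_reg_export():
    lines = ["Windows Registry Editor Version 5.00", ""]
    # Disable TLS 1.0 when the OS acts as a client and as a server.
    for role in ("Client", "Server"):
        lines.append("[%s\\TLS 1.0\\%s]" % (SCHANNEL, role))
        lines.append('"Enabled"=dword:00000000')
    # Make sure TLS 1.1 and 1.2 are not disabled by default.
    for version in ("TLS 1.1", "TLS 1.2"):
        for role in ("Client", "Server"):
            lines.append("[%s\\%s\\%s]" % (SCHANNEL, version, role))
            lines.append('"DisabledByDefault"=dword:00000000')
    return "\n".join(lines) + "\n"

print(build_reg_export())
```

Saving the output to a .reg file and importing it with regedit should achieve the same result as editing the keys by hand.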

I have created a small export of the registry from my server which I am pasting below as text for reference:

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client]
"DisabledByDefault"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client]
"DisabledByDefault"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server]
"DisabledByDefault"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"DisabledByDefault"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"DisabledByDefault"=dword:00000000

Restart your server after making these changes, and you will see that subsequent connections to resources that require HTTPS (secure connections) on your sites will use TLS 1.1 or TLS 1.2. Hope this helps you secure your servers and dodge some nasty security vulnerabilities.
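
Once the server is back up, you can verify from any client machine which protocol actually gets negotiated. A minimal Python sketch (the host name in the comment is a placeholder, not an endpoint from this article):

```python
import socket
import ssl

def make_strict_context():
    """Build a client-side context that refuses anything older than TLS 1.2."""
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context

def negotiated_tls_version(host, port=443, timeout=10):
    """Connect to host:port and report the negotiated protocol, e.g. 'TLSv1.2'."""
    with socket.create_connection((host, port), timeout=timeout) as raw:
        with make_strict_context().wrap_socket(raw, server_hostname=host) as tls:
            return tls.version()

# Example (requires network access):
# print(negotiated_tls_version("www.example.com"))
```

If the connection succeeds, the server negotiated at least TLS 1.2 with this client.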

Paul Cociuba
follow what I post on www.linqto.me

Project Services Trial or Buying

MSDN Blogs - Mon, 07/25/2016 - 03:56

 

This article explains how to trial or buy Project Services, including guidance for existing trial customers.

 

As of July 6th, Trial editions for Project Service Automation are installed through CRMOL Settings > “Dynamics Marketplace” or Microsoft AppSource (link below).

 

Step 1:

Go to the marketplace and find ‘Trial for Project Service Automation’ solution

https://appsource.microsoft.com/en-us/marketplace?product=dynamics-crm

Review the application information and click on the “Try” button. You must also provide credentials (the same ones that you use to connect to the CRM Online organization).

 

Alternatively, you can go into your Dynamics CRM Online instance and go to Settings > Dynamics Marketplace to get to the below list of apps.

 

Step 2:

Select the target instance on which to install the Trial solution and agree to the Terms of Service and Privacy statement.

This will take the user to the CRM Online Admin Center “Manage your solutions” page.

 

Step 3:

Wait for the “Trial for Project Service Automation” installation to go from “Installation pending” to “Installed”

 

Step 4:

Then you can go to the CRM Online instance and access the newly installed Project Services via the CRM menu.

 

Existing Customer Trial Updates

If you have already installed a Trial for Project Services, then this provisioning model change will remove the line entry from your “Manage your solutions” view in the CRM Admin Portal.

To get updates for a Trial edition of the solution, you need to follow the above process to install the “latest” edition of the Trial package on at least one instance in your tenant.

This will make the Trial for Project Services package visible again in the “Manage your solutions” view and will make updates available for other instances with an installation of the trial solution.

 

Buying the CRM Project Services solution

There are 2 situations we are facing at the moment:

 

1. SANDBOX – CANNOT INSTALL PAID VERSION, ONLY TRIAL

To use it here, access your CRM Sandbox and go to Settings > Customizations > Dynamics Marketplace, where you will find Trial for Project Service and Field Service; just click on “Try” and the installation begins.

The only prerequisite is to have CRM 2016 Service Pack 1 installed.

 

2. PRODUCTION

You need to buy the solutions first, and after that you will be able to access them. In order to buy them, you should either request support from the Billing department or get in touch with a TAM (Technical Account Manager) or SAM (Account Manager).

The recommendation is to first UNINSTALL IT from Manage your Solutions on the Office 365 Portal, buy it, then install it again. After the install is over, you should be able to see it like in the below image:

If you do not uninstall it first, then it is still going to show as trial but it will be the paid one if you purchased it.

 

Best Regards

EMEA Dynamics CRM Support Team

 


Inking Effects, Web Clipper, Sharing Notifications and more – the OneNote July Roundup

MSDN Blogs - Mon, 07/25/2016 - 03:45

Over the last school year we have seen a number of fantastic updates to OneNote Class Notebooks that have made it easier than ever before for teachers and students to share work, collaborate, and achieve more within the realm of teaching and learning.

The latest round of new features and improvements have landed, and these are explained in more detail over on the Office Blog. In summary, they include:

Ink effects – Sometimes a simple color isn’t enough to get your idea across. Now you can jazz up your notes and drawings with new ink effects like rainbow, galaxy, gold and silver to make anything you write more unique and even more fun.

OneNote Web Clipper – Our OneNote Web Clipper is better than ever. We have a number of new features on our Web Clipper for Chrome, Safari, and IE to give you more control over your screen clippings before you send them into OneNote.

Sharing Notifications Email service – Never miss an important notebook edit again. For consumers using a Microsoft account, our newly improved sharing notifications means you’ll receive an email letting you know that someone has made a change to your shared OneNote notebook—helping you stay on the same page.

Click through to read all about the new capabilities to enhance inking, clipping content from the web and working with others:

Office Blog: OneNote July roundup

To whet your appetite for some of the new inking features, click on the video link in the tweet below:

Add some fun in rainbow, galaxy & more with new ink effects. See what’s new for OneNote: https://t.co/a2zAq1fhQl https://t.co/00rWYhP3iF

— Microsoft Office (@Office) July 21, 2016


Who Will Win Imagine Cup 2016?

MSDN Blogs - Mon, 07/25/2016 - 03:31

Join the live broadcast of the Imagine Cup 2016 awards ceremony on July 28 at 19:30 Moscow time!

This year our country is represented by a team from Saint Petersburg with the mysterious name Infinite Pizza and their engaging game Partycles!
You can learn more and play the game by visiting the team's VKontakte group: https://new.vk.com/partycles

Watch the broadcast: http://www.imagine.microsoft.com/

Waatah…Enter The Script Editor. Hiding the Page Title in SharePoint 2013 using the SEWP (Script Editor Web Part)

MSDN Blogs - Mon, 07/25/2016 - 03:19

One of my customers asked me if there is a way to remove some of the items that exist by default on a SharePoint Team site. Now as you may know, digging through any type of code can be like Bruce Lee fighting his way up the levels of the temple in the movie Game of Death. Enter the Script Editor Web Part, document.getElementById() and the Internet Explorer Developer Tools (F12).

What is document.getElementById()?

document.getElementById() returns a reference to the element with the given ID. The ID is a string that identifies the element; it is set via the HTML id attribute or from script.

What is the Script Editor Web Part?

The Script Editor Web Part is a SharePoint 2013 out-of-the-box web part that allows users to add scripts directly to a page without the need to edit the HTML or aspx files directly.

First, we need to find where the title is located in the source code.

  • Open Internet Explorer.
  • Press F12 on your keyboard to bring up Internet Explorer’s Developer Tools.


  • Click the Select Element (Ctrl+B)


  • Now let’s click on the Title we want to remove.


  • We now see that the title “Root Site” is associated with the span id DeltaPlaceHolderPageTitleInTitleArea


  • We can then test by adding display:none in the Inline style.


  • We will also see that the Inline style has been added to the span id DeltaPlaceHolderPageTitleInTitleArea


  • And now our Title has been removed.


    From the above test, we have found that DeltaPlaceHolderPageTitleInTitleArea is the element we want to remove. We also found that adding the inline style display:none removes the title.

    Now let’s remove the title permanently using the Script Editor Web Part.

  • From the SharePoint 2013 page. Click Page>Edit


  • Click Insert>Web Parts


  • Click Media and Content>Click Script Editor>Click Add


  • Click the drop down arrow to the far right inside of the Script Editor box>Click Edit Web Part


  • Click EDIT SNIPPET


  • And then paste in
<script type="text/javascript">
document.getElementById('DeltaPlaceHolderPageTitleInTitleArea').style.display = 'none';

</script>

  • Click Insert after pasting in the above script.


  • Click Apply and Ok


  • Click Save


  • Now the Title is gone for good.


The Break Down:

In the first section, I explain how to grab the document.getElementById() using the Internet Explorer Developers Tools.

In the second section, I explain how to add the Script Editor Web Part to a page and add the necessary script to remove HTML parts by their ID.

Using the SharePoint 2013 Script Editor Web Part and document.getElementById(), I am able to remove any element of a SharePoint site as long as I can find its ID.

Note:

I have also removed the search bar in the upper right corner using the same method.

Before:


After:


I was able to stack the script in the Script Editor Web Part right below the script I used to remove the title.


To remove the search bar, I used the ID “ctl00_PlaceHolderSearchArea_SmallSearchInputBox_csr_sboxdiv”

How to Do Logging in Azure App Service

MSDN Blogs - Mon, 07/25/2016 - 03:00

In recent months I have spent a lot of time on diagnosing web applications with Application Insights. But what if, for some reason, you are not using Application Insights and still need to log an important piece of information while your code runs? How do you get an overview of failed HTTP requests? With Azure App Service, a developer has simple but very powerful diagnostic services available, which I will briefly describe in this article.

Overview of the diagnostic services

In Azure App Service you can easily enable event logging and get a complete picture of what is happening in your application right now.

Web Server Diagnostics

Web Server Diagnostics provides three distinct types of logs:

  • A detailed error log, which offers a view of HTTP errors (status 4xx and above)
  • Request tracing for failed requests, including IIS traces
  • Web server logging, which offers a view of HTTP transactions in the W3C log format
Application Diagnostics

While the Web Server Diagnostics logs are collected as HTTP requests are processed, the Application Diagnostics log is generated by the code itself. In ASP.NET applications a developer can use

System.Diagnostics.Trace.TraceError("Trace message here")

for tracing.

Deployment Logs

A special type of log is the deployment log, which App Service generates while an application is being published. Nothing needs to be configured for these logs (everything happens automatically). Deployment logs are most often useful when deploying applications with PowerShell scripts.

Enabling the diagnostic services in Azure

In the Azure portal the diagnostic services are enabled in a single place, in the Settings > Diagnostics Logs section of the chosen App Service. Several options are available:

  • Application Logging (File System / Blob)
  • Web server logging (File System / Blob)
  • Failed request tracing

Along with Application Logs you can also set the log level. If it is set to Error, for example, only Trace.TraceError messages are logged (while Trace.TraceInformation and Trace.TraceWarning are ignored).
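
The level filter behaves like a simple severity threshold. A toy Python model (this is not App Service code, just an illustration of the rule):

```python
# Toy model of the Application Logging level filter: a trace message is kept
# only when its severity is at least the configured level.
LEVELS = {"Verbose": 0, "Information": 1, "Warning": 2, "Error": 3}

def is_logged(message_level, configured_level):
    """True if a message of message_level passes a filter set to configured_level."""
    return LEVELS[message_level] >= LEVELS[configured_level]
```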

Unlike changes made in web.config, changing the diagnostics settings in Azure does not recycle the application domain. No settings in web.config are needed at all.

Accessing the logs

The logs in Azure App Service are available over FTP, and they can be downloaded as a ZIP using Azure PowerShell or the Azure CLI. The directory structure is as follows:

  • Application logs: /LogFiles/Application (text files)
  • Failed Request Traces: /LogFiles/W3SVC###### (XML+XSL files)
  • Detailed error logs: /LogFiles/DetailedErrors (HTM files)
  • Web server logs: /LogFiles/http/RawLogs (TXT files in the W3C standard format)
  • Deployment logs: /LogFiles/Git (logs generated during deployments)
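
When scripting log downloads (for example over FTP), this layout can be captured in a small helper. A hypothetical Python sketch (the folder names come from the list above; the failed-request folder additionally carries a site-specific numeric suffix, so it is omitted here):

```python
# Hypothetical helper mapping App Service log types to their folders under
# /LogFiles, matching the directory structure listed above.
LOG_FOLDERS = {
    "application": "/LogFiles/Application",
    "detailed_errors": "/LogFiles/DetailedErrors",
    "web_server": "/LogFiles/http/RawLogs",
    "deployment": "/LogFiles/Git",
}

def log_folder(kind):
    """Return the folder under /LogFiles for the given log type."""
    if kind not in LOG_FOLDERS:
        raise ValueError("unknown log type: " + kind)
    return LOG_FOLDERS[kind]
```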

The Azure Diagnostics Log Stream tool

A very interesting tool is Log Stream, which collects the Application Logs and Web Server Logs and displays them in real time directly in the Azure portal. You can find it in the App Service under Tools > Log Stream.

In the image above you can notice that with the Detailed error messages feature switched on, the console shows the error page exactly as the user sees it.

Other diagnostic tools in App Service

Besides the logging tools mentioned above, Azure App Service provides several other tools, aimed mainly at a view of the overall health of the application. These tools are available in the rich Settings > Support and Troubleshooting section.

  • Live HTTP Traffic is a graph that shows request counts and HTTP errors over time, filterable by hostname
  • AppLens does the same, but additionally shows when the latest deployments took place… so you can spot performance problems after new versions of the application are rolled out

  • Metrics per Instance has advanced a lot over the last six months and offers performance counters for individual instances (when scaling horizontally) in near real time
  • FREB logs provides the logs collected by the IIS Failed Request Tracing (FREB) module and offers a filterable grid, including the option to download the XML files with the detailed log.

  • Diagnostics as a Service is a clever module that gives you easy access to HTTP logs, Event Viewer logs, or a memory dump

Kudu

The last (no less interesting) place where you can get an overview of the application state, check the current environment configuration, or, for example, debug the application from a console (CMD / PowerShell) is Kudu. Kudu is available for applications deployed in Azure App Service at the URL:

http(s)://{app name}.scm.azurewebsites.net

Detailed Kudu documentation is available in the wiki.

Conclusion

Enabling the diagnostic services in Azure App Service is a matter of a few clicks and offers a detailed view of what is going on in your application. Even if everything works correctly today, I recommend turning logging on, because when unexpected trouble strikes, it is the first place to look for the cause of errors. Application Logs are the easiest way to trace the behaviour of web applications or, for example, of Web Jobs deployed as console applications in Azure App Service.

Miroslav Holec

What’s New in Dynamics CRM Online 2016 Update 1: Mobile Offline Synchronization

MSDN Blogs - Mon, 07/25/2016 - 03:00

Hello, everyone.

This post introduces mobile offline synchronization, one of the new features of Microsoft Dynamics CRM Online 2016 Update 1 (Dynamics CRM Spring 2016 Wave), released on May 23, 2016. At present, it is not available in Dynamics CRM on-premises.

 

Overview

This version strengthens several capabilities for mobile clients. One of them is mobile offline synchronization.

Previously, the Dynamics CRM client in the Microsoft Dynamics CRM for Outlook add-in provided offline functionality. Users could access their data offline and, even without a network connection, work with data as usual, changing and creating records. Filters could also be used to specify which data to synchronize, optimizing for mobile environments.

This capability is now provided in the mobile applications for smartphones and tablets as well. For example, even when not connected to a network, you can update or delete data, or add new records and save them just as you always do.

To use this feature, one of the following conditions must be met:

  • At least 5 Professional CRM Online licenses
  • At least 1 Enterprise CRM Online license

 

Enabling mobile offline synchronization

To use this feature, first enable mobile offline synchronization.

  1. Sign in to the Dynamics CRM site as an administrator.
  2. Click [Settings] – [Mobile Offline].
    * If this menu is not present, the conditions for use are not met and the feature is unavailable.
  3. Click [Mobile Offline Configuration].
  4. Review the “disclaimer” and click [Continue].
    The disclaimer reads as follows: “By enabling this command, you consent to sharing your data with an external system.
    For details, refer to the technical documentation for this feature.”
    Mobile offline synchronization periodically synchronizes data between mobile clients and Dynamics CRM through a Microsoft Azure service, so you must consent to placing data in an external system.
  5. Under [Mobile Offline], click [Enable], then click [Save].

 

Configuring mobile offline synchronization

Mobile offline synchronization can be configured as follows:

  • Mobile offline settings on entities
    Specify whether an entity's data is included in mobile offline synchronization.
    Filter conditions for the data can also be set. By default, data from the past 10 days is included.
    Configure this in each entity's properties under [Settings] – [Customization] – [Customize the System].
  • Mobile offline profiles
    A set of filter settings for mobile offline synchronization.
    Within a profile you can also specify which users it applies to by adding users.
    By default, sample profiles for two roles, service and sales, are provided.
    Configure under [Settings] – [Mobile Offline] – [Mobile Offline Profiles].
  • Mobile offline settings
    The system-wide default settings for mobile offline synchronization.
    Specify the default mobile offline profile and whether to detect conflicts.
    Configure under [Settings] – [Mobile Offline] – [Mobile Offline Settings].
    The content is the same as the [Mobile] tab in System Settings.

 

Client-side experience

While connected to a network, data is synchronized periodically.
Data worked on while offline is synchronized as well.
Users can edit data or create new records with the same operations as always, regardless of the network connection state.

On the mobile client, when mobile offline synchronization is available, “Available” is shown at the bottom left of the screen.

Even without a network connection, you can work with the synchronized data, edit or delete it, and add new records.

 

If a record you changed while offline was also changed and synchronized by another user, the conflict can be detected.
To detect conflicts, enable conflict detection in [Mobile Offline Settings] beforehand.

 

Summary

With mobile offline synchronization, users can work with the same operations as always, without worrying about the state of their network connection.
Please give it a try.

Also visit the TechNet blog about Microsoft Online in general:
“The new full offline experience with mobile Dynamics CRM apps” (in English)
https://blogs.technet.microsoft.com/lystavlen/2016/04/21/the-new-full-offline-experience-with-mobile-dynamics-crm-apps/

 

* The information in this article (including attachments and links) is current as of the date of writing and is subject to change without notice.

– Dynamics CRM Support  片岡クローリー 正枝

Lab 11: Using aspnet_regiis

MSDN Blogs - Mon, 07/25/2016 - 02:46
General information
  • The description of the aspnet_regiis tool can be found here
Lab 11-1 Setup
  • 2 IIS servers are needed for this lab; install IIS as per the instructions in Lab 1, but the CSharpGuitarBugs web site is not required
  • Place a copy of each server's c:\windows\system32\inetsrv\config\applicationHost.config file in a temporary location, marked so you know which server it came from
    • Copy c:\windows\system32\inetsrv\config\applicationHost.config from (IIS Server 1) to the same location on (IIS Server 2)
    • ASP.NET should initially be installed on both (IIS Server 1) and (IIS Server 2); ASP.NET 4.5 was used in this lab

    Lab 11-1 – Error when changing application pool identity

    Issue when an applicationHost.config is copied to another IIS server

    • It is usually better to use Shared Configuration than to copy the configuration file
    • It is also possible to use WebDeploy to synchronize web sites in a web-farm

    1. Click on Application Pools in the IIS Management Console

    2. Click on DefaultAppPool and select the Advanced Settings… link from the Actions pane

    3. Change the Identity to a custom account and press the OK button; the window shown above is rendered. NOTE: in production, never use an administrator account as the identity of a worker process; it is done here for simplicity only.

    When attempting to change the identity of an application pool to a custom identity the following error is rendered: “Value does not fall within the expected range.”

    4. Q: In which context does the application run? I.e. which bit mode and .NET version does it run in? You need to know this to find which version of aspnet_regiis to execute… A: by default on an IIS 8.5 server the worker is running in 64-bit mode and .NET Framework 4 is present.

    5. On the server the applicationHost.config file came from (IIS Server 1), execute the following commands to export the session keys. NOTE: ASP.NET must be installed on this server.

    a. aspnet_regiis -px "iisConfigurationKey" "c:\temp\iisConfigurationKey.xml" -pri

    b. aspnet_regiis -px "iisWasKey" "c:\temp\iisWasKey.xml" -pri

    6. Copy the applicationHost.config and the 2 XML files created in step 5a and 5b from (IIS server 1) to (IIS Server 2)

    7. Place the (IIS server 1) applicationHost.config (same as you did in the setup of this Lab) into the c:\windows\system32\inetsrv\config directory and reproduce the error

    8. Import the sessionKeys to server 2 by executing the following commands:

    a. aspnet_regiis -pi "iisConfigurationKey" "D:\iisConfigurationKey.xml"

    b. aspnet_regiis -pi "iisWasKey" "D:\iisWasKey.xml"

    9. The issue no longer happens and you can enter a custom identity for the application pool

    Lab 11-2 Setup
    • Replace the current c:\windows\system32\inetsrv\config\applicationHost.config with the original on (IIS Server 2)
    • Place a default.aspx file into the c:\inetpub\wwwroot directory
    Lab 11-2

    1. Open IIS manager and navigate to the default.aspx file using a browser

    2. Open the Handler Mappings feature and find the handler for the requested file type, it is missing

    3. Q: What is the requested file type? A: ASPX

    4. Find out which context the application pool is running in, i.e. bit mode and .NET version, then navigate to that directory via a command prompt, for example, c:\windows\Microsoft.NET\Framework64\v4.0.30319

    5. Prior to IIS 8 you could use aspnet_regiis -i to reinstall/reset ASP.NET; as you can see from the above image, the handlers are not present. Use WPI as discussed in Lab 2, but it is always more important to take a backup of your configuration, as discussed in Lab 9 and here, which is a lot easier and less risky than a complete reinstall.

    6.  Once installed, reopen the Handler mapping feature and you will find the ASPX handler.  Refresh the default.aspx page and it renders as expected

    Lab 11-3 Setup
    • Create a web.config file that includes a <connectionStrings> section like the one below and place it into the c:\inetpub\wwwroot directory

    <?xml version="1.0"?>

     <configuration>

       <connectionStrings>

        <add name="TopSecretConnectionString"

             connectionString="Initial Catalog=aspnetdb;data source=localhost;Integrated Security=SSPI"

             providerName="System.Data.SqlClient" />

       </connectionStrings>

     </configuration>

    Lab 11-3

    1. View the contents of the web.config file, paying special attention to the <connectionStrings> section.

    2. Confirm the .NET version and bitness the application pool is running under and navigate to the correct version of aspnet_regiis.

    3. Execute the following command: aspnet_regiis -pef "connectionStrings" C:\inetpub\wwwroot -prov "DataProtectionConfigurationProvider"

    4. Open the c:\inetpub\wwwroot\web.config file and you will see that the content is encrypted.

    5. No code changes are required to read the encrypted connection string.
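After aspnet_regiis -pef runs, the protected section is easy to recognize: it carries a configProtectionProvider attribute and its children are replaced by an EncryptedData payload. The sketch below is a rough illustration (the cipher text is shortened and the protected_sections helper is made up for this example):

```python
import xml.etree.ElementTree as ET

# Shape of web.config after `aspnet_regiis -pef` (cipher payload abbreviated):
ENCRYPTED = """
<configuration>
  <connectionStrings configProtectionProvider="DataProtectionConfigurationProvider">
    <EncryptedData>AQAAANCMnd8BFdERjHoAwE/Cl+s=</EncryptedData>
  </connectionStrings>
</configuration>
"""

def protected_sections(config_xml):
    # Map section name -> provider for every protected top-level section.
    root = ET.fromstring(config_xml)
    return {el.tag: el.get("configProtectionProvider")
            for el in root if el.get("configProtectionProvider")}

print(protected_sections(ENCRYPTED))
# {'connectionStrings': 'DataProtectionConfigurationProvider'}
```

To reverse the operation on the server itself, aspnet_regiis also supports decryption, e.g. aspnet_regiis -pdf "connectionStrings" C:\inetpub\wwwroot.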

    [Sample Of Jul. 25] How to create a localization UWP App

    MSDN Blogs - Mon, 07/25/2016 - 02:42
    Jul. 25

    Sample : https://code.msdn.microsoft.com/How-to-Create-a-localizatio-c61f4b37

    This sample demonstrates how to create a localization UWP App.

    You can find more code samples that demonstrate the most typical programming scenarios by using Microsoft All-In-One Code Framework Sample Browser or Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If it is the first time that you hear about Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.

    Evolving the Visual Studio Test Platform – Part 1

    MSDN Blogs - Mon, 07/25/2016 - 02:23

    Three releases (VS 2015 Update 3, Visual Studio “15” Preview 3, MSTest V2) featuring Test Platform components in as many months indicate a path best traced by starting from the present.

    The Test Platform
    Presently, Visual Studio has an open and extensible test platform with tests being written using various test frameworks and run using a variety of adapters. The Test Platform, from its vantage, resolves the lifecycle of the test into a series of stages – two of which are writing and running the test – with the goal of providing extensibility at each stage.

    The Test Lifecycle
    The below diagram illustrates the lifecycle:

    And in broad terms, the stages can be described as follows:

    • The Test Platform supports running tests written in various test frameworks using a pluggable model. Based on user choice, the desired test framework and its corresponding adapter can be acquired as a VSIX or as a NuGet package, as the case may be. Adapters can be written in terms of a public API exposed by the Test Platform – e.g. here is a description of the Chutzpah adapter’s implementation.
    • The next stage is to write the test. Based on the framework/adapter tuple, tests may be authored for Managed code, for native code, for JavaScript code, etc. When authored from within the Visual Studio IDE, then Project templates, wizards, IntelliSense and the rest of the code editing infrastructure comes into play here.
    • All authored tests are made available for running. Optionally, various attributes of the tests may be used to select a subset. Selection can be done by test case name, by grouping them based on criteria like project/class/traits/outcome/duration, by applying a filter, etc. This ability to select tests is available from the Test Explorer, from the command line (vstest.console.exe), as well as from the VSTest task. Selection enables focusing on the set of tests to run.
    • The system configuration to use to run the tests – the processor architecture (x86/x64) to use, the target .NET Framework to use, the set of data collectors to be in effect when the tests are run, etc.  – can be precisely set. Such configuration is set using the .runsettings file, and there are adapter-specific sections within this file (owned by the adapter writers, of course).
    • Then comes the run stage where the selected tests are run as configured and using the appropriate adapter. This is the testrun – the most often repeated stage in the life cycle. Tests may be run using the Test Explorer, command line (vstest.console.exe), as well as from the VSTest task.
    • Application-platform-specific debugging is supported to enable high-fidelity debugging for tests targeting the desktop, Store, UWP, and so on.
    • Feedback from the testrun is analyzed to figure out the outcome of each test, the duration, and – if configured – the code coverage data, and data from any other data collectors that may have been active.
    • The feedback is then reported for user-consumption. This is done on the Test Explorer if you are in the Visual Studio IDE or – if you are using the commandline – on stdout, as a .trx file, or published to TFS. Loggers, like adapters, can be plugged-in to the Test Platform to provide custom logging/reporting.
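As a concrete illustration of the configuration stage described above, a minimal .runsettings file that pins the processor architecture and target framework and enables a data collector might look like the following (the values shown are examples only):

```xml
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <RunConfiguration>
    <TargetPlatform>x64</TargetPlatform>
    <TargetFrameworkVersion>Framework45</TargetFrameworkVersion>
  </RunConfiguration>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage" />
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>
```

Such a file can be passed on the command line via vstest.console.exe /Settings:&lt;file&gt;.runsettings or selected from Test &gt; Test Settings in the Visual Studio IDE; adapter-specific sections can be added alongside RunConfiguration.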

    Efficiency as a Theme for features across the life cycle
    Efficient execution has been the pervading theme for the features we have implemented across the stages in this lifecycle – not just efficient running of the tests, but efficiency in executing across the entire lifecycle. After all, as a developer, one wants to get from conceptualizing the tests all the way to generating reports that can inform business decisions.

    Next post – recap of the features
    We will continue this series in the next post, looking at the features we have implemented and delivered in the Visual Studio 2015 cycle that enable such efficiency.

    Stay tuned.

    Mapping ACS Reports to OMS Search Queries

    MSDN Blogs - Mon, 07/25/2016 - 01:51


    This post features a table that shows the mapping between Audit Collection Services (ACS) SSRS reports and search queries used in OMS Log Analytics.

    In OpsMgr 2012, Audit Collection Services (ACS) provides a means to collect records generated by an audit policy and store them in a centrally managed database. It allows filtering and analyzing of events using the data analysis and reporting tools provided by Microsoft SQL Server like SSRS. There is a set of audit report definition files specifically for ACS data that can be installed to be able to access this collected audit data. After installation, more than 20 audit reports and 2 report models will be available out-of-the-box in the Audit Reports folder on the SQL Reporting server (Figure 1). These reports enable the user to report on security events occurring in their IT environment that are related to Access Violation, Account Management, Forensic, Planning, Policy, System Integrity, Usage and Dynamic Access Control (DAC).


    Figure 1: Out-of-the-box audit reports available in the Audit Report folder


    In OMS Log Analytics, “The Security and Audit solution in Log Analytics provides a comprehensive view into your organization’s IT security posture with built-in search queries for notable issues that require your attention.”
    Adding the Security and Audit solution to an OMS workspace will allow Windows security events, Windows application events, and Windows firewall logs to be collected using direct agents or MMA agents that the user has enabled. For more information on installation, best practices and scenario walkthroughs for the Security and Audit solution, refer to Security and Audit solution in Log Analytics by Bill Anderson.

    Although the data source (the Security event log) is the same, the event data collection mechanism used in ACS is different from what is currently used in the OMS Security and Audit solution. In ACS, the ACS collector receives and processes security events from ACS forwarders and then sends this data to the ACS SQL database. In OMS, by contrast, security events are collected by the direct agent or OpsMgr agent and sent directly to the OMS service in the cloud for processing. The collected security event records can then be retrieved and consolidated quickly using log searches in the query syntax that OMS Log Analytics provides.

    To retrieve and analyze the security events highlighted by the ACS audit reports in OMS Log Analytics instead, the SQL query search conditions used in these audit reports can be reused as filter expressions in OMS log search queries. The following table shows this mapping between the ACS audit reports and their corresponding search queries in OMS Log Analytics:
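The queries in the table below all share a common shape: a Type=SecurityEvent filter over individual event IDs and ID ranges, joined with OR. As a small sketch of how such a filter string is assembled (the oms_event_filter helper is mine, not part of OMS):

```python
def oms_event_filter(event_ids=(), ranges=()):
    # OMS log search uses EventID=n for single IDs and EventID:[a..b] for ranges.
    parts = [f"EventID={i}" for i in event_ids]
    parts += [f"EventID:[{a}..{b}]" for a, b in ranges]
    return "Type=SecurityEvent " + " OR ".join(parts)

# The Account Locked filter from the first table row:
print(oms_event_filter(event_ids=(539, 644, 4740, 6279)))
# Type=SecurityEvent EventID=539 OR EventID=644 OR EventID=4740 OR EventID=6279
```

Appending a pipe stage such as | measure count() by EventID to the returned string yields the aggregated variants shown in the table.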

    OpsMgr Audit Collection Services (ACS)

    OMS Log Analytics

     

    Report Name

    Description

    Log Analytics Search Queries

    Further Details

     

    Access
    Violation:
    Account Locked

    On Windows Server 2000 and 2003, events 539 and 644 indicate an account was locked. On Windows Server 2008, events 4740 and 6279 indicate an account was locked. This report details all account lock events.

    Type=SecurityEvent EventID=539 OR EventID=644 OR EventID=4740 OR EventID=6279

    Coming Soon

    Type=SecurityEvent EventID=539 OR EventID=644 OR EventID=4740 OR EventID=6279 | measure count() by EventID

     

    Access
    Violation:
    Unsuccessful Logon Attempts

    On Windows Server 2000 and 2003, events 529-537 and 539 indicate that somebody has tried to log on unsuccessfully. On Windows Server 2008, event 4625 indicates that somebody has tried to log on unsuccessfully. This report details who and where. A large number of unsuccessful logon attempts for the same user or computer may indicate a potential intrusion.
    Filter: Dv Alls with: All of (Start Date on or after (prompted), End Date on or before (prompted), Any of (Event Id from 529 to 537, Event Id = 539, All of (Event Id = 4625, Status = “0xc000006d”)))

    Type=SecurityEvent EventID:[529..537] OR EventID=539 OR (EventID=4625 AND Status=0xc000006d)  | Select TargetAccount, IpAddress, Computer, LogonProcessName, AuthenticationPackageName, LogonTypeName

    Coming Soon

    Type=SecurityEvent EventID:[529..537] OR EventID=539 OR (EventID=4625 AND Status=0xc000006d) | measure count() by TargetAccount

     

    Account Management:
    Domain and Built-in Administrators Membership Changes

    This report details membership changes in the Domain and Built-in Administrators groups. It looks for events 632, 633, 636 and 637 (membership change events for local and global groups) with target sid = S-1-5-32-544 (Built-in Administrators group sid) or a target sid that ends with -512 (Domain Admins group).

    Type=SecurityEvent EventID=4728 OR EventID=4732 OR EventID=4756 OR EventID=632 OR EventID=636 OR EventID=660 AND ("*512" OR "S-1-5-32-544") | Extend "Add Member" AS Action | Select Action, TargetUserName, Activity, SubjectAccount, MemberName, TimeGenerated, Computer

    Coming Soon

    Type=SecurityEvent EventID=4729 OR EventID=4733 OR EventID=4757 OR EventID=633 OR EventID=637 OR EventID=661 AND ("*512" OR "S-1-5-32-544") | Extend "Remove Member" AS Action | Select Action, TargetUserName, Activity, SubjectAccount, MemberName, TimeGenerated, Computer

     

    Account Management:
    Passwords Change Attempts by Non-owner

    On Windows Server 2000 and 2003, event 627 indicates a password change attempt and event 628 indicates a password reset. On Windows Server 2008, event 4723 indicates a password change attempt and event 4724 indicates a password reset. This report details any password change/reset attempts by someone other than the account owner.

    Type=SecurityEvent (EventID=4723 OR EventID=4724 OR EventID:[627..628]) AND SubjectAccount!="ANONYMOUS LOGON" TargetAccount NOT IN {Type=SecurityEvent (EventID=4723 OR EventID=4724 OR EventID:[627..628]) AND SubjectAccount!="ANONYMOUS LOGON" | measure count() by SubjectAccount} | EXTEND SubjectAccount AS ChangedBy | Select TimeGenerated, Computer, TargetAccount, ChangedBy

    Coming Soon

     

    Account Management:
    User Accounts Created

    This report shows user accounts created in the specified time range. The report looks for events 624 (Windows Server 2000 and 2003) and 4720 (Windows Server 2008), which track user account creation.
    Filter: Dv Alls with: All of (Start Date on or after (prompted), End Date on or before (prompted), Any of (Event Id = 624, Event Id = 4720))

    Type=SecurityEvent (EventID=624 OR EventID=4720) | EXTEND SubjectAccount AS CreatedBy | Select TimeGenerated, TargetAccount, CreatedBy, Computer

    Coming Soon

     

    Account Management:
    User Accounts Deleted

    This report shows user accounts deleted within the specified date/time range.
    It looks for events 630 (Windows Server 2000 and 2003) and 4726 (Windows Server 2008), which track account deletion.
    Filter: Dv Alls with: All of (Start Date on or after (prompted), End Date on or before (prompted), Any of (Event Id = 630, Event Id = 4726))

    Type=SecurityEvent (EventID=630 OR EventID=4726) | EXTEND SubjectAccount AS DeletedBy | Select TimeGenerated, TargetAccount, DeletedBy, Computer

    Coming Soon

     

    Forensic:
    All Events For Specified Computer

    This report shows all events generated from the specified computer within the specified time range.

    Type=SecurityEvent Computer="<<Computer Name>>"

    Coming Soon

    Type=SecurityEvent Computer="<<Computer Name>>" | measure count() by Activity

     

    Forensic:
    All Events For Specified User

    This report details all events associated with the specified user within the specified time range. It is useful for general investigation.

    Type=SecurityEvent Account="<<User Domain\Account Name>>"

    Coming Soon

    Type=SecurityEvent Account="<<User Domain\Account Name>>" | measure count() by Activity

     

    Forensic:
    All Events With Specified Event ID  

    This report details all events associated with the specified event id within the specified time range. It is useful for general investigation.

    Type=SecurityEvent EventID="<<Event Id>>"

    Coming Soon

    Type=SecurityEvent EventID="<<Event Id>>" | measure count() by Computer

    Type=SecurityEvent EventID="<<Event Id>>" | measure count() by Account

     

    Planning:
    Event Counts

    This report shows the number of events collected, grouped by event id, within the specified time range. This helps identify high-volume events, which is useful in tuning and adjusting audit policies.
    Filter: Dv Alls with: All of (Start Date on or after (prompted), End Date on or before (prompted), Event Id ≠ 0)

    Type=SecurityEvent EventID!=0 | measure count() AS Count by Activity

    Coming Soon

     

    Planning:
    Event Counts by Computer

    This report shows the number of events collected, grouped by event id, within the specified time range.
    This helps identify high-volume events, which is useful in tuning and adjusting audit policies.

    Type=SecurityEvent Computer="<<Computer Name>>" | measure count() by Activity

    Coming Soon

    Type=SecurityEvent Computer="<<Computer Name>>" | measure count() by EventID

     

    Planning:
    Hourly Event Distribution

    This report displays the event distribution grouped by hour, averaged over the number of days.
    It is useful for capacity planning around the audit collection.
    Filter: Dv Alls with: All of (Start Date on or after (prompted), End Date on or before (prompted))

    Type=SecurityEvent EventID!=0 | measure count() AS Count by TimeGenerated Interval 1Hour

    Coming Soon

    Type=SecurityEvent EventID!=0 AND EventID:[xx..yy] | measure count() AS Count by Activity Interval 1Hour

     

    Planning:
    Logon Counts of Privileged Users

    This report shows the logon counts of privileged users.
    If the logon count for a specific privileged user is higher than the normal range, this indicates unusual network activity that should be investigated.
    Filter: Dv Alls with: All of (Start Date on or after (prompted), End Date on or before (prompted), All of (Any of (String 01 does not contain “SeChangeNotifyPrivilege”, Header Domain ≠ “NT AUTHORITY”), Any of (Event Id = 576, Event Id = 4672), Last Character in User ≠ “$”))

    Type=SecurityEvent EventID=576 OR EventID=4672 AND SubjectDomainName!="NT AUTHORITY" AND AccountType!="Machine" | Select SubjectAccount, PrivilegeList

    Coming Soon

    Type=SecurityEvent EventID=576 OR EventID=4672 AND SubjectDomainName!="NT AUTHORITY" AND AccountType!="Machine" | Measure Count() by SubjectAccount

     

    Policy:
    Account Policy Changed

    On Windows Server 2000 and 2003, event 643 indicates an account policy change. On Windows Server 2008, event 4739 indicates an account policy change.
    This report details all account policy change events.

    Type=SecurityEvent EventID=643 OR EventID=4739 | Select Computer, Activity, TimeGenerated, SubjectAccount

    Coming Soon

     

    Policy:
    Audit Policy Changed

    On Windows Server 2000 and 2003, event 612 indicates an audit policy was changed.  On Windows Server 2008, event 4719 indicates an audit policy was changed.
    This report details all audit policy change events.

    Type=SecurityEvent EventID=612 OR EventID=4719 | Select TimeGenerated, Account, Activity, SubcategoryGuid, AuditPolicyChanges

    Coming Soon

     

    Policy:
    Object Permissions Changed

    On Windows Server 2008, event 4670 indicates a permission was changed on an object.
    This report details all object permission change events.

    Type=SecurityEvent EventID=4670 | Select TimeGenerated, Activity, Computer, EventData

    Coming Soon

     

    Policy:
    Privilege Added Or Removed

    On Windows Server 2000 and 2003, events 608 and 621 indicate a privilege was granted and 609 and 622 indicate a privilege was removed.  On Windows Server 2008, event 4704 indicates a privilege was granted and 4705 indicates a privilege was removed.
    This report details all privilege add or remove events.

    Type=SecurityEvent EventID:[608..609] OR EventID:[621..622] OR EventID:[4704..4705] | Select TimeGenerated, Activity, Computer, TargetAccount, PrivilegeList

    Coming Soon

     

    System Integrity:
    Audit Failure

    Event 516 (Windows Server 2000 and 2003) or 4612 (Windows Server 2008) indicates that the system failed to log audit events due to lack of resources. This is a serious problem and should be resolved as soon as possible to prevent further loss of audit events. This report shows the time and computer on which the event occurred.
    Filter: Dv Alls with: All of (Start Date on or after (prompted), End Date on or before (prompted), Any of (Event Id = 516, Event Id = 4612))

    Type=SecurityEvent EventID=516 OR EventID=4612 | Select TimeGenerated, Activity, Computer

    Coming Soon

     

    System Integrity:
    Audit Log Cleared

    Events 517 (Windows Server 2000 and 2003) and 1102 (Windows Server 2008) indicate that somebody has cleared the Audit Log. This may suggest the person who cleared the log is trying to cover his/her tracks on the computer.
    This report shows which computer’s audit log was cleared and who cleared it.
    Filter: Dv Alls with: All of (Start Date on or after (prompted), End Date on or before (prompted), Any of (Event Id = 517, Event Id = 1102))

    Type=SecurityEvent EventID=517 OR EventID=1102 | Select Activity, Computer, SubjectAccount, TimeGenerated

    Coming Soon

     

    Usage:
    Object Access

    This report shows all object access related audit events within the specified time range.
    For Windows Server 2000 and 2003, it uses the events 560 (object opened) and 567 (object access attempted) to track items with object access auditing enabled.
    For Windows Server 2008, it uses the events 4656 (object opened) and 4663 (object access attempted).

    Type=SecurityEvent EventID=560 OR EventID=567 OR EventID=4656 OR EventID=4663 | Select Computer, Activity, TimeGenerated, EventData

    Coming Soon

     

    Usage:
    Privileged logon

    This report shows all privileged logons.
    It filters on EventID = 576 and string01 <> “SeChangeNotifyPrivilege”
    Filter: Dv Alls with: All of (Start Date on or after (prompted), End Date on or before (prompted), Any of (Event Id = 576, Event Id = 4672), Privileges does not contain “SeChangeNotifyPrivilege”)

    Type=SecurityEvent EventID=576 OR EventID=4672 | Select TimeGenerated, Activity, Computer, SubjectAccount, PrivilegeList

    Coming Soon

     

    Usage:
    Sensitive Security Groups Changes

    Filter: Dv Alls with: All of (Start Date on or after (prompted), End Date on or before (prompted), Any of (All of (Event Id >= 631, Event Id <= 639), Event Id = 641, All of (Event Id >= 658, Event Id <= 662), All of (Event Id >= 4727, Event Id <= 4735), Event Id = 4737, All of (Event Id >= 4754, Event Id <= 4758)))

    Type=SecurityEvent EventID:[4727..4735] OR EventID=4737 OR EventID:[4754..4758] OR EventID:[631..639] OR EventID=641 OR EventID:[658..662] | EXTEND TargetUserName As GroupName | Select Activity, GroupName, SubjectAccount, MemberName, TimeGenerated

    Coming Soon

     

    Usage:
    User Logon

    This report displays all user logon activity for a specified user within a specific time range.
    It looks for events 540 and 528 to identify logon activity.
    Filter: Dv Alls with: All of (Event Id in 528, 540, 4624, Start Date on or after (prompted), End Date on or before (prompted), Any of (UPPER(Primary DomainUser) = UPPER(Parameter: DomainUser), UPPER(Target DomainUser) = UPPER(Parameter: DomainUser)))

    Type=SecurityEvent EventID=528 OR EventID=540 OR EventID=4624 | Select TimeGenerated, Activity, Computer, IpAddress, AuthenticationPackageName, LogonProcessName, LogonTypeName, TargetAccount

    Coming Soon

     

    DAC:
    File Resource Property Changes

    This report displays File Resource Property changes
    For Windows Server 2012, it uses event 4911.

    Type=SecurityEvent EventID=4911 | Select Computer, Activity, TimeGenerated, SubjectAccount

    Coming Soon

     

    DAC:
    Central Access Policy For File Changes

    This report displays changes to the Central Access Policy that applies to a File Resource.
    For Windows Server 2012, it uses event 4913.

    Type=SecurityEvent EventID=4913 | Select Computer, Activity, TimeGenerated, SubjectAccount

    Coming Soon

     

    DAC:
    Object Attribute Changes

    This report displays Object Attribute changes.
    For Windows Server 2012, it uses events 5136 and 5137.

    Type=SecurityEvent EventID=5136 OR EventID=5137 | Select Computer, Activity, TimeGenerated, SubjectAccount

    Coming Soon
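Many of the queries above end in a stage like | measure count() by &lt;field&gt;, which groups the matching records and counts them per field value. In Python terms the semantics look roughly like this (the records and account names below are toy data for illustration):

```python
from collections import Counter

# Toy records standing in for OMS SecurityEvent rows (illustrative only).
events = [
    {"EventID": 4625, "TargetAccount": "CONTOSO\\alice"},
    {"EventID": 4625, "TargetAccount": "CONTOSO\\alice"},
    {"EventID": 4625, "TargetAccount": "CONTOSO\\bob"},
    {"EventID": 4740, "TargetAccount": "CONTOSO\\alice"},
]

def measure_count_by(rows, field):
    # Rough equivalent of `| measure count() by <field>` in a log search.
    return Counter(row[field] for row in rows)

# Unsuccessful-logon counts per account, as in the Access Violation report:
print(measure_count_by([e for e in events if e["EventID"] == 4625], "TargetAccount"))
```

The filter portion before the pipe corresponds to the list comprehension, and the measure stage corresponds to the Counter aggregation.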


    Additional Resources:

    Log Analytics search reference by Bill Anderson:
    https://azure.microsoft.com/en-us/documentation/articles/log-analytics-search-reference/
         
    Getting started with Operations Management Suite Security and Audit Solution by Yuri Diogenes
    https://azure.microsoft.com/en-in/documentation/articles/oms-security-getting-started

    Some Custom ACS Reports by Jimmy Harper
    https://blogs.technet.microsoft.com/jimmyharper/2009/12/09/some-custom-acs-reports/

    What is Log Analytics? by Brian Wren
    https://azure.microsoft.com/en-us/documentation/articles/log-analytics-overview/


    TechNet: Collecting Security Events Using Audit Collection Services in Operations Manager
    https://technet.microsoft.com/en-us/library/hh212908(v=sc.12).aspx

    TechNet: Deploying ACS and ACS Reporting
    https://technet.microsoft.com/en-us/library/hh298613(v=sc.12).aspx


    Disclaimer:
    All information on this blog is provided on an as-is basis with no warranties and for informational purposes only. Use at your own risk. The opinions and views expressed in this blog are those of the author and do not necessarily state or reflect those of my employer.

     

    HDInsight – How to use the Spark-HBase connector?

    MSDN Blogs - Mon, 07/25/2016 - 00:21

    Apache Spark is an open-source parallel processing framework that supports in-memory processing to boost the performance of big-data analytic applications. Azure HDInsight offers a fully managed Spark service with many benefits.

    Apache HBase is an open-source NoSQL Hadoop database: a distributed, scalable, big data store. It provides real-time read/write access to large datasets. HDInsight HBase is offered as a managed cluster that is integrated into the Azure environment. HBase provides many features as a big data store.

    Spark-HBase Connector

    The Spark-HBase Connector provides an easy way to store and access data in HBase clusters from Spark jobs. HBase works very well at the highest levels of data scale, so existing Spark customers should definitely explore this storage option. Similarly, customers who already have HDInsight HBase clusters and want to access their data from Spark jobs have no need to move the data to any other storage medium. In both cases, the connector is extremely useful.

    Steps to use connector

    Currently, the connector needs to be installed manually on the Spark cluster. We are planning to release it with HDInsight clusters soon. The connector can be installed in 4 simple steps:

    • Step 1: Create a VNET.
    • Step 2: Create Spark and HBase clusters in the same or different subnets of the same VNET.
    • Step 3: Copy hbase-site.xml from HBase cluster to your Spark cluster.
    • Step 4: Install the connector.

    Following are the detailed steps.

    • Create an Azure virtual network. The VNET can be easily created from the Azure portal.
    • Set up Spark and HBase clusters. Please find instructions for Linux and Windows.
    • On the Spark cluster, install or upgrade Maven (if needed) to compile the package.
    • sudo apt-get install maven
    • Copy the package code and the HBase configuration XML file to the Spark configuration folder.
    • sudo cp hbase-site.xml /etc/spark/conf/
    • Compile
    • mvn package -DskipTests
    • Run Spark Submit
    • $SPARK_HOME/bin/spark-submit --class org.apache.spark.sql.execution.datasources.hbase.examples.HBaseSource --master yarn-client --num-executors 2 --driver-memory 512m --executor-memory 512m --executor-cores 1 --jars /usr/hdp/current/hbase-client/lib/htrace-core-3.1.0-incubating.jar,/usr/hdp/current/hbase-client/lib/hbase-client.jar,/usr/hdp/current/hbase-client/lib/hbase-common.jar,/usr/hdp/current/hbase-client/lib/hbase-server.jar,/usr/hdp/current/hbase-client/lib/guava-12.0.1.jar,/usr/hdp/current/hbase-client/lib/hbase-protocol.jar,/usr/hdp/current/hbase-client/lib/htrace-core-3.1.0-incubating.jar --files /usr/hdp/current/spark-client/conf/hbase-site.xml /home/hdiuser/shc-master/target/hbase-spark-connector-1.0.0.jar
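The hbase-site.xml copied in the steps above is what tells Spark where the HBase cluster's ZooKeeper quorum lives. A stripped-down example of the relevant properties is shown below; the host names are placeholders, and in practice you should copy the actual file from your HBase cluster rather than authoring one:

```xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk0-myhbase,zk1-myhbase,zk2-myhbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```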

    Sample Program

    A sample notebook is checked in at: https://github.com/AnunayTiwari/Notebook-Spark-Jupyter-SampleProgram-HBaseSparkConnector/blob/master/SampleProgramForSparkHBaseConnector.ipynb

    • Define Class and Object

      In [5]:
      case class HBaseRecordAirline(col0: String, Year: Int, Quarter: Int, Month: Int, DayofMonth: Int, DayOfWeek: Int, FlightDate: Int, UniqueCarrier: String, AirlineID: String)

      In [6]:
      object HBaseRecordAirlineTest {
        def apply(i: Int): HBaseRecordAirline = {
          val s = s"""row${"%03d".format(i)}"""
          HBaseRecordAirline(s, i, i, i, i, i, i, s, s)
        }
      }

    • Define the catalog: the catalog keeps the mapping between Spark data and the HBase table.

      In [7]:
      val cat = s"""{
        "table":{"namespace":"default", "name":"airdelaydata_scv_Test1"},
        "rowkey":"key",
        "columns":{
          "col0":{"cf":"rowkey", "col":"key", "type":"string"},
          "Year":{"cf":"Year", "col":"Year", "type":"int"},
          "Quarter":{"cf":"Quarter", "col":"Quarter", "type":"int"},
          "Month":{"cf":"Month", "col":"Month", "type":"int"},
          "DayofMonth":{"cf":"DayofMonth", "col":"DayofMonth", "type":"int"},
          "DayOfWeek":{"cf":"DayOfWeek", "col":"DayOfWeek", "type":"int"},
          "FlightDate":{"cf":"FlightDate", "col":"FlightDate", "type":"int"},
          "UniqueCarrier":{"cf":"UniqueCarrier", "col":"UniqueCarrier", "type":"string"},
          "AirlineID":{"cf":"AirlineID", "col":"AirlineID", "type":"string"}
        }
      }""".stripMargin

    • Write Data: given a data frame with the specified schema, this will create an HBase table with 5 regions and save the data frame inside. Note that if HBaseTableCatalog.newTable is not specified, the table has to be pre-created.

      In [11]:
      sc.parallelize(data).toDF.write
        .options(Map(HBaseTableCatalog.tableCatalog -> cat, HBaseTableCatalog.newTable -> "5"))
        .format("org.apache.spark.sql.execution.datasources.hbase")
        .save()

    • SQL Support: define a DataFrame over the catalog, register it as a temp table and query it.

      In [12]:
      def withCatalog(cat: String): DataFrame = {
        sqlContext
          .read
          .options(Map(HBaseTableCatalog.tableCatalog -> cat))
          .format("org.apache.spark.sql.execution.datasources.hbase")
          .load()
      }

      In [13]:
      val df = withCatalog(cat)
      // df: org.apache.spark.sql.DataFrame = [UniqueCarrier: string, Month: int, col0: string, Quarter: int, FlightDate: int, AirlineID: string, DayOfWeek: int, DayofMonth: int, Year: int]

      In [14]:
      df.registerTempTable("table1")

      In [15]:
      val c = sqlContext.sql("select AirlineID from table1")

      In [16]:
      c.show()
      +---------+
      |AirlineID|
      +---------+
      |   row000|
      |   row001|
      |   row002|
      |   row003|
      |   row004|
      |   row005|
      |   row006|
      |   row007|
      |   row008|
      +---------+
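Since the catalog passed via HBaseTableCatalog.tableCatalog is plain JSON, a quick offline sanity check can catch mapping typos before a job is submitted. Below is a minimal sketch: the validate_catalog helper is my own, and the catalog is abbreviated to two columns for brevity.

```python
import json

# Abbreviated form of the connector catalog from the sample notebook.
cat = """{
  "table": {"namespace": "default", "name": "airdelaydata_scv_Test1"},
  "rowkey": "key",
  "columns": {
    "col0": {"cf": "rowkey", "col": "key", "type": "string"},
    "Year": {"cf": "Year", "col": "Year", "type": "int"}
  }
}"""

def validate_catalog(cat_json):
    # Returns the name of the column that maps to the HBase row key.
    c = json.loads(cat_json)
    rowkey = c["rowkey"]
    key_cols = [name for name, m in c["columns"].items()
                if m["cf"] == "rowkey" and m["col"] == rowkey]
    assert len(key_cols) == 1, "catalog needs exactly one row-key column"
    return key_cols[0]

print(validate_catalog(cat))  # col0

# Row keys in the sample are generated as row000, row001, ...:
print("row%03d" % 7)  # row007
```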

    Please refer to the Spark-HBase Connector for more information on the connector.
    I am working on using the connector for the Event Hub scenario, which will allow events to be persisted from Event Hub to HBase. I will be posting about this useful scenario in my next blog.

    HDinsight – How to use Spark-HBase connector?

    MSDN Blogs - Mon, 07/25/2016 - 00:07

    Apache Spark is an open-source parallel processing framework that supports in-memory processing to boost the performance of big-data analytic applications. Azure HDInsight offers a fully managed Spark service with many benefits.

    Apache HBase is an open Source No SQL Hadoop database, a distributed, scalable, big data store. It provides real-time read/write access to large datasets. HDInsight HBase is offered as a managed cluster that is integrated into the Azure environment. HBase provides many features as a big data store.

    Spark-Hbase Connector

    The Spark-Hbase Connector provides an easy way to store and access data from HBase clusters with Spark jobs. HBase is really successful for highest level of data scale needs. Thus, existing Spark customers should definitely explore this storage option. Similarly, if the customers are already having HDinsight HBase clusters and they want to access their data by Spark jobs then there is no need to move data to any other storage medium. In both the cases, the connector will be extremely useful.

    Steps to use connector

    Currently, we need to manually install the connector on the Spark cluster. We are planning to release it soon with HDInsight clusters. The connector can be installed in 4 simple steps:

    • Step 1: Create a VNET.
    • Step 2: Create Spark and HBase clusters in the same or different subnets of the same VNET.
    • Step 3: Copy hbase-site.xml from the HBase cluster to your Spark cluster.
    • Step 4: Install the connector.

    Following are the detailed steps.

    • Create an Azure Virtual Network. The VNET can be created easily from the Azure portal.
    • Set up the Spark and HBase clusters. Instructions are available for Linux and Windows.
    • On the Spark cluster, upgrade Maven (if needed) to compile the package:
    • sudo apt-get install maven
    • Copy the package code, and copy the HBase configuration file to the Spark configuration folder:
    • sudo cp hbase-site.xml /etc/spark/conf/
    • Compile:
    • mvn package -DskipTests
    • Run spark-submit:
    • $SPARK_HOME/bin/spark-submit --class org.apache.spark.sql.execution.datasources.hbase.examples.HBaseSource --master yarn-client --num-executors 2 --driver-memory 512m --executor-memory 512m --executor-cores 1 --jars /usr/hdp/current/hbase-client/lib/htrace-core-3.1.0-incubating.jar,/usr/hdp/current/hbase-client/lib/hbase-client.jar,/usr/hdp/current/hbase-client/lib/hbase-common.jar,/usr/hdp/current/hbase-client/lib/hbase-server.jar,/usr/hdp/current/hbase-client/lib/guava-12.0.1.jar,/usr/hdp/current/hbase-client/lib/hbase-protocol.jar,/usr/hdp/current/hbase-client/lib/htrace-core-3.1.0-incubating.jar --files /usr/hdp/current/spark-client/conf/hbase-site.xml /home/hdiuser/shc-master/target/hbase-spark-connector-1.0.0.jar

    Sample Program

    A sample notebook is checked in at https://github.com/AnunayTiwari/Notebook-Spark-Jupyter-SampleProgram-HBaseSparkConnector/blob/master/SampleProgramForSparkHBaseConnector.ipynb

    • Define Class and Object
    • In [5]:
    case class HBaseRecordAirline(
      col0: String, Year: Int, Quarter: Int, Month: Int, DayofMonth: Int,
      DayOfWeek: Int, FlightDate: Int, UniqueCarrier: String, AirlineID: String)
    defined class HBaseRecordAirline

    In [6]:
    object HBaseRecordAirlineTest {
      def apply(i: Int): HBaseRecordAirline = {
        val s = s"""row${"%03d".format(i)}"""
        HBaseRecordAirline(s, i, i, i, i, i, i, s, s)
      }
    }
    defined module HBaseRecordAirlineTest
    • Define the catalog: the catalog keeps the mapping between the Spark data and the HBase table.
    • In [7]:
    val cat = s"""{
            |"table":{"namespace":"default", "name":"airdelaydata_scv_Test1"},
            |"rowkey":"key",
            |"columns":{
            |"col0":{"cf":"rowkey", "col":"key", "type":"string"},
            |"Year":{"cf":"Year", "col":"Year", "type":"int"},
            |"Quarter":{"cf":"Quarter", "col":"Quarter", "type":"int"},
            |"Month":{"cf":"Month", "col":"Month", "type":"int"},
            |"DayofMonth":{"cf":"DayofMonth", "col":"DayofMonth", "type":"int"},
            |"DayOfWeek":{"cf":"DayOfWeek", "col":"DayOfWeek", "type":"int"},
            |"FlightDate":{"cf":"FlightDate", "col":"FlightDate", "type":"int"},
            |"UniqueCarrier":{"cf":"UniqueCarrier", "col":"UniqueCarrier", "type":"string"},
            |"AirlineID":{"cf":"AirlineID", "col":"AirlineID", "type":"string"}
            |}
            |}""".stripMargin
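    Since the catalog is plain JSON, it can also be generated programmatically when a table has many similarly-typed columns. Below is a small Python sketch that reproduces the catalog above; the helper name is mine, not part of the connector:

    ```python
    import json

    def airline_catalog(table="airdelaydata_scv_Test1", namespace="default"):
        """Build the Spark-HBase catalog JSON for the airline sample table."""
        int_cols = ["Year", "Quarter", "Month", "DayofMonth", "DayOfWeek", "FlightDate"]
        str_cols = ["UniqueCarrier", "AirlineID"]
        # The row key is mapped via the reserved "rowkey" column family.
        columns = {"col0": {"cf": "rowkey", "col": "key", "type": "string"}}
        for c in int_cols:
            columns[c] = {"cf": c, "col": c, "type": "int"}
        for c in str_cols:
            columns[c] = {"cf": c, "col": c, "type": "string"}
        return json.dumps({"table": {"namespace": namespace, "name": table},
                           "rowkey": "key",
                           "columns": columns})
    ```

    The resulting string can be passed wherever the `cat` literal above is used.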
    • Write Data: Given a data frame with the specified schema, this creates an HBase table with 5 regions and saves the data frame into it. Note that if HBaseTableCatalog.newTable is not specified, the table has to be pre-created.
    • In [11]: sc.parallelize(data).toDF.write.options(Map(HBaseTableCatalog.tableCatalog -> cat, HBaseTableCatalog.newTable -> "5")).format("org.apache.spark.sql.execution.datasources.hbase").save()
    • Define DataFrame
    • In [13]:
    val df = withCatalog(cat)
    df: org.apache.spark.sql.DataFrame = [UniqueCarrier: string, Month: int, col0: string, Quarter: int, FlightDate: int, AirlineID: string, DayOfWeek: int, DayofMonth: int, Year: int]
    • SQL Support
    • In [12]:
    def withCatalog(cat: String): DataFrame = {
      sqlContext
        .read
        .options(Map(HBaseTableCatalog.tableCatalog -> cat))
        .format("org.apache.spark.sql.execution.datasources.hbase")
        .load()
    }
    withCatalog: (cat: String)org.apache.spark.sql.DataFrame

    In [13]:
    val df = withCatalog(cat)
    df: org.apache.spark.sql.DataFrame = [UniqueCarrier: string, Month: int, col0: string, Quarter: int, FlightDate: int, AirlineID: string, DayOfWeek: int, DayofMonth: int, Year: int]

    In [14]:
    df.registerTempTable("table1")

    In [15]:
    val c = sqlContext.sql("select AirlineID from table1")
    c: org.apache.spark.sql.DataFrame = [AirlineID: string]

    In [16]:
    c.show()
    +---------+
    |AirlineID|
    +---------+
    |   row000|
    |   row001|
    |   row002|
    |   row003|
    |   row004|
    |   row005|
    |   row006|
    |   row007|
    |   row008|
    +---------+

    Please refer to the Spark-HBase Connector for more information on the connector.
    I am working on using the connector for the Event Hub scenario, which will make it possible to persist events from Event Hub to HBase. I will post about this useful scenario in my next blog.

    Sharing some great quotes from Ren Zhengfei, CEO of Huawei

    MSDN Blogs - Sun, 07/24/2016 - 22:27

    Sharing some great quotes from Ren Zhengfei, CEO of Huawei

    “Bigger than the world is your mind.”

    In the future we should no longer use the term “failure”; use “exploration” instead, for even defeated heroes are heroes. We need to analyze the success or failure of each project: even when it tells us a path is blocked, that too is a kind of exploration.

    Apple has a lot of money but is too conservative; we have no money, yet we invest like crazy as if we were rich. We have no money but still dare to act; Apple is so rich, so why doesn’t it do more? If Apple continues to lead human society forward, we can follow its lead; if Apple will not put in more money, it can only follow us, and we will become as rich as Apple.

    We will have two decision-making systems: one is the ideals-driven, technology-centric system; the other is customer-centric, with strategic marketing grounded in realism. The two systems will debate vigorously, compromise, and together achieve our development goals.

    When an egg is broken from the outside, it becomes a fried egg; when it breaks from the inside, a new life emerges. I do not like NFV or SDN either, because they will upset the pattern and structure of our entire communications network, but I do not want someone from the outside to break us into a “fried egg”. Embrace the challenges, embrace disruption: this is our attitude toward SDN and NFV in the future.

    Do not always dream of the glory of leadership, and do not shoulder the heavy burden of such slogans; honor is useless to Huawei.

    The original Chinese article  can be viewed at http://mp.weixin.qq.com/s?__biz=MzA3NDQxNjg5OQ==&mid=2651918415&idx=1&sn=9273f03dac2f0f8862151c4340cd46d6&scene=5&srcid=0724CiieTOyJlDFYAfrUZchI#rd

    test archive 1

    MSDN Blogs - Sun, 07/24/2016 - 22:22

    DSC Position Request with Standard Horizon GX2200 and Lowrance Link8

    MSDN Blogs - Sun, 07/24/2016 - 22:07

    One of the challenges of using VHF is that when discussing your fishing secrets, you are sharing them with the entire world.

    DSC calling can partly solve this by letting you call a particular person and having them switch to an obscure channel (like #6), but that conversation is still in the open: most boats scan all the channels and will end up finding and listening to the channel you designated as your working channel. Jamestown does a good job of going into this here.

    That said, really the only thing you probably want to keep secret while you are out there is your location, so you don’t have 100 people coming out and making your life hell. From the title you can already guess the answer: DSC Position Request, or as Lowrance has decided to call it, “LL Request”, where LL stands for “Latitude/Longitude”. While I have covered some of this in earlier posts (see below), this article will show how to do this with a Lowrance Link8 and a Standard Horizon GX2200.

    https://blogs.msdn.microsoft.com/charles_sterling/2012/01/16/displaying-dsc-information-with-a-standard-horizon-chartplotter/

    https://blogs.msdn.microsoft.com/charles_sterling/2011/11/28/displaying-ais-data-on-a-simrad-nx40-using-a-standard-horizon-matrix-2150/

    https://blogs.msdn.microsoft.com/charles_sterling/2011/12/22/icom-302-does-not-send-out-properly-formatted-dsc-vhf-messages/

    https://blogs.msdn.microsoft.com/charles_sterling/2012/01/16/how-to-wire-your-dsc-vhf-to-a-nmea-0183-chartplotter-so-you-can-remove-them/

     

    Steps to creating a Position Request with a Standard Horizon GX2200

    Step 1. Press the Call button

     

    Step 2. Select the POS Request menu

    Step 3. Select the recipient

    Step 4.  Select the type of request

     

    Step 5. Select to send the POS Request

     

     

    Step 6. Receive the acknowledgement (ACK)

     


    Step 7. Determine what you want to do with the data

     

    Performing a LL Request with a Lowrance Link8

    Same steps as above but for a Lowrance Link8

    Step 1. Press Call

    Step 2. Select LL Request

     

    Step 3.   Select the recipient

    Step 4. Send the request

    Step 5.  Wait for the Ack

    Step 6.  Receive the data
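    The position reply arrives on the radio’s NMEA 0183 output as text sentences whose checksum is the XOR of every character between “$” and “*”. As a rough illustration of what handling that data can look like, here is a hedged Python sketch that builds and validates an NMEA-style sentence; the talker ID and position fields below are illustrative, not a transcript from either radio:

    ```python
    def nmea_checksum(body: str) -> str:
        """XOR all characters between '$' and '*', as NMEA 0183 specifies."""
        cs = 0
        for ch in body:
            cs ^= ord(ch)
        return "%02X" % cs

    def make_sentence(body: str) -> str:
        """Wrap a sentence body in '$...*HH' framing."""
        return "$%s*%s" % (body, nmea_checksum(body))

    def validate(sentence: str) -> bool:
        """Check the framing and checksum of a received sentence."""
        if not sentence.startswith("$") or "*" not in sentence:
            return False
        body, _, cs = sentence[1:].partition("*")
        return nmea_checksum(body) == cs.strip().upper()

    # Illustrative geographic-position body (GLL-style fields, made up here).
    sentence = make_sentence("GPGLL,4807.038,N,01131.000,E,123519,A")
    fields = sentence[1:sentence.index("*")].split(",")
    ```

    A chartplotter or a small script listening on the NMEA wire can run exactly this kind of check before trusting a received position.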

    Small Basic – Traditional Patterns

    MSDN Blogs - Sun, 07/24/2016 - 17:00

    Do you know any traditional patterns from your country?  In Japan, there are a lot of them.  Today I introduce some.  If you can decompose a pattern into shapes (such as rectangles, ellipses, triangles, or lines), you can write code to draw it.


    Yagasuri (矢絣) – designed feathers of arrows


    Asanoha (麻の葉) – designed leaves of cannabis


    Seigaiha (青海波) – designed sea waves
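    Decomposing a pattern into shapes really is the whole trick. For instance, Seigaiha can be seen as a grid of overlapping concentric circles whose rows are offset and stacked. Here is a minimal Python sketch that computes the circle centers; the offset-grid construction is one common approach, not taken from the article, and the function name is mine:

    ```python
    def seigaiha_centers(cols: int, rows: int, r: float):
        """Centers for a Seigaiha-style wave grid: circles spaced 2*r apart
        horizontally, every other row shifted by one radius, and rows stacked
        r apart so each row overlaps the one below (a sketch, one common
        construction of the pattern)."""
        centers = []
        for j in range(rows):
            x_off = r if j % 2 == 1 else 0.0  # shift alternate rows
            for i in range(cols):
                centers.append((i * 2 * r + x_off, j * r))
        return centers
    ```

    Drawing concentric circles (or just their upper arcs) at each returned center, back row first, reproduces the wave motif in any graphics library, including Small Basic’s GraphicsWindow.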

    DAX Primer (3): Implementing Relative Dates - Visualizing Earthquake Bulletins

    MSDN Blogs - Sun, 07/24/2016 - 16:45

    Microsoft Japan Data Platform Tech Sales Team

    土井 貴彦

    In this third installment of the DAX primer, we build on the previous article and cover implementing relative dates.

    This time we use earthquake bulletin data to build a report that slices by relative dates: at what time did an earthquake occur today? What was the maximum seismic intensity of yesterday’s earthquakes? How many earthquakes occurred in the last week?

    The report we will build looks like the image below. If you are viewing on a smartphone, please see this link instead.

     

    ■ Getting the data

    First, download the calendar table created in the previous article from here; we will reuse it.

    Open the pbix file and choose “Web” from the “Get Data” menu.

     

    Enter http://typhoon.yahoo.co.jp/weather/jp/earthquake/list/ as the URL and click OK. (Data source: Yahoo! Japan Weather & Disaster)

     

    On the Navigator screen, check the “Table 0” checkbox and click “Edit”.

     

    ■ Transforming the data

    We will shape the retrieved data so that it is easy to analyze.

    First, with the “発生時刻” (occurrence time) column selected, click “Replace Values” on the “Transform” menu and remove the suffix “ごろ” (around).

        

     

    From the “Home” menu, choose “Data Type” and set it to “Date/Time”. This makes the various date/time DAX functions available.

     

    From the “Add Column” menu, choose “Date” -> “Date Only”.

     

    Add a “Time” column in the same way.

     

    Next, we transform the “震源地” (epicenter) column so it can be visualized on a map.

    Strictly speaking, points should be plotted by latitude and longitude, but since this is only a sample, we will plot by prefecture name.

    Click the “震源地” column and choose “Replace Values” on the “Transform” menu. Enter:

    Value to find = 県

    Replace with = 県,

    and click OK.

     

    Next, choose “Split Column” -> “By Delimiter”,

     

    and split the column on the comma.

     

    The 震源地.2 column is not needed, so delete it.

     

    Next, with the 震源地.1 column selected, choose “Conditional Column” from the “Add Column” menu.

     

    Handle Tokyo, Hokkaido, Osaka and Kyoto (the one 都, one 道 and two 府) as exceptions, as shown below. Epicenters that do not contain a prefecture name are treated as “その他” (other) for now.

     

    The “震源地” column has been added. The 震源地.1 column is no longer needed, so delete it.

     

    All the transformation steps so far are recorded as “Applied Steps”. In Properties, rename the query to “地震速報” (earthquake bulletins), then click “Close & Apply” on the “Home” menu.

          

     

    Create a relationship between the “日付” (date) column of the calendar table and the “Date” column of the 地震速報 table.

     

    ■ Creating a relative date column

    Now, after that long preamble, we finally get to the main topic.

    The DATEDIFF function returns the difference between two date/time values:

    DATEDIFF(<start_date>, <end_date>, <interval>)

    ※ interval can be one of year, quarter, month, week, day, hour, minute, or second.

     

    Here we use the TODAY function to compute how many days before today each date falls, as shown below.

    DATEDIFF('カレンダー'[日付],TODAY(),DAY)

    Combining this with the SWITCH or IF function, we add columns to the calendar table. For example:

    相対日付 = SWITCH(DATEDIFF('カレンダー'[日付],TODAY(),DAY),0,"本日",1,"昨日",2,"おととい","3日以前")

    or

    一週間以内 = IF(DATEDIFF('カレンダー'[日付],TODAY(),DAY)<=7,"TRUE","FALSE")

    You can create columns like these.
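    The SWITCH-based bucketing above can be mirrored in ordinary code. Here is a minimal Python sketch of the same logic, offered purely for illustration; the function name is mine and the labels are translated to English:

    ```python
    from datetime import date

    def relative_day_label(d: date, today: date) -> str:
        """Mirror of the DAX SWITCH over DATEDIFF(..., DAY):
        0 -> today, 1 -> yesterday, 2 -> two days ago, else -> 3+ days ago."""
        diff = (today - d).days
        return {0: "today", 1: "yesterday", 2: "two days ago"}.get(diff, "3+ days ago")
    ```

    Each calendar row gets one of four labels, which is exactly what makes a compact relative-date slicer possible.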

     

    You can also compute relative weeks using the week-number column created in the previous article.

    相対週 = SWITCH(IF('カレンダー'[年]-YEAR(TODAY())=0,WEEKNUM(TODAY(),21)-'カレンダー'[週番号],-1),0,"今週",1,"先週",2,"先々週","3週間以前")
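    Note how this expression first guards on the year before diffing week numbers. The same logic can be sketched in Python; WEEKNUM(..., 21) corresponds to ISO week numbering, approximated here via isocalendar, and the function name and English labels are mine:

    ```python
    from datetime import date

    def relative_week_label(d: date, today: date) -> str:
        """Mirror of the DAX above: dates outside the current calendar year
        fall straight into the default bucket; otherwise diff ISO week numbers."""
        if d.year != today.year:
            return "3+ weeks ago"
        diff = today.isocalendar()[1] - d.isocalendar()[1]
        return {0: "this week", 1: "last week", 2: "two weeks ago"}.get(diff, "3+ weeks ago")
    ```

    As with the DAX version, dates from a previous year are lumped into the oldest bucket rather than given a (negative) week difference.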

     

    This lets you place a relative-date slicer on the report, as in the report shown at the beginning.

     

    ■ Creating a last-refreshed column

    When the data source is refreshed periodically, as in this case, you may want to show the refresh time on the report.

    There are several ways to get the current time in Power BI Desktop; here is an example using DAX.

    Click “Enter Data” on the “Home” menu.

    In the “Create Table” screen, enter an arbitrary value in row 1, column 1. The example below enters “1”.

    Name the table “更新日時” (refresh time) and click “Load”.

    The table has been created. The “列1” column itself is not needed, so hide it with “Hide in report view”.

     

    From the “Modeling” menu, click “New Column” and add the following two columns.

    更新日時 (GMT) = NOW()

     

    更新日時 (GMT+9) = NOW()+9/24

    The NOW function returns the current time in the locale of the execution environment. If you use Power BI Desktop in a Japanese environment, you do not need to worry about the time difference, but when you publish the report to the cloud (Power BI Service), NOW() is evaluated in GMT inside the service. Since DAX stores datetime values as fractional days, adding 9/24 shifts the value by nine hours, which is why the second column above is needed to match Japan time.
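    The day-fraction arithmetic behind NOW() + 9/24 is just a nine-hour offset. The same conversion expressed in Python, purely as an illustration (the function name is mine):

    ```python
    from datetime import datetime, timedelta

    def to_jst(gmt_now: datetime) -> datetime:
        """Shift a GMT timestamp to Japan Standard Time: 9/24 of a day == 9 hours."""
        return gmt_now + timedelta(hours=9)
    ```

    A 15:00 GMT refresh therefore displays as 00:00 the next day in JST.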

     

    This lets you display the refresh time on the report, as shown at the beginning.

     

    ■ Summary

    This article has grown quite long, so I will skip the report-building part; please build reports freely with your own sense of design.

    The sample content embedded at the beginning can be downloaded from here; please make use of it if you like.
