MSDN Blogs

from ideas to solutions

Running FreeBSD in Azure

Thu, 05/29/2014 - 19:00

This post is a translation of Running FreeBSD in Azure, published on May 22.

As many readers of this blog know, Microsoft has worked with the Linux community for many years and has contributed a number of device drivers for the Linux kernel, known as the Linux Integration Services. These drivers are what allow Linux systems to run on Hyper-V and in Microsoft Azure.

More recently, we have also been working with the FreeBSD community to contribute similar drivers to FreeBSD 10. These are called the BSD Integration Services (BIS) for FreeBSD. This post describes how to prepare a FreeBSD 10 image and upload it so it can run in Microsoft Azure.

Disclaimer: while this post walks through the steps to run FreeBSD in Azure in detail, FreeBSD is not currently an endorsed platform on Microsoft Azure and is therefore unsupported.

Obtaining a base FreeBSD image

This post covers preparing a FreeBSD 10 x86_64 image for use in Microsoft Azure. First, install FreeBSD on a local Hyper-V host (Windows Server 2012 R2 recommended). The FreeBSD community provides pre-built FreeBSD images in VHD format, which makes this step easy.

  1. First, go to the following URL:
    ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/VM-IMAGES/10.0-RELEASE/amd64/Latest/
    and download the latest *.vhd.xz file, for example:
    ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/VM-IMAGES/10.0-RELEASE/amd64/Latest/FreeBSD-10.0-RELEASE-amd64-20140116-r260789.vhd.xz
  2. Decompress the downloaded .xz file to extract a file named FreeBSD-10.0-RELEASE-amd64-*.vhd. Many utilities can do this; I always use 7-zip.
  3. Create a new virtual machine in Hyper-V and add the extracted VHD as an IDE disk (see the figure below).

Resizing the FreeBSD VHD (if necessary)

If you install FreeBSD into a Hyper-V VM yourself from the installation ISO, you can skip this section. If you use one of the pre-built VHDs provided on ftp.freebsd.org, however, you will need to resize the VHD to make it compatible with Azure.

VHDs for Microsoft Azure must have a virtual size that is a whole number of megabytes. If this condition is not met, you will see an error like the following when creating a new FreeBSD image in Azure:

“The VHD http://<mystorageaccount>.blob.core.windows.net/vhds/FreeBSD-10.0-RELEASE-amd64-20140116-r260789.vhd has an unsupported virtual size of 21475270656 bytes. The size must be a whole number (in MBs).”

In the example above, 21,475,270,656 bytes works out to 20,480.4140625 MB, so we will resize the disk so the VHD is an even 21 GB. This is easy to do from PowerShell or from the Hyper-V management console.
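As a side note, the whole-MB requirement is easy to check yourself. The sketch below (plain Python, assuming only that 1 MB here means 1,048,576 bytes) reproduces the arithmetic from the error message and computes the smallest compliant size:

```python
# Check whether a VHD's virtual size is a whole number of MBs,
# and compute the next whole-MB size to grow to if it is not.
MB = 1024 * 1024

def next_whole_mb(size_bytes):
    """Round a byte count up to the next exact multiple of 1 MB."""
    return -(-size_bytes // MB) * MB  # ceiling division

size = 21475270656  # the size reported in the Azure error message
print(size % MB == 0)             # False -> Azure rejects this VHD
print(size / MB)                  # 20480.4140625 MB, as in the post
print(next_whole_mb(size) // MB)  # 20481 MB would be the minimal fix
```

The post simply grows the disk to an even 21 GB with Resize-VHD, which also satisfies the requirement; growing a VHD is safe, but never shrink one below its partitioned size.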

  • On a Hyper-V server, open a PowerShell prompt and run the Resize-VHD cmdlet to grow the VHD:

PS > Resize-VHD -Path \<PATH>\FreeBSD-10.0-RELEASE-amd64-20140116-r260789.vhd -SizeBytes 21GB

  • You can also resize the disk from the Hyper-V management console. In the FreeBSD VM's settings, select the FreeBSD virtual IDE drive and click [Edit] to open the VHD edit wizard.

Next, boot the FreeBSD system and log in as root. The pre-built VHDs provided by the community have no root password set by default, so no password is required.

Once logged in, run gpart to inspect the disk partitions. Because the VHD was resized, disk da0 should be marked CORRUPT:

root@:~# gpart show

=>        34  41942973  da0  GPT  (21G) [CORRUPT]
          34      1024    1  freebsd-boot  (512K)
        1058   2097152    2  freebsd-swap  (1.0G)
     2098210  39844797    3  freebsd-ufs  (19G)

This is expected and fine. Run the following command to repair da0:

root@:~# gpart recover da0
da0 recovered

root@:~# gpart show

=>        34  44040125  da0  GPT  (21G)
          34      1024    1  freebsd-boot  (512K)
        1058   2097152    2  freebsd-swap  (1.0G)
     2098210  39844797    3  freebsd-ufs  (19G)
    41943007   2097152       - free -  (1.0G)

Optionally, you can resize the FreeBSD root partition to reclaim the freed space. Note, however, that resizing the root file system requires booting from a FreeBSD live CD, so it takes extra work. If you do want to reclaim the space, grow the root file system as follows:

root@:~# gpart resize -i 3 da0
da0p3 resized

root@:~# gpart show

=>        34  44040125  da0  GPT  (21G)
          34      1024    1  freebsd-boot  (512K)
        1058   2097152    2  freebsd-swap  (1.0G)
     2098210  41941949    3  freebsd-ufs  (20G)

root@:~# growfs /dev/da0p3

The VHD is now ready. Next, we will prepare the FreeBSD system itself for Azure.

Preparing the FreeBSD system for Azure

This part is straightforward: we just install a few components and make a few configuration changes so the system can run in Azure.

  • Enable networking
    Configure the hn0 interface so that networking can be stopped and started with /etc/rc.d/netif.

root@:~# echo 'ifconfig_hn0="SYNCDHCP"' >> /etc/rc.conf

root@:~# service netif restart

  • Enable SSH
    Enable the SSH daemon in /etc/rc.conf and generate new host keys.

root@:~# echo 'sshd_enable="YES"' >> /etc/rc.conf

 

root@:~# ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key

root@:~# ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key

root@:~# service sshd restart

If you want to continue configuring the FreeBSD guest over SSH, either create a new non-privileged user, or set a password for the root user and set "PermitRootLogin yes" in /etc/ssh/sshd_config (the latter is not recommended).

  • Install Python 2.7 and the other required modules

root@:~# pkg install python27 py27-asn1

The package management tool is not yet installed on your system.

Do you want to fetch and install it now? [y/N]: y

 

Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/freebsd:10:x86:64/latest, please wait…

Verifying signature with trusted certificate pkg.freebsd.org.2013102301… done

Installing pkg-1.2.7_2… done

If you are upgrading from the old package format, first run:

 

  # pkg2ng

Updating repository catalogue

digests.txz                                                   100% 1071KB   1.1MB/s   1.1MB/s   00:00

packagesite.txz                                          100% 4929KB   4.8MB/s   4.8MB/s   00:01

Incremental update completed, 22926 packages processed:

0 packages updated, 0 removed and 22926 added.

The following 4 packages will be installed:

 

        Installing gettext: 0.18.3.1

        Installing python27: 2.7.6_4

        Installing py27-setuptools27: 2.0.1

        Installing py27-asn1: 0.1.4_1,1

 

The installation will require 81 MB more space

 

12 MB to be downloaded

 

Proceed with installing packages [y/N]: y

Next, create a symbolic link so the Python 2.7 binary can be found at /usr/bin/python:

root@:~# ln -s /usr/local/bin/python2.7 /usr/bin/python

  • Install sudo
    On Azure, the root account is normally disabled, and commands are run from a non-privileged user account using sudo to elevate privileges.

root@:~# pkg install sudo
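The post does not show the sudo configuration itself. As a sketch under assumptions (the group name and the path used by the pkg-installed sudo are illustrative): you would add your non-privileged user to a group such as wheel, then enable that group via visudo:

```
# /usr/local/etc/sudoers (always edit with visudo)
# Let members of the wheel group run any command as any user,
# authenticating with their own password:
%wheel ALL=(ALL) ALL
```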

  • Install the Azure Linux agent
    The latest release of the Azure Linux agent is available on GitHub:
    https://github.com/Azure/WALinuxAgent/releases
    For FreeBSD, version 2.0.5 or later is recommended. In general, the remote 2.0 branch is quite stable, so here we pull the latest agent directly from that branch, which gives us the most recent agent with FreeBSD support.

root@:~# pkg install wget

......

 

root@:~# wget https://raw.githubusercontent.com/Azure/WALinuxAgent/2.0/waagent

root@:~# mv ./waagent /usr/sbin/

root@:~# chmod 755 /usr/sbin/waagent

 

root@:~# /usr/sbin/waagent -install

The Azure agent is now installed and running.

  • (Optional) Deprovision the system before uploading the VHD to Azure
    Deprovisioning is done to "generalize" the image. It deletes the SSH host keys, disables the root user and removes its password, deletes cached DHCP lease entries, and so on. This step is generally recommended, but it can be skipped if you do not plan to share the VHD widely.

root@:~# /usr/sbin/waagent -deprovision

WARNING! The waagent service will be stopped.

WARNING! All SSH host key pairs will be deleted.

WARNING! Cached DHCP leases will be deleted.

WARNING! Nameserver configuration in /etc/resolv.conf will be deleted.

WARNING! root password will be disabled. You will not be able to login as root.

Do you want to proceed (y/n)? y


Note: after running this command, you may see a warning because the ovf-env.xml file does not exist. This warning can safely be ignored.

  • Shut down the FreeBSD virtual machine

That's everything. Hopefully the whole process was quick and painless.

Uploading the FreeBSD VHD to Azure

There are several ways to upload a VHD to Azure. In this post, we use the Azure PowerShell tools to upload the VHD to a storage account.

  • First, make sure the FreeBSD virtual machine on your Hyper-V server is shut down.
  • Then upload the VHD with the Add-AzureVhd cmdlet:

PS C:\> $destination = "http://MYACCOUNT.blob.core.windows.net/vhds/FreeBSD-10.0-RELEASE-amd64-20140116-r260789.vhd"

PS C:\> $localfile = "C:\PATH\FreeBSD-10.0-RELEASE-amd64-20140116-r260789.vhd"

PS C:\> Add-AzureVhd -Destination $destination -LocalFilePath $localfile

  • Once the upload completes, you can create an image from the VHD. From there you can provision any number of FreeBSD virtual machines in Azure:

PS C:\> $destination = "http://MYACCOUNT.blob.core.windows.net/vhds/FreeBSD-10.0-RELEASE-amd64-20140116-r260789.vhd"

 

PS C:\> Add-AzureVMImage -ImageName FreeBSD10.0 -MediaLocation $destination -OS Linux

Note that the PowerShell tools do not yet support an "-OS FreeBSD" parameter, so you must specify "Linux" as the OS type here.

Creating a FreeBSD virtual machine in Azure

You are now ready to create a new FreeBSD virtual machine in Azure. Here again you can use the PowerShell or CLI tools, or you can provision the FreeBSD 10 image from the portal in the usual way:

  • Log in to the Azure management portal (https://manage.windowsazure.com/) and click [NEW], [COMPUTE], [VIRTUAL MACHINE], [FROM GALLERY].
  • In the [Choose an Image] menu, click [MY IMAGES].
  • Select your FreeBSD 10 image and follow the prompts to set the host name and a password or SSH key.

Once provisioning completes, your newly created FreeBSD 10 VM is running in Azure. That's it!

Resource Manager & Portal Preview (Cloud Cover Show episode 138)

Thu, 05/29/2014 - 18:56

Here is a summary of Cloud Cover Show episode 138. In this episode, Nick Harris and Chris Risner are joined by two Program Managers (PMs) from the "Azure Application Platform" team:

· Gautam Thapar, who is responsible for the resource management service

· Nathan Totten, a former host of the show who has since joined Gautam's team. He works on the Azure portal preview, which he demonstrates in the video.

The discussion focuses on two announcements from Build 2014: "ARM", the Azure Resource Manager, and the new portal (the latter gets a dedicated show; see the summary of episode 139).

First, a reminder of why Resource Manager matters. Today, the old production portal presents a "resource"-oriented view: when you load the management portal, the menu on the left lists the resources for the various services — Web Sites, VMs, Mobile Services, SQL Databases, and so on.

However, while it presents these resources individually, it does not let you view them the way your applications are actually designed. Take the example of an application consisting of a web site that uses a database: in the old portal, those services are presented and managed separately.

One of the basic concepts Resource Manager introduces is the Resource Group. The idea is to gather resources that share a common lifecycle into a single unit of management, so that the following operations can be performed together: deployment, update, and deletion.
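To make the resource-group idea concrete, resources are typically declared together in a declarative template and deployed as one unit. The sketch below is a minimal, hypothetical Azure Resource Manager template (the schema URL, API versions, and names are illustrative assumptions, not content from the show); everything declared in it is deployed, updated, and deleted together:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2014-06-01",
      "name": "demo-website",
      "location": "West Europe",
      "properties": { "serverFarm": "demo-hosting-plan" }
    },
    {
      "type": "Microsoft.Sql/servers",
      "apiVersion": "2014-04-01",
      "name": "demo-sqlserver",
      "location": "West Europe",
      "properties": { "administratorLogin": "demoadmin" }
    }
  ]
}
```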

Unlike the old portal, the new portal gives you a complete view of your application: more detail on billing, on the services, and on the resources that compose it. ALM is not left behind either: rather than one portal for managing your team projects and a separate portal for managing resources, the new model offers a central point of management through a unified interface.

In his demo, Gautam takes the example of a web site paired with a database and uses it to walk us through Resource Manager in the new portal. Along the way, we discover several new features while creating the demo application:

  • An introduction to resource groups: the concept and its implementation in the portal via blades. He also highlights the following point:
  • Notifications: the portal has a tab dedicated to notifications. Notifications are tied to the account and group together all of the activity associated with it, so if you perform actions in parallel through the portal and through the management APIs, everything flows through the same notification service.

Note that the move to the new resource-group model is transparent and compatible with the old portal; your existing configurations will keep working without side effects.

If you want to dig deeper into the topic, I recommend Gautam and Nathan's session from Build: http://channel9.msdn.com/Events/Build/2014/2-607

You can find all of the Cloud Cover Show episodes (http://aka.ms/cloud-cover-show) summarized in French at http://aka.ms/cloud-cover-show-fr.

 

If you would like to try this yourself and do not yet have a Windows Azure account or an MSDN subscription, open a free trial account: you will get €150 of resources for one month.

 

Below are a few screenshots from the episode, with timestamps so you can jump straight to the parts that interest you.

David (@davidcoppet)

Introduction of the main guest: Gautam Thapar

"Surprise arrival" of Nathan Totten

The show really starts at 4:22

Comparison — a look back at the old, resource-centric portal — 4:45

Comparison — a walkthrough of the new portal — 5:39

Version Control Guidance v3.0 flight has landed delivering three great guides and lots of visuals

Thu, 05/29/2014 - 18:33

We are pleased to announce that v3.0 of the Version Control (formerly Branching and Merging) Guide has shipped, after Bill Heys completed the intensive copy-editing.

v3.BETA –> stable v3.0!

Also see:

what’s new?

The third version of this blockbuster guidance has been split into separate topics, as summarized below, allowing you to pick the "world" (guide) you are interested in. This release delivers a new, crisper, more compact style that is easier to consume on multiple devices without sacrificing any content. The content has been updated to align with the latest Visual Studio technologies and incorporates feedback from readers of the previous guidance.

Branching Strategies
Practical guidance on a number of (not all) common Branching Strategies and usage thereof.

  • Branching concepts
  • Branching strategies
  • Walkthroughs

Team Foundation Version Control (TFVC)
Practical guidance on how to use Team Foundation Version Control (TFVC) features.

  • Workspaces
  • Merging
  • New features, e.g. CodeLens
  • Walkthroughs

Dependency Management with NuGet
Practical guidance on dependency management, using NuGet with Visual Studio.

  • Managing shared resources
  • Dependency management
  • Walkthroughs

where’s the stuff?

lots of visuals!

All illustrations used in the guidance and the quick reference posters are included in the ZIP package. Re-use the images in your presentations, documentation, etc.

what’s cooking?
  • The Git for TFVC user guide is still under development and not included in v3.0.
the team

A special THANK YOU to the team of ALM Rangers who volunteered their personal time and contributed their real-world experience to deliver this solution: Alan Wills, Andy Lewis, Anna Galaeva, Bill Heys, Dan Hellem, Esteban Garcia, Gordon Beeming, Hamid Shahid, James Waletzky, Krithika Sambamoorthy, Larry Brader, Malcolm Hyson, Matt Mitrik, Michael Fourie, Micheal Learned, Michel Perfetti, Peter Provost, Robert MacLean, Fabio Stawinski, Taavi Koosaar, Tina Botes, Tommy Sundling, Vinicius Hana, Willy-Peter Schaub.

Did I forget anyone? If yes, please ping me with a bang (!)

please send candid feedback!

We can’t wait to hear from you and learn more about your experience using the guidance. Here are some ways to connect with us:

  • Add a comment below.
  • Ask a question on the respective CodePlex discussion forum.
  • Contact me on my blog.

Dynamics GP Developer Insights: Service Based Architecture Overview

Thu, 05/29/2014 - 18:00

Hi everyone, my name is Kevin Racer and I’m a Senior Program Manager Lead on the Dynamics GP team.  I spend my days working on improving our architecture and tools. 

Overview

A good number of you have probably seen or heard about the change to our architecture designed to better embrace services and the cloud.  We've officially referred to it as our Service Based Architecture, code-named "DexNext".  This change is scheduled for the Microsoft Dynamics GP 2015 release around Q4 2014 and has a number of pieces that developers, partners and customers need to be aware of if they want to take full advantage of these new and exciting capabilities.  In this article I am going to broadly describe the architecture and its technologies, and then in subsequent blogs the team will follow up with deeper dives on the specific components.

Let's start with what Service Based Architecture means.  If you think of the term service oriented architecture (SOA) then you are on the right track because that's what the new architecture essentially models.  We call it service "based" because it more accurately describes what we are doing.  Any functionality in any dictionary can be exposed as a service call, which makes it the "base" for the services as well as the base for web and desktop clients.  Our aim with this was consistent with what we did with the web client in that we are evolving our architecture by adding technological capability while continuing to leverage existing resources.

Technologically speaking the architecture lets developers precisely control what gets exposed through the endpoint and includes the ability to define in .NET the makeup of the data you want to exchange.  While .NET is an added factor, the experience is still primarily a sanScript one so for those of you with that skillset it continues to have value.  The endpoint API conforms to a REST based pattern which is very simple to understand yet allows the invocation of some very complex operations.  We will also provide a Discovery Endpoint that describes all available service calls in a particular deployment.  This description includes things like parameter type requirements, a description of the method and an example URI to call it.  You can think of Discovery as real time documentation that is always current.
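As a purely hypothetical sketch of the kind of entry a discovery document might contain (every field name and URI below is invented for illustration; the actual wire format had not been published at the time of this post):

```json
{
  "methodName": "SalesOrderCreate",
  "description": "Creates a sales order in the specified company",
  "parameters": [
    { "name": "companyId", "type": "System.Int32", "required": true },
    { "name": "order", "type": "SalesOrder", "required": true }
  ],
  "exampleUri": "POST https://gpserver/..."
}
```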

In order to deliver this new service based capability we are also deepening our interaction capability with .NET.  In GP 2015 you will be able to directly reference .NET assemblies and new up objects to use.  This helps the service exposure side by enabling you to use a .NET object as a single parameter instead of a bunch of singleton Dex types.  It also gives you greater ability to consume a service since you can now leverage all the coolness of the .NET platform to do your serialization, http requests etc.  As an added bonus this also helps your non service sanScript code do more (think collections)!!

Authentication to the service will initially support Windows authentication.  We are planning to expand that to Azure AD and subsequently to claims-based methods post-RTM.  From an authorization perspective, the architecture integrates with the GP security model, so access to the service calls can be tightly controlled by user/role, giving customers complete control over who can read, create or update data.

I said it pretty well in a past blog Roadshow Content Review: Part 1 so to sum up I'll plagiarize myself a bit here and say "The Service Based Architecture broadens our ability to interact with the Cloud, provides a connective base for companion application development and greatly broadens our ability to think big about the future of Dynamics GP and the functionality we can begin to build and offer." 

Keep tuned to this blog for more in depth articles from the team.

Until next time

Kevin

Internet of Things, Arduino/Netduino: Counting the closure of a switch

Thu, 05/29/2014 - 17:24
Counting things and then storing the counts for later use in data analysis requires a number of pieces. One of the main ones: the number of times a switch closes is an important data point in many categories of work — manufacturing, retail, physical security and safety, to name a few. It seems simple: a switch closes and you count that as a 1, so that the variable switchclosure = switchclosure + 1; or does it? Well, in the physical world of mechanical things, magnetic fields, chemical...(read more)
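The "or does it?" hints at contact bounce: a mechanical switch can open and close many times within a few milliseconds before settling, so naive counting over-counts. A common software fix is to accept a new state only after it has held steady for several consecutive samples. Here is a minimal debounce sketch (a Python simulation of sampled input, not Arduino/Netduino code):

```python
def count_closures(samples, stable_needed=3):
    """Count 0->1 transitions in a stream of raw switch samples,
    accepting a new state only after it has been seen stable_needed
    times in a row (a simple software debounce)."""
    state = 0      # debounced (accepted) state
    candidate = 0  # most recent raw reading
    run = 0        # how many consecutive samples matched candidate
    closures = 0
    for raw in samples:
        if raw == candidate:
            run += 1
        else:
            candidate, run = raw, 1
        if run >= stable_needed and candidate != state:
            state = candidate
            if state == 1:
                closures += 1
    return closures

# A noisy press: the contact bounces before settling closed, then opens.
noisy = [0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0]
print(count_closures(noisy))  # 1 closure, despite the bounce
```

On a microcontroller, the same idea applies with samples taken on a timer tick; three stable samples at a few milliseconds apart is a typical, hardware-dependent choice.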

Issue with Application Insights -5/30- Mitigated

Thu, 05/29/2014 - 17:06

Final Update: Fri, May 30 2014 02:40 AM

The issue is now fully mitigated. Customers will no longer see any latency in data. We again confirm that there was no data loss during the impact window. We continue to work with our Azure partner team on a full post-mortem so that we can improve the resiliency of our services.

We appreciate your patience while we worked to mitigate the issue.

-Application Insights Service Delivery Team

----------

Update: Fri, May 30 2014 01:25 AM

The root cause was a transient networking issue that caused processing failures in our data pipeline. The networking issue self-resolved and we are working to restore full service health. F5 streaming data is back online; however, both Usage data and Ad-Hoc search are experiencing latency. We will provide an update once the latency is within our operating goals. Please note that there was no data loss during this event. We continue to work to restore all the affected services and will provide more updates in the next 2 hours.

----------

Initial Update: Fri, May 30 2014 12:04 AM

Application Insights is investigating issues that are currently affecting Usage data, ad-hoc log search and F5 streaming data. Customers trying to access this data may experience errors; however, no data is being lost. Our DevOps team is engaged to root-cause the issue, but the cause is unknown at this moment.

We will provide more information as we progress.

We apologize for the inconvenience this may have caused.
 
-Application Insights Service Delivery Team

Lifecycle Services Maintenance - Thursday June 5, 2014

Thu, 05/29/2014 - 16:57

Dynamics AX Lifecycle Services will undergo scheduled infrastructure maintenance on June 5, 2014 (detailed timeline below).

We appreciate your patience as we work to improve the site for you.

LCS will be down during this period:

PST: 11:00 - 15:00
UTC: 19:00 - 23:00

How to check whether a domain is available in Office 365

Thu, 05/29/2014 - 16:55

One of the most important topics in Office 365 is the third-level "domain" that every Office 365 tenant must have associated with it, and which for obvious reasons cannot be the same as any other. It is also one of the questions I get asked most often. (In another post I will talk exclusively about domains in Office 365.)

For the full details, see here.

Regards

@ferglo

Wish I could author DSC resources in C#!

Thu, 05/29/2014 - 16:38

In a previous blog, we learned how you can use your PowerShell skills to author DSC resources very easily. Still, there are folks (we met some at TechEd NA) who want to author their DSC resources in C# because they are more productive with it than with the PowerShell language. Well, you can fully leverage the power of DSC by writing your resources in C#. In this blog, we will explore how you can write a C#-based DSC resource and later seamlessly consume it from your DSC configurations.

Authoring DSC resources in C#

For the purpose of this blog, we will write a DSC resource named “xDemoFile”. This resource will be used to assert the existence of a file and its contents. It is similar to a File resource but with limited functionalities.

I)    Project Setup

a)    Open Visual Studio

b)    Create a C# project and provide a name (such as "cSharpDSCResourceExample")

c)    Select "Class Library" from the available project templates

d)    Hit "OK"

e)    Add an assembly reference to System.Management.Automation.dll, preferably from the PowerShell SDK [but you can also add the reference from the global assembly cache, GAC (<systemDrive>\Windows\Microsoft.NET\assembly\GAC_MSIL\System.Management.Automation\v4.0_3.0.0.0__31bf3856ad364e35\System.Management.Automation.dll)].

f)    Update the assembly name to match the DSC resource name (right-click the project, hit Properties, and change the Assembly Name to MSFT_xDemoFile)

 

II)           Resource Definition

As with a script-based DSC resource, you will need to define the input and output parameters of your resource in <ResourceName>.schema.mof. You can generate the schema of your resource using the Resource Designer tool.

Save the following into a file named MSFT_xDemoFile.Schema.mof:

[ClassVersion("1.0.0"), FriendlyName("xDemoFile")]
class MSFT_XDemoFile : OMI_BaseResource
{
    [Key, Description("path")] String Path;
    [Write, Description("Should the file be present"), ValueMap{"Present","Absent"}, Values{"Present","Absent"}] String Ensure;
    [Write, Description("Content of file.")] String Content;
};

 

 

III)         Resource Implementation

In order to write DSC resources in C#, you need to implement three PowerShell cmdlets. PowerShell cmdlets are written by inheriting from PSCmdlet or Cmdlet. Details on how to write a PowerShell cmdlet in C# can be found in the MSDN documentation.

See below for the signature of the cmdlets:-

Get-TargetResource

    [OutputType(typeof(System.Collections.Hashtable))]
    [Cmdlet(VerbsCommon.Get, "TargetResource")]
    public class GetTargetResource : PSCmdlet
    {
        [Parameter(Mandatory = true)]
        public string Path { get; set; }

        /// <summary>
        /// Implement the logic to return the current state of the resource as a
        /// hashtable whose keys are the resource properties and whose values are
        /// the corresponding current values on the machine.
        /// </summary>
        protected override void ProcessRecord()
        {
            // download the zip file at the end of this blog to see a sample implementation
        }
    }

Set-TargetResource

    [OutputType(typeof(void))]
    [Cmdlet(VerbsCommon.Set, "TargetResource")]
    public class SetTargetResource : PSCmdlet
    {
        private string _ensure;
        private string _content;

        [Parameter(Mandatory = true)]
        public string Path { get; set; }

        [Parameter(Mandatory = false)]
        [ValidateSet("Present", "Absent", IgnoreCase = true)]
        public string Ensure
        {
            // default to Present
            get { return this._ensure ?? "Present"; }
            set { this._ensure = value; }
        }

        public string Content
        {
            get { return string.IsNullOrEmpty(this._content) ? "" : this._content; }
            set { this._content = value; }
        }

        /// <summary>
        /// Implement the logic to set the state of the machine to the desired state.
        /// </summary>
        protected override void ProcessRecord()
        {
            // Implement the Set method of the resource.

            /* Uncomment this section if your resource needs a machine reboot.
            PSVariable DscMachineStatus = new PSVariable("DSCMachineStatus", 1, ScopedItemOptions.AllScope);
            this.SessionState.PSVariable.Set(DscMachineStatus);
            */
        }
    }

Test-TargetResource    

    [Cmdlet("Test", "TargetResource")]
    [OutputType(typeof(Boolean))]
    public class TestTargetResource : PSCmdlet
    {
        private string _ensure;
        private string _content;

        [Parameter(Mandatory = true)]
        public string Path { get; set; }

        [Parameter(Mandatory = false)]
        [ValidateSet("Present", "Absent", IgnoreCase = true)]
        public string Ensure
        {
            // default to Present
            get { return this._ensure ?? "Present"; }
            set { this._ensure = value; }
        }

        [Parameter(Mandatory = false)]
        public string Content
        {
            get { return string.IsNullOrEmpty(this._content) ? "" : this._content; }
            set { this._content = value; }
        }

        /// <summary>
        /// Write a Boolean value indicating whether the current machine is in the
        /// desired state.
        /// </summary>
        protected override void ProcessRecord()
        {
            // Implement the Test method of the resource.
        }
    }

 

IV)        How to handle Machine reboot in C# based DSC Resources.

If your resource needs a machine reboot, the way to indicate that in a script-based DSC resource is to set the global variable $global:DSCMachineStatus to 1 in the Set-TargetResource function of the resource. To do the same in a C#-based DSC resource, you need to set the same variable in the runspace where the Set cmdlet of the resource is executed.

Adding the following two lines will signal a machine reboot to the DSC engine.

PSVariable DSCMachineStatus = new PSVariable("DSCMachineStatus", 1, ScopedItemOptions.AllScope);

this.SessionState.PSVariable.Set(DSCMachineStatus);

 

 

Consume C# based resources

I)        How to deploy C# based DSC Resource

The folder structure of C# based DSC resource is the same as the script based resource. Please refer to this blog to see how DSC resources should be deployed in your machine.

The output binaries from your project and the schema mof of the resource should be deployed to the correct path before you can use it to author or apply configurations.

Example: if you deploy the resource under a module "CSharpDSCResource" inside $env:ProgramFiles, the folder structure would look like:

             $env:ProgramFiles\WindowsPowerShell\Modules\CSharpDSCResource\DSCResources\MSFT_XDemoFile\MSFT_XDemoFile.dll

             $env:ProgramFiles\WindowsPowerShell\Modules\CSharpDSCResource\DSCResources\MSFT_XDemoFile\MSFT_XDemoFile.Schema.mof

II)        Create Configuration:-

Configuration CsharpExample
{
    Import-DscResource -Module CSharpDSCResource

    Node ("localhost")
    {
        xDemoFile fileLog
        {
            Path    = "c:\foo.txt"
            Content = "content example"
            Ensure  = "Present"
        }
    }
}

 

III)        Run the configuration:-

Start-DSCConfiguration -ComputerName localhost -Path .\CsharpExample\ -Verbose -Wait

             

Berhe Abrha

Windows PowerShell Team


Cloud Load Testing and Application Insights at the Software Developer Network Event 6-6 Netherlands

Thu, 05/29/2014 - 16:34

If you happen to be in the Netherlands on June 6th and not fishing for Chinook off the coast of Washington (where I plan on being), make sure to check out the Software Developer Network Conference at the Achmea Conference Center in beautiful Zeist.

Visual Studio Cloud Load Testing and Application Insights

Application Insights solves the age-old dilemma of collecting too little information to be actionable vs. collecting so much information that you negatively impact your application. It does this through the integration of monitoring and development tools. This session will show how to quickly identify performance issues and bottlenecks in your applications through Application Insights and the power of the Microsoft Monitoring Agent. In this session, Hassan Fadili will show how Application Insights can monitor event data, analyze it for categories of issues, link to the source line of the issue, monitor memory usage for leak detection, and output memory snapshots and IntelliTrace logs that can be analyzed using Visual Studio 2013.

Hassan Fadili, FadiliCT Consultancy

Hassan Fadili works as a freelance lead architect/consultant and VS ALM consultant (MVP) for his own company, FadiliCT Consultancy (http://www.fadilict-consultancy.nl). Hassan is very active in the community, a co-board member of DotNed (the .NET & VS ALM user group in the Netherlands), and the VS ALM track owner. Hassan blogs at http://hassanfad001.blogspot.com and can be reached at hfadili@fadilict-consultancy.nl, hassan@dotned.nl, hassanfad11@hotmail.com, or on Twitter at @HassanFad.

New AX 2012 R3 Project White Papers

Thu, 05/29/2014 - 16:02

I'm happy to announce the availability of three new white papers covering new capabilities in Dynamics AX 2012 R3 Project Management and Accounting:

We will have additional white papers in the coming weeks.

 

How to monitor and respond to memory use

Thu, 05/29/2014 - 15:58
A colleague asked me how to run code in response to a low-memory condition. Apparently, data is buffered and can be flushed to disk or a server when memory gets low. So I showed him the code below. Start Visual Studio: File->New->Project->C# WPF application. Paste the code below into the MainWindow.xaml.cs file, then hit F5 to run it. The sample runs some code in a background thread that: 1. Gets the performance counters for the current process. There may be a delay in getting the counters so...(read more)

Releasing a companion universal Windows app

Thu, 05/29/2014 - 15:21

A few months ago, I released a simple task management app for Windows 8.1 to the Windows store. Since then, I've updated it a few times as I've had new ideas or needed to resolve some issues. However, what I really wanted to do was to create a Windows Phone version, and so I did. Today it went live in the Phone Store.

To get it onto the phone store, I updated the original Windows 8.1 version to be a universal Windows app (which was a piece of cake), and then I set about making a phone-friendly version. Here's what that entailed.

 

Going Universal

Actually updating the Visual Studio solution to create a universal version was the work of one menu option. This created a new project in my solution, and a third branch for shared code. I added the new project to my source control system, and that was that. Easy.

 

Data model, file handling etc.

This was the simplest part of the entire process. I had to do absolutely nothing to change the way the data (all the tasks) was stored internally and then serialized for saving and loading. Not a thing. All the code I had written for manipulating the tasks stayed the same. I could even share the same methods between my two projects, so that any changes or updates I made would be automatically reflected. I had one issue, which I already covered in a previous posting: when saving data, you must remember to use an async method and wrap it in a deferral to make sure it completes before the app is terminated.
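For reference, that deferral pattern looks roughly like this in a universal app's App.xaml.cs; `SaveDataAsync` is a hypothetical placeholder for your own serialization code:

```csharp
// In App.xaml.cs: hook the Suspending event and hold a deferral so the
// async save completes before the app is suspended or terminated.
private async void OnSuspending(object sender, SuspendingEventArgs e)
{
    var deferral = e.SuspendingOperation.GetDeferral();
    await SaveDataAsync();   // hypothetical: serialize the tasks to storage
    deferral.Complete();
}
```

Without the deferral, the system may consider suspension complete as soon as the handler returns, before the awaited save has finished.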

The wonderful thing was that the first time I ran the app on the phone, it automatically picked up the tasks I had entered on my desktop computer. More about this later. 

 

User Interface changes

I did, however, want to change a few things about my user interface, so I created a phone-specific MainPage.xaml file and its related code-behind page. I tried to consider which functions would be most important to a user (adding a new task, editing a task), so I made those commands front and center with single taps. Other commands I added to a pop-up menu which appears when you tap-and-hold on a task, or when you tap one of the buttons in the app bar. This was a simplification of the desktop/tablet version, which was more about dragging-and-dropping or using the mouse to select a task and then applying an action.

This didn't really take too long to code: all I was really doing was changing the event handlers I was working with and tying them into the same task management methods I had already written. The important part was to use the app for a few days and judge whether I'd made the right call between key functions and lesser-used functions. My big concern was how discoverable the tap-and-hold pop-up menu would be, so I erred on the side of paranoia and made the default text for each task a little reminder in its own right.

     

 

As you can see, the desktop/tablet version has giant buttons for switching to different days (rather than using the app bar). The tasks in this version also support a single tap to highlight them, which activates the actions in the app bar. That version supports working in landscape or portrait mode, while the phone version is limited to portrait only. The phone version also did away with the day and time display, and it stops short of going full-screen so that signal strength, battery life, etc. can still be shown.

 

Store mechanics

Publishing the phone version to the store was straightforward. When I went to the Windows Phone Dev Portal Dashboard and selected the name of the app, a warning popped up asking, in effect, was I sure I wanted to link this phone app with my existing desktop/tablet app. When apps are linked, buying one also buys the other (which is fine by me, as both are free), but it also means they can share data and in-app purchases. The latter didn't bother me either, but the former is actually a real bonus. As I serialize and save the data containing the tasks to the roaming directory by default, the tasks are automatically synced between the phone version and the desktop version. And this is why, when I first installed and ran the version on my phone, the tasks I had created in the desktop app appeared. Magic, I tells you, magic.

 

Conclusion

Obviously not all apps can be converted to universal this easily, but in this case the task was surprisingly simple. If you have an app which is heavily XAML-based, or based on HTML/WinJS, you will probably find that it will just work when you create a universal version. Sure, the user interface will be broken in a dozen ways, but the app will essentially work, and you can focus on taking best advantage of the new form factor. Impressive stuff, you've got to agree. Now, go make some more apps!

Scrum at a Glance (Visual)

Thu, 05/29/2014 - 15:05

I’ve shared a Scrum Flow at a Glance before, but it was not visual.

I think it’s helpful to know how to whiteboard a simple view of an approach so that everybody can quickly get on the same page. 

Here is a simple visual of Scrum:

There are a lot of interesting tools and concepts in Scrum.  The definitive guide on the roles, events, artifacts, and rules is The Scrum Guide, by Jeff Sutherland and Ken Schwaber.

I like to think of Scrum as an effective Agile project management framework for shipping incremental value.  It works by splitting big teams into smaller teams, big work into smaller work, and big time blocks into smaller time blocks.

I try to keep whiteboard visuals pretty simple so that they are easy to do on the fly, and so they are easy to modify or adjust as appropriate.

I find the visual above is pretty helpful for getting people on the same page pretty fast, to the point where they can go deeper and ask more detailed questions about Scrum, now that they have the map in mind.

You Might Also Like

Agile vs. Waterfall

Agile Life-Cycle Frame

Don’t Push Agile, Pull It

Scrum Flow at a Glance

The Art of the Agile Retrospective

Arduino/Netduino: Using the RGB SMT LED Keyes Sensor

Thu, 05/29/2014 - 14:50
I am using the Keyes sensor, which came in a kit I bought from Amazon.com; you can get the sensor kit cheaper elsewhere (now I know).  The LED looks like the picture.  The LED has blue, red, and green elements on it.  The leads on my sensor appear to be labeled incorrectly: the green and red are reversed, which could be a problem if you want to issue a red light to stop something.  In this case, you need to know that manufacturers often make these kinds of mistakes, and if you order...(read more)

BASELINES with Performance Analyzer for Microsoft Dynamics (DynamicsPerf) are here !!

Thu, 05/29/2014 - 14:16
Performance Analyzer for Microsoft Dynamics (DynamicsPerf) is built to help resolve performance issues with Microsoft Dynamics products. As currently released, it does a very good job at finding the following issues: SQL configuration, database schema (indexes), application code (limited), application configuration, and poorly executing SQL statements. Given all of this, there were still questions that were consistently asked that we couldn't answer very well. One of the biggest...(read more)

Convert host-named site collection to Web application

Thu, 05/29/2014 - 14:11

You've identified a host-named site collection in your environment where application pool isolation has become important. To achieve this level of isolation, you will need to create a new web application with its own application pool.

I've created a PowerShell script to illustrate one way of doing this. This script can be downloaded from the TechNet Gallery located here: http://gallery.technet.microsoft.com/office/Migrate-host-named-80d73c79
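The downloadable script covers the details; the overall shape of such a migration is roughly the following. All names, URLs, accounts, and paths below are placeholders, and you would normally remove or detach the original host-named site collection before restoring to the same URL:

```powershell
# Back up the host-named site collection, create an isolated web application
# with its own application pool, then restore the site into it.
Backup-SPSite -Identity "http://teams.contoso.com" -Path "C:\Backups\teams.bak"

New-SPWebApplication -Name "Teams Isolated" -Port 80 `
    -HostHeader "teams.contoso.com" `
    -ApplicationPool "TeamsIsolatedPool" `
    -ApplicationPoolAccount (Get-SPManagedAccount "CONTOSO\spAppPool") `
    -URL "http://teams.contoso.com"

Restore-SPSite -Identity "http://teams.contoso.com" `
    -Path "C:\Backups\teams.bak" -Force
```

This is only a sketch of the approach; see the full script on the TechNet Gallery for the version that handles the edge cases.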

 

Apprentice v Graduate – Who’s right for your business?

Thu, 05/29/2014 - 14:06

Editor’s note: The following post was written by SharePoint MVP Alan Richards 

 

Background

 

When I looked at writing this article, I garnered opinions from a lot of different people: educationalists, IT professionals, and the directors of my current company. The surprising thing about gathering all of this opinion was how similar it was. For me this was a bit of a shock; I really did think that people from completely different career spectrums would have very differing opinions.
So what was that opinion? Well, in a nutshell: it depends. That might sound like a get-out-of-jail answer, but it is very true; the choice of graduate or apprentice depends on a lot of factors, some of which I have listed here:
· Maturity of the business
· Speed of new staff member being able to contribute to the business
· Skillset needed by the business
· Availability of business staff to provide quality on-the-job training and guidance
This list is not exhaustive, but it shows some of the thought that has to go into deciding whether the business takes on a graduate or an apprentice.

 

Education & Skillsets

 

While in essence the employment of either a graduate or an apprentice provides the business with the same thing (a new person in the business), the skillset and maturity of the new staff member can be quite different. An apprentice is normally a school leaver over the age of 16, while a graduate who has undertaken a three-year course will be over the age of 22 once they have completed school, college, and finally university. While the age difference is not, and should not be, a reason for the choice, it is that age difference that has allowed the graduate to gain more life experience and, more importantly, experience of learning at a higher level while gaining a qualification. A lot of IT-based degrees also now contain an element of professional qualifications, so as well as finishing university with a degree, a graduate could also leave with a Microsoft professional qualification. Click here to continue reading the full article…

“The only way to really succeed in making changes to education for the better is to hand power to the students.” – Scott Wieprecht, UK

Thu, 05/29/2014 - 12:26

originally posted on Daily Edventures

We often say here at Daily Edventures that behind every great student is a great teacher. And in the case of the @OffPerts (Microsoft Office Experts) – the students from the Saltash.net Community School in Cornwall, England who provided an inspiring and necessary student voice at the 2014 Global Forum – this couldn’t be more true.

You may remember earlier this spring when we shared the story of George, Amy, Jack and Rowenna – the @OffPerts. These four students not only attended the Global Forum, but they also shared and communicated their ideas with the educators, school administrators and government leaders in attendance. They made a big impression, and they also won a second runner-up award, along with their teacher Scott Wieprecht, in the Cutting Edge Use of ICT for Learning category.

“I think I knew I wanted to be a teacher when I was really young,” says Wieprecht. “I love the buzz you get when you help someone connect with content or an idea they have never seen before for the first time.” Wieprecht was drawn to teaching at an early age. By age 13, he was helping to teach drama at a local theatre group. By age 16, he had set up his own stage school “Stage Stars,” which he still runs today, in addition to his work at Saltash.net. “I think I was always destined to become a teacher,” says Wieprecht, “for the fundamental reason that I like empowering our young people to make decisions and take the lead and aspire to be something wonderful. In the same ways that I get to see when youngsters perform on the stage, I now get to see that every day in my classroom.”

In addition to his Expert Educator mantle and award at the Global Forum, Wieprecht was recently named as a Silver Teaching Award Winner in the category of “Outstanding use of Technology in Education” by the Pearson Teaching Awards, which recognize the life-changing impact of an inspirational teacher on the young people they teach. Wieprecht was selected from over 20,000 nominations received this year.

Please join me in congratulating Scott Wieprecht, not just because of the awards he is receiving, but because he is changing the lives of students and the world of education, and is a perfect example of how, behind every great student, there is a great teacher.

Can you tell us about a favorite teacher, or someone who made a difference in your education?

My favorite teacher was my Head of Year in secondary school. His name was Jeremy Martin, and despite all the countless hours he spent on everything, he always made you feel like he had enough time for you, and that you were more important than any of that. To me, pastoral care is more important than academic content, and it's the teachers who make you feel good who further your life the most. I'm really lucky that I now have a huge team of colleagues (Linda Griffin, Isobel Bryce, Dan Buckley, Ben Rowe, Alan Hawthorne…) who work with the same mantra, which is why I have had the privilege of working in such an amazing school environment.

Please describe how your professional achievements have advanced innovation in education. What has changed as a result of your work?

I’m not a big fan of questions like this, as I personally don’t do anything to make the changes. The only way to really succeed in making changes to education for the better is to hand power to the students. Of course it has to be done in a managed, sensible, and constructed way initially, but it is the students who, when given the appropriate support, have the best tools to change the world. My achievement and recognition therefore come from these types of projects. The OffPerts, which is all about student leadership, has won a South West Digital Educators Award and 2nd runner-up for Microsoft’s Outstanding Use of Cutting Edge Technology 2014, and has recently seen me become a National Teaching Award winner for 2014. It is all about the students, but I am more than happy to get shiny trophies.

How have you applied technology in innovative ways to support your work?

For me, the idea behind the technology use has to be more powerful than the technology itself. For example, with Office 365, it would be quite easy to take it at face value and use it for storage, email, and messaging. It’s only when strong ideas get added to this, such as using Newsfeed to remind students about coursework deadlines, using Lync to give a pupil off on long-term illness a window into the classroom, or using sites to run lessons or study groups, that the technology becomes innovative. I am always quick to point out that I am not a Microsoft evangelist, and I don’t sing from the ‘hymn sheet’ blindly. The reason I tell everyone about Microsoft is that nothing compares. There is no organization that gives as much to education, whatever the long-term ‘motive’ might be, and that actually cares about improving children’s futures.

In your opinion, how has the use of apps, cellphones, and mobile devices changed education? And your work?

Devices level the playing field; it’s as simple as that. Allowing each student access to their own device means they all have the same chance and opportunity. Yes, we do still have to account for and work with a lack of parental engagement, or low self-esteem, but devices remove the physical blocks to learning and give everyone a fair chance. Devices also, far from stopping communication, encourage it beyond the classroom. In the world we live in, linking up with students from far afield, and from different countries, is a hugely important life skill. With the evolution of apps and devices this can take place anywhere, anytime. For example, my classroom on a warm sunny day can sometimes be unbearable. Now I can simply say, let’s continue this outside, and run my lessons just as effectively through Lync, showing the same slides, videos, and whiteboard as I would have in my room, but in a location that is comfortable and more conducive to learning. Anywhere is now a learning space, not just somewhere with walls and tables.

In your view, what is the most exciting innovation happening in education today?

The “cloud” is by far the most exciting thing for me at the moment. The “always on, everywhere access” ethos means that education is becoming a more fluid concept, and encouraging more to engage. The cloud also means updates are instantaneous, and collaboration is commonplace, and at the core of everything. Rather than being something you try to build in to a lesson, it is the basis of every lesson.

Is there a 21st century skill (critical thinking and problem solving, communication, collaboration, or creativity and innovation) that you are most passionate about? Why?

I think collaboration should probably be my answer, but I hope that one day this won’t be seen as a “skill” and will actually just be everywhere. Improvement can only really happen, and progress can only really be made, when we collaborate. Even some of our most fundamental inventions, credited to a single person, wouldn’t stand the test of time without collaboration. In my opinion, an idea is only as good as the 20 people that add to it.

If you could give one educational tool to every child in the world, what would it be? Why?

I’d give every child in the world a device with a blank OneNote on it. I’d ask them to plan how they were going to change the world, make it fairer for everyone. I’d link students up to annotate each other’s work, question their ideas and how they would go about doing them, and offer suggestions to obstacles that might stand in their way.

How must education change in your country to ensure that students are equipped to thrive in the 21st century?

Education needs to reflect the learners, and it needs to reflect the market. It would be really easy for me to say remove exams. Unfortunately, I don’t really have another system to put in their place, so saying “scrap them” isn’t productive or useful. I would challenge what the exams are, though. Should a mathematics exam really ask 30 questions about vectors and trigonometry, or should it be asking real-world problems, something that a Venn diagram probably won’t solve?

What is the biggest obstacle you have had to overcome to ensure students are receiving a quality education?

Allowing students to enjoy working together, both in school and outside. Living in a rural area it isn’t always possible or practical for them to meet up at each other’s houses every night to work on a project. Giving them an opportunity to do this, without relying on parents, or the fact that in winter it’s dark by 5pm, means they can take responsibility for their own progress.

How can teachers or school leaders facing similar challenges implement what you’ve learned through your work?

Don’t be afraid of the obvious. There seem to be stigmas attached to certain things, or the ‘done way.’ For me, Office 365 completely solved my problems and, not only that, brought a load of other benefits to boot. Best of all, it’s free, so there really is no harm in trialing it; you won’t be disappointed.

How have you incorporated mobile devices/apps into your classroom and have you seen any improvements?

Mobile devices mean that everyone has personalized access to the work. We’ve been using Surfaces to access Office 365 so that everyone works on their projects at the same time. It also means that I can measure everyone’s progress individually and see how much each individual is contributing.

Describe to us your role as a leader for technology in your school, community or among other educators?

I am responsible for assisting in the setting up and deployment of Office 365 to the whole school and ensuring its effective use. I am also on some of the steering committees and trial groups in school to test new technologies and ideas. Recently a colleague and I started Empower to Aspire and Project Aspire with the aim of linking up schools around the world to develop student leadership.

How is the experience being a Microsoft Innovative Educator – Expert?

It’s been one of the most amazing experiences of my life. It’s given my students life skills I would never have been able to give them alone, and it has also allowed me to engage with a whole host of other professionals. I am VERY lucky in that Stuart Ball and Steve Beswick have always been generous with their time, but Anya and Anthony also made a point of speaking to the students, and this left a lasting impression on them. It’s also helped me feel confident in my classroom, expand the work I am doing out to other educators in the South West and the UK, and dream big. The only downside is that I am now desperate for an opportunity to attend the next Global Forum to learn more!
