Feed aggregator

Building elastic and fault tolerant Data Platform solutions with Azure, SQL Server and HDInsight

MSDN Blogs - Wed, 06/24/2015 - 06:50

If you are interested in a solution that provides elastic scale, high availability, and disaster recovery across different locations with a pay-as-you-go approach, I suggest you invest two minutes of your time and take a look at this session!

Azure Machine Learning and Python: Regular, automated training with Scikit-learn

MSDN Blogs - Wed, 06/24/2015 - 04:52

 

Hello,

Today Julien Moreau-Mathis brings us a second article on Python and Machine Learning.

By way of introduction, Julien is a student at IN’TECH INFO, a computer science school, and is currently on a work-study Master’s program in the DX (Developer Experience) division of Microsoft France. Julien originally comes from the 3D world, where he contributes to the Babylon.js project launched by my colleagues David Catuhe (@deltakosh), David Rousset (@davrous), Pierre Lagarde (@pierlag) and Michel Rousseau (@rousseau_michel). Along the way he created Community Play 3D, which gave him the opportunity to meet and work alongside other people in DX. He now works with my other colleague Benjamin Guinebertière (@benjguin) on Machine Learning.

Many thanks to Julien for this contribution ;-)

I hope you enjoy reading his post.

:-) Benjamin

________________________________________

Introduction

As a reminder, Azure Machine Learning can run Python scripts and uses the Anaconda 2.1 back end. This article shows how to predict the price of an apartment from a previously unseen surface area. Both phases, training and prediction, are carried out in Azure Machine Learning with Python, using the scikit-learn library available in the Anaconda 2.1 back end. It also shows how to make these two phases automatic and production-ready using nothing but Python and Azure Machine Learning.

Azure Machine Learning

Azure Machine Learning, or Azure ML, is a Microsoft Azure service that gives us access to the world of Machine Learning. It is a cloud-based Azure service for building powerful, automated predictive-analytics systems with fast deployment options. Azure ML offers an online studio (Machine Learning Studio) for training and experimenting :)

Note: A few references
A refresher on Machine Learning
A refresher on Azure ML

Screenshot of a sample model in the Azure ML studio

The screenshot above shows the studio interface with a sample provided by default. It contains a graph to which we can add configurable elements in order to build our own ML experiments.

Among the available elements, one in particular catches our attention: the “Execute Python Script” element, which lets us run our own Python scripts to add a layer of customization, all in the cloud!

Note: Azure ML can also run R scripts. More information on the R and Python languages, and on using R in Azure ML.

Python and Machine Learning

Several Machine Learning libraries are available in the Python ecosystem, among them PyBrain, MlPy and Scikit-learn.

Scikit-learn, or Sklearn, is highly regarded for being simple and powerful, and for being designed to work with NumPy and SciPy, two libraries widely used and well known in the Python community. It also interoperates with Matplotlib, a Python library for data visualization (plotting).

Azure ML & Python

Azure ML can run Python scripts and provides a set of preinstalled libraries. Among these libraries you can find:

The goal is to drive our ML experiment with Sklearn and predict our results.

The advantage of Azure ML lies in its computing power and in how quickly and easily Web Services can be deployed. This lets us rapidly expose a testable predictive model to applications. For testing, sample code is provided in C#, R and even Python.

Case study

Taking the “immo” example from the MVA course available here, the goal is to reproduce it using Sklearn.

In the MVA course, we want to predict a price for a given apartment surface area. We distinguish a training phase and a prediction phase. Training is done with the tools already available in Azure ML and a CSV file containing the reference data (surface area and price).

In our case study, the data lives in an Azure SQL database. Our experiment must connect to the database to retrieve the reference data, run the training phase, and then predict a price for a given surface area through a Web Service.

In addition, we want the training experiment to be able to re-learn from new data written to the database, and we want to be able to re-trigger the training phase through a Web Service. As a further optimization, we want to save the prediction models produced by the training phase and use only the best one in the prediction phase.

Where to get the immo data

The reference data is available in the MVA course as a CSV file. I also invite you to follow this tutorial.

Training experiment

Here, the training experiment is a Web Service and breaks down into several parts:

  • Read the data from the database.
  • Run a Python script with Sklearn that takes the data from the database as input.
  • Write the prediction model and the estimation coefficient produced by the Python script to another table in the database.
Prediction experiment

The prediction experiment is also a Web Service and breaks down as follows:

  • Read the output data from the training experiment.
  • Run a Python script with Sklearn that takes a given surface area as input from the Web Service and predicts the associated price.
Creating the training experiment

Reading from the Azure SQL database

The reference data is written to the immo table of the database.

To read the data from the immo table, we add an element from the list, called a Reader, to the graph.

The Reader reads data from a database and turns it into a dataset through a simple SQL query.

Adding a reader

After filling in the connection information and the SQL query, we can test the connection and the query by clicking “Run”.

To view the output dataset, simply right-click the connector and choose “Visualize”.

Adding a Python script

Just like the Reader, we simply drag and drop an element from the list into the graph.

The azureml_main function is the one Azure ML executes when running the script. Its dataframe1 and dataframe2 arguments correspond to the script’s two input datasets (input1 and input2) and may be empty.

dataframe1 and dataframe2 are pandas DataFrame instances, and we can access them like a regular dictionary:

surface = dataframe1["surface"]
prix = dataframe1["prix"]

or

import pandas as pd
surface = pd.DataFrame(dataframe1, columns=["surface"])
prix = pd.DataFrame(dataframe1, columns=["prix"])

Training with Sklearn and linear regression

The two key concepts in Sklearn are “fit” and “predict”.

  • “fit” adjusts the prediction function and takes two parameters: the reference data and the associated results. Here the reference data is the surface areas and the results are the prices.
  • “predict” predicts a result from an unseen input, in our case a surface area.
Fit with Sklearn and immo

The idea here is to import Sklearn and fit the prediction function, price as a function of surface area, using a linear regression model for this example.

from sklearn import linear_model
import pandas as pd

lr = linear_model.LinearRegression()

surface = pd.DataFrame(dataframe1, columns=["surface"])
prix = pd.DataFrame(dataframe1, columns=["prix"])

# Fit the model
lr.fit(surface, prix)

# Print the prediction for a 100 m2 surface area
print(lr.predict(100))

To get the estimated coefficient, in other words the accuracy of the prediction, we simply read the “coef_” property. This value is what we use to determine the best model produced by the training phase.

# Estimated coefficient, within [0, 1000]
coef = lr.coef_

Saving the model as the script’s output

In our experiment, we want the script to output the result of the fitting phase. To do that, we simply serialize the “lr” object and then write the serialized result to a new immo_model table in the Azure SQL database.

# Dump the "lr" object with pickle
import pickle
result = pickle.dumps(lr)

# Print the result of the dump
print(result)

Defining the Python script’s output dataset

To send a dataset out of a Python script, we use pandas. The azureml_main function returns a pandas.DataFrame dataset as follows:

import pandas as pd

ret = pd.DataFrame([result], columns=["result"])
ret["coef"] = pd.DataFrame(lr.coef_, columns=["coef"])
return ret,

To view the script’s output, right-click the connector and click “Visualize”.

Full script

def azureml_main(dataframe1 = None, dataframe2 = None):
    from sklearn import linear_model
    import pandas as pd
    import pickle

    surface = pd.DataFrame(dataframe1, columns=["surface"])
    prix = pd.DataFrame(dataframe1, columns=["prix"])

    lr = linear_model.LinearRegression()
    lr.fit(surface, prix)

    result = pickle.dumps(lr)

    ret = pd.DataFrame([result], columns=["result"])
    ret["coef"] = pd.DataFrame(lr.coef_, columns=["coef"])
    return ret,

Writing the output to the Azure SQL database

In the prediction experiment, the approach is to retrieve the result of the “lr” dump and rebuild it to recover the whole object. To make it retrievable, we save the result to the database, in the immo_model table.

To write to the database, we simply drag and drop a “Writer” element and fill in the database connection information.

Creating the prediction experiment

The prediction experiment is broadly the same as the training one.

The Reader retrieves the row with the best estimated coefficient from the immo_model table. The Python script rebuilds the “lr” object with Pickle, takes an unknown surface area as input (Enter Data) and writes the prediction result (the predicted price) as output.

The Python script receives the surface area to predict on input1 and, on input2, the most recent training result read from the database. To define the data structure for input1, Enter Data is set to the CSV format.

Note: Enter Data can hold a value (here 25) that is used for tests with “Run”.

Reloading the “lr” object and predicting

To recover the “lr” object, we deserialize it with pickle.loads(string).

import pickle
lr = pickle.loads(dataframe2["lr"][0])

Once the object is deserialized, we can call its methods.

import pandas as pd

# Turn the given surface area into a dataset
given_surface = pd.DataFrame(dataframe1, columns=["surface"])

# Predict the price for the given surface area
prediction = lr.predict(given_surface)

lr.predict takes an array of inputs and returns an array of results. Once the prediction is made, we simply return the result as the script’s output.

import pandas as pd

ret = pd.DataFrame(prediction, columns=["result"])
return ret,

Full script

def azureml_main(dataframe1 = None, dataframe2 = None):
    import pickle
    import pandas as pd

    lr = pickle.loads(dataframe2["lr"][0])

    given_surface = pd.DataFrame(dataframe1, columns=["surface"])
    prediction = lr.predict(given_surface)

    ret = pd.DataFrame(prediction, columns=["result"])
    return ret,

Testing the prediction with a Web Service

Creating a Web Service here lets a client access the prediction experiment and predict a price by providing an unknown surface area.

To add a Web Service to the experiment, we add the two “Input” and “Output” elements from the Web Service category. In our case, the Input is connected to input1 of the Python script and the Output to the script’s output. The Input then takes the place of the Enter Data element, and the Output returns the script’s output dataset.

Note: The Enter Data element defines the data format expected by the Web Service Input, so it must be left in the experiment.

Once the Input/Output pair is connected, we simply click “Publish Web Service”.

Once the Web Service is created, all the connection information is provided. To test the Web Service and predict a price, click “Test”.

The result below shows the prediction for 25 m2, which is almost equal to the reference value in the database.

Calling the Web Service from the client side

The Web Service can be called from C#, R and Python. The documentation is accessible through the “REQUEST/RESPONSE” link and provides the Web Service URL along with sample code for the three languages.

Link to the script

The Python script follows the process below (a rough sketch follows the list):

  • Define the JSON payload for the request (data)
  • Send the request, providing the API key and the Web Service URL
  • Retrieve the response and deserialize it
  • Extract the prediction details by walking the JSON object returned by the Web Service.
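
As a minimal, hypothetical sketch of such a client (the URL, API key, input name and column values below are placeholders to replace with the values shown on the REQUEST/RESPONSE page):

import json
import urllib.request

# Placeholders: copy the real values from the REQUEST/RESPONSE page
url = "https://services.azureml.net/.../execute?api-version=2.0&details=true"
api_key = "<your API key>"

# JSON payload for the request; here we ask for a prediction for 25 m2
data = {
    "Inputs": {
        "prediction_input": {
            "ColumnNames": ["surface"],
            "Values": [["25"]]
        }
    },
    "GlobalParameters": {}
}

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + api_key
}

# Send the request with the API key and the Web Service URL
request = urllib.request.Request(url, json.dumps(data).encode("utf-8"), headers)
response = urllib.request.urlopen(request)

# Deserialize the response and walk the JSON object to reach the prediction
result = json.loads(response.read().decode("utf-8"))
print(result["Results"])
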
Managing the best prediction model

To pick the best prediction model, we arbitrarily choose to use only the latest model whose estimated coefficient is greater than or equal to 0.8. To easily identify the latest usable model in the database, we add a “flag_use” flag to the immo_model table.

As a result, the SQL query of the Reader element in the prediction experiment

SELECT TOP 1 lr
FROM immo_model as IM
ORDER BY IM.coef DESC

becomes

SELECT TOP 1 lr
FROM immo_model as IM
WHERE IM.flag_use = 1

After this change, every prediction model added to the database must be checked (see the SQL sketch after this list). That is:

  • If the estimation coefficient is >= 0.8, insert the model with “flag_use = 1” and update the other models to “flag_use = 0”.
  • If the estimation coefficient is < 0.8, insert the model with “flag_use = 0”.
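
In SQL terms, the first case boils down to something like the following sketch (the column list mirrors the queries above; the exact definition of immo_model is not shown in this article):

-- Only when the new coefficient >= 0.8: demote every existing model
UPDATE immo_model SET flag_use = 0;

-- Then insert the new model as the one to use
INSERT INTO immo_model (lr, coef, flag_use) VALUES (?, ?, 1);
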
Creating the Web Service for the training part

The re-learning process turns the training experiment into a Web Service. The Input and Output are placed at the Python script level.

Once the Web Service part is in place, the Writer element is grayed out and never runs when re-learning is triggered through a Web Service call.

Note: The Writer does run when training is triggered from the studio.

To re-trigger training, we use a client-side Python script that retrieves the “lr” dump and writes the result to the database with PyODBC.

Installing PyODBC and querying the Azure SQL database

pip install pyodbc

To connect to the database with PyODBC, we must provide the following connection string:

connection = pyodbc.connect("""DRIVER={SQL Server};
                               SERVER=server_address;
                               DATABASE=database_name;
                               UID=username;
                               PWD=password""")

The process

The process consists of three steps:

  • (1) trigger the training experiment
  • (2) retrieve the “lr” dump and its associated estimated coefficient
  • (3) write the “lr” dump (still as a string), the coefficient and the usage flag to the database with PyODBC

Writing with PyODBC

The database write proceeds as follows:

  • Connect to the database (pyodbc.connect)
  • Create a cursor to query the database (cursor)
  • Check the estimation coefficient of the new prediction model; if it is >= 0.8, UPDATE all existing models to “flag_use = 0”
  • Run an INSERT INTO query with “lr”, “coef” and “flag_use” as parameters
  • commit() so that the server does not discard the queries
import pyodbc

def write_to_database(result):
    connection = pyodbc.connect("""DRIVER={SQL Server};
                                   SERVER=server_address;
                                   DATABASE=database_name;
                                   UID=username;
                                   PWD=password""")
    cursor = connection.cursor()

    val = result["Results"]["learn_output"]["value"]["Values"][0]
    lr = val[0]
    coef = float(val[1])
    flag_use = True if (coef >= 0.8) else False

    if flag_use:
        cursor.execute("UPDATE immo_model SET flag_use = 0;")

    cursor.execute("INSERT INTO immo_model VALUES(?, ?, ?);",
                   lr, coef, flag_use)
    cursor.commit()

Defining the training experiment as a Web Service

To turn the training experiment into a Web Service, the trick is to define the Input as empty, i.e. to add an “Enter Data” element with no particular data (see the screenshot below).

The JSON in the client-side Python script therefore defines the “None” column, but with an empty value.

The JSON then becomes:

data = {
    "Inputs": {
        "learn_input": {
            "ColumnNames": ["None"],
            "Values": [ [""], ]
        },
    },

    "GlobalParameters": {
    }
}

Full client-side Python script (link to the script)

Request:

Writing to the database:
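
As a rough, hypothetical sketch of those two steps, the request side reuses the call pattern from the prediction client with the JSON above, then hands the deserialized response to write_to_database (the URL and key are placeholders):

import json
import urllib.request

url = "https://services.azureml.net/.../execute?api-version=2.0&details=true"  # placeholder
api_key = "<your API key>"  # placeholder

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + api_key
}

# Trigger the training experiment with the empty learn_input defined above
request = urllib.request.Request(url, json.dumps(data).encode("utf-8"), headers)
response = urllib.request.urlopen(request)

# Deserialize the response and store the new model with the PyODBC helper
result = json.loads(response.read().decode("utf-8"))
write_to_database(result)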


Optimizing training: creating a WebJob

Using WebJobs in Azure is very simple. Once our Web App is created in Azure, we just add a package, in the form of a zip file containing the Python file to run, to the list of WebJobs.

Creating a WebJob here lets us automate the learning phase. For example, if the database is constantly updated, we can add a WebJob that runs every hour to keep improving the prediction model.

Note: Adding compiled libraries (.pyd) to the WebJob requires those libraries to be compiled in their 32-bit version. PyODBC is not available by default in the Azure environment, which is why we must add the “pyodbc.pyd” file to the root of the folder containing the Python script to run. Link to the 32-bit PyODBC library for Python 3.4.

Adding a WebJob that runs on a recurring schedule:

Configuring the WebJob so that it runs every hour starting now:

Conclusion

In this article, we saw how to build the two phases, training and prediction, using the scikit-learn library in custom Python scripts.

This case study also shows a concrete use of Azure Machine Learning, if only for its development speed and how quickly Web Services can be put into production.

Popular Book Creator app now available on Windows for free

MSDN Blogs - Wed, 06/24/2015 - 02:15

The following post is an excerpt from a press release by Red Jumper, the team behind Book Creator, a popular classroom app that allows students and teachers to produce their own eBooks. Red Jumper won the award for Best Educational App at BETT 2015, and we’re pleased to share the news that Book Creator is now available on Windows Desktop for the first time.

---

Book Creator for Windows is here

Create ebooks on your desktop PC, laptop and tablet.

More than 15 million ebooks have been made with Book Creator for iPad and Android, and now the popular classroom app is receiving a Windows makeover and will be available on desktop devices for the first time. The developers are making the app free on the Windows store for a limited period following the launch.

Book Creator for Windows takes a blank-canvas approach to creativity that makes publishing and sharing ebooks easier than ever. With a simple and intuitive design, people of all ages can create their own international standard ePub files, and with a couple of clicks can become published authors.

“Book Creator for Windows provides a single ebook app to use across three platforms,” said David Fuller, principal education consultant at Tablet Academy. “The app has proven to be a popular and versatile ebook creator on iOS and Android devices, and I recommend all educationalists with Windows tablets to include it as a must-have download.”

With Book Creator for Windows you can:

  • Create books on a Windows tablet, laptop or desktop with an easy-to-use interface
  • Apply rich formatting with more than 40 fonts to choose from
  • Add photos and video or record audio
  • Utilize the drawing tool for illustrations and annotations
  • Read books with the in-app reader
  • Draft books in the ePub format to publish work on Apple's iBooks Store or the Google Play Store
  • Send books by email or upload to OneDrive for quick and easy sharing

"The iOS version of Book Creator is already a best-selling app, reaching number one in the iTunes store in 80 countries and becoming a core app in classrooms across the world… We are so excited to bring our app to desktop devices and make book publishing accessible to even more people.” - Dan Amos, Director of Red Jumper

Pricing and availability:

Book Creator for Windows is available worldwide exclusively through the Microsoft Windows Store. To celebrate the launch of the app, it will be free for a limited period. Book Creator has been translated into 11 languages, including English, French, Spanish and Chinese.

 


jadamelio’s Weekly Program Feature - WireFrame Maze

MSDN Blogs - Wed, 06/24/2015 - 01:07

 

Back in 2013, Ed wrote a blog post about Old Basic Coder’s 3D Maze and Ray Caster Maze. These programs inspired the “Small Basic Community Challenge: Want to turn this 3D Maze into an RPG?” that resulted in some great discussion and additions to the maze.

I’d like to take a look at a similar program, Pathdrc’s 3D WireFrame Maze.
The import code: RCS876

The first thing that excited me about this program was the data storage: instead of a pre-written map or a list of images, the program stores lists of numbers, which are then interpreted to draw the maze. This is very similar to the method I used in Foul Sorcery, a roguelike game I designed for Small Basic.

By not being limited to a single level, the 3D Maze program is open to many additions and extensions: procedural generation, custom level design, etc.

The importance of building flexible platforms was a lesson underexpressed in my computer science courses, and I hope that new learners keep the idea of modular design in mind when making little programs with big potential.

The second thing that interested me about this program was the wireframe aspect. Frame rate and visual effects can tax performance, and while the article on TechNet (more on that on Friday) has many good solutions, I appreciate the minimalism that comes with negative space and wireframe graphics.

So going forward, keep modularity and flexibility in mind, and don’t be afraid to go simple with the graphics! I’d love to see a project or program that connects multiple elements from different programmers.


-jadamelio

Error executing code: FormRun (data source) has no valid runnable code in method 'new' when trying to edit a Project workflow

MSDN Blogs - Wed, 06/24/2015 - 00:47

If you go to Project management and accounting > Setup > Project management and accounting workflow, select any workflow, and hit Edit, you may get an error stating:

"Error executing code: FormRun (data source) has no valid runnable code in method 'new'"

Try to locate the form that is failing to launch in the AOT. In this case it is 'WorkFlowEditorHost'. Make a small change to the form, compile it, then delete the change and recompile.

Now check whether you are able to edit the workflow.

 



 

Dynamics CRM Online 2015 Update 1 New Features: Web API Developer Preview, Part 4

MSDN Blogs - Tue, 06/23/2015 - 20:00

Hello, everyone.

Continuing from the previous posts, this article introduces the Web API developer preview provided in Dynamics CRM Online 2015 Update 1. Since this is a series, please start from the first article.

Web API Developer Preview, Part 1
Web API Developer Preview, Part 2
Web API Developer Preview, Part 3

The Web API also supports Upsert. For Upsert support in the SDK, see the following article:
Dynamics CRM Online 2015 Update 1 SDK New Features: Upsert

Creating a new record with Upsert

First, let's verify record creation using Upsert.

Implementing the program

1. Open the Visual Studio solution used last time and open the Program.cs file. Add the following new method.

public async Task RunUpsert(string accessToken)
{
    // Create the HttpClient
    using (HttpClient httpClient = new HttpClient())
    {
        // Build the Web API address
        string serviceUrl = serverUrl + "/api/data/";
        // Set the headers
        httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

    }
}

2. Change the following code in the Main method so that it calls the new method.

Before:
Task.WaitAll(Task.Run(async () => await app.Run(result.AccessToken)));

After:
Task.WaitAll(Task.Run(async () => await app.RunUpsert(result.AccessToken)));

3. Using Upsert requires specifying the record's GUID. First, let's specify a record that does not yet exist and confirm that a record gets created. Add the following code inside the newly added method.

// The record's GUID
Guid newGuid = Guid.NewGuid();

// Create the account object
Account account = new Account();
account.name = "Upsert デモ";
account.telephone1 = "555-5555";

4. Build the request to send by adding the code below. PATCH is used as the HttpMethod.

// Build the request
HttpRequestMessage request = new HttpRequestMessage(new HttpMethod("PATCH"), serviceUrl+ "accounts(" + newGuid + ")");
request.Content = new StringContent(JsonConvert.SerializeObject(account, new JsonSerializerSettings() { DefaultValueHandling = DefaultValueHandling.Ignore }));
               
request.Content.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json");

5. Finally, send the request.

// Send the request
await httpClient.SendAsync(request);

6. Add the following code to pause the program for a moment.

Console.WriteLine("レコードを作成しました。");
Console.Read();

Verifying the behavior

1. Press F5 to run the program.

2. When the authentication dialog appears, sign in.

3. When the record-created message appears, check in a browser that the record was created.

Updating an existing record with Upsert

Next, let's verify record updates using Upsert.

Implementing the program

1. Continuing from the code above, add the following. First, create the account object, specifying the same GUID that was used for creation.

// Create the account object
Account account2 = new Account();
account2.name = "Upsert デモ 更新しました";
account2.telephone1 = "555-5555";
account2.accountid = newGuid;

2. Next, build the request. It is essentially the same as the one created above.

// Build the request
HttpRequestMessage request2 = new HttpRequestMessage(new HttpMethod("PATCH"), serviceUrl + "accounts(" + newGuid + ")");
request2.Content = new StringContent(JsonConvert.SerializeObject(account2, new JsonSerializerSettings() { DefaultValueHandling = DefaultValueHandling.Ignore }));

request2.Content.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json");

3. Finally, send the request.

// Send the request
await httpClient.SendAsync(request2);

4. Display a message saying the record was updated.

Console.WriteLine("レコードを更新しました。");

Verifying the behavior

1. Press F5 to run the program.

2. When the authentication dialog appears, sign in.

3. The program pauses once the record has been created. Confirm in a browser that the record was added.

* The first row is the record created earlier.

4. Return to the program and press Enter.

5. When the following message appears, confirm in a browser that the record was updated.

Finally, the complete method added in this post is shown below.

public async Task RunUpsert(string accessToken)
{
    // Create the HttpClient
    using (HttpClient httpClient = new HttpClient())
    {
        // Build the Web API address
        string serviceUrl = serverUrl + "/api/data/";
        // Set the headers
        httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        // The record's GUID
        Guid newGuid = Guid.NewGuid();

        // Create the account object
        Account account = new Account();
        account.name = "Upsert デモ";
        account.telephone1 = "555-5555";

        // Build the request
        HttpRequestMessage request = new HttpRequestMessage(new HttpMethod("PATCH"), serviceUrl + "accounts(" + newGuid + ")");
        request.Content = new StringContent(JsonConvert.SerializeObject(account, new JsonSerializerSettings() { DefaultValueHandling = DefaultValueHandling.Ignore }));

        request.Content.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json");

        // Send the request
        await httpClient.SendAsync(request);

        Console.WriteLine("レコードを作成しました。");
        Console.Read();

        // Create the account object
        Account account2 = new Account();
        account2.name = "Upsert デモ 更新";
        account2.telephone1 = "555-5555";
        account2.accountid = newGuid;

        // Build the request
        HttpRequestMessage request2 = new HttpRequestMessage(new HttpMethod("PATCH"), serviceUrl + "accounts(" + newGuid + ")");
        request2.Content = new StringContent(JsonConvert.SerializeObject(account2, new JsonSerializerSettings() { DefaultValueHandling = DefaultValueHandling.Ignore }));

        request2.Content.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json");

        // Send the request
        await httpClient.SendAsync(request2);

        Console.WriteLine("レコードを更新しました。");
    }
}

Summary

It is convenient that the Web API supports Upsert just like the SDK. Give it a try!

- 中村 憲一郎

[Winning Teams Announced] 13 startup solutions gather at the Shinagawa headquarters: Microsoft Innovation Award 2015 is held today

MSDN Blogs - Tue, 06/23/2015 - 19:21
Thank you very much for the many entries to Microsoft Innovation Award 2015. We received more entries than expected, and the judging took some extra time. Although this is short notice, here are the winning teams! We had originally planned to select seven Excellence Award winners and then pick the Grand Prize from among them, but during judging there were several solutions that were not selected for an Excellence Award yet clearly deserved a chance to present, so we created a new Technology Edge Award on short notice. <Excellence Awards> ・株式会社ウサギィ A classifier that does not depend on Deep Learning: 画像解析できるマン ・サイトセンシング株式会社 Automatically measuring audience attributes and reactions: FG-サイネージ ・株式会社情報基盤開発 A University of Tokyo venture that streamlines paper-based data tabulation: AltPaper ・株式会社 SPLYZA Automatically generating afterimage videos: Clipstro ・TVISION INSIGHTS 株式会社 Capturing the quality of TV viewing: TVI...(read more)

Collecting the boot-time events over the network.

MSDN Blogs - Tue, 06/23/2015 - 18:20

Today I want to talk about what I actually do at work. I work on the service called "Setup and Boot Event Collector".  It has been included in the previous Server Technical Previews but disclosed only to the partners. Now it has been officially announced at the Ignite Conference and will be generally available in the next preview. There will be an official blog and some official blog posts about it but for now I want to tell about it in a more unofficial way (and by the way, you're welcome to ask the questions here).

Have you seen how, when Linux boots, it prints all these interesting bits of information on the console? And on headless machines you can have it use the serial port as the console and get it all remotely. Have you ever wished you could get the same from Windows, so that when things fail, you know what exactly went wrong? Now this wish has been answered. Even better, you can get this information directly over the network.

How it works: The information about the Windows start-up is sent in the form of the ETW events. And incidentally, you can get them even on the previous versions of the Windows if you happen to connect the kernel debugger and to configure the installed image just right. The boot event collector does kind of the same but in a more convenient and secure way: no debugger, one collector can get and record the data from many machines, and the provided PowerShell cmdlets help with configuring the image just right. And then you can read the collected events with the Message Analyzer or any other tools and find what is going on during the boot or setup (or really at any other time if you want to configure the image in a more custom way).

Now a short intro on how to get it working. You can get it installed in TP2 as well, though the PowerShell commands have changed a bit (and will change some more before the final release). We have a manual that has been shared with the partners in the previous previews, but I'm not sure yet how it will be generally distributed.

To install the collector you enable the optional feature "Setup And Boot Event Collection". It can be done through the Server Manager/Control Panel or from the command line through dism (or through the PowerShell commands):

dism /online /enable-feature /featurename:SetupAndBootEventCollection

That puts the binaries, the configuration files and the PowerShell scripts onto the machine (it can be a physical machine or a VM). The service gets started but its initial configuration is empty, so it does nothing. The PowerShell commands include "Sbec" in their names, so you can get the list of them with

PS> help *Sbec*

By the way, the path for the release of the proper help for PowerShell commandlets is a bit of a mystery to me. So far if I go to the URL for it, it says that this document hasn't been released yet. But there is a trick: if you look in c:\Windows\System32\WindowsPowerShell\v1.0\Modules\BootEventCollector\BootEventCollector.psm1, you can find the descriptions of the functions right in them.

The data files for the collector live in c:\ProgramData\Microsoft\BootEventCollector. The subdirectory Config contains the configuration files, Etl is intended for the saved event logs, and Logs for the logs of the collector. The normal logging is done by the collector through ETW as well, you can see it in the Event Viewer under Applications and Services Logs -> Microsoft -> Windows -> BootEvent-Collector. But you can also switch it to a file if you want, and there are the additional status log files.

In the Configuration directory you can find: Active.xml - the currently active configuration (it's best not to mess with it directly but to use the PowerShell commands to change the configuration; the collector will then keep the configuration change history for you), Empty.xml - the empty configuration, in case you want to return to it, and Example.xml - essentially a description of all the possible configuration settings, as comments in a configuration file, along with examples.

I won't go here into the details of the configuration, it's a separate subject that we can look at later. For now, suppose, you've configured the collector.

Then you go and configure the target machines (they're really the sources of events, but we've kept the terminology consistent with the debugger). The event collection uses a small separate network stack borrowed from the kernel debugger (KD-NET), so it starts working very early in the Windows boot process, way before the normal networking starts. That means it inherits some caveats, though. The list of drivers supported by that small networking stack is shorter than for the normal drivers (but the typical popular NICs are covered). And the stack adds overhead on the NIC it uses, compared to the normal driver.

Just like KD-NET, the targets for event collection get configured with the address and port of the collector, the secret key for communications, and the information about which events to send. The PowerShell commands that came with the collector feature help with this configuration. They can be used to configure WIM and VHD images, or to run the configuration on the target machines through PowerShell remoting; you can also copy the scripts to a target machine and run them there locally, or configure network boot and network setup through ADK/WDS.

The transport part gets configured with the Set-SbecBcd command, the selection of events with Set-SbecAutologger (it provides a reasonable default set of events from the kernel, the system logs and setup). The events from the kernel and system services normally stop being forwarded over the network once the boot reaches the point where the event logging service starts, meaning that from then on the events can be collected locally. The setup events keep forwarding. But all of this is configurable and can be changed if you have different preferences.
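
For a flavor of what that looks like, here is a rough, hypothetical sketch of configuring a VHD image (the parameter names below are illustrative assumptions; check 'help Set-SbecBcd' and 'help Set-SbecAutologger' in your build for the real ones):

# Illustrative sketch only; verify the parameter names with the built-in help.
Set-SbecBcd -Path D:\images\target.vhdx -CollectorIp 192.168.1.10 -CollectorPort 50000 -Key a.b.c.d
Set-SbecAutologger -Path D:\images\target.vhdx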

After the target has been configured, it needs to be rebooted (or booted for the first time from a VHD image). And then the events will come.

In the next installment, I plan to show an example of a simple diagnostics.

Compare Email Marketing and Social Media Marketing

MSDN Blogs - Tue, 06/23/2015 - 18:09

A report entitled ‘The Collaborative Future’ by ExactTarget and CoTweet uncovers many interesting strengths and weaknesses of email marketing and social media marketing:

http://scottge.net/2015/06/23/email-marketing-vs-social-media-marketing/

 

 

Azure: Visual Studio 2013 connection to MySQL

MSDN Blogs - Tue, 06/23/2015 - 17:17
In my previous post, I discussed the online article that connects to MySQL; it was fairly simple, and so is this blog.  We will simply make sure that you are able to connect and use CRUD to implement a simple UI.  For some reason Google doesn’t get you to the right MySQL pages for Visual Studio.  Sources: http://dev.mysql.com/doc/connector-net/en/connector-net-visual-studio.html Various reading that I can’t find my notes from. Discussion: Model-View-Controller is considered...(read more)

Cloud Champions II – Community Call – recording and recap

MSDN Blogs - Tue, 06/23/2015 - 17:11

Yesterday we held the last Cloud Champions webinar for this round, and we were joined by some of our speakers to answer the questions you raised.  You can listen to the recording of the session (and all other Cloud Champions webinars) and read the recap below.

We were joined by:

  • Phil Goldie – Director of Partner Business & Development
  • Michelle Markham – Product Marketing Manager for Office 365
  • Mike Heald – Product Marketing Manager for Microsoft Azure
  • Scott Lewis – Partner Development Manager for Dynamics CRM
  • Jack Pilon – Partner Marketing Manager for the Microsoft Partner Network
  • Brett Fraser – Partner Channel Development Manager

Below is a recap of the questions we covered during the call:

  • Why are Microsoft’s cloud solutions better than competitors? Do we have an in-depth comparison sheet for this?  As part of Cloud Champions we ran a compete session which covered the key differentiators.  You can view the on demand version and also access information on the Why Microsoft website.
  • We are currently a Microsoft Partner and reselling Office 365 through Telstra but would like to sell through Microsoft directly.  Could you please provide information on how we can sign up?  You can access step by step details here which cover how you become an Office 365 reseller through distribution, syndication and direct.  
  • How do we claim 'incidents' or support once certified?  You can manage your support benefits through the Membership essentials page on the partner portal, for more information about partner support see the following blog post.
  • How do I register for Partner of Record?  You can find up to date information on becoming a Partner of Record in this blog post.
  • I already have Action Pack subscription, do I have POR as part of that or do I need to apply separately?  Partner of Record is acknowledgement of your involvement in the sale / deployment of Microsoft Online Services – it usually involves your customer assigning your Microsoft Partner Network ID to their subscription. All Microsoft Partner Network members with an active profile are eligible for POR listing.
  • Gold Partners have tele-support - what can we utilise that dedicated contact for? The dedicated contact helps you manage your membership, utilise benefits, assists with support, helps you stay current and can connect you with other Microsoft staff.
  • How can I learn more about providing power BI services to my customers?  Is there Microsoft training available?  One of the best online resources is at The Microsoft Virtual Academy - there's specific PowerBI content there.    

For an opportunity to meet many of our Cloud Champions speakers face to face, register for the Microsoft Australia Partner Conference from 31st August to 3rd September on the Gold Coast.  We will be joined by speakers from our local teams in Australia as well as our global teams and a number of guest speakers from outside Microsoft.  The tracks span not only key solution areas but also building business skills and evolving your business model, and a specific track for marketing. View the full agenda to see the specific sessions.  As well as presentations we are running roundtable discussions called “In conversations” providing an opportunity to meet some of our key speakers in a smaller group environment.  Plus 90 minute workshops called “Masterclasses” providing an opportunity to get deeper into some areas of content and leave the event with a plan of action when you get back to your office. 

APC is a great opportunity to meet not only the Microsoft team, but importantly other like-minded Partners.  Many strong partnerships have grown from a connection at an Australia Partner Conference! 

We look forward to seeing you there.

Graph API updates: api-version=1.6 and a new Graph client library

MSDN Blogs - Tue, 06/23/2015 - 16:32
Api-version 1.6

The Azure AD Graph team is very pleased to announce the availability of the next version of Azure AD Graph REST API, api-version=1.6. There are no major changes from 1.5 to 1.6.

So you might ask why we’ve revved the version here.  In March of this year, we made some changes to the 1.5 API, believing that these were non-breaking changes.  However, due to an issue with a Graph client library dependency, this service side change caused a breaking change in the Graph client library (versions 2.0.5 and earlier).  We were forced to roll back the service side change, and pause any API changes on the service side.  More details can be found in this stack overflow question.

Since March, we’ve:

  1. introduced more validation testing gates before releasing new client libraries and
  2. introduced a fix in Graph client library version 2.0.8 that allows updates to the Graph REST API without breaking the client (by ignoring any unknown collections)

Now we’re releasing a new API version – api-version=1.6 – that will allow our team to release additional directory functionality and capabilities through the Graph REST API, without breaking any existing clients (2.0.6 and earlier).  As usual, you can try it out through Graph Explorer.
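
For example (using a hypothetical tenant name; the shape of the call follows the standard Azure AD Graph pattern), requesting the new version is just a matter of changing the api-version query string parameter:

GET https://graph.windows.net/contoso.onmicrosoft.com/users?api-version=1.6
Authorization: Bearer <access token>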

We will document the version changes through our regular MSDN documentation channels in our versioning document, as well as within our interactive reference documentation.

A new Azure AD graph client library

As part of this update, we’ll also be releasing graph client library version 2.1.0.  Versions 2.1.x will be tied to REST API version 1.6 (while graph client library 2.0.x will be tied to REST API version 1.5).

We’ll be updating the graph client library on a regular basis to make new functionality (and bug fixes) available to developers who prefer to use a client library vs pure REST API calls.

You can find the latest .Net (portable) client library on nuget.org here.

Feedback

We’re always interested to hear what you think, so please let us know if you have any feedback or suggestions for the Graph API or client library.

SQL Server Management Studio – June 2015 Release

MSDN Blogs - Tue, 06/23/2015 - 16:23
With the release of SQL Server 2016 Community Technology Preview (CTP) 2.1 (blog post link here), customers can for the first time experience the rapid preview model for their on-premises SQL Server 2016 development and test environments and can gain a faster time to production. In addition, we are delighted to announce our first "preview" release of SQL Server Management Studio! This is our first effort to release SQL Server Management Studio (SSMS) in a mechanism outside of the...(read more)

Custom News RSS Feeds from Bing and Google News

MSDN Blogs - Tue, 06/23/2015 - 16:19

You can find the Bing and Google News RSS feeds here and learn how to customize them:

http://scottge.net/2015/06/23/custom-news-rss-feeds-from-bing-and-google-news/

 

Cloud, Azure, DevOps, Love and Hate

MSDN Blogs - Tue, 06/23/2015 - 15:26
Early in 2007 or 2008, the one and only DigUnix got back from an overnight hack-a-thon. During this thon he was introduced to running Linux boxes in Amazon’s datacenters. Till then, he and I used to just remotely build Gentoo boxes from across the hall and get in trouble for screaming profanity in the office. Hey, we worked with Linux… it’s what you do. This is when I was introduced to AWS, and my love for the “cloud” was born. Of course, this was before we called it the cloud...(read more)

The new Get Data experience

MSDN Blogs - Tue, 06/23/2015 - 15:20

Over the last four months we’ve been adding support for new SaaS services in Power BI on a weekly basis, providing users with rich, out-of-the-box dashboards and reports in just a few clicks!  As we continue this release cadence, we wanted to make it even easier to find the content that matters to you.  With this in mind, we are releasing the biggest visual change to Power BI since December: a cleaner and simpler Get Data experience.  When you click on Get Data, you are now presented with a single screen with a set of categories to choose from:

Each of the groups in Get Data provides a simple shortcut to content.  All existing and new content packs and data sources appear in the appropriate sections, so don’t worry about finding your favorite source.  The sections are:

  • Services – all the SaaS content packs can be found here.
  • Files – importing Excel or Power BI Designer files, start here.
  • Big Data & More – connect directly to your on-premises SQL Server Analysis Services or Azure data sources.
  • Samples – start with a sample content pack focused on retail sales.

After you click on the type of data you want, you’ll be taken to the full list of content found in the selected group.  Let’s take a quick tour of each group.

Services

In this group, you’ll find all the out of the box content packs for the most popular SaaS services supported in Power BI. This includes QuickBooks Online, Salesforce, Google Analytics, Microsoft Dynamics CRM, and many more.  Each service has an icon making it easy to identify.

Once you find the service you’re interested in, click on the tile and you’re presented with a small description along with the Connect button.  If you want to find out more, click on the Learn More link and you’ll be presented with detailed information about the specific service and content pack.

 

Once you click the Connect button you’ll have a beautiful dashboard with supporting reports providing you detailed insight into your business data.  Customize the dashboard and reports to get exactly the view you want!

Files

Importing your Excel data and Power BI Designer reports continues to be very easy under the Files section.  You can import these files from your local computer and your OneDrive for Business or OneDrive Personal accounts.   Importing data from OneDrive makes it easy to ensure updates to the data can be refreshed in Power BI.  This means you and your co-workers, using the latest view of data, can make decisions about your business.

After you select the storage location of your files, browse to and select the specific file you want to import into Power BI.  Start exploring the data by clicking on the new dataset you’ve added, or ask questions using Power BI’s Q&A natural language service.  Pin the insights you find to your dashboard and start monitoring the health of your business.

Big Data & More

Connecting directly to your on-premises Microsoft SQL Server Analysis Services or Azure data sources is as easy as clicking a button and entering your credentials.  Once complete, you can begin to explore your data, build reports, and pin visuals to your dashboard. 

As you play with the new layout, send us feedback and additional feature requests via http://support.powerbi.com.  And stay tuned as we continue the weekly cadence of updates.  It’s going to be really exciting in the coming weeks!

DSC Resource Kit flourishes as open source

MSDN Blogs - Tue, 06/23/2015 - 14:16

We are excited to announce the recent updates made to the DSC Resource Kit since open-sourcing it on GitHub! We have been working hard on improving the coverage and robustness of DSC, and we have seen incredible engagement from the community in recent months.

That effort resulted in adding 32 new DSC resources across 8 modules and fixing bugs in 22 resources across 17 modules since the last release at the beginning of May! That means we’ve achieved our next milestone of 200 resources and are now up to a total of 212 DSC resources!

For those who love numbers as much as we do, here are a couple of other statistics from our GitHub repositories:

  • 19 contributors from outside of the PowerShell team participated in DSC development
  • 109 pull requests have been merged
  • 28 issues have been closed                                                    

Let’s not forget to mention that many of the bug fixes and almost all (30) of the new resources come from the community. We are thrilled to see such great enthusiasm to contribute to and help improve DSC, and we encourage you to continue this effort (see the “How can I contribute?” section for details on how to get started). Thank you, you are awesome!

You may have noticed that today’s announcement does not mention anything about a new DSC Resource Kit Wave, and you may be wondering why that’s the case. The answer is simple: waves are gone!

So far (till Wave 10 in April 2015) we’ve been releasing Resource Kit Waves on TechNet and the PowerShell Gallery. Since moving DSC Resource Kit to GitHub, we will no longer release to TechNet, but will update modules regularly in the PowerShell Gallery.

We will be working on development of DSC resources and accepting contributions on GitHub on an ongoing basis. Once updates to a given DSC resource module are significant enough to release a new version, we will pull the recent code from GitHub and publish it to the PowerShell Gallery with a new version number.

Periodically, you can expect a blog post here describing what has been released recently – just as this one does :).

 

Where can I find all released DSC modules?

To see a list of all released DSC Resource Kit modules, go to the PowerShell Gallery and display all modules tagged as DSCResourceKit. You can also type a module’s name in the search box on the upper right side of the PowerShell Gallery to find a specific module.

Another way is to go directly to a specific module by typing its URL:

http://www.powershellgallery.com/packages/<Module_Name>

e.g.:

http://www.powershellgallery.com/packages/xWebAdministration

Of course, you can always use PowerShellGet (available in WMF 5.0) as well:

Find-DscResource 

 

How can I install DSC resources from the PowerShell Gallery?

We recommend that you use PowerShellGet to install DSC resource modules:

Install-Module –Name <Module_Name>

e.g.

Install-Module –Name xWebAdministration

If you have previous versions of modules installed, you can update them by calling (from an elevated PowerShell prompt):

Update-Module

If there is an issue you are particularly concerned about, watch the version number in the PowerShell Gallery for updates to that particular resource. You can also file an Issue against the module on GitHub to help get it fixed.

After installing the modules, you can discover all of the resources available to your local system by running:

Get-DscResource

As with the previous Resource Kits, all the resources are experimental. The “x” prefix in the names stands for experimental – which means these resources are provided AS IS and are not supported through any Microsoft support program or service.

 

How can I find DSC modules on GitHub?

As we mentioned in April, we’ve open sourced development of DSC resources on GitHub. You can see the most recent state of all resources by going to their GitHub pages at https://github.com/PowerShell/<Module_Name>, e.g. for xCertificate module, go to: https://github.com/PowerShell/xCertificate.

All DSC modules are also listed as submodules of the DscResources repository, so that you can see them in one place (click the xDscResources folder).

 

How can I contribute?

You are more than welcome to contribute to the development of DSC resource modules, and there are many ways to do it. You can create new DSC resources or modules, add test automation, improve documentation, or fix existing issues or open new ones. Most of the information you need to get started can be found in our contributing guide.

If you are not sure what you can do but would like to help anyway, please take a look at the list of open issues for the DscResources repository. You can also check issues opened for specific modules by going to https://github.com/PowerShell/<Module_Name>/issues, e.g. https://github.com/PowerShell/xPSDesiredStateConfiguration/issues.

Your help in developing DSC is much appreciated!

 

What has been recently released?

You can see a detailed summary of all recent changes in the table below.

If you want to see a change log for previous versions, go to the GitHub repository page for a given module (see section “How can I find DSC modules on GitHub?” for details).

You may be wondering why some of the modules have two versions listed. The reason for that is that since Wave 10 we’ve released them to the PowerShell Gallery twice – in May and June. Some of the modules were updated only once, but others got fixes in both releases – for those we list both versions together with the changes they contain.

Module name

Version

Description

xActiveDirectory

2.4.0.0

  •   Added xADRecycleBin resource
  •   Minor fixes for xADUser resource

xCertificate

1.0.0.0

(New)

  •   Initial public release of the xCertificate module with the following resources:
    •   xCertReq

xChrome

1.0.1.0

  •   Minor changes in module manifest

xComputerManagement

1.3.0

  •   Fixed issue with Test-TargetResource in xComputer resource when domain or workgroup name is not specified
  •   Added tests

xDatabase

1.2.0

  •   Improved support for credentials

xDhcpServer

1.2

  •   Fixed "Cannot set default gateway on
      xDhcpServerOption" bug

xDisk

Deprecated

  •   xDisk module has been deprecated and replaced
      by xStorage

xDnsServer

1.1

  •   Added xDnsARecord resource

xDscResourceDesigner

1.4.0.0

  •   Added support and tests for -FriendlyName on
      Update-xDscResource
  •   Added tests for creating and updating
      resources
  •   Minor fixes for Update-xDscResource

1.3.0.0

Merged changes from PowerShell.org fork:

  •   Removed requires -RunAsAdministrator
  •   Added support for Enum types (with associated
      ValueMap)
  •   Added support for EmbeddedInstances other than
      MSFT_Credential and MSFT_KeyValuePair
  •   Fixed parameter name in Test-xDscResource
      comment-based help to match actual command definition
  •   Updated Test-xDscResource to use a process
      block, since it accepts pipeline input
  •   Fixed invalid use of try/catch/finally in
      Test-MockSchema and Test-DscResourceModule
  •   Updated code related to Common parameters, now
      handles all common parameters properly based on command metadata.
  •   Added very basic tests for Test-xDscResource

xExchange

1.1.0.0

  •   xExchAutoMountPoint resource: Added
      parameter EnsureExchangeVolumeMountPointIsLast
  •   xExchExchangeCertificate resource: Added error
      logging for the Enable-ExchangeCertificate cmdlet
  •   xExchExchangeServer resource: Added pre-check
      for deprecated Set-ExchangeServer parameter, WorkloadManagementPolicy
  •   xExchJetstressCleanup resource: When
      OutputSaveLocation is specified, Stress* files will also now be saved
  •   xExchMailboxDatabase resource:
    • Added AdServerSettingsPreferredServer parameter
    • Added SkipInitialDatabaseMount parameter, which can help in environments where databases need time before they can mount successfully after creation
    • Added better error logging for Mount-Database
    • Databases will only be mounted at initial database creation
        if MountAtStartup is $true or not specified
  •   xExchMailboxDatabaseCopy resource:
    • Added SeedingPostponed parameter
    • Added AdServerSettingsPreferredServer parameter
    • Changed so that ActivationPreference will only be set if the number of existing copies for the database is greater than or equal to the specified ActivationPreference
    • Changed so that a seed of a new copy is only performed if SeedingPostponed is not specified or set to $false
    • Added better error logging for Add-MailboxDatabaseCopy
    • Added missing tests for EdbFilePath and LogFolderPath
  •   xExchOwaVirtualDirectory resource: Added missing test for InstantMessagingServerName
  •   xExchWaitForMailboxDatabase resource: Added AdServerSettingsPreferredServer parameter
  •   ExchangeConfigHelper.psm1: Updated DBListFromMailboxDatabaseCopiesCsv so that the DB copies that are returned are sorted by Activation Preference in ascending order.

xHyper-V

2.4.0.0

  •   Fixed VM power state issue in xVMHyperV
      resource

2.3.0

  •   Fixed check for presence of param
      AllowManagementOS

xNetworking

2.2.0.0

  •   Changes in xFirewall resources to meet
      Test-xDscResource criteria

xPhp

1.1.0.0

  •   Updated module name to support WMF 5

xPSDesiredStateConfiguration

3.3.0.0

  •   Added support to xPackage resource for checking
      different registry hives
  •   Added support for new registration properties in
      xDscWebService resource

3.2.0.0

  •   Fixed problems with file names containing square
      brackets in xArchive resource
  •   Fixed default culture issue in xDSCWebService
      resource
  •   Security enhancements in xPackage resource

xRemoteDesktopAdmin

1.0.3.0

  •   Updated examples

xRobocopy

1.0.0.0

(New)

  •   Initial public release of the xRobocopy module
      with the following resources:
    •   xRobocopy

xSharePoint

0.3.0.0

  •   Fixed issue with detection of Identity
      Extensions in xSPInstallPrereqs resource
  •   Changes to comply with PSScriptAnalyzer rules

0.2.0.76

(New)

  •   Initial public release of the xSharePoint module
      with the following resources:
    • xBCSServiceApp
    • xSPCacheAccounts
    • xSPClearRemoteSessions
    • xSPCreateFarm
    • xSPDiagnosticLoggingSettings
    • xSPDistributedCacheService
    • xSPFeature
    • xSPInstall
    • xSPInstallPreReqs
    • xSPJoinFarm
    • xSPManagedAccount
    • xSPManagedMetadataServiceApp
    • xSPManagedPath
    • xSPSearchServiceApp
    • xSPSecureStoreServiceApp
    • xSPServiceAppPool
    • xSPServiceInstance
    • xSPSite
    • xSPStateServiceApp
    • xSPUsageApplication
    • xSPUserProfileServiceApp
    • xSPUserProfileSyncService
    • xSPWebApplication

xSmbShare

1.1.0.0

  •   Fixed bug in xSmbShare resource which was
      causing Test-TargetResource to return false negatives when more than three parameters were specified.

xSqlServer

1.3.0.0

  •   Made features case-insensitive in
      xSqlServerSetup resource

xStorage

1.0.0.0

(New)

  •   Initial release of the xStorage module with the
      following resources (contains resources from the deprecated xDisk module):
    •   xDisk (from xDisk)
    •   xMountImage
    •   xWaitForDisk (from xDisk)

xTimeZone

1.1.0.0

  •   Added tests

xWebAdministration

1.6.0.0

  •   Fixed bug in xWebsite resource regarding
      incorrect name of personal certificate store

1.5.0.0

  •   Fixed issue with Get-Website when there are
      multiple sites in xWebsite resource
  •   Fixed issue when trying to add a new website
      when no websites currently exist in xWebsite resource

xWebDeploy

1.0.0.0

(New)

  •   Initial release of the xWebDeploy module with the
      following resources:
    • xWebDeploy
    • xWebPackageDeploy

xWindowsEventForwarding

1.0.0.0

(New)

  •   Initial release of the xWindowsEventForwarding
      module with the following resources:
    •   xWEFCollector
    •   xWEFSubscription

xWindowsUpdate

2.0.0

  •   Minor changes in documentation

xWinEventLog

1.0.0.0

  •   Fixed the Set-TargetResource function in the
      xWinEventLog resource so that it does not reapply settings when the resource is already in the desired state.

 

Questions, comments?

If you're looking into using PowerShell DSC but have questions, are blocked by issues with the current resources, or are missing a resource you need, let us know in the comments or create an issue on GitHub.

 

Karol Kaczmarek          

Software Engineer

PowerShell Team



 

Using Win2D to apply effects on your files

MSDN Blogs - Tue, 06/23/2015 - 14:08

It’s been a long time since my last post about C#, but I’m still using it, mainly for a personal project: UrzaGatherer 3.0.

Version 2.0 was done using WinJS and JavaScript, but because I love discovering new things, I decided that version 3.0 would be developed using C# and XAML for Windows 10.

One of the features I’m working on is a blurred lock screen background. Basically, the idea is to pick a card and use its picture as the lock screen background.

The main problem I was facing is that the card scans are in too low a resolution. So, to get rid of the inevitable aliasing produced by scaling my pictures up, I decided to add some Gaussian blur.

The first version of my blurred lock screen background used a brute-force approach: going through all the pixels and applying my filter. On my desktop PC: no problem. But on my phone (remember, this is a Windows 10 universal application), the operation was too slow.

Enter Win2D!

Thanks to it, I was able to write a method that blurs my files using the GPU and DirectX. The result: faster processing and, at the same time, less battery consumption.

Even the code is pretty simple:

byte[] bytes;
int width;
int height;

// Load the source picture and extract its raw pixel data with a BitmapDecoder
var file = await Package.Current.InstalledLocation.GetFileAsync("test.png");
using (var stream = await file.OpenAsync(FileAccessMode.Read))
{
    BitmapDecoder decoder = await BitmapDecoder.CreateAsync(stream);
    PixelDataProvider pixelData = await decoder.GetPixelDataAsync();
    bytes = pixelData.DetachPixelData();
    width = (int)decoder.PixelWidth;
    height = (int)decoder.PixelHeight;
}

// Create a Win2D device and an offscreen render target, then wrap the pixels in a CanvasBitmap
var device = new CanvasDevice();
var renderer = new CanvasRenderTarget(device, width, height, 72);
var bitmap = CanvasBitmap.CreateFromBytes(device, bytes, width, height, DirectXPixelFormat.B8G8R8A8UIntNormalized);

// Draw the bitmap through a GaussianBlurEffect into the render target (this is where the GPU does the work)
using (var ds = renderer.CreateDrawingSession())
{
    var blur = new GaussianBlurEffect();
    blur.BlurAmount = 8.0f;
    blur.BorderMode = EffectBorderMode.Hard;
    blur.Optimization = EffectOptimization.Quality;
    blur.Source = bitmap;
    ds.DrawImage(blur);
}

// Save the blurred result (saved in PNG format, so the file is named temp.png rather than temp.jpg)
var saveFile = await ApplicationData.Current.LocalFolder.CreateFileAsync("temp.png", CreationCollisionOption.ReplaceExisting);
using (var outStream = await saveFile.OpenAsync(FileAccessMode.ReadWrite))
{
    await renderer.SaveAsync(outStream, CanvasBitmapFileFormat.Png);
}

So basically:

  • Open the picture and use a BitmapDecoder to get the bytes and dimensions
  • Create a CanvasDevice and a CanvasRenderTarget to get offscreen rendering capabilities
  • Create the effect you want to use (GaussianBlurEffect here)
  • Apply the effect
  • Save your file


Insanely simple, right?

Before and after screenshots show the original picture and the blurred result.

Win2D is a great library that you can find here: https://github.com/Microsoft/Win2D 

Documentation can be found here: http://microsoft.github.io/Win2D/html/Introduction.htm

A series of posts about Win2D effects that you may find interesting: http://i1.blogs.msdn.com/b/win2d/archive/2014/10/30/add-sizzle-to-your-app-with-image-effects-part-1.aspx
