MSDN Blogs

Locating our files on the device

Tue, 12/02/2014 - 03:07

In a previous post we saw how important data is to our applications, and everything that comes with it. Today, instead, we are going to talk about storing and accessing that data, or files available on the device where our application runs.

Just as when choosing the type of file in which to store our data, the place where we store or read this data is also important, and it will vary depending on the kind of file and how it is used.

Installation Folder

This is the directory created when an application is installed. It holds all the files that were included in the solution before the installation package was created, so it is a read-only directory.

Its main use is for static files or for the default data of our application, such as a default image shown when nothing has been added to the application yet.

To access these items, we must prefix their path with “ms-appx:///”, which indicates that the file lives in the installation folder.

<Image Source="ms-appx:///images/image.png"
       HorizontalAlignment="Center"
       VerticalAlignment="Center"/>

var file = await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///texts/text.txt"));

 

Alternatively, if we want to access these files from code we also have the Windows.ApplicationModel.Package.Current.InstalledLocation API, which gives us the location of these files just like any other file resource. We obtain the folder using this API and then read the files stored in it, but without being able to modify them.

var folder = Windows.ApplicationModel.Package.Current.InstalledLocation;
var files = await folder.GetFilesAsync();

 

App Data folder

This is the storage space reserved for the application itself, which no other application can access. It is divided into three directories: Roaming, Local and Temp.

Roaming

This folder mainly stores the configuration files that will be shared with other installations of the application, on both Windows and Windows Phone, that are linked to the same Microsoft account.

Although more files can be stored here, they will only be synchronized up to the established quota; the remaining files stay stored but are not synchronized.

var roamingData = Windows.Storage.ApplicationData.Current.RoamingFolder;
StorageFile file = await roamingData.CreateFileAsync("file.txt", CreationCollisionOption.OpenIfExists);

var roamingSettings = Windows.Storage.ApplicationData.Current.RoamingSettings;
roamingSettings.Values["exampleSettings"] = "Amplio";
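If you need to check how much of this data will actually roam, the ApplicationData class also exposes the quota; a quick sketch:

// Maximum amount of roaming data (in kilobytes) that the system will synchronize for this app.
ulong roamingQuotaInKB = Windows.Storage.ApplicationData.Current.RoamingStorageQuota;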

Local

This is where we can store the local data of our application, which will only be available on that device. The maximum size of the files stored here is limited only by the capacity of the device in question. These files can be text files into which we dump data, or configuration files.

In addition, if we update the application, this data is preserved.

var localSettings = Windows.Storage.ApplicationData.Current.LocalSettings;
Object value = localSettings.Values["exampleLocalSettings"];

StorageFolder folder = ApplicationData.Current.LocalFolder;
StorageFile file = await folder.CreateFileAsync("file.txt", CreationCollisionOption.OpenIfExists);

 

Temp

In this directory we store information that we consider temporary, since we have no guarantee it will still be there the next time we open the application. These files may be deleted if the device is running low on storage.

StorageFolder folder = ApplicationData.Current.TemporaryFolder;
StorageFile file = await folder.CreateFileAsync("file.txt", CreationCollisionOption.OpenIfExists);

 

 

We also have the ClearAsync method, which applies to the three folders mentioned above and deletes the entire contents of the specified directory. This is useful for returning the application to its initial state.

await Windows.Storage.ApplicationData.Current.ClearAsync(Windows.Storage.ApplicationDataLocality.Roaming);

SD Card

Having covered the application's internal storage, it is time to look at other locations where we can store and read data. Many current mobile devices come with a micro SD card slot, and since Windows Phone 8.1 we can read from and write to external cards.

Because it is considered a shared resource, if we want our application to have access to the SD card we need to declare this requirement in the application manifest. In the manifest we must also declare every file type we want to be able to access.

To get files from the SD card, the first thing to do is check that a card is present in the device. To do this we get the list of removable storage devices, and if there is any item in that list we have our target folder.

var devices = Windows.Storage.KnownFolders.RemovableDevices;
var sdCards = await devices.GetFoldersAsync();
if (sdCards.Count == 0) return;
StorageFolder firstCard = sdCards[0];

 

Once we have the card's directory, we can create or delete folders and files in the same way we do locally, but we can only work with the file types declared in the manifest.
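For example, a minimal sketch of creating a text file on the card, assuming the .txt file type has been declared in the manifest:

// Works only for file types declared in the app manifest (here, .txt).
StorageFile logFile = await firstCard.CreateFileAsync("log.txt", CreationCollisionOption.ReplaceExisting);
await FileIO.WriteTextAsync(logFile, "Saved to the SD card");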

KnownFolders

The KnownFolders API makes developers' lives much easier when it comes to finding files on the device. Instead of having to search every possible location, the system offers us a list of all the files of a given kind, whether images, music or videos.

In addition, files that we store in the known folders are available to all other applications.

To access the music, pictures or videos libraries we must add the corresponding capability to the application manifest. After that, we can access whatever items are found.

var pictures = await Windows.Storage.KnownFolders.PicturesLibrary.GetFilesAsync();

OneDrive

One of the main characteristics people look for is being always connected, and having our data stored somewhere so that, if something happens to our device, there is a backup of our information we can go to.

We have already seen that through the Roaming folder we can share settings and, to a lesser extent, small amounts of data. But as soon as we want to store a larger amount of data, bigger files such as images, or similar items, we are forced to use other forms of storage.

Thanks to the OneDrive API, we can store our files in each user's personal OneDrive, which keeps the data accessible.

Of course, we must always check in code that the items were actually found, because since it is the user's personal OneDrive, they can freely move things around.

First of all, we must sign in with the user's OneDrive account in order to gain access to this storage. To do so we create a few members for the connection, and then carry out the authentication process.

private LiveConnectClient _liveClient;
private LiveAuthClient _liveAuth;
private LiveLoginResult _liveResult;
private LiveConnectSession _liveSession;
private string[] _requiredScopes;

public async Task<bool> SignIn(string clientId)
{
    try
    {
        if (_liveSession == null)
        {
            if (_requiredScopes == null)
            {
                // setting scopes by default
                _requiredScopes = DefaultScopes;
            }
            _liveAuth = new LiveAuthClient(clientId);
            _liveResult = await _liveAuth.InitializeAsync(_requiredScopes);
            if (_liveResult.Status != LiveConnectSessionStatus.Connected)
                _liveResult = await _liveAuth.LoginAsync(_requiredScopes);
            _liveSession = _liveResult.Session;
            _liveClient = new LiveConnectClient(_liveSession);
            return true;
        }
    }
    catch
    {
        return false;
    }
    return false;
}

So if the credentials were entered incorrectly or the operation was cancelled, we return false, whereas if everything went well we get an affirmative response and the data we need is now stored.

Before we start uploading or downloading files, we need to know how to create folders and access them. We begin by creating a folder inside the user's OneDrive, since this is one of the basic operations for keeping the available space minimally organized.

With this method we create a folder at the root of OneDrive and return the folder's identifier.

public async Task<string> CreateFolder(string folderName)
{
    var folderData = new Dictionary<string, object> { { "name", folderName } };
    LiveOperationResult operationResult = await _liveClient.PostAsync("me/skydrive", folderData);
    dynamic result = operationResult.Result;
    string id = string.Format("{0}", result.id);
    return id;
}

 

Next, we need to be able to access these folders to see which files they contain. The process is: if the user is authenticated in our application, we look for the specified folder, returning it if we find it and creating it otherwise.

public async Task<string> GetFolder(string folderName)
{
    var folderId = string.Empty;
    // the session is already established, so let's find our folder by its name
    if (_liveClient != null)
    {
        LiveOperationResult result = await _liveClient.GetAsync("me/skydrive/files/");
        var data = (List<object>)result.Result["data"];
        foreach (IDictionary<string, object> content in data)
        {
            if (content["name"].ToString() == folderName)
            {
                // The folder has been found!
                folderId = content["id"].ToString();
            }
        }
        // the folder hasn't been found, so let's create a new one
        if (string.IsNullOrEmpty(folderId))
            folderId = await CreateFolder(folderName);
    }
    return folderId;
}

Once we can create folders and access them, we can start uploading and downloading files.

Uploading is a relatively simple process in which we specify the destination folder, the file name and the data to upload, and our LiveConnectClient object takes care of uploading the file to OneDrive, indicating whether or not the upload succeeded.

public async Task<OperationStatus> UploadFile(string folderName, string fileName, Stream fileStream)
{
    try
    {
        var folderId = await GetFolder(folderName);
        if (_liveClient != null)
        {
            await _liveClient.UploadAsync(folderId, fileName, fileStream, OverwriteOption.Overwrite);
        }
        return OperationStatus.Completed;
    }
    catch
    {
        return OperationStatus.Failed;
    }
}

Downloading a file, on the other hand, is slightly different, since we first have to find the file in the user's cloud storage and, if we find it, start the download.

public async Task<string> DownloadFile(string folderName, string fileName)
{
    var fileId = string.Empty;
    // looking for the file
    if (_liveClient != null)
    {
        var folderId = await GetFolder(folderName);
        LiveOperationResult fileResult = await _liveClient.GetAsync(folderId + "/files");
        var fileData = (List<object>)fileResult.Result["data"];
        foreach (IDictionary<string, object> content in fileData)
        {
            if (content["name"].ToString() == fileName)
            {
                // The file has been found!
                fileId = content["id"].ToString();
                break;
            }
        }
        await _liveClient.BackgroundDownloadAsync(fileId + "/content",
            new Uri("/shared/transfers/" + fileName, UriKind.Relative));
    }
    return fileId;
}

Wrapping up

We have just seen how, in a few simple steps, we can access multiple locations where we can store or retrieve data: from the installation folder, where we keep our default items, to OneDrive, where we keep our data on the internet. This opens up a wide range of options for saving our information which, together with our earlier article on ways to store data, can help us easily extend the functionality of our application.

Best regards,

Josevi Agulló (@josevi_7)

Day 2 – DevOps and Continuous Delivery!

Tue, 12/02/2014 - 01:00
Welcome to day 2 of Microsoft's advent calendar for developers. Remember that every day you share on Twitter or Facebook gives you a chance to win a Surface Pro 3, but only one share per day per channel counts! DevOps is an increasingly important part of Application Lifecycle Management and a growing area of interest that companies need in order to develop and deploy quality applications at a faster pace. Release Management for Visual Studio...(read more)

Azure: New Marketplace, Network Improvements, New Batch Service, Automation Service, and More

Tue, 12/02/2014 - 00:49

[Original post] Azure: New Marketplace, Network Improvements, New Batch Service, Automation Service, more

[Originally published] 2014-10-28

Today we released a number of great updates to Microsoft Azure, including:

  • Marketplace: Announcing the Azure Marketplace and partnerships with key technology partners
  • Networking: Network security groups, multiple virtual network interfaces (NICs), forced tunneling, source IP affinity, and more
  • Batch computing: Public preview of the new Azure Batch computing service
  • Automation: General availability of the Azure Automation service
  • Antimalware: General availability of Microsoft Antimalware for Virtual Machines and Cloud Services
  • Virtual Machines: General availability of more VM extensions – PowerShell DSC, Octopus, VS Release Management, and more

All of these updates are available to use immediately (note that some features are still in preview). Below are more details about each of them:

Marketplace: announcing the Azure Marketplace and partnerships with key technology partners

Last week at our Cloud Day event in San Francisco, I announced the new Azure Marketplace, which helps to better connect Azure users with partners, ISVs and startups. With just a few mouse clicks you can now quickly discover, purchase, and deploy any number of solutions directly into Azure.

Exploring the Marketplace

The Marketplace item is pinned by default to the home screen of the Azure Preview Portal, and you can click it to explore the Azure Marketplace:

Clicking the Marketplace title lets you browse a large selection of applications, VM images, and services that you can add to your Azure subscription:

Using the Marketplace provides a super easy way to take advantage of a rich ecosystem of applications and services integrated to run well on Azure. Today's Marketplace release includes multi-virtual-machine templates for running Hadoop clusters powered by Hortonworks, Linux virtual machines with Ubuntu, CoreOS, SUSE and CentOS, Microsoft SharePoint Server farms, Cassandra clusters powered by DataStax, and many security virtual appliances.

You can click any item in the Azure portal to learn more about it and optionally configure it. A simple creation wizard then lets you configure how and where it will run, and shows any additional pricing required for the application/service/VM image you select.

For example, here is all it takes to create an eight-node DataStax Enterprise cluster:

Solutions you purchase through the Marketplace are automatically billed to your Azure subscription (saving you from having to set up a separate payment method). VM images support the ability to bring your own license or rent the image license by the hour (which is ideal for proof-of-concept solutions or scenarios where you only need the solution for a short time). Azure Direct customers, as well as customers paying with an Enterprise Agreement, can use the Azure Marketplace starting today.

You can learn more about the Azure Marketplace and browse its contents here.

Networking: lots and lots of new features and improvements

This week's Azure update includes a lot of new Azure networking capabilities. You can start using these new networking features immediately in the North Europe region, and they will be available in all regions worldwide in November 2014. The new networking capabilities include:

Network security groups

You can now create network security groups to define access control rules for inbound and outbound traffic to a virtual machine, or a group of virtual machines, in a subnet. The security groups and their rules can be managed and updated independently of the life cycle of the VMs.

Support for multiple virtual network interfaces (NICs)

You can now create and manage multiple virtual network interfaces (NICs) on a virtual machine. Multi-NIC support is a fundamental requirement for most network virtual appliances deployed in Azure, so enabling it in Azure makes a much richer set of network virtual appliances possible.

Forced tunneling

You can now redirect, or "force", all internet-bound traffic originating in a cloud application back to an on-premises network through a site-to-site VPN tunnel for inspection and auditing. This is a critical security capability for enterprise-grade applications.

ExpressRoute enhancements

You can now share a single ExpressRoute connection across multiple Azure subscriptions. In addition, a single virtual network in Azure can now be connected to multiple ExpressRoute circuits, enabling much richer backup and disaster recovery scenarios.

New VPN gateway sizes

To meet the growing throughput needs of hybrid connectivity and the increasing number of cross-premises sites, we are releasing a higher-performance Azure VPN gateway. This allows faster ExpressRoute connections and site-to-site VPN gateways with more tunnels.

Operation and audit logs for VNet gateways and ExpressRoute

You can now view the operation logs of virtual network gateways and ExpressRoute circuits. The Azure portal shows the operation logs and information about all the API calls you made, as well as important infrastructure changes such as scheduled gateway updates.

Advanced virtual network gateway policies

We have enabled a capability that lets you control the encryption of tunnels between virtual networks. You can now choose between 3DES, AES128, AES256 and null encryption, and you can also enable Perfect Forward Secrecy (PFS) for IPsec/IKE gateways.

Source IP affinity

The Azure Load Balancer now supports a new distribution mode called source IP affinity (also known as session affinity or client IP affinity). You can now load-balance traffic based on a 2-tuple (source IP, destination IP) or 3-tuple (source IP, destination IP, protocol) distribution mode.

Nested policies for Traffic Manager

You can now create nested policies for Traffic Manager. This provides enormous flexibility for creating powerful load-balancing and failover schemes to support larger, more complex deployments.

Portal support for managing internal load balancers, and reserved and instance IP addresses for virtual machines

You can now use the Azure Preview Portal to create and manage internal load balancers, as well as reserved and instance IP addresses for virtual machines.

Automation: general availability of the Azure Automation service

I am very excited to announce the general availability of the Azure Automation service. Azure Automation lets you create, deploy, monitor and maintain resources in your Azure environment using a highly scalable and reliable workflow engine. The service can orchestrate time-consuming and frequently repeated operational tasks across Azure and third-party systems, while reducing operating expenses.

Azure Automation lets you build runbooks (PowerShell workflows) to describe your management processes, and it provides a secure global store for assets so that you do not need to hard-code sensitive information inside your runbooks; simply attach a schedule and a runbook will be started automatically.

Runbooks can automate a huge range of scenarios – from simple, day-to-day manual tasks to complex processes that span multiple Azure services and third-party systems. Because Automation is built on PowerShell, you can take advantage of the many PowerShell modules that already exist, or author your own to integrate with third-party systems.

Creating and editing runbooks

You can create a new runbook from scratch, or open one by importing an existing template from the runbook gallery.

Runbooks can also be edited directly in the management portal.

Pricing

Available as a pay-as-you-go service, Automation is billed based on the number of job run minutes used in a given Azure subscription. 500 minutes of free job run time are also included every month for Azure customers.

Learn more

To learn more about Azure Automation, check out the following resources:

Batch service: preview of Azure Batch – a new job scheduling service for parallel and HPC applications

I am very excited to announce the preview of our new Azure Batch service. This new platform service provides "job scheduling as a service" with automatic scaling of compute resources, making it easy to run large-scale parallel and high-performance computing (HPC) work in Azure. Once you submit a job, we start the virtual machines, run your tasks, handle any failures, and then shut everything down when the work completes.

Azure Batch is the job scheduling engine we use internally to manage encoding for Azure Media Services and to test Azure itself. With this preview we are excited to expand our SDK with a new application framework from GreenButton, a company Microsoft acquired earlier this year. The Azure Batch SDK makes it easy to cloud-enable parallel, cluster and HPC applications by describing the resources, data, and one or more compute tasks that a job requires.

Azure Batch can be used to run large numbers of similar tasks or applications in parallel, programmatically. A command-line program or script takes a set of data files as input, processes the data in a series of tasks, and produces a set of output files. Examples of batch workloads that customers run in Azure today include calculating risk for banks and insurance companies, designing new consumer and industrial products, gene sequencing and developing new drugs, searching for new energy sources, rendering 3D animations and transcoding video.

Azure Batch makes it easy for these customers to use hundreds, thousands, tens of thousands of processor cores, or more, on demand. With job scheduling as a service, Azure developers can focus on using batch computing in their applications and delivering services, without needing to build and manage a work queue, scale resources up and down efficiently, dispatch tasks, or handle failures.

The scale of Azure helps batch computing customers get their work done faster, try out different design alternatives, run larger and more precise models, and test a large number of different scenarios without having to invest in and maintain large clusters.

Learn more about Azure Batch here, and start using it in your applications today.

Virtual Machines: general availability of Microsoft Antimalware for Virtual Machines and Cloud Services

I am excited to announce that the Microsoft Antimalware security extension for Virtual Machines and Cloud Services is now generally available. We are releasing it as a free capability, so you can use it at no additional charge.

The Microsoft Antimalware security extension helps you identify and remove viruses, spyware and other malicious software. It provides real-time protection against the latest threats and also supports on-demand scheduled scanning. Enabling it is a security best practice for applications hosted either on-premises or in the cloud.

Enabling the Antimalware extension

You can select and configure the Microsoft Antimalware security extension for virtual machines using the Azure preview portal, Visual Studio, or the APIs/PowerShell. Antimalware events are then logged, via Azure Diagnostics, to the customer-configured Azure Storage account, and can be piped to HDInsight or a SIEM tool for further analysis. More information is available in the Microsoft Antimalware whitepaper.

To enable antimalware on an existing virtual machine: in the Azure preview portal, select EXTENSIONS on the virtual machine, click ADD in the command bar and select Microsoft Antimalware, then click CREATE and customize any settings:

Virtual Machines: general availability of more VM extensions

In addition to enabling the Microsoft Antimalware extension for virtual machines, today's release includes support for a lot more VM extensions that you can enable on your virtual machines. These can likewise be configured by selecting EXTENSIONS on a virtual machine in the Azure preview portal (the screenshots are the same as in the Microsoft Antimalware section above).

The new extensions enabled today include:

PowerShell Desired State Configuration

The PowerShell Desired State Configuration extension can be used to configure Azure virtual machines using Desired State Configuration (DSC). DSC lets you specify the software environment you want your machines configured with. DSC configuration can also be automated using the Azure PowerShell SDK, and you can push configurations to any Azure virtual machine and have them enacted automatically.

For more details, see the Desired State Configuration blog post.

Octopus

Octopus simplifies the deployment of ASP.NET web applications, Windows services and other applications by automatically configuring IIS, installing services and making configuration changes. Integrating Octopus with Azure was one of the highly requested features on the Azure UserVoice, and with this integration we have simplified the deployment and configuration of Octopus on a VM.

Visual Studio Release Management

Visual Studio Release Management is a continuous-delivery solution that automates the release process by capturing all the environments for your releases in TFS. Visual Studio Release Management is integrated with TFS, and you can configure multi-stage release pipelines for automated deployment and validate your application across multiple environments. With the new Visual Studio Release Management extension, virtual machines come pre-configured with the core agent needed to operate Release Management.

Summary

Today's Microsoft Azure release enables a lot of great new capabilities and makes building applications hosted in the cloud even easier.

If you don't already have an Azure account, you can sign up for a free trial and start using all of the features above. Then visit the Microsoft Azure Developer Center to learn more about how to build applications with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

How Azure helps UCAS and students survive results day

Tue, 12/02/2014 - 00:30

Arguably the biggest moment in the life of any student (up until that point) is that day in August when they receive their exam results and find out whether their place at the university of their choice has been confirmed. On that day, you can forgive the individual students for only worrying about their own personal results. But what about the people whose job it is to worry about all the students’ results? We are of course talking about UCAS, and the part it plays in bridging the gap between Further Education and Higher Education.

Every year, UCAS processes the applications of over 700,000 students in the UK, who through their system, make over two and a half million choices. On results day, the UCAS system has to deal with over 5 million page views on UCAS.com, and the Track service must be able to adequately support in excess of 1.2 million logins. This is why UCAS looks to the cloud - specifically Microsoft Azure - to provide a robust and resilient system for the students and their information.

There were three main ‘must haves’ for UCAS when they began transitioning their systems to the cloud two years ago: reliability, flexibility and security. However, with a stable stream of traffic all year punctuated by a major spike on results day, UCAS needed a solution that could be relied upon to withstand the one-off phenomenal demand but not be wasteful for the rest of the year. The scalability of the Azure platform allows over 700,000 applicants to log in at the same time on results day without the system suffering from abnormally high volumes of traffic and usage.

UCAS is just one example of a successful use of Microsoft Azure in education, but for more information please take a look at the following SlideShare:

Microsoft Azure in Education from Microsoft Education UK


Registration for the DevCon 2015 conference is now open

Tue, 12/02/2014 - 00:28

We are happy to announce that registration for Microsoft's main technology conference, DevCon 2015, is now open!

The fifth anniversary conference will take place on May 20–21, 2015.



The format of the conference remains the same: a two-day off-site event at a nature resort just outside Moscow, and everything is already included in the ticket price:

  • Meals and hotel accommodation;
  • Access to the main programme;
  • Attendance at the master classes;
  • Participation in round tables and focus groups;
  • Transfer from Moscow and back;
  • Entertainment and sports programme;
  • Evening programme.


Please note that a special, more attractive price applies until January 16, 2015; you can find more details on the "Terms of participation" page.

Register now! We know from past conferences that tickets run out several months before the event. Hurry to reserve your place.

What we will tell you (and show you)


We pay a great deal of attention to the quality of the programme and what you will be able to learn. You will have a unique opportunity to talk to Microsoft representatives directly involved in developing and building the products, as well as to people from other companies who have deep practical experience with our technologies. The main conference grid will include more than fifty talks on the most relevant topics. The main programme will cover four tracks:

Windows


Building modern applications for every platform running Windows – from tiny chips, wearables and phones to PCs, game consoles and big screens. Developing, monetizing and promoting applications for the app Stores.

Azure & Web


A detailed introduction to ASP.NET 5, the new version of the web development platform, and all the tools added to the web development tooling. Building web applications, cloud applications and services for Microsoft Azure, and using the cloud platform as a backend for the Internet of Things, mobile and other applications.

Languages and development tools


Everything about .NET 2015! Methodology and use of the Visual Studio 2015 development tools and application lifecycle management. Using the current capabilities of the .NET platform and discussing its future, the C#, JavaScript and XAML languages, and development technologies.

Enterprise development and data processing


SQL Server, Big Data, Machine Learning. Working with data, business intelligence tools and the Microsoft Office platform. Building enterprise applications on the .NET platform, SharePoint and Office 365. Hybrid application development scenarios and integrating the corporate network with a cloud environment.

Game development


For the first time at DevCon! In 2015 we are adding several topics dedicated to building games with popular tools: DirectX, Unity and others. We will tell you more about the tools, approaches and experience of game development. Watch for the announcements!

And more…


A few more topics that we are thinking about and want to cover at the conference: universal apps, Xamarin, Apache Cordova, cross-platform development, IoT, cloud service security, hybrid infrastructure and solutions for developers, Internet Explorer, open source, Git, Xbox, ALM, C++, DevOps, interoperability in the cloud, business applications…

In addition, master classes await you on both days of the conference and, of course, the traditional "Coding Night" hackathon… Follow the DevCon 2015 news!

Register now!

 

Want to speak at DevCon 2015?


DevCon 2015 is not only an opportunity to gain up-to-date knowledge about Microsoft products and technologies, but also a unique chance to speak at the main technology conference on development. If you would like to share your knowledge, you can send a proposal to the organizers. Write a little about yourself and the topic you would like to present, and send it to me at vyunev@microsoft.com (Vladimir Yunev).

See you at the conference,
the DevCon 2015 team.

Useful links

 

The Roadmap for WPF

Tue, 12/02/2014 - 00:16

This post is my own unofficial translation of a post on the .NET Blog from Microsoft headquarters.
The original post is The Roadmap for WPF, 2014/11/12 7:37 AM.

Since this is my own rough translation, please leave feedback about the translation in the comments of this blog. For accurate information, please refer to the original blog post.

When we introduced WPF (.NET Framework 3.0) in 2006, the response was fantastic: enterprises, ISVs and Microsoft partners chose it as a core technology of their business for building mission-critical applications for their customers and great vertical solutions, and that momentum continues today (10% of the new projects created in Visual Studio 2013 over the last 60 days are WPF projects). WPF has a passionate and vibrant community building data-centric business apps. A recent example is a new WPF application built by our partner InterKnowlogy, which CNN producers used to compose and validate polling data and upload intermediate results while on air; the polling data was displayed on CNN's Magic Wall, whose development was supported by Microsoft's Bing Pulse team. This post covers the roadmap for the WPF platform, including the priority investment areas and the tooling improvements in the next release of Visual Studio.

Platform investment areas

Based on insights from the user research we conducted at this year's (2014) //build conference and on interviews over several months with many developers working in a wide range of markets, here are the priority investment areas for making WPF an even better platform.

Performance: Because WPF is used in large, high-performance apps (for example Visual Studio and Blend), customer feedback asks us to keep making the platform faster. Specifically, this means pushing forward on optimizing a few key scenarios, such as application startup time and the scrolling and virtualization performance of ItemsControl.

Interoperability with DirectX: The main scenario is seamless interoperability between WPF applications and the latest versions of DirectX.

Support for modern hardware: Technologies such as touch and high-DPI displays are used across today's devices. To support new hardware, it is important that existing WPF apps work well on modern desktop PCs equipped with it.

Tooling: We will keep evolving the tools in step with new platforms such as .NET and WinRT. This commitment is reflected in the tooling investments described later in this post.

Some of these investment areas carry the risk of depending on specific OS versions and similar constraints. In such cases we will either narrow the feature to what the OS provides or require the use of those OS capabilities.

Current progress

Improving the quality of WPF

We are not standing still on WPF; we will keep improving it, for example in the next release of Visual Studio and in .NET Framework 4.6.

The following are the latest fixes shipping in .NET Framework 4.6:

  • Multi-image cursor files in System.Windows.Input.Cursor
  • Transparent child windows
    (Reading the comments on the original blog, this refers to support for the transparent window style on child windows, and not to the AllowsTransparency property of the Window object – note by Arai)
  • Improved performance of the double-tap gesture
  • Double-tap text selection in the TextBox control
  • Improved reliability of stylus input in the ComboBox control

We want your feedback!

For future releases, we are investigating the reported bugs and reliability issues with the most votes on Connect.

Title – Votes
Touch events are delayed – 29
Ribbon window: the border is too thin – 18
BitmapFrame.Create uses 300 MB of reserved memory when handling a TIFF file if the Microsoft Camera Codec Pack is installed – 12

Improvements to the tooling

Tooling for WPF is one of the top requests in the user research and customer interviews; three of the top five items in the XAML tools category reflect requests for WPF support.

Visual diagnostics: The #1 item in the research, and the #2 idea submitted for XAML tooling, is the need for a UI debugger for WPF apps. We are very pleased to announce that we are building a complete suite of debugging tools for WPF apps that lets you inspect the live visual tree and change properties while debugging. The tools will also make it possible to carry changes made while debugging back into the source code.


Timeline tool: A recurring request, and #4 in the user research, is a performance diagnostics tool for WPF. We are currently developing a new diagnostics tool for WPF apps that makes it possible to troubleshoot common performance problems such as slow application startup and low frame rates. Merged with the existing memory usage and CPU usage tools, it will provide a toolset built into Visual Studio to help you build even better WPF apps.

Blend improvements: Blend for Visual Studio 2015 has been redesigned to be the best choice for creating XAML apps with great user interfaces. Blend improves the workflow with Visual Studio and has a consistent look and feel with the rest of the toolset. What's more, the new Blend is based on the same technology as Visual Studio (which includes WPF!). This addresses the shortcomings of the previous Blend and brings a better Solution Explorer and source control support. More importantly, Blend now has XAML IntelliSense and basic debugger capabilities. One important piece of this work is support for asynchronous solution loading, which is already available for large WPF solutions. It also delivers a polished experience for WPF, including in-place template editing and "Peek Definition" in the XAML editor.

Note by Arai: For details on the Blend for Visual Studio 2015 Preview, see the post "Blend for Visual Studio 2015 Preview". Along with details of XAML IntelliSense, it notes that SketchFlow will no longer be offered.

Give us more feedback

We are interested in hearing what you think about the tooling improvements in Visual Studio 2015 and the roadmap for the WPF platform. Please send us your feedback through comments on this post, e-mail, Connect or UserVoice.

 

Notes

Since this post is my own unofficial translation, please send feedback through the comments of "The Roadmap for WPF", e-mail, Connect or UserVoice. For feedback about this translation itself, please use the comments section of this post.

A hotfix for Lync Server 2010 was released today

Mon, 12/01/2014 - 23:50

Good evening, this is Yoshino from Lync support.

Following on from the previous hotfix for Lync 2010 (the client),
http://blogs.msdn.com/b/lync_support_team_blog_japan/archive/2014/11/13/lync2010.aspx

a hotfix for Lync Server 2010 has now been released.
http://support.microsoft.com/kb/3012065

Unlike the client hotfix, this one can be obtained from the download site; there is no need to open a support incident.

This release only fixes part of the conferencing-related modules, so if you are not currently experiencing any problems and have not been instructed by our support team to install it, it is fine to wait for the next regular update.
That said, LyncServerUpdateInstaller.exe contains all of the fixes released so far, so if you have not updated in quite a while, the end of the year may be a good opportunity to consider updating Lync Server 2010.

We hope you continue to enjoy a comfortable Lync experience.

Registration for the DevCon 2015 conference is now open

Mon, 12/01/2014 - 23:27
Friends, software developers and testers! We are happy to announce that registration for Microsoft's main technology conference, DevCon 2015, is now open! The fifth anniversary conference will take place on May 20–21, 2015. The format of the conference remains the same: a two-day off-site event at a nature resort just outside Moscow, and everything is already included in the ticket price: meals and hotel accommodation; access to the main programme; attendance at the master classes; participation in round tables and focus groups; participation in...(read more)

Setting ‘available time’ on a deployment

Mon, 12/01/2014 - 23:10
I’ve had customers over the years use the ‘available time’ setting on deployments. This option is disabled by default. It is a perfectly fine option to use as long as it is understood; oftentimes it isn’t. The ‘available time’ defines the specific time at which a deployment targeted to systems becomes available. I can hear you saying “thanks, Mr. Obvious”, but keep reading. To illustrate how this can impact a deployment, consider two example deployments of software updates. Deployment...(read more)

Level 2: Contrast settings and switching cameras

Mon, 12/01/2014 - 21:41

To clear Level 2, you need to be able to do Level 1.

Level 1: Displaying images captured with the camera

http://blogs.msdn.com/b/opmjapan/archive/2014/12/01/level1.aspx

Moving the sliders changes the camera's image quality, and the toggle switches between the external and internal cameras if the device has both.

First, place a camera gallery.

Next, add a slider control.

Add two sliders and place them wherever convenient.

Once they are placed, select one of the sliders.

Selecting the express view at the bottom right shows the values set on the control.

The control's name is shown at the top left.

Next, select the camera and check its settings.

Brightness and Contrast have numeric values set.

Change them so that they take their values from the sliders.

Delete the Brightness value.

Type Sli.. in its place.

Slider1 and Slider2 appear as suggestions, as shown below.

Select Slider1.

Then type !.

Several suggestions appear, starting with Value.

Select Value here.

Set Contrast to Slider2 in the same way.

Next, implement the camera switching feature.

Add a toggle control and place it.

In the example above it is named Toggle1.

Now select the camera control.

Click the white triangle next to Design to expand the menu.

You will see a setting called Camera.

Its value is 0; change it so that it takes the value from Toggle1.

Set it to Toggle1!Value, just as you did with the sliders.

Finally, set the values of Slider1 and Slider2.

Their Default value is 0; change it to 50.

If it stays at 0, the contrast and other settings will look wrong from the start.

That's it! Press F5, or press and hold the screen and touch the Preview button that appears at the top right, to run the app.

Change the sliders and touch the camera – can you confirm that photos with different image quality are added to the gallery?

$100 quad monitor workstation GPU

Mon, 12/01/2014 - 18:58

Cliff notes: get the Gigabyte GV-R725XOC-1GD

At Microsoft, we love multiple monitor setups. It is quite rare to see developer workstations with *just* one monitor; most developers have two monitors and quite a few are using 3 or 4 these days. I have 3 Dell 1920x1200 IPS panels hooked up to a Radeon R9 290, so I’m a bit spoiled.

In the past it was quite expensive to build these setups: either a specialized video card, multiple video cards, or USB video devices had to be added. Throwing an old video card into your system to get extra outputs is a great way to get a quad monitor setup on a budget, but depending on what cards you have lying around, compatibility may be an issue. Typically, AMD and NVIDIA only support multi-card setups if both cards are from the same generation. Using two cards from two different vendors or two different GPU generations is a “use at your own risk” situation; although it may work now, future device driver updates could drop support for the older card, leaving you stranded.

Wherever possible, I like to simplify and reduce complications to my builds. If you aren’t searching for blazing gaming performance and just want to hook up 4 regular monitors to get work done, you can try the Gigabyte GV-R725XOC-1GD. It is a Radeon R7 250X card that has two DisplayPort outputs in addition to HDMI and DVI all for $80 with rebate ($100 without rebate):

[Picture courtesy Gigabyte.com]

There are many video cards on the market that have 4 connectors on the back, including my expensive Radeon R9 290, but most cards only allow you to use 3 of the 4 connections simultaneously because of chipset limitations. In general if a card has two or more DisplayPort connections, then it can support 4 monitors seamlessly. But if it only has one DisplayPort connection, you’ll need an MST hub or a second video card to enable the 4th display which adds expense and complication to your system build. As far as I know, Gigabyte’s version of the Radeon R7 250X is the first card to support a native quad monitor configuration for under $100.

The GPU on this card is not screaming fast; it is just a re-packaged Radeon HD 7770 with 1GB of VRAM, a design that is now two years old. But because it is based on the “Graphics Core Next” architecture (Cape Verde XT flavor), it will support DirectX 12 according to AMD. The card maxes out at 95 watts, requires a single 6-pin PCIE power connector and uses 2 PCI slots, so it won’t fit into compact PCs. The specs recommend a 450-watt system power supply, but 380–400-watt supplies ought to work just fine. The PCI bus already supplies 75 watts, so this card only needs 20 watts more than a card without a PCIE-power connection requirement, so it’s not a big worry if you already have a quality power supply.

I like NVIDIA cards too, but the cheapest quad monitor card I found was the GTX 760, which is $220. It is a great gaming card based on the Kepler GK104 architecture and, according to NVIDIA, will also support DirectX 12. But it uses up to 170 watts and requires an 8-pin PCIE power connector, which many folks don’t have. NVIDIA offers many quad-monitor cards in their Quadro lineup which use less power, but they aren't any less expensive.

With either of these cards if you plan to use 4 HDMI/DVI monitors, you’ll have to get two "active" DisplayPort->HDMI/DVI adapters. These adapters are not needed if two of your monitors use DisplayPort natively. For more info on “active” adapters, please see DisplayPort->HDMI dongles - active vs passive re-visited.

If you know of other modern graphics adapters that have two or more DisplayPort outputs and support 4 simultaneous monitors for a budget price, please drop me a note and I’ll add to the list.

 

Scott Hanselman is coming to Japan! – free GoAzure event on January 16

Mon, 12/01/2014 - 18:55
Looking back two years, the GoAzure event was held on June 29–30, 2012 – many of you probably still remember it. Now GoAzure is back! Next month, on Friday, January 16, 2015, it will be held at Bellesalle Shibuya First, organized mainly by the members of the Japan Azure User Group (JAZUG). Date: Friday, January 16, 2015, 10:30 – 20:00. Venue: Bellesalle Shibuya First. Details and registration: session information. And what's more, Scott Hanselman – who appeared at the US Build conference and the recent Connect(); event and is well known from "Azure Friday" – will be visiting Japan for the first time and is scheduled to give his brilliant talks and demos at this GoAzure. We hope you will join GoAzure, an event where you can learn the latest about Azure and Visual Studio in a single day and also meet Scott Hanselman...(read more)

Should an application handle user credentials?

Mon, 12/01/2014 - 18:02

I think the answer is 'no' or 'only under special circumstances' (see Exceptions below) but would be interested in your comments.

By 'own credential management' I mean having its own store of user names AND passwords, plus code to challenge users for the credentials, create them, reset passwords, etc. The alternative I am recommending is for the application to use an out-of-the-box security token service (STS) like AD/ADFS, Azure AD or one of a plethora of 3rd-party solutions, and standard protocols like WS-Fed, SAML-P or OAuth to obtain user identity.
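As a concrete illustration of what delegating authentication to an STS can look like in code, here is a minimal sketch using the OWIN WS-Federation middleware (Microsoft.Owin.Security.WsFederation); the realm and metadata address are placeholders, not values from any real deployment:

using Microsoft.Owin.Security.Cookies;
using Microsoft.Owin.Security.WsFederation;
using Owin;

public partial class Startup
{
    public void ConfigureAuth(IAppBuilder app)
    {
        app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);

        // Keep the authenticated session in a cookie once the STS has signed the user in.
        app.UseCookieAuthentication(new CookieAuthenticationOptions());

        // Delegate the actual credential handling to the STS (AD FS, Azure AD, etc.).
        app.UseWsFederationAuthentication(new WsFederationAuthenticationOptions
        {
            Wtrealm = "https://contoso.example/myapp",                 // placeholder realm
            MetadataAddress = "https://sts.contoso.example/FederationMetadata/2007-06/FederationMetadata.xml" // placeholder
        });
    }
}

With something like this in place, the application never sees a password; it only receives signed claims about the authenticated user.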

Benefits

Implementing and managing your own credential store obviously involves development, maintenance and operational costs. Furthermore, handling passwords in your own code also increases the security risk of the application. It also increases the user's own risk by preventing single sign-on and requiring the user to keep separate credentials for each application. At the same time, there are a number of other reasons why delegating authentication to an STS is a better solution:

1. Enables single sign-on for all applications sharing the STS (and even those that can federate to it indirectly via another STS).

2. Makes the application more portable to other environments already using an STS.

3. For multi-tenant applications, makes it easier to add new tenants or partners and integrate their existing identity infrastructure.

4. Management of the authentication process is done in one place rather than in each application, making it easier to, for example, define new policies for when multi-factor authentication should be used.

5. Tenants and partners can do their own user management, possibly integrated with their human-resources systems, as well as their own password resets.

Myths

In working with customers in this area I have encountered a number of myths which hamper adoption of standards-based authentication. (Some of these are general, some specific to Azure AD or ADFS.)

1. "My application makes authorization decisions based on some custom data I keep in a database so I can keep the passwords there as well" - rather delegate authentication to an STS and then augment the incoming claim set with what you have in the database to enable authorization. You can then remove just the password and related credential handling from your application.

2. "My SQL is already paid for. Why should I pay for Azure AD or ADFS?" - actually, Basic services in Azure AD (which includes authentication) are free. ADFS is a role enablement in Windows Server 2012. Also see some of the costs of own solution mentioned above.

3. "I don't have AD on premises, therefore I can't use Azure AD" - not true: you can use Azure AD as a stand-alone STS.

4. "I don't have ADFS on premises, therefore I can't use Azure AD" - not true: you can enable data sync from AD to Azure AD an use the latter as an STS.

5. "My application is not running in Azure therefore I can't use Azure AD" - not true: Azure AD is an STS, exposing standard, publicly accessible authentication protocols.

Exceptions

 Here are some situations where outsourcing authentication to an STS may not work:

1. The application is using SQL with Windows authentication using the user's (rather than the application's) token (though even then, the application is still NOT handling the user credentials).

2. The application or organization has unusual, special requirements for user name or password formats that are not supported by out-of-the-box STSs.

3. The application is deployed to a locked-down environment with no access to other servers or the public internet (and therefore no access to local or public STSs).

4. The application does not involve human interaction and a simple symmetric-key solution is adequate.

 

Unit test success using Ports, Adapters, & Simulators–kata walkthrough

Mon, 12/01/2014 - 16:27

You will probably want to read my conceptual post on this topic before this one.

The kata that I’m using can be found at github here. My walkthrough is in the EricGuSolution branch, and I checked in whenever I hit a good stopping point. When you see something like:

Commit: Added RecipeManager class

you can find that commit on the branch and look at the change that I made. The checkin history is fairly coarse; if you want a more atomic view, go over to the original version of the kata, and there you’ll find pretty much a per-change view of the transformations.

Our patient

We start with a very simple Windows Forms application for managing recipes. It allows users to create/edit/delete recipes, and the user can also decide where to store their recipes. The goal is to add unit tests for it. The code is pretty tiny, but it’s pretty convoluted; there is UI code tied in with file system code, and it’s not at all clear how we can get it tested.

I’m going to be doing TDD as much as possible here, so the first thing to do is to dive right in and start writing tests, right?

The answer to that is “nope”. Okay, if you are trying to add functionality, you can use the techniques in Feathers’ excellent book, “Working Effectively with Legacy Code”, but let’s just pretend we’ve done that and are unhappy with the result, so we’re going to refactor to make it easier to test.

The first thing that I want you to do is to look at the application & code, and find all the ports, and then write down a general description of what each port does. A port is something that a program uses to interface with an external dependency. Go do that, write them down, and then come back.

The Ports

I identified three ports in the system:

  1. A port that loads/saves/lists/deletes recipes
  2. A port that loads/saves the location of the recipes
  3. A port that handles all the interactions with the user (ie “UI”)

You could conceivably break some of these up; perhaps the UI port that deals with recipes is different than the one that deals with the recipe storage directory. We’ll see what happens there later on.

If you wanted, you could go to the next level of detail and write out the details of the interface of each port, but I find it easier to pull that out of the code as I work.

How do I do this without breaking things?

That’s a really good question. There are a number of techniques that will reduce the chance of that happening:

  1. If your language has a refactoring tool available, use it. This will drastically reduce the number of bugs that you introduce. I’m working in C#, so I’m going to be using Resharper.
  2. Run existing tests (integrated tests, other automated tests, manual tests) to verify that things still work.
  3. Write pinning tests around the code you are going to change.
  4. Work in small chunks, and test often.
  5. Be very careful. My favorite method of being very careful is to pair with somebody, and I would prefer to do it even if I have pretty good tests.

Wherever possible, I used resharper to do the transformations.

Create an adapter

An adapter is an implementation of a port. I’m going to do the recipe one first. My goal here is to take all the code that deals with these operations and get it in one place. Reading through the code in Form1.cs, I see that there is the LoadRecipes() method. That seems like something our port should be able to do. It has the following code:

private void LoadRecipes()
{
    string directory = GetRecipeDirectory();
    DirectoryInfo directoryInfo = new DirectoryInfo(directory);
    directoryInfo.Create();

    m_recipes = directoryInfo.GetFiles("*")
        .Select(fileInfo => new Recipe
            {
                Name = fileInfo.Name,
                Size = fileInfo.Length,
                Text = File.ReadAllText(fileInfo.FullName)
            }).ToList();

    PopulateList();
}

I see three things going on here. First, we get a string from another method, then we do some of our processing, then we call the “PopulateList()” method. The first and the last thing don’t really have anything to do with the concept of dealing with recipes, so I’ll extract the middle part out into a separate method (named “LoadRecipesPort()” because I couldn’t come up with a better name for it).

private void LoadRecipes()
{
    string directory = GetRecipeDirectory();

    m_recipes = LoadRecipesPort(directory);

    PopulateList();
}

private static List<Recipe> LoadRecipesPort(string directory)
{
    DirectoryInfo directoryInfo = new DirectoryInfo(directory);
    directoryInfo.Create();

    return directoryInfo.GetFiles("*")
        .Select(fileInfo => new Recipe
            {
                Name = fileInfo.Name,
                Size = fileInfo.Length,
                Text = File.ReadAllText(fileInfo.FullName)
            })
        .ToList();
}

Note that the extracted method is static; that verifies that it doesn’t have any dependencies on anything in the class.

I read down some more, and come across the code for deleting recipes:

private void DeleteClick(object sender, EventArgs e)
{
    foreach (RecipeListViewItem recipeListViewItem in listView1.SelectedItems)
    {
        m_recipes.Remove(recipeListViewItem.Recipe);

        string directory = GetRecipeDirectory();
        File.Delete(directory + @"\" + recipeListViewItem.Recipe.Name);
    }
    PopulateList();

    NewClick(null, null);
}

There is only one line there – the call to File.Delete(). I pull that out into a separate method:

private static void DeleteRecipe(string directory, string name)
{
    File.Delete(directory + @"\" + name);
}

Next is the code to save the recipe. I extract that out:

private static void SaveRecipe(string directory, string name, string directions)
{
    File.WriteAllText(Path.Combine(directory, name), directions);
}

That is all of the code that deals with recipes.

Commit: Extracted recipe code into static methods

<aside>

You may have noticed that there is other code in the program that deals with the file system, but I did not extract it. That is very deliberate; my goal is to extract out the implementation of a specific port. Similarly, if I had been using a database rather than a file system, I would extract only the database code that dealt with recipes.

This is how this pattern differs from a more traditional “wrapper” approach, and is hugely important, as I hope you will soon see.

</aside>

The adapter is born

I do an “extract class” refactoring and pull out the three methods into a RecipeStore class. I convert all three of them to instance methods with resharper refactorings (add a parameter of type RecipeStore to each of them, then make them non-static, plus a bit of hand-editing in the form class). I also take the directory parameter and push it into the constructor. That cleans up the code quite a bit, and I end up with the following class:

public class RecipeStore
{
    private string m_directory;

    public RecipeStore(string directory)
    {
        m_directory = directory;
    }

    public List<Recipe> Load()
    {
        DirectoryInfo directoryInfo = new DirectoryInfo(m_directory);
        directoryInfo.Create();

        return directoryInfo.GetFiles("*")
            .Select(fileInfo => new Recipe
                {
                    Name = fileInfo.Name,
                    Size = fileInfo.Length,
                    Text = File.ReadAllText(fileInfo.FullName)
                })
            .ToList();
    }

    public void Delete(string name)
    {
        File.Delete(m_directory + @"\" + name);
    }

    public void Save(string name, string directions)
    {
        File.WriteAllText(Path.Combine(m_directory, name), directions);
    }
}

Commit: RecipeStore instance class with directory in constructor

Take a look at the class, and evaluate it from a design perspective. I’m pretty happy with it; it does only one thing, and the fact that it’s storing recipes in a file system isn’t apparent from the method signature. The form code looks better as well.

Extract the port interface & write a simulator

I now have the adapter, so I can extract out the defining IRecipeStore interface.

public interface IRecipeStore
{
    List<Recipe> Load();
    void Delete(string name);
    void Save(string name, string directions);
}

I’ll add a new adapter class that implements this interface:

class RecipeStoreSimulator : IRecipeStore
{
    public List<Recipe> Load()
    {
        throw new NotImplementedException();
    }

    public void Delete(string name)
    {
        throw new NotImplementedException();
    }

    public void Save(string name, string directions)
    {
        throw new NotImplementedException();
    }
}

The simulator is going to be an in-memory implementation of the recipe store, which will make it very good for unit tests. Since it’s going to be in-memory, it doesn’t have any dependencies and therefore I can write unit tests for it. I’ll do that with TDD.
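For reference, here is a minimal in-memory sketch of what such a simulator might look like; the real one lives in the kata repository, so treat this as illustrative only:

using System.Collections.Generic;
using System.Linq;

class RecipeStoreSimulator : IRecipeStore
{
    // Recipes live only in memory, which is what makes this adapter fast and
    // dependency-free for unit tests.
    private readonly Dictionary<string, string> m_recipes = new Dictionary<string, string>();

    public List<Recipe> Load()
    {
        return m_recipes
            .Select(pair => new Recipe { Name = pair.Key, Size = pair.Value.Length, Text = pair.Value })
            .ToList();
    }

    public void Delete(string name)
    {
        m_recipes.Remove(name);
    }

    public void Save(string name, string directions)
    {
        m_recipes[name] = directions;
    }
}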

Commit: RecipeStoreSimulator with tests

It was a very simple interface, so it only took me about 15 minutes to write it. It’s not terribly robust, however; it has no error-handling at all. I now have a simulator that I can use to test any code that uses the RecipeStore abstraction. But wait a second; the tests I wrote for the simulator are really tests for the port.

If I slightly modify my tests so that they use an IRecipeStore, I can re-purpose them to work with any implementation of that port. I do that, but I start seeing failures, because the tests assume an empty recipe store. If I change the tests to clean up after themselves, it should help…
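A sketch of the shape such a shared port test can take (the exact tests in the repository will differ; this just illustrates the clean-up idea):

// Illustrative only: the same test body can be pointed at either adapter.
// Requires System.Linq and the MSTest Assert class.
private static void SaveThenLoadRoundTrips(IRecipeStore recipeStore)
{
    recipeStore.Save("Grits", "Stir");
    try
    {
        List<Recipe> recipes = recipeStore.Load();
        Assert.AreEqual("Stir", recipes.Single(recipe => recipe.Name == "Grits").Text);
    }
    finally
    {
        // Clean up so the test also passes against the file-system RecipeStore,
        // which persists its contents between runs.
        recipeStore.Delete("Grits");
    }
}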

Once I’ve done that, I can successfully run the port unit tests against the filesystem recipestore.

Commit: Unit tests set up to test RecipeStore

RecipeStoreLocator

We’ll now repeat the same pattern, this time with the code that figures out where the RecipeStore is located. I make the methods static, push them into a separate class, and turn them back into instance methods.

When I first looked at the code, I was tempted not to do this port, because the code is very specific to finding a directory, and the RecipeStore is the only thing that uses it, so I could have just put the code in the RecipeStore. After a bit of thought, I decided that “where do I store my recipes” is a separate abstraction, and therefore having a locator was a good idea.

Commit: RecipeStoreLocator class added

I create the Simulator and unit tests, but when I go to run them, I find that I’m missing something; the abstraction has no way to reset itself to the initial state because the file persists on disk. I add a ResetToDefault() method, and then it works fine.
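Based on how the locator is used later in the walkthrough, its port interface presumably ends up looking something like this; the member names other than GetRecipeDirectory() and ResetToDefault() are my guess, not taken from the repository:

// Hypothetical shape of the locator port: GetRecipeDirectory() shows up in the
// final Form1 code, ResetToDefault() is described above, and some kind of
// setter is implied by the later "change recipe store directory" commit.
public interface IRecipeStoreLocator
{
    string GetRecipeDirectory();
    void SaveRecipeDirectory(string directory);
    void ResetToDefault();
}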

Commit: Finished RecipeStoreLocator + simulator + unit tests

Status check & on to the UI

Let’s take a minute and see what we’ve accomplished. We’ve created two new port abstractions and pulled some messy code out of the form class, but we haven’t gotten much closer to be able to test the code in the form class itself. For example, when we call LoadRecipes(), we should get the recipes from the store, and then push them out into the UI. How can we test that code?

Let’s try the same sort of transformations on the UI dependency. We’ll start with PopulateList():

private void PopulateList()
{
    listView1.Items.Clear();

    foreach (Recipe recipe in m_recipes)
    {
        listView1.Items.Add(new RecipeListViewItem(recipe));
    }
}

The first change is to make this into a static method. That will require me to pass the listview and the recipe list as parameters:

private static void PopulateList(ListView listView, List<Recipe> recipes)
{
    listView.Items.Clear();

    foreach (Recipe recipe in recipes)
    {
        listView.Items.Add(new RecipeListViewItem(recipe));
    }
}

And I’ll pull it out into a new class:

public class RecipeManagerUI
{
    private ListView m_listView;

    public RecipeManagerUI(ListView listView)
    {
        m_listView = listView;
    }

    public void PopulateList(List<Recipe> recipes)
    {
        m_listView.Items.Clear();

        foreach (Recipe recipe in recipes)
        {
            m_listView.Items.Add(new RecipeListViewItem(recipe));
        }
    }
}

This leaves the following implementation for LoadRecipes():

private void LoadRecipes()
{
    m_recipes = m_recipeStore.Load();
    m_recipeManagerUI.PopulateList(m_recipes);
}

That looks like a testable bit of code; it calls load and then calls PopulateList with the result. I extract it into a RecipeManager class (not sure about that name right now), make it an instance method, add a constructor to take the recipe store and ui instances, and pull the list of recipes into this class as well. I end up with the following:

public class RecipeManager
{
    private RecipeStore m_recipeStore;
    private RecipeManagerUI m_recipeManagerUi;
    private List<Recipe> m_recipes;

    public RecipeManager(RecipeStore recipeStore, RecipeManagerUI recipeManagerUI)
    {
        m_recipeManagerUi = recipeManagerUI;
        m_recipeStore = recipeStore;
    }

    public List<Recipe> Recipes
    {
        get { return m_recipes; }
    }

    public void LoadRecipes()
    {
        m_recipes = m_recipeStore.Load();
        m_recipeManagerUi.PopulateList(m_recipes);
    }
}

Commit: Added RecipeManager class

Now to test LoadRecipes, I want to write:

[TestMethod()]
public void when_I_call_LoadRecipes_with_two_recipes_in_the_store__it_sends_them_to_the_UI_class()
{
    RecipeStoreSimulator recipeStore = new RecipeStoreSimulator();
    recipeStore.Save("Grits", "Stir");
    recipeStore.Save("Bacon", "Fry");

    RecipeManagerUISimulator recipeManagerUI = new RecipeManagerUISimulator();

    RecipeManager recipeManager = new RecipeManager(recipeStore, recipeManagerUI);

    recipeManager.LoadRecipes();

    Assert.AreEqual(2, recipeManagerUI.Recipes.Count);
    RecipeStoreSimulatorTests.ValidateRecipe(recipeManagerUI.Recipes, 0, "Grits", "Stir");
    RecipeStoreSimulatorTests.ValidateRecipe(recipeManagerUI.Recipes, 1, "Bacon", "Fry");
}

I don’t have the appropriate UI simulator, so I’ll extract the interface and write the simulator, including some unit tests.

Commit: First full test in RecipeManager

In the tests, I need to verify that RecipeManager.LoadRecipes() passes the recipes off to the UI, which means the simulator needs to support a property that isn’t needed by the new class. I try to avoid these whenever possible, but when I have to use them, I name them to be clear that they are something outside of the port interface. In this case, I called it SimulatorRecipes.

We now have a bit of logic that was untested in the form class in a new class that is tested.

UI Events

Looking at the rest of the methods in the form class, they all happen when the user does something. That means we’re going to have to get a bit more complicated. The basic pattern is that we will put an event on our UI port, and it will either hook to the actual event in the real UI class, or to a SimulateClick() method in the simulator.

Let’s start with the simplest one. NewClick() looks like this:

private void NewClick(object sender, EventArgs e)
{
    textBoxName.Text = "";
    textBoxObjectData.Text = "";
}

To move this into the RecipeManager class, I’ll need to add abstractions to the UI class for the click and for the two textbox values.

I start by pulling all of the UI event hookup code out of the InitializeComponent() method and into the Form1 constructor. Then, I added a NewClick event to the UI port interface and both adapters that implement the interface. It now looks like this:

public interface IRecipeManagerUI
{
    void PopulateList(List<Recipe> recipes);

    event Action NewClick;

    string RecipeName { get; set; }
    string RecipeDirections { get; set; }
}

And, I’ll go off and implement these in the UI class, the simulator class, and the simulator test class.
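On the simulator side, that implementation is typically just the event plus a SimulateNewClick() helper that raises it; a minimal sketch (the real simulator also implements PopulateList(), which is omitted here):

using System;

// Sketch of the simulator's event support: production code wires NewClick to the
// real button's Click event, while tests call SimulateNewClick() directly.
partial class RecipeManagerUISimulator
{
    public event Action NewClick;

    public string RecipeName { get; set; }
    public string RecipeDirections { get; set; }

    public void SimulateNewClick()
    {
        if (NewClick != null)
        {
            NewClick();
        }
    }
}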

<aside>

I’m not sure that NewClick is the best name for the event, because “click” seems bound to the UI paradigm. Perhaps NewRecipe would be a better name…

</aside>

Commit: Fixed code to test clicking the new button

Note that I didn’t write tests for the simulator code in this case. Because of the nature of the UI class, I can’t run tests across the two implementations to make sure they are the same (I could maybe do so if I did some other kind of verification, but I’m not sure it’s worth it). This code mostly fits in the “if it works at all, it’s going to work” category, so I don’t feel that strongly about testing it.

The test ends up looking like this:

[TestMethod()]
public void when_I_click_on_new__it_clears_the_name_and_directions()
{
    RecipeManagerUISimulator recipeManagerUI = new RecipeManagerUISimulator();
    RecipeManager recipeManager = new RecipeManager(null, recipeManagerUI);

    recipeManagerUI.RecipeName = "Grits";
    recipeManagerUI.RecipeDirections = "Stir";

    Assert.AreEqual("Grits", recipeManagerUI.RecipeName);
    Assert.AreEqual("Stir", recipeManagerUI.RecipeDirections);

    recipeManagerUI.SimulateNewClick();

    Assert.AreEqual("", recipeManagerUI.RecipeName);
    Assert.AreEqual("", recipeManagerUI.RecipeDirections);
}

That works. We’ll keep going with the same approach – choose an event handler, and go from there. We’re going to do SaveClick() this time:

private void SaveClick(object sender, EventArgs e)
{
    m_recipeStore.Save(textBoxName.Text, textBoxObjectData.Text);
    m_recipeManager.LoadRecipes();
}

We’ll try writing the test first:

[TestMethod()]
public void when_I_click_on_save__it_stores_the_recipe_to_the_store_and_updates_the_display()
{
    RecipeStoreSimulator recipeStore = new RecipeStoreSimulator();
    RecipeManagerUISimulator recipeManagerUI = new RecipeManagerUISimulator();

    RecipeManager recipeManager = new RecipeManager(recipeStore, recipeManagerUI);

    recipeManagerUI.RecipeName = "Grits";
    recipeManagerUI.RecipeDirections = "Stir";

    recipeManagerUI.SimulateSaveClick();

    var recipes = recipeStore.Load();
    RecipeStoreSimulatorTests.ValidateRecipe(recipes, 0, "Grits", "Stir");

    recipes = recipeManagerUI.SimulatorRecipes;
    RecipeStoreSimulatorTests.ValidateRecipe(recipes, 0, "Grits", "Stir");
}

That was simple; all I had to do was stub out the SimulateSaveClick() method. The test fails, of course. About 10 minutes of work, and it passes, and the real UI works as well.
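The post doesn’t show the wiring inside RecipeManager, but a plausible sketch of the handler behind the save event looks something like this (the handler name and SaveClick event are my guesses):

// Subscribed to the UI port's SaveClick event; it reuses LoadRecipes()
// to refresh the list after saving, just like the old form handler did.
private void OnSaveClick()
{
    m_recipeStore.Save(m_recipeManagerUi.RecipeName, m_recipeManagerUi.RecipeDirections);
    LoadRecipes();
}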

Commit: Added Save
Commit: Added in Selecting an item in the UI
Commit: Added support for deleting recipes

Supporting a change of recipe directory required the recipe store to understand that concept, so I added a new RecipeDirectory property and implemented it in both IRecipeStore adapters.
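Treating the exact members as an assumption on my part, the store port after this change plausibly looks like this (Save and Load match the calls seen earlier; RecipeDirectory is the new property, renamed to RecipeLocation in the cleanup below):

public interface IRecipeStore
{
    // Where recipes are persisted; settable so the user can change it from the UI.
    string RecipeDirectory { get; set; }

    void Save(string name, string directions);
    List<Recipe> Load();
}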

Commit: Added support to change recipe store directory

All done

Let’s look at what is left in the form class:

public partial class Form1 : Form
{
    private RecipeManager m_recipeManager;

    public Form1()
    {
        InitializeComponent();

        var recipeManagerUI = new RecipeManagerUI(
            listView1,
            buttonNew,
            buttonSave,
            buttonDelete,
            buttonSaveRecipeDirectory,
            textBoxName,
            textBoxObjectData,
            textBoxRecipeDirectory);

        var recipeStoreLocator = new RecipeStoreLocator();
        var recipeStore = new RecipeStore(recipeStoreLocator.GetRecipeDirectory());
        m_recipeManager = new RecipeManager(recipeStore, recipeStoreLocator, recipeManagerUI);
        m_recipeManager.Initialize();
    }
}

This is the entirety of the form class; it just creates the RecipeManagerUI class (which encapsulates everything related to the UI), the RecipeStoreLocator class, the RecipeStore class, and finally, the RecipeManager class. It then calls Initialize() on the manager, and, at that point, it’s up and running.
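Initialize() isn’t shown, but a reasonable guess – the event names are assumptions matching the buttons on the form – is that it hooks the port events and does the initial load, keeping the form itself free of logic:

// Inside RecipeManager; OnNewClick clears the textboxes via the port, and the
// delete/select/directory hookups are elided here.
public void Initialize()
{
    m_recipeManagerUi.NewClick += OnNewClick;
    m_recipeManagerUi.SaveClick += OnSaveClick;

    LoadRecipes();
}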

Looking through the code, I did a little cleanup:

  1. I renamed RecipeDirectory to RecipeLocation, because that’s a more abstract description.
  2. I renamed Recipe.Text to Recipe.Directions, because it has been buggin’ me…
  3. Added in testing for Recipe.Size

Commit: Cleanup

Unit Test Success using Ports, Adapters, and Simulators

Mon, 12/01/2014 - 15:38

There is a very cool pattern called Port/Adapter/Simulator that has changed my perspective about unit testing classes with external dependencies significantly and improved the code that I’ve written quite a bit. I’ve talked obliquely about it and even wrote a kata about it, but I’ve never sat down and written something that better defines the whole approach, so I thought it was worth a post. Or two – the next one will be a walkthrough of an updated kata to show how to transform a very simple application into this pattern.

I’m going to assume that you are already “down” with unit testing – that you see what the benefits are – but that you perhaps are finding it to be more work than you would like and perhaps the benefits haven’t been quite what you hoped.

Ports and Adapters

The Ports and Adapters pattern was originally described by Alistair Cockburn in a topic he called “Hexagonal Architecture”. I highly recommend you go and read his explanation, and then come back.

I take that back, I just went and reread it. I recommend you read this post and then go back and read what he wrote.

I have pulled two main takeaways from the hexagonal architecture:

The first is the “hexagonal” part, and the takeaway is that the way we have been drawing architectural diagrams for years (user with a UI on top, app code in between (sometimes in several layers), database and other external dependencies at the bottom) doesn’t really make sense. We should instead delineate between “inside the application” and “outside of the application”. Each thing that is outside of the application should be abstracted into what he calls a port (which you can just think of as an interface between you and the external thing). The “hexagonal” thing is just a way of drawing things that emphasizes the inside/outside distinction.

Dealing with externals is a common problem when we are trying to write unit tests; the external dependency (say, the .NET File class, for example) is not designed with unit testing in mind, so we add a layer of abstraction (wrapping it in a class of our own), and then it is testable.

This doesn’t seem that groundbreaking; I’ve been taking all the code related to a specific dependency – say, a database – and putting it into a single class for years. And, if that were all he was advocating, it wouldn’t be very exciting.

The second takeaway is the idea that our abstractions should be based on what we are trying to do in the application (the inside view) rather than what is happening outside the application. The inside view is based on what we are trying to do, not the code that we will write to do it.

Another way of saying this is “write the interface that *you wish* were available for the application to use”.  In other words, what is the simple and straightforward interface that would make developing the application code simple and fun?

Here’s an example. Let’s assume I have a text editor, and it stores documents and preferences as files. Somewhere in my code, I have code that accesses the file system to perform these operations. If I wanted to encapsulate the file system operations in one place so that I can write unit tests, I might write the following:

class FileSystem
{
    public void CreateDirectory(string directory) { }
    public string ReadTextFile(string filename) { }
    public void WriteTextFile(string filename, string contents) { }
    public IEnumerable<string> GetFiles(string directory) { }
    public bool FileExists(string filename) { }
}

And I’ve done pretty well; I can extract an interface from that, and then do a mock/fake/whatever to write tests of the code that uses the file system. All is good, right? I used to think the answer was “yes”, but it turns out the answer is “meh, it’s okay, but it could be a lot better”.

Cockburn’s point is that I’ve done a crappy job of encapsulating; I have a bit of isolation from the file system, but the way that I relate to the code is inherently based on the filesystem model; I have directories and files, and I do things like reading and writing files. Why should the concept of loading or saving a document be tied to this thing we call filesystem? It’s only tied that way because of an accident of implementation.

To look at it another way, ask yourself how hard it would be to modify the code that uses FileSystem to use a database, or the cloud? It would be a pretty significant work item. That also means that my encapsulation is bad.

What we are seeing – and this is something Cockburn notes in his discussion – is that details from the implementation are leaking into our application. Instead of treating the dependency technology as a trivial choice that we might change in the future, we are baking it into the application. I’m pretty sure that somewhere in our application code we’ll need to know file system specifics such as how to parse path specifications, what valid filename characters are, etc.

A better approach

Imagine that we were thinking about saving and loading documents in the abstract and had no implementation in mind. We might define the interface (“port” in Cockburn’s lingo) as follows:

public interface IDocumentStore
{
    void Save(DocumentName documentName, Document document);
    Document Load(DocumentName documentName);
    bool DoesDocumentExist(DocumentName documentName);
    IEnumerable<DocumentName> GetDocumentNames();
}

This is a very simple interface – it doesn’t need to do very much because we don’t need it to. It is also written fully using the abstractions of the application – Document and DocumentName instead of string, which makes it easier to use. It will be easy to write unit tests for the code that uses the document store.

Once we have this defined, we can write a DocumentStoreFile class (known as an “adapter” because it adapts the application’s view of the world to the underlying external dependency).
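As a rough sketch – not the code from the post – such a file-backed adapter might look like the following, assuming Document wraps a Text string and DocumentName wraps a Name string, and ignoring error handling (requires System.IO, System.Linq and System.Collections.Generic):

public class DocumentStoreFile : IDocumentStore
{
    private readonly string m_directory;

    public DocumentStoreFile(string directory)
    {
        m_directory = directory;
        Directory.CreateDirectory(m_directory);
    }

    public void Save(DocumentName documentName, Document document)
    {
        File.WriteAllText(GetPath(documentName), document.Text);
    }

    public Document Load(DocumentName documentName)
    {
        return new Document(File.ReadAllText(GetPath(documentName)));
    }

    public bool DoesDocumentExist(DocumentName documentName)
    {
        return File.Exists(GetPath(documentName));
    }

    public IEnumerable<DocumentName> GetDocumentNames()
    {
        // One document per file; the file name doubles as the document name.
        return Directory.GetFiles(m_directory)
                        .Select(path => new DocumentName(Path.GetFileName(path)));
    }

    private string GetPath(DocumentName documentName)
    {
        return Path.Combine(m_directory, documentName.Name);
    }
}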

Also note that this abstraction is just what is required for dealing with documents; the abstraction for loading/saving preferences is a different abstraction, despite the fact that it also uses the file system. This is another way this pattern differs from a simple wrapper.

(I should note here that this is not the typical flow; typically you have code that is tied to a concrete dependency, and you refactor it to something like this. See the next post for more information on how to do that.)

At this point, it’s all unicorns and rainbows, right?

Not quite

Our application code and tests are simpler now – and that’s a great thing - but that’s because we pushed the complexity down into the adapter. We should test that code, but we can’t test that code because it is talking with the non-testable file system. More complex + untestable doesn’t make me happy, but I’m not quite sure how to deal with that right now, so let’s ignore it for the moment and go write some application unit tests.

A test double for IDocumentStore

Our tests will need some sort of test double for code that uses the IDocumentStore interface. We could write a bunch of mocks (either with a mock library or by hand), but there’s a better option.

We can write a Simulator for the IDocumentStore interface, which is simply an adapter that is designed to be great for writing unit tests. It is typically an in-memory implementation, so it could be named DocumentStoreMemory or DocumentStoreSimulator; either would be fine (I’ve tended to use “Simulator”, but I think that “Memory” is probably a better choice).

Nicely, because it is backed by memory, it doesn’t have any external dependencies that we need to mock, so we can write a great set of unit tests for it (I would write them with TDD, obviously) that will define the behavior exactly the way the application wants it.
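A sketch of such a simulator, under the same assumptions about Document and DocumentName as before, is little more than a dictionary:

public class DocumentStoreSimulator : IDocumentStore
{
    private readonly Dictionary<string, Document> m_documents = new Dictionary<string, Document>();

    public void Save(DocumentName documentName, Document document)
    {
        m_documents[documentName.Name] = document;
    }

    public Document Load(DocumentName documentName)
    {
        return m_documents[documentName.Name];
    }

    public bool DoesDocumentExist(DocumentName documentName)
    {
        return m_documents.ContainsKey(documentName.Name);
    }

    public IEnumerable<DocumentName> GetDocumentNames()
    {
        return m_documents.Keys.Select(name => new DocumentName(name));
    }
}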

Compared to the alternative – mock code scattered through the tests – simulators are much nicer. They pull poorly-tested code out of the tests and put it into a place where we can test it well, and it’s much easier to do the test setup and verification by simply talking to the simulator. We will write a test that’s something like this:

DocumentStoreSimulator documentStore = new DocumentStoreSimulator();
DocumentManager manager = new DocumentManager(documentStore);

Document document = new Document("Sample text");
DocumentName documentName = new DocumentName("Fred");
manager.Save(documentName, document);

Assert.IsTrue(documentStore.DoesDocumentExist(documentName));
Assert.AreEqual("Sample text", documentStore.Load(documentName).Text);

Our test code uses the same abstractions as our product code, and it’s very easy to verify that the result after saving is correct.

A light bulb goes off

We’ve now written a lot of tests for our application, and things mostly work pretty well, but we keep running into annoying bugs, where the DocumentStoreFile behavior is different than the DocumentStoreMemory behavior. This is annoying to fix, and – as noted earlier – we don’t have any tests for DocumentStoreFile.

And then one day, somebody says,

These aren’t DocumentStoreMemory unit tests! These are IDocumentStore unit tests – why don’t we just run the tests against the DocumentStoreFile adapter?

We can use the simulator unit tests to verify that all adapters have the same behavior, and at the same time verify that the previously-untested DocumentStoreFile adapter works as it should.

This is where simulators really earn their keep; they give us a set of unit tests that we can use both to verify that the real adapter(s) function correctly and that all adapters behave the same way.
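One way to share the tests – a sketch of mine, not necessarily how you’d structure it – is an abstract test class with a factory method, plus one derived [TestClass] per adapter; MSTest runs the inherited tests in each derived class:

public abstract class DocumentStoreContractTests
{
    // Each adapter's test class decides which implementation to create.
    protected abstract IDocumentStore CreateDocumentStore();

    [TestMethod()]
    public void saved_document_can_be_loaded()
    {
        IDocumentStore documentStore = CreateDocumentStore();
        DocumentName documentName = new DocumentName("Fred");

        documentStore.Save(documentName, new Document("Sample text"));

        Assert.AreEqual("Sample text", documentStore.Load(documentName).Text);
    }
}

[TestClass()]
public class DocumentStoreSimulatorTests : DocumentStoreContractTests
{
    protected override IDocumentStore CreateDocumentStore()
    {
        return new DocumentStoreSimulator();
    }
}

[TestClass()]
public class DocumentStoreFileTests : DocumentStoreContractTests
{
    protected override IDocumentStore CreateDocumentStore()
    {
        return new DocumentStoreFile(Path.Combine(Path.GetTempPath(), "DocumentStoreTests"));
    }
}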

And there was much rejoicing.

In reality, it’s not quite that good initially, because you are going to miss a few things when you first write the unit tests; things like document names that are valid in one adapter but not another, error cases and how they need to be handled, etc. But, because you have a set of shared tests and they cover everything you know about the interface, you can add the newly-discovered behavior to the unit tests, and then modify the adapters so they all support it.

Oh, and you’ll probably have to write a bit of code for test cleanup, because a document stored during your unit tests will still be there on the next run if you are using the file system adapter (though not the memory adapter). These are simple changes to make.
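For the file-backed test class in the sketch above, that cleanup can be as simple as deleting the test directory after each test:

// Added to DocumentStoreFileTests; the directory matches the assumed temp path used earlier.
[TestCleanup()]
public void DeleteTestDirectory()
{
    string directory = Path.Combine(Path.GetTempPath(), "DocumentStoreTests");
    if (Directory.Exists(directory))
    {
        Directory.Delete(directory, true);
    }
}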

Other benefits

There are other benefits to this approach. The first is that adapters, once written, tend to be pretty stable, so you don’t need to be running their tests very much. Which is good, because you can’t run the tests for any of the real adapters as part of your unit test suite; you typically need to run them by hand because they use real versions of the external dependencies and require some configuration.

The second is that the adapter tests give you a great way to verify that a new version of the external dependency still works the way you expect.

The simulator is a general-purpose adapter that isn’t limited to the unit test scenario. It can also be used for demos, for integration tests, for ATDD tests; any time that you need a document store that is convenient to work with. It might even make it into product code if you need a fast document cache.

What about UI?

The approach is clearest when you apply it to a service, but it can also be applied to the UI layer. It’s not quite as cool because you generally aren’t about to reuse the simulator unit tests the same way, but it’s still a nice pattern. The next post will delve into that a bit more deeply.

Microsoft Exchange Server Integration in AX2012 R3 CU8

Mon, 12/01/2014 - 15:24

 

One of the main features being added as a part of AX R3 CU8 is integration with Microsoft Exchange Server, which is an alternative to the existing Microsoft Outlook integration with AX. This integration allows one to synchronize Exchange appointments, tasks, and contacts with AX, similar to how the Microsoft Outlook integration feature works. It will allow employees who work in a remote desktop or terminal server environment to also efficiently leverage the integration feature.

There are important administrator notes regarding downloading the Exchange Web Services DLL that can be found at: https://mbs2.microsoft.com/Knowledgebase/KBDisplay.aspx?scid=kb;EN-US;2984369

It’s important to note, as mentioned in KB2984369, that the Microsoft Exchange Web Services Managed API 2.0 needs to be downloaded to the client:

http://www.microsoft.com/en-us/download/details.aspx?id=35371

When you run the CU8 AX Client installation, the prerequisite checks will point to the 1.2 version of the Exchange Web Services, so you need to make sure you have updated to the 2.0 API.

Detailed information on how to set up integration with Exchange Server can be found on TechNet: http://msdn.microsoft.com/en-us/vsto/hh209649(v=ax.50).aspx

The administrator is responsible for entering the Exchange Server information, which can be found under Organization Administration > Setup > Microsoft Outlook or Exchange Server Integration > Microsoft Outlook or Exchange Server Parameters. The administrator can then select Exchange Server as the program to synchronize, as well as the default Exchange Server URL.

For non-administrator roles, the only difference from the Microsoft Outlook setup is that you will have to enter your Exchange Server credentials during the setup process.

Once the setup is done, you are able to synchronize your appointments, contacts, and tasks in the same manner as the Microsoft Outlook integration feature.

 

Posted on behalf of Brenda Lee from the R&D team who worked on this feature.

Cornell Note taking, OneNote and OneDrive – Note making Nirvana

Mon, 12/01/2014 - 15:03

Guest Post by Matthew O'Brien - Microsoft Expert Educator


As a learner I was always bad at taking notes and even worse at making them. I went through school and university pre-mobile computing and was limited to A4 pages. At university, if we were lucky, the lecturer might let us copy his or her overhead transparencies, and I remember in my final year a lecturer used PowerPoint and gave us slide printouts – it was amazing at the time!

I also remember being introduced to the Cornell note taking methodology and realising that having a system for taking, making and reviewing notes would really make a difference. If you haven’t been exposed to this method before, you simply rule a margin down the left hand side of the page, which creates a large “content” area on the right and a smaller “cue column” on the left, and then you rule a final margin across the bottom of the page creating a “summary” area. During the learning experience (or even a meeting!) you take notes however you would like in the content area. Any time you think of a question, or get a key point or idea, you write that down in the cue column. It’s important that you find answers to the questions one way or another, because this is your brain trying to associate information for meaning. Finally, at the end of the day, you write a summary at the bottom, and it is this you go to first when re-reading and revising your notes, going back to the cue and content areas only if you need more detailed information. This is only a really brief overview – you can find more detail in this paper and even download a OneNote template from this link. If you really want to know more, you need to read the book “How to Study in College” by Walter Pauk, the Cornell professor who first came up with the method.

Fast forward a large number of years and we now have OneNote and the Tablet PC, and OneDrive which links our phones, PCs and OneNote all together. This, in my opinion, allows a complete redefinition of note taking.

One of the advantages of working digitally on a Tablet PC is the ability to mash up photographs, videos and published documents (high fidelity content) with inking and highlighting (low fidelity content). Travis Smith, from Microsoft Education Australia, makes a great case for this in his presentation “The Pen is Mightier than the Keyboard”, making his whole presentation from within OneNote, using pan and zoom to navigate the mind-mapped content and co-creating parts of it with his audience using digital ink. It was during one of these presentations that I had a personal epiphany in which I asked myself the question – what if we took notes in this way?

I tried this the next day at a lecture I attended and have done so again at every learning activity I have attended since. I’ve probably repeated the process over 20 times, and it doesn’t matter if it’s a meeting, a lecture or a lesson – this works, especially when coupled with a modified version of the Cornell note taking method. I have found I really am now making notes, not just taking them in my old linear way.

It works like this:

Step 1

At the start of the session, I create a new OneNote page on which to make my notes. I like “graph paper” rule lines, as they help me navigate the page and write more neatly. I then zoom to the maximum extent, using the pinch gesture. This gives a reasonable amount of space, but not enough – so I draw a mark in the bottom right corner, and then OneNote allows me to zoom out again using the pinch gesture. Repeating this twice more, I end up with a really large canvas – then I delete all the previous corner marks, leaving the last. Now I have a H U G E canvas to write on!

Step 2

In the centre of the canvas, I write the central idea/topic and the presenter(s) names. This then becomes the reference point to start taking notes and making my mind or spider map. The trick now is to use zoom (with the expanding pinch gesture) to zoom into the canvas to take notes in areas around the page, taking them sequentially and linking the ideas/notes with arrows. As a rule, I try to think of myself as a satellite – when zoomed out, I see the structure of my thinking, the big ideas and linkages – so I zoom out to make the headings and associations. Once I zoom in, I want to see detail – so it is here that I put the examples, traditional “notes” and my specific thoughts on an idea, process or concept. The ability to zoom in and out really changes the note making (and reviewing) experience in a way that is not possible on paper.

Step 3

The real time saver is to start inserting digital content by whipping out your mobile phone, taking a photo of the presenter’s screen, which saves automatically to OneDrive, and then inserting this into the page in OneNote directly from OneDrive (insert>>picture>>OneDrive>>Camera Roll). Of course you can also take a photo with your Tablet PC and insert it from the local camera roll. In a webinar or online presentation, the screen clipping tool works perfectly for this (and you can even insert just the bit you want or need!). You are now able to annotate over and around this digital content, making your meaning, rather than simply note taking. The use of highlighters to create attention, and coloured pens to classify content, also assists in creating a full set of notes that are easy to navigate.

Things to note:

I still keep a “Cue” column down the left (or sometimes right) for questions, key points and ideas; and instead of the summary being at the bottom, I make it at the top right, where I would have once traditionally started taking my notes. That way, when I come to review my notes, the opening point is the “start” of the page, which is where the title block and summary are. Then it’s just a matter of using zoom and pan to renavigate my notes – zooming out to see the big picture thinking and associations, and back in to see detail.

It's been a definite process for me to learn to take and make notes this way, but I know it is making a difference to my understanding – I won’t be going back!

How the notes look zoomed out (you can see the big ideas and links):

zoomed in (for specific content and notes)

 

Guest Post By:

Matthew O'Brien - Head of Strategic Planning - Brisbane Boys College

Twitter | LinkedIn | Blog

As the Head of Strategic Planning at Brisbane Boys College, Matthew is passionate about providing opportunities to learn through technology. He has a keen interest in:

  • Use of the stylus as an interface
  • Use of data to inform (and improve) teaching practice
  • Flipped classroom (especially the use of video content)
  • Learning modalities
  • Learning analytics
  • Collaboration in the classroom

Microsoft Dynamics Marketing 2015 Update content is here!

Mon, 12/01/2014 - 14:56

Here’s where to find all the new documentation available for the Microsoft Dynamics Marketing 2015 Update. This update introduces new features such as:

  • Sales and marketing collaboration: Strengthen your marketing and sales synergies with the new Sales Collaboration Panel, which allows sellers to provide input into campaigns and targeting.
  • Manage multi-channel campaigns: Streamline campaign creation and improve segmentation with graphical email editing, A/B and split testing, integrated offers, and approval workflows.
  • Improve B2B marketing: Deepen your lead management capabilities with webinar integration and improved lead scoring, including the ability to introduce multiple lead scoring models. 
  • Enhanced marketing resource management: Gain unprecedented visibility into your marketing plan with the new Interactive Marketing Calendar and improve collaborative marketing with Lync click-to-call and webinars.
  • Gain social insights within Microsoft Dynamics Marketing: Display social information collected with Microsoft Social Listening about your brand, campaigns, and more, all within Microsoft Dynamics Marketing.
  • Additional language & geographic availability:  Microsoft Dynamics Marketing is now available in Japanese and Russian, bringing the total to 12 languages and 37 countries currently supported. Find more information in the Microsoft Dynamics Marketing Translation Guide.

For more information, see What’s new in Microsoft Dynamics Marketing 2015 Update

Start here

Microsoft Dynamics Marketing Help Center

This central information hub gives you access to key Microsoft Dynamics Marketing content sources:

Information for IT pros, administrators, implementers, and customizers

Marketing Setup & Administration - The portal for Dynamics Marketing administrators!

 

 Learn about:

Information for developers

Marketing Developer Center - The portal for Dynamics Marketing Developers!

Download Software Development Kit for Microsoft Dynamics Marketing Update 2015

 

Learn about:

Information for end users

Marketing Help & Training - The place to find end-user help and training. Videos, e-books, quick reference guides, walkthrough topics, and more.

 

Learn about:

Thank You!

Handling lost devices in Win2D

Mon, 12/01/2014 - 14:10

A reality when working with hardware accelerated graphics is that sometimes bad things can happen. Someone might remove the graphics device, upgrade the graphics driver, put the GPU into an infinite loop or upset the driver in some way. These can all cause the GPU to lose track of resources that you’ve created on it. As an app developer it is not possible to prevent any of this from happening, but, unless you like seeing bug reports about your app crashing or displaying blank screens, it is up to you to ensure your app handles it.

This can be tedious and tricky to get right!

One of the challenges we set ourselves for Win2D was to relieve app developers from the burden of lost device handling. This requirement has shaped the design of the API from the very beginning.

This series attempts to give an insight into how we went about designing this aspect of Win2D, from whiteboard to a completed solution that turned out to be incomplete to one that really was complete.

Virtualization?

One possible approach that has worked well for retained mode APIs (such as XAML) is to entirely virtualize the resource that might get lost – to the extent that the app isn’t even aware that devices can be lost. So, if you have a bitmap, then the system keeps a copy of the image in CPU memory that can be used to recreate the GPU resource if necessary. This approach doesn’t scale too well for an immediate mode API where the image naturally exists only on the GPU. We’d have to be continuously copying the image from GPU memory to CPU memory for safe keeping. Every image loaded from disk would exist in memory twice, just to support the exceptional event of a device lost.

Things can get even more hairy: what would have to happen if you created a render target and drew on it while there’s no valid device? Should the system queue up the API calls so it can replay them when the device reappears? What happens then if you call GetPixelColors()?

We thought about this for a while, and it started to sound like it’d have a very high overhead and be really complicated. So we didn’t do that.

(Aside: for some types virtualization made sense since we were already performing some kind of virtualization for other reasons – CanvasStrokeStyle, CanvasTextFormat and effects are all device independent. The GPU resources backing these are “realized” on demand as they’re used. Types that behave like this are easy to spot since they are constructed without a device.)

Instead we decided, at the lowest level, to do nothing. It doesn’t get much simpler than that! The device dependent Win2D types (CanvasSolidColorBrush, CanvasBitmap, CanvasRenderTarget, CanvasSwapChain etc.) make no attempt to handle device lost. Instances of these types are all created against a device and, if the device is lost, operations on them will fail. At this point the object is no longer useful and a new one must be created.
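To make “operations on them will fail” concrete, here is a hedged sketch of the kind of check an app can do itself; the drawing and resource-recreation helpers are placeholders I made up, while CanvasDevice.IsDeviceLost is the Win2D call for testing an exception’s HResult:

try
{
    DrawToRenderTarget(renderTarget);   // placeholder for the app's own drawing code
}
catch (Exception e)
{
    if (!canvasDevice.IsDeviceLost(e.HResult))
        throw;

    // The old device and everything created on it are now useless;
    // make a new device and recreate the bitmaps, brushes, render targets, etc.
    canvasDevice = new CanvasDevice();
    RecreateResources(canvasDevice);    // placeholder for app-specific recreation
}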

It seems that we’re back where we started. In the next part I’ll describe our first attempt to manage resource creation via CanvasControl.
