Feed aggregator

Why is Environment.CurrentDirectory bad?

MSDN Blogs - Tue, 06/16/2015 - 22:50

It is bad in general for various reasons, but it is especially bad in build tools and related areas (such as CI servers).

MSBuild could have had fine-grained parallelism, executing multiple tasks on different threads in parallel within the same process and significantly speeding up builds. However, it can't do that because many tasks rely on Environment.CurrentDirectory, which is per-process; to make sure the current directory is deterministic and well known at the time a task executes, only one task can run at a time within the same process.

Additionally, it is terrible for discoverability and ability to reason about assumptions. Relying on Environment.CurrentDirectory makes an implicit assumption that is not recorded anywhere. For instance, here’s a field in Team Build that lets you select unit-test assemblies to run:

Where is this pattern rooted? To be able to write a pattern you need to know whether it's relative (to what?) and in which directory it is rooted. It obviously can't be a full path, since on the CI server the path varies by build agent ID. After reading the source code for the build activity that calculates the files from this spec, I found this:

This code relies on the current directory being set… where? To what? It’s a nightmare scenario for maintainers.

Last but not least, Environment.CurrentDirectory is global mutable state. Worse, it is global mutable state that makes it impossible to track causality. Who set this last? Where to put a breakpoint to intercept mutation?

I’m sure there are multiple reasons I’m missing but I don’t even care to enumerate exhaustively. The above is terrifying enough to send a clear message: avoid at all costs!

But a reasonable response to that would be: what should you do instead? Use patterns like $(SolutionRoot)\bin\Release\*.Tests.dll instead of just bin\Release. Every path should be rooted (perhaps with a well-defined variable) so that it's clear where it points. Pass known variables around and avoid untraceable global mutable state from the '80s.
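To make the idea concrete, here is a minimal PowerShell sketch (the function and parameter names are hypothetical) of resolving a test-assembly pattern against an explicitly passed root instead of the process's current directory:

# Resolve a test-assembly pattern against an explicit root; nothing here consults Get-Location.
function Get-TestAssemblies {
    param(
        [Parameter(Mandatory)] [string] $SolutionRoot,   # well-defined root, passed in by the caller
        [string] $Pattern = "bin\Release\*.Tests.dll"    # pattern relative to that root
    )
    Get-ChildItem -Path (Join-Path $SolutionRoot $Pattern) -File
}

# The caller decides the root explicitly, e.g. per build agent:
Get-TestAssemblies -SolutionRoot "C:\Builds\Agent1\MySolution"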

The same goes for environment variables. Avoid them at all costs! If a build requires an environment variable to be set in order to work correctly, it has failed.

P.S. Also riddle me this. Why on earth is there a Directory.GetCurrentDirectory()?

Building Windows Server Failover Cluster on Azure IAAS VM – Part 2 (Network)

MSDN Blogs - Tue, 06/16/2015 - 22:30

Hello, cluster fans. In my previous blog, I talked about how to work around the storage limitation in order to implement Windows Server Failover Cluster on Azure IaaS VMs. Now let's discuss another important part: networking for a cluster on Azure.

Before that, you should know some basic concepts of Azure networking. Here are a few Azure terms we need to use to set up the cluster.

VIP (Virtual IP address): A public IP address that belongs to the cloud service. It also serves as the address of the Azure Load Balancer, which determines how network traffic is directed before it is routed to the VM.

DIP (Dynamic IP address): An internal IP assigned by Microsoft Azure DHCP to the VM.

Internal Load Balancer: It is configured to port-forward or load-balance traffic inside a VNET or cloud service to different VMs.

Endpoint: It associates a VIP/DIP + port combination on a VM with a port on either the Azure Load Balancer for public-facing traffic or the Internal Load Balancer for traffic inside a VNET (or cloud service).

You can refer to this blog for more details about these Azure networking terms:

OK, enough reading. Storage is ready and we know the basics of Azure networking; can we start building the cluster?

Yes! The first difference you will see is that you need to start the cluster with one node and then add the other nodes afterward. This is because the cluster name object (CNO) cannot come online: it cannot acquire a unique IP address from the Azure DHCP service. Instead, the IP address assigned to the CNO is a duplicate of the address of the node that owns the CNO. That IP fails as a duplicate and can never be brought online, which eventually causes the cluster to lose quorum because the nodes cannot properly connect to each other. To prevent the cluster from losing quorum, you start with a one-node cluster, let the CNO's IP fail, and then manually set the IP address.


CNO DEMOCLUSTER is offline because its IP Address resource has failed. The failed address is the owning VM's DIP, which is where the CNO's IP was duplicated from.

In order to fix this, we need to go into the properties of the IP Address resource and change the address to another address in the same subnet that is not currently in use.

To change the IP address, choose the Properties of the IP Address and specify the new address.

Once the address is changed, right-click the Cluster Name resource and tell it to come online.
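If you prefer PowerShell, the same fix can be sketched as follows (the resource names are the defaults, and the address is only an example; substitute an unused address from your subnet):

# Point the cluster IP Address resource at an unused static address in the subnet
# (example address; pick one that is free in your VNet), then bring the name online.
Get-ClusterResource "Cluster IP Address" |
    Set-ClusterParameter -Multiple @{ "Address" = ""; "EnableDhcp" = 0 }
Start-ClusterResource "Cluster Name"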

Then you can add more nodes to the cluster.

Another way to resolve this issue is to use the New-Cluster PowerShell cmdlet and specify a static IP address during cluster creation.

Using the above environment as an example:

New-Cluster -Name DEMOCLUSTER -Node node1,node2 -StaticAddress

Note: The static IP address that you assign to the CNO is not for network communication. Its only purpose is to bring the CNO online to satisfy the dependency requirement. Therefore, you cannot ping that IP, you cannot resolve its DNS name, and you cannot use the CNO for management, since its IP is unusable.


Now you've successfully created a cluster. Let's add a highly available role to it. For demo purposes, I'll take File Server as an example, since this is the most common role and one most of us understand.

Note: In a production environment, we do not recommend a File Server cluster in Azure because of cost and performance. Treat this example as a proof of concept.

Unlike a cluster on-premises, I recommend that you pause the other nodes and keep only one node up. This prevents the new File Server role from moving among nodes endlessly: the file server's VCO (virtual computer object) is automatically assigned a duplicate of the IP of the node that owns the VCO. That IP fails, which keeps the VCO from coming online on any node and may eventually make Failover Cluster Manager unresponsive. This is similar to the CNO scenario we just discussed.
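For example, a quick way to pause the other node from PowerShell (the node name below is just the one used in this demo):

# Pause (and drain) the second node so the new role comes online on the current node
Suspend-ClusterNode -Name "DEMOVM2" -Drain

# Resume it once the role's IP address has been fixed
Resume-ClusterNode -Name "DEMOVM2"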

Screenshots are more intuitive.

VCO DEMOFS won't come online because of the failed status of its IP address. This is expected, because the dynamic IP address duplicates the IP of the owner node.

Manually edit the IP to an unused static address in this example, and now the whole resource group is online.

But remember, that IP address is just as unusable as the CNO's IP: you can use it to bring the resource online, but it is not a real IP for network communication. If this is a File Server, none of the VMs except the owner node of this VCO can access the file share; Azure networking loops the traffic back to the node it originated from.


Now the show starts: we need to utilize a load balancer in Azure to make this IP able to communicate with other machines and achieve client-server traffic.

Load Balancer is an Azure resource that can route network traffic to different Azure VMs. The IP can be public facing (a VIP) or internal only (like a DIP). Each VM needs one or more endpoints so the Load Balancer knows where traffic needs to go. An endpoint uses two kinds of ports. The regular port is used for normal client-server communication; for example, port 445 is for SMB file sharing, port 80 is for HTTP, and port 1433 is for MSSQL. The other kind is the probe port, with a default port number of 59999. The probe port is used to find out which node is the active node hosting the VCO in the cluster. The load balancer sends probe pings over TCP port 59999 to every node in the cluster, by default every 10 seconds. When you configure a role in a cluster on Azure VMs, you need to find out which port(s) the application uses, because you will add that port to the endpoint. You then add the probe port to the same endpoint. After that, you update the parameters of the VCO's IP address to include that probe port. Finally, the load balancer performs the port forwarding and routes the traffic to the VM that owns the VCO. At the time this blog was written, all of the above settings had to be completed using PowerShell.

Note: When this blog was written, Microsoft supported only one resource group per cluster on Azure, in an Active/Passive model only. This is because the VCO's IP can only use the cloud service IP address (VIP) or the IP address of the Internal Load Balancer. This limitation is still in effect even though Azure now supports creating multiple VIP addresses in a given cloud service.

Here is a diagram of the Internal Load Balancer (ILB) in the cluster, which illustrates the above:

The application in this cluster is File Server, which is why we use port 445. The IP for the VCO is the same as the ILB's IP. There are three steps to configure this:

Step 1: Add the ILB to the Azure cloud service.

Run the following PowerShell commands on an on-premises machine that can manage your Azure subscription.

# Define variables.

$ServiceName = "demovm1-3va468p3" # the name of the cloud service that contains the VM nodes. Your cloud service name is unique; use the Azure portal or Get-AzureVM to find it.

$ILBName = "DEMOILB" # newly chosen name for the new ILB

$SubnetName = "Subnet-1" # subnet name that the VMs use in the VNet

$ILBStaticIP = "" # static IP address for the ILB in the subnet

# Add Azure ILB using the above variables.

Add-AzureInternalLoadBalancer -InternalLoadBalancerName $ILBName -SubnetName $SubnetName -ServiceName $ServiceName -StaticVNetIPAddress $ILBStaticIP

# Check the settings.

Get-AzureInternalLoadBalancer -ServiceName $ServiceName

Step 2: Configure the load balanced endpoint for each node using ILB.

Run the following PowerShell commands on an on-premises machine that can manage your Azure subscription.

# Define variables.

$VMNodes = "DEMOVM1", "DEMOVM2" # cluster node names, separated by commas. Your node names will be different.

$EndpointName = "SMB" # newly chosen name of the endpoint

$EndpointPort = "445" # public port to use for the endpoint for SMB file sharing. If the cluster is used for another purpose, e.g., HTTP, change the port number to 80.

# Add an endpoint with port 445 and probe port 59999 to each node. This will take a few minutes to complete. Pay attention to the ProbeIntervalInSeconds parameter, which controls how often the probe detects which node is active.

ForEach ($node in $VMNodes)
{
    Get-AzureVM -ServiceName $ServiceName -Name $node | Add-AzureEndpoint -Name $EndpointName -LBSetName "$EndpointName-LB" -Protocol tcp -LocalPort $EndpointPort -PublicPort $EndpointPort -ProbePort 59999 -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -InternalLoadBalancerName $ILBName -DirectServerReturn $true | Update-AzureVM
}

# Check the settings.

ForEach ($node in $VMNodes)
{
    Get-AzureVM -ServiceName $ServiceName -Name $node | Get-AzureEndpoint | Where-Object {$_.Name -eq "smb"}
}

Step 3: Update the parameters of VCO’s IP address with Probe Port.

Run the following PowerShell commands on one of the cluster nodes.

# Define variables

$ClusterNetworkName = "Cluster Network 1" # the cluster network name (Use Get-ClusterNetwork or GUI to find the name)

$IPResourceName = "IP Address" # the IP Address resource name (use Get-ClusterResource | Where-Object {$_.ResourceType -eq "IP Address"} or the GUI to find the name)

$ILBIP = "" # the IP address of the Internal Load Balancer (ILB)

# Update cluster resource parameters of VCO’s IP address to work with ILB.

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ILBIP";"ProbePort"="59999";"SubnetMask"="";"Network"="$ClusterNetworkName";"OverrideAddressMatch"=1;"EnableDhcp"=0}

You should see this window:

Take the IP Address resource offline and bring it online again. Start the clustered role.

Now you have an Internal Load Balancer working with the VCO's IP. One last task is Windows Firewall: you need to open at least port 59999 on all nodes for probe detection, or turn the firewall off. Then you should be all set. It may take about 10 seconds to establish the first connection to the VCO, or to reconnect after you fail the resource group over to another node, because of the ProbeIntervalInSeconds value we configured earlier.
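For example, a minimal sketch of opening the probe port with PowerShell on each node (the rule name is arbitrary):

# Allow inbound TCP 59999 so the Azure load balancer probe can reach this node
New-NetFirewallRule -DisplayName "Azure LB probe port 59999" -Direction Inbound -Protocol TCP -LocalPort 59999 -Action Allow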

In this example, the VCO has an internal IP. If you want to make your VCO public-facing, you can use the cloud service's IP address (VIP). The steps are similar and even easier, because you can skip Step 1: the VIP is already an Azure load balancer. You just need to add an endpoint with the regular port plus the probe port to each VM (Step 2), and then update the VCO's IP in the cluster (Step 3). Please be aware that your clustered resource group will be exposed to the internet, since the VCO has a public IP. You may want to protect it with additional security measures.
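As a rough sketch of Step 2 for the public-facing case (the values mirror the SMB example above; omitting the ILB parameter makes the endpoint load-balanced on the cloud service VIP instead):

# Add a public load-balanced endpoint (port 445 plus probe port 59999) to each node
ForEach ($node in $VMNodes)
{
    Get-AzureVM -ServiceName $ServiceName -Name $node | Add-AzureEndpoint -Name "SMB" -LBSetName "SMB-LB" -Protocol tcp -LocalPort 445 -PublicPort 445 -ProbePort 59999 -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -DirectServerReturn $true | Update-AzureVM
}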

Great! Now you've completed all the steps for building a Windows Server Failover Cluster on Azure IaaS VMs. It is a somewhat longer journey; however, you'll find it useful and worthwhile. Please leave comments if you have questions. Happy clustering!

Mario Liu

Support Escalation Engineer


Windows Server Failover Cluster on Azure IAAS VM – Part 1 (Storage)

MSDN Blogs - Tue, 06/16/2015 - 22:03

Hello, cluster fans. This is Mario Liu. I'm a Support Escalation Engineer on the Windows High Availability team in Microsoft CSS Americas. I have good news for you: starting in April 2015, Microsoft supports Windows Server Failover Cluster on Azure IaaS VMs. Here is the supportability announcement for Windows Server on Azure VMs as of this writing; the cluster feature is part of the announcement. This KB is subject to change as we make more improvements for WSFC on Azure IaaS VMs, so check the link above for the latest updates.

Today I'd like to share the main differences between deploying a WSFC on-premises and on Azure VMs. First, the VM OS must be Windows Server 2012 R2, or Windows Server 2008 R2 or Windows Server 2012 with a hotfix. Then, at a high level, the cluster feature does not change inside the VM; it is still a standard server OS feature. The challenges are outside: storage and network. Let me start with storage.

The big blocker for implementing a cluster on Azure is that Azure does not provide native shared block storage to VMs, unlike on-premises Fibre Channel SAN or iSCSI. That limitation made SQL Server AlwaysOn availability groups (AG) the primary use-case scenario in Azure, because SQL AG does not need shared storage; instead, it leverages replication at the application layer to replicate data across Azure IaaS VMs.

By now, we have more options to work around the shared-storage limitation, and that is how we can expand the scenarios beyond SQL AlwaysOn.

Option 1: Application-level replication for non-shared storage

This is the same as SQL Server AlwaysOn availability groups.


Option 2: Volume-level replication for non-shared storage

In other words, 3rd party storage replication. 

A common third-party solution is SIOS DataKeeper Cluster Edition. Microsoft and SIOS have worked together to ensure this solution is fully supported. For more details, please check SIOS's website:

Option 3: Leverage ExpressRoute to expose remote iSCSI Target shared block storage to Azure IaaS VMs

ExpressRoute is an Azure-exclusive feature. It enables you to create dedicated private connections between Azure datacenters and infrastructure on your premises, with high-throughput network connectivity to ensure that disk performance is not degraded.

One existing example is NetApp Private Storage (NPS), which exposes an iSCSI Target to Azure IaaS VMs via ExpressRoute with Equinix.




For more details about ExpressRoute, please see


Option 4 (not supported): Use an Azure VM as an iSCSI Target to provide shared storage to cluster nodes.

This uses a similar iSCSI concept as Option 3 but is much simpler: you move the iSCSI target into Azure. The reason we do not support this is mainly the performance hit. However, if you'd like to set up a cluster in Azure VMs as a proof of concept, you're welcome to do so; it is the easiest way to get shared storage. Please limit this option to development and lab purposes and don't use it in production.
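For a lab-only proof of concept, the target VM setup might look roughly like this (a sketch with hypothetical disk, target, and initiator names; the iSCSI Target Server role is available on Windows Server 2012 and later):

# On the Azure VM that will act as the iSCSI target (lab/PoC only, not supported for production)
Install-WindowsFeature FS-iSCSITarget-Server

# Create a virtual disk to expose as "shared" storage and a target the cluster nodes can connect to
New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\ClusterDisk1.vhdx" -SizeBytes 20GB
New-IscsiServerTarget -TargetName "ClusterTarget" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:demovm1.contoso.local", "IQN:iqn.1991-05.com.microsoft:demovm2.contoso.local"
Add-IscsiVirtualDiskTargetMapping -TargetName "ClusterTarget" -Path "C:\iSCSIVirtualDisks\ClusterDisk1.vhdx"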


There will be more options for presenting "shared storage" to a cluster as new scenarios emerge in the future. We'll update this blog along with the KB once new announcements become available. Once you have addressed storage, you've built the foundation of the cluster. In my next blog, I'll go through the network part. Stay tuned and enjoy clustering in Azure!

Mario Liu

Support Escalation Engineer

CSS Americas | WINDOWS | HIGH AVAILABILITY                                          








Azure PowerShell ForbiddenError: The server failed to authenticate the request. Verify the certificate is valid and is associated with this subscription.

MSDN Blogs - Tue, 06/16/2015 - 20:57

 Hi There,

You might be reading this blog because of the error below!

ForbiddenError: The server failed to authenticate the request. Verify the certificate is valid and is associated with this subscription. You might get this error when you are working with Azure PowerShell; it is one of the most common errors.

How to fix this?

It turns out to be pretty straightforward, but I spent almost three hours in the middle of the night figuring out what was wrong with the command or with the certificate. The command below clears your existing Azure profile.
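The cmdlet in question is most likely Clear-AzureProfile from the classic Azure PowerShell module (shown here as an assumption, since the original post's screenshot is not available):

# Clear the cached Azure profile (subscriptions, certificates, tokens) without prompting
Clear-AzureProfile -Force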



You can also delete the contents of the folder C:\Users\<username>\AppData\Roaming\Windows Azure Powershell manually. After that, run Add-AzureAccount again to set up a fresh profile by entering your subscription details, and then execute whatever Azure PowerShell commands you wish to run.

Hope it helps!

PS: Thanks to Wriju for sharing this tip.

Dynamics CRM Online 2015 Update 1 SDK New Features: Controlling Subgrids from Form Scripts, Part 3

MSDN Blogs - Tue, 06/16/2015 - 20:00


Continuing from the previous posts, this article introduces another feature provided in Dynamics CRM Online 2015 Update 1. The earlier posts in this series are:


Controlling Subgrids from Form Scripts, Part 1
Controlling Subgrids from Form Scripts, Part 2

This time, we introduce the ViewSelector object.

ViewSelector object

The ViewSelector object is obtained from the Grid object via the getViewSelector function. The ViewSelector object has the following functions:

- getCurrentView: Gets the current view.
- setCurrentView: Sets the view.
- isVisible: Checks whether the view selector is displayed.


Using isVisible and getCurrentView

1. Open the web resource created in the previous article.

2. Add the following new function.

function viewSelectorSample() {
    // Get the ViewSelector from the subgrid control
    var selector = Xrm.Page.getControl("Contacts").getViewSelector();
    // Check whether the view selector is displayed
    alert("isVisible: " + selector.isVisible());
    // Check the current view
    var current = selector.getCurrentView();
    alert("Entity type: " + current.entityType + "\nView id: " + + "\nView name: " +;
}

3. Save and publish the web resource.

4. Set the function on the Account form's OnLoad event. Note: remove the function that was set in the previous article.

5. Save and publish the customizations.


1. Open any account record. The following messages are displayed.

2. Since view selection is currently disabled, change the subgrid properties in the form customization so the view selector is displayed.

3. After publishing the customization, open an account record again and the following is displayed.

Using setCurrentView


1. From Settings | Customizations | Customize the System, select Entities | Contact | Views.

2. Open the view you want to set on the subgrid. In this article, the "あなたがフォローする 取引先担当者" (Contacts I Follow) view is used. Extract the view id from the URL.

3. Add a new function to the web resource from earlier.

function changeViewSample() {
    // Create a new view object
    var ContactsIFollow = {
        entityType: 1039, // SavedQuery
        id: "{00000000-0000-0000-0000-000000000000}", // placeholder: use the view id extracted from the URL in step 2
        name: "あなたがフォローする 取引先担当者"
    };
    // Set the view on the subgrid
    Xrm.Page.getControl("Contacts").getViewSelector().setCurrentView(ContactsIFollow);
}

4. Save and publish the web resource.

5. From the Account form customization, set the created function on the OnChange event of any field. In this article, the Description field is used.

6. Save the changes and publish the customizations.


1. Open any account record. The Contact subgrid is displayed with its original view.

2. Change the Description field. The Contact subgrid view is updated.


In this way, you can use events such as OnLoad to control which view a subgrid displays.

- 中村 憲一郎

