
MSDN Blogs

from ideas to solutions

Authenticode in 2015

Wed, 01/28/2015 - 12:34

Back in 2011, I wrote a post explaining why and how software developers should use Authenticode to digitally sign their applications. While the vast majority of the original post remains relevant, in today’s post, I’ll share my most recent experiences with code-signing.

Shopping for a Certificate

In the past, I signed my code using a certificate from the Comodo Certificate Authority (CA), purchased through a popular reseller named Tucows. This time, I decided to shop around a bit. My first stop was GlobalSign, but after some back and forth, I learned that they no longer sell code-signing certificates for individuals. If I wanted to get a code-signing certificate from GlobalSign, I’d need to file formal legal paperwork with the state to register a business. Since I’m not really making money on my freeware, I decided to keep looking.

My next stop was DigiCert, and it proved more fruitful. I’d heard a lot of good things about DigiCert, and they offer discounted certificates for SysDevs and free certificates for Microsoft employees and MVPs.

I had initially hoped to get an Extended Validation Code Signing certificate to ensure that my users wouldn’t encounter any temporary “Unknown Reputation” warnings from SmartScreen Application Reputation when I first switched over to signing with the new certificate. Unfortunately, it turns out that EV code-signing certificates are not available to individuals from any certificate authority, so I placed an order for a regular code-signing certificate instead.

While it’s unfortunate that some users might get some “Unknown Reputation” warnings, my software is popular enough that hopefully the reputation will build on my certificate in a few days or so. My new certificate is valid for three years, so a few days of user questions shouldn’t prove a huge burden.

Validating My Identity

In past years, validating my identity to obtain a certificate was a complicated, multi-day affair involving faxing personal documents like my passport, utility bills, and bank statements to the CA. In one case, I even had to ping the CEO of a certificate authority (we had met during my work on EV for HTTPS) to get the process unblocked.

In contrast, my experience with DigiCert was much more straightforward. I simply uploaded a copy of my driver’s license to their secure portal, along with a scan of a notarized document containing my address, driver’s license information, and certificate information. I was initially worried about the expense and complexity of finding a notary to countersign my document, but this turned out to be very easy. The UPS Store has notaries on staff and they charge just $6 per page notarized; the entire process took less than 20 minutes. I've been told that some banks offer free notary services to their customers. After supplying my identity proof, I got a quick call from the CA validators and was granted permission to request a certificate.

Generating a Certificate

The DigiCert portal offers a simple push-button interface for requesting a certificate. At first, the button didn’t work and I was reminded that, for Internet Explorer’s Certificate Enrollment ActiveX Control to work properly, I should put the site in my Trusted Sites Zone (especially because I run in Enhanced Protected Mode). Unfortunately, even after trusting the site, I got the same error message:

Fortunately, I quickly realized the problem: I use ActiveX Filtering to reduce my attack surface (and block annoyances) and this was preventing my use of the necessary ActiveX interface. After unblocking ActiveX controls

…and refreshing the page, the control loaded properly. It requested permission to generate a private key and certificate request:

And after I chose Yes, the new certificate was generated and signed by the Certificate Authority. The new certificate is automatically placed in the Current User\Personal\Certificates store which can be found by launching CertMgr.msc or by following these steps.

Double-clicking the certificate shows the Certificate UI which confirms that Windows has the matching private key:


Exporting the Certificate

My current build process uses signcode.exe from the Windows SDK to sign code; I supply my certificate in a SPC file and the private key in a PVK file. In order to generate these files, I must first export my certificate to a CER and a PFX file. This also allows me to back up my key, since the only copy in existence is in my Windows Certificate store.

Doing so is simple: just right-click the certificate and choose Export. Choose Yes, export the private key and select the Personal Information Exchange - PKCS #12 (.PFX) format. Save as mycert.pfx.

You’ll be prompted for a password to protect the file: do NOT forget your choice! You should probably back up this file somewhere safe and offline (e.g. a USB drive or CD) in case your PC ever suffers from an unrecoverable problem.

After exporting the PFX file (which contains both the certificate and the private key) you can export a plain CER file, which contains only the certificate. In the Certificate Export Wizard, choose No, do not export the private key and choose DER encoded binary X.509 (.CER) as the format. Save as mycert.cer.

Converting Certificate and Key Files

Note: These steps are not always necessary, for instance if you use signtool or the DigiCert Certificate Utility to sign your code. 

With the CER and PFX files in hand, you now need to convert the files into SPC and PVK files used by the signing tool.


Unfortunately, generating the PVK file containing the private key is a little tricky. Most of the tutorials on the web suggest you download a zip file containing a converter and run it. If the prospect of downloading an unsigned executable over HTTP and supplying it with your PFX and its password doesn’t set off your internal ZOMG no!!! alarms, perhaps you’re working in the wrong field.

So, if that approach is out, what do we do instead?

Here, OpenSSL comes to the rescue. First, convert to a PEM file:

openssl.exe pkcs12 -in mycert.pfx -nocerts -nodes -out mycert.pem

Then convert the PEM file to a PVK file:

openssl.exe rsa -inform pem -in mycert.pem -outform pvk -out mycert.pvk


Generating the SPC file from the CER is comparatively easy. Simply use the Cert2SPC.exe utility included in the Windows SDK:

cert2spc.exe mycert.cer mycert.spc

In the unlikely event that you don’t have the Windows SDK installed, you can generate the SPC using OpenSSL:

openssl.exe pkcs12 -in mycert.pfx -nokeys -out mycert.pem
openssl.exe crl2pkcs7 -nocrl -certfile mycert.pem -outform DER -out mycert.spc

You can now delete the PEM files. You may wish to backup the PFX file somewhere safe (offline), and you may want to uninstall your private key from the Windows Certificate store. Read on for more details.

Signing Files

With all files in the proper formats, you can now sign your code:

signcode -spc \src\mycert.spc -v \src\mycert.pvk -n "My App Name" -i "" -a sha1 -t <timestamp server URL> MySetup.exe

I explicitly specify -a sha1 for my hash algorithm, because the default algorithm (MD5) is NOT safe. I still support a few Windows XP users, and unfortunately XP doesn’t support SHA256 for Authenticode, even with Service Pack 3 installed. In another year or two, I will stop using sha1 and will use SHA256 instead.
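As a rough illustration of the difference between these algorithms, here is a small Python sketch (using the standard hashlib module; `file_digest` is a hypothetical helper, not part of any signing tool) that computes the digest a signature would be built over:

```python
import hashlib

def file_digest(path, algorithm="sha1"):
    """Hex digest of a file's contents; algorithm can be md5, sha1, or sha256."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # Read in chunks so large installers don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# MD5 yields a 128-bit digest (32 hex chars), SHA-1 160 bits (40 chars),
# and SHA-256 256 bits (64 chars). Digest size alone isn't the issue:
# practical collision attacks are what make MD5 unfit for signing.
```

Swapping the `algorithm` argument corresponds to the -a switch on the signcode command line.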

I always provide a timestamp URL using the -t parameter to ensure that my program’s signature will remain valid even after the signing certificate expires.

Improving Security with Hardware

As users and security software increasingly look for digital signatures, bad guys are now looking for ways to get their malware signed. Perhaps the simplest ways for them to do so are to hack into software developers’ PCs and steal their private keys, or to submit their malware to automated processes configured to sign anything they receive.

While I’ve never configured automatic signing of anything, I’ve long been worried about the threat of a bad guy stealing my private key and signing his malware with my good name.

Fortunately, it’s now relatively easy to raise the bar against attackers.

While EV-Authenticode requires the use of a hardware token for signing, even non-EV signers like me can benefit from hardware-based signing. Below is an eToken 72K security token; this one and similar products are available online at prices ranging from a few dollars to about $40.

When you buy an EV code-signing certificate, you’ll get a token with the certificate and private key installed; you don’t need to do much beyond updating the password.

In contrast, when you’re setting up a token yourself, there are a few steps; my token didn’t come with instructions, but it was pretty easy to figure out.

First, you need to install the appropriate software to use the token; in my case, it was the Aladdin Knowledge Systems eToken PKI client. Next, you need to know that the default password for a new eToken is 1234567890. Supply this password and then pick a strong new password to replace it. Your new password should be memorable (if you forget it, you’ll be in a world of hurt) and probably should be relatively easy to type, as you’ll be typing it each time you sign anything.

In the left pane, select your token:

At first, I was a bit worried about the word “Java” here—was this only for signing JAR files or something? No—it’s that the token itself is running Java for its own internal operating system.

After selecting your token at the top, click the yellow gear icon to go to Advanced View:

You can then right-click and choose Import Certificate.

Since we’ve left our certificate in the Windows Certificate Store, we’ll choose that option:

Next comes a very confusing prompt:

With fingers crossed, we click OK and get an error message:

Hrm. So we click Cancel and voila, our certificate in the Windows Certificate store appears:

My guess is that the first “Smart Card” prompt was there in case we wanted to copy a certificate and key from a different smartcard to our token. When we hit Cancel, it then just shows the Windows Certificate stores. We select the desired certificate and click OK. After supplying our token’s password, the certificate is imported.

After we’ve moved the certificate to the token, it still appears within the Windows Certificate manager (CertMgr.msc):

…until we unplug the token and hit F5. At that point, the code-signing certificate disappears:


Signing Files From the Token

Because we’re no longer going to use the private key file from disk, we obliterate the mycert.pvk file and update our command line as follows:

signcode -p "eToken Base Cryptographic Provider" -spc \src\mycert.spc -k df2852d2a58a1cc5ce82d186e0fb6eda_0b960da4-1609-44a5-bfa9-aac9caea8170 -n "My App Name" -i "" -a sha1 -t <timestamp server URL> MySetup.exe

We first specify that the signing should use the eToken Base Cryptographic Provider; you can find the list of available providers by looking in your registry under the HKLM\Software\Microsoft\Cryptography\Defaults\Provider node.

We next replace our reference to the PVK file with the name of the container containing the private key. Fortunately, the eToken software exposes the key’s Container name information directly in the UI:

Now, when we run the updated signcode command, the eToken software prompts for our password and returns the signature.

Here Be Dragons

At this point, I was pretty excited at how easy it was to use hardware to bolster security. For fun, I tried modifying my command line

signcode -spc \src\mycert.spc -k df2852d2a58a1cc5ce82d186e0fb6eda_0b960da4-1609-44a5-bfa9-aac9caea8170 -n "My App Name" -i "" -a sha1 -t <timestamp server URL> MySetup.exe

…to omit the provider directive, and signing succeeded. And it didn’t prompt me for a password.

I had a bad feeling about this.

I unplugged my eToken from the PC and ran the command again.

And it succeeded, without either a password or the token.

What the what?!?

Fortunately, I have spent a few weeks banging my head against the wall with problems with Windows Certificate Key storage in the past, and I had a theory about what was going on—perhaps the process of importing the certificate to the token did remove the certificate from the Windows Storage, but did not properly blow away the private key? (That mistake recently got a bit of press because one high-profile piece of ransomware also left a copy of its RSA key locally.)

With this hunch, I searched my RSA keys folder for any files containing the key container name:

…and I got exactly one hit. After I deleted the key container file, signcode started behaving as expected: I could only sign the file using the eToken Base Cryptographic Provider and only when the token was inserted and the password supplied.
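A search like that can be sketched in a few lines of Python. The helper below is hypothetical, and since the on-disk format of the key-container files (which live under a folder such as %APPDATA%\Microsoft\Crypto\RSA on Windows) isn't documented here, it checks for the container name in both ASCII and UTF-16-LE:

```python
import os

def find_files_containing(root, needle):
    """Return paths under `root` whose raw bytes contain `needle`."""
    # Check both encodings, since we're not assuming how the container
    # name is stored inside the key file.
    patterns = [needle.encode("ascii"), needle.encode("utf-16-le")]
    hits = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                data = f.read()
            if any(p in data for p in patterns):
                hits.append(path)
    return hits
```

Running it against the RSA keys folder with the container name from the eToken UI would surface any leftover key file.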

After having deleted my private key files (both the \Crypto\RSA\ file and the PVK file) from my hard disk, I ran cipher /w:C:\ to help ensure that the key files could not be recovered in the future.

While I feel pretty good about my new level of security, there’s no question that my private key would be more secure if I had been able to obtain an EV-Certificate token with a non-exportable key pre-installed.

Secure all the things!

-Eric Lawrence
MVP - Internet Explorer

Auto-property initializers

Wed, 01/28/2015 - 12:23

At first, auto-property initializers don’t sound very interesting at all, but wait…


.NET Core Open Source Update

Wed, 01/28/2015 - 11:39

Wow. Just wow. I don’t know of a better way to describe my feelings right now. Open source is probably one of the most energizing projects our team has been working on. It’s been a blast so far, and the stream of enthusiastic contributors and interactions doesn’t seem likely to stop any time soon.

In this post, I want to give you a long overdue update on where we are and the interesting changes we made, and give an overview of what’s coming next. Read to the end; this is a post you don’t want to miss.

A good problem to have: Too many forks to display

In the first update on open source, I hinted at the fact that we have several teams at Microsoft whose focus is on collecting telemetry. The reason we invest so much in this space is the shift from multi-year release cycles to quasi real-time delivery. For services, continuous deployment is a very common paradigm today. As a result, the industry has come up with various tools and metrics to measure the health of services. Why is this necessary? Because you need to know when a problem is about to happen, not when it has already happened. You want to make small adjustments over time instead of drastic changes every once in a while, because it’s much less risky.

Open source is no different from that perspective. We want to know how we’re doing and somewhat predict how this might change over time by recognizing trends. It’s interesting to point out that the metrics aren’t used by management to evaluate my team or even individuals. We use the metrics to evaluate ourselves. (In fact, I believe that evaluating engineers with metrics is a game you can’t win – engineers are smart little buggers that always find a way to game the system).

One metric totally made my day. When I browsed the graphs of the corefx repo, GitHub displayed the following:

Indeed, we have more than 1,000 forks! Of course, this is a total vanity metric and not indicative of the true number of engagements we have. But we’re still totally humbled by the massive amount of interest we see from the community. And we’re also a tiny bit proud.

The total number of pull requests is also pretty high. In total, we’re approaching 250 pull requests since last November (which includes contributors from both the community and Microsoft):

(In case you wonder, I also have a hypothesis about what the plateau means.)

We’re also thrilled to learn that we’re already outnumbered by the community: the number of community contributions is higher than the number of contributions from us. That’s the reason why open source scales so well – the community acts as a strong multiplication force.

Real-time communication

One of the reasons why we decided to open source .NET Core was the ability to build and leverage a stronger ecosystem. Specifically, this is what I wrote last year:

To us, open sourcing the stack also means we’re able to engage with customers in real-time. Of course, not every customer wants to interact with us that closely. But the ones who do make the stack better for all of us, because they provide us with early & steady feedback.

Of course, the real-time aspect cuts both ways. It’s great if we hear from you in real-time. But how are we responding? If you ever filed a bug on Connect, you may not have had the impression that real-time is a concept the people outside of the Skype organization have heard of. No offense to my fellow Microsofties; I’m guilty too. It’s incredibly difficult to respond in a timely fashion if the worlds of the product team and their active customers are separated by several years.

In order to understand how responsive we are, we collect these two metrics:

  • How quickly do we respond to an issue? This is the time it takes us to add the first comment.

  • How quickly do we close an issue? This is the time it takes us to close an issue, regardless of whether it was addressed or discarded.

Here are the two graphs:

So far these look pretty good, and are in the realm of real-time collaboration. But there is also some room for improvement: getting a first response shouldn’t take more than a week. We’ll try to improve in this area.
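The two metrics above can be sketched in a few lines; the field names and data shape here are my own assumptions for illustration, not the team's actual tooling:

```python
from datetime import datetime
from statistics import median

def response_metrics(issues):
    """Median time-to-first-response and time-to-close, in days.

    `issues` is a list of dicts with ISO dates: 'opened', 'first_comment',
    and 'closed' (None if still open). The schema is hypothetical.
    """
    def days(a, b):
        return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).days

    # First metric: time until the first comment is added.
    first_response = [days(i["opened"], i["first_comment"])
                      for i in issues if i.get("first_comment")]
    # Second metric: time until the issue is closed, addressed or not.
    time_to_close = [days(i["opened"], i["closed"])
                     for i in issues if i.get("closed")]
    return median(first_response), median(time_to_close)
```

With real issue data, this would reproduce the numbers behind the two graphs.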

Open development process

Part of the reason I share these metrics is to underline that we’re fully committed to an open development process.

But establishing an open development process is more than just sharing metrics and writing blog posts. It’s about rethinking the engineering process for an open source world. In a Channel 9 interview we said that we want to be mindful of the new opportunities that open source brings and not simply expose our existing engineering processes. The reason isn’t so much that we fear giving anything away; it’s that our existing processes were geared for multi-year release cycles, so not all of them make sense for open source and agile delivery. To get there, we generally want to start with less process and only add more if necessary.

This requires adapting existing processes and adding some new ones. Let me give you a quick update on which processes we currently have and what tweaks we’ve made.

Code reviews

To me, this is by far the most valuable part of collaborative software engineering. Realizing how many mistakes you can prevent by just letting somebody else see your code is a very liberating experience. It’s also the easiest way to spread knowledge across the team. If you haven’t done code reviews, you should start immediately. Once you do, you’ll never go back, I promise.

If you look at our GitHub pull requests, you’ll find that they aren’t just from the community – we also leverage pull requests for performing our code reviews. In fact, for the parts of .NET Core that are already on GitHub, there are no internal code reviews – all code reviews happen fully on GitHub, in public.

This has many advantages:

  • Sharing expectations. You can see what feedback we provide to our peers, and hence what we expect contributors to follow, too.

  • Community participation. Anybody can comment on our pull requests, which enables both you and us to benefit from the feedback of the entire community.

  • Our pull request metric looks better. Just kidding. A nice side effect of using pull requests for code reviews is that everything is in one place. Tooling-wise, there is no difference between me reviewing my coworker’s code and reviewing a community pull request. In other words, doing code reviews in public makes our lives easier, not harder.

API reviews

My team spends a lot of time on API design. This includes making sure an API is very usable, leverages established patterns, can be meaningfully versioned, and is compatible with the previous version.

The way we’ve approached API design is as follows:

  • Guidance. We’ve documented what constitutes good .NET API design. This information is available publicly as well, via the excellent book Framework Design Guidelines, written by our architect Krzysztof Cwalina. A super-tiny digest is available on our wiki.

  • Static analysis. We’ve invested in static analysis (formerly known as FxCop) to find common violations, such as incorrect naming or not following established patterns.

  • API reviews. On top of that, a board of API review experts reviews every single API that goes out. We usually don’t do this on a daily basis, because that wouldn’t be efficient. Instead, we review the general API design once a prototype or proposal is available from the team building the API. Depending on the complexity, we sometimes review additional iterations. For example, we reviewed the Roslyn APIs many, many times because the number of APIs and concepts is quite large.

We’ve found this process to be invaluable because the guidelines themselves are also evolving. For example, when we add new language features and patterns it’s important to come up with a set of good practices that eventually get codified into guidelines. However, it’s rare that the correct guidelines are known on day one; in most cases guidelines are formed based on experience. It’s super helpful to have a somewhat smaller group that is involved in a large number of API reviews because this focuses the attention and allows those folks to discover similarities and patterns.

With open source, we thought hard about how to incorporate API reviews into an open development process. We published a proposal on GitHub and, based on your feedback, put it into production. The current API review process is now documented on our wiki.

But having a documented process is just one piece of the puzzle. As many of you pointed out, it’s a huge burden if the reviews are black boxes. After all, the point of having an open development process is to empower the community to be successful with contributing features. This requires infusing the community with the tribal knowledge we have. To do this, we started to record the reviews and upload them to Channel 9. We also upload detailed notes and link them to the corresponding parts in the video.

Of course, a complex topic like API design isn’t something that one can learn by simply watching a review. However, watching these reviews will give you a good handle on what aspects we’re looking for and how we approach the problem space.

We’ve also started to document some less documented areas, such as breaking changes and performance considerations for the BCL. Neither page claims to be complete but we’re curious to get your feedback.

Contributor license agreements (CLAs)

Another change we recently made was to require contributors to sign a contributor license agreement (CLA). You can find a copy of the CLA on the .NET Foundation site. The basic idea is to make sure that all code contributed to projects in the .NET Foundation can be distributed under their respective licenses.

This is how CLAs are exposed to you:

  1. You submit a pull request

  2. We have an automated system that checks whether the change requires a CLA. For example, trivial typo fixes usually don’t require a CLA. If no CLA is required, the pull request is labelled as cla-not-required and you’re done.

  3. If the change requires a CLA, the system checks whether you already signed one. If you did, then the pull request is labelled as cla-signed and you’re done.

  4. If you need to sign a CLA, the bot will label the request as cla-required and post a comment pointing you to the web site where you can sign the CLA (fully electronic, no faxing involved). Once you’ve signed, the pull request is labelled as cla-signed and you’re done.

Moving forward, we’ll only accept pull requests that are labelled as either cla-not-required or cla-signed.
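The decision flow above can be sketched as a tiny function. The label names come from the post itself, but the function and its parameters are hypothetical, not the actual bot's code:

```python
def cla_label(change_requires_cla, has_signed_cla):
    """Return the label a CLA bot would apply to a pull request."""
    if not change_requires_cla:
        return "cla-not-required"   # e.g. a trivial typo fix
    if has_signed_cla:
        return "cla-signed"
    return "cla-required"           # bot posts a link to sign the CLA

def acceptable(label):
    """Only pull requests whose CLA state is settled are accepted."""
    return label in ("cla-not-required", "cla-signed")
```

Once the contributor signs, the bot relabels the pull request and `acceptable` flips to true.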

It’s worth pointing out that we intend to have a single CLA for the entire .NET Foundation. So once you’ve signed a CLA for any project in the .NET Foundation, you’re done. This means, for example, that if you signed a CLA as part of a pull request for corefx, you won’t have to sign another CLA for roslyn.

Automated CI system

We’ve always had an automated build system. The triggers vary between teams, but the most common approach is a daily build in conjunction with a gate that performs some validation before any changes go in. For an open source world, an internal build system doesn’t help much.

The most common practice on GitHub is to use a continuous integration (CI) system that builds on every push, including pull requests. This way, reviewers on the pull requests don’t have to guess whether the change will pass the build or not – the registered CI system simply updates the PR accordingly.

GitHub itself doesn’t provide a CI system; it relies on third parties to provide one. Originally, we used AppVeyor. It’s free for open source projects and I use it on all my personal projects now. If you haven’t, I highly recommend checking it out. Unfortunately, AppVeyor currently only supports building on Windows. To enable our cross-platform work, we wanted a system that can run on other operating systems, especially Linux. So we went ahead and now host our own Jenkins servers to perform the CI service.

We’re still learning

Our team is still learning and we believe it’s best to be transparent about it. As David Kean said in our initial open source interview:

Don’t be afraid to call us on it. If we do something wrong, overstep our boundaries, or do something that you think we should have done better, call us out on it.

The earlier you can tell us, the better. So why not share our thinking and learning before we make decisions based on it? Here are a few examples. 

  • Bots talking to bots. When we started to roll our own CI system and added the CLA process, we got a bit trigger-happy with using bots, which are essentially automated systems posting comments. This resulted in a flood of comments, which caused a lot of noise in the PR discussions and in some cases even dominated the number of comments. Nobody on our side quite liked it, but given the number of complaints from the community we prioritized this work and made our bots a lot less chatty. Instead of C-3PO, we now have R2-D2: no chitchatting, just a few short, actionable comments.

  • Using Git. Most of our team members have a lot of experience with using centralized version control, especially Team Foundation Version Control (TFVC). While we also have a set of quite experienced Git users, our team as a whole is still adapting to a decentralized workflow, including usage of topic branches and flowing code between many remotes. Andrew Arnott, who some of you probably know from the Channel 9 interview on immutable collections, recently did a Git training for the .NET team. We recorded it and uploaded it to Channel 9. We’d love to hear from you if sharing these kinds of videos is interesting to you!

  • Up-for-grabs. There is an established pattern in the open source community to mark issues in a specific way that the community can query for when they want to find opportunities to jump in. We’ve started to label issues with up-for-grabs to indicate bugs or features that we believe are easy to get started with and that we don’t currently plan on tackling ourselves. Thanks to Brendan Forster, the corefx project is now also listed on up-for-grabs.net, a website devoted to documenting how open source projects ask the community for support. Based on some questions Brendan raised, this also started a discussion of what up-for-grabs actually means. Feel free to jump in and let us know what you think!

Are there any other topics you’d be interested in? Let us know!

Library availability

At the time of the Connect() event, we only had a fraction of the libraries available on GitHub:

  • System.Collections.Immutable
  • System.Numerics.Vectors
  • System.Reflection.Metadata
  • System.Xml

Those four libraries totaled about 145k lines of code. Since then, we’ve added many more libraries which more than tripled the code size to now more than half a million lines of code:

  • Microsoft.Win32.Primitives
  • Microsoft.Win32.Registry
  • System.Collections.Concurrent
  • System.Collections.Immutable
  • System.Collections.NonGeneric
  • System.Collections.Specialized
  • System.Console
  • System.Diagnostics.FileVersionInfo
  • System.Diagnostics.Process
  • System.IO.FileSystem
  • System.IO.FileSystem.DriveInfo
  • System.IO.Pipes
  • System.IO.UnmanagedMemoryStream
  • System.Linq.Parallel
  • System.Numerics.Vectors
  • System.Reflection.Metadata
  • System.Text.RegularExpressions
  • System.Threading.Tasks.Dataflow
  • System.Xml

And we’re not even done yet. In fact, we’ve only tackled about 25% of what is to come for .NET Core. A full list is available in an Excel spreadsheet.


Since November, we’ve made several improvements toward an open development model. Code reviews happen in the open, and so do API reviews. And, best of all, we have a very active community which already outnumbers the contributions from our team. We couldn’t have hoped for more.

Nonetheless, we’re still at the beginning of our open source journey. We’re heads-down with bringing more .NET Core libraries onto GitHub. On top of that, the runtime team is busy getting the CoreCLR repository ready. You can expect an update on this topic quite soon.

As always, we’d love to hear what you think! Let us know via the comments, by posting to the .NET Foundation Forums, or by tweeting @dotnet.

Introducing the first update to the Deploying and Administering CRM Online and CRM 2015 documentation!

Wed, 01/28/2015 - 11:31

Allow me to get you up to speed on what is going on in the world of CRM documentation for administrators and IT professionals. Update version 7.0.1 of the Deploying and Administering CRM Online and CRM 2015 documentation has recently been published and includes several revisions and new topics. Included in this update is the Report writing with CRM 2015 for online and on-premises section, which reflects the current report writing environment using Visual Studio and SQL Server Data Tools (SSDT).



Here is the short summary of the significant changes included in this update.

New and updated topics:

  • Report writing with CRM 2015 for online and on-premises: Added section that includes several topics.

  • Update deployment configuration settings: Added several topics that further describe Microsoft Dynamics CRM Windows PowerShell cmdlets.

  • Use Deployment Manager to manage the deployment: Added section that includes all Deployment Manager Help topics.

  • Use Email Router Configuration Manager: Added section that includes all Email Router Configuration Manager Help topics.

  • Microsoft Dynamics CRM Monitoring Service: Added topic that describes how the Microsoft Dynamics CRM monitoring service records monitoring activity.

Matt Peart
Senior Technical Writer

Transforming to a digital business

Wed, 01/28/2015 - 11:20

Transforming to a digital business is now considered a requirement for virtually every enterprise. But this is much more than simply digitizing traditional processes: businesses are rethinking products, services, and even entire business models in a mobile-first, cloud-first, all-digital world. Power & Utility companies have realized that "getting closer to the customer" is crucial to meeting and exceeding customer expectations, whether that means information about outages, new programs, and services, or simply elevating the customer care experience through digital transformation initiatives. The energy marketplace is evolving rapidly, and customers see the experiences that service providers in other industries, such as finance, telecommunications, and retail, offer. This sets expectations high: customers expect a lot more from their utilities, whether in competitive or regulated markets. More and more, they are seeking interactions that are quick and simple, whether they involve mobile or digital options for transactions and information. Becoming more responsive is a natural outcome for companies that design customer-first products, services, and processes and effectively transform into a digital business.

A leading example of digitizing the customer experience comes from AGL Australia. AGL worked with Avanade and Accenture to implement a customer experience solution from our partner Sitecore running on our cloud platform, Azure. This let AGL take advantage of the platform's rich and scalable features while leveraging its existing investments in Microsoft technology. Running its customer service solution in the cloud is a crucial element of the digital experience AGL wants to deliver: it means AGL can rapidly scale the solution to meet demand and quickly deliver new products and services while maintaining a seamless customer experience. This is a great digital transformation story, and you can read all about it in AGL puts energy into action with the Cloud. May the Cloud be with You! – Jon C. Arnold

Cloud: Adding a messagebox to your Powershell

Wed, 01/28/2015 - 11:16
If you have a long-running action, you might want to have a message sent to you or a message box pop up.  Only use the message box when you are performing testing.  Better yet, use logging early and often in your design.  This is just an example I put together to show myself that PowerShell is just another way to implement Windows programming, or something like that. To use the message box, PowerShell can call the standard MessageBox class from the System.Windows.Forms namespace and use it just like you would in...(read more)
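A minimal sketch of the idea in PowerShell (the captions and the simulated delay below are just illustrative):

```powershell
# Load WinForms so the MessageBox type is available in the console host.
Add-Type -AssemblyName System.Windows.Forms

# ... imagine a long-running action here, e.g. a big file copy ...
Start-Sleep -Seconds 2

# Pop a message box when it finishes. Test/debug use only -- prefer logging in production.
[System.Windows.Forms.MessageBox]::Show("The long-running task finished.", "Done")
```

The Show call blocks the script until the box is dismissed, which is another reason to keep it out of unattended scripts.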

Power BI on Office Mechanics

Wed, 01/28/2015 - 11:15

Yesterday we announced exciting news for Power BI.

The new Power BI preview has been available since December 18th, with attractive new features such as live dashboards, connectors, the Power BI Designer, and mobile apps. Yesterday's announcement included new offerings and pricing. Starting February 1st, the price of our in-market Power BI for Office 365 service will be reduced significantly ($9.99/user/month for the Office 365 E3/E4 add-on and $17.99/user/month for the standalone version). When the new Power BI becomes generally available, there will be two tiers, Free and Pro, with the Pro version priced at $9.99/user/month. You heard right. FREE.

As a way to celebrate this announcement and to get a first look at the new Power BI, Jeremy Chapman joined Michael Tejedor from the Power BI team for a session on these brand new updates and features. Enjoy! 

Hello, World! Encore Edition aka Good-bye, Microsoft

Wed, 01/28/2015 - 10:52

Moving On

After nearly nine years at Microsoft, I’ve chosen to move on.

I’ve accepted a position at Fusion-io—now a SanDisk company—as Worldwide SQL Server Solutions Architect where I’ll be evangelizing, documenting best practices, collaborating directly with the Windows & SQL product teams & other partners, doing cutting-edge performance work, & of course flipping the /faster bit—all things about which I’m passionate—& doing so with the full faith & support of the best enterprise flash company on the planet.  I collaborated closely with several of these fine folks while managing the SQL CAT Customer Lab.  Like me, they’re passionate about performance, defining engineering discipline, & delighting the customer.  It’s a fantastic fit.

Why “Hello, World! Encore Edition”?  My very first MSDN post, “Hello, World!” reflects aspirations that remain true today.  (It also includes a hilarious excerpt from Scott Adams on “the why” of blogging.)  So this post isn’t good-bye, it’s simply a transition.

Contact: Blog & Twitter

My primary blog will be here: where I’ll continue to post tips, tricks, speaking events, & of course, insights on the aforementioned /faster bit.  That site, hosted by MVP Adam Machanic, features an awesome roster of talent that I’m honored to be a part of.

I invite you to look me up on Facebook as well as Twitter: @AspiringGeek.  Here’s my email:  SELECT REVERSE('moc.evil@keeggniripsa')


Changing the World

Anyone who’s read my blog or heard me speak knows I’ve liberated my motto from @GapingVoid

Changing the world is exactly what I’ve tried to do—& will continue to do—to make it a better place, to best serve our customers.

I had lots of help, working with lots of great engineers & lots of great customers—lots of great people.

Along the way I authored one white paper and was named contributor or technical reviewer on almost two dozen others.  I had the privilege of editing or contributing to several books, including the career- & life-changing Getting Results the Agile Way: A Personal Results System for Work & Life by MS Principal PM J.D. Meier (Sources of Insight | 30 Days of Getting Results | wiki | amazon).

Test-drive the book on how to make the most of work and life.
Read Getting Results the Agile Way for free online.

I spoke at the PASS International Summit five times, TechReady seven times, & at dozens of user groups across the country & some outside.

Here’s one of my favorite moments on stage, demonstrating how columnstore turns conventional row store on its head with my #SQLWingMan, Shahry Hashemi aka @dsfnet (photos courtesy of @GEEQL).

As a consultant, I led over 60 on-site customer engagements around the world.  As a SQL CAT PM & manager of the Customer Lab, we shepherded over 100 customer engagements.  With a lot of help from my team, we architected the Customer Lab to host parallel engagements, remote engagements, streamlined our onboarding processes, paved the way for Cloud engagements, upgraded our network & I/O infrastructures, & enhanced our collaboration with several partners resulting in the acquisition of dozens of the latest-&-greatest servers & storage devices.  At MSIT I was part of the team troubleshooting performance issues & establishing best practices during our massive migration to Azure.

The Roles

Here’s a summary of my exciting roles while at Microsoft:

  • Microsoft Consulting Services, Senior Consultant
  • Microsoft A.C.E. Performance Team, Principal Consultant
  • The SQL Server Customer Advisory Team—SQL CAT, Senior Program Manager
  • Microsoft Information Technology—MSIT, Principal Architect

The Training: Microsoft Certified Master #MCM4Life

Just as SQL CAT was in so many ways a dream job, Microsoft Certified Master training was the highlight of my professional training.  I prepped for months, reading every single page of the required reading—Kimberly Tripp later stating I was “The One” who’d done so. ;-)  At that time the cert had a required three-week on-site component during which, one after another, we were instructed by the best SQL Server trainers on the planet.  Just as importantly, I was privileged to sit alongside two dozen peers, all of whom remain friends.  On Labor Day weekend 2012, Microsoft Learning unceremoniously terminated the program without warning.  Many of us hope the program is resuscitated in a meaningful way.  In the meantime, MCM training was in so many ways an incredible, special, unique experience.

Certification logo courtesy of MSL combined with lots of hard work.
Gang tat courtesy of Robert Davis, yo. 

The People

Attempting to thank all those who’ve supported, encouraged, & championed my career reminds me of the dilemma faced by Academy Award winners—it’s impossible to properly acknowledge everyone in the allotted time, & there’s the risk of leaving out many who richly deserve credit.  But I’m going to try anyway.

If I were to try enumerating my myriad community colleagues, allies, & friends outside of Microsoft, the list would be unmanageable.

Yet I must recognize those in Microsoft leadership positions without whom my success would not have been possible:

  • Eddie Lau & Irfan Chaudhry of A.C.E. Perf
  • Mark Souza & Lindsey Allen of SQL CAT

And here (in no special order) are just a few of the current & former Microsoft staff who’ve been invaluable:

  • My “Master Mind”: J.D. Meier, Alik Levin, Rob Boucher
  • Especially Special Persons: Marilyn Grant, Janelle Aberle
  • Mentors: Ty Moore, James Day
  • MCS Mentors: Joe Sack & Kate Baroni
  • SQL CAT: Shaun Tinline, Sanjay Mishra, Mike Ruthruff, Thomas Kejser, Denny Lee, Cindy Gross, Lara Rubbelke, Kathy MacDonald, Chuck Heinzelman, Ewan Fairweather, Mike Weiner, Tom Davidson, Regina Jones, Mike Anderson
  • MCMs: Jens Suessmeyer, Bertil Syamken, Jose Barrios, Robert Davis, Argenis Fernandez, Kalyan Yella, Tracey Jordan, Cris Benge
  • MCM Trainers: Kalen Delaney, Paul Randal, Kimberly Tripp, Greg Low, Adam Machanic
  • MSIT: Chris Lundquist, James Beeson, Vitaliy Konev, Casie Owen, Brian Walters, Ahmad Mahdi, Dale Hirt, Rob Beddard
  • PFEs: Gennady Kostinsky, Clint Huffman, Shane Creamer, Ken Brumfield, Robert Smith, Kaitlin Goddard
  • SQL BI: Kay Unkroth
  • SQL PG:  Sunil Agarwal, Eric Hanson, Susan Price, Kevin Farlee, Luis Carlos Vargas Herring
  • Others: Buck Woody, Pablo Brontvain, Steven Schneider, Brian Raymer, Steven Wort, Bruce Worthington, Dandy Weyn, Cephas Lin, Matthew Robertshaw, Mark Pohto, Bob Roudebush

Living Life to the varchar(max)

Thank you, everyone—inside & outside of Microsoft—for your help & friendship during my tenure. I couldn’t’ve done it without you.  Our journey continues, & I look forward to exploring the road of happy destiny together with all of you. 

In the meantime, here’s to living life to the varchar(max)!

Jimmy May, Aspiring Geek

MVP Community Camps - 244 MVPs, 2 Continents, Thousands of Participants

Wed, 01/28/2015 - 10:04

This weekend marks the start of something special in technical communities across Asia and Australia: the MVP Community Camps!  An estimated 5,000 IT, consumer and developer-minded technophiles will meet and discuss everything from Windows 10 to SQL Server.  The Microsoft MVP Community Camp (ComCamp) is scheduled in 28 cities in 7 countries throughout Asia and boasts 244 MVPs as presenters and speakers. 

The goal for MVPs is to share knowledge and expertise in their own cities and countries across Asia and Australia. The goal for attendees is to expand their knowledge of Microsoft technologies and services. One of the unique attributes of the ComCamps is that MVPs will spend time answering questions and sharing valuable feedback with attendees in an intimate setting.

Each country will deliver various sessions based on speaker and attendee needs.  Some sessions will be presented in person exclusively, while others will provide streaming and video download options.  Register now for an MVP ComCamp near you!






The History of the For Loop

Wed, 01/28/2015 - 09:25

As everybody knows (or should), a For Loop is a statement in programming languages that lets you repeat (iterate over) a body of statements. That's what a loop is: repeating a body of statements.

There are three common loops: the For Loop, the While Loop, and the GoTo Loop. Some folks don't like GoTo loops (because of all the spaghetti code they can create).


And so the For Loop is the most common of the three. The For Loop repeats a certain number of times (for example, 6 times). Because that number can be a variable, your user can specify it. In other words, they can type "9" in your program, and then your For Loop can read that variable and repeat 9 times.
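That user-supplied count might look like this in Small Basic (a sketch; the variable names are arbitrary):

```smallbasic
count = TextWindow.ReadNumber()
For i = 1 To count
  TextWindow.WriteLine(i)
EndFor
```

If the user types 9, the loop body runs nine times, printing 1 through 9.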

Read about how to create For Loops in Small Basic here: Small Basic: How to Create For Loops


And so that brings us to the question...


Where did For Loops come from?


Well, there were originally three different perspectives on looping a body of code a set number of times...

  1. FORTRAN - Fathered the Do Loop
  2. ALGOL - Fathered the For Loop
  3. COBOL - Fathered the "Perform Varying" loop


Although ALGOL is the least recognized of the three languages, it's the one that won the "Loop Nomenclature War" and got to decide the term we use the most today... the For Loop.

Now, part of the reason it won, and also why it isn't as recognizable as a language, is that it came after FORTRAN (chronologically). So it didn't have quite as big an impact on programming (being a bit of a copy of what was already out there), but it also had the chance to think through some key improvements. And FORTRAN and COBOL have found long lives as they continue to wiggle down their business paths.



FORTRAN was started in 1953 by John Backus at IBM in New York (the compiler was released in 1957). It was created as a high-level alternative to assembly language for IBM's shiny, new (and huge) IBM 704 mainframe computer (more like a whole room by itself). (NASA bought some of these bad boys in the '50s.) It borrowed heavily from the GEORGE compiler of 1952.

It included a bunch of programming concepts we still use today, like If, GoTo, Read, and Print. It also helped popularize loops with the Do Loop.
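For flavor, a classic fixed-form FORTRAN DO loop reads roughly like this (a later fixed-form sketch; the earliest FORTRAN I/O syntax differed):

```fortran
      DO 10 I = 1, 6
          PRINT *, I
   10 CONTINUE
```

The loop body is everything down to the labeled statement (here, label 10), with I stepping from 1 to 6.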

Read more about how to program Do Loops here:



Enter Grace Hopper, a United States Navy rear admiral (commodore). She was one of the first programmers of the Harvard Mark I computer in 1944! She invented the first compiler, she popularized machine-independent programming languages, and she even popularized the term "debugging" when she physically removed an actual moth from a computer! She even got the US Navy destroyer USS Hopper named after her! That's why they called her "Amazing Grace"!

Well, in 1943, Amazing Grace joined the US Navy Reserves, aced school, and went to work at Harvard, where she was one of the first programmers on the Harvard Mark I. From 1949 through '54, she worked at the Eckert-Mauchly Computer Corporation, where she helped create the UNIVAC I, the first compiler, MATH-MATIC, and FLOW-MATIC.

Then, in 1959, the Navy funded her to get together a group of programmers (CODASYL), and together they masterminded COBOL (from Grace's FLOW-MATIC and IBM's COMTRAN). COBOL stands for COmmon Business-Oriented Language.

Being a business-oriented language, COBOL has a more verbose take on iterative loops (thus the use of "PERFORM VARYING").
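For comparison, an inline COBOL loop using PERFORM VARYING reads something like this (a sketch in the later COBOL-85 inline style; the variable name is illustrative):

```cobol
PERFORM VARYING I FROM 1 BY 1 UNTIL I > 9
    DISPLAY I
END-PERFORM
```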



ALGOL stands for ALGOrithmic Language. It was developed by a committee of American and European computer scientists who met in Zurich in 1958. A key contributor was (you guessed it) John Backus!

Backus is back (us)!

One member of that committee was Heinz Rutishauser, who brought in some elements of his Superplan language, including the Für loop, which was rendered in English as the For Loop. They also used the Step and Until delimiters.
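In ALGOL 60 notation, those Step and Until delimiters look roughly like this (a sketch; the output procedure name varied by implementation):

```algol
for i := 1 step 1 until 9 do
    outinteger(1, i);
```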


So there you go! That's a shotgun blast of programming history.

Leave a comment to help add some info to this rich history!


And click here to learn about how Small Basic does For Loops: Small Basic: How to Create For Loops

   - Ninja Ed


Getting started with Azure, Don’t Panic

Wed, 01/28/2015 - 09:01
The Microsoft Cloud is AWESOME!  But how do you figure out where to begin?  This has been a big question for me. As Douglas Adams, the author of “The Hitchhiker’s Guide to the Galaxy,” says about space: “Space…is big.” The same applies to Azure.  And since we aren’t worried about a galactic bypass forcing the destruction of Earth, getting motivated to learn about big things can be difficult.  But don’t panic.  Oh yeah, you’ve got to understand the “Cloud” if...(read more)

NETMF 4.3 QFE2 released

Wed, 01/28/2015 - 08:59
The changes to NETMF 4.3 that make it work with VS 2013 have been released from beta and are available on CodePlex. This will give us room in the next few weeks to start a new beta with more cool things coming. Stay tuned....(read more)

TechDays Online February 3rd-5th! A free technical studio broadcast from your own sofa.

Wed, 01/28/2015 - 08:49



Live technical sessions from your sofa



TechDays Online – Feb 3rd- 5th


As a business IT professional or developer, do you want to stay a step ahead in the fast-moving world of Microsoft solutions? Do you want your business to thrive in a cloud-first, mobile-first world?

 Invite your tech teams to a three-day virtual technical conference designed for your business.

We want to make it easy for your teams to get involved. TechDays Online 2015 brings you all the great content, advice and inspiration you’d expect from Microsoft. The conference program will be delivered by Microsoft UK specialists and industry professionals, as well as customers and partners. The fully interactive sessions give all delegates the opportunity to engage directly with presenters and other participants.


What’s on the agenda?

Each day will have a strong theme and a keynote session, with customer case studies integrated throughout.

To make sure attendees hear the latest and greatest, we’ve lined up top tech journalist Mary Jo Foley to open the event. The father of PowerShell, Jeffrey Snover, will take them into day two, and last but not least, Scott Hanselman will give the dev spin on what’s been going on in tech.

Each day’s theme:

Day 1. Devices and managing a mobile-first world & Office 365 evening sessions

Day 2. The journey to the Cloud-first world

Day 3. Multi-device, cross-platform development.


Invite your teams to join us from 3-5 February for hands-on technical learning delivered by industry experts. There’s no travel necessary – just good WiFi.

Note that the broadcast comes from the UK. Add a reminder to your calendar to get the right time zone.

Find out more and register here>>





Digital Leaders at BETT launch Kodu Kup UK

Wed, 01/28/2015 - 07:28

BETT seemed to be filled with kids this year. Many were groups of schools’ digital leaders. This is fantastic, as it gives students real insight into the technology available to schools. I spoke to many teachers and groups of digital leaders, and if any of you would like to share a great project that you are doing with Microsoft technology in your school, then get in touch (@innovativeteach) and I will post it here.

We launched Kodu Kup UK last week and I had the privilege of meeting the digital leaders from Wellstead Primary School, who helped me do this. Check out their blog at

This is their BETT adventure, in their own words:

As part of being a digital leader, Children from KS2 went to the Bett Show. This took place in London and promoted every technological that could be used in schools. After getting a busy commuter train up to London, we had arrived at the Excel Centre. There was so much to see!! One of the first stands we went to was Microsoft. When meeting the representatives on the stand they told us about an opportunity called the Kodu Kup. Children create different games and the best game that is created wins a prize. They wanted us to be part of the “opening ceremony” to promote the launch of the Kup. We had an explore round the arena. We played with new bee bots, funky headphones and lego.

The compiler can make up its own calling conventions, within limits

Wed, 01/28/2015 - 07:00

A customer was confused by what they were seeing when debugging.

It is our understanding that the Windows x86-64 calling convention passes the first four parameters in registers rcx, rdx, r8, and r9. But we're seeing the parameters being passed some other way. Given the function prototype

int LogFile::Open(wchar_t *path, LogFileInfo *info, bool verbose);

we would expect to see the parameters passed as

  • rcx = this
  • rdx = path
  • r8 = info
  • r9 = verbose

but instead we're seeing this:

rax=0000000001399020 rbx=0000000003baf238 rcx=00000000013c3260
rdx=0000000003baf158 rsi=000000000139abf0 rdi=00000000013c3260
rip=00007ffd69b71724 rsp=0000000003baf038 rbp=0000000003baf0d1
 r8=0000000001377870  r9=0000000000000000 r10=000000007fffffb9
r11=00007ffd69af08e8 r12=00000000013a3b80 r13=0000000000000000
r14=0000000001399010 r15=00000000013a3b90
contoso!LogFile::Open:
00007ffd`69b71724 fff3            push    rbx
0:001> du @rdx // path should be in rdx
00000000`03baf158  "`"
0:001> du @r8 // but instead it's in r8
00000000`01377870  "C:\Logs\Contoso.txt"

Is our understanding of the calling convention incomplete?

There are three parties to a calling convention.

  1. The function doing the calling.
  2. The function being called.
  3. The operating system.

The operating system needs to get involved if something unusual occurs, like an exception, and it needs to go walking up the stack looking for a handler.

The catch is that if a compiler knows that it controls all the callers of a function, then it can modify the calling convention as long as the modified convention still observes the operating system rules. After all, the operating system doesn't see your source code. As long as the object code satisfies the calling convention rules, everything is fine. (This typically means that the modification needs to respect unwind codes and stack usage.)

For example, suppose you had code like this:

extern void bar(int b, int a);

static void foo(int a, int b)
{
    return bar(b + 1, a);
}

int __cdecl main(int argc, char **argv)
{
    foo(10, 20);
    foo(30, 40);
    return 0;
}

A clever compiler could make the following analysis: Since foo is a static function, it can be called only from this file. And in this file, the address of the function is never taken, so the compiler knows that it controls all the callers. Therefore, it optimizes the function foo by rewriting it as

static void foo(int b, int a)
{
    return bar(b + 1, a);
}

It makes corresponding changes to main:

int __cdecl main(int argc, char **argv)
{
    foo(20, 10); // flip the parameters
    foo(40, 30); // flip the parameters
    return 0;
}

By doing this, the compiler can generate the code for foo like this:

foo:
    inc  ecx
    jmp  bar

rather than the more conventional

foo:
    mov  eax, ecx  ; save a
    mov  ecx, edx  ; ecx = b
    inc  ecx       ; ecx = b + 1
    mov  edx, eax  ; edx = a
    jmp  bar

You can look at this transformation in one of two ways. You can say, "The compiler rewrote my function prototype to be more efficient." Or you can say, "The compiler is using a custom calling convention for foo which passes the parameters in reverse order."

Both interpretations are just two ways of viewing the same thing.

Code Sharing with C# and Visual Studio

Wed, 01/28/2015 - 06:04

I have created an MVA course on code sharing with C# and Visual Studio, in which I present a few practices for writing as little duplicate code as possible when targeting different platforms.

Of course I know that not every method is suitable in every situation, but I think it is important to get an overview. I have blogged about some of these topics in the past; with the MVA course, you can now take it all in comfortably from your couch.

You can find the course here. It consists of the following modules:

- Introduction

- Code sharing at the file level

- Class parts and lines

- Portable Class Libraries

- Patterns

- Dirty Tricks


Have fun! If you don't know the MVA yet, this is a good opportunity to get acquainted. The MVA is our free learning platform. Just sign in with your Live ID and you can even collect points… gamification everywhere… collecting points is always somehow good, and they don't take up any space…

Vision for Technology in Education Conference – Monday 9th February 2015

Wed, 01/28/2015 - 05:30

Microsoft are running a joint ICT conference with School Business Services, specifically designed to help school leadership teams understand how to embed new and upcoming technologies within their schools. The event is free to attend.

Book Now: Vision for Technology in Education Conference – Monday 9th February 2015

School Business Services is a leading supplier of education support services. Across three core specialisms; ICT, MIS/SIMS and Finance & Business Management; SBS offers consultancy, training, helpdesk support and managed services to over 350 schools.

SBS has developed its own cloud-based budget management software, SBS Online. Created to meet the ever changing needs of school and academy development, it incorporates budget planning, reporting and monitoring in one user-friendly interface.

Whether you are looking to improve your current ICT infrastructure, planning new technology procurement, or simply want the latest news on Microsoft educational products, this conference is for you.

SBS is delighted to announce the confirmed guest speaker as Jenny Smith, Frederick Bremer School’s Headteacher from the popular Channel 4 programme 'Educating the East End'.

Who should attend?

Headteachers • Principals • Deputy & Assistant Headteachers • Business Managers

The event will include presentations from experts on:

• Frederick Bremer’s ICT and SIMS journey – Jenny Smith 
• Developing your ICT strategy
• "Anytime Anywhere" Learning
• Planning your ICT budget effectively
• Windows 8 in education
• Data management and life without levels
• Office365 for education

The content has been devised to be most relevant to SLT members, as it is a non-technical conference. Vision for Technology in Education is a FREE conference, which will include refreshments and lunch at the Microsoft offices in Victoria, London. We anticipate that this event will be oversubscribed, so places are limited to two per school.

Book your place today!

Loading daily gold prices into Azure ML

Wed, 01/28/2015 - 05:22

Want to load daily gold prices into Azure ML? Here’s one way to do that:

The properties for the Reader object now look like this:

Now run your experiment, and after it completes, check the results by clicking Visualize on the output port:
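One hedged way to sketch the same idea in code: the Reader module essentially points at a CSV feed of daily prices, which could be parsed like this in Python (the column names and prices below are hypothetical sample data, not real quotes):

```python
import csv
import io

# Hypothetical sample of a daily gold price feed (date, USD per ounce).
# A real experiment would point the Reader module at the provider's CSV URL.
SAMPLE = """Date,Price
2015-01-26,1281.30
2015-01-27,1292.00
2015-01-28,1284.60
"""

def load_prices(text):
    """Parse the CSV feed into a list of (date, price) tuples."""
    rows = csv.DictReader(io.StringIO(text))
    return [(row["Date"], float(row["Price"])) for row in rows]

prices = load_prices(SAMPLE)
print(prices[-1])  # most recent day's (date, price) pair
```

Once the data lands in the experiment, the Visualize step above is just a quick sanity check that the columns parsed as expected.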

Be More, Do More with Developer Movement

Wed, 01/28/2015 - 04:30

If you like to play with code, you'll want to read on. Join Developer Movement, Microsoft Canada's developer rewards program, and earn points for every challenge you complete! Points can be redeemed for a number of great prizes including #DevMov swag, gaming mice/keyboards, Xbox One consoles, Windows Phones and even the coveted Surface Pro 3.

Calling all indie devs, app devs, and new devs alike! Microsoft Canada wants YOU to put your coding skills to the test and earn rewards while having fun! Each month, Developer Movement participants are tasked with challenges spanning some of today's top topics in development: cloud, web, apps, games, and more. Those who complete the challenges successfully earn points, which can be redeemed for some awesome prizes including Xbox Ones, touch screen monitors, Nokia Lumia 1020s, and even the top-tier Surface Pro 3. The structure is simple: each month, one major challenge and five mini challenges are announced. For the major challenge, participants create apps that correspond to the month's theme - this month's theme is sports/fitness. The mini challenges give developers simple tasks across many different languages and platforms - past mini challenges have included Node.js, MongoDB, PHP, and Unity.

Developer Movement, or #DevMov, as it's fondly called, gives developers of all experience levels an opportunity and a reason to try new things and experiment. A big part of being a developer is staying up to speed with the latest technologies, and #DevMov rewards you for doing just that. It's fun, it's ongoing, and it's free to join - sign up on and start working towards those rewards!

NOTE: As part of a future challenge of #DevMov, participants will be able to earn extra points by sharing the links to their apps in the comments section of this blog post. If you are sharing, comment below! If you are just reading, please check out the apps - you might just find the next big thing.


Sage Franch is a Technical Evangelist at Microsoft and blogger at Trendy Techie. Tweet her @theTrendyTechie to get in touch!

The current user does not have access to Release Management. Please login with a valid user or communicate with the Release Management administrator to add your user

Wed, 01/28/2015 - 04:26

One of the Release Management customers on VSO reported the above error; he was not able to connect to the service using the WPF client.

Anjani Chandan from the team resolved the issue using the following steps.

  • He observed that the user was able to make a REST call to https://<mytenant>… from a “private” browser window and got a correct JSON response from the service.

  • The user told him that he had recently changed the account owner, and that he was able to use the WPF client via the initial owner but not via the current owner (def@…).
  • Anjani suggested that he add the new owner (def@…) as a release manager via the WPF client and try connecting as the new owner:
    • Log in to the WPF client as the initial owner.
    • Go to Administrator –> Manage Users –> New.
    • Add the current owner (def@…).
    • Log out: Administrator –> Settings –> System Settings –> Sign out.
    • Log in as the new owner.
  • It should work.

Enjoy !!


Drupal 7 Appliance - Powered by TurnKey Linux