MSDN Blogs

from ideas to solutions

Should you utilise the services of marketing agencies?

Mon, 04/20/2015 - 17:52


Many companies are hesitant to seek the services of marketing agencies, based on the premise that ‘if we have internal marketing capability, we should be able to do it ourselves’. In addition, when thinking of marketing agencies, large global ad agencies and large dollar signs may spring to mind.

However, agencies often have a breadth of skills that in-house marketers don’t have, and even seasoned marketers can’t possibly hope to know every facet of marketing to the degree that a team of, say, 20 people at an agency may collectively know it. When hiring a marketing agency, it is safe to assume that you are getting both senior-level marketing professionals and a wide range of more junior skills.

In my previous blog I discussed the changing face of marketing and how to effectively leverage digital marketing, more specifically in-bound and content marketing. These are not skills that are learnt overnight, and the likelihood of needing expert help increases, at least in the short term, in order to learn not only how to create content but also how to effectively utilise the digital mediums through which to share that content with potential future customers.

Hiring a marketing agency doesn’t mean that you should eliminate in-house marketing skills as in-house marketing staff act as a primary interface with agencies on messaging, value proposition and ensuring that marketing remains ‘on brand’ for the partner organisation. Often the result of in-house marketing working with an agency produces a better outcome than that of either party working in isolation. A good marketing agency can often bring out the best in in-house marketing if the two parties ‘gel’ well.

As well as adding marketing skills and ideas on how an organisation might better market its products and services, agencies can help you scale your marketing by looking after some of the more mundane aspects of marketing such as list building and production, thus enabling the internal marketer to focus on how to drive the overall marketing strategy for their business.

When choosing an agency, it is worthwhile to select one that has experience in the IT industry, to save yourself hours of having to explain concepts and why they are important. The agency does not necessarily have to have specific product knowledge, just sufficient knowledge of IT in general.

If you missed my previous blog, please find it at: http://blogs.msdn.com/b/auspartners/archive/2015/03/16/the-changing-face-of-marketing.aspx

Azure SQL Database Security Features

Mon, 04/20/2015 - 16:52

The Microsoft Azure platform is evolving fast. Azure SQL Database, a relational database service running on Azure, is riding high on the cloud wave with new features enabled at a fast pace. I want to share a few Azure SQL Database security features (currently in GA or public preview) that could help developers and DBAs develop and manage a secure SQL Database solution. All security features mentioned in this blog are available for Basic, Standard, and Premium databases on v12 servers.

Feature                  | Status               | Target scenario
-------------------------|----------------------|----------------------------------------------------------
Firewall                 | GA                   | All
Secure connection        | GA                   | All
Auditing                 | GA                   | Log data access/change trails for regulatory compliance
Data masking             | Public preview (V12) | Obfuscate confidential data in the result set of a query
Row-level security (RLS) | Public preview (V12) | Multi-tenant data access isolation

Firewall (GA) – This feature has been available for Azure SQL Database since the very beginning. It’s a way for DBAs to control which clients, based on IP addresses, can access a logical Azure SQL server or a specific database. By default, no firewall rules are defined for a newly created logical server, and nobody outside of Azure can access any database on that server; you must define a rule before the first connection can be made. Note that firewall rule IP ranges at the server level and the database level don’t overlap. You may also allow other Azure services to access your server or database with a single rule, by selecting a checkbox rather than specifying IP addresses.
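
As a rough illustration (my own sketch, not from the original post), once you can already reach the server's master database - for example after creating an initial rule in the portal - additional server-level rules can also be managed from code via the sp_set_firewall_rule stored procedure. The server name, credentials, rule name, and IP range below are placeholders.

using System;
using System.Data.SqlClient;

class AddServerFirewallRule
{
    static void Main()
    {
        // Connect to the logical server's master database (this itself requires that
        // your current client IP is already allowed, e.g. via a rule created in the portal).
        const string masterConnectionString =
            "Server=tcp:yourserver.database.windows.net,1433;" +          // placeholder server
            "Database=master;User ID=youradmin;Password=yourpassword;" +  // placeholder credentials
            "Encrypt=True;TrustServerCertificate=False;";

        using (var connection = new SqlConnection(masterConnectionString))
        using (var command = connection.CreateCommand())
        {
            // sp_set_firewall_rule creates or updates a server-level rule;
            // sp_set_database_firewall_rule does the same at the database level.
            command.CommandText =
                "EXEC sp_set_firewall_rule @name = @ruleName, " +
                "@start_ip_address = @startIp, @end_ip_address = @endIp";
            command.Parameters.AddWithValue("@ruleName", "OfficeNetwork");  // placeholder rule name
            command.Parameters.AddWithValue("@startIp", "203.0.113.1");     // placeholder range
            command.Parameters.AddWithValue("@endIp", "203.0.113.254");

            connection.Open();
            command.ExecuteNonQuery();
            Console.WriteLine("Firewall rule created or updated.");
        }
    }
}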

Secure connection (GA) – SQL Database requires secure communication from clients based on the TDS protocol over TLS (Transport Layer Security). Note that for an application to be truly protected against man-in-the-middle attacks, we encourage you to follow these guidelines: explicitly request an encrypted connection and do NOT trust the server-side certificate.
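
For example (a minimal ADO.NET sketch with a placeholder server, database, and credentials), a connection string that explicitly requests encryption and does not trust the server certificate looks like this:

using System;
using System.Data.SqlClient;

class SecureConnectionSample
{
    static void Main()
    {
        // Encrypt=True requests an encrypted (TLS) connection;
        // TrustServerCertificate=False forces validation of the server certificate,
        // which is what protects against man-in-the-middle attacks.
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net,1433;" +  // placeholder server
            "Database=yourdatabase;" +                            // placeholder database
            "User ID=youruser;Password=yourpassword;" +           // placeholder credentials
            "Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            Console.WriteLine("Connected with an encrypted, validated connection.");
        }
    }
}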

Auditing (GA) – Allows customers to record selected database events in log files for alerting and post-mortem analysis, for example as part of maintaining regulatory compliance such as PCI or HIPAA. Common auditing events include insert, update, and delete events on tables. Using SQL Database Auditing, you can store the audit logs in Azure table storage and build reports on top of them. There is a preconfigured dashboard report template available for download (requires Excel 2013 or later plus Power Query). SQL Database Auditing requires the use of a secure connection string.

Data masking (public preview) – A policy-based security feature that limits the exposure of sensitive data, such as credit card numbers, social security numbers, or clinic patient info, to non-privileged users. Similar to Auditing, it’s useful for scenarios with compliance requirements. You may specify masking rules to be applied to designated fields, either at the source (tables/columns) or at the results (an alias used in queries). Note that masking rules are applied to the appropriate data in the result set of a query. Unlike encryption, data masking does NOT protect sensitive data at rest or during query processing in memory. Data masking requires the use of a secure connection string.

Row-level security (RLS) (public preview) – This feature is aimed at multi-tenant applications that share data in a single table within the same database. Today, application developers typically have to build logic in the application code to keep tenants from accessing each other’s data. In contrast, RLS centralizes the isolation logic within the database, simplifying application design and reducing the risk of error. With RLS, security policy managers can encode the isolation logic in a security policy using inline table-valued functions. An example of how to use RLS in a middle-tier, multi-tenant application can be found here.

 

Additional security resources:

Intro to C# and Analyzing Government Data

Mon, 04/20/2015 - 16:32

Hello World!

I have recently been informed that many of my articles may be a bit advanced for folks, so I am going to kick off a series of C# articles dedicated to the absolute beginner to programming.  I have no idea how long this series is going to be; I’ll just keep adding to it as requests come in for various topics.  This series is meant to take the absolute beginner to a level at which they can derive value from my other articles.

 This is a pull from my personal blog at: www.indiedevspot.com

Tools

Let’s start off with some tools.  For the purposes of this article, there is really only one tool that you need: Visual Studio.  As of today Visual Studio 2015 is in preview, so in the spirit of all that is beta, we will start with that immediately.  I imagine Visual Studio 2015 Community will be coming to general availability soon, but I have no idea when.

The download for Visual Studio is here: https://www.visualstudio.com/en-us/downloads/download-visual-studio-vs.aspx

References

These are the places that I go when I need help figuring something out, but also places you can go to learn.

  1. http://www.microsoftvirtualacademy.com/training-courses/c-fundamentals-for-absolute-beginners
  2. http://stackoverflow.com/
  3. https://msdn.microsoft.com/library
  4. https://github.com/
Terminology

These are some common terms you will hear in the development world.  I want to address some of these right away.

  • Full Stack Developer:  A developer who can code from the database to web services to the client on a single tech stack.
  • Tech Stack: Usually this is .Net, Node, or an open source stack.  It’s the series of technologies that you use to accomplish data storage, web services and UI.
  • .Net:  A family of development languages and technologies developed by Microsoft.  C# is a language in the .Net family, as are F# and VB.  They all run in the “CLR”, the common language runtime, which is the code execution environment shared across these languages.
  • Data Storage: This is typically in reference to long-term data storage or databases, such as SQL.
  • SQL: Strictly speaking, the query language used by relational databases; in casual use it often refers to SQL Server, a relational database developed by Microsoft for storing tabular data with relationships between the tables.
  • Web Services:  Endpoints on the web that can be accessed to complete tasks and return results.  Typically Restful.
  • Endpoint:  Typically a globally accessible or resolvable address on the internet, such as http://www.mywebsite.com/api/Cars
  • Restful:  A style of internet calls to endpoints, controlled by a series of known verbs, typically GET, POST and DELETE.  It is typically manifested in a URL such as http://www.mywebsite.com/api/Cars/Black/1 (which would get a black car with an id of 1 if following normal conventions).
  • Object Oriented: A paradigm of coding in which objects are the primary feature of the language.
  • Object: A section of code that contains data as well as methods to manipulate that data.  Manifested as a piece of memory that can be instantiated in your code.  Objects share a common structure but differ in their mutable state (see the small example after this list).
  • Mutable type:  Data that can be changed and lives in memory, typically in the context of an object.
  • Logic:  The intelligence in your code that manipulates data to produce a desired result.
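
To make the last few terms concrete, here is a tiny illustrative class of my own (not part of the tutorial project we build below): the class is the blueprint, the instantiated object is the piece of memory holding mutable data, and the method is the logic that manipulates that data.

using System;

//A Car is an object: it holds data (mutable state) plus logic that manipulates that data.
public class Car
{
    //Data: a mutable property that lives in memory once the object is instantiated.
    public int Speed { get; set; }

    //Logic: a method that manipulates the object's data.
    public void Accelerate(int amount)
    {
        Speed = Speed + amount;
    }
}

public class TerminologyDemo
{
    static void Main(string[] args)
    {
        //Instantiate the object (a piece of memory of type Car).
        Car myCar = new Car();
        //Mutate its state through its logic.
        myCar.Accelerate(50);
        Console.WriteLine("Speed: " + myCar.Speed);
    }
}
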
Getting Started

Start by opening Visual Studio 2015.

On the top left, you should see a series of buttons, click File -> New -> Project

Expand Templates, click on Visual C# and select ConsoleApplication.  Name the application Tutorial (the code below assumes this name).

Click OK.

You should now have a view like below…

Let’s take a minute to talk about this.

I will explain the concepts in the order you need them.

  1. namespace Tutorial: This declares that the code within these curly brackets ({ }) will be referenced by the name Tutorial.
  2. using System; : System is a namespace; we are using the System namespace so that we have access to the objects within it.
  3. class Program: This is the default class built for us; other code will reference it via Tutorial.Program.
  4. static void Main(string[] args): This is the “entry point” of the application.  This is where everything starts.

Now let’s start with the super simple “Hello World” before we jump into something way more interesting.

namespace Tutorial
{
    public class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
            Console.ReadKey();
        }
    }
}

Change your code to reflect the above (keep your using statements).  This will simply write “Hello World!” to the console and then wait for you to press any key before moving on.

To run the program, look at the very top bar in Visual Studio for the “Debug” button.  It is identifiable as a green arrow pointing to the right with “Start” next to it.  Click it to execute your code.

You will see a console pop up and print “Hello World!”; press any key and it will go away.

Doing something Real now.

So now let’s do something interesting, like figuring out the best and worst places to live in the United States based on the open data on data.gov :)

Download the data from here: https://onedrive.live.com/redir?resid=478A64BC5718DA47!300&authkey=!AP7WdDyZ_Inbfas&ithint=file%2cxls

This is real data pulled from data.gov.

First, we need to bring in a library that helps us parse Excel files.

Open Solution Explorer

On the right-hand side of the screen is Solution Explorer.  If you don’t have it there, it can be opened from the top menu bar via View -> Solution Explorer.

Add the Excel NuGet package.  Right-click the project (the .csproj file, named Tutorial) in Solution Explorer.

After right-clicking Tutorial, a submenu will pop up; click “Manage NuGet Packages”.  A new dialog will fill the screen.  In the search text box, type “Excel”.

Click on ExcelDataReader and click Install.

Close that tab.

Now we are ready to do something useful!

Let’s start by adding the new using statements that we need to the top of the file:

using Excel;
using System;
using System.Collections.Generic;
using System.Data;
using System.IO;

Let’s build a few types…

Within the Tutorial namespace, add the following code:

//Enum Type
//These are actually integer values, but can be named.
//Useful for switch statements.
public enum Stats
{
    Employed,
    Unemployed,
    MedianIncome,
    LaborForce,
}

/// <summary>
/// Data Class that we can use for holding data.
/// </summary>
public class AreaStatistic
{
    /// <summary>
    /// Data property of type string, holds the state this area is in.
    /// </summary>
    public string State { get; set; }

    /// <summary>
    /// Name of the area within the state.
    /// </summary>
    public string AreaName { get; set; }

    /// <summary>
    /// List of statistics for each year of this particular area.
    /// It is a List of a Tuple, which is a pair of objects, in which the first
    /// object is an integer and the second object is a tuple.
    /// The second Tuple is a pair of objects in which the first is a Stats enum and the second is
    /// a float?  a float? is a nullable floating point number
    /// </summary>
    public List<Tuple<int, Tuple<Stats, float?>>> YearlyStats { get; set; }
}

This code defines the data types we will be using for our analysis.  AreaStatistic defines a class, which, when instantiated, is an object.

Let’s now write the code for getting the statistics for a year.

/// <summary>
/// Gets stats for a particular year.
/// </summary>
/// <param name="r">Data Row you want to get stats for for that year</param>
/// <param name="year">Year you want to have data for</param>
/// <param name="i">index starting point for that year.</param>
/// <returns></returns>
public static List<Tuple<int, Tuple<Stats, float?>>> GetStatsForYear(DataRow r, int year, int i)
{
    //Create the empty object which holds the stats.
    List<Tuple<int, Tuple<Stats, float?>>> stats = new List<Tuple<int, Tuple<Stats, float?>>>();
    //Begin big region for making sure we deal with empty or null data.
    string s = r.ItemArray[i].ToString();
    string s1 = r.ItemArray[i + 1].ToString();
    string s2 = r.ItemArray[i + 2].ToString();
    float? f;
    float? f1;
    float? f2;
    if (string.IsNullOrEmpty(s) || string.IsNullOrWhiteSpace(s))
    {
        f = null;
    }
    else
    {
        f = float.Parse(s);
    }
    if (string.IsNullOrEmpty(s1) || string.IsNullOrWhiteSpace(s1))
    {
        f1 = null;
    }
    else
    {
        f1 = float.Parse(s1);
    }
    if (string.IsNullOrEmpty(s2) || string.IsNullOrWhiteSpace(s2))
    {
        f2 = null;
    }
    else
    {
        f2 = float.Parse(s2);
    }
    //End big area of checking for bad data.

    //get data for LaborForce this year
    stats.Add(
        new Tuple<int, Tuple<Stats, float?>>
        (year, new Tuple<Stats, float?>
        (Stats.LaborForce, f)));
    //get data for # employed this year.
    stats.Add(
        new Tuple<int, Tuple<Stats, float?>>
        (year, new Tuple<Stats, float?>
        (Stats.Employed, f1)));
    //get data for # unemployed this year.
    stats.Add(
        new Tuple<int, Tuple<Stats, float?>>
        (year, new Tuple<Stats, float?>
        (Stats.Unemployed, f2)));
    //return the yearly stats.
    return stats;
}

Wow!  That is a TON of code!  Most of it is error checking; man, I am missing F# pattern matching right now… The rest simply gets sections out of the row and adds them to a stats list.  So what calls this code?

/// <summary>
/// Converts a single row into the AreaStatistic Type.
/// Note, this is based on the format of the file being known.
/// </summary>
/// <param name="r">Data Row</param>
/// <returns>AreaStatistic</returns>
public static AreaStatistic ConvertRowToStat(DataRow r)
{
    //create an empty stat object we can populate
    AreaStatistic stat = new AreaStatistic();
    //Get the state code from the data row.
    stat.State = r.ItemArray[1].ToString();
    //if the state code is not of length 2, it is not a data row.
    if (stat.State.Length != 2)
    {
        //Throw an exception so the code stops here and doesn't continue.
        //An exception will be printed to the console due to the try/catch that
        //wraps this call.
        throw new Exception("Not data row");
    }
    //get the area name.
    stat.AreaName = r.ItemArray[2].ToString();
    //Build an empty list for the yearly stats.
    stat.YearlyStats = new List<Tuple<int, Tuple<Stats, float?>>>();
    //Stat years are 2000-2013.  The columns start at column 9.
    //There are three columns of data and 1 skipped column per year.
    //Therefore we use 9 (for year 2000) and add 4 * the current iteration to that.
    for (int i = 0; i < 14; i++)
    {
        //Add the stats for this year (3 stats per year for years 2000-2013).
        stat.YearlyStats.AddRange(Program.GetStatsForYear(r, 2000 + i, 9 + (i * 4)));
    }
    //column 65 has income data.
    string s = r.ItemArray[65].ToString();
    float? f;
    //sometimes data is blank, we need to deal with that.
    if (string.IsNullOrEmpty(s) || string.IsNullOrWhiteSpace(s))
    {
        //representation of no data.
        f = null;
    }
    else
    {
        //we should be good to parse this.
        f = float.Parse(s);
    }
    //add the data.
    stat.YearlyStats.Add(
        new Tuple<int, Tuple<Stats, float?>>
        (2013, new Tuple<Stats, float?>
        (Stats.MedianIncome, f)));
    //Return the stats for this area.
    return stat;
}

Wow, a bunch more code!  Again, notice that 99% of this code is dealing with poorly formed data, plus comments.  Notice the amount of code re-use we get out of the GetStatsForYear method.  This is great; always try to extract methods like this that you can reuse.  Finally, we get to reading the data in.  This has also been extracted into its own method, so readers can simply download the .xls file, put it wherever they want, and point the code to where it is on their machine.

/// <summary>
/// Method we define that allows our Main function to look cleaner.  This reads in the data and returns
/// a list of statistics
/// </summary>
/// <param name="url">location on our drive where the file is.</param>
/// <returns>List of statistics for each area.</returns>
public static List<AreaStatistic> ReadInData(string url)
{
    //Initialize the stats object as a new list that holds objects of type AreaStatistic
    List<AreaStatistic> stats = new List<AreaStatistic>();
    //Using ensures that the reader is disposed of from memory when we are done with it.
    using (StreamReader reader = new StreamReader(url))
    {
        //This is from the ExcelDataReader library we brought in, that parses the excel data for us.
        IExcelDataReader excelReader = ExcelReaderFactory.CreateBinaryReader(reader.BaseStream);
        //Converts the stream into a dataset type.
        DataSet d = excelReader.AsDataSet();
        //There is only 1 table, but we iterate as if there were multiples anyways.
        foreach (DataTable dt in d.Tables)
        {
            //Iterate each row in the table.
            foreach (DataRow r in dt.Rows)
            {
                try
                {
                    //add to our stats list the statistic after we convert it.
                    stats.Add(Program.ConvertRowToStat(r));
                }
                catch (Exception e)
                {
                    //if something goes wrong, write it to the console.
                    //tell me what happened.
                    Console.WriteLine(e.Message);
                }
            }
        }
    }
    //return the list of stats
    return stats;
}

This doesn’t seem so bad.  You can see the catching of the exception: any exception thrown while converting a row is caught here so that our program won’t just up and crash on us.  The downside is that if an exception is thrown due to bad data, that row will not be added to our stats; instead the console will write out the message of what happened.

Finally, we take a look at our code entry point:

/// <summary>
/// Entry point to the program.
/// </summary>
/// <param name="args">Command line arguments passed in.</param>
static void Main(string[] args)
{
    //We populate the type "stats" with data we read from the .xls file.
    List<AreaStatistic> stats = Program.ReadInData("C:\\Unemployment.xls");
    //just lets us see some output.
    Console.ReadKey();
}

From here you can see our high-level task: we wanted to read the statistics into this “stats” object.  We now have a list of stats that we can do some real analysis with :)
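
As a small taste of the kind of analysis we can now do (this is my own sketch, not part of the tutorial's code; it assumes you add using System.Linq; to the top of the file and place the snippet in Main right after the ReadInData call), here is one way to find the areas with the highest and lowest reported median income:

//Requires: using System.Linq; at the top of the file.
//Project each area to its name and its 2013 median income, skipping areas with no income data.
var byIncome = stats
    .Select(a => new
    {
        Area = a.AreaName + ", " + a.State,
        Income = a.YearlyStats
            .Where(y => y.Item2.Item1 == Stats.MedianIncome && y.Item2.Item2.HasValue)
            .Select(y => y.Item2.Item2.Value)
            .FirstOrDefault()
    })
    .Where(x => x.Income > 0)
    .OrderByDescending(x => x.Income)
    .ToList();

Console.WriteLine("Highest median income: " + byIncome.First().Area);
Console.WriteLine("Lowest median income: " + byIncome.Last().Area);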

The next article will focus on doing interesting things with this data for more insights into it.  Maybe we can see if we can verify the claims about unemployment rates.  Maybe there are interesting insights from this data we can see from a collection standpoint, or growth rates in total labor vs employed vs unemployed vs income.

Please use the comments section for any questions you might have.  I know this might be a bit more advanced than I intended.  The full code is pasted below…

Summary

We have covered some C# basics, as well as our first dive into analyzing government data.

using Excel;
using System;
using System.Collections.Generic;
using System.Data;
using System.IO;

namespace Tutorial
{
    //Enum Type
    //These are actually integer values, but can be named.
    //Useful for switch statements.
    public enum Stats
    {
        Employed,
        Unemployed,
        MedianIncome,
        LaborForce,
    }

    /// <summary>
    /// Data Class that we can use for holding data.
    /// </summary>
    public class AreaStatistic
    {
        /// <summary>
        /// Data property of type string, holds the state this area is in.
        /// </summary>
        public string State { get; set; }

        /// <summary>
        /// Name of the area within the state.
        /// </summary>
        public string AreaName { get; set; }

        /// <summary>
        /// List of statistics for each year of this particular area.
        /// It is a List of a Tuple, which is a pair of objects, in which the first
        /// object is an integer and the second object is a tuple.
        /// The second Tuple is a pair of objects in which the first is a Stats enum and the second is
        /// a float?  a float? is a nullable floating point number
        /// </summary>
        public List<Tuple<int, Tuple<Stats, float?>>> YearlyStats { get; set; }
    }

    /// <summary>
    /// Primary program that lets us do stuff.
    /// </summary>
    public class Program
    {
        /// <summary>
        /// Entry point to the program.
        /// </summary>
        /// <param name="args">Command line arguments passed in.</param>
        static void Main(string[] args)
        {
            //We populate the type "stats" with data we read from the .xls file.
            List<AreaStatistic> stats = Program.ReadInData("C:\\Unemployment.xls");
            //just lets us see some output.
            Console.ReadKey();
        }

        /// <summary>
        /// Method we define that allows our Main function to look cleaner.  This reads in the data and returns
        /// a list of statistics
        /// </summary>
        /// <param name="url">location on our drive where the file is.</param>
        /// <returns>List of statistics for each area.</returns>
        public static List<AreaStatistic> ReadInData(string url)
        {
            //Initialize the stats object as a new list that holds objects of type AreaStatistic
            List<AreaStatistic> stats = new List<AreaStatistic>();
            //Using ensures that the reader is disposed of from memory when we are done with it.
            using (StreamReader reader = new StreamReader(url))
            {
                //This is from the ExcelDataReader library we brought in, that parses the excel data for us.
                IExcelDataReader excelReader = ExcelReaderFactory.CreateBinaryReader(reader.BaseStream);
                //Converts the stream into a dataset type.
                DataSet d = excelReader.AsDataSet();
                //There is only 1 table, but we iterate as if there were multiples anyways.
                foreach (DataTable dt in d.Tables)
                {
                    //Iterate each row in the table.
                    foreach (DataRow r in dt.Rows)
                    {
                        try
                        {
                            //add to our stats list the statistic after we convert it.
                            stats.Add(Program.ConvertRowToStat(r));
                        }
                        catch (Exception e)
                        {
                            //if something goes wrong, write it to the console.
                            //tell me what happened.
                            Console.WriteLine(e.Message);
                        }
                    }
                }
            }
            //return the list of stats
            return stats;
        }

        /// <summary>
        /// Converts a single row into the AreaStatistic Type.
        /// Note, this is based on the format of the file being known.
        /// </summary>
        /// <param name="r">Data Row</param>
        /// <returns>AreaStatistic</returns>
        public static AreaStatistic ConvertRowToStat(DataRow r)
        {
            //create an empty stat object we can populate
            AreaStatistic stat = new AreaStatistic();
            //Get the state code from the data row.
            stat.State = r.ItemArray[1].ToString();
            //if the state code is not of length 2, it is not a data row.
            if (stat.State.Length != 2)
            {
                //Throw an exception so the code stops here and doesn't continue.
                //An exception will be printed to the console due to the try/catch that
                //wraps this call.
                throw new Exception("Not data row");
            }
            //get the area name.
            stat.AreaName = r.ItemArray[2].ToString();
            //Build an empty list for the yearly stats.
            stat.YearlyStats = new List<Tuple<int, Tuple<Stats, float?>>>();
            //Stat years are 2000-2013.  The columns start at column 9.
            //There are three columns of data and 1 skipped column per year.
            //Therefore we use 9 (for year 2000) and add 4 * the current iteration to that.
            for (int i = 0; i < 14; i++)
            {
                //Add the stats for this year (3 stats per year for years 2000-2013).
                stat.YearlyStats.AddRange(Program.GetStatsForYear(r, 2000 + i, 9 + (i * 4)));
            }
            //column 65 has income data.
            string s = r.ItemArray[65].ToString();
            float? f;
            //sometimes data is blank, we need to deal with that.
            if (string.IsNullOrEmpty(s) || string.IsNullOrWhiteSpace(s))
            {
                //representation of no data.
                f = null;
            }
            else
            {
                //we should be good to parse this.
                f = float.Parse(s);
            }
            //add the data.
            stat.YearlyStats.Add(
                new Tuple<int, Tuple<Stats, float?>>
                (2013, new Tuple<Stats, float?>
                (Stats.MedianIncome, f)));
            //Return the stats for this area.
            return stat;
        }

        /// <summary>
        /// Gets stats for a particular year.
        /// </summary>
        /// <param name="r">Data Row you want to get stats for for that year</param>
        /// <param name="year">Year you want to have data for</param>
        /// <param name="i">index starting point for that year.</param>
        /// <returns></returns>
        public static List<Tuple<int, Tuple<Stats, float?>>> GetStatsForYear(DataRow r, int year, int i)
        {
            //Create the empty object which holds the stats.
            List<Tuple<int, Tuple<Stats, float?>>> stats = new List<Tuple<int, Tuple<Stats, float?>>>();
            //Begin big region for making sure we deal with empty or null data.
            string s = r.ItemArray[i].ToString();
            string s1 = r.ItemArray[i + 1].ToString();
            string s2 = r.ItemArray[i + 2].ToString();
            float? f;
            float? f1;
            float? f2;
            if (string.IsNullOrEmpty(s) || string.IsNullOrWhiteSpace(s))
            {
                f = null;
            }
            else
            {
                f = float.Parse(s);
            }
            if (string.IsNullOrEmpty(s1) || string.IsNullOrWhiteSpace(s1))
            {
                f1 = null;
            }
            else
            {
                f1 = float.Parse(s1);
            }
            if (string.IsNullOrEmpty(s2) || string.IsNullOrWhiteSpace(s2))
            {
                f2 = null;
            }
            else
            {
                f2 = float.Parse(s2);
            }
            //End big area of checking for bad data.

            //get data for LaborForce this year
            stats.Add(
                new Tuple<int, Tuple<Stats, float?>>
                (year, new Tuple<Stats, float?>
                (Stats.LaborForce, f)));
            //get data for # employed this year.
            stats.Add(
                new Tuple<int, Tuple<Stats, float?>>
                (year, new Tuple<Stats, float?>
                (Stats.Employed, f1)));
            //get data for # unemployed this year.
            stats.Add(
                new Tuple<int, Tuple<Stats, float?>>
                (year, new Tuple<Stats, float?>
                (Stats.Unemployed, f2)));
            //return the yearly stats.
            return stats;
        }
    }
}

 

Visual Studio 2013 continuously repairs producing many small log files

Mon, 04/20/2015 - 15:17

If you have Microsoft Visual Studio 2013 Professional, Premium, or Ultimate edition installed and are finding many small MSI*.log files in your %TEMP% directory, you may find you are running low on disk space because of how many of these files are created. While each may only be about 3 MB, the process that produces them can run often enough that it may not take long to fill up the drive where %TEMP% resides.

Workaround

We are consistently seeing this issue caused by a missing directory, C:\Windows\Microsoft.NET\Framework\URTInstall_GAC (or %SystemRoot%\Microsoft.NET\Framework\URTInstall_GAC).

To work around this issue,

  1. Open an elevated command prompt.
  2. Type: mkdir %SystemRoot%\Microsoft.NET\Framework\URTInstall_GAC

In PowerShell, use $env:SystemRoot instead of %SystemRoot%.

Description

This problem is similar to a previous one that was caused by a different directory, only in this case it typically happens only after an OS upgrade. The .NET Framework created %SystemRoot%\Microsoft.NET\Framework\URTInstall_GAC as a staging directory for installing assemblies into the GAC (since the .NET Framework may be installing managed fusion for the first time, it can’t use built-in support). When Visual Studio 2013 is installed on Windows 7, the .NET Framework is installed via Windows Installer. Later OS upgrades migrate the .NET Framework to the Windows native installer format (CBS), but that directory isn’t created since those fusion custom actions aren’t used.

Unfortunately, advertised shortcuts in VS 2013 Professional (also part of Premium and Ultimate) trigger a health check because some assemblies are installed into the GAC. Since the directory doesn’t exist, Windows Installer initiates a repair operation, but the repair does not create the directory. This causes the problem to occur again and again until the directory is created manually.

If you have the Windows Installer logging policy set (or the host program initiating the health check enables it via MsiEnableLog), those MSI*.log files won’t help much except to identify the calling program, which we typically see as:

%SystemRoot%\Microsoft.NET\Framework64\v4.0.30319\mscorsvw.exe

If you search the Windows Application event log for the MsiInstaller source, the event description for event ID 1004 should say something like,

Detection of product '{9C593464-7F2F-37B3-89F8-7E894E3B09EA}', feature 'Visual_Studio_Professional_x86_enu', component '{E3FF99AA-78B9-4A06-8A74-869E9F65E1FE}' failed.  The resource 'C:\WINDOWS\Microsoft.NET\Framework\URTInstallPath_GAC\' does not exist.

Creating this directory (whatever directory event ID 1004 specifies) should stop the MSI*.log files from being created in droves.
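
If you want to automate that lookup, here is a small C# sketch of my own (not part of the original post) that scans the Application event log for MsiInstaller entries and pulls out whatever missing resource path they report; creating that directory (elevated) is then the same workaround described above.

using System;
using System.Diagnostics;
using System.Text.RegularExpressions;

class FindMissingMsiResource
{
    static void Main()
    {
        // Scan the Application event log for MsiInstaller entries and pull out the
        // missing resource path from descriptions like:
        //   "... The resource 'C:\WINDOWS\Microsoft.NET\Framework\URTInstallPath_GAC\' does not exist."
        using (EventLog appLog = new EventLog("Application"))
        {
            foreach (EventLogEntry entry in appLog.Entries)
            {
                if (entry.Source != "MsiInstaller")
                    continue;

                Match match = Regex.Match(entry.Message, @"The resource '([^']+)' does not exist");
                if (match.Success)
                {
                    // This is the directory to create (from an elevated prompt) to stop the repair loop.
                    Console.WriteLine("Missing resource reported: " + match.Groups[1].Value);
                }
            }
        }
    }
}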

Using PowerShell to script Dynamics Marketing

Mon, 04/20/2015 - 14:46

Dynamics Marketing is a great tool to capture, score and nurture leads in automated campaigns. You can build campaigns and lead management processes which automatically invite your audience, capture registrations, send welcome emails, register attendees for your marketing events, and trigger post-event communication, e.g. based on a specific record of attendance.

While most marketing processes are automated in Dynamics Marketing, a lot of what goes on at an event often lives in different systems or in the real world – like the system that records attendees at an in-person event when they enter the session room.

In such situations you have a simple goal: log a record of attendance in Dynamics Marketing so that lead scoring and nurturing processes can pick up this information, for example through Lead Interactions and leads that get created automatically. And because Dynamics Marketing offers API and OData endpoints, this sounds very feasible – doesn’t it?

Some background

The SDK for Dynamics Marketing ships with sample code that shows how to write an application that communicates with the API endpoint of a Dynamics Marketing instance through Windows Azure Service Bus queues. The methods available on this endpoint focus on creating and updating Contacts, Companies and Leads, as well as managing lists, sending transactional email marketing messages, creating event registrations and records of attendance, and more.

A good way to learn about the capabilities of the API Endpoints is simply looking into the assembly from the SDK directly and exploring the available methods and messages:

 

If you are looking for a simple step-by-step guide on building an SDK client with Visual Studio, you can download this document. It explains, step by step, how to build a simple application that communicates through the Service Bus queues.

The OData endpoint has a rich documentation of all tables and fields here. And if you want to see how you can write an application that makes use of the OData endpoint as coded client application you can review one of my past blog posts here.

But for various smaller operations and feasibility prototypes, or if you want to design a business flow, a scripting approach might simply be easier for you than developing an application. And if you are not a developer who likes to wrangle authentication code or data querying and parsing in code, that is another reason to look into scripting options.

First step: Install the PowerShell cmdlets from here. There are versions available for both 32- and 64-bit PowerShell.

Scripting the SDK endpoint

After you have installed the PowerShell cmdlets on your machine, you can connect with PowerShell to the SDK endpoint like this:

Add-PSSnapIn Microsoft.Dynamics.Marketing.API
$SecureString = ConvertTo-SecureString "QUEUE ACCESS KEY GOES HERE =" -asplaintext -force
Connect-MDMApi MyServiceBus101 owner $SecureString sdkRequest sdkResponse -verbose

You need to configure the SDK endpoint first: configure Dynamics Marketing to communicate through queues. This involves creating an ACS-enabled Azure Service Bus namespace, also using PowerShell commands. You only need to create the service bus and then leave it to the SDK configuration in the integration settings page to configure the service bus and ACS. After that only one little piece is missing: in the ACS portal we need to add a credential of type symmetric key for the service identity that has been created for us. The key we define here is the secret that will allow your PowerShell cmdlet to authorize sending commands to the service bus and receiving responses. The key is referred to below as the IssuerKey.

With that in place you are ready to use the commands from the snap-in. Type Get-Command -Module:Microsoft.Dynamics.Marketing.API to get a list of all commands contained. Use Get-Help to learn about a command’s parameters, for example Get-Help -Name:Get-MDMLeads.

Connect-MDMApi
Disconnect-MDMApi
Send-MDMApiRequest
Get-MDMApiResponse
Add-MDMCompany
Add-MDMContact
Add-MDMContacts
Add-MDMContactToList
Add-MDMCustomFieldCategories
Add-MDMEventAttendance
Add-MDMEventRegistration
Add-MDMExternalEntity
Add-MDMLead
Add-MDMList
Add-MDMMarketingResult
Copy-MDMListContacts
Get-MDMAllLists
Get-MDMCompanies
Get-MDMCompany
Get-MDMContact
Get-MDMContactPermissions
Get-MDMContacts
Get-MDMCurrencies
Get-MDMCustomFields
Get-MDMEmailMessages
Get-MDMEmailMessageSentStatus
Get-MDMEventAttendance
Get-MDMEventAttendanceStatuses
Get-MDMEventRegistration
Get-MDMExternalEntity
Get-MDMExternalIds
Get-MDMHardbounces
Get-MDMLanguages
Get-MDMLead
Get-MDMLeadPriorities
Get-MDMLeads
Get-MDMLeadStatuses
Get-MDMList
Get-MDMMarketingResult
Get-MDMMarketingResults
Get-MDMMarketingResultTypes
Get-MDMMissingContactPermissions
Get-MDMSalesRatings
Get-MDMSalutations
Get-MDMSchemaForEmailMessage
Get-MDMTerritories
Send-MDMEmailMessages
Set-MDMContactPermissions
Set-MDMContactsPermissions
Set-MDMHardbouncesProcessed
Remove-MDMAllContactsFromList
Remove-MDMCompany
Remove-MDMContact
Remove-MDMContactFromList
Remove-MDMEventAttendance
Remove-MDMEventRegistration
Remove-MDMExternalEntity
Remove-MDMExternalEntityType
Remove-MDMLead
Remove-MDMList
Remove-MDMMarketingResult

The first command we use is Connect-MDMApi, to connect to the API endpoint:

Connect-MDMApi [[-Namespace] <string>] [[-IssuerName] <string>] [[-IssuerKey] <securestring>]
[[-RequestQueueName] <string>] [[-ResponseQueueName] <string>] [[-SessionId] <string>]  [<CommonParameters>]

A fully qualified connection command looks like this:

Connect-MDMApi -Namespace:"MyMDMserviceBus" -IssuerName:"MDMServiceIdentity" -IssuerKey:$SecureString -RequestQueueName:"sdkRequest" -ResponseQueueName:"sdkResponse" -VERBOSE

Please note that the -IssuerKey (the 256-bit symmetric key) must be maintained in a PowerShell secure string object – meaning it will be encrypted and cannot be used or decrypted by anyone other than the user who created the secure string – you.

Let’s try a few Hello World type samples.

1. Reading all Leads that have been updated in the last few days and simply outputting them.

Get-MDMLeads [-BelongsToCompanyId <guid>] [-FromUpdateDate <datetime>] [-MaxNumberOfRecordsToGet <int>]
[-OriginOfChange <string>] [-MaxResponseWaitTime <int>]  [<CommonParameters>]

$date = [System.DateTime]::Now.AddDays(-4).ToUniversalTime()
$leads = Get-MDMLeads -FromUpdateDate:$date
$leads

2. Adding a new marketing contact with parent marketing company under the assumption that we know the Id of the site company
(We assume knowledge of the External Id of the Site company. We will see later how to find the site company.)

Get-MDMCompany [[-CompanyId] <guid>] [-MaxResponseWaitTime <int>]

Add-MDMContact [[-Contact] <Contact>] [-DisableConcurrentRequestValidation <bool>] [-MaxResponseWaitTime <int>]

$SiteCompanyID = "eddfce37-e3d9-4910-b0c4-f6979777dd07"
$belongsto = Get-MDMCompany $SiteCompanyID -verbose
$error.Clear()
$contact = New-Object Microsoft.Dynamics.Marketing.SDK.Model.Contact
$company = New-Object Microsoft.Dynamics.Marketing.SDK.Model.Company
$company.Name = "Coolest Company On Planet Earth"
$company.IsMarketing = $true
$contact.BelongsToCompany = $belongsto
$contact.FirstName = "A First Name1"
$contact.LastName = "A Last Name1"
$contact.Company = $company
$contact.IsMarketing = $true
$newcontact = Add-MDMContact $contact -verbose
if ($error.Count -gt 0) { $error }

3. Let’s add a new lead to a known contact (like the one created above), under a campaign and program. This sample also shows how to set fields like priority, status, sales rating, a custom origin of change and dates, and even defines an initial score. The latter will be overwritten as soon as the lead gets scored – if a scoring model has been defined in the marketing context where the lead lands.
(We assume knowledge of the Campaign Id and Program Id. These are the External Ids.)

$sitecompany = "eddfce37-e3d9-4910-b0c4-f6979777dd07"
$programid = "fba68e22-e822-4097-a40f-f244bb401736"
$campaignid = "fcac690f-af01-43c0-9c9b-fda3b5ca8a9c"
$contactid = "6346d0e5-ec1c-4931-b63b-2f24b8666c23"
$program = New-Object Microsoft.Dynamics.Marketing.SDK.Model.Program
$program.Id = New-Object Guid($programid)
$campaign = New-Object Microsoft.Dynamics.Marketing.SDK.Model.Campaign
$campaign.Id = New-Object Guid($campaignid)
$lead = New-Object Microsoft.Dynamics.Marketing.SDK.Model.Lead
$lead.Name = "SDK test Lead 123"
$belongsto = Get-MDMCompany $sitecompany
$lead.BelongsToCompany = $belongsto
$contact = Get-MDMContact $contactid
$lead.Contact = $contact
$lead.Date = [System.DateTime]::Now.ToUniversalTime()
$lead.DueDate = $lead.Date.AddMonths(1)
$type = ("System.Collections.Generic.List" + '`' + "1") -as "Type"
$type = $type.MakeGenericType("system.string" -as "Type")
$origins = [Activator]::CreateInstance($type)
$origins.Add("Cool SDK")
$lead.OriginOfChanges = $origins
$lead.Score = 999
$priorities = Get-MDMLeadPriorities
If ((-not ($priorities -eq $null)) -and ($priorities.Count -gt 0)) { $lead.Priority = $priorities[0] }
$statuses = Get-MDMLeadStatuses
If ((-not ($statuses -eq $null)) -and ($statuses.Count -gt 0)) { $lead.Status = $statuses[0] }
$salesRatings = Get-MDMSalesRatings
If ((-not ($salesRatings -eq $null)) -and ($salesRatings.Count -gt 0)) { $lead.SalesRating = $salesRatings[0] }
$lead.Program = $program
$lead.Campaign = $campaign
$newLead = Add-MDMLead $lead
$newLead.Id

 

Scripting the OData endpoint

In order to connect with PowerShell to the OData endpoint you can use this command:

Add-PSSnapIn Microsoft.Dynamics.Marketing.OData
Connect-MDMOData `
    -SeviceUrl:"https://YOURINSTANCE.marketing.dynamics.com" `
    -RedirectUrl:"http://localhost" `
    -OAuthTokenResourceName:"YOUR__REGISTERED_APPID" -verbose

Connect-MDMOData [[-SeviceUrl] <string>] [[-RedirectUrl] <string>] [[-AzureClientAppId] <string>] [-UserId <string>]
   [-OAuthUrl <string>] [-OAuthTokenResourceName <string>]  [<CommonParameters>]

The Service URL represents the root Url of your Dynamics Marketing instance, not the OData endpoint - so don’t bother adding the “/analytics”.

The RedirectUrl should typically be "http://localhost", which you have used when defining the app in Azure Active Directory (see here). And the AzureClientAppId is the Id of the app that you have created in Azure Active Directory.

Optionally you may pass the sign-in User Id if you want to default it for the login. The user must still sign in with his password.

 

The cmdlets for the OData endpoint provide Find functions in different flavors.

Find-MDMContacts [-BelongsToCompanyId <string>] [-Id <string>] [-IsActive <bool>] [-Email <string>]
   [-FirstName <string>] [-LastName <string>] [-ChangedSince <datetime>] [-IsMarketing <bool>] [-IsClient <bool>]
   [-IsVendor <bool>] [-IsStaff <bool>] [-Fields <string>] [-Filter <string>] [-OrderBy <string>] [-Expand
   <string>] [-Top <int>] [-Skip <int>]  [<CommonParameters>]

Find-MDMCompanies [-BelongsToCompanyId <string>] [-Id <string>] [-Name <string>] [-IsMarketing <bool>]
[-IsClient <bool>] [-IsVendor <bool>] [-IsSiteCompany <bool>] [-Fields <string>] [-Filter <string>] [-OrderBy
<string>] [-Expand <string>] [-Top <int>] [-Skip <int>]  [<CommonParameters>]

Find-MDMLeads [-BelongsToCompanyId <string>] [-Id <string>] [-Name <string>] [-Status <string>] [-SalesReady
<bool>] [-MinScore <double>] [-Fields <string>] [-Filter <string>] [-OrderBy <string>] [-Expand <string>]
[-Top <int>] [-Skip <int>]  [<CommonParameters>]

Find-MDMAnyData [[-TableName] <string>] [-Fields <string>] [-Filter <string>] [-OrderBy <string>] [-Expand
<string>] [-Top <int>] [-Skip <int>]
  [<CommonParameters>]

Find-MDMAnyData is the core cmdlet for finding records in any OData table that Dynamics Marketing exposes. You specify the table name, the Fields to retrieve values for, an OData Filter string, and sort fields to OrderBy. If you omit the fields you are interested in, the call will load all columns.

The parameter Top allows you to specify how many rows you want to retrieve from the top. Please note that if you do not specify a value, the cmdlet will by default try to load 10 rows. With the parameter Skip you can specify how far from the top to start reading records. Top and Skip together allow paging through your data, and you will often combine them with sorting and filtering.

For example, this command will collect the last 100 Visits recorded:

$visits = Find-MDMAnyData -TableName:Visits -top:100 -OrderBy:"StartTime desc" -verbose
$visits

Expand is a more advanced option to include values from related objects into your result object. In the following sample we load programs and include the Program KPIs which deliver aggregated numbers for Expenses, Purchase Orders, Estimates, Invoices and more for different currencies.

$programs = Find-MDMAnyData -TableName:Programs -Expand:ProgramKPIs -verbose
$programs

Here is a simple way to find your Site company:

$comps = Find-MDMCompanies -Filter:"substringof('Site', Type) eq true" -verbose
$site = $comps[0]
$siteId = $site.Id

An example that uses both OData and the SDK endpoint to create Leads

With the site company id at hand we can, for example, start browsing our marketing contacts and run operations:

$pagesize = 100
$skip = 0
do {
    $contacts = Find-MDMContacts -BelongsToId:$siteId -Top:$pagesize -Skip:$skip
    $skip += $contacts.Count
    "Debug: Skip:" + $skip
    #ProcessContacts($contacts)
} while ($contacts.Count -gt 0)

In the sample below we start our journey with knowledge of the site company, a program and a campaign, and then page through all contacts and create leads for all contacts in a certain query.

$siteId = '[Assign Company.ExternalId]'
$programId = '[Assign Program.ExternalId here]'
$campaignId = '[Assign Campaign.ExternalId here]'
$belongsTo = Get-MDMCompany $siteId -verbose
$program = New-Object Microsoft.Dynamics.Marketing.SDK.Model.Program
$program.Id = $programId
$campaign = New-Object Microsoft.Dynamics.Marketing.SDK.Model.Campaign
$campaign.Id = $campaignId
$pagesize = 5
$skip = 30
$max = 5
$index = 0
do {
    $contacts = Find-MDMContacts -BelongsToId:$siteId -Top:$pagesize -Skip:$skip
    foreach ($contact in $contacts) {
        $lead = New-Object Microsoft.Dynamics.Marketing.SDK.Model.Lead
        $lead.BelongsToCompany = $belongsTo
        $lead.Name = "Initiative Lead No." + $index
        $ExId = $contact.Id
        $sdkContact = Get-MDMContact -ContactId:$ExId
        $lead.Contact = $sdkContact
        $lead.Program = $program
        $lead.Campaign = $campaign
        $lead.Date = [System.DateTime]::Now.ToUniversalTime()
        Add-MDMLead $lead
        $index++
    }
    $skip += $contacts.Count
    "Debug: Skip:" + $skip
    #ProcessContacts($contacts)
} while ($contacts.Count -gt 0)

With this blog post I wanted to give a quick glimpse into what is possible with the cmdlets for Dynamics Marketing OData endpoint and SDK. This topic certainly calls for more blog posts with practical sample applications. Stay tuned.

Enjoy!

Christian Abeln
Senior Program Manager
Microsoft Dynamics Marketing

Follow My Curah!s on Microsoft Dynamics Marketing

Our first Microsoft flagship store in Asia Pacific lands in Australia

Mon, 04/20/2015 - 14:39

Did you hear our exciting news today? Microsoft’s first flagship store in the Asia Pacific region is coming to Sydney by the end of the year!

The new store will be located at Westfield Sydney on Pitt Street Mall and will be the first of its kind outside of North America – and only the second flagship store in the world. It’s due to open its doors before the end of 2015 and you can read more about it in Pip Marlow’s blog here.

I see the flagship store as another great asset for our partners and a natural extension of our existing online store. We know that consumer and commercial devices and experiences are blurring and your customers are making technology decisions with this in mind. This amazing store, where the very best of Microsoft can be experienced, will create fans and advocates for our technology - and that’s great news for our partners.

The new store helps us create truly meaningful connections with our customers – showcasing the complete Microsoft ecosystem and delivering outstanding choice, value and service to everyone who walks through the doors.

This is a significant development for our business locally as well as globally and is the latest in a series of Microsoft investments in Australia – last October we opened our Microsoft Australia Azure Geos and just a few weeks ago we brought Office 365 and Dynamics CRM Online to our local data centres.

As the opening date nears, I look forward to sharing more details with you. I hope you’re all as excited as I am - let the countdown begin!

Microsoft Dynamics GP + Microsoft Dynamics CRM Online success stories

Mon, 04/20/2015 - 13:12

These stories are great case studies that show what can be achieved when Dynamics GP and Dynamics CRM Online are sold together.

Both projects used the Dynamics connector to integrate between Microsoft Dynamics GP and Microsoft Dynamics CRM Online.

If you are a Dynamics GP partner, it's time to talk with us about extending your business to also include Dynamics CRM Online. After all, why leave deals on the table in the form of salespeople at your Dynamics GP customers who want CRM? Of course, in addition to adding new services with CRM Online, there are also 30 percent CSA fees to consider.

Don't leave dollars on the table - come and talk to us. If something is stopping you from also moving into Dynamics CRM, come and tell us what it is and work with us to remove that blocker.

Let's talk!

John O'Donnell

Want to become a Microsoft Dynamics CRM Online partner? - nominatepartner@microsoft.com

http://www.twitter.com/jodonnel


Data Latency Issues for Application Insights - 4/20 - Investigating

Mon, 04/20/2015 - 12:56

Initial Update: Monday, 4/20/2015 19:45 UTC

We are aware of issues with Application Insights Ad-Hoc search results and are actively investigating. Some customers may experience latency of up to 12 hours while performing ad-hoc searches.

Root cause has been isolated to a corrupted data partition due to an infrastructure failure.  To address this issue, we are recovering the corrupted partition which will also resolve the data latency.

• Work Around: none
• Next Update: Before 22:00 UTC 

We are working hard to resolve this issue and apologize for any inconvenience.

-Application Insights Service Delivery Team

 

Azure Service Fabric Announced

Mon, 04/20/2015 - 12:05

I'm very excited to see the Azure Service Fabric announcement!  My blog post from 2011, "Designed for the Cloud - it is all about State!", was about the problem this technology solves.  In the post I pointed out the AppFabric Container, which was where this technology was supposed to become publicly available but never launched.  I had to wait four long years to finally talk about this technology publicly and now I can!

Many of Microsoft's cloud services run on this technology; the first I'm aware of was either the original Azure SQL or Azure Cache (they both use it, but I can't remember which launched first - probably Azure SQL).  There are even a few packaged products, like Lync Server, which use it too.  The first place I saw the technology available to public developers was ActorFx; its release notes and posts like this refer to Windows Fabric, which is the Microsoft-internal API for the technology.

So why did this technology take so long to become publically available in Azure?  I think two things delayed its release:

  1. Distributed systems are complex and hard to debug.  Many teams within Microsoft have had trouble adopting the technology over the years.  Developing the code was doable, but then came running it in production and debugging issues.  The tooling just wasn't adequate for mass adoption; the teams who succeeded were very talented.  For Azure Service Fabric, core developer scenarios were targeted and the required tooling and Visual Studio integration were developed.
  2. The Azure fabric's maintenance algorithm was, by default, not very friendly to technologies that maintained state replicas, and it was very hard for teams outside of Azure to get approval to enable the alternative maintenance settings required.  It was really only in the last year that this was addressed.

The Weather Ingestion Service is the largest service I developed with the technology.  The service downloads weather data from several weather data providers, performs calculations to augment the data, and then publishes it to the Weather REST Service, which is used by Bing, MSN, Cortana, Windows and Windows Phone apps, etc.  We had a very tight 15-minute SLA to complete the processing and publish the data out to many Azure regions and Bing datacenters.  Our initial attempt used Azure Table Storage, but we could not get adequate performance due to query and deserialization costs.  We then switched to using the technology to hold the data in memory with replicas to ensure availability.  This was back before the large-memory Azure roles were available, and it took 60 medium Worker Role instances to hold the data.  We ran it in production for a couple of months, but management disliked the cost, so they eventually relaxed the processing SLA enough that we could use Azure Cache (the original one) instead.

The most impressive use of the technology I've seen is the service which powers Cortana.  As data is pushed to Cortana the service figures out which Cortana enabled devices receive the data.  The system is impressive not only for the sheer number of devices that it pushes data to but also in how they used the technology to enable high availability of their stateful service across Azure regions.  Their service simply keeps working when one of their clusters becomes unhealthy or the Azure region becomes unreachable.  The remaining regions have replicas of the state and simply take on the additional load from the failed region.  I doubt this initial release of the Azure Service Fabric will enable this scenario but it is nice to know that the technology is capable of it and such a feature may become available when the tooling is ready.

Understanding the Visual Studio ALM Rangers

Mon, 04/20/2015 - 10:49

Updated on: 2015-01-02

Who are we?

We started the VSTS Rangers program in March 2006 as a joint venture between the Visual Studio Team System team and the Worldwide Communities program, part of the Office of the CTO in the Microsoft Services organization. A couple of years ago, we renamed our program to Visual Studio ALM Rangers, but the vision remained the same: to accelerate the adoption of Visual Studio with out-of-band solutions for missing features. In addition, our secondary goal is to provide the opportunity for selected Microsoft consultants, support resources and partners to interact with product group experts so we can learn from our field and partners using Visual Studio ALM products and features with customers.

Volunteer subject matter experts provide the bulk of our resources. Typically, they spend their private hours on Rangers project work, and not just anyone is invited to participate: Rangers need to be knowledgeable about Visual Studio and ALM, have the desire to strengthen the community, and contribute regularly.

Relying on volunteer part-time work has led us to strive for more efficiency in our projects. To achieve this goal, we have implemented 100% dogfooding with our own Agile-based (Ruck) process model. For consistency, we use the same process model across the board, even for guidance type projects.

Our first three years were focused on a strictly internal team that crossed all field roles: consulting, support, sales, and evangelism. As we have expanded our external Rangers communities, our goal has been to provide the same level and ease of access to external Rangers. We have reached this goal with our extranet SharePoint site and Team Foundation Service, which has, as a side effect, significantly improved our operational transparency.

But the top lesson learned, again, is to keep learning from real-world customers. We leverage our vast customer connections through our Rangers to collect business and technical requirements and to test our beta releases in pilot customer environments.

We hope that this overview provides enough information to whet your appetite for more details. We appreciate any feedback and improvement suggestions.

Acronyms and Terms

  • ALM … Application Lifecycle Management
  • CTO … Chief Technology Officer
  • Ranger … To elaborate on the term Ranger, we often show a visual representation of a forest ranger who wanders the forests, clearing blocked paths and providing informational services to visitors.
What do we do?

The ALM Rangers are focused primarily on the delivery of out-of-band tooling and practical guidance to remove adoption blockers in real world environments. Typically an ALM Ranger solution is a hybrid of practical guidance and supporting out-of-band tooling and sample code.


Figure 1 – Aspects of ALM Rangers

Frequently asked questions

Contact us?
  • Preferably contact your favourite and regional ALM Ranger from aka.ms/vsarIndex.

  • Alternatively contact us here.

Join us?
  • If you are a Microsoft employee please contact your regional ALM WWC lead and request to be nominated via Micheal Learned.

  • If you are not a Microsoft employee please contact your regional ALM MVP and request to be nominated via Willy-Peter Schaub.

  • Submit the following information as part of your request:

    • Why do you want to join the ALM Rangers?

    • Short and crisp bio, answering the common 5 index questions. See aka.ms/vsarIndex for examples.

      • Who are you?
      • What makes you “tick”?
      • Where do you live?
      • Why are you active in the Rangers program?
      • What is the best Rangers project you have worked on, and why?
    • One or more non-IT (business) photos that include you.
    • Contact details, including MSA (LiveID) account, contact email, postal address, and telephone number.

Renew membership?
  • To remain an active ALM Ranger you need to be actively involved in the ALM Ranger community.

  • Renewals and special awards, such as distinguished or champion Rangers, are based on feedback from your peers.

Recommend a new or upgrade project idea?
  • ALM Community, MVPs

    • Add your idea to Visual Studio UserVoice with as much supporting information as possible.

    • Encourage your communities and peers to vote for your idea, as project idea triages are based on strategic value, community value and number of votes.

    • Information needed:

      • What is the goal?
      • Why is it important?
      • When is a solution needed by?
  • MSFT
    • Contact the ALM Ranger PMs.
    • Information needed:
      • What is the goal?
      • Why is it important?
      • When is a solution needed by?
      • Who are the SMEs to contact for more info?
      • Who is the product owner?
Mission

The Visual Studio ALM Rangers provide professional guidance, practical experience and gap-filling solutions to the ALM community.

aka.ms/vsarmission

Core Values

As a team the ALM Rangers have come to value the following:

Razor sharp focus on quality and detail on the work we do

  • Favor simplicity and low tech over complexity
  • Expect and adapt to change, delivering incremental value

Accountability and commitment

  • Actively manage the project triangle attributes: features, bandwidth and cost
  • Never go dark … always share the good, the bad and the ugly with the team

Non-stop and unrestricted collaboration

  • Empower the ALM community
  • Embrace open communications

Global transparency and visibility through collaboration and shared infrastructure

  • For all initiatives, track and publish status on a timely basis
  • Access to everything for everyone

Empathy, trust, humility, honesty and openness at all times

  • No one knows everything; as a team we know more
  • Learn from and share all experiences

Regular dogfooding of Visual Studio Application Lifecycle Management tools

  • Improve productivity of ALM Ranger teams
  • Gather and share real-life experiences
Champions

2012 - Brian Blackman
2013 - Brian Blackman
2014 - Brian Blackman
2015 - Brian Blackman

2013 - Tony Whitter
2014 - Casey O'Mara
2015 - Brian Blackman
2015 - John Spinella

2011 - Michael Fourie
2012 - Michael Fourie
2013 - Michael Fourie
2014 - Brian Blackman
2015 - Brian Blackman

2011 - Bill Essary
2012 - Gregg Boer
2013 - Joshua Webber
2014 - Jeff Beehler
2015 - Charles Sterling

2012 - Gregg Boer
2012 - Mario Rodriguez
2013 - Larry Guger
2014 - Andrea Scripa
2014 - Ed Blankenship
2015 - Keith Bankston
2015 - Sam Guckenheimer

2013 - Vladimir Gusarov
2014 - Vladimir Gusarov
2015 - Gordon Beeming

2013 - Gordon Beeming
2014 - Gordon Beeming
2015 - Donovan Brown

2015 - Hosam Kamel

2015 – Jeff Levinson

Special Awards

Michael Fourie


Rob Jarratt

 

Honorary ALM Rangers

Honorary ALM Rangers are Rangers who have retired from an active position, but who have contributed invaluable passion and contributions to the ALM Ranger community.

Alison Clark
Andrea Scripa
Ben Amodio
Bijan Javidi
Bill Essary
Buck Hodges
David Caufield
Eric Charran
Eric Golpe
Erwyn van der Meer
Gregg Boer
James Pickell
Jeff Beehler
John Jacob
Justin Marks
Kerry Gates
Larry Duff
Larry Guger
Lenny Fenster
Mike Schimmel
Neno Loje
Paul Meyer
Tim Omta
Tina Erwee
Zayd Kara

ebook deal of the week: Exam Ref 70-342 Advanced Solutions of Microsoft Exchange Server 2013 (MCSE)

Mon, 04/20/2015 - 10:25

List price: $31.99  
Sale price: $15.99
You save 50%

Buy

This deal expires on Sunday, April 26 at 7:01 GMT

Prepare for Microsoft certification Exam 70-342 and demonstrate your skills in implementing advanced solutions of Microsoft Exchange Server 2013. Learn more

Terms & conditions

Each week, on Sunday at 12:01 AM PST / 7:01 AM GMT, a new eBook is offered for a one-week period. Check back each week for a new deal.

The products offered as our eBook Deal of the Week are not eligible for any other discounts. The Deal of the Week promotional price cannot be combined with other offers.

Transforming data in a data warehouse through SQL views

Mon, 04/20/2015 - 10:18

This post is an add-on to another post, titled Designing an ETL process with SSIS: two approaches to extracting and transforming data, where Phill Devey responded with the following question:

With regard to your statement " With staging tables, transformations are implemented as database views"  Are you suggesting that your Dimension and Fact "tables" will be views or do you mean the query that populates the Dimension and Fact tables will be based on a view?

That’s a good question, and let me explain what I’ve found most practical.

Initially, when I create a new dimension or fact table, I simply create it as a view. Doing so allows me to quickly develop, test and debug the transformation in SSMS. If performance is not an issue, I might even deploy it this way into production.
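To make that concrete, here is a minimal sketch of what such a view might look like (the staging tables and column names are purely illustrative, not taken from the original post):

CREATE VIEW [tabular].[Account Fact]
AS
SELECT
    acc.AccountKey        AS [Account key],
    acc.AccountName       AS [Account name],
    SUM(trx.Amount)       AS [Total amount]
FROM [staging].[Account] AS acc
JOIN [staging].[AccountTransaction] AS trx
    ON trx.AccountKey = acc.AccountKey
GROUP BY acc.AccountKey, acc.AccountName;

Because it is just a view, the transformation logic can be tweaked and re-tested in SSMS without touching the ETL package.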

In practice, once the transformation seems to work well, what I usually do is rename the view and create a stored procedure which creates a table using the original view. Because the new table has the same name as the original view, nothing breaks. Here’s what the stored procedure might look like:

 1: CREATE PROCEDURE [dbo].[sp_AccountFact]
 2: AS
 3: BEGIN
 4: SET NOCOUNT ON;
 5:
 6:
 7: SELECT *
 8: INTO #MyTempTable
 9: FROM dbo.[Account Fact View]
10:
11: BEGIN TRANSACTION MY_TRANS
12:
13: IF EXISTS (
14:     SELECT [name]
15:     FROM [sys].[tables]
16:     WHERE name='Account Fact'
17: )
18: BEGIN
19:     DROP TABLE tabular.[Account Fact]
20: END;
21:
22: SELECT *
23: INTO tabular.[Account Fact]
24: FROM #MyTempTable
25:
26: COMMIT TRANSACTION MY_TRANS
27:
28: CREATE UNIQUE NONCLUSTERED INDEX [Account key] ON [tabular].[Account Fact]
29: (
30:     [Account key] ASC
31: ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
32:
33: CREATE COLUMNSTORE INDEX [NonClusteredColumnStoreIndex-20130801-145157]
34: ON [tabular].[Account Fact](
35:     -- Add columns here
36: );
37:
38: END

In this example, the stored procedure [dbo].[sp_AccountFact] does the heavy lifting of recreating the [tabular].[Account Fact] table. You would call this stored procedure during the transform phase of your ETL process.
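For example, if the ETL is an SSIS package, the transform phase might contain little more than an Execute SQL Task that runs:

EXEC [dbo].[sp_AccountFact];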

The view with the transformation logic (which was previously called [tabular].[Account Fact]) is now called [dbo].[Account Fact View]. The content of this view is first copied into a temporary table using the SELECT INTO statement. On line 11 we start a transaction so that the entire update is atomic. In lines 13-20 the existing table is deleted if it exists, and then in line 22 the table is recreated from the temporary table we created before, again using the SELECT INTO statement. Then the transaction is committed. Finally we recreate the indexes. Note that the second index in this example is a column store index which greatly improves the performance of most ad hoc queries.

The transaction ensures that clients performing queries on the data warehouse get valid results while the ETL is running. However, those clients might get blocked until the transaction completes. Therefore it’s important to minimize the duration of the transaction. This is why we first copy the view into a temporary table before the transaction starts. We also recreate the indexes outside the transaction for this reason.

Another design decision is whether to drop the table and recreate it, as I do here, or to just truncate it. If we just truncated the table, the stored procedure would become even simpler, because we wouldn't need to check whether the table already exists before dropping it, and we wouldn't need to recreate the indexes. On the other hand, a benefit of the drop-and-recreate method is that we only need to maintain the table's schema in one place, i.e. in the view. In the truncate scenario we would need to update the schema of the table every time we modified the view. A second benefit is that it simplifies the usage of column store indexes: in SQL Server 2012, you cannot modify the contents of a table that has a column store index. (I believe this constraint no longer exists in SQL Server 2014, but I have not yet verified that myself.)

Note: another good practice is to use schemas. In this data warehouse, the dimensional tables are in a schema called “tabular”. I also typically create a separate schema for each source system and its staging tables. The dbo schema holds “internal” objects which are important for the ETL but are not visible to normal users. Normal users of the data warehouse only need read access to the tabular schema to perform whatever OLAP queries they desire.

Using RequireJS with Visual Studio

Mon, 04/20/2015 - 09:50

As applications become richer and more complex, it becomes important to structure code so that it’s easy to refactor components independently. To understand how one piece of code depends upon another, developers turn to the best practices of modularity and encapsulation to structure code as a collection of small, reusable parts. RequireJS is a popular script loading library that makes it easy to make your JavaScript modular, dividing it into these reusable parts.

Visual Studio provides great JavaScript editor support for RequireJS, with IntelliSense that can discover the modules of your application and provide accurate suggestions. If you use TypeScript, there’s built-in support for modules to compile into JavaScript that works with RequireJS. This post walks you through using RequireJS in Visual Studio with either JavaScript or TypeScript.

To use the RequireJS support in Visual Studio, install Visual Studio 2013 Update 4 or later, or Visual Studio 2015 CTP 6 or later. To follow along, you can also download the source code for the sample app.

Configuring your project

Start by adding RequireJS to your project with the NuGet package manager. You can right click on the project to show the NuGet package manager, search for RequireJS, and install it.

If you’re working with an ASP.NET 5 project, reference the “requirejs” package using the Bower package manager. You can learn more about using Grunt and Bower in Visual Studio 2015 from the ASP.NET website.

Before using RequireJS in your code, you need to add a reference to it in your main HTML file (or .aspx, .cshtml, etc.). For my application, I’ve placed require.js into a Scripts/lib folder.

<script src="/Scripts/lib/require.js"></script>

In an ASP.NET project, you also need to add a Scripts/_references.js file to tell Visual Studio which JS libraries to use for IntelliSense. In that file, add the following reference to require.js; it’s not necessary to add references to any other .js files that will subsequently be loaded using RequireJS.

/// <reference path="/Scripts/lib/require.js" start-page="/index.html" />

If you’re working in a project that uses a .jsproj extension, such as an Apache Cordova or Windows Store app project, you can skip this step. The JavaScript editor automatically finds the references in your .html files.

The start-page attribute is a new attribute used to configure RequireJS support. This setting tells Visual Studio which page should be treated as the start page; this has an effect on how RequireJS will compute relative file paths in your source code.

Using a module

When using RequireJS, your app becomes a series of modules with explicit dependencies defined between each module. At runtime, RequireJS dynamically loads these modules as needed.

When adding a reference to RequireJS in your HTML file, the common use pattern is to specify a .js file that represents the initial module to load and initialize the application. In the index.html file, reference require.js and tell it to start by loading an app.js file:

<script src="/Scripts/lib/require.js" data-main="/Scripts/app"></script>

The data-main attribute tells RequireJS what file to load first (the “.js” extension will be added automatically by RequireJS when the app is run). In app.js write any JavaScript code you want; you can even use the RequireJS require() function to load another module. Visual Studio shows me IntelliSense suggestions for RequireJS APIs as I type:

Using this require() function, you’ll load a services/photoService module that you’ll create next:

require(['services/photoService'], function callback(photoService) {

});

This require() call tells RequireJS to load a services/photoService.js file and attach the results to the photoService parameter in the callback function. This code waits for the photoService.js file to load, as well as any modules it references, before the callback function is called.

In the services/photoService.js file, add the following code to define a RequireJS module that returns a single function that gets a list of images:

define(function (require) {
    function getPhotoList() {
        return [
        '/images/image1.png',
        '/images/image2.png'
        ];
    }

    return {
        getPhotoList: getPhotoList
    }
})

Within the app.js file, we now see IntelliSense suggestions for the newly created photoService:
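The IntelliSense screenshot isn’t reproduced here, but to make the flow concrete, the completed require() call in app.js might look something like the following (the console logging is purely illustrative):

require(['services/photoService'], function callback(photoService) {
    // photoService is the object returned by the module's define() callback,
    // so IntelliSense can suggest getPhotoList() here.
    var photos = photoService.getPhotoList();
    photos.forEach(function (url) {
        console.log('Photo: ' + url);
    });
});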

Configuring RequireJS

Often, you’ll want to modify the default RequireJS configuration to customize file paths for your application, making it easier to read the code. In this example, I use a specific version of the jQuery library, but I want to use shorthand in my code. I can do this using the RequireJS config() function to use the name “jquery” instead of the full path to where I have jquery installed (lib/jquery-2.1.3). I’ll add a Scripts\main.js file to my application with the following source in it:

requirejs.config({
    paths: {
        jquery:'lib/jquery-2.1.3'
    }
})

require('app');

In photoService.js, I can now refer to the jQuery library simply as jquery and use the .ajax() function to retrieve a list of photos from my server.

You can see the finished sample code in the PhotoBrowser sample app.

Using RequireJS with TypeScript

I’ve just shown how you can use RequireJS when building an application using JavaScript. Using TypeScript, I can build the same RequireJS application. Since version 1, the TypeScript compiler has built-in support to make this easy. When using TypeScript’s external module support with the AMD module target, the JavaScript generated by the TypeScript compiler is compatible with RequireJS.

For example, I can simplify the minimal code I need to define my photoService module from:

define(['jquery'], function (jq) {
    return {
        getPhotoList: function () {
            return jq.ajax('/photos.json');
        }
    }
})

To:
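The TypeScript version appeared as a screenshot in the original post; a roughly equivalent external module, assuming the jQuery type definitions are referenced and the compiler targets AMD modules, might look like this:

import jq = require('jquery');

// Returns a jQuery promise for the list of photos.
export function getPhotoList() {
    return jq.ajax('/photos.json');
}

Compiled with the AMD module target, the TypeScript compiler emits a define() wrapper that RequireJS can load just like the hand-written JavaScript version above.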

In the PhotoBrowser sample, you’ll find a working example of a RequireJS application built with TypeScript.

Learn more

To learn more about setting up RequireJS for your JavaScript project, see the Customizing IntelliSense for RequireJS article on MSDN. For a complete “getting started” walkthrough, see the RequireJS project site. To learn about using TypeScript AMD modules with RequireJS, see the external module support documentation on the TypeScript website.

Looking ahead, module loading is going to become increasingly popular for JavaScript applications. Modules are a core part of the specification for the next version of JavaScript, ES2015 (formerly ES6). We’ll continue to improve support for developing modularized codebases in the future.

If you haven’t already, install Visual Studio 2013 Update 4 or later, or Visual Studio 2015 CTP 6, to get started with RequireJS and let us know how well it works for you by sending us a smile or frown. You can also add your votes for future module loader support or other JavaScript editor features on our UserVoice site.

 

Jordan Matthiesen, Program Manager, Visual Studio JavaScript tools team
@JMatthiesen

Jordan has been at Microsoft for 3 years, working on JavaScript tooling for client application developers. Prior to this, he worked for 14 years developing web applications and products using ASP.NET/C#, HTML, CSS, and lots and lots of JavaScript.

Weekly Mobile News Issue 123 - April 13th–April 19th–Community Edition

Mon, 04/20/2015 - 09:48

 

Ex-Microsoft Windows Phone app designer explains why apps have the hamburger menu - http://www.winbeta.org/news/ex-microsoft-windows-phone-app-designer-explains-why-apps-have-hamburger-menu

Microsoft unveils touch friendly office apps for Windows Phone - http://www.reuters.com/article/2015/04/17/microsoft-office-phones-idUSL2N0XE02620150417

Navigate the world with Windows 10 maps for Windows Phone - https://blogs.windows.com/bloggingwindows/2015/04/17/navigate-the-world-with-windows-10-maps-for-phone/

Microsoft debuts dual sim Lumia 540 - http://www.zdnet.com/article/microsoft-debuts-dual-sim-five-inch-lumia-540-priced-at-150/

Project Spartan shows up on Windows 10 preview for phones - http://www.eweek.com/mobile/project-spartan-shows-up-on-window-10-preview-for-phones.html

 

Three companies are helping Microsoft take Android from Google - http://www.fool.com/investing/general/2015/04/17/3-companies-are-helping-microsoft-corporation-stea.aspx More on this - http://www.wired.com/2015/04/microsoft-google-cyanogen/

Android for work shows up on Google Play - http://www.zdnet.com/article/open-for-business-android-for-work-shows-up-on-google-play/

Google releases handwriting app - https://play.google.com/store/apps/details?id=com.google.android.apps.handwriting.ime

Google Play store to designate family friendly apps - http://www.cnet.com/news/google-play-store-to-designate-which-apps-are-family-friendly/ More on this - http://android-developers.blogspot.com/2015/04/helping-developers-connect-with.html

 

A tool that lets designers tweak iPhone apps without code - http://www.wired.com/2015/04/tool-lets-designers-tweak-iphone-apps-without-code/

Apple acquisition may give next iPhone DSLR like capabilities - http://www.tomsguide.com/us/iphone-6s-dslr-camera,news-20778.html

Design 3D iPhone apps without writing any code - http://www.creativebloq.com/3d/design-3d-iphone-apps-without-writing-any-code-41514630

 

Samsung and BlackBerry Collaborating? - http://n4bb.com/blackberry-samsung-liaison/

BlackBerry’s health initiatives may have hit a wall - http://seekingalpha.com/article/3072976-blackberrys-health-initiatives-might-have-hit-a-wall

BlackBerry Oslo leaked - https://www.blackberrycentral.com/news/article/exclusive-first-image-blackberry-oslo/

 

Samsung to launch Tizen in more markets this year - http://www.itechpost.com/articles/14337/20150419/samsung-to-launch-tizen-smartphones-in-more-markets-this-year.htm

Samsung preparing Z2 for launch - http://www.vcpost.com/articles/57995/20150416/samsung-reported-preparing-two-new-smartphones-2015.htm more on this - http://www.slashgear.com/samsung-rumored-to-have-two-tizen-phones-in-the-works-15379105/

 

Sailfish 1.1.4 released - https://together.jolla.com/question/89804/release-notes-114-aijanpaivanjarvi-early-access/

How we design Sailfish OS - https://blog.jolla.com/design-insights-sailfish-os/

 

You can now buy an Ubuntu phone - http://www.pcworld.com/article/2909902/forget-flash-sales-the-first-ubuntu-phone-is-now-available-to-buy-all-the-time.html

 

Thank Phonebloks for Project Ara - http://www.techradar.com/news/phone-and-communications/mobile-phones/thank-phonebloks-for-google-s-project-ara-1290907

 

End of Weekly Mobile News Summary

Microsoft Dynamics CRM, SharePoint & One Note Step-by-Step Integration

Mon, 04/20/2015 - 09:47

Editor’s note: The following post was written by Dynamics CRM MVP Donna Edwards


Microsoft Dynamics CRM 2015 Update 1 offers a significant number of new and exciting features. There are several enhancements related to collaboration. In this article I will share how to get started with OneNote collaboration in Microsoft Dynamics CRM. Since OneNote requires SharePoint for document management, you must first enable SharePoint integration.

For those of you that have been working with CRM for several years, you might recall an article I wrote on SharePoint integration back in 2011. At that time, the list component and several steps were required to integrate SharePoint with Dynamics CRM. The good news is that the list component is no longer required and you can get the entire document management experience of SharePoint combined with OneNote integration setup in less than 30 minutes and all completed through a simple wizard experience.

In this example, I will connect SharePoint Online and OneNote to a CRM Online tenant. It is important to note that SharePoint, OneNote, and CRM Online must be in the same Office 365 tenant for the connections to work. We’ll begin with the SharePoint connection, and for that we need our SharePoint URL, so let’s grab it now. You can copy the URL by logging into Office 365, selecting the options area in the upper left corner of the browser window, and selecting Sites.


Select the Team Site, not Public Site since you want the information you store in SharePoint to be available for internal use only.



Copy the URL for the internal site. It should be something like this: https://<name>.sharepoint.com. If you prefer, you can also select the Admin area of Office 365, select SharePoint from the left navigation menu, and copy the URL from the available list. Ensure you get the internal URL (Team Site) and not My Site, the Public Site, or something different. It is fairly easy to identify the Team Site URL based on the naming convention used for each.



Copy the URL and save it. We’ll use it in just a few minutes. Next, select the “Enable Now” button for Server-Based SharePoint Integration or go to Settings, System, Document Management and select the Enable SharePoint Integration link.


 

 



A new dialog window will open; select Next



Select Online and select Next.


In this step, we will copy the URL we saved and paste it into the URL text box and select Next.


Now that the wizard has all the information needed to complete the configuration, it will validate the URL you provided and complete the connection. Select the Enable button.


At this point you can select the option to Open the Document Management Settings wizard by selecting the checkbox and completing the steps required for folder integration or you can choose to do that later by leaving the checkbox unselected.


In this example I am going to select the checkbox and complete the integration steps. Once I select Finish, a new dialog window opens that allows me to select the entities I want to enable for SharePoint integration. A few default entities are selected. I am going to add Case and Competitor to the list, copy my SharePoint URL into the appropriate box, and select Next.



The wizard will validate my SharePoint URL and allow me to either select to create my folder structure on a specific entity (Account or Contact) or accept the default setup. In this example, I am selecting the default setup by leaving the checkbox unselected and selecting Next.



I receive a notification that the SharePoint library is being set up and that the process might take several minutes.



Select OK to accept the message and we’re taken to a page that displays the status of the library creation. At this point you can select Finish and the library creation will complete.



To check the setup, let’s go to Sales, Accounts and open an Account record.



Select any available account record from a view and select Documents


Since this is the first time we are adding a document to the Account record, you should receive a prompt requesting folder setup. Select Confirm to proceed.


The library is enabled for the Account.


Now we are ready for OneNote. Let’s go back to Settings, Document Management and select OneNote Integration


A dialog window opens that allows you to select the entities you would like to enable for integration.


Before selecting Finish, take a couple of minutes to read and understand how OneNote is accessed. As noted, OneNote is available from all CRM apps and users can access it from the CRM record (form). OneNote does not replace the Notes entity but is added in addition to it. That means that you will not lose notes you’ve already added to the account or the ability to continue to use the ‘standard’ notes entity. Having said that, I wouldn’t be surprised if OneNote one day replaces Notes in the application. After selecting Finish the setup is complete. You can confirm that by opening one of the entity records you enabled. You should see the OneNote option available from the Activity Wall.


Selecting the OneNote option will create a OneNote notebook that you can open and begin adding notes.


As you can see, by default the notebook is named “Untitled”. You can change that by selecting Documents from the account, selecting the OneNote notebook that you want to rename, and selecting Properties.
Select Documents from the Account record where you created the OneNote notebook


Select the document named “Untitled” from the list and select Edit Properties from the command ribbon


Give the document a more descriptive file name and title; save your changes.


You can refresh the Account page to see the updated name.



You now have full document management with SharePoint and OneNote integration set up and ready to roll out to your Dynamics CRM end users. Now it’s time to start thinking about how your organization can leverage these capabilities combined with Microsoft Office 365 Groups and Delve.


Cheers

About the author

Donna has been working with the Dynamics CRM application since version 1.2. She partners with all levels of an organization to develop and deliver flexible, scalable solutions that simultaneously address short-term business requirements and long-term strategic growth objectives. Her skill set includes pre-sales support, solution design/architecture, functional consulting, requirements definition and analysis, business process engineering, process improvement and automation, end user adoption, system administrator and end user training, support, and ISV solutions.

About MVP Monday

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.

After Hours @ Build in San Francisco–April 29th

Mon, 04/20/2015 - 09:36

Hey-ya folks in the Bay Area! Our team is speaking at a free community event on April 29th that I’m pretty excited about. We’ll have plenty of experts from the .NET, ASP.NET and Managed Languages teams there as well as the .NET Foundation and GitHub so please come to learn and mingle!

Check out the invite:

The .NET Foundation has partnered with the Microsoft Developer Evangelism team to bring to you After Hours at Build – a special event on April 29th designed for developers, startups, and technology leaders. Come join us to hear the latest news and innovations from Build 2015 during an exclusive evening at San Francisco's stylish Terra Gallery. 

Get the inside scoop from the .NET product teams, GitHub, and .NET Foundation on the innovations in .NET and the journey into open source. We'll also show you what's new and what we're building, including ASP.NET 5, C# 6, .NET compiler platform (Roslyn), and how we're taking .NET cross-platform in the open on GitHub.

REGISTER HERE: http://aka.ms/afterhours

WEDNESDAY, APRIL 29, 2015 | 6:00PM - 11:00PM

Terra Gallery - Directions
511 Harrison Street, San Francisco, CA 94105 | 415.896.1234
*Must be 21 years of age, with identification, to attend

AGENDA
6:00PM - 6:45PM - Registration
6:45PM - 7:00PM - Welcome & Announcements
   Featuring Matt Thompson (General Manager, US DX, Microsoft)
7:00PM - 8:00PM - Keynote
   Featuring Scott Hanselman (Principal Program Manager, .NET Product Group, Microsoft) & Phil Haack (GitHub) as well as other experts from the .NET, Managed Languages, and ASP.NET teams.
8:15PM - 8:30PM - Closing Remarks & Event Drawing
  Featuring Martin Woodward (Executive Director, .NET Foundation)
8:30PM - 11:00PM - After Hours at Build Reception

Doors open at 6:00PM and the presentation will begin at 6:45PM. The first 100 registrants to arrive will receive limited edition door prizes! Enjoy lively presentations, delicious food and drinks, and lots of time to chat and sip with your peers. Space is limited, so register now: http://aka.ms/afterhours

Hope to see you there.

Enjoy!

Best practices for establishing federation trust between two organizations.

Mon, 04/20/2015 - 08:52

Hi,

Recently, I saw a few questions on best practices for establishing federation trusts. I've listed a few below based on past experience with customers; there may well be more. Feel free to provide feedback via the comments section.

First some terminology:

  • IDP … Identity Provider that authenticates the user
  • FedP … Federation Provider that accepts tokens from an IDP, validates the token, and issues a new token for the application
  • Relying Party (RP) Trust … Object that represents the application on the FedP. Note that a FedP can also be represented in the same manner on the IDP
  • Claims Provider (CP) Trust … Object that represents the IDP on the FedP

 

Before we get to the best practices, here are a couple of operational procedure references:

  1. How to create a Relying Party Trust in ADFS
  2. How to create a Claims Provider Trust in ADFS

Now onto best practices.

  • Use auto-update of federation metadata if possible on both sides. This allows keys to be rolled over in an automated fashion without any manual intervention. This is feasible if there are ADFS instances on both sides.
    • If auto-update is not possible, establish an operational procedure.
  • Agree on the claims that are required for the Identity Provider (the IDP that authenticates the user) to issue to the Federation Provider (the FedP that accepts these claims).
    • Decide upfront on the ID for the user that is immutable (does not change). This could be UPN, email, or employeeID, based on what is considered unique and would not change for the user authenticated by the IDP.
  • If you are the Federation Provider.
    • Only pass through claim information that was agreed upon in #2
    • It is a good practice for you to add a claim that represents the IDP in the claims acceptance rules on the CP trust. This is useful when you want additional logic on the RP trust later that may need to differentiate across different IDP’s.
    • [Optional] Add validation rules in the claims acceptance rules on the Claims Provider Trust for the IDP. For example, you may only want to pass through a certain UPN suffix or email suffix that the IDP is responsible for. This is good security practice (see the example rule after this list).
    • [Optional] If you want to normalize certain claims values that works across all the RP trusts in the FedP, do so in the claims acceptance rules on the CP trust for the IDP.
    • [Optional] If you need to augment additional claims data that works across all the RP trusts in the FedP, do so in the claims acceptance rules on the CP trust for the IDP. Otherwise you can always augment in the RP trust rules if this is RP specific.
    • [Optional] Optimize home realm discovery (HRD) experiences as needed via the sign-in customizations offered in Windows Server 2012 R2
    • [Optional] Enforce additional authorization rules for specific apps.
  • If you are the Identity Provider
    • Only issue claims that you have agreed upon in #2
    • Add additional authentication rules on the RP Trust that represents the FedP, based on the conditions under which you would require MFA. If the user is accessing multiple applications behind the FedP, these rules will apply to all of those applications. Read this blog post from Ramiro to understand the rule logic.
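As an illustration of the validation-rule suggestion above, a claims acceptance rule on the CP trust that only passes through UPNs for a suffix the IDP is responsible for might look like this in the AD FS claim rule language (the contoso.com suffix is just a placeholder):

c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn", Value =~ "@contoso\.com$"]
 => issue(claim = c);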

Hope this provides some of the best practices. If you have additional best practices, please add them in the comments section.

Thanks//Sam

 

 

Introducing Sam

Mon, 04/20/2015 - 08:46

Hi,

I'm Samuel Devasahayam, a lead Program Manager in the Active Directory team at Microsoft. I've been with the Active Directory team since 1998 when I joined after grad school. I drive Active Directory Federation Services as well as some of our recent onboarding efforts for Azure Active Directory/Office 365 through Azure AD Connect.

Of late, I find myself answering numerous questions, both from Microsoft customers and from internal Microsoft employees, on things surrounding ADFS and Office 365/Azure AD authentication. This blog will primarily focus on making these questions (and their answers, of course :)) more accessible and public.

Please use feedback/comments for any additional questions you would like answered around ADFS.

Thanks

/Sam

@MrAdfs

April 2015 release notes

Mon, 04/20/2015 - 08:39

The Microsoft Dynamics Lifecycle Services team is happy to announce the immediate availability of the April release of Lifecycle Services.

 

NEW FEATURES

 

  • Several new features are available for Cloud-hosted environments:

    • Azure Premium Storage is now generally available and supported. For more information, see: http://azure.microsoft.com/blog/2015/04/16/azure-premium-storage-now-generally-available-2

    • When you deploy high availability environments, you can now customize the SQL Server configuration. For the SQL Server virtual machines (VMs), you can select:

      • Disk size

      • Disk count

      • The SQL Server image source:

        • Gallery images (consumption pricing)

        • Custom images (bring your own license)

The above customizations apply to both Premium Storage deployments (for production environments) and Non-Premium Storage deployments (for testing environments).

  • The deployment status of each VM has been updated to clarify what is happening when a deployment is ongoing.

  • The Remote Desktop Services tier (for high availability environments) now supports more than 2 instances.

  • The VHDs used for deployments are now serviced with the latest patches from Microsoft for the prerequisite software that is installed on the VHDs.

  • When selecting the size of the VMs that you want to deploy, the Size column now filters out Azure VM SKUs that are not available for your subscription/region.

 

  • Business process modeler has several updates:

    • A new concept, called Views, has been added that allows several different slices of information and tasks to be performed against a library. Views are displayed along the left side of any BPM library.

    • A review processes view has been added that allows users to mark the processes that have been reviewed.

    • New KPI cards are available to map to this functionality in the project methodology.

    • A new feature for linking lines has been added to the author and edit views. This will allow you to create a new line by copying an existing one. Copied lines will always be in sync. For example, if you rename or modify one line in any way, the other linked lines will immediately show the same changes.

Learning Redis - Part 2: Getting Started with Redis

Mon, 04/20/2015 - 08:30

Want to bring more performance, speed, and scalability to your website? Or scale your sites for real-time services or message passing? Learn how, and get practical real-world tips in this exploration of Redis, part of a series on choosing the right data storage.

Steven Edouard and I (Rami Sayar) show you how to get up and running with Redis, a powerful key-value cache and store. In this tutorial series, you can check out a number of practical and advanced use cases for Redis as cache, queue, and publish/subscribe (pub/sub) tool, look at NoSQL and data structures, see how to create list sets and sorted sets in the cache, and much more. You can watch the course online on Microsoft Virtual Academy.

Level: Beginner to Intermediate.

Objectives

By the end of this module, you will:

  • Know how to install, set up, and run Redis on your local machine.
  • Learn how to use common commands
  • Learn about Redis Data Types
  • Learn about Strings and Lists
  • Learn about Expiration
Getting Started

Redis is an open source key-value store. It is extremely popular in the web development community and it is often referred to as a data structure server.

The first thing you need to do is install Redis for your specific system. On Windows, Redis is supported by the MSOpenTech team that keeps a 64-bit port. You can download it from here.

Redis on Windows has achieved performance nearly identical to the POSIX version. Redis on Windows uses the IO Completion Port model. For the most part, none of the changes in the Windows port will impact the developer experience.

You should unzip Redis into a folder that is in your PATH environment variable if you plan on using Redis through a terminal. Alternatively, you can run Redis as a service using the Windows Services model.

Running Redis

Once you have extracted Redis, you can open up a console, navigate to that folder and simply execute the redis-server.exe with a configuration file.

redis-server redis.windows.conf

This command will start the Redis server on port 6379.

In the configuration file, you can find settings to change the port, bind to an IP or hostname, specify TCP keepalive settings, set the log file, and, more importantly, configure when Redis should snapshot the DB to disk. If you are using Redis only as a cache, you will not need to save to disk, as that is a slow operation with an impact on performance.
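For instance, the relevant directives in redis.windows.conf might look like the following (the values shown are examples only; commenting out the save lines disables snapshotting when Redis is used purely as a cache):

port 6379
bind 127.0.0.1
tcp-keepalive 60
logfile "redis.log"
# Snapshot if at least 1 key changed within 900 seconds, 10 keys within 300 seconds, etc.
save 900 1
save 300 10
save 60 10000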

Installing Redis as a Service

To install Redis as a service, you have to execute the --service-install command along with a Redis config file.

redis-server --service-install redis.windows.conf

Following a successful installation, you can start the service by running the --service-start command.

redis-server --service-start

To stop the service, you can run the --service-stop command.

redis-server --service-stop

You can also give an optional name to your service if you plan on running separate instances of Redis. Just add the --service-name NAME option to the install command and use the service name to reference the specific service you want to start or stop. Make sure you also specify a different port for each Redis server.

redis-server --service-install --service-name cache1 --port 3001
redis-server --service-start --service-name cache1
redis-server --service-stop --service-name cache1

Redis Client & SET/GET

Once you've got redis-server started, you can use redis-cli to connect to your server and run some basic commands. Simply executing redis-cli will connect you using the default port and parameters as set in redis.windows.conf.

If you remember from the first installment, Redis is a key-value store and the basics of the data model are storing key and value pairs. We can retrieve the values only if we know the exact key. The command to store the key-value pair is SET.

SET key "value" or SET person:1:first_name "Rami"

The client will print OK if the command is executed successfully.

To retrieve the value stored for the above key, you use the GET command.

GET key or GET person:1:first_name

The above will print "Rami" if executed successfully.

Note: the quotation marks are not stored with the value.

A Walkthrough of Common Commands

Let us walk through some common Redis commands. First is DEL, which removes a key-value pair from the store. Second is SETNX, which performs a SET only on the condition that the key does not already exist in the store. APPEND appends to your value. INCR and DECR increment and decrement the integer value of a key. These two commands bring up the question of "what data types are supported as values?". We will answer that question shortly.

Let us run a series of commands and see what the output is:

> SET sales_count 10
OK
> INCR sales_count
(integer) 11
> INCR sales_count
(integer) 12
> DEL sales_count
(integer) 1
> INCR sales_count
(integer) 1

Notice the last increment is done on a key that has already been deleted, which results in a new key being created and incremented from a value of zero.
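To illustrate SETNX and APPEND as well, here is a quick session in the same style (the key name is just an example):

> SET greeting "Hello"
OK
> SETNX greeting "Hi"
(integer) 0
> APPEND greeting " World"
(integer) 11
> GET greeting
"Hello World"

SETNX returns 0 because the key already exists, so the value is left untouched; APPEND returns the length of the string after the append.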

Data Types Supported

Redis supports multiple ways to manipulate values, as you can see from the INCR and DECR operators in the previous section. The most basic Redis value is a string. Strings are binary safe, so you can insert any kind of value you want as long as the size does not surpass 512 MB. This is a hard limit. You can treat strings as numbers, hence the INCR and DECR commands, but you can also treat them as bits (GETBIT & SETBIT) or as random access arrays (GETRANGE & SETRANGE).
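For example, treating a string value as a random access array or as individual bits (the keys are illustrative):

> SET mykey "Hello World"
OK
> GETRANGE mykey 0 4
"Hello"
> SETRANGE mykey 6 "Redis"
(integer) 11
> GET mykey
"Hello Redis"
> SETBIT flags 7 1
(integer) 0
> GETBIT flags 7
(integer) 1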

There are also other data types that we will cover mostly in the next installment. Those data types include lists, sets, sorted sets, hashes, bitmaps and hyperloglogs.

On Redis Keys

Redis keys are just strings, and they are also binary-safe. An empty string is also a valid key, but typically you want to use keys that describe the value well enough and are not too long. Typically you want to stick to a self-defined schema. For instance, "object-type:id" is a good idea, but if you don't like the ':' separator, '-' and '.' are frequently used.

You can determine if a key exists by simply using the EXISTS command.
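For example, reusing the key from earlier:

> EXISTS person:1:first_name
(integer) 1
> EXISTS person:99:first_name
(integer) 0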

Lists

Redis also supports lists of strings. You can create a list by using the LPUSH or RPUSH commands, which prepend or append a value to a list. If you add an X to the end of either command (LPUSHX, RPUSHX), it will only perform the operation if the list already exists. The LPOP command returns the first element of a list and removes it; RPOP does the same for the last element. Thus you can see the beginnings of a queue.

> LPUSH countries "Canada" (integer) 1 > LPUSH countries "USA" (integer) 2 > RPUSH countries "Japan" (integer) 3 > RPUSH countries "France" (integer) 4 > RPOP countries "France"

You can also treat lists as random access. You can use LINDEX to get an element from a list by its index, and LLEN to get the length of the list. You can also use LINSERT to insert an element into the list before or after a specific element, or LSET to modify the value at a given index. You can use LRANGE to get a range of elements and LTRIM to trim the list down to a specified range of elements. RPOPLPUSH removes the last element of one list and prepends it to another list.

> LLEN countries
(integer) 3
> LRANGE countries 0 3
1) "USA"
2) "Canada"
3) "Japan"
> LRANGE countries 0 -1
1) "USA"
2) "Canada"
3) "Japan"
> LINDEX countries 2
"Japan"
> LINSERT countries before "Canada" "Armenia"
(integer) 4
> LSET countries 0 "Argentina"
OK
> LRANGE countries 0 -1
1) "Argentina"
2) "Armenia"
3) "Canada"
4) "Japan"

Furthermore, you can turn lists into a queuing system that blocks until elements are available by adding a B prefix to LPOP or RPOP (BLPOP, BRPOP). You can go further into the queueing commands with BRPOPLPUSH, which pops a value from one list, pushes it onto another list, returns the value, and blocks if there is no value to be found. You can use this complex command to take a value from a work queue and put it into a completed queue.
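A minimal sketch of that work-queue pattern, using a five-second blocking timeout (the queue names are just examples):

> RPUSH work_queue "job1"
(integer) 1
> BRPOPLPUSH work_queue completed_queue 5
"job1"
> LRANGE completed_queue 0 -1
1) "job1"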

Querying for Keys

Typically, you should not query for keys in production, as it tends to hurt performance when the store has huge numbers of keys.

There are two ways to query for keys. The first is the KEYS command, which takes a glob-style pattern. The complexity of this operation is O(N), which means you should not be using it in production. SCAN is the preferred way to query for keys; in essence, it's a cursor-based iterator, just like in programming languages such as Python or Java or in other databases like MongoDB or SQL databases. The SCAN command has several options and supporting commands which you can explore here.
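A minimal SCAN session might look like this (the returned cursor and ordering will vary):

> SCAN 0 MATCH person:* COUNT 100
1) "0"
2) 1) "person:1:first_name"

A cursor of "0" in the reply means the iteration is complete; otherwise, you pass the returned cursor to the next SCAN call.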

In most cases you want to avoid querying for keys altogether; instead, you can use sets (which will be explained in the next installment) to keep a unique list of keys.

Adding Expirations to Keys

As mentioned in the first installment, Redis is best used as a cache yet one of the principles of a cache is that key-value pairs expire after a certain amount of time. We will cover how you can set the expiration of a key in this section.

Using the EXPIRE key ttl command with a time-to-live in seconds will set the key's expiration. Redis stores the expiration as an absolute Unix timestamp, so even if the store goes down, the key will still expire once the store comes back up.

If you no longer want to have an expiration, you can use the PERSIST command to remove it.

If you want to find out how much time is left before expiration, you can use the TTL command to find out.

> SET mykey "Hello" OK > EXPIRE mykey 10 (integer) 1 > TTL mykey (integer) 10 Stay Tuned!

Stay tuned for the next installment of this tutorial series. You can stay up to date by following @ramisayar and @sedouard.
