Feed aggregator

ARM Template MSDeploy Race Condition Issue

MSDN Blogs - 2 hours 14 min ago

In Azure Resource Manager (ARM) templates, you describe all the resources needed for your project. One of those resources could be an Azure App Service. For an Azure App Service you can add an MSDeploy resource that describes what to publish to the app service, and you can also define App Settings, Site Configurations, Connection Strings, and more. For more information about ARM templates, see the documentation at https://azure.microsoft.com/en-us/documentation/articles/resource-group-authoring-templates/.

In this post I will touch upon a scenario that can disrupt the flow of a template that combines an App Service, MSDeploy, and other app configuration resources (App Settings, Site Configurations, or Connection Strings). The problem manifests in two ways:

1. An error from MSDeploy, something like “Deployment was interrupted and the process running it was terminated unexpectedly…”

2. The deployment hangs and takes a long time to fail.

The reason behind this problem is a race condition caused by how the resources are ordered. Changes to App Settings or connection strings cause the site to restart asynchronously. If MSDeploy runs after such a step, there is no guarantee that the site will not restart in the middle of the deployment. If you instead make all the other configuration steps depend on MSDeploy, you ensure that MSDeploy runs first, followed by any site-restarting activities.
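To make that ordering explicit, each configuration resource can declare a dependsOn entry pointing at the MSDeploy resource. Here is a minimal sketch of an appsettings child resource inside the site resource; the site name variable, API version, and setting values are illustrative placeholders, not taken from the original template:

{
  "apiVersion": "2015-08-01",
  "name": "appsettings",
  "type": "config",
  "dependsOn": [
    "[concat('Microsoft.Web/sites/', variables('siteName'), '/extensions/MSDeploy')]"
  ],
  "properties": {
    "ExampleSetting": "ExampleValue"
  }
}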


A working example of a WordPress deployment template that implements this approach can be found here.

What do you think of our definition of done (DoD) for Extensions?

MSDN Blogs - 2 hours 25 min ago

The Scrum Guide recommends that when anyone states “Done”, everybody in the team understands what “Done” means. Furthermore, when there are multiple teams working on the same product, the teams must mutually agree on the Definition of Done (DoD).

WORK IN PROGRESS v0.1-2015.05.26

This post will evolve over time; it shares our definition of done (DoD) for the DevLabs extension projects we are building for Visual Studio Team Services. You are welcome to re-use the DoD “as is” or adapt it to suit your needs.

Microsoft DevLabs is an outlet for experiments from Microsoft, experiments that represent some of the latest ideas around developer tools. Solutions in this category are designed for broad usage, and you are encouraged to use and provide feedback on them; however, these extensions are not supported nor are any commitments made as to their longevity.

Definition of Done (DoD) for our extension pipeline

APPROVAL

  • BETA deployment must be pre-approved by the project lead or program manager. [   ]
  • PROD deployment must be pre-approved by the project lead and program manager. [   ]
  • PROD deployment must be post-approved by the program manager. [   ]

PIPELINE

  • Continuous Integration (CI): continuous integration build created in VS Team Services. See Build practices and the “Embracing DevOps when building Visual Studio Team Services Extensions” article for more details. [   ]
  • Continuous Deployment (CD): continuous deployment release created in VS Team Services. See Release practices and the “Embracing DevOps when building Visual Studio Team Services Extensions” article for more details. [   ]

QUALITY

  • Bugs: no known critical or high priority bugs. [   ]
  • Impediments: no known priority 1 impediments. [   ]
  • OSS: OSS request approved for the solution if open sourcing the code. [   ]
  • OSS: OSS request approval for 3rd party OSS artefacts included with the solution. [   ]
  • Performance: solution reviewed with the TEST SMEs and the Test Guidelines are met. [   ]
  • User Experience: solution reviewed with the UX SMEs and the UX Guidelines are met. [   ]
  • PoliCheck: all issues from the code and documentation scan for sensitive terminology fixed. [   ]

SOLUTION

  • Extension manifest [   ]

Recommended manifest settings (an illustrative manifest sketch follows the list):

- Set version to 0.0.0
- Set publisher to an empty string.
- Set ID to extension name, without spaces.
- Mark the extension as private (public: false).
- Remove all galleryFlags.
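As an illustrative sketch only (values are placeholders; consult the vss-extension.json documentation for the authoritative schema), a manifest following these recommendations might begin:

{
  "manifestVersion": 1,
  "id": "FolderManagement",
  "version": "0.0.0",
  "publisher": "",
  "public": false,
  "name": "Folder Management"
}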

  • Metrics [   ]
    - Application Insights (AI) resource created on Azure using the extension name.
    - Unique instrumentation key per extension | service.
    - Code instrumented as outlined under Application Insights practices.
  • Marketplace description [   ]
    - Crisp overview with visuals.
    - Quick steps to get started.
    - Learn more section with:
      - Link to OSS repository, if applicable.
      - Link to support twitter.com/almrangers.
      - DevLabs notice if publishing to the ms-devlabs publisher.
      - Active contributors as confirmed by the project lead (PL).
  • Source files [   ]
    - All source files have a header, aligned with the OSS request. See example below.
    - License matches the OSS request approval.

 

Source header sample

//------------------------------------------------------------------
// <copyright file="{FILENAME}">
// This code is licensed under the MIT License.
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
// ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
// TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
// PARTICULAR PURPOSE AND NONINFRINGEMENT.
// </copyright>
// <summary>Crisp summary of what the code is about.</summary>
//------------------------------------------------------------------

Practices

Collection of common practices, cross-referenced by our Definition of Done (DoD).

Application Insights
  • Add a NuGet reference to Application Insights
  • Add a telemetry client class and update the instrumentation key (a minimal sketch follows this list)
  • Include the imported AI scripts in the VSIX
  • Ensure the VSS SDK is ready before calling the Init method
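By way of illustration, here is a minimal telemetry client sketch for the second bullet, assuming the Microsoft.ApplicationInsights NuGet package; the class name and instrumentation key are placeholders rather than code from the actual extension projects:

using Microsoft.ApplicationInsights;

public static class Telemetry
{
    // Placeholder: use the unique instrumentation key created per extension | service.
    private const string InstrumentationKey = "00000000-0000-0000-0000-000000000000";

    private static readonly TelemetryClient client = new TelemetryClient
    {
        InstrumentationKey = InstrumentationKey
    };

    public static void TrackEvent(string eventName)
    {
        // Sends a custom event to the AI resource created for this extension.
        client.TrackEvent(eventName);
    }
}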
Build
  • Name: vsarVSTS SOLUTION NAME, example vsarVSTS Folder Management
  • Trigger: Continuous Integration (CI)
  • Output build output to a separate folder
  • Verify that build runs successfully
  • Verify that a valid VSIX is produced as output
Release
  • Name of release matches name of build
  • Link previously created Build Definition as an artefact source
  • Tag extension ID with the environment for Dev and Beta.
    • Dev environment: FolderManagementDev
    • Beta environment: FolderManagementBeta
    • Public environment: NO suffix!
  • Create three environments:
    • Dev uses the team publisher, a private extension, and is shared with the DEV sandbox
    • Beta uses the alm-rangers publisher, a private extension, and is shared with the UAT sandbox
    • Public uses the ms-devlabs publisher, a public extension, and is NOT shared
  • Create/re-use three marketplace connections with access token for relevant environment
Test / Performance
  • All automated tests written, executed, passed and added to the main regression pack
  • All functional tests written, executed and passed
  • All non-functional tests written, executed and passed
  • All regression tests copied to the master regression test plan
  • All unit tests written, executed, passed and included in CI build
  • Build Verification Test (BVT) pack updated
User Experience

TO BE DEFINED.

Feedback?

Add a comment below or ping us here.

Remote Desktop for Windows 10 exiting preview

MSDN Blogs - 4 hours 16 min ago

I’d like to start by saying a big thank you to everyone who has installed and used our Microsoft Remote Desktop Preview client for Windows 10 and has provided us great feedback so far.

After a few months working on the core feature set, we’re excited to bring the app out of preview so that everyone on a Windows 10 device, whether that be a desktop, tablet, phone, or Continuum for phone, can benefit from the same great experience.

Since Windows 10 shipped, if you installed the Remote Desktop app from the Store you were using our Windows [Phone] 8.1 app. Our new Windows 10 Universal app was only available if you installed the Microsoft Remote Desktop Preview app. As we exit the initial preview phase, we are moving the Universal app to replace the 8.1 version under the Remote Desktop name for devices running both Windows 10 and Windows 10 Mobile.

The Windows 10 version is rolling out to an increasing number of users over the next couple of weeks so it’s possible you won’t see the updated app yet even if you are reading this blog. If you don’t have it installed already, the app is available from the Store by searching for Remote Desktop.

During the upgrade, you should expect the following:

  • Desktop connections are preserved
  • User names are preserved
  • Passwords need to be re-entered
  • Gateways are preserved
  • Remote resources URLs are preserved from Windows Phone 8.1 but require a new sign-in
  • Remote resources are not preserved from Windows 8.1 and need to be re-added
  • Some general settings are preserved

Some features available in the Windows 8.1 version of the app haven’t yet made their way to the Windows 10 version. We appreciate feedback on which features are most important to you as we plan our future updates.

Here’s a list of features which are not yet available:

  • Multiple simultaneous connections
  • Dynamic resolution and rotation
  • Printer redirection
  • Smartcard redirection
  • Microphone support
  • Localized app (currently English only)

If these features are critical to you, it’s recommended that you use the Remote Desktop Connection app (MSTSC) which ships in Windows.

Exiting preview doesn’t mean we are done; quite the opposite. We have a set of features already in the works, and we will continue monitoring the Store comments and our feature request site to help us focus on the next set of features. You can expect regular updates to the app.

How do I access the main version of the app?

The non-preview version of the app can be found under the Remote Desktop name in the Store. If you were already using our Windows 8.1 or Windows Phone 8.1 versions of the app on Windows 10, you will be automatically upgraded to the Windows 10 version the next time the Store updates your list of installed applications, once your device is selected for upgrade through the rollout process.

If you were using only the Microsoft Remote Desktop Preview or you are new to Remote Desktop, head over to the Store to download the app today and let us know what you think.

Why do I still see the Preview app in the Store?

While the initial preview period for the Windows 10 version of the app is over, you will continue seeing two apps in the Store: Remote Desktop and Microsoft Remote Desktop Preview.

If you simply want to use the app for your day-to-day remoting needs, it is recommended to install the Remote Desktop version. This app has a slower update cadence and minimized risks.

However, if you enjoy using pre-release software which may have more bugs and crashes, getting access to new features before everyone else and providing feedback to make the product better for everyone else, then the Microsoft Remote Desktop Preview is for you.

Both apps can be installed side-by-side.

The Remote Desktop client is also available on your other devices running Windows Phone 8.1, Windows 8.1, iOS, Mac OS X, and Android.

Note: Questions and comments are welcome. For troubleshooting requests, post a new thread in the Remote Desktop clients forum. Thank you!

Achieving regulatory agility in the era of cloud computing

MSDN Blogs - 5 hours 59 min ago

Cloud Security Director of Cloud Health and Security Engineering, Matt Rathbun, shares his thoughts and insights on ways to improve the agility of regulatory frameworks in a cloud-centric world.  Check out Matt’s post on the Azure blog.

Free Power BI Webinar 6/7: Advanced Power BI and Solving the Hard Problems

MSDN Blogs - 6 hours 15 min ago

The Power BI Community webinars are brought to you by experts doing real-world implementations and sharing their hard-earned best practices. In this week’s session, Devin Knight shows us what the world looks like when you leave “Demo Land” and your data isn’t perfectly formed or business scenarios don’t translate perfectly to a set of reports and dashboards.

 
Advanced Power BI: Solving the Hard Problems

By now you have probably seen many Power BI demos and likely love what you see in the product.  However, you may have noticed in most Power BI demos that they tend to show scenarios where everything just works right on the first try.  So what do you do when your data is not perfect or your business problem is more complex?  In this session, you will see what happens when you go beyond the basics and try to solve those difficult problems that you inevitably will run into when you’re back at work.  This session will give you many tips on how to solve real world problems with Power BI.

When:  6/7 10AM PST

Where: Registration link pending

 


About Devin Knight

Devin is a Microsoft SQL Server MVP and the Training Director at Pragmatic Works. He is an author of six SQL Server books and speaks at conferences like PASS Summit, the PASS Business Analytics Conference, SQL Saturdays and Code Camps. He is also a contributing member of the PASS Business Intelligence Virtual Chapter. Making his home in Jacksonville, FL, Devin is the President of the local users’ group (JSSUG). You can track Devin’s community and technology activities on his website: https://devinknightsql.com/

Insights on Container Security with Azure Container Service (ACS)

MSDN Blogs - 8 hours 50 min ago

Microsoft Azure has a number of security partners, and these partners help us help you deploy more secure solutions in Microsoft Azure. We greatly appreciate our partners’ work and encourage you to seek out the wide range of partner security solutions available in the Azure Marketplace. We also like to share a diversity of voices on this blog and are always happy to host guest bloggers.

A very hot topic these days is containerization. Containers allow you to containerize applications in a way similar to how we’ve virtualized operating systems (via OS virtualization technology such as VMware and Microsoft Windows Server Hyper-V). There are a lot of reasons why you might want to consider deploying containerized applications. For a great review of containers, how they work and how you can use them in Microsoft Azure, make sure to check out Mark Russinovich’s presentation on containers.

Of course, we here on the Azure Security and Compliance Team blog are also interested in containers, but we want to make sure when you deploy containerized applications that you do it in a secure fashion. There are some unique security considerations for containerized applications and you’ll want to be aware of them before you deploy in production.

To this end, I’d like to introduce one of our Azure Partners, Twistlock. In this article Twistlock will share with you their insights into container security. I think you’ll learn a thing or two!

=========================

With more than 50 years of combined Microsoft experience at Twistlock, we were naturally excited to see the Azure Container Service (ACS) launch recently. ACS provides a simple way to manage and scale containerized apps using leading open source frameworks like DC/OS and Swarm. Because it’s built on the same open source technologies already available, Twistlock is able to protect workloads on Azure Container Service just as effectively as if you’re running them in your own datacenter, in another cloud provider, or even just on Azure VMs directly.

At Twistlock, we believe that with the right tools, containers can improve your security relative to running the same apps in a more traditional architecture.  This is because of 3 essential characteristics of containers:

  1. Containers are immutable – you don’t service a deployed container when you want to update your app, you destroy it and create a new one
  2. Containers are minimal – they do one thing well and have just the bits they need to do it
  3. Containers are declarative – a container is built from an image, an image is composed of layers, and layers are described in a Dockerfile

For a security company like us, this means we can apply lots of advanced intelligence to these images throughout the development lifecycle. This helps us understand what they’re intended to do at runtime.  Then, throughout the entire time a container is running, we compare what it’s actually doing to this reference model.  When we see a variance, it can be an indicator of compromise (IoC) and we provide a policy framework so you can decide how to handle it (maybe you just want to alert in your test environment, but block in your PCI environment).

For example, if you have an image that’s supposed to run the Apache webserver, we understand what specific processes (like httpd) it should run, what syscalls it should make, and even what other containers it should talk to (like a backend database for example).  Once you’ve deployed that image into containers, Twistlock monitors them and looks for anomalies to this model.  

For example, if your Apache container starts listening on a different port or making a strange syscall or running netcat, it’s probably not a good thing.  We also supplement this reference model with real time threat data so we can also detect malware that may be written to a volume your container has mounted or if your container starts talking to a Tor entry node or command and control system.  

Most critically, all these protections happen automatically based on our knowledge of the image. Rather than a human having to create rules and modify them as images are updated, we can do this discovery and recalibration automatically, every time a new image is built. This allows you to scale out an allow-list model of app security in ways not previously practical.

Let’s take a quick look at Twistlock on Azure Container Service in action.  First, notice the containers running in this ACS deployment:

The webapp and db containers are linked (green) and I have another, separate, container, running a Node.js app next to them (pink).  This is a common deployment model that containers help enable; having many different apps sharing the same kernel is safe and easy with containers.

I deployed the two-tier web app using docker-compose. One of the cool things about Azure Container Service is that because it’s a packaged implementation of existing tools, all those tools continue to work as you’d expect. So, deploying a multi-container app on ACS is as simple as running docker-compose up -d, just as you would in any other environment. In this case, the YAML file looks something like this:

morello@swarm-master-5B00EF84-0:~$ cat demo/docker-compose.yml

web:
  image: training/webapp
  links:
    - db
  ports:
    - 32769:5000

db:
  image: training/postgres
  environment:
    PASSWORD: examplepass

As with any other deployment, Twistlock scans all of the images and understands their vulnerability and compliance state, in addition to the runtime profiling we’ve discussed so far:

Once we know what the images should do, we can compare that to what they’re actually doing. In this example, let’s pretend an attacker finds a flaw in the Node.js app and attempts to compromise it. The first layer of defense in depth we provide is the syscall sensor. Because we understand what system calls the Node app should make, we can detect anomalies outside that allow list. In this case, the attacker exploits a flaw in the app to navigate directories, and Twistlock detects it automatically:

For the purposes of this demo, let’s assume you didn’t configure Twistlock to block this and ignored the alert. Now assume the bad guy uses his access to try to download an exploit kit. He first runs netcat on the machine and then wget. The process sensor knows those aren’t valid executables based on what was in the origin image:

At the same time, the network sensor detects the traffic to a malware distribution point:

And the file system sensor sees when it’s written:

Remember, all of this happened without anyone having to create any rules based on the app or image; all the protections were applied automatically based on what the origin image should do.  Here’s where it gets really cool, though.  Twistlock also understands linkages between containers, so we know when inter-container traffic is by-design and when it’s not.  

For example, if our hacked Node.js app has no reason to talk to the PostgreSQL database that’s running in a different container on the same host, you want to detect and prevent attempts to do so. Here’s the hacked Node app trying to connect to the database:

Here’s Twistlock immediately identifying the connection attempt from an unlinked container:

Again, all this happens without anyone having to create and manage any rules, it’s solely based on our knowledge of the images and how they should talk to each other.

Azure Container Service provides a great platform for running containers and we’re proud to have a solution that helps customers today.  However, there’s even more to come.  We’ve also been doing work with the Operations Management Suite team so our security alerts can be integrated into the OMS data warehouse and presented in the same familiar dashboards as other Operations Management Suite data.  Of course, we’re also excited about Windows Containers and you might guess that a team of ex-softies is going to make sure they’re protected too.

Thanks for reading! If you’re running containers and Twistlock looks interesting to you, please check us out at https://twistlock.com or @TwistlockTeam. In addition to a free evaluation of our Enterprise Edition product, we also offer a completely free Developer Edition that’s great for individuals and small teams.

===================

I hope you enjoyed this blog post and learned something new about container security.

Please let us know if you have questions about the Azure Container Service or container security. Just enter a comment in the Comments section below and we’ll try to find the answers you need.

Containers are cool and secure containers are the best!

Thanks!

Tom
Tom Shinder
Program Manager, Azure Security
@tshinder | Facebook | LinkedIn | Email | Web | Bing me! | GOOG me!

Return a value from Windows 10 UWP MenuFlyout control

MSDN Blogs - 9 hours 34 min ago

Using the Windows 10 UWP MenuFlyout control, but don’t see how to return a value from the selected option? You could sub-class MenuFlyoutItem, but don’t overthink it: just use the .Tag property of MenuFlyoutItem, like this:

 

private void button_Click(object sender, RoutedEventArgs e)
{
    // Open a MenuFlyout
    var myMenuFlyout = new MenuFlyout();

    // Create the menu options
    var option1 = new MenuFlyoutItem() { Text = "Return a value of 1" };
    var option2 = new MenuFlyoutItem() { Text = "Return a value of 2" };
    var option3 = new MenuFlyoutItem() { Text = "Return a value of 3" };

    // Add the handler called when the user selects a menu option
    option1.Click += MenuFlyoutHandler_Click;
    option2.Click += MenuFlyoutHandler_Click;
    option3.Click += MenuFlyoutHandler_Click;

    // Define the value you want each menu option to return
    option1.Tag = 1;
    option2.Tag = 2;
    option3.Tag = 3;

    // Add the options to the menu control
    myMenuFlyout.Items.Add(option1);
    myMenuFlyout.Items.Add(option2);
    myMenuFlyout.Items.Add(option3);

    // Display the menu, with an on-screen position relative to another control -
    // usually the control that caused it to appear.
    myMenuFlyout.ShowAt(button);
}

private void MenuFlyoutHandler_Click(object sender, RoutedEventArgs e)
{
    // Read back the value stored in the selected item's Tag
    var returnValue = ((MenuFlyoutItem)sender).Tag;
}

 

Investigating issues with Hosted Build in Visual Studio Team Services – 5/26 – Investigating

MSDN Blogs - 9 hours 47 min ago

Initial Update: Thursday, 26 May 2016 16:17 UTC

We are actively investigating issues with Hosted Build in South Central US region. Customers may experience builds being stuck in the queued state for longer than usual.

  • Next Update: Before 18:00 UTC

We are actively working to resolve this issue and apologize for any inconvenience.

Sincerely,
Manohar

CLRs, Web Services and JSON – Design Decisions

MSDN Blogs - 9 hours 51 min ago

Recently I was working on a CLR that needed to talk to a web service and return a tabular dataset. During development and deployment there were several considerations concerning security and design. This post documents these to help you understand the challenges and options available.

Security problems should be considered early as they will colour how the CLR is developed.

If the CLR uses a DLL that is not installed on the SQL Server then the DLL will need to be installed. (A list of supported libraries in SQL can be found here https://msdn.microsoft.com/en-us/library/ms403279.aspx)

For an unsafe assembly the database has to be set to Trustworthy. This is a non-default security setting, so some companies are reluctant to change it.
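For reference, the setting is a single statement (the database name here is a placeholder):

-- Non-default security setting: review the implications before enabling it.
ALTER DATABASE MyUserDatabase SET TRUSTWORTHY ON;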

If the database cannot be made Trustworthy then the CLRs may be deployed to MSDB, as MSDB is Trustworthy by default. The extended function or procedure can then be referenced from the user database with the syntax msdb.dbo.<CLRProcName>. However, bear in mind that if you are using Always On, the MSDB database will not move across in a failover event. To achieve high availability you will need to deploy to all replicas that might be failed over to.

Using third party DLLs can sometimes be worked around by refactoring the code.

This example CLR needs to call the free movie database web service http://www.omdbapi.com/. When called, this web service returns JSON in the following format:

http://www.omdbapi.com/?t=Simply+Irresistible&y=&plot=short&r=json

{"Title":"Simply Irresistible","Year":"1999","Rated":"PG-13","Released":"05 Feb 1999","Runtime":"96 min","Genre":"Comedy, Drama, Fantasy","Director":"Mark Tarlov","Writer":"Judith Roberts","Actors":"Sarah Michelle Gellar, Sean Patrick Flanery, Patricia Clarkson, Dylan Baker","Plot":"A magical crab works wonders for a terrible chef's culinary skills, leading her towards the man of her dreams.","Language":"French, English","Country":"Germany, USA","Awards":"N/A","Poster":"http://ia.media-imdb.com/images/M/MV5BMTYyNTg3Mzg2M15BMl5BanBnXkFtZTcwNzczNjUyMQ@@._V1_SX300.jpg","Metascore":"27","imdbRating":"5.3","imdbVotes":"11,290","imdbID":"tt0145893","Type":"movie","Response":"True"}

Usually when dealing with JSON it is common to use Json.NET from Newtonsoft. Json.NET provides quick and easy methods to convert JSON into an object that can then easily be converted to a tabular format for SQL. However, it could not be used in this case because the security policy ruled out deploying a third-party DLL on the SQL Server.

SQL Server 2016 introduces built-in functions for handling JSON. The CLR would then only need to return the JSON string, and OPENJSON() could be used to convert the data into rows.
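For illustration, a minimal OPENJSON call with the default schema (SQL Server 2016 syntax; the JSON is abbreviated from the sample response above) shreds the object into name/value rows:

DECLARE @json nvarchar(max) =
    N'{"Title":"Simply Irresistible","Year":"1999","Rated":"PG-13"}';

-- With the default schema, OPENJSON returns one row per property: [key], [value], [type]
SELECT [key] AS Name, [value] AS Value
FROM OPENJSON(@json);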

However, in this case we were using a previous version of SQL.

Without SQL Server 2016, and without deploying Json.NET to the server, the only option was to convert the JSON either within the CLR or within SQL Server with T-SQL. Returning a table-valued function from the CLR was the cleaner solution, and as the JSON format returned from the web service was simple, this was achieved with basic string manipulation. (A regular expression could have been used, but looking at the JSON we could break it into two columns using a few simple string replacements and splits.)

The resulting solution is as follows:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [SqlFunction(FillRowMethodName = "OMDSearch", TableDefinition = "Name nvarchar(400), Value nvarchar(4000)")]
    public static IEnumerable tvfnOMDSearch(String title, String year)
    {
        System.Net.WebRequest req = System.Net.WebRequest.Create(
            String.Concat(@"http://www.omdbapi.com/?t=", title, @"&y=", year, @"&plot=short&r=json"));
        System.Net.WebResponse resp = req.GetResponse();
        System.IO.StreamReader sr = new System.IO.StreamReader(resp.GetResponseStream());
        string jres = sr.ReadToEnd().Trim();

        /* Remove the JSON brackets and mark the column separator with a not (¬) sign */
        jres = jres.Replace("{\"", "").Replace("\"}", "").Replace("\":\"", "¬");

        /* Split out the lines into an array */
        string[] movieinfo = jres.Split(new string[] { "\",\"" }, StringSplitOptions.RemoveEmptyEntries);
        return movieinfo;
    }

    public static void OMDSearch(Object obj, out SqlString Name, out SqlString Value)
    {
        string[] resarr = obj.ToString().Split('¬');
        Name = resarr[0].ToString();
        Value = resarr[1].ToString();
    }
}

This may be called as follows:

CREATE TABLE #tbMovies (MovieYear varchar(150), MovieName varchar(150))
GO

INSERT INTO #tbMovies VALUES
    ('2014', 'God''s Pocket'),
    ('1972', 'Malcolm X'),
    ('2012', 'Alter Egos'),
    ('2008', 'Step Up 2: The Streets'),
    ('2015', 'LEGO Friends'),
    ('2012', 'Nate & Margaret'),
    ('2014', 'Castle'),
    ('2012', 'Jack and Jill'),
    ('2016', 'Carol'),
    ('2012', 'Dragons'),
    ('2011', 'Beastly'),
    ('2015', 'April 9th'),
    ('2015', 'A Girl Walks Home Alone At Night'),
    ('2012', 'People Like Us')
GO

SELECT *
FROM #tbMovies AS T
CROSS APPLY MSDB.dbo.tvfnOMDSearch(T.MovieName, T.MovieYear)

To give the following result:

Docker Containers as the new Binaries of deployment

MSDN Blogs - 11 hours 59 sec ago

In prep for .NET Conf, I was asked by Vaso to explain some of the benefits of containers. I had been talking with one of our engineering leaders in Azure, John Gossman, about how we view containers more broadly. Our Azure Container Service is Microsoft’s container orchestration solution, offering Containers as a Service (CaaS).

We’ll be adding container support to Service Fabric, which we think of as a microservice PaaS.

When people think of containers, are they a specific app pattern, or the new deployment model for all app solutions?

To answer the benefits of containers compared to VMs, here’s an overly simple answer:

  • Containers spin up in seconds, compared to several minutes for a VM
  • Containers provide much more density, allowing you to run many more containers on a single VM than the number of VMs you could run on a host OS. This is achieved through a shared kernel model
  • Containers are designed to be instanced multiple times from a single image, in the same seconds metric
  • Docker hosts have a caching model for images, allowing them to spin up quickly
  • Containers are deployed using a Docker registry, which handles a layering system, allowing only the deltas to be deployed across the network

With these primitives, a host of new scenarios are available, such as:

  • Instancing containers on demand for tasks, rather than leaving them running all the time.
  • Auto scaling and self healing, in seconds.
  • Blue/green deployments that don’t require you to keep the old instances running.

Today, we think of deploying code as binaries. We compile the code, we deploy those binaries to environments we prep to accept those specific binaries, and update the environment for each app/service version change we make.

If we look forward, we see containers as the new binary. You build/compile your app as a container (Docker) image. You then deploy your app/image to generic environments. Today, these are container orchestration systems, like ACS with Mesos or Swarm, Kubernetes, etc. Looking further forward, when doing PaaS solutions like WebSites, App Services, or any cloud-deployed solution, why would you deploy individual binaries? Wouldn’t it be nice if containers were the new binaries of deployment?

Issues while accessing userhub in Visual Studio Team Services – 5/26 – Investigating

MSDN Blogs - 11 hours 39 min ago

Initial Update: Thursday, 26 May 2016 14:23 UTC

We are actively investigating issues with the userhub in Visual Studio Team Services. Some customers may experience a 500 Internal Server error while accessing the userhub.

  • Work Around: Users are requested to sign out of the existing session, clear the cache, and try accessing Visual Studio Team Services in an “InPrivate”/incognito window.

We are working to resolve this issue and apologize for any inconvenience.

Sincerely,
Zainudeen


Versioning NuGet packages in a continuous delivery world: part 3

MSDN Blogs - 11 hours 49 min ago

This is the third and final post in a series covering strategies for versioning a NuGet package. If you missed part 1 or part 2, you should read those first. Today’s post walks through a specific workflow that Git users could adopt, using a really powerful tool called GitVersion. GitVersion comes with some expectations about the layout of your branches, so it may not be for everyone.

Let’s walk through using Package Management, Team Build, and GitVersion to manage version numbers. Because it’s more complicated than previous walkthroughs, I’ve chosen to be more verbose and detailed in my explanation.

I’ve decided that I’ll follow the GitFlow branching model for this project. There’s a great tool called GitVersion which lets me translate GitFlow branches and tags directly into semantic versions. For brevity, I’m omitting a lot of experimentation and dead-ends I went down before finally settling on this flow. I encourage you to play around with GitFlow and GitVersion yourself.

Start by creating a new class library called MyComponent in Visual Studio. Let Visual Studio create a new Git repo for you.

Right-click the project and choose Manage NuGet Packages. From NuGet.org, add GitVersionTask.

Next, edit AssemblyInfo.cs to comment out the explicit assembly versioning. GitVersionTask will take care of versioning the assembly.
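In other words, the standard version attributes get commented out so that GitVersionTask can generate them at build time (the values shown are the Visual Studio defaults):

// GitVersionTask will inject the assembly version attributes during the build.
// [assembly: AssemblyVersion("1.0.0.0")]
// [assembly: AssemblyFileVersion("1.0.0.0")]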

The last thing to do on the client side before building is to set up GitVersion’s config file. While the defaults are great for most people, I personally prefer the continuous deployment style of versioning. In the solution folder, add a file GitVersionConfig.yaml with the following contents:

mode: ContinuousDeployment
branches: {}

Commit the changes and build the solution. Look at the properties of the built DLL and notice all the GitVersion magic represented in the Product version field.

Next, push your code to a Git repo on Visual Studio Team Services. Everything we do from here on out will support creating and versioning a NuGet package using the tools VSTS provides plus the GitVersion extension. Go ahead and install GitVersion from the Marketplace into your VSTS account. This will introduce a new build step that we’ll need later. Also, if you haven’t installed Package Management in your account, do that as well.

In the Code hub, click the “build: setup now” badge.

Leave the defaults to create a new Visual Studio build. Then, make the following changes to your newly-created build:

  • Add four new steps: GitVersion Task (available in the Build category), PowerShell (Utility category), NuGet Packager, and NuGet Publisher (latter two available in the Package category).
  • Move GitVersion Task up so that it’s right after NuGet Installer. Check the box which says “Update AssemblyInfo files”.
  • Move PowerShell up so that it’s right after GitVersion. Change its type to “Inline Script”. In the Inline Script box, add the following code:

    $UtcDateTime = (Get-Date).ToUniversalTime()
    $FormattedDateTime = (Get-Date -Date $UtcDateTime -Format "yyyyMMdd-HHmmss")
    $CI_Version = "$env:GITVERSION_MAJORMINORPATCH-ci-$FormattedDateTime"
    Write-Host ("##vso[task.setvariable variable=CI_Version;]$CI_Version")

    We’ll cover what this script does a bit later.
  • Move NuGet Packager up so that it’s right after Visual Studio Test. For Path to CSProj or NuSpec, make sure it’s targeting ***.csproj (that’s the current default, but if you’re re-using an existing build definition, you might have something different there). In the Versioning section, change Automatic Package Versioning to “Use an environment variable”. For Environment Variable, put “CI_Version”.
  • Finally, leave the NuGet Publisher step at the end. Change its Feed Type to “Internal NuGet Feed” and set the URL to the URL of a feed in your account.

Save this build definition. The steps should appear in this order:

Let’s queue a build to see what you’ve set up, then we’ll walk through it step by step. The first thing you’ll likely notice is that your build numbering has changed: GitVersion has automatically called it something like “Build 0.1.0-ci.2”. That means you’re building a pre-release build (-ci.2) of version 0.1.0. So far so good.

Next, swing over to the Package hub and find the feed where you published your package. You should see the very first version of your shiny new NuGet package, properly semantically versioned as a prerelease of 0.1.0. If you were to queue another build right now, you’d get a new prerelease package of the same version of the code.

So what happened here? The magic is in three build tasks: GitVersion (which selected our semantic version), the PowerShell script (which added a timestamp to make the package version unique), and the NuGet Packager task (which reads the computed version number from an environment variable). Even if you don’t speak PowerShell, you can probably understand the first 3 lines of code. We get the UTC date and time, format it into a string, then append it to the environment variable that GitVersion had already set ($env:GITVERSION_MAJORMINORPATCH). The last line is the Team Build way to add/alter an environment variable for later build steps to read.

Now that we’ve exercised the whole workflow, let’s see what happens when we bump the version. Say we’ve done some development on our component and are preparing for a 1.0 release. Under GitFlow, we’re likely to do some final cleanup in a release branch, so let’s go back to Git and do that. In your solution, create a new branch called “release-1.0”.

Make and check in a small code change on the branch. Publish the branch to VSTS (git push --set-upstream origin release-1.0 if you’re command-line oriented) and queue a build of your release-1.0 branch.

Once the build finishes, check your feed again – you’ll find a CI package of a 1.0.0 build of your component! GitVersion automatically determined that this component should be versioned 1.0.0 because of the branch name.

Releasing a package

Great, now you can produce prerelease packages of your components. So once you’ve decided that a particular package is the one you want to release, how do you do it? Well, as Xavier explains, you need to “strip off the pre-release tag”.

In the simple scenario that I walked through here, you have two easy options:

  1. Like part 1, don’t worry about stripping the prerelease tag. Whatever “release” means to you (perhaps uploading your package to NuGet.org), simply promote the CI package straight to release. Your consumers will know it’s a “released” version because of where they got it rather than metadata on the package itself.
  2. Repack (but don’t rebuild!) your package with the new version number. A NuGet package is a zip file with a particular set of files in particular places. If you pull out the .nuspec from the root of the zip file, rewrite its <version> property, and re-insert it into the package, you’ve changed the version of that package without rebuilding its contents (a rough sketch follows the note below).

Note that option 2 only works if none of the packages output by your build depend on each other. If you produce only a single package, you’re probably safe; if you produce multiple packages, you’re probably not safe. As discussed in part 2, we’re considering how to take a build as input, find all the packages produced by that build, find all their inter-dependencies, and then rewrite versions/dependency specifiers with updated version numbers.
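For illustration, here is a rough PowerShell sketch of option 2. The package file name is hypothetical and the XML rewrite is deliberately simplistic; treat it as a starting point, not a hardened tool:

Add-Type -AssemblyName System.IO.Compression.FileSystem

# Hypothetical CI package produced by the build above
$package    = "MyComponent.1.0.0-ci-20160526-120000.nupkg"
$newVersion = "1.0.0"

# A .nupkg is a zip file; open it for update and find the .nuspec at its root
$zip   = [System.IO.Compression.ZipFile]::Open($package, "Update")
$entry = $zip.Entries | Where-Object { $_.FullName -like "*.nuspec" } | Select-Object -First 1

# Read the .nuspec out of the archive
$reader = New-Object System.IO.StreamReader($entry.Open())
$nuspec = $reader.ReadToEnd()
$reader.Dispose()

# Rewrite the <version> element with the release version
$nuspec = $nuspec -replace '<version>[^<]*</version>', "<version>$newVersion</version>"

# Write the modified .nuspec back into the archive
$stream = $entry.Open()
$stream.SetLength(0)
$writer = New-Object System.IO.StreamWriter($stream)
$writer.Write($nuspec)
$writer.Dispose()
$zip.Dispose()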

A recap

In part 1, we covered semantic versioning and how to automatically create prerelease packages in a continuous delivery model. Part 2 discussed some additional tools we want to develop to make these flows easier. Part 3, this post, covered version numbers managed with GitFlow and GitVersion. Feedback is always welcomed, here in the comments or via Send-a-Smile. We read every Send-a-Smile we get.

Data Science Game 2016 – An international student challenge for Data Science

MSDN Blogs - 13 hours 37 min ago

This year, prove your worth to the ever-growing international big data community: join us for the 2016 edition of the Data Science Game, an international data science student challenge where you solve data-driven problems and develop your skills.
BUILD A TEAM OF 4 STUDENTS – HANDLE DATA PROVIDED BY OUR PARTNERS

Try to answer very challenging questions and demonstrate your skills among data science students from all around the world!

Join your data science community peers at a two-day competition in an exceptional setting near Paris. Come boost your skills, win great prizes and have fun! www.datasciencegame.com

Register before May 31 for the pre-selection challenge. An online qualifier will take place from 17 June to 10 July.

The final stage will be held near Paris on 10 and 11 September.

Twenty teams of four students will defend their universities in an international challenge and have the opportunity to meet professional data scientists.

Registration
Only students, regardless of the field of study, can register for the competition.
Participants must register in teams of exactly 4 students from the same university. Each team should designate a captain who will be the reference contact with the Data Science Game staff.
Each team is limited to a maximum of two PhD students. A European team may include at most two PhD students. An American, Asian or Australian team may include at most two students from the third or higher year of graduate school.
To complete the online registration, each student must provide a resume and an official document stating their level of study using the following form.
Each participating student allows the Data Science Game to communicate their resume and email address to our partners for recruitment purposes.
Each participating student allows the Data Science Game to use their image for communication purposes during the challenge.

Qualification
The competition will begin with an online qualifier round. The 20 best teams according to the private leaderboard will be invited to the final phase. These teams must meet the following conditions: a university can be represented by at most one team (only the best team from the online qualification round will qualify), and a country can be represented by at most five teams (only the five best from the online qualification phase will qualify).
To ensure that no team is cheating, each qualified team must provide the code reproducing their best submission so that we can check that no forbidden method has been used. Such methods will be described when the qualifier round begins.

Resources

See www.datasciencegame.com

Twitter http://www.twitter.com/datasciencegame #DSG16

Facebook: https://www.facebook.com/datasciencegame

Test your DevOps Skills at the DevOps Factory

MSDN Blogs - 13 hours 50 min ago

In today’s software-driven economy, customers demand more, and businesses need to respond quickly to those demands. To gain a competitive advantage, companies must create, test and release new applications and features faster, and respond to issues instantly.

Puppet Labs reported in the 2015 State of DevOps Report that high-performing organizations practicing DevOps ship code 30 times more frequently and experience 1/60th the failures of their lower-performing peers.

How can you compete better through such agility? Welcome to the DevOps Factory: https://www.thedevopsfactory.com

Explore the 7 departments, also known as Practices, at your own pace.

Learn to master all of them or only one. Immerse yourself and learn why you should take advantage of DevOps practices.

You will learn skills in the following areas:

  1. Automated Testing
  2. Continuous Integration (CI)
  3. Infrastructure as Code (IaC)
  4. Application Performance Mgmt (APM)
  5. Continuous Deployment (CD)
  6. Release Management
  7. Configuration Management


Experiencing Data Latency for Many Data Types – 05/26 – Investigating

MSDN Blogs - 13 hours 51 min ago
Initial Update: Thursday, 26 May 2016 12:17 UTC

We are aware of issues within Application Insights and are actively investigating. Some customers may experience data latency. The following data types are affected: Custom Event, Dependency, Exception, Metric, Page Load, Page View, Performance Counter, Request, Trace.

  • Work Around: none
  • Next Update: Before 05/26 17:30 UTC

We are working hard to resolve this issue and apologize for any inconvenience.
-Girish Kalamati

Your app has just 5 fruitful days of life!

MSDN Blogs - 15 hours 37 min ago

Your app has just 5 fruitful days of life!

The above statement is part of my usual experiments in figuring out what triggers the human brain to take an action, like clicking on this article. Jokes aside, the statement is completely true. Your app may have just 5 days of life, as nearly 50% of applications are uninstalled within 5 days of installation.

Top 3 reasons for a short life:

  • Engagement Funnel
  • Spam (Too many useless notifications)
  • End of use-cases

As a co-founder of an app-based company, I always ponder the essential elements of success in the startup world. While reading my old journal entries, I found an interesting note about building a successful startup: the keys are a killer product, a budding team and loads of determination. These factors alone don’t guarantee sure-shot success — or, more importantly, user growth, adoption, loyalty or, in the context of the Indian ecosystem, ‘funding’. But to increase your chances of building a startup that lasts, I realized that the only thing which matters is ‘customers’.

You can’t build an amazing product until you go into the market and experiment. The ability to gather crucial customer insights can give a massive advantage to a startup in this competitive world.

Thinking along the same lines, I decided to revamp the whole communication and feedback engagement channel in my application so that I could engage better with all sorts of users. I wanted to instrument my application with a technology that would permit me to “split” my consumers, so that I could run tests on small portions of the population without jeopardizing proven functionality. Apart from this, I also wanted to derive actionable insights from my users so that I could see what’s working and what’s not. During this exploration to find the right set of tools for monitoring, segmenting, reaching and gamifying user engagement, I came across Azure Mobile Engagement, also known as AzME.

Azure Mobile Engagement is a SaaS-based, data-driven service targeted specifically at digital marketers/CMOs, but it can be used by any mobile app owner or publisher who wants to increase the usage, retention and monetization of their mobile apps. Nearly 75% of users uninstall an application within the first 30 days.

Azure Mobile Engagement aka AzME takes care of this by opening a highly personalized route to engage with your customers and convert them into happy users and eventually brand ambassadors of your application.

Azure Mobile Engagement provides data-driven insights into app usage, real-time user segmentation, and enables contextually-aware push notifications and in-app messaging.

Breaking these down, we have the following key characteristics which also highlights its unique value proposition:

Contextually-aware push notifications and in-app messaging: AzME can send targeted and personalized push notifications. For this to happen, it collects rich behavioral analytics data. Imagine receiving a notification about a special offer on a specific product which you have viewed multiple times on your favorite eCommerce application. You will be prompted to at least open the application and see what’s there for you. These types of contextually-aware push notifications help you increase user engagement along with effective monetization.

Data-driven insights into app usage: AzME provides cross-platform SDKs to collect behavioral analytics about app users. Note the term behavioral analytics (as opposed to performance analytics): AzME focuses on how app users are using the app. It also collects basic performance analytics data about errors and crashes. This data can be used to send event-based notifications and in-app messages, as everything happens in real time. Imagine receiving a message about a crash when using a new application, and later receiving a notification about the bug’s resolution. These things matter because, at the end of the day, your users determine whether your application lasts on their phone or gets the harsh tap on “Uninstall”.

Real-time user segmentation: Once you have collected app users’ behavioral analytics data, we allow you to segment your audience based on various parameters and the collected data, enabling you to run targeted push campaigns. User segmentation can be done quickly, and you can also expose this data to other applications like CRM, CSP, etc. via Open APIs.

Software-as-a-service (SaaS): AzME, or Azure Mobile Engagement, also provides an optimized platform to view rich behavioral analytics about app users and to run marketing push campaigns. The product is geared to get you going in no time! Truth be told, it took me less than 20 minutes to instrument it in my own application.

To recap, the purpose of Mobile Engagement is not just to collect analytics – it is not “yet another analytics product from Microsoft”. It is about sending targeted push notifications; for this targeting, we collect behavioral analytics data, but the focus remains on sending push notifications which make the most sense to the app users, so that they do not come across as spam.

For more details – take a look at this quick video about Mobile Engagement in a nutshell.

Thanks to Mark D’souza, mentor & colleague for an amazing factoid #MuchThanks

I hope you unleash the power of AzME in retaining and monetizing your users more effectively. Until next time, that’s all from my end. Feel free to comment and share your thoughts on mobile engagement and other similar services. I am always up for idea jamming or tech chit-chat @Twitter.

Events: AxForm_ItemCommand, AxForm_ItemUpdating and AxForm_ItemUpdated do not fire in the Safari browser on SharePoint 2013

MSDN Blogs - 15 hours 54 min ago

The issue occurs when the Enterprise Portal is installed on a claims web application, so you will actually face the same issue on SharePoint 2010 when using claims. The issue only impacts the Safari browser; in IE or Firefox you will not see it.

There are two solutions to this issue

Cannot add Work Items from Lifecycle Services after completing the setup of Visual Studio Team Services integration in an AX 2012 project.

MSDN Blogs - 17 hours 35 min ago

When working with AX 2012 projects in LCS (Lifecycle Services), the option for adding Work Items from LCS is missing after completing the setup of Visual Studio Team Services (VSTS) integration. When creating a Dynamics AX (AX7) project and doing the exact same setup, then Work Items can be created from LCS and stored in VSTS.

In an AX 2012 project, if you choose a storage location of VSTS, the work items need to be created within VSTS, but once created they can be opened from LCS.

If you choose the storage location of LCS, then you can create new work items in LCS, but then there is no VSTS integration.

When you configure VSTS integration in your LCS project, you need to link it to a specific VSTS project. When you create work items in the linked VSTS project, those work items will be visible in your LCS project/Work items. You can open the work items from LCS by clicking on the ID link (highlighted in yellow in the image below).

Note:

Only work items of type Bug or Task created within the linked VSTS project will be visible in your LCS project/Work items.

A Head Teacher’s Response – Find Professional Courage: The White Paper and a Profession in Crisis

MSDN Blogs - 17 hours 39 min ago

The following post originally appeared in the Issue 6 of #TheFeed, and was written by Tom Rees, Head Teacher of Simon De Senlis Primary School.

#TheFeed – Issue 6 – May 2016 – Tim Bush, via Docs.com

Find Professional Courage: The White Paper and a Profession in Crisis?

It’s a funny old time in education at the moment; budget cuts and not enough teachers on the one hand yet higher expectations and increased accountability on the other. Every day another headline appears – either announcing the next government policy change or describing the latest union protest as they dig their heels in vehement opposition.

Forced academies, a recruitment crisis, workload pressures and chaos around assessment; one could be forgiven for deciding it’s all too much and throwing in the towel. Many are considering joining those who write the seemingly mandatory blog about ‘why I’m leaving the profession’, which inevitably bemoans the Government, lists the failings of successive Education Secretaries, and ends with a statement about why teaching isn’t the job they came into.

Well I for one refuse to become the next noble protagonist in a Shakespearian-style tragedy by letting anyone spoil my fun. I became a Headteacher to play my part in helping young people to develop the skills, confidence and moral purpose to go out and be brilliant in a future which is vastly different to the past. It’s this challenge to reform our industrial-age school system to meet the demands of the modern world that interests me enough to want to stick with it, at least for a little longer.

Has the profession changed a lot in the 16 years I’ve been in it? Yes and no. Of course there have been significant changes to curriculum and accountability and different governments have kicked the profession around like the political football it’s become, but then what other profession or industry hasn’t undergone transformation in recent years? In essence, teaching remains the same: a teacher in a room with a group of young people who need the right balance of engagement, discipline, personal development and academic instruction.

“We could perhaps be more pragmatic and invest our attention into what positive work we can do to make the most of the opportunities that exist at this time of great change.”

Teaching can be a tough job with long hours and (at times) can be stressful, but then so are many other professions without the privilege of working with children and sharing many special moments along the way.

Can we all please stop talking about Recruitment problems?

No one (well, almost no one) is now pretending that there isn’t a problem with teacher recruitment and retention at the moment, but there’s a lot of unhelpful rhetoric from all sides of the debate about how bad things are for everyone. There’s just not enough action or leadership to help us navigate through these challenging waters, and what we need most at the moment is an injection of pragmatism into proceedings.

Recent research by TES Global from a poll of 4,000 teachers backs up the unhelpfulness of the debate with the following key findings:

  • Talk of teacher shortages is self-perpetuating – over a third of teachers said talk about a “recruitment crisis” made them feel more likely to leave the profession.
  • But teachers want to play an active part in the debate about recruitment. 67% said they would feel more optimistic if they were treated as partners in the debate, rather than objects of discussion.

A clear message, then, to all those outside of schools – politicians, unions, journalists and edubloggers (apparently they’re a category now): resist the catchy headline or soundbite and just leave the profession alone for a bit.

And as it’s counterproductive, I’m not going to talk about it anymore and will instead turn my attention to what that White Paper offers us.

Seize the Opportunity that is the White Paper…

The White Paper has caused a predictable stir across the country with many now choosing to spend precious time in opposition to what it contains, particularly the section forcing all schools to become academies in the next five years. Blogs, letters, petitions and weekend conference events are all now part of a call to arms to oppose the reforms in which many teachers and school leaders are engaged.

Much as I hold many of these colleagues in great esteem and respect their perspectives and arguments, I think the horse has long since bolted, so we could perhaps be more pragmatic and invest our attention into what positive work we can do to make the most of opportunities that exist at this time of great change.

It reminds me of a poster that hung in my late Grandmother’s house:

God grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.’

Perhaps there are others with greater wisdom than I, but in my view the move for all schools to become academies was inevitable and, although unpopular, is the right call.

The alternative is to exist in a world of perpetual uncertainty. Everyone has felt for some time that this day would come and at least now there is an opportunity for schools to make concrete strategic decisions about their futures. Whether we like it or not, this Government was elected less than 12 months ago with a clear majority and has been adamant that there’s no ‘reverse gear’ on this reform.

“This is now a ‘self-improving’ system and so we have to look in the mirror for the leadership and solutions.

It was Gandhi who told us that we must ‘be the change we want to see in the world’”

Let’s not also forget that if there is a Local Authority which operates so effectively that all its schools want to continue the status quo, there’s nothing to stop there being a MAT created which is built around the practices and ethos of the organisations and individuals currently involved.

Maybe it was the strong coffee, maybe it was because I had Kula Shaker’s new album on the headphones while I read (most) of the White Paper but I found inspiration and ambition within it. Working in Northamptonshire, one of the dark blue ‘weak’ (and underfunded) Local Authorities on page 7, I accepted within it the challenge to make my county a better place to teach and learn, even though our separation into co-existing (and potentially competing) MATs adds further complexity to this.

It’s pointless waiting around for the Government to give us the answers; this is now a ‘self-improving’ system and so we have to look in the mirror for the leadership and solutions.

It was Gandhi who told us that we must ‘be the change we want to see in the world’ and there’s a shining personification of his words in the form of Dame Alison Peacock, Headteacher of The Wroxham School and a member of the DfE’s Commission for Assessment without Levels.

Whilst the rest of us were blogging and tweeting about changes to statutory assessments, moderation and interim frameworks, Dame Alison stepped forward and reminded us of the real opportunity there is to improve teaching through becoming better at assessment to inform and improve teaching. We were freed from levels to use Assessment for Learning as it was always intended – to enable better and ‘responsive teaching’ and Dame Alison has called on us all to display the ‘professional courage’ to achieve this. Her creation of the #BeyondLevels and #LearningFirst movements offer the profession the opportunity to avoid schools just collectively funding a range of new tracking systems to translate old money to new.

The #BeyondLevels movement exploded in the blink of an eye and tickets for a Saturday conference sold out within hours on twitter to colleagues across the country who are desperate to work together.   A concrete example of what a self-improving system might look like? Watch this space over the summer.

So let’s take a lead from Dame Alison:

“Seize the moment; find our professional courage and remember that, despite being part of an increasingly fragmented system, we are actually all in it together.”

5 suggestions for positive thinking to avoid a summer of discontent…

Follow those school leaders and teachers who are engaging in the positive #LearningFirst movement by attending conferences or following the key messages to help make the most of the opportunity that a world without levels offers us.

  1. Read up and share the reports from the DfE Workload challenge groups (published at the end of March) at a staff meeting in school to look at how you might make more effective use of time and avoid spending lots of time doing things that aren’t expected.
  2. Read this post from Secret Teacher in the Guardian which offers some refreshing perspective on teaching, entitled: ‘I refuse to let negativity in teaching get me down.’
  3. Follow Sean Harford on Twitter who engages readily with the education community in his role as National Director of OfSTED. It’s refreshing to read Sean’s thoughts and comments on inspection and school development in general; it’s also worth reading the OFSTED ‘mythbusting’ document which offers clarification to schools on lots of messages that are often misinterpreted.
  4. If the busy summer term gets all too much, logon to this live stream from the International Space Centre and grab some perspective on how small and insignificant our worries and problems are when looked down upon from a great distance.

A Head Teacher’s Response – Find Professional Courage: The White Paper and a Profession in Crisis

MSDN Blogs - 17 hours 40 min ago

The following post originally appeared in Issue 6 of #TheFeed, and was written by Tom Rees, Head Teacher of Simon De Senlis Primary School.

#TheFeed – Issue 6 – May 2016 (Tim Bush, Docs.com)

Find Professional Courage; The White Paper and a Profession in Crisis?

It’s a funny old time in education at the moment: budget cuts and not enough teachers on the one hand, yet higher expectations and increased accountability on the other. Every day another headline appears, either announcing the next government policy change or describing the latest union protest as they dig in their heels in vehement opposition.

Forced academies, a recruitment crisis, workload pressures and chaos around assessment; one could be forgiven for deciding it’s all too much and throwing in the towel. Many are considering joining those who write the seemingly mandatory ‘why I’m leaving the profession’ blog, which inevitably bemoans the Government, lists the failings of successive Education Secretaries and ends with a statement that teaching is no longer the job they came into.

Well I for one refuse to become the next noble protagonist in a Shakespearian-style tragedy by letting anyone spoil my fun. I became a Headteacher to play my part in helping young people to develop the skills, confidence and moral purpose to go out and be brilliant in a future which is vastly different to the past. It’s this challenge to reform our industrial-age school system to meet the demands of the modern world that interests me enough to want to stick with it, at least for a little longer.

Has the profession changed a lot in the 16 years I’ve been in it? Yes and no. Of course there have been significant changes to curriculum and accountability and different governments have kicked the profession around like the political football it’s become, but then what other profession or industry hasn’t undergone transformation in recent years? In essence, teaching remains the same: a teacher in a room with a group of young people who need the right balance of engagement, discipline, personal development and academic instruction.

“We could perhaps be more pragmatic and invest our attention into what positive work we can do to make the most of the opportunities that exist at this time of great change.”

Teaching can be a tough job, with long hours and (at times) stress, but then so are many other professions, and without the privilege of working with children and sharing many special moments along the way.

Can we all please stop talking about Recruitment problems?

No one (well, almost no one) is now pretending that there isn’t a problem with teacher recruitment and retention at the moment, but there’s a lot of unhelpful rhetoric from all sides of the debate about how bad things are for everyone. There’s just not enough action or leadership to help us navigate through these challenging waters, and what we need most at the moment is an injection of pragmatism into proceedings.

Recent research by TES Global from a poll of 4,000 teachers backs up the unhelpfulness of the debate with the following key findings:

  • Talk of teacher shortages is self-perpetuating – over a third of teachers said talk about a “recruitment crisis” made them feel more likely to leave the profession.
  • But teachers want to play an active part in the debate about recruitment. 67% said they would feel more optimistic if they were treated as partners in the debate, rather than objects of discussion.

A clear message, then, to all those outside schools – politicians, unions, journalists and edubloggers (apparently they’re a category now): resist the catchy headline or soundbite and just leave the profession alone for a bit.

And as it’s counterproductive, I’m not going to talk about it anymore and will instead turn my attention to what that White Paper offers us.

Seize the Opportunity that is the White Paper…

The White Paper has caused a predictable stir across the country, with many now choosing to spend precious time opposing what it contains, particularly the section forcing all schools to become academies in the next five years. Blogs, letters, petitions and weekend conference events are all now part of a call to arms against the reforms, one in which many teachers and school leaders are engaged.

Much as I hold many of these colleagues in great esteem and respect their perspectives and arguments, I think the horse has long since bolted, so we could perhaps be more pragmatic and invest our attention into what positive work we can do to make the most of opportunities that exist at this time of great change.

It reminds me of a poster that hung in my late Grandmother’s house:

‘God grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.’

Perhaps there are others with greater wisdom than I, but in my view the move for all schools to become academies was inevitable and, although unpopular, is the right call.

The alternative is to exist in a world of perpetual uncertainty. Everyone has felt for some time that this day would come and at least now there is an opportunity for schools to make concrete strategic decisions about their futures. Whether we like it or not, this Government was elected less than 12 months ago with a clear majority and has been adamant that there’s no ‘reverse gear’ on this reform.

“This is now a ‘self-improving’ system and so we have to look in the mirror for the leadership and solutions.

It was Gandhi who told us that we must ‘be the change we want to see in the world’”

Let’s not also forget that if there is a Local Authority which operates so effectively that all its schools want to continue the status quo, there’s nothing to stop a MAT being created around the practices and ethos of the organisations and individuals currently involved.

Maybe it was the strong coffee, maybe it was because I had Kula Shaker’s new album on the headphones while I read (most of) the White Paper, but I found inspiration and ambition within it. Working in Northamptonshire, one of the dark blue ‘weak’ (and underfunded) Local Authorities on page 7, I accepted the challenge within it to make my county a better place to teach and learn, even though our separation into co-existing (and potentially competing) MATs adds further complexity to this.

It’s pointless waiting around for the Government to give us the answers; this is now a ‘self-improving’ system and so we have to look in the mirror for the leadership and solutions.

It was Gandhi who told us that we must ‘be the change we want to see in the world’ and there’s a shining personification of his words in the form of Dame Alison Peacock, Headteacher of The Wroxham School and a member of the DfE’s Commission for Assessment without Levels.

Whilst the rest of us were blogging and tweeting about changes to statutory assessments, moderation and interim frameworks, Dame Alison stepped forward and reminded us of the real opportunity to improve teaching by becoming better at using assessment to inform it. We were freed from levels to use Assessment for Learning as it was always intended – to enable better, ‘responsive teaching’ – and Dame Alison has called on us all to display the ‘professional courage’ to achieve this. Her creation of the #BeyondLevels and #LearningFirst movements offers the profession the opportunity to avoid schools collectively funding a range of new tracking systems that simply translate old money into new.

The #BeyondLevels movement exploded in the blink of an eye, and tickets for a Saturday conference sold out within hours on Twitter to colleagues across the country who are desperate to work together. A concrete example of what a self-improving system might look like? Watch this space over the summer.

So let’s take a lead from Dame Alison:

“Seize the moment; find our professional courage and remember that, despite being part of an increasingly fragmented system, we are actually all in it together.”

5 suggestions for positive thinking to avoid a summer of discontent…

  1. Follow those school leaders and teachers who are engaging in the positive #LearningFirst movement by attending conferences or following the key messages, to help make the most of the opportunity that a world without levels offers us.
  2. Read up on and share the reports from the DfE Workload Challenge groups (published at the end of March) at a staff meeting in school, to look at how you might make more effective use of time and avoid spending lots of it doing things that aren’t expected.
  3. Read this post from Secret Teacher in the Guardian, entitled ‘I refuse to let negativity in teaching get me down’, which offers some refreshing perspective on teaching.
  4. Follow Sean Harford on Twitter, who engages readily with the education community in his role as National Director at Ofsted. It’s refreshing to read Sean’s thoughts and comments on inspection and school development in general; it’s also worth reading the Ofsted ‘mythbusting’ document, which offers clarification to schools on lots of messages that are often misinterpreted.
  5. If the busy summer term gets all too much, log on to this live stream from the International Space Station and grab some perspective on how small and insignificant our worries and problems are when looked down upon from a great distance.
