
Feed aggregator

How Can Your Startup Benefit From the $20,000 Business Tax Break?

MSDN Blogs - 1 hour 26 min ago

Guest post by Rob Chaloner, Founder and Managing Director of Stratton. Rob is passionate about developing smarter ways to buy and finance cars. With Stratton, he's working to help Australian buyers disrupt the traditional car buying, financing, and insurance markets through smarter products and online services.

New and early stage startups are the types of businesses that are likely to benefit from the improved, simplified depreciation rules for small businesses that now include asset purchases up to $20,000. Announced as a defining feature of May’s Federal budget, this substantial increase to the existing business depreciation rules aims to boost small businesses in Australia. While a $20,000 tax break may sound enticing, what does this actually mean for your business? Here’s what you need to know:


The Basics

In previous years, accelerated depreciation was available to small businesses with an asset limit of $1,000. The Federal Government has now lifted the limit to $20,000 in an effort to improve business confidence and growth.

Accelerated, or simplified, depreciation allows qualifying businesses, including startups, to potentially claim the total purchase price of an asset in the year it is purchased, as opposed to normal depreciation rules, which spread the claim out over a number of years.

Let’s look at an example: Startup XYZ qualifies for this scheme and purchases a commercial vehicle, solely for business use, for $15,000. In their tax return, Startup XYZ can claim $15,000 as an immediate deduction on their total taxable income. In principle, this will reduce the amount of tax that Startup XYZ is then required to pay.

Any assets within the program criteria that were purchased after 7:30 p.m. on the night of the budget announcement, 12 May 2015, and up until 30 June 2017, are eligible.


How relevant is this change for startups?

While the increase in accelerated depreciation will assist certain startups, those that are not yet turning a profit or generating taxable income are unlikely to benefit from these simplified depreciation rules. Furthermore, it is not possible to carry an accelerated depreciation amount forward into future years.


Does my startup qualify?

Many startups, especially those in the new and early stages, will be eligible to benefit from this improved accelerated depreciation, as it is restricted to businesses with an annual aggregated turnover of up to $2 million.

Whether operating as a company, trust, partnership or sole trader, your startup must also be registered with the Australian Securities and Investments Commission (ASIC). This involves applying for an Australian Business Number (ABN), registering your business name and paying the applicable fees.


What assets are included in this scheme?

Most assets, both new and second-hand, purchased by a startup for under $20,000 and for exclusive business use are likely to fall within the guidelines and should be eligible for accelerated depreciation. This includes assets vital to many startups, such as office furnishings, computers, and off-the-shelf software.

The main exclusions are horticultural plants, capital works, and assets already allocated to a low-value or software development pool. The cost of developing software that will only be used by your startup, where it totals less than $20,000, is eligible for accelerated depreciation, unless you have previously claimed deductions under the software development pool rules.


What about assets that cost more than $20,000?

Assets that are purchased for more than $20,000 do not qualify for accelerated depreciation. Furthermore, you cannot claim the portion under $20,000, with the total purchase amount instead being subject to the normal depreciation rules. The key difference for assets above the $20,000 threshold is, instead of being claimed in one financial year, the depreciation deductions are claimed over a number of years. 

It’s also important to note that if your business is not registered for GST, the $20,000 threshold for this scheme will be calculated as a GST-inclusive amount.


How many times can this $20,000 depreciation be claimed?

There are no restrictions on how many assets under $20,000 can be claimed, or how many times accelerated depreciation can be claimed—even within the same financial year. For example, if your business purchases two commercial vehicles each valued at $15,000 within the same financial year, then as long as your startup and the assets qualify, the total value of $30,000 can potentially be deducted from your taxable income.


Can I take out a loan to purchase assets?

While your startup may not have the cash available to purchase assets, you can take out a business loan and still qualify for accelerated depreciation. On the other hand, while leasing of assets—especially commercial vehicles—is common practice for many businesses, it is important to be aware that any leased assets are not eligible for this scheme.


Is there anything else I should know?

As always, talk with your accountant or financial advisor to ensure that purchasing assets and claiming this accelerated depreciation is in the best interests of your startup.

Visit stratton.com.au for more information on business loans.


RyuJIT Bug Advisory in the .NET Framework 4.6

MSDN Blogs - 1 hour 48 min ago

A code generation (AKA "codegen") issue in RyuJIT in the .NET Framework 4.6 has been discovered that affects a calling pattern called Tail Call Optimization. The RyuJIT team has fixed the issue and has started the process of producing a .NET Framework 4.6 patch that will be freely available for anyone to download and install.

There is a workaround for this issue in the .NET Framework 4.6, and it is supported for use in production to safely avoid the issue. The workaround is to enable a RyuJIT config switch that disables tail call optimizations. See the recommendation below for a detailed explanation of how to proceed.

Description of the Issue

This issue can affect apps running on the .NET Framework 4.6 in 64-bit processes. For example, you may have an app that was built for the .NET Framework 4.0. If you upgrade your machine to the .NET Framework 4.6, it could potentially be affected. Apps running in 32-bit processes are not affected by this issue. Note that the default process type for client apps (e.g. WPF, Windows Forms) is 32-bit.

This issue is narrow in nature. Your code has to use specific data types, pass them in specific ways and execute specific operations. Very few programs will satisfy all of these characteristics, which are required to trigger this codegen bug. We have reviewed this issue to determine if it is exploitable. We have not identified an exploit, but we are pushing the change through our process at the same pace as we would an exploit.

The following annotated C# repro provides a detailed explanation of the bug.

The following F# repro provides the F# version of the issue.

Customer Bug Report

Nick Craver and Marc Gravell, a team of two at Stack Exchange (which runs Stack Overflow), reached out to us on Thursday of last week about this issue. They were scouting the .NET Framework 4.6 to see if it was ready for their use in production and ran into some unexpected product behavior. They went the extra mile and reduced what they were seeing into a minimal repro. Thanks! Clearly, a very solid set of engineers.

We were able to diagnose the issue by Friday and provide a simple work-around to disable the specific RyuJIT optimization.

Advisory

Nick Craver published his own customer advisory yesterday, on Why you should wait on upgrading to .Net 4.6. It's a good post that you should read if you are deploying the .NET Framework 4.6.

The .NET team has concluded a detailed analysis of tens of thousands of test assets and internal customer data. The data suggests that the vast majority of .NET developers will not experience this same issue. We have extensive tests for the .NET Framework libraries (e.g. System.Xml). We have not been able to find a single case of this issue across that very large body of code. From a production standpoint, big Microsoft web properties have been running on pre-release versions of .NET Framework 4.6 for months without hitting this issue.

This bug requires a significant set of conditions to be present in order to trigger it. It's unlikely that many developers have actually written matching code. We recognize that this bug is very real to Stack Exchange, and conclude that they are one of the few customers who have hit it or will.

Recommendation

Our recommendation to StackExchange and to any other customer is the following:

  1. Scout the .NET Framework 4.6 in your environment.
  2. If you run into an issue that you cannot diagnose, try disabling RyuJIT (see the sketch after this list).
  3. If disabling RyuJIT resolves the issue, please re-enable RyuJIT and disable tail call optimization.
  4. If your issue is mitigated with the tail call optimization disabled, then you know that your app is subject to this issue. You can run your app in production in that configuration (tail call optimization disabled) to get the other .NET Framework 4.6 benefits. This workaround disables only the tail call optimization feature and should not negatively impact performance.
  5. If your issue is not mitigated with the tail call optimization disabled, but is mitigated with RyuJIT disabled, we want to hear from you on .NET Framework Connect. You can also run your app in production in this configuration (RyuJIT disabled).
  6. If your issue is not mitigated by disabling RyuJIT or tail call optimization, then it is something else, unrelated to this advisory.
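
For step 2, a hedged sketch of one documented way to fall back to the legacy 64-bit JIT machine-wide is the useLegacyJit registry value (requires administrator rights; delete the value to re-enable RyuJIT). The narrower tail-call-only switch is described in the advisory's repro links and is not reproduced here.

# Disable RyuJIT for 64-bit .NET Framework 4.6 processes machine-wide.
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework' `
    -Name 'useLegacyJit' -Value 1 -Type DWord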

You may be wondering how it is OK to run the .NET Framework 4.6 in production with RyuJIT disabled. It's very similar in nature to the way that the .NET Framework 4.5 CLR runs, which doesn't have RyuJIT at all. The .NET Framework 4.6 includes both JIT64 and RyuJIT, providing additional flexibility for both testing and production use.

F# developers are encouraged to wait to deploy the .NET Framework 4.6. This issue affects F# programs more commonly. We will post an update on the blog when we are ready to give the all-clear on .NET Framework use for F# developers. We apologize for that situation. We are in the process of increasing our F# test coverage.

Closing

Thanks again to the StackExchange team for reaching out to us with this issue and for getting the word out about the issue.

As stated at the start of the post, we have already started producing a RyuJIT patch for the .NET Framework 4.6. We will post an update when it is available.

We know that you rely on us to provide high-quality software. We take that very seriously. It's no accident that RyuJIT has several configuration settings. We use them for our own testing, and we expected that someone somewhere would find an issue that required investigation and potentially a fix. These settings enabled us to quickly root-cause the issue, and they also provide a way of safely running the .NET Framework 4.6 without risk of running into codegen issues.

The .NET Framework 4.6 is a great release that we can continue to recommend deploying. It is perfectly safe to run the .NET Framework 4.6 with tail call optimizations disabled, while you are waiting for the patch. Your app will get the benefit of other .NET Framework 4.6 improvements.

Building High Performance, Highly Available SQL Servers on Azure

MSDN Blogs - 1 hour 50 min ago

Editor’s note: The following post was written by SQL Server MVP Warner Chaves as part of our Technical Tuesday series.

The cloud is ready for SQL Server enterprise-grade workloads. This is not my opinion but a simple fact that comes from helping multiple clients in different industries move very intensive SQL Servers to Azure. A couple of years ago the guidance was to focus on moving your Development or Test servers, but nowadays, with proper planning and configuration, you can deploy a production SQL Server confidently and easily.

In this article I’m going to focus on the two pillars of a SQL Server deployment: performance and availability. Of course we want our SQL Servers to be fast but we also want them to be highly available and ready to serve applications any time of the day, any day of the year. And we want to do these things in a cost efficient way. I will discuss the different options Azure offers to achieve these goals.

Building for Performance
The exact virtual machine size that you’ll need depends on your SQL Server size and load, however there are some best practices and recommendations that apply to any VM that you want to optimize for performance.

First, for high-performance workloads you want to be looking at the VM sizes for Microsoft Azure Virtual Machines. In my opinion, the options come down to either a DS-series or a G-series machine. The choice between the two right now comes down to differences in the CPU models, the amount of RAM per core, and the type of storage allowed, as we'll see below.

Compute and Memory
On the compute front, the DS machines come with Intel Xeon E5-2660 2.2 GHz processors that are 60% faster than the previous CPU models used on the A tier. The G-series machines, however, come equipped with more powerful Intel Xeon E5-2698 2.3 GHz processors for more compute-demanding workloads. Most SQL Server workloads are more IO bound than CPU bound, but if yours doesn't follow this rule and is heavier on CPU, then a G-series model could be better.

The amount of RAM per core also changes between the DS and G series with the G series coming equipped with more RAM per core. For example, the following 3 machines all come with 4 cores but differ based on the RAM provided:
• DS3: 14GB of RAM.
• DS12: 28GB of RAM.
• G2: 56GB of RAM.

The more RAM, the more expensive the machine, so you need to make a choice that makes sense for the size of your databases and your workload. The pricing details for the VM sizes can be found on the pricing page, here.
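
Once you have picked a size, creating the VM is straightforward. Here is a hedged sketch using the classic (service management) Azure PowerShell cmdlets that were current in mid-2015; the service name, image filter, and credentials below are placeholders, not prescribed values.

# Pick a SQL Server 2014 gallery image and create a DS3 VM from it.
$image = (Get-AzureVMImage |
    Where-Object { $_.Label -like '*SQL Server 2014*' } |
    Select-Object -First 1).ImageName
New-AzureQuickVM -Windows -ServiceName 'sqlsvc01' -Name 'SQL1' `
    -InstanceSize 'Standard_DS3' -Location 'West US' `
    -ImageName $image -AdminUsername 'azureadmin' -Password 'P@ssw0rd!'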

Storage
Both the DS and G series come with a temporary SSD drive attached to the VM, with a size dependent on the specific VM model you choose. The G series come with larger temporary SSD drives at a higher cost. Since this drive is temporary, it should only be used for storing a database like tempdb on SQL Server. Or, if you are using SQL Server 2014 or later, you can deploy a Buffer Pool Extension file on this drive.

For permanent storage there are two options: page blob storage and the recently introduced SSD-based Premium Storage. Page blob storage volumes provide approximately 500 IOPS, up to 60 MB/sec, and variable latency. Depending on your VM model you'll be able to attach a variable number of volumes as well. For example, a D3 4-core machine allows attaching up to 8 of these volumes, whereas a 32-core G5 allows up to 64. Using Windows Server Storage Spaces you can also stripe these volumes to provide higher throughput to a single Windows disk, as in the sketch below. Page blob storage is billed by the amount of storage used and the number of I/O transactions.
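
If you go the striping route, here is a hedged Storage Spaces sketch to run inside the VM after attaching the data disks; the pool, disk, and volume names are placeholders.

# Pool every attachable data disk.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'SqlPool' `
    -StorageSubSystemFriendlyName 'Storage Spaces*' -PhysicalDisks $disks
# Simple (striped) resiliency with one column per disk for maximum throughput.
New-VirtualDisk -StoragePoolFriendlyName 'SqlPool' -FriendlyName 'SqlData' `
    -ResiliencySettingName Simple -NumberOfColumns $disks.Count -UseMaximumSize
# Bring the striped disk online as one NTFS volume (64 KB allocation unit for SQL Server).
Get-VirtualDisk -FriendlyName 'SqlData' | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -Confirm:$false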

Premium Storage is more recent and right now only available on the DS-series VMs. This type of storage is SSD-based and can sustain higher IOPS and throughput with lower latency than classic page blob storage. Premium volumes come in three performance levels (as of July 2015): P10 (128 GB, up to 500 IOPS and 100 MB/sec), P20 (512 GB, up to 2,300 IOPS and 150 MB/sec), and P30 (1 TB, up to 5,000 IOPS and 200 MB/sec).

The number of these volumes that you can attach to a single VM goes from 2 on a DS1 all the way to 32 on a DS14. You can stripe these as well to present them as one disk, though keep in mind that each VM size has a cap on the total IOPS and MB/sec it can sustain. You can see those limits here: https://azure.microsoft.com/en-us/documentation/articles/storage-premium-storage-preview-portal/.

Premium Storage is unfortunately not yet available in all regions, so this could be a big factor in your decision. At the time I'm writing this (July 2015) these are the regions where you can create a DS-series machine: West US, East US 2, West Europe, South East Asia, Japan West. Microsoft is constantly adding more capabilities to each region, so make sure to check the Azure portal for the latest information.

Also note that the published best practices from Microsoft for data disks are to use no caching for page blob disks and read-only caching for Premium Storage disks. Refer to this article for the full details: https://msdn.microsoft.com/en-us/library/azure/dn133149.aspx.

Putting all the information together, here are two example Virtual Machine configurations:

Configuring for Availability
SQL Server includes several High Availability and Disaster Recovery solutions right out-of-the-box that work well within Azure and provide different levels of resilience to suit different Recovery Time Objective (RTO) and Recovery Point Objective (RPO) requirements. These solutions are log shipping, database mirroring and AlwaysOn availability groups.

Regarding storage redundancy, locally redundant storage should be used so Azure will keep 3 copies of your Virtual Hard Disks. Geo-redundant storage should not be used for SQL Server because write-ordering is not guaranteed with this option. For geographic redundancy it’s recommended to use a SQL Server technology like the ones mentioned.

For new Enterprise-grade deployments the best solution is to go with SQL Server 2014 Enterprise and AlwaysOn Availability Groups. For example, support for multi-subnet clusters in Windows Server 2012 and above means we can deploy two nodes to provide high availability in one Azure region and then a third node for disaster recovery in a second region.

One concept that is critical to understand in Azure is Availability Sets. An Availability Set is a logical grouping of virtual machines that maximizes availability in the event of planned or unplanned downtime of the Azure physical host. Virtual machines inside an Availability Set are assigned an Update Domain and a Fault Domain, and these govern where the virtual machine is located in case of planned or unplanned maintenance. For example, if we have two SQL Servers in a Windows cluster, we can have them in different Update and Fault Domains so that if planned or unplanned maintenance happens for one machine, the other one will not be affected and will be able to take over.
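
As a hedged sketch (again with the classic cmdlets and placeholder names, and a placeholder image name), this is how two SQL nodes can be placed in the same Availability Set so that host maintenance never takes both down at once:

# Build two VM configurations in one availability set, then deploy them together.
$vms = 'SQL1', 'SQL2' | ForEach-Object {
    New-AzureVMConfig -Name $_ -InstanceSize 'Standard_DS13' `
        -ImageName '<sql-server-2014-image-name>' -AvailabilitySetName 'SqlAvSet' |
    Add-AzureProvisioningConfig -Windows -AdminUsername 'azureadmin' -Password 'P@ssw0rd!'
}
New-AzureVM -ServiceName 'sqlsvc01' -Location 'West US' -VMs $vms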

This is what the solution would look like: 

This is a 3-node Windows Server 2012 failover cluster called SQLCLUSTER running SQL Server 2014 Enterprise and using AlwaysOn Availability Groups to provide the redundancy and data synchronization capabilities. The primary site has two nodes and a file-share witness that are part of the same Availability Set for fast local failover. There is also a third node on a second Azure region that serves as the disaster recovery location. SQL1 replicates synchronously to SQL2 and provides automatic failover, while SQL3 is replicated to asynchronously and can take over through a manual failover if disaster strikes.

Final thoughts
Azure has now matured to the point where critical SQL Server workloads can be designed and implemented on the platform with ease. Both performance and availability requirements can be met with the latest offerings like Premium Storage and SQL Server technologies like AlwaysOn Availability Groups.

The key to a successful deployment is documenting the performance and availability requirements clearly and then comparing them against the different virtual machine and configuration options mentioned in this article. If your organization is thinking of leveraging the cloud for efficiency and velocity, SQL Server can definitely go there, and you can make sure that it does so without compromising performance or availability.

About the author

Warner is a SQL Server MCM and SQL Server Principal Consultant at Pythian, a global Canada-based company specializing in data and infrastructure services. A brief stint in .NET programming led to his early DBA formation working for enterprise customers in Hewlett-Packard's ITO organization. From there he transitioned to his current position at Pythian, managing multiple customers and instances across many versions and industries while leading a highly talented team of SQL Server DBAs.

.NET Networking APIs for UWP Apps

MSDN Blogs - 2 hours 28 min ago

This post was written by Sidharth Nabar, Program Manager on the Windows networking team.

At Build 2015, we announced that .NET Core 5 is the new version of .NET for developers writing Universal Windows Platform (UWP) apps. The set of networking APIs available for developers in .NET Core 5 is an evolution from the set that was available for Windows Store app developers in Windows 8.1 (API reference on MSDN). As was highlighted at Build, porting your apps to .NET Core and UWP enables you to target a wide variety of devices including Xbox, Windows Phone, Windows and HoloLens with the same codebase. Of course, you can still use all the .NET networking APIs that you used in Windows 8.1 Store apps (no API surface has been removed/deprecated), and more.

Although most of the networking API surface in .NET Core is the same as previous .NET Framework versions, the underlying implementation for some of these APIs has undergone a significant change as we move from the .NET Framework to .NET Core. We have also taken this opportunity to modernize the implementation of our networking APIs and make it more suitable to run in the context of Store apps on Windows. In this post, we outline all the .NET networking APIs available to UWP developers and provide some insights into the implementation underneath the APIs.

Note that all the APIs and changes discussed in this post are applicable only to .NET Core for UWP apps, and not to .NET Framework 4.6. We are also investing in networking API improvements in .NET Core for server platforms (through ASP.NET 5), but we will cover those in a separate blog post. Similarly, this post does not cover networking APIs outside of .NET APIs that are available to Windows apps developers.

What’s New

These are the new APIs and features that we have added into .NET Core 5 for UWP app developers.

System.Net.Sockets

With Windows 10 and .NET Core 5, System.Net.Sockets has been added to the API surface for UWP app developers. This was a highly requested API for Windows Store apps (it was already available for Windows Phone Silverlight apps) and includes types such as System.Net.Sockets.Socket and System.Net.Sockets.SocketAsyncEventArgs, which are used by developers for asynchronous socket communication. The current API surface of System.Net.Sockets in .NET Core is based on that of Windows Phone 8.1 Silverlight and continues to support most of the types, properties and methods (some APIs that are considered obsolete have been removed). Moving forward, we plan to expand the API surface to support more types from this namespace – please see the Looking ahead section below.

The implementation underneath the System.Net.Sockets API has been significantly changed to eliminate dependencies on APIs that are not part of .NET Core, as well as to use the same underlying threading APIs as other WinRT APIs. Our goal is to ensure functional parity between the previous implementation and the new .NET Core version. Please send us your feedback on GitHub if you see any differences in behavior or performance as you port your Sockets code to UWP.

System.Net.Http gets HTTP/2

Developers writing UWP apps on Windows 10 and .NET Core 5 will get HTTP/2 support in System.Net.Http.HttpClient. HTTP/2 is the latest version of the HTTP protocol and provides much lower latency in web access by minimizing the number of connections and round-trip messages. Adding this support to the HttpClient API means that server responses come back much faster, leading to an app that feels more fluid at the same network speed. And the best part is – this feature is on by default, so there is zero code change required to leverage it. For more details on how HTTP/2 provides faster web access to apps, see this talk from Build 2015. The talk also features a simple photo-downloading app that shows an approximately 200% improvement in latency upon switching to HTTP/2 (demo video).

The following code snippet shows how to query the HTTP version preference on the client as well as the actual HTTP version being used for the connection:

var myClient = new HttpClient();
var myRequest = new HttpRequestMessage(HttpMethod.Get, "http://www.contoso.com");

// This property represents the client preference for the HTTP protocol version.
// The default value for UWP apps is 2.0.
Debug.WriteLine(myRequest.Version.ToString());

var response = await myClient.SendAsync(myRequest);

// This tells you if the client-server communication is actually using HTTP/2.
Debug.WriteLine(response.Version.ToString());

Notes:

  1. Setting the Request.Version property to 2.0 is not supported on other .NET platforms and will throw a System.ArgumentException when trying to send such a request. The default version on .NET platforms other than UWP is 1.1.

  2. The Request.Version property represents the client API preference to use HTTP/2. The actual HTTP version used will depend on the client OS, server and intermediate proxies. HTTP/2 is a negotiated protocol that will automatically fall back to HTTP 1.1 if the server or intermediaries do not support HTTP/2.

What’s Changed

In this section, we review some APIs that were already available to Windows Store developers but have undergone significant change in underlying implementation. Understanding this change may help you as a developer to gain insight into changes you may see in your code as you port it from a Windows 8.1 Store app to Windows 10 UWP.

System.Net.Http

In Windows 8.1, the implementation of HttpClient was based on a managed HTTP stack comprising types such as System.Net.HttpWebRequest and System.Net.ServicePointManager. In .NET Core for UWP apps, this has been replaced by a completely new, lightweight wrapper on top of native Windows OS HTTP components such as Windows.Web.Http, which is based on WinINet. This allows us to leverage all the latest features (e.g. HTTP/2) from the OS, and we are able to provide these new features to .NET developers much faster than we previously could. It also helps lower the memory consumption of .NET apps running on Windows 10, thereby giving the user a more fluid experience running multiple apps simultaneously. The available set of APIs from System.Net.Http documented here remains unchanged.

The new implementation has been tested to ensure functional parity with the previous Windows 8.1 implementation so that you, the developer, do not see any differences in API behavior as you port your HTTP client code to UWP. However, if you do see any issues/bugs, please file a bug on GitHub.

System.Net.Requests

This library contains types related to System.Net.HttpWebRequest and System.Net.HttpWebResponse classes that allow developers to implement the client role of the HTTP protocol. The API surface for .NET Core 5 is the same as that available for Windows 8.1 apps and is very limited compared to the surface in the .NET Framework. This is intentional and we highly encourage switching to the HttpClient API instead – that is where our energy and innovation will be focused going forward. Other parts of .NET Core 5 such as Windows Communication Foundation (WCF) have already migrated to HttpClient in their .NET Core implementation as well, as outlined here.

This library is provided purely for backward compatibility and to unblock usage of .NET libraries that use these older APIs. For .NET Core, the implementation of HttpWebRequest is actually based on HttpClient (reversing the dependency order from .NET Framework). As mentioned above, the reason for this is to avoid usage of the managed .NET HTTP stack in a UWP app context and move towards HttpClient as a single HTTP client role API for .NET developers.

What’s the same

Other types from System.Net and System.Net.NetworkInformation namespaces that were supported for Windows 8.1 Store apps will continue to be supported for UWP apps. There have been some minor additions to this API surface, but no major changes in implementation.

Looking Ahead

In this post, we discussed the initial version of the set of .NET networking APIs that will be available to Windows 10 UWP app developers. We will continue to build on this set and add more API surface to ensure that developers can write rich, full-featured UWP apps using .NET.

To ensure that we prioritize and focus on the right APIs, we need feedback from you – please send us feedback on which APIs are missing in .NET Core and are blocking you from delivering the best possible experience to your users in a UWP app. Please create or vote on an idea on Windows platform missing APIs uservoice or file an issue in GitHub. We look forward to working with you to deliver awesome apps to the entire breadth of Windows devices.

New CRM for phones app available for iPhone, Android, and Windows Phone

MSDN Blogs - 2 hours 43 min ago

The highly-anticipated CRM for phones app is now live for Apple, Android, and Windows phones! The new app provides the same intuitive experience as CRM for tablets.

If you want to try it out, download it from the app store for your phone:

Want more information? Check out these helpful resources:

Notes

  • The CRM for phones app requires Microsoft Dynamics CRM Online Update 1.
  • The previous version of the CRM for phones app is still available in app stores, but it's now called CRM for phones express.
Cheers, CRM Product Team

Changes to LCS – SharePoint authorization

MSDN Blogs - 2 hours 50 min ago

As part of the July release, we will be making some framework changes to SharePoint integration in LCS on 7/31. Each project user will be prompted to re-authorize their account in LCS for SharePoint access in order to continue using SharePoint with LCS.

MPUG Seattle–Project Support and what’s coming next

MSDN Blogs - 2 hours 58 min ago

In September I will be presenting, along with my manager Larry Block, at the MPUG Seattle chapter meeting at the Microsoft Store in Bellevue, September 17th at 6pm – mark your diaries. We will be talking about how we work in support for Microsoft Project – giving you some insight into one of the longest-tenured support teams in Microsoft. I’m still the 2nd newest recruit in the US team – with ‘only’ 12 years under my belt. We will also talk about some of the new features coming down the road in Project Online, Project, and Project Server 2016. And with such a cool venue, if we don’t capture your attention with that agenda you can always wander off and play Xbox or buy a Surface 3… Hope to meet up with some of my blog readers there – come by and say ‘Hi’!

You can register at http://www.mpug.com/event/seattle-project-support-and-whats-coming-next/

Monitoring your SQL Sentry data with Power BI

MSDN Blogs - 3 hours 23 min ago

We’re excited to announce that this week’s update to Power BI now offers database performance tracking with the SQL Sentry content pack. This content pack includes a dashboard and reports that help you monitor the SQL Server deployments you track using the SQL Sentry Cloud. It makes it really easy to share insights throughout your organization.


This post details how the Power BI content pack helps you explore your SQL Sentry data. For additional details on how to get started, please see the SQL Sentry content pack for Power BI help page.


The content pack brings in data about the current state of the servers you monitor in SQL Sentry. You can monitor server health, memory usage, and downtime. The content pack makes it easy to track which sites and servers are working well and which need your attention. The content pack also helps database administrators communicate deployment health information with their managers. To get started, just connect to your SQL Sentry Cloud account.


The content pack includes a report that lets you drill into the details. You can use the tree map to quickly see a view of all the events generated by servers and the distribution across the severity levels of events. You can highlight a category to see which servers are affected the most by each type of event.


The Server Health Events page lets you see the conditions that most affect server health and break them down by Alert Level.  You can see the severity of the events for each server.


The Server Availability page shows uptime and downtime for servers in your environment. You can also use the Uptime slicer to focus on the servers that have the worst availability and rectify issues affecting your users.


The Server Health – Memory & CPU page helps you understand the relationship between the number of CPUs and events. You can again look across the Alert Level. The reports can be customized to ensure each page includes the metrics and content that are important to you.


You can also use the question box above the dashboard to explore the data.  A good question to ask is “what is my uptime % by date”.  The result can be pinned or explored further using Power BI’s tools. 


After the initial import, the dashboard and the reports continue to update daily. You can control the refresh schedule on the dataset.

You can read more about this release on the SQL Sentry blog: http://blogs.sqlsentry.com/rickpittser/analyze-this/

We’re always interested in hearing your feedback – please contact us at http://support.powerbi.com to let the team know how your experience was and if there’s anything we can do better. We look forward to your feedback!

What’s happening in Project Land–Summer catch-up

MSDN Blogs - 3 hours 38 min ago

Been a while since my last round up of Project bloggery – but here goes.

First I’d like to bring your attention to a recent Product Group blog on some API changes coming your way. We started supporting CSOM with Project Server 2013 and Project Online, but this latest blog moves that along a bit with the announcement that the Project methods of the PSI are being deprecated in Project Server 2016 and Project Online (they were never actually supported in Project Online, but it was possible to use them). This won’t change anything for Project Server 2013 or 2010 – but if you are doing new development in 2013, it is certainly worth considering CSOM to ensure you have something that will continue to work in the future. For the full blog see https://blogs.office.com/2015/07/14/a-unified-scheduling-engine-and-api-in-project-online-and-project-server-2016/

The Product Group also blogged about the new Resource Engagement feature - New Feature–Resource Engagements–coming to Project Online and Project Server 2016

Next up – Our documentation team have some new stuff out – giving some thoughts on the Project Management Office (PMO) - https://support.office.com/en-my/article/Supporting-your-Project-Online-adoption-with-a-Project-Management-Office-PMO-567b2415-5973-4e38-b796-dd20ebcb00c8 .  Great work Efren and Co.  I loved the analogies to managing a household.

Since my last round up, the June and July updates have been released, and I also blogged on the migration of info into and between sites, OData reserved words, and refreshing OData reports when using multiple languages. Not a new blog post – but I did just update the one on jumbled ribbon icons – we should have that fixed with some changes in our build process.

On our French support blog Marc added a great article on collecting Project Server queue stats – by use of the Enable-SPProjectQueueStatsMonitoring cmdlet and referencing the usage and health data page at https://technet.microsoft.com/en-us/library/ee663480.aspx.

Jorge has been busy on the Spanish support blog – one interesting article on creating projects with PowerShell.

From our MVPs and others, Tim Runcie took us through a scenario of starting a schedule by concentrating on the finish date - How to Reverse Engineer a Microsoft Project Schedule

The Trusted IT Group blog posted about Alex Rodov and his PMI Global Congress appearance - Presenting at PMI Global Congress 2015 – North America

Andrew Lavinsky posted on the importance of baselining benefits – particularly when working with volatile items such as oil (sorry for the pun…) – also a topic from the PMI Houston Annual Conference – and also Segmenting Portfolios Part 1 and 2.

Ben Howard covered Nested IFs and Risk Lists, the Project Virtual Conference (more later) and Save Site as Template – No longer an option! – which gives an answer if my own ‘save as template’ post didn’t help.

Guillaume Rouyre gave us an option for using audiences to stop people from accessing timesheets, his thoughts on the deliverable/dependency feature, Power BI, deactivating deleted users, more on the Project Virtual Conference and finally formatting Gantt charts.

Khurram Jamshed gave a review of Project 2016 and Michael Wharton posted on PMO Strategy.

Nenad has had a busy summer – telling us about Must Finish On Constraints, Milestones with duration, password protecting projects, project start times, 99% completed projects and tasks, material resources and start/finish, and actuals on summary tasks.

Oleksiy Prosnitskyy contributed a post on beautifying our report pages and using Power Query for Project reporting.

Paul Mather has also had a busy summer so far – with an interesting approach to using the new Office 365 Groups feature, deleting sites with PowerShell, the Project Virtual Conference, updating labels on PDPs (very cool!) and the great news about CPS being an award finalist for their PS+ solution.

Peter Kestenholz blogged about a new free app for search in Project Online – and I’ll also add that his company, Projectum, was awarded the Microsoft PPM Partner of the Year award.

PJ Mistry posted on choosing methodologies, support lifecycle and Office2013 RTM, OLAP Excel reports and working in Days, and the Wunderlist acquisition.

Prasanna Adavi announced the Project Virtual Conference.

Last MVP posting and certainly not least, a familiar name blogging in a new place – Dale Howard joined Sensei Project Solutions and has posted a few articles over the summer so far - intentionally splitting tasks, add new column functionality, and hiding the task mode indicator. I’ll be adding Dale’s Sensei blog to my list once we roll out some platform changes to the TechNet and MSDN blogs in the coming months.

Outside the MVP list Erik has been busy too – recounting his experiences on a TPG partner training course as well as a series on tools for Project Management – Part 1, 2 and 3.

Wow – that’s a ton of stuff.  Thanks everyone for the great content you put out there!

Registration is now open for AX Accelerate for Microsoft Dynamics

MSDN Blogs - 3 hours 44 min ago


AX Accelerate is a cross-training functional program for individuals who have experience with Enterprise Resource Planning systems but are new to Dynamics AX. This intensive hybrid training utilises a mix of training activities, including virtual classes and self-paced time, and merges product and business process based on the core fundamentals of the Microsoft Dynamics AX solution for both Financials and Supply Chain. Attendees will gain hands-on experience through case studies, labs, discussions and certification content that will help build a successful start in AX consulting practices worldwide. The course kicks off with a 1-hour prep session, is followed by 9 days of 3-hour virtual sessions, and finishes with a 3-day in-person workshop where individuals will form teams and implement the case study in a real AX environment. Students are required to attend ALL sessions, and NO substitutes will be allowed after the course has started.

REGISTER NOW! 

Sydney, Australia (Instructor-Led Training September 8 - 10)

Price: $1,600.00 USD per person.

Access to Dynamics Learning Portal (DLP)

To complete the homework outlined for this course you must have access to the DLP. 

Please verify first if your access to DLP works.

  • Launch DLP Welcome Page at https://mbspartner.microsoft.com/
  • If you are a Partner click Microsoft Dynamics Partners Sign In
  • If you are a Microsoft Employee, click Microsoft Employees Sign In
  • If the access fails for Partners, note that access to the DLP is available as a result of your company meeting one of the following criteria:
  • is enrolled in the Microsoft Dynamics Partner Advantage or Partner Advantage PLUS Service Plans.
  • has purchased the Microsoft Dynamics Training Pack.

To order or renew the Microsoft Dynamics Training Pack please review the Training Pack options available for purchase here. For assistance, please contact your local Microsoft Regional Operations Centre (ROC)

  • Scroll down to the Ordering Information section at the bottom of the page
  • Review the ordering details, access the links included, and purchase the “Dynamics Training Pack (pay per incident)” ($1,000 USD) under the available service options

To order or renew the Microsoft Dynamics Partner Advantage\Advantage Plus Service Plan please review the purchase options here. For assistance, please contact your Services Account Manager or contact your local Microsoft Regional Operations Centre (ROC)

Before accessing the DLP again, please verify your Order Status through PartnerSource by clicking Pricing & Ordering > View Order Status.

It can take up to 2-3 business days before the order is fully completed.

Important note: After your order status is completed, you are then eligible to access DLP. Please note, however, that if your employee contact details are not yet registered in the PartnerSource Business Centre (PSBC), or if you do not have your email account correctly associated to the exact same Microsoft account that you use to access PartnerSource, your access to DLP will fail and you will be prompted with a “We’re sorry” page. To assist you in resolving this issue, please watch the short video presentation “Resolutions to common issues when accessing DLP for the first time” to learn about known issues regarding access to DLP. If after watching this video and applying our instructions you still face issues accessing DLP, please contact DLP Support.

For questions, please contact our Dynamics Events Team at msdynevt@microsoft.com.

Windows Management Framework (WMF) 4.0 Update is coming your way …

MSDN Blogs - 6 hours 28 min ago

As part of the November 2014 Update Rollup (KB3000850) for Windows RT 8.1, Windows 8.1 and Windows Server 2012 R2, we substantially improved stability, diagnosability, and reliability of PowerShell Desired State Configuration (DSC). We also enhanced PowerShell auditing functionality, and added Software Inventory Logging (SIL). Around Q4 of 2015, we expect to make these improvements available on Windows 7 SP1, Windows Server 2008 R2 SP1, and Windows Server 2012.

PowerShell DSC improvements were based on your direct feedback. With your help, we fixed scenario blocking issues, and made DSC more usable in real-world production environments. You can learn more about these enhancements in TechNet documentation and this PowerShell Magazine article.
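
If you haven't tried DSC yet, here is a minimal illustrative configuration — a hedged sketch, not tied to any specific fix in this update — that declares IIS must be present, compiles to a MOF document, and applies it to the local node:

Configuration WebServer
{
    Node 'localhost'
    {
        # Declare the desired state; the Local Configuration Manager enforces it.
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

# Compile the configuration to a MOF and push it to the local node.
WebServer -OutputPath 'C:\DSC\WebServer'
Start-DscConfiguration -Path 'C:\DSC\WebServer' -Wait -Verbose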

We also improved PowerShell transcription and logging to enable more stringent auditing. PowerShell transcription has been improved to apply to all hosting applications (such as Windows PowerShell ISE) rather than just the console host (powershell.exe). While PowerShell already has the ability to log the invocation of cmdlets, PowerShell’s scripting language has plenty of features that you might want to log and/or audit. The new, detailed script tracing feature lets you enable detailed tracking and analysis of PowerShell scripting use on a system.

The Software Inventory Logging (SIL) feature that was introduced in Windows Server 2012 R2 is intended to help datacenter administrators reduce their operational costs by easily logging Microsoft software asset management data for their deployments over time. For more information, please refer to TechNet documentation.

‘WMF 4.0 Update’ will make the existing Desired State Configuration and PowerShell auditing features better in PowerShell 4.0, along with adding Software Inventory Logging. We will continue to introduce new functionality in the WMF 5.0 Previews that are being released on a regular basis.

Hemant Mahawar [MSFT]
Senior Program Manager
Windows PowerShell

Everything you need to know about Shared Access Signatures from multiple languages

MSDN Blogs - 6 hours 37 min ago

New article posted to azure.com details how to generate your own Shared Access Signatures in Node, PHP, Java, and C# so you can work with Service Bus and Event Hubs from more platforms.
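
The recipes in the article boil down to the same token format in every language. Here is a hedged PowerShell sketch of it; the namespace, entity, key name, and key below are placeholders, and the article has the Node, PHP, Java, and C# versions.

Add-Type -AssemblyName System.Web

$resourceUri = 'https://mynamespace.servicebus.windows.net/myeventhub'
$keyName     = 'RootManageSharedAccessKey'
$key         = '<base64 key from the portal>'

# Expiry as Unix epoch seconds, one hour from now.
$expiry = [int]((Get-Date).ToUniversalTime() - [datetime]'1970-01-01').TotalSeconds + 3600

# The signature is HMAC-SHA256 over "<url-encoded-uri> + newline + <expiry>".
$stringToSign = [System.Web.HttpUtility]::UrlEncode($resourceUri) + "`n" + $expiry
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Text.Encoding]::UTF8.GetBytes($key)
$signature = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

'SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}' -f `
    [System.Web.HttpUtility]::UrlEncode($resourceUri),
    [System.Web.HttpUtility]::UrlEncode($signature),
    $expiry, $keyName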

See Shared Access Signatures for more information. And since this article is on GitHub, feel free to click Edit on GitHub in the upper right, as you can with all azure.com articles.

The 10 minute ASA Challenge!

MSDN Blogs - 6 hours 45 min ago

Did you know that you can set up an ASA pipeline with an end-to-end Internet of Things (IoT) scenario in 10 minutes? We have recently published a sample on GitHub which lets you create an ASA pipeline and run it with all its dependencies. Using a single PowerShell command you can have this set up in minutes and see the end-to-end scenario working. The ReadMe has details of the scenario and how you can set it up.

This also works great with the SensorTag from Texas Instruments, or you can use the simulator that comes with the sample.

One of our recent videos on Azure Friday shows this sample in action using real sensors.

We look forward to hearing from you on the Stream Analytics Forum and Azure Feedback Forum. Happy Coding!

Keep up to date on the latest learning material from Microsoft Channel 9

MSDN Blogs - 6 hours 48 min ago

Channel 9 is an online learning community where Microsoft brings forward the people behind our products and connects them with those who use them.

The heart of Channel 9 is that Microsoft can talk about its work and listen to what you, the customers, want to learn.

Channel 9 is all about the conversation. Channel 9 should inspire Microsoft and our customers to talk in an honest and human voice. Channel 9 is not a marketing tool, not a PR tool, not a lead generation tool. For the original story of where Channel 9 came from, don't miss the classic and inspirational The 9 Guys - Who We Are video.

So who is Channel 9?

Meet the team – check out this video to meet the Channel 9 team!

Access Channel 9 on the devices you own and view content when and where you want!

Windows 8

Head over to the Windows Store to download this for your Windows 8 or 8.1 machine: Channel 9 app for Windows 8

Windows Phone

Grab the phone app for your Windows Phone 8 or 8.1 device: Channel 9 app for Windows Phone 8 and 8.1

Xbox 360 and Xbox One

To get the Channel 9 Xbox app, fire up your console, sign in and then go to 'Apps'. Browse or search for Channel 9, select it and download. Once it is downloaded, you can run it from your list of apps, or you can pin it to make it easier to get back to later.

iOS

Our iPad and iPhone application is now available in iTunes and includes the ability to sign in and to sync videos offline for viewing when you are disconnected. Click here to download it: Channel 9 on iPad and iPhone

Roku

Click this link (you may have to sign in to your Roku account) to add our Roku channel to your device(s): Roku channel for Microsoft's Channel 9. This is a quick and dirty beta, but if there is enough interest then we will try to work on it to make it more useful and fully supported.

Android

Our Android phone and tablet application is now available in the Google Play store and includes the ability to sign in and to sync videos offline for viewing when you are disconnected. Click here to download it: Channel 9 for Android

Using PowerShell Direct for Script Locking

MSDN Blogs - 7 hours 39 min ago

Here is one of the most helpful code snippets that I have come up with using PowerShell Direct:

function waitForPSDirect([string]$VMName, $cred) {
    Write-Output "[$($VMName)]:: Waiting for PowerShell Direct (using $($cred.username))"
    # Retry a trivial PowerShell Direct call until the guest responds.
    while ((Invoke-Command -VMName $VMName -Credential $cred {"Test"} -ErrorAction SilentlyContinue) -ne "Test") { Start-Sleep -Seconds 1 }
}

In essence this function allows you to block your script until the requested virtual machine has booted and is responding to PowerShell Direct. This is immensely useful when you are provisioning virtual machines and need to know when the guest operating system is actually up and running - and it has become a staple function in many of my scripts.
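
A hypothetical usage sketch (the VM name and credentials are placeholders): start a VM, block until the guest answers, then run a command inside it.

$cred = Get-Credential
Start-VM -Name 'TestVM'
waitForPSDirect -VMName 'TestVM' -cred $cred
# The guest is now reachable, so real provisioning work can begin.
Invoke-Command -VMName 'TestVM' -Credential $cred { $env:COMPUTERNAME }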

Cheers,
Ben 

MSBuild and Team Foundation Server integration with SonarQube: version 1.0 released

MSDN Blogs - 8 hours 24 min ago
Release of MSBuild.SonarQube.Runner 1.0

As you might recall, we announced back in April at the //build conference that we were working with SonarSource to provide a better integration of SonarQube with MSBuild and Team Foundation Server. At that time, SonarSource shipped the result of this initial collaboration, the SonarQube.MSBuild.Runner 0.9, which enabled the analysis of technical debt during a build in TFS 2013. The ALM Rangers also produced a nice guidance document explaining how to install SonarQube, especially with SQL Server.

The collaboration has continued, and yesterday SonarSource released a new version of this integration, together with new versions of their related SonarQube plug-ins:

· MSBuild.SonarQube.Runner 1.0 (product page, zip file, open source project)

· SonarQube C# plug-in 4.1 (available directly from the SonarQube update center)

· SonarQube VB plug-in (available directly from the SonarQube update center)

Meanwhile, Hosam Kamel from the ALM Rangers has transformed the installation guidance into markdown format, and it’s now available as an Open Source project of its own on GitHub, enabling the community to propose contributions to the guidance (PDF, Markdown). The document has been updated for the new version, and includes an appendix describing how to upgrade from v0.9.

What’s new?

The April version filled a gap in the sense that it enabled the simple and reliable analysis of Visual Studio solutions and projects as part of a XAML build in TFS 2013. However, it did not support a number of commonly-used plug-ins such as those for StyleCop, ReSharper, and VB.NET. We soon received feedback that we needed to make it possible to simply integrate them, as well as other tools—though this was actually already high on our backlog. Furthermore, it was not possible to run an analysis on your local development machine using the command line either, and some customers and partners thought that this would be very useful. Finally, the installation of the MSBuild integration on the agent was still a bit complicated, as it required the user to manually install the SonarRunner and manually install build targets to a location that required administrator privileges.

The new MSBuild.SonarQube.Runner 1.0 fixes these issues by enabling:

  • Simplified installation. The MSBuild.SonarQube.Runner is now installed by unzipping three files and perhaps making changes to a new XML configuration file. The only pre-requisite is Java.
  • Execution of additional SonarQube plug-ins by making it possible to pass parameters to the SonarQube analysis in many different ways.
  • Execution on the command line to perform a local analysis (see the sketch after this list).
  • Accepting source files other than C# (including TypeScript, which was requested by customers)
  • A number of bugs were fixed, and support was added for analysing code in Visual Studio Online. Details are in the release notes.
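
As a hedged sketch of that command-line flow (the project key, name, version, and solution are placeholders; run it where msbuild is on the PATH and check the product page for the exact arguments):

# Step 1: begin the analysis, passing the SonarQube project identity.
.\MSBuild.SonarQube.Runner.exe begin /k:'my:project' /n:'My Project' /v:'1.0'
# Step 2: build as usual; the targets injected in step 1 collect analysis data.
msbuild .\MySolution.sln /t:Rebuild
# Step 3: end the analysis and upload the results to the SonarQube server.
.\MSBuild.SonarQube.Runner.exe end
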
What’s next?

The cooperation with SonarSource is ongoing. The next new feature you should see (in VSO) is the implementation of SonarQube analysis build tasks for the new build system. Stay tuned, we’ll share our future plans soon.

As usual, we look forward to hearing from you - please send us your feedback. You can raise bugs on GitHub. You can also propose suggestions on what you would like us to do next, for instance from User Voice.

Sample chapter: Deploy Your First Active Directory Forest and Domain

MSDN Blogs - 9 hours 33 min ago

In this chapter from Deploying and Managing Active Directory with Windows PowerShell: Tools for cloud-based and hybrid environments, Charlie Russel covers how to create a new Active Directory Domain Services (AD DS) forest where one has never existed before. This is, in some ways, the easiest task you're likely to face, but it's also one where getting it right is really important. The decisions you make here will affect the entire organization for the life of this deployment.

Active Directory Windows PowerShell nouns used in this chapter:

  • ADDSDomainController
  • ADDSForestInstallation
  • ADDSForest
  • ADRootDSE
  • ADObject

Other Windows PowerShell commands used in this chapter (a combined sketch follows the list):

  • Get-NetAdapter
  • Get-Member
  • Set-NetIPAddress
  • New-NetIPAddress
  • Set-DnsClientServerAddress
  • Get-NetIPAddress
  • Rename-Computer
  • Install-WindowsFeature
  • Get-Command
  • Format-Table
  • Update-Help
  • ConvertTo-SecureString
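
Put together, a minimal end-to-end sketch using those cmdlets might look like this (the addresses, names, and password are placeholders, not the book's values):

# Give the server a static address and point DNS at itself.
New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress 192.168.10.5 -PrefixLength 24 -DefaultGateway 192.168.10.1
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ServerAddresses 192.168.10.5
Rename-Computer -NewName 'DC01' -Restart

# After the restart: install the role and promote to a new forest.
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
$safeModePwd = ConvertTo-SecureString 'P@ssw0rd!' -AsPlainText -Force
Install-ADDSForest -DomainName 'corp.example.com' `
    -SafeModeAdministratorPassword $safeModePwd -InstallDns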

Read the complete chapter here: https://www.microsoftpressstore.com/articles/article.aspx?p=2418906.

The Fantastic People of Worldwide Partner Conference 2015. Thank you for joining us!

MSDN Blogs - 9 hours 48 min ago

Thank you to all of the amazing Microsoft Partners who joined us in Orlando for Worldwide Partner Conference 2015! As always, one of the biggest highlights of the entire event for me is being able to see and connect with you, our Microsoft partners and influencers from around the world, whether I knew you before or this was my first time meeting you. In addition to having the opportunity to meet you, I love hearing the incredible stories you all share with me about yourselves, your companies, your successes, and your plans for the future – they are inspiring and exciting at the same time.

Continuing the now-annual tradition that began several years ago as just an experiment (Connecting with others – A HUGE value of #WPC10 (Look who I found)), I am very happy to bring you the collection of pictures of the Fantastic People of Worldwide Partner Conference 2015. For those not familiar with the Fantastic People of WPC concept, it is quite simple. Since building connections is such a huge value of WPC – and since I am fortunate enough to get to spend time with partners from around the world – I offer to capture pictures with partners I meet at WPC and share them here on my blog, to thank them for being there and to help build virtual connections. This is a virtual introduction of these partners and WPC attendees to you, and you to them, and my “Thank you” to all of you for not only being at WPC, but also for being one of our highly valued Microsoft Partners and Worldwide Partner Conference attendees.

You can click any picture below to see it in larger size.


With Darren Bibby of IDC (@darrenbibby)

With Mary Jo Foley (@MaryJoFoley) of ZDNet
With Scott Bekker (@ScottBekker) of Redmond Channel Partner

With Richard Hay (@WinObs) of Penton Technology

With Barb Levisay (@blevisay) of Redmond Channel Partner
With Jan Spring and Nancy Williams from eFolder (@efolder), Jamison West (@jamisonwest) of Arterian, and Harry Brelsford of SMB Nation (@SMBNation)
With Jeff Shuey (@jshuey)

With Tiffany Wallace (Ingargiola) (@TiffanyWI) of New Horizons

With Christine Bongard (@cdbongard) of QTS
With Tim Martin and David Jeffreys of Action Point Software, Ltd.  
With Jeff Hilton (@KnowledgeCircl) from Alliance For Channel Success

With Anders Trolle-Schultz (@TrolleSchultz) from Odin
With Bill Hole (@USLicensing) of US Licensing Group
With Andy Trish (@AndyTrish) formerly of NCI Technologies, now part of an exciting new adventure.

With Carl Mazzanti (@cmazzanti), Jennifer Mazzanti (@JMazzanti) of eMazzanti, and the next generation of Mazzanti’s

With Christian Buckley (@BuckleyPlanet) from Beezy
With Guy Gregory (@GuyGregory) of The Final Step

With Greg Starks (@gregstarks) of HP
With Srdjan Stosic of E-Smart Systems
With John Krikke (@JohnKrikke) of Onward Computer Systems

With Jon Rivers (@jon_rivers) from Data Masons
With Mark Aschemeyer of Beezy
With Charlie Ramirez (@charlieramirez) of Team Venti

With Kevin Fream (@kevinfream) of Matrixforce
With Dave Seibert of IT Innovators
With David Gersten (@dsgersten) of Bond Consulting Services

With Jeff Baker and Cort Baker of General Networks
With Juan Rodriguez (@IDTcorp) of Integrated Digital Technologies, Karen Chastain (@karenchastain) of EpiServer, and Tiffany Wallace (Ingargiola) (@TiffanyWI) of New Horizons

With Melanie Gass a.k.a “The Microsoft Princess” (@MSFTPrincess) formerly of CenterPoint Solution, but now brand new to Microsoft.

With Jason Lambiris (@JasonLambiris) of Apex Digital Solutions and Pete Zarras (@PZarras) from Cloud Strategies LLC
With Petri Salonen(@DrSalonen) of TELLUS International, Inc
With Rajeev Perera (@RajeevPerera) of Microsoft

With Shelley Svien of EPiServer
In honor of our dear friend we lost this year, who was always a joy to have as part of “The Fantastic People of WPC” and an honor to have known and called a friend, Dino White, from our last WPC together.

Thank you again to you all, our Microsoft Partners and influencers from around the world, for everything you do each and every day to transform our world through enabling others to achieve and exceed their goals through the adoption and use of Microsoft technologies! Also, thank you again for taking the time out of the busy WPC week to stop by, come up, and grab a picture as part of the Fantastic People of WPC15 Collection. I am looking forward to seeing you all again at Worldwide Partner Conference 2016 next year and meeting even more of you, our Fantastic People of Worldwide Partner Conference. Until then, here’s wishing you all the very best as you continue helping our customers everywhere understand, acquire, deploy, and consume the benefits of technology to ensure they all have incredible success, translating into the same for each and every one of you!

Did you find this information helpful? If so, you may want to make sure you are utilizing all of the areas I share information online, such as:

Get the Microsoft Info Partner Mobile App and get access to the latest from all of those plus: product teams, MPN teams, Microsoft News and hundreds more resources here at Microsoft right on your phone:

Thanks again for being a reader of my blog!


Thank you and have a wonderful day,

Eric Ligman
Follow me on TWITTER, LinkedIn, and Facebook
Senior Sales Excellence Manager
Microsoft Corporation
This posting is provided "AS IS" with no warranties, and confers no rights


How do I know the space left in my drive? (SQL Server viewable)

MSDN Blogs - 10 hours 17 min ago

You might be using different types of disk subsystems in Windows. If you have had a tough time going to each folder to figure out how much space is left, here's an easy method.

If you open the D drive, you'll see many folders (they look like shortcuts).

Since you're a DBA, you can use this command to find out the total space and free space left in each folder.

Yes, sp_fixedluns came in very handy for me. I'd be glad if it helps you.
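
If that procedure isn't available in your environment, here is a hedged OS-level alternative: the Win32_Volume WMI class also reports mount-point volumes that have no drive letter.

# List every volume (including mount points) with capacity and free space in GB.
Get-WmiObject -Class Win32_Volume |
    Select-Object Name,
        @{ Name = 'CapacityGB'; Expression = { [math]::Round($_.Capacity / 1GB, 1) } },
        @{ Name = 'FreeGB';     Expression = { [math]::Round($_.FreeSpace / 1GB, 1) } }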

-KKB

WINDOWS 10 Launch and Imagine Cup

MSDN Blogs - 10 hours 22 min ago

This is a pretty awesome week to be in Seattle.

I am currently out in Seattle with the UK Imagine Cup team. For the past few months we have been preparing for the finals of the Imagine Cup 2015, with our initial presentations taking place on Wednesday. The atmosphere here on campus at the University of Washington is electric, with all the worldwide teams staying at Alder Hall.

Teams have travelled from all over the world to be here and take part in the worldwide finals. This is such an amazing week and a once-in-a-lifetime experience for these students. To make it even more impressive, the teams are going to be on the Microsoft campus for the launch of Windows 10.

Microsoft has always embraced a vision for connecting people and technology in powerful new ways and the release of Windows 10 will connect our Imagine Cup students with new opportunities to realise their potential.

Windows 10 brings the most unified platform we’ve ever had, making it easier than ever for students to code for multiple devices and interfaces.

Every year we see the latest innovative technologies in Imagine Cup projects. Students have been doing great work with motion control and natural user interfaces for years now, as well as connected devices in the Internet of Things, voice control, and virtual and augmented reality.

Now we have a unified platform in Windows 10 that supports these and other technologies, making it easier than ever for students to get from dream to working code in the shortest possible time.

Our 2016 season of Imagine Cup kicks off at the conclusion of the World Championship on July 31. We’re going to see incredible work over this next year as our students master Windows 10 and show us how big their dreams can be.

On July 29, we will make Windows 10 available to the world, across 190 countries, as a free upgrade.

We are really excited to deliver Windows 10, and its many, many innovations, to the world. We want to celebrate with people who use Windows - all 1.5 billion of you - and the nearly 5 million Windows Insiders who helped us build Windows 10.


We seek to inspire and empower people around the world - to not just upgrade Windows, but also to upgrade the world. The opportunity is unique but it is true to the mission of the company and to the passion we each share to make a difference in the lives of our customers.

We’ll get started on July 29, when Windows 10 first becomes broadly available.  We’ll celebrate the launch of Windows 10 by celebrating the people and organisations who upgrade the world every day – and by helping them do even more good in their communities. 

We’d love for you to join us. #UpgradeYourWorld.
