Feed aggregator

Choir to Give Voice to Works of “Mozart and More”

SPSCC Posts & Announcements - Tue, 03/17/2015 - 13:07

The South Puget Sound Community College Concert Choir and the Puget Sound Community Choir join voices on Tuesday, March 17 for “Mozart and More.”

Joining the choirs on the stage will be choir director Molly McNamara, and piano accompanists Jennifer Hermann and Patrice Barnett. Hermann will also perform as a piano soloist. Additionally, special guest Huw Edwards, conductor of the Olympia Symphony Orchestra, will bring symphony musicians as well.

In addition to works by Mozart, the choir and company will perform pieces by Brahms, Dan Forrest, Allister MacGillivray and Ralph Vaughan Williams, and will include arrangements by Lajos Bardos, Diane Loomer, Mark Wilberg and Joseph Flummerfelt.

The concert begins at 7 p.m. on Tuesday, March 17 on the Kenneth J. Minnaert Center for the Arts Main Stage. Tickets are free. Donations can be made at the door.

More information is available online at www.spscc.edu/entertainment or by calling (360) 596-5507.

Error handling, part 3: the ETW way

MSDN Blogs - Tue, 03/17/2015 - 13:00

<< Part 2

Part 2 ended with this summary: in a good error reporting system, the errors should have both types/codes for automatic handling and free-form, human-readable strings with a detailed description. However, there is one more way, a sort of "middle" way, that is used by Event Tracing for Windows (ETW).

The idea there is that a message contains not just a simple string but an event code that can be used to look up the formatting string in a manifest, plus the data to be substituted into that formatting string. The actual conversion to a string happens at the last stage, when the user wants to read the message. At that point the information about the message type gets looked up by its code, the formatting string and the format of the arguments get extracted from it, and the formatting gets applied.
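To make the idea concrete, here is a minimal sketch of the "code plus raw data, formatted only at read time" pattern. It is written in TypeScript purely for illustration and does not use any real ETW or Windows API; the event ids, the manifest contents, and the %1/%2 placeholder style are all invented for the example.

interface EventRecord {
    eventId: number;                // key into the manifest
    args: (string | number)[];      // raw argument values, not yet formatted
}

// The "manifest" maps event ids to human-readable formatting strings.
const manifest: { [eventId: number]: string } = {
    1001: "Failed to open file %1 (error code %2)",
    1002: "Connection to %1 timed out after %2 ms"
};

// Formatting happens only on the reader's side, with the reader's copy of the manifest.
function render(record: EventRecord, messages: { [eventId: number]: string }): string {
    const template = messages[record.eventId];
    if (template === undefined) {
        // Without the manifest the record is just an opaque id plus raw values.
        return "Unknown event " + record.eventId + ": " + JSON.stringify(record.args);
    }
    return template.replace(/%(\d+)/g, (_match, n) => String(record.args[Number(n) - 1]));
}

// The producer stores only { eventId: 1001, args: ["C:\\data.txt", 5] };
// the reader turns it into "Failed to open file C:\data.txt (error code 5)".
console.log(render({ eventId: 1001, args: ["C:\\data.txt", 5] }, manifest));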

This approach has several advantages:

  • The messages in the binary format are short and efficient. Since ETW messages seem to have been born out of the profiling tools, that was probably important.
  • The localization is applied at the last stage, so everything gets translated to the end user's language, even if the message was generated on a system with another locale.
  • The automated tools that process the error logs don't need to parse the message text (which is particularly difficult if the messages may be in different languages); they get the data values directly in the binary format.

The catch is that to do all that, the reader must have access to the manifest that describes the meaning and formatting of the messages. Moreover, it must be the exact same (or at least a future, backwards-compatible) version of the manifest as the one used by the producer of the messages. That is fine for messages being processed on the same computer in real time, but much more difficult after the messages have been copied to another computer for processing, or even if the messages have simply been stored for some time. So it's really not the best approach for massive collection and long-term storage. Without the manifest, the messages are just an unreadable jumble of bits.

And of course the files with the ETW messages are binary files and need tools for reading them. PowerShell provides a bunch of commands, such as Get-WinEvent, that help a bit.

And there is also the pain of writing the manifests in XML.

And there is no provision for linking the ETW messages together, as was discussed in Part 1.

<< Part 2 (To be Continued...)

 

Application Insights GSM service experiencing issues 3/17 - Resolved

MSDN Blogs - Tue, 03/17/2015 - 12:17

Final Update: Tuesday, 3/17/2015 20:15 UTC

We’ve confirmed that all systems are back to normal with no customer impact as of 3/17  19:41 UTC. Our logs show the incident started on 3/17 16:21 UTC and that during that time nearly 25% of customers experienced issues with creating webtests or alert rules.

• Root Cause: There was an issue with the Azure Storage service in East US, which caused our service to fail intermittently. More information is available at http://azure.microsoft.com/en-us/status/

• Incident Timeline:  3/17 16:21 UTC through  3/17 19:41 UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Application Insights Service Delivery Team

Initial Update: Tuesday, 3/17/2015 19:09 UTC

We are actively investigating issues with the Application Insights GSM service. Some customers may experience failures when creating webtests and alert rules. We are working hard to resolve this issue but currently have no estimate for resolution.

• Next Update: Before 22:00

We apologize for any inconvenience.

-Application Insights Service Delivery Team


 

Win2D and CoreWindow

MSDN Blogs - Tue, 03/17/2015 - 12:13

We recently added the ability for Win2D to draw into a CoreWindow swapchain.  This allows Win2D to be used standalone, creating C# or C++ applications that do not also require XAML.  The new CoreWindowExample shows how to do this.

There are many things we could do to make CoreWindow support easier and more complete.  Our backlog lists some of them:  for instance we could create a Visual Studio template to help get you started writing Win2D apps without XAML.  We could go further and provide things like game loop behavior along the same lines as CanvasAnimatedControl.

We could do that, but my question is: should we?

Would you use it if we did?

Are there people out there hankering to use Win2D without XAML, or are you all using XAML anyway for other reasons? (In which case it would be a waste of time for us to spend much more effort improving our CoreWindow support.)

How to create a RemoteApp template image in Azure

MSDN Blogs - Tue, 03/17/2015 - 12:06

You asked for it and we have it for you – you can now use Azure VMs to create images for RemoteApp. We’re also making available an Azure VM Gallery Image that meets all the requirements for RemoteApp – you can use this as a template to create your own custom RemoteApp images.

My name is Sandeep Patnaik, and I work in the Remote Desktop team. Many of you reached out to us via the Azure RemoteApp forum and asked for a way to import images from your Azure virtual machines library. This feature is now live on the Azure Management Portal. Step-by-step instructions are provided below for your help.

We went a step further and have created a starter image that meets all Azure RemoteApp prerequisites. This image, Windows Server Remote Desktop Session Host, is now available in the Azure Virtual Machine image gallery for you to use. The image is based on Windows Server 2012 R2 and meets all the requirements for RemoteApp (as outlined here).

 

Creating a custom image based on the VM image

You can follow these steps to create your custom image based on the image we provide:

1. Create an Azure virtual machine using the “Windows Server Remote Desktop Session Host” image from the Azure virtual machine image gallery. This image meets all the Azure RemoteApp template image requirements. For details, see Create a VM running Windows.

2. Connect to the VM and install and configure your applications and perform any additional Windows configurations required by your applications. For details, see How to Log on to a Virtual Machine Running Windows Server.

3. Run the validation script by double-clicking the "ValidateRemoteAppImage" icon on the desktop. This script validates all the prerequisites for Azure RemoteApp. Ensure that all errors reported by the script are fixed before proceeding to the next step.

4. Run Sysprep to generalize the VM and capture the image. For details, see How to Capture a Windows Virtual Machine to Use as a Template.

 

Adding this image to Azure RemoteApp template image library

Follow these steps to import the above image into Azure RemoteApp template gallery:

1. Go to the Template Images tab under the RemoteApp extension in Azure Management Portal.

2. To launch the Add new template image wizard, click the "UPLOAD OR IMPORT A TEMPLATE IMAGE" link (shown when there are no template images) or click the "+Add" button (shown when you have at least one template image).

3. Select the "Import an image from your Virtual Machines library" option.

4. In the next page, select your custom image from the drop down and confirm that you have followed the steps listed below to create your image.

5. In the next page, provide a name and location for the new RemoteApp template image.

You can import images from any Azure location supported by Azure Virtual Machines to any Azure location supported by Azure RemoteApp. Depending on the locations, the import can take up to 25 minutes.

Thank you!

Note: Questions and comments are welcome. However, please DO NOT post a request for troubleshooting by using the comment tool at the end of this post. Instead, post a new thread in the Azure RemoteApp forum.

Becoming a change agent for growth and profitability

MSDN Blogs - Tue, 03/17/2015 - 11:57

In my previous blog posts I’ve talked about the need to rethink and make changes to various parts of your business to truly take advantage of the opportunity the growth of online services represents.  Those changes will differ depending on each unique Partner organisation and understanding WHAT those changes look like is still only the first step.  As you build out your plan it’s absolutely critical to consider the HOW in terms of landing those changes effectively across your business.

  • How do you juggle continuing to operate an effective, profitable business today whilst investing time, energy and resources in making necessary changes to remain profitable in the future?
  • How do you gain buy-in from your management team, board of directors, shareholders, peers and employees to the changes so they are aligned and driving in the same direction?
  • How do you maintain clarity in direction and purpose when the reality is that you're dealing with varying levels of ambiguity?

We have teamed up with Fiona Hathaway, APAC HR Director for Microsoft Consulting Services to deliver a workshop to help guide you through some of these questions. Fiona has been leading a similar workshop with our own Services Leadership Team to assist them in leading change in their own organisation.

Below is a summary of Fiona’s half day workshop:

The Leader's Role in Driving Change

The technology world is changing, and as leaders we must respond to and re-shape the industry. Our organisations are looking to us for clarity; ask yourself, "Who am I in the face of these challenges?" The session is designed to prompt leaders to effectively lead change, transform businesses, and inspire their teams. We discuss the need to detect early signals at the consumer, customer, competitor, and ecosystem levels, and translate those signals into insights and proactive business choices. The goal is to create clarity on your organisation's purpose and future, and to drive growth and profitability across an increasingly changing business portfolio while learning to balance short- and long-term financial trade-offs.

  • Wednesday 29th April – Melbourne
  • Friday 1st May – Sydney

To register for the workshops you will need your Microsoft Partner Network login.  If you’re not yet an MPN member take a look at Jack Pilon’s blog post on MPN 101 which highlights the benefits and how to sign up. If you have any further questions or to register please email partnerau@microsoft.com.

Microsoft to Host the C & C++ Brasil Conference on March 28

MSDN Blogs - Tue, 03/17/2015 - 11:18
The time has come! Come and take part in the most famous, most productive, and most fun meetup for the most famous, most productive, and most... productive language! On March 28, Microsoft will host the C & C++ Brasil Conference, organized by MVPs and experts in the language, Wanderley Caloni and Fabio Galuppo. Fernando Figuera, Product Marketing Manager for Visual Studio in Brazil, will give the keynote, presenting the future of the platform. Over a Saturday of sessions, the goal is to show the developer community...(read more)

Microsoft announces the Azure Internet of Things suite

MSDN Blogs - Tue, 03/17/2015 - 10:44

Microsoft Azure IoT services

The Internet of Things (IoT) provides the opportunity to enable and extend digital business scenarios, helping you better connect people, processes and assets, and better harness data across your business and operations.

Improving efficiencies, enabling innovation and fueling transformation are the cornerstones of Microsoft’s vision for the digital business. With Microsoft Azure IoT services, you can monitor assets to improve efficiencies, drive operational performance to enable innovation, and leverage advanced data analytics to transform your company with new business models and revenue streams.

Learn more about the Azure IoT services:

Azure Event Hubs
Azure DocumentDB
Azure Stream Analytics
Azure Notification Hubs
Azure Machine Learning
Azure HDInsight
Microsoft Power BI.

Announcement of Azure IoT Suite.

The Azure IoT Suite is an integrated offering that takes advantage of all the relevant Azure capabilities to connect devices and other assets, capture the diverse and voluminous data they generate, integrate and orchestrate the flow of that data, and manage, analyse, and present it as usable information to the people who need it to make better decisions, as well as to intelligently automate operations.

The offering, while customizable to fit the unique needs of organizations, will also provide finished applications to speed deployment of common scenarios we see across many industries, such as remote monitoring, asset management and predictive maintenance, while providing the ability to grow and scale solutions to millions of “things.”

The Azure IoT Suite will provide a simple and predictable pricing model despite the rich set of capabilities and broad scenarios it delivers, so our customers can plan and budget appropriately. This approach is aimed at simplifying the complexities that often come with implementing and costing IoT solutions.

For more details on the Azure IoT suite see http://bit.ly/1xrMBiD

Top Respected Reseller Introduces a Cloud Financial Online Business Solution Built On Microsoft Dynamics CRM - Gravity Software™

MSDN Blogs - Tue, 03/17/2015 - 10:14

Online Business Management Cloud Solution Built On Microsoft Dynamics CRM

Southfield, Michigan - March 17, 2015 – John Silvani, President & Founder of Gravity Software™ (Gravity), announces the launch of Gravity, the first online business management application written exclusively for smart businesses. Gravity provides small to medium businesses (SMBs) the necessary tools and processes to help them grow.

Gravity's robust solution is built on the Microsoft Dynamics™ CRM Online platform and gives SMBs the distinct advantage of having their Financials and CRM fully integrated on one platform to meet their unique business needs. Microsoft Dynamics CRM hosts over 40,000 companies worldwide and has become one of the most reliable and scalable CRM platforms available today. In one screen, for example, you can easily navigate from Gravity's back-office financial business solution to your front-office Sales, Service, and Marketing.

“This is an exciting time to be part of the Gravity Software team as an increasing amount of SMBs are adopting cloud based applications,” said Randall Ykema, CTO of Gravity Software.  “When we considered the needs of businesses and the solutions we wanted to bring to market, we evaluated several development tools and platforms.  We felt the combination of Microsoft’s development tools along with Dynamics CRM gave us the greatest platform for offering a complete toolset to aid smart businesses growth.”

By utilizing the Microsoft Dynamics CRM Online cloud based platform, Gravity makes it simple for businesses to operate from anywhere, at any time without all the startup costs associated with other accounting applications.

"We are thrilled about the launch of Gravity's business management capabilities. Our goal is to fill the gap between lower-end applications like QuickBooks and higher-end applications like Microsoft Dynamics™ GP," said John Silvani. "Over time, Gravity will provide our customers more than just accounting tools. We are passionate about helping companies become smarter."

More SMBs are considering cloud-based applications so they don't have to worry about servers, IT infrastructure, security issues, and upgrade costs, to name a few. Gravity's guiding principle is to simplify the lives of our users while providing SMBs the platform they need to grow. Gravity will be available on a monthly subscription basis in April 2015.

About Gravity Software

Gravity Software, LLC (Gravity) is an online cloud business management software company that provides financial business solutions exclusively written for smart businesses. Gravity’s robust solution is built on the Microsoft Dynamics CRM platform to give businesses the distinct advantage of having your Financials and CRM fully integrated on one platform.  More than just accounting, Gravity provides businesses with the necessary tools and processes to help drive sales, improve customer service and increase productivity. Gravity Software - Simply Innovative Business Management.  www.go-gravity.com.

Gravity Software was founded in October 2013 by John Silvani, one of the nation's most respected resellers of business applications. In May 2001, he was a recipient of the Crain's "Who's Who in Technology" award and continued to receive awards for his companies throughout his career. In 2011, he was recognized by Lawrence Technological University as a "Leaders & Innovators" honoree in Michigan. His background and leadership have provided him with an understanding of the demands and needs of growing organizations.

# # #

Gravity Software and the Gravity logo are trademarks of Gravity Software LLC. All other company and product names mentioned herein may be trademarks of their respective owners.

Free ebook: Microsoft System Center Deploying Hyper-V with Software-Defined Storage & Networking

MSDN Blogs - Tue, 03/17/2015 - 10:02

Download all formats (PDF, Mobi and ePub) hosted by the Microsoft Virtual Academy

We’re happy to announce the release of our newest free ebook, Microsoft System Center Deploying Hyper-V with Software-Defined Storage & Networking (ISBN 9780735695672), by Microsoft TechNet and the Cloud Platform Team; Series Editor: Mitch Tulloch.

Introduction
When you’re looking at testing a new IT solution—such as implementing a software-defined datacenter that includes virtualization, networking, and storage—the best starting point is always to get advice from someone who has already done it. You can learn from experience what to do and what to avoid. That’s the idea behind this book. We’ve gone through the work of deploying Windows Server, Microsoft System Center, and the innovations that Microsoft Azure has brought to these technologies. Our goal is to give you the step-by-step benefit of our proof-of-concept implementation to save you time and effort. And we want to show you how you can take advantage of innovation across the datacenter and the cloud to simplify your infrastructure and speed delivery of services to the business.

Transforming the datacenter
You know that IT infrastructure matters. With the right platform, you can reduce costs, respond more quickly to business needs, and take on the challenges of big data and mobility. IT today is under more pressure than ever before to deliver resources faster, support new business initiatives, and keep pace with the competition. To handle these demands, you need a flexible, resilient infrastructure that is easy to manage and easy to scale. This means you need to be able to take everything you know and own today and transform those resources into a software-defined datacenter that is capable of handling changing needs and unexpected opportunities.

With Windows Server, Microsoft System Center, and Microsoft Azure, you can transform your datacenter. Virtualization has enabled a new generation of more efficient and more highly available datacenters for your most demanding workloads. Microsoft virtualization solutions go beyond basic virtualization capabilities, such as consolidating server hardware, and let you create a comprehensive software-defined compute engine for private and hybrid cloud environments. This flexibility helps your organization achieve considerable cost savings and operational efficiencies with a platform on which you can run the most demanding, scalable, and mission-critical of workloads.

You can find a large part of those savings and some of the best options for simplifying the datacenter in the area of storage. Microsoft’s software-defined storage (SDS) capabilities enable you to deploy low-cost, commodity hardware in a flexible, high-performance, resilient configuration that integrates well with your existing resources. Another area of savings and optimization is in networking innovation. With software-defined networking (SDN), you can use the power of software to transform your network into a pooled, automated resource that can seamlessly extend across cloud boundaries. This allows optimal utilization of your existing physical network infrastructure, as well as agility and flexibility resulting from centralized control, and business-critical workload optimization from deployment of innovative network services. Virtual networks provide multitenant isolation while running on a shared physical network, ultimately allowing you to manage resources more effectively, without the complexity associated with managing traditional networking technologies such as Virtual Local Area Networks (VLANs).

System Center provides the unified management capabilities to manage all of this virtualized infrastructure as a whole. This software-defined model lets you pool resources and balance demand across all the different areas of the business, moving resources to the places where you need them most, increasing agility and the overall value of IT to the business. Although the benefits of a software-defined datacenter are clear, designing and implementing a solution that delivers the promised benefits can be both complex and challenging. As with all new advances in technology, experienced architects, consultants, and fabric administrators often find it difficult to understand the components and concepts that make up a software-defined datacenter solution. We wrote this book to help.

Who should read this book?
You only have to perform a quick web search on "deploying Hyper-V," "configuring Storage Spaces," or "understanding Hyper-V Network Virtualization," to realize that a wealth of information is available across Microsoft TechNet, blogs, whitepapers, and a variety of other sources. The challenge is that much of that information is piecemeal. You'll find an excellent blog post on configuring Storage Spaces, but the networking configuration used is vastly different from the whitepaper you've found that guides you through configuring network virtualization. Neither of these sources aligns with a bare-metal Hyper-V deployment article you've been reading. The point here is that it's difficult to find a single end-to-end resource that walks you through the deployment of the foundation of the Microsoft software-defined datacenter solution, comprising software-defined compute, storage, and networking, from the racking of bare-metal servers, through to the streamlined deployment of virtual machines (VMs). This book does just that.

Providing a POC deployment, this book gives the what, why, and the how of deploying the foundation of a software-defined datacenter based on Windows Server 2012 R2 and System Center 2012 R2. If you’re an IT professional, an infrastructure consultant, a cloud architect, or an IT administrator, and you’re interested in understanding the Microsoft software-defined datacenter architecture, the key building blocks that make up the solution, the design considerations and key best practices, this book will certainly help you. By focusing on a POC scale, you can implement a solution that starts small, is manageable, and is easy to control yet helps you learn and understand why we chose to deploy in a certain way and how all of the different pieces come together to form the final solution.

What topics are included in this book?
This book, or proof-of-concept (POC) guide, will cover a variety of aspects that make up the foundation of the software-defined datacenter: virtualization, storage, and networking. By the end, you should have a fully operational, small-scale configuration that will enable you to proceed with evaluation of your own key workloads, experiment with additional features and capabilities, and continue to build your knowledge.

The book won’t, however, cover all aspects of this software-defined datacenter foundation. The book won’t, for instance, explain how to configure and implement Hyper-V Replica, enable and configure Storage Quality of Service (QoS), or discuss Automatic Virtual Machine Activation. Yet these are all examples of capabilities that this POC configuration would enable you to evaluate with ease.

Chapter 1: Design and planning This chapter focuses on the overall design of the POC configuration. It discusses each layer of the solution, key features and functionality within each layer, and the reasons why we have chosen to deploy this particular design for the POC.
Chapter 2: Deploying the management cluster This chapter focuses on configuring the core management backbone of the POC configuration. You’ll deploy directory, update, and deployment services, along with resilient database and VM management infrastructure. This lays the groundwork for streamlined deployment of the compute, storage, and network infrastructure in later chapters.
Chapter 3: Configuring network infrastructure With the management backbone configured, you will spend time in System Center Virtual Machine Manager, building the physical network topology that was defined in Chapter 2. This involves configuring logical networks, uplink port profiles, port classifications, and network adaptor port profiles, and culminates in the creation of a logical switch.
Chapter 4: Configuring storage infrastructure This chapter focuses on deploying the software-defined storage layer of the POC. You’ll use System Center Virtual Machine Manager to transform a pair of bare-metal servers, with accompanying just a bunch of disks (JBOD) enclosures, into a resilient, high-performance Scale-Out File Server (SOFS) backed by tiered storage spaces.
Chapter 5: Configuring compute infrastructure With the storage layer constructed and deployed, this chapter focuses on deploying the compute layer that will ultimately host workloads that will be deployed in Chapter 6. You’ll use the same bare-metal deployment capabilities covered in Chapter 4 to deploy several Hyper-V hosts and then optimize these hosts to get them ready for accepting virtualized workloads.
Chapter 6: Configuring network virtualization In Chapter 3, you will have designed and deployed the underlying logical network infrastructure and, in doing so, laid the groundwork for deploying network virtualization. In this chapter, you’ll use System Center Virtual Machine Manager to design, construct, and deploy VM networks to suit a number of different enterprise scenarios.

By the end of Chapter 6, you will have a fully functioning foundation for a software-defined datacenter consisting of software-defined compute with Hyper-V, software-defined storage, and software-defined networking.

This book is focused on the steps to implement the POC configuration on your own hardware. Where applicable, we have included detail on design considerations and best practices and extra detail on certain features and capabilities. These are intended to ensure that you come away from this book with a rounded view of the what, why, and how when it comes to deploying the foundation of a software-defined datacenter based on Windows Server 2012 R2 and System Center 2012 R2.

Acknowledgments
The authors would like to thank Jason Gerend, Jose Barreto, Matt Garson, and Greg Cusanza from Microsoft for providing valuable guidance and contributions for the content of this book. Without their expertise and guidance, this book would not be as thorough, detailed, and accurate. Our sincere thanks go to them for their time and efforts in making this happen. The authors would also like to thank Karen Forster for proofing and copyediting their manuscript, Deepti Dani for her work on formatting and final layout, and Masood Ali-Husein for his reviewing work on this project.

Convergence 2015 Interactive Discussion Topics: Customization Tips and Tricks

MSDN Blogs - Tue, 03/17/2015 - 09:43
Here are the 15 tips in 15 minutes that we shared as part of our Microsoft Dynamics CRM Customizations Tips and Tricks interactive discussion (ID15C351-R1 & ID15C351-R2) at Convergence 2015. We wanted to share them in a blog article for those who attended (to reduce the need for notes) and for those who could not attend, and to help reduce our own need for more PowerPoint slides. For those in attendance, feel free to ask any questions during the last 45 minutes of our session, or come find us in the...(read more)

TechNet Wiki International Summit 2015 (TNWikiSummit15) Begins Today!!! - Here are the Day 1 Presentations

MSDN Blogs - Tue, 03/17/2015 - 09:30

Today is a special day for all TechNet and MSDN Communities, especially for those who collaborate by sharing their knowledge on TechNet Wiki!

This is because we are starting a new phase, where some of our top authors will have the opportunity to present their knowledge in "real time," demonstrating how to get a solution provided in one of their articles awarded on TechNet Guru! In the Wiki track, we'll follow our expert Wiki Ninja bloggers and community council members as they dig deeper into the top passions they drive on the Wiki Ninjas blog, as well as on TechNet Wiki!

 

 

Today we will also have presentations from two international communities who contributed greatly to the success of TechNet Wiki over the last year: the Turkish and Brazilian communities!

Some of our top members from the Turkish Community, the TAT (Turkish Avengers Team), will present "Best Practices" that they have used to get as much success as possible when sharing knowledge in their technical articles.

The Brazilian Community will also present their "Best Practices", showing how they do it... in the Portuguese language... with a high level of quality, for so many years!

See today's schedule below (March 17th, 2015):

 

Sessions at 01:00 PM GMT-8 / 07:00 PM GMT-2 / 09:00 PM GMT / 11:00 PM GMT+2

"TechNet Wiki Social Synergy" by Ed Price

Description: Why should you write a TechNet Wiki article when you blog, write answers on forums, create Gallery samples, or make videos instead?

Join Ed as he digs into the idea that you can do it all, synergize it together, and come up with something far more impactful than any one solution.

Sessions at 01:45 PM GMT-8 / 07:45 PM GMT-2 / 09:45 PM GMT / 11:45 PM GMT+2

"SharePoint: Adding client-side controls to an AngularJS app in Office 365" by Matthew Yarlett

Description: A quick look at how to implement a number of the client-side controls in an AngularJS single page app, hosted on Office365.

We'll be looking at the people picker, rich-text, taxonomy picker, a calendar control and hopefully more!

"Wiki Life (Turkish): Best Practices" by Hasan Dimdik, Erdem SELÇUK, Recep YUKSEL and Ugur Demir

Description: Best practices for writing excellent articles, based on http://aka.ms/WikiUserGuide.

"SQL Server Memory" by Shashank Singh aka Shanky

Description: This session will cover how SQL Server memory has evolved from 2005 to 2014, the various changes along the way, and how SQL Server memory functions.

Sessions at 02:30 PM GMT-8 / 08:30 PM GMT-2 / 10:30 PM GMT / 12:30 AM GMT+2

"Wiki Life (Portuguese): Best Practices" by Alan Carlos, Durval Ramos and Luciano Lima

Description: Best practices for writing excellent articles, based on http://aka.ms/WikiUserGuide.

"Exchange Server Kurulum Senaryoları" ("Exchange Server Installation Scenarios") by Recep YUKSEL (Turkish language only)

Description: This session covers installation and usage scenarios for the Exchange Server platform, helping people who want to use Exchange Server understand how to position the product in their environment and choose the installation and usage scenario that suits them best.

Sessions at 03:15 PM GMT-8 / 09:15 PM GMT-2 / 11:15 PM GMT / 01:15 AM GMT+2

"Small Basic" by Ed Price

Description: Join us as Ed gives a high-level presentation of our Small Basic curriculum, references, and content on TechNet Wiki!

"Writing a Good Wiki Article" by Matthew Yarlett

Description: A few tips on making a good wiki article a great wiki article!

"Segurança em Profundidade em Ambientes Microsoft" ("Defense in Depth in Microsoft Environments") by Luciano Lima (Portuguese language only)

Description: This session will present best practices for keeping a Microsoft environment protected from threats and attacks.

 

 

Join now to enjoy these TNWikiSummit15 Presentations and others.

Remember: to watch the presentations live or to view the recordings, you must register in advance!

Registration is limited for each presentation!

 

 

See you soon here, at the TNWikiSummit15!

Brazilian Wiki Ninja Durval

Wiki Ninja Ed

Lab of Things enables research and teaching

MSDN Blogs - Tue, 03/17/2015 - 09:00

Learn how the LoT platform is facilitating experimental research that uses connected devices.

...(read more)

dotnetConf 2015 – Join us live March 18th and 19th

MSDN Blogs - Tue, 03/17/2015 - 09:00

dotnetConf is a free, live web event featuring speakers from Microsoft product teams and the .NET community at-large. Together, across two content-packed days, we’ll cover .NET 2015, ASP.NET 5, .NET open source, and cool community presentations – including several from our MVPs, Xamarin partners, and internal Microsoft teams.

For the complete list see: dotnetConf 2015 agenda

Join our Live Q&A

dotnetConf is about you, the .NET developer, and what’s on your mind! We want you to participate in the discussion by joining the live Q&A on Channel 9 Live, where you can submit your questions to our .NET experts. You can also follow us on Twitter and use the #dotnetconf tag to continue the dialog.

New to dotnetConf?

Get started by reading this dotnetConf 2015 post on the .NET blog. You can also view recordings of dotnetConf 2014 available on Channel 9 for last year’s topics and presentations.

See you there!

Dmitry Lyalin, Sr. Product Manager for Visual Studio
Follow me on Twitter, @lyalindotcom

Dmitry has been at Microsoft for over 7 years, working first in Microsoft Consulting and Premier Support out of NYC before joining the Visual Studio Team and moving to Redmond, WA. In his spare time he loves to tinker with code, create apps, and is an avid PC gamer.

Error opening installation log file. Verify that the specified log file location exists and that you can write to it.

MSDN Blogs - Tue, 03/17/2015 - 08:58
"Error opening installation log file. Verify that the specified log file location exists and that you can write to it." This error appears when attempting to install any Setup package. For example, try installing any one of the following packages: the Microsoft .NET Framework patch KB2972216, the Microsoft Visual C++ 2008 Redistributable Package, or Microsoft Forefront Endpoint Protection. The sample error screenshots are displayed below. There was a log file (dd_NDP45-KB2972216-x64_decompression_log...(read more)

Guest Post: Gil Amran talks about using TypeScript at Wix

MSDN Blogs - Tue, 03/17/2015 - 08:45

I'm pleased to share a contributed post from one of TypeScript's community members.  Today, guest writer Gil Amran from the Wix development team talks about using TypeScript to build WixStores, some of the advantages and challenges of using TypeScript, and what they learned doing so.

A big "thanks!" to Gil for telling us about the process at Wix and for detailing his team's experiences in this blog post.

 

Developing Large-Scale Applications with TypeScript

As a front-end developer, you’ve probably heard about TypeScript. Maybe you even tried using it. But not many developers know what it is like to build a large-scale project from scratch using TypeScript.

A year ago, with a team of eight front-end developers, we started rebuilding the entire Wix eCommerce solution; the new product is called WixStores. Most of the developers worked on the codebase at the same time. And working together without breaking one another’s code was a challenge.

The WixStores application was divided into several small projects. We started using TypeScript for only one of the projects. The others used pure JavaScript. After seeing the benefits of TypeScript, we decided to use it for all of the projects.

Developing Large-Scale Applications Today

Large-scale web applications are usually divided into several layers: the framework (such as AngularJS, Ember, or Backbone), the view (HTML with CSS, SASS, or LESS), and the language.

This blog post is focused on the language layer. Obviously, writing pure JavaScript is an option. But in a large-scale web application, it is less than ideal. Instead, there are languages that compile to JavaScript. These languages are very helpful as you will see next, and there are already a few of them. The most well known are CoffeeScript, ClojureScript, Dart, and TypeScript.

 

So, Why Choose TypeScript?

Technology Decoupling

CoffeeScript and Dart are languages that compile to JavaScript. Unfortunately, the generated JavaScript does not look anything like the original code. It can be hard to understand, read, and debug. On the other hand, TypeScript is a superset of JavaScript. TypeScript generates JavaScript code that is easy to read and debug, and that looks very much like the original TypeScript code. This means that if (for any reason) you wish to return to plain JavaScript, you can take the generated JavaScript and work with it directly. In other words, there is no dependency on TypeScript, so it is easy to stop using it. With CoffeeScript and Dart it would be difficult to do that.

Type Safety

As the name suggests, TypeScript is a strongly typed language, and type safety is the most important added value that TypeScript offers. When creating a small to medium JavaScript application with one or two developers, it’s OK to go without any type safety. But when the application grows, that growth might lead to messy code that is very hard to maintain and debug.

Taking advantage of types means that you will get type errors at compile time instead of runtime (or not at all). And when I’m saying “types,” I don’t just mean numbers or strings, I mean interfaces with a very clear definition. For example:

interface CatalogAPI {
  productsCount: number;
  defaultCategory: Category;
  loadProductDetails(productId: string): ng.IPromise<DetailedProduct>;
  getProducts(maxProducts?: number): Product[];
  updateProduct(product: Product | DetailedProduct): boolean;
}
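As a hypothetical illustration (this is not actual WixStores code), here is how a consumer of an interface like CatalogAPI benefits from those definitions; Category, Product and DetailedProduct are assumed to be defined elsewhere in the application:

function showFirstProduct(catalog: CatalogAPI) {
  const products = catalog.getProducts(10);   // OK: returns Product[]
  console.log(products[0]);

  // catalog.getProducts("10");               // compile error: a string is not a number
  // catalog.loadProductDetails(42);          // compile error: a number is not a string
}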

Here are three benefits of type safety:

It’s Optional

Type safety is a wonderful feature, but you don’t have to use it if you don’t want to.

On the WixStores team, we were strict about types when defining a class API. We wanted to be sure that the developer who calls an API will know how to work with it. Any misuse of the API or breaking changes to the API will raise compile errors.

Because TypeScript's type safety is optional, we allowed the freedom to work without types in internal function implementations. It was up to each developer to decide whether to use them.
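A hypothetical sketch of that convention (not actual WixStores code): the public API is fully typed, while an internal helper skips annotations and relies on inference.

interface Product {
  id: string;
  name: string;
}

class CatalogService {
  private raw = [{ id: "p1", name: "Mug" }, { id: "p2", name: "Poster" }];

  // Public API: fully typed, so any misuse by a caller raises a compile error.
  getProducts(maxProducts?: number): Product[] {
    return this.toProducts().slice(0, maxProducts);
  }

  // Internal helper: no explicit annotations; TypeScript infers the types.
  private toProducts() {
    return this.raw.map(r => ({ id: r.id, name: r.name }));
  }
}

console.log(new CatalogService().getProducts(1)); // [{ id: "p1", name: "Mug" }]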

It’s Just Another Test

The result of TypeScript’s compilation is of course JavaScript. The most obvious difference between the TypeScript code and the generated code is that the types have been removed—they are used during the compilation phase only. You can think of it as another test that runs at build time to verify that all the function calls are valid. We nicknamed it WarningScript. :-)

API Definitions and External Libraries

Interfaces are great for describing APIs. For example:

function listProducts(products) { ... }

This function definition does not tell you whether products is a list of actual product objects or a list of product IDs. Also, you cannot tell whether this function returns a result. The only way to get this information is to investigate the actual code, which wastes time.

On the other hand, an API like this:

function listProducts(products:Product[]):Product[] { ... }

tells you at a glance that products is an array of Product objects and that the result is also an array of Product objects. As you can see, the API can fully describe the function without you having to investigate the implementation. And most IDEs can work with this information and will give you real code completion, not just an estimation.

Working with external libraries like Underscore or jQuery might also be a challenge; you can look up the API by searching the web or investigating the library code. But when working with types it is much easier to find out how to work with the library correctly. You can find definition files for almost any library at DefinitelyTyped. Again, IDE code completion and compile-time type checking will use this information to alert you of any misuse of the API.
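For example, here is a hypothetical sketch of that workflow using the jQuery definitions from DefinitelyTyped; the exact reference path below is an assumption and depends on where you install the .d.ts file:

/// <reference path="typings/jquery/jquery.d.ts" />

// With the definition file referenced, the compiler and the IDE know the shape of $,
// so calls are completed and checked instead of being treated as untyped globals.
$("#product-list").addClass("loaded");
// $("#product-list").addClass(42);   // rejected: addClass expects a string (or a function)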

While it is possible to address this problem by annotating code with JSDoc-style comments, this approach relies on developers to read the comments to understand API contracts. However, comments cannot truly replace tooling support to guard against lack of types.

ES6

ECMAScript 6 is a great leap forward for the JavaScript language. There are loads of cool features, and the code is much more organized and readable.

So far, TypeScript has implemented a few of its features, like modules, classes, arrow functions, and more. TypeScript (see the roadmap) is closing the gap even further and will include almost all ES6 features, with the addition of metadata and annotations. This means that you can use these features today and the compiler will generate the required code (the same as Traceur does).
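A small sketch of what that looks like in practice (illustrative code, not from the article's project): a class with an arrow-function member, written in TypeScript today and compiled down to plain ES5 JavaScript for current browsers.

class Cart {
  private items: string[] = [];

  add(item: string): void {
    this.items.push(item);
  }

  // The arrow function keeps 'this' bound to the Cart instance,
  // even when printAll is passed around as a callback.
  printAll = () => this.items.forEach(item => console.log(item));
}

const cart = new Cart();
cart.add("Mug");
cart.printAll(); // Mug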

Community

TypeScript was created about three years ago and is constantly being updated. Lately, TypeScript is getting more and more attention. The AngularJS team and Microsoft recently announced that Angular 2 will be written in TypeScript. Facebook is also pushing in the same direction with Flow, an alternate static type checker for JavaScript.


Can It Be That Good? What Are the Cons?

Boilerplate Code

Working with types, classes, and interfaces can lead to over-engineering. Many Java developers feel that they waste too much time on boilerplate code just to define a class. This can also happen in TypeScript if you are not careful. When a language gives you tools, you can abuse them.

Generics (a type safety feature) are a good example. We did not create our own classes that use generics, but we did use generics from external libraries, like AngularJS promises.
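As a hypothetical illustration of consuming generics from an external library (the names loadProductDetails and DetailedProduct reuse the earlier CatalogAPI example; ng.IPromise and ng.ILogService come from the AngularJS type definitions):

function showDetails(catalog: CatalogAPI, $log: ng.ILogService): void {
  catalog.loadProductDetails("p1").then((details: DetailedProduct) => {
    // 'details' is strongly typed thanks to ng.IPromise<DetailedProduct>,
    // so a typo such as details.nmae is caught at compile time.
    $log.info(details);
  });
}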

IDE Support

There are plugins for almost all the common IDEs, including Sublime, IntelliJ, WebStorm, Vim, and even Emacs. But they are not perfect. Sometimes you will see an error in the IDE but not at compile time (in other words, a bug in the IDE plugin). In addition, you would expect the IDE to take advantage of the type definitions to let the developer navigate more easily within the code. But some IDEs do not do it.

On the plus side, the TypeScript team is now working on better tooling support and soon we will see much better IDE support.

Popularity

Despite all the attention that TypeScript has been getting lately, it is still only gaining popularity, so it's not always easy to find solutions to problems. For example, at first I didn't understand whether to use internal or external modules, and I could not find enough information online about this topic. However, I expect that as more developers begin using TypeScript, more solutions will be available via Stack Overflow, Google, and other online resources.

Also, some developers may need time to learn the language, unless they are up to date with ES6 or come from a .NET background.

Conclusion

As I've mentioned throughout, large-scale web applications are not easy to deal with when using plain JavaScript. Yes, ES6 will help organize large projects (any front-end developer should get familiar with it as soon as possible), but this is not enough for large-scale applications. Type safety can help reduce the time wasted on simple human errors, and much like TDD, it gives you the comfort of knowing that your code is covered by tests.

From type safety, ES6 support, and the current attention that TypeScript is getting, to the simplicity of reading and debugging the JavaScript code it generates, TypeScript is currently the best pick for large-scale applications. I don’t see us going back to coding in plain JavaScript.

So, if your team is larger than two or three people, give TypeScript a try when developing your next large-scale web application. If you don’t like it, just use the generated code.

Accidental Denial of Service through Inefficient Program Design Part 2 – Incorrectly Implementing Interfaces

MSDN Blogs - Tue, 03/17/2015 - 08:37

 

There are few things more annoying to users than having the performance of the computer they're using grind to a halt. This series outlines program design flaws that I've run across, explaining what they are, how they impact the system, example scenarios where the impact will be exacerbated, and safer alternatives to accomplish the same programming goal. In part two of this series, we'll be exploring:

Incorrectly Implementing Interfaces

What It Is

There are a multitude of entry points in Windows where functionality can be added to, enhanced, or even replaced through the use of objects which implement interfaces.  In many if not most of those cases, such entry points affect the user experience across many applications but are not implemented in their own processes.  In other words, the system will load those objects in other processes to use them as those processes use system functionality that those objects integrate with.  Some common examples of this would be user mode drivers and shell extensions.

How It Impacts the System

When an interface is documented, the expected behavior of each interface member is described, as well as the expected behavior of the module's exported functions that give access to the initial objects. When those members or functions do not behave in the ways the documentation states they are required to behave, unexpected results can happen. Commonly these unexpected results take place in processes that haven't done anything wrong, and it is difficult even for a veteran programmer to figure out the real root problem, while the average user or administrator incorrectly blames the application that was the victim of the bad code. It can lead to frustration with the system and even to system restores and reinstalls by users who can't resolve their issues.

Example Scenario of Exacerbation

I recently had a case where this exact kind of thing was happening; while I won’t name the specific parties involved, the scenario that I’m about to go over is in a commonly distributed component and has been known to affect the users of many applications.  The interface in this scenario is IWiaTransferCallback, and the object implementing the interface which ultimately causes the problem is passed to the IWiaTransfer::Download method of a system provided object. 

First, let's go over what an object implementing IWiaTransferCallback (which inherits from IUnknown and is COM based) and the module that contains it are required to do, based on the documentation above and the content it links to:

1.  The object needs to implement the GetNextStream, TransferProgress, QueryInterface, AddRef, and Release methods. 

2.  The module needs to implement and export the DllCanUnloadNow and DllGetClassObject functions.

Due to the actual problem in this scenario, we're going to ignore DllGetClassObject, GetNextStream, TransferProgress, and QueryInterface and assume that they are all implemented correctly. This takes out all of the functionality specific to IWiaTransferCallback and leaves us with some basic COM interface implementation methods. The linked documentation on MSDN already does a great job of explaining what each of these is supposed to do, so I'll only include a brief snippet here:

AddRef – increments the number of references to an object

Release – decrements the number of references to an object; also frees the object when there are no longer any references to the object

DllCanUnloadNow - determines whether the module from which it is exported is still in use; a module should be considered no longer in use when it is not managing any existing objects, which should be determined by the reference count for all of its objects being zero

What that boils down to is that each object should have its own reference counter so it knows whether or not it is safe to be freed, and that each object type should have its own reference counter so that the module knows whether or not an instance of each of its types is open or not.   There is a caveat to this though; the DllCanUnloadNow function doesn’t have to be exported in some scenarios and in other scenarios it is an informative context instead of an inquisitive context.  In the former case, the module won’t be unloaded until COM is uninitialized.  In the latter, the function will be called before the module is unloaded but the result of the function will be ignored.  In both cases, the module is still responsible for keeping objects and itself available while they are still in use.  Whether or not these apply can be derived from the documentation for the specific interface(s) being implemented. 
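Here is a language-agnostic sketch of that reference-counting contract, written in TypeScript purely for illustration; a real COM object implements this in native code (for example via ATL), and details such as when the module lock count is taken vary between implementations.

let moduleObjectCount = 0;    // module-wide count of live objects

class CallbackObject {
  private refCount = 0;

  addRef(): number {          // plays the role of IUnknown::AddRef
    if (this.refCount === 0) {
      moduleObjectCount++;    // first reference: the module now has a live object
    }
    return ++this.refCount;
  }

  release(): number {         // plays the role of IUnknown::Release
    const remaining = --this.refCount;
    if (remaining === 0) {
      moduleObjectCount--;    // last reference gone: free the object's resources here
    }
    return remaining;
  }
}

// Plays the role of DllCanUnloadNow: the module may only report that it can be
// unloaded when no objects it created are still alive.
function canUnloadNow(): boolean {
  return moduleObjectCount === 0;
}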

In this scenario, the implementer of IWiaTransferCallback, part of a scanner mini-driver, neglected to properly implement these functions. AddRef and Release always returned zero, it didn't keep track of its references properly otherwise, and it allowed the module to be unloaded prematurely. When IWiaTransfer::Download was called, the system called AddRef more times than it called Release on the object before returning, since it was still using that object in an RPC call. The module didn't account for this, the Release function pointed to an address within the module, and when the system later went to call Release on the object when it was done with it, the Release function no longer existed in memory. The result was an access violation that crashed whichever application was using the driver to retrieve a scan, triggered when the application called CoUninitialize to initiate cleanup on a thread. In the case of an application that only used COM from one thread, this resulted in the application crashing on exit. It turned out to be particularly problematic for applications that used COM on multiple threads, though, as it caused the application to crash whenever it cleaned up the thread that was doing the scanning.

Luckily, there was an easy way to work around this particular problem.  An application using this driver could call GetModuleHandleEx with the flags set to not unload the module before the module was unloaded.  This came with the cost of having to leave the module in memory when it really shouldn’t be needed and wasn’t something that the application should have needed to do had the driver implemented things correctly.  It did fix the problem for end users with a fairly trivial amount of code though.

Safer Alternatives

The simplest thing to do in a case like this would be to use a framework or language that already takes care of the basic IUnknown implementation for you when implementing interfaces that inherit from IUnknown. A couple of examples of this would be ATL for C++ or exposing a COM interface from a CLR binary. If you are going to implement IUnknown on an object from scratch, make sure you do it correctly. Returning the correct values from AddRef and Release makes it substantially easier to verify behavior external to your object, but the more important thing is that your module doesn't let itself be unloaded while its address space is being used.

Follow us on Twitter, www.twitter.com/WindowsSDK.

Live Stream Tomorrow's dotNetConf

MSDN Blogs - Tue, 03/17/2015 - 08:32

On March 18th and 19th, MVPs will join Microsoft product team members for dotnetConf, a free virtual event. The two-day event, co-organized by the .NET community and Microsoft, will provide a variety of live sessions designed to answer your questions and inspire your next software project.

"Events like DotNetConf are great for the community since they provide great, repeatable content for everyone," said ASP.NET/ MVP Javier Lozano who will welcome attendees alongside Microsoft Principal Program Manager, Scott Hanselman.  "By having the option to tune in during the conference or to watch on demand, it allows both presenters and attendees to have flexibility. Speakers can use the recorded session for their session portofolio and the attendee can chose to watch whenever and on which ever device they want. To me, that's a win-win."

You don't even need to register; just connect to the live stream on Channel 9. Over the two-day event we will have more than 16 hours of content on subjects such as ASP.NET 5, .NET Core, C#, F#, Roslyn, debugging with VS2015, lessons learned from running large-scale websites, and more!

"Watching the Twitter feed, #dotnetconf, is fun since it allows people to participate with the event from the comfort of their home/office/etc," said Lozano.  "The impact the event has on people is pretty awesome and I'm very honored to be a part of it."

Check out these MVP presenters! 

 
