Feed aggregator

Production Checkpoints in Windows 10

MSDN Blogs - Wed, 08/05/2015 - 09:04

When we were first developing Hyper-V, we worked hard to make checkpointing functionality part of the 1.0 release.  At that stage, we knew that while virtual machine checkpoints were good for development and testing environments, they were not suitable for production environments.  To deal with this we provided detailed documentation and guidance stating that you should never use virtual machine checkpoints in production environments.

However, no one listened to us :-)

Twelve months after the initial release of Hyper-V, problems encountered when people used virtual machine checkpoints in production environments were our number one support call generator.  Over the ensuing releases we steadily improved checkpoints to address the problems people were encountering (we changed where the checkpoint files were stored and how they were named, and added the ability to merge in changes from a deleted checkpoint while the virtual machine was running).  But we still did not recommend them for production environments.

The primary reason for this is that when you take a virtual machine checkpoint, we store all the memory state of running applications - and we restore it exactly when you apply the checkpoint.  This is great for development and testing - but not what you want to do to a SQL server, a mail server, or any workload that has network clients connected to it.  This is why we developed Production Checkpoints.

Production Checkpoints give you exactly the same experience you have always had with virtual machine checkpoints.  There are no changes to how you create, apply, name or delete checkpoints.  But now when we create the checkpoint - instead of capturing the memory state of the virtual machine, we utilize VSS (or on Linux - file system freeze) to create a data consistent storage snapshot.  When you restore one of these checkpoints, any server applications inside the virtual machine believe they have just been restored from a backup - and are able to handle the changes correctly.

You can read more about this here: 

Finally, if you are worried about the demise of standard checkpoints - don't worry - they are still there.  Tomorrow I will do a post about how to change the default checkpoint type on a virtual machine and get standard checkpoints back if you want them.


NextStage turns Kinect sensor into a virtual production camera

MSDN Blogs - Wed, 08/05/2015 - 09:00

We’re used to seeing applications in which Kinect for Windows tracks the movements of users—after all, that’s what the Kinect sensor is designed to do. But what if you picked up the Kinect sensor and moved it around, so that instead of the sensor tracking your movements, you could track the position and rotation of the sensor through three-dimensional space?

That’s exactly what filmmaker Sam Maliszewski has set out to do with NextStage, a Kinect for Windows application that effectively turns the Kinect for Xbox One sensor into a real-time virtual production camera. Maliszewski places retroreflective markers throughout the scene he intends to film, and then he physically moves the Kinect sensor around the set, using its onboard cameras to record the action while data from the reflective markers instantly and accurately tracks the sensor’s 3D position.

The resulting video footage can then be combined with virtual objects and sets, with no need for frame-by-frame processing. Moreover, since NextStage provides depth-based keying, it lets filmmakers separate live-action subjects from the background, and it allows them to place live-action actors or objects on a virtual set without using green-screen techniques. Alternatively, depth mattes created in NextStage can provide a high-quality “garbage” matte for green-screen overlays.

Maliszewski has developed two versions of NextStage, both currently in beta: NextStage Lite, a free download that captures the video by using the Kinect sensor’s color and depth cameras, and NextStage Pro, which enables filmmakers to sync the tracking data to an external camera and to export it to such applications as Blender and Maya.

The Kinect for Windows Team

Preserving original file created date/time in a compressed archive file

MSDN Blogs - Wed, 08/05/2015 - 08:53

Once in a while I wipe and reinstall my laptop, as I have done now to migrate to Windows 10 RTM, and among the things I save and restore are my Internet Explorer favorites/bookmarks. I usually access my favorites from the "Favorites" folder among my profile folders, and when retrieving bookmarks the created date/time is an important attribute for me (I usually order bookmarks by the "Date Created" column), so saving and restoring this file information is fundamental. What I usually do is compress the entire "Favorites" folder using WinRAR, after having specified that it should store the creation time:

To restore the saved file creation date/time, extraction needs to be run from the command line with something like the following:

"C:\Program Files\WinRAR\WinRAR.exe" x "path to rar file" -tsc "path to destination folder"

the "x" command requests extraction of files with full path while the "tsc" switches ask to restore file creation time.


Using Windows Photo Viewer as default image viewer on Windows 10

MSDN Blogs - Wed, 08/05/2015 - 08:18

Out of the box, Windows 10 uses trusted Windows Store apps as the default programs for numerous file extensions, including the most popular image formats (jpg, png, bmp, gif, ico, etc.). Via the "Default Programs" functionality in Control Panel you may not even be able to change the default program for some file formats, as in the case of Windows Photo Viewer (which I personally prefer over the Photos Windows Store app for viewing images), because it does not even have the relevant file extensions associated with it (Windows Photo Viewer is associated with .tif and .tiff files only). If you'd like to use Windows Photo Viewer for viewing other supported image files, you can do as described in this article.

Allowing SCOM to Monitor SQL with Local System

MSDN Blogs - Wed, 08/05/2015 - 08:02

Configuring SQL Monitoring with Local System

In previous versions of SQL Server, Local System had access to SQL (unless removed by administrators, which is a known best practice).  However, newer versions do NOT grant this access out of the box.  To allow the SCOM agent running under Local System to monitor SQL (engine, instances, and DBs), you need to make the following changes (a T-SQL sketch of these steps appears after the list).

  1. In SQL Server Management Studio, create a login for “NT AUTHORITY\System” on all SQL Server instances to be monitored on the agent machine, and grant the following permissions (Securables page of the Login Properties page) to the “NT AUTHORITY\System” login:
  2. Create a NT AUTHORITY\System user that maps to the NT AUTHORITY\System login in each existing user database, plus master, msdb, and model. By putting the user in the model database, a NT AUTHORITY\System user will automatically be created in each future user-created database. You will need to manually provision the user for attached and restored databases. (Note: you can do this via the User Mappings page on the instance login created in step 1.  Simply check the DBs.)
  3. Add the NT AUTHORITY\System user on msdb to the SQLAgentReaderRole database role.
  4. Add the NT AUTHORITY\System user on msdb to the PolicyAdministratorRole database role.
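
For reference, here is a minimal T-SQL sketch of steps 1 through 4 (the database name YourUserDatabase is a placeholder, and the securable permissions from step 1 still need to be granted per the management pack guide):

-- Step 1: create the login on each monitored instance (then grant securables per the MP guide)
CREATE LOGIN [NT AUTHORITY\SYSTEM] FROM WINDOWS;

-- Step 2: map the login to a user in each user database, plus master, msdb, and model
USE [YourUserDatabase];   -- repeat for master, msdb, model, and every existing user database
CREATE USER [NT AUTHORITY\SYSTEM] FOR LOGIN [NT AUTHORITY\SYSTEM];

-- Steps 3 and 4: role memberships in msdb
USE [msdb];
ALTER ROLE [SQLAgentReaderRole] ADD MEMBER [NT AUTHORITY\SYSTEM];
ALTER ROLE [PolicyAdministratorRole] ADD MEMBER [NT AUTHORITY\SYSTEM];

Note that ALTER ROLE ... ADD MEMBER requires SQL Server 2012 or later; on older versions use sp_addrolemember instead.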

Note: This won't necessarily allow you to run SQL tasks.  See the MP Guide for details on the permissions required for that.

See also

New content in the Microsoft Press Guided Tours app: Microsoft System Center Operations Manager Field Experience

MSDN Blogs - Wed, 08/05/2015 - 08:00

The free Microsoft Press Guided Tours app is newly updated on Windows Store! The newest tour on our growing list is “Microsoft System Center Operations Manager Field Experience.”

In this Windows Store app, Microsoft Press authors provide insightful coverage of new and evolving Microsoft technologies. You can use the app to explore technical topics in powerful new ways, and you can mark up content in multiple ways so that it’s more useful to you.

The following eight free guided tours are included in our app – and more are coming soon!

  • Building cloud apps with Microsoft Azure (including best practices for DevOps, data storage, high availability, and more), by Scott Guthrie, Mark Simms, Tom Dykstra, Rick Anderson, and Mike Wasson
  • Programming Windows Store apps with HTML, CSS, and JavaScript, by author Kraig Brockschmidt
  • Using Microsoft Azure HDInsight, by Avkash Chauhan, Valentine Fontama, Michele Hart, Wee Hyong Tok, and Buck Woody
  • Microsoft Azure Essentials: Fundamentals of Azure, by Michael S. Collier and Robin E. Shahan
  • Microsoft Azure Essentials: Azure Machine Learning, by Jeff Barnes
  • Introducing Windows 10 for IT Professionals, Preview Edition, by Ed Bott
  • Microsoft System Center Deploying Hyper-V with Software-Defined Storage & Networking, by Microsoft TechNet and the Cloud Platform Team; Series Editor: Mitch Tulloch
  • Microsoft System Center Operations Manager Field Experience, by Danny Hermans, Uwe Stürtz, Mihai Sarbulescu; Series Editor: Mitch Tulloch

Look for additional tours in the near future. Learn more about the app’s features in this previous blog post. More details on contents included in this newest tour are below.


If you’re responsible for designing, configuring, implementing, or managing a Microsoft System Center Operations Manager environment, then this guided tour is for you. This guided tour will help you understand what you can do to enhance your Operations Manager environment, and will give you the opportunity to better understand the inner workings of the product, even if you are a seasoned Operations Manager administrator.

This guided tour assumes that you have a deep working knowledge of the Operations Manager product and its concepts, that you understand the concept of management packs, and that you are basically familiar with Microsoft Azure as an infrastructure-as-a-service platform. This is a guided tour about best practices, design concepts, how-tos, and in-depth technical troubleshooting. It covers the role of the Operations Manager product, the best practices for working with management packs, how to use the reporting feature to simplify managing the product, how to thoroughly troubleshoot, and how to use and install Operations Manager in a Microsoft Azure Public Cloud environment.

About the companion content

The companion content for this guided tour can be downloaded from the following page:

The companion content includes the following:

  • The SQL query in Section 1 that you can run in SQL Server Management Studio to determine which collation settings you are using (a simple illustrative query appears after this list)
  • The series of commands used in the example in Section 2 to run workflow tracing manually
  • The Windows PowerShell script used in Section 4 to view all TLMEs that exist, ordered per resource pool and per current owning pool member (management server)
  • The various SELECT queries included in Section 4
  • A PDF file titled HealthService Event Reference that provides information about the events that Operations Manager can log to its event log from the HealthService features.
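
As a simple illustration of the first item (this is not the book's actual query, which is in the companion content), the server and database collation settings can be checked with something like:

SELECT SERVERPROPERTY('Collation') AS ServerCollation;
SELECT name, collation_name FROM sys.databases;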


We would like to thank Daniele Muscetta, Microsoft Program Manager for Azure Operational Insights, for his review and comments on the Azure Operational Insights information in Section 5; Stefan Stranger, Microsoft Senior Premier Field Engineer, for the review of and his input on the remainder of Section 5; and Danny’s loving wife, Vita Martinsone, for the pre-editing and formatting of our work.

Ruby, TinyTDS, and Azure SQL Database update

MSDN Blogs - Wed, 08/05/2015 - 07:43

I got a note the other day that the 0.6.3-rc2 version of TinyTDS now includes the security bits needed to communicate with Azure SQL Database on the Windows platform (it already worked fine on other platforms). Previously, you had to build it manually against OpenSSL for it to work on Windows, but with the new version it 'just works'. Use the following to install it from the command line:

gem install tiny_tds --pre

or in a Gemfile use:

gem "tiny_tds", "0.6.3-rc2"

The following code is an example of connecting to SQL Database using the tiny_tds gem:

require 'tiny_tds'
# Connection options reconstructed from the garbled original; the server name is left blank as in the source.
client = => 'user', :password => 'password',
  :dataserver => '', :port => 1433, :database => 'databasename',
  :azure => true)
results = client.execute("select * from [names]")
results.each do |row|
  puts row
end

Tiny_tds can also be used with ActiveRecord. The following is an example database.yml for using a dblib connection to SQL Database using the tiny_tds gem.

adapter: sqlserver
mode: dblib
dataserver: ''
database: databasename
username: user
password: password
timeout: 5000
azure: true

NOTE: All tables in SQL Database require a clustered index. If you receive an error stating that tables without a clustered index are not supported, add a :primary_key field.

NOTE: If you are using an older version of Ruby on Rails, Active Record, or the activerecord-sqlserver-adapter gem, you may receive an error when running migrations (rake db:migrate). You can run the following command against your SQL Database to create a clustered index for this table after receiving this error:

CREATE CLUSTERED INDEX [idx_schema_migrations_version] ON [schema_migrations] ([version])

After creating the clustered index, rerun the migration and it should succeed. Enjoy! - Larry


Joseph Sirosh on Connected Cows

MSDN Blogs - Wed, 08/05/2015 - 07:25

I have heard Joseph Sirosh tell this story at multiple conferences, and I LOVE IT.  If you're into the Internet of Things/big data/analytics, take 8.5 minutes and watch it right now – you won’t regret it.  Awesome story. 

Quick summary: Cows go into heat for a very short window (12-18 hours every 21 days, often during the night).  Cows drastically increase their activity level when going into heat, so the cows now wear pedometers which send data to the cloud, and alerts are sent to the farmer when they are in heat.  It has increased their cattle production by 12% (not including the labor savings).  They can even inseminate at different times to be more likely to get a female cow (for more milk) or a male cow (for more meat). 

AI (artificial intelligence) meets AI (artificial insemination).  Brilliant. 

You can watch at or embedded below.  There’s a nice summary writeup here as well. 

Announcing Sway General Availability, Windows 10 app and more!

MS Access Blog - Wed, 08/05/2015 - 07:00

Today, we are excited to announce that our digital storytelling app, Sway, is moving from Preview to General Availability! Sway is also moving beyond First Release and rolling out to all eligible Office 365 for business and education customers worldwide.* This makes it possible for many additional businesses, schools and other organizations to start using Sway to create and share interactive reports, presentations, assignments, lessons, projects and more. And of course, any consumer can use Sway with a free Microsoft account. Today, we are also introducing Sway for Windows 10, along with new layout and publishing capabilities.

We introduced Sway as a member of the Office family 10 months ago. Sway helps you create professional designs in minutes. You bring your ideas and raw content, and Sway’s intelligent design engine creates a polished, cohesive layout that helps your images, text, videos and other media flow together in a way that enhances your story. Sway makes sure your creations look great no matter what device they’re being viewed on—phones, tablets, laptops, PCs or even the largest Microsoft Surface Hub!

During Sway Preview, we’ve learned from the hundreds of thousands of amazing Sways you’ve built. Your invaluable feedback has helped us improve Sway to meet your needs—from adding fundamentals like multi-user collaboration, to the very “Sway” way we addressed photo cropping.

Sway for Windows 10 is now available

Sway for Windows 10 is now rolling out to the Windows Store. Sway for Windows combines the full richness of Sway on the web with additional capabilities on your PC or tablet.** This means you can use all of Sway’s integrated content sources along with the power of the built-in design engine to build, edit and share your Sways, whether you’re on the go with your Windows tablet or working at your desk with a PC or laptop. Want to capture the moment? Snap photos right into Sway using the built-in camera on your Windows device. And if you’re ready to present at a conference or to students and classmates, the Sways you’ve already loaded will be available offline when you don’t have Internet access or Wi-Fi is spotty. While some elements of your Sway may still need the Internet (such as interactive maps or cloud-hosted videos), this is a first step in addressing one of the most common feedback requests we heard during Sway Preview. Sway for Windows also allows you to stay logged in with multiple accounts at once if you use the same device for both work and home.

Sway for Windows offers a rich, consistent experience that integrates with your PC/tablet.

This initial release of Sway on Windows 10 is for PCs and tablets. We’re working on Sway for Windows Phones, which will arrive in the coming months. Stay tuned for more!

Present with confidence

Sway’s built-in design engine makes sure your creations look great not only on any device they’re viewed on but in whatever layout you’ve chosen as well. Sway already offers vertical scrolling and horizontal panning layouts where your content flows continuously as you swipe. One of the most popular requests we heard during Sway Preview was for another layout that lets you better control the timing of when content is revealed, particularly for presentation scenarios.

So, today we’re rolling out a new layout in the Navigation pane, which displays groupings of content (images, text, videos, tweets, etc.) one screen at a time. Whether it’s a few words with a knockout background image, a set of images and supporting bullet points, or a cluster of interactive elements, now you can deliver a killer presentation in-person or for viewing across many devices. Check out this interactive example:

Sway now has a layout for grouping collections of content, one screen at a time.

Share your Sways with the world using

You’ve also been asking for a place where you can publish collections of your Sways to share with friends, fans and the rest of the Internet community. We’re excited to announce that with just a tap or click of the Share button, you can now choose to publish Sways directly to the newly relaunched, an Internet destination to publish Office documents in full fidelity for the world to find, browse and share. Along with adding support for Sway, we’ve also improved the overall experience for publishing, managing and consuming content. lets you organize your Sways and other Office content into collections. Word, Excel, PowerPoint and Office Mix files are displayed interactively and with full fidelity. You can also add PDFs and web links. Create a stylish profile page using a Sway to share more about your passions and expertise. On, Sways, documents, collections or profiles can be discovered by search engines, browsed on, and shared in social media or on the web. provides data on how many views you’re getting, and it’s easy for anybody viewing your files to add comments and discover your other published work as well.

With just a tap or click, you can publish your Sways to for the world to find, browse and share.

Sway is being used by so many people in such amazing ways

It’s been wonderful to see people use Sway in ways we anticipated, and it’s been incredible to watch them use Sway in really cool and inspiring ways we hadn’t even imagined.

For example, teachers have been using Sway to reimagine class lessons, recap class projects, provide supplemental material for parents, provide new accessible storytelling tools to their students and more. Sway has helped students breathe new life into school projects, class reports and even personal portfolios. Sway is helping even the youngest students become “active producers of their own original content” at the Bureau of Fearless Ideas (BFI), a local non-profit after-school program in Seattle. Watch this video to learn more:

As Sue from BFI explains, with Sway, students are “learning without realizing they’re learning, which is […] the gold standard.”

In business, professionals have been using Sway to save time at work and easily create engaging, eye-catching interactive reports, presentations, newsletters, trainings and more. Sway can help you showcase a custom solution your IT services company built or demonstrate your industry thought leadership. Use Sway’s new format to blog publicly or share expense report training with your employees. Pitch prospective clients or attract customers with marketing materials that stand out and flow responsively across devices. Here at Microsoft we’re using Sway ourselves for internal trip reports, presentations, newsletters, corporate storytelling pieces and more—we even used Sway to share and collaborate on our plans for today’s announcements!

Check out the following video on how Sway has helped the interaction design firm Potion work together with clients and show off their work in new ways:

Potion Design Principal, Phillip Tiongson, explains how his firm uses Sway professionally.

As you can see in the video below about singer/songwriter Daria Musk, Sway has been a hit with musicians as well as digital artists and creatives of all sorts to combine mixed multimedia in a living digital collage or virtual portfolio that is easy to share with others. Sway has been helping talented and passionate hobbyists, foodies, travelers and even families stay in touch, share their adventures, and highlight their pursuits in meaningful new ways!

Singer/songwriter Daria Musk demonstrates how she uses Sway creatively to make a “living, breathing mood board” for her music.

Get started with Sway today

Professionals, teachers, students and consumers can all get started right away using their work, school or Microsoft accounts to log in to, the new Sway for Windows 10 or the updated Sway for iPhone and iPad. Sway is also integrated into the web-based Office Online, so it’s easy to switch between Sway and other familiar Office apps and Office 365 services in your browser and at

Quickly access Sway alongside your other Office 365 apps and services, and at (not shown).

Sway Preview has been an amazing journey over the last 10 months, but in many ways the journey really gets going today. We’ll keep listening to your feedback and rapidly update the product with your help. Drop us a line on UserVoice to let us know what you think of Sway, how you use it, and how you’d like to see it improve!

—Sway team, @Sway

Get Sway | Follow Sway


*Sway is available to most customers with an Office 365 plan that includes Office Online, Office 365 Business, or Office 365 ProPlus. For Government Community Cloud (GCC) customers and customers in certain geographies, Sway will be made available at a later date. Some legacy Office 365 plans that are no longer in market as of August 2015 may also not have access to Sway.

**Sway for Windows is now available in 214 markets: Åland Islands, Albania, American Samoa, Andorra, Angola, Anguilla, Antarctica, Antigua and Barbuda, Argentina, Armenia, Aruba, Australia, Austria, Azerbaijan, The Bahamas, Bangladesh, Barbados, Belarus, Belgium, Belize, Benin, Bermuda, Bhutan, Bolivia, Sint Eustatius and Saba Bonaire, Bosnia and Herzegovina, Botswana, Bouvet Island, Brazil, British Indian Ocean Territory, British Virgin Islands, Brunei Darussalam, Bulgaria, Burkina Faso, Burundi, Cape Verde, Cambodia, Cameroon, Canada, Cayman Islands, Central African Republic, Chile, China, Christmas Island, Cocos (Keeling) Islands, Colombia, Republic of Congo, Congo (DRC), Cook Islands, Costa Rica, Côte D’Ivoire (Ivory Coast), Croatia, Curaçao, Cyprus, Czech Republic, Denmark, Dominica, Dominican Republic, Ecuador, El Salvador, Equatorial Guinea, Estonia, Ethiopia, Falkland Islands (Islas Malvinas), Faroe Islands, Fiji Islands, Finland, France, French Guiana, French Polynesia, French Southern and Antarctic Lands, Gabon, The Gambia, Georgia, Germany, Ghana, Gibraltar, Greece, Greenland, Grenada, Guadeloupe, Guam, Guatemala, Channel Islands – Guernsey, Guinea, Guinea-Bissau, Guyana, Haiti, Heard Island and McDonald Islands, Honduras, Hong Kong SAR, Hungary, Iceland, India, Indonesia, Ireland, Isle of Man, Italy, Jamaica, Japan, Channel Islands – Jersey, Kazakhstan, Kenya, Kiribati, Korea, Kyrgyzstan, Laos, Latvia, Lesotho, Liberia, Liechtenstein, Lithuania, Luxembourg, Macau SAR, Macedonia, Madagascar, Malawi, Malaysia, Maldives, Mali, Malta, Marshall Islands, Martinique, Mauritius, Mayotte, Mexico, Micronesia, Moldova, Monaco, Mongolia, Montenegro, Montserrat, Mozambique, Myanmar, Namibia, Nauru, Nepal, Netherlands, New Caledonia, New Zealand, Nicaragua, Niger, Nigeria, Niue, Norfolk Island, Northern Mariana Islands, Norway, Palau, Panama, Papua New Guinea, Paraguay, Peru, Philippines, Pitcairn Islands, Poland, Portugal, Réunion, Romania, Russia, Rwanda, Saint Barthélemy, Saint Kitts and Nevis, Saint Lucia, Saint Martin, Saint Pierre and Miquelon, Saint Vincent and the Grenadines, Samoa, San Marino, Sao Tome and Principe, Senegal, Serbia, Seychelles, Sierra Leone, Singapore, Sint Maarten (Dutch part), Slovakia, Slovenia, Solomon Islands, South Africa, South Georgia and the South Sandwich Islands, Spain, Sri Lanka, Saint Helena, Suriname, Svalbard and Jan Mayen, Swaziland, Sweden, Switzerland, Taiwan, Tajikistan, Timor-Leste, Togo, Tokelau, Tonga, Trinidad and Tobago, Turkey, Turkmenistan, Turks and Caicos Islands, Tuvalu, United States Minor Outlying Islands, U.S. Virgin Islands, Uganda, Ukraine, United Kingdom, United States, Uruguay, Uzbekistan, Vanuatu, Vatican City, Venezuela, Vietnam, Wallis and Futuna, Zambia, and Zimbabwe.

The post Announcing Sway General Availability, Windows 10 app and more! appeared first on Office Blogs.

The Itanium processor, part 8: Advanced loads

MSDN Blogs - Wed, 08/05/2015 - 07:00

Today we'll look at advanced loads, which is when you load a value before you're supposed to, in the hope that the value won't change in the meantime.

Consider the following code:

int32_t SomeClass::tryGetValue(int32_t *value)
{
    if (!m_errno) {
        *value = m_value;
        m_readCount++;
    }
    return m_errno;
}

Let's say that SomeClass has m_value at offset zero, m_errno at offset 4, and m_readCount at offset 8.

The naïve way of compiling this function would go something like this:

// we are a leaf function, so no need to use "alloc" or to save rp.
// on entry: r32 = this, r33 = value
        addl r30 = 08h, r32        // calculate &m_errno
        addl r29 = 04h, r32 ;;     // calculate &m_readCount
        ld4 ret0 = [r30] ;;        // load m_errno
        cmp4.eq p6, p7 = ret0, r0  // p6 = m_errno == 0, p7 = !p6
(p7)    br.ret.sptk.many rp        // return m_errno if there was an error¹
        ld4 r31 = [r32] ;;         // load m_value (at offset 0)
        st4 [r33] = r31 ;;         // store m_value to *value
        ld4 r28 = [r29] ;;         // load m_readCount
        addl r28 = 01h, r28 ;;     // calculate m_readCount + 1
        st4 [r29] = r28 ;;         // store updated m_readCount
        ld4 ret0 = [r30]           // reload m_errno for return value
        br.ret.sptk.many rp        // return

First, we calculate the addresses of our member variables. Then we load m_errno, and if there is an error, then we return it immediately. Otherwise, we copy the current value to *value, load m_readCount, increment it, and finally, we return m_errno.

The problem here is that we have a deep dependency chain.

addl r30 = 08h, r32 ↓ ld4 ret0 = [r30] ↓ cmp4.eq p6, p7 = ret0, r0 ↙ ↓ (p7) br.ret.sptk.many rp ld4 r31 = [r32] ↓ st4 [r33] = r31 addl r29 = 04h, r32 non-obvious dependency ↓ ↙ ld4 r28 = [r29] ↓ addl r28 = 01h, r28 ↓ st4 [r29] = r28 non-obvious dependency ↓ ld4 ret0 = [r30] ↓ br.ret.sptk.many rp

Pretty much every instruction depends on the result of the previous instruction. Some of these dependencies are obvious. You have to calculate the address of a member variable before you can read it, and you have to get the result of a memory access before you can perform arithmetic on it. Some of the dependencies are not obvious. For example, we cannot access m_value or m_readCount until after we confirm that m_errno is zero, to avoid a potential access violation if the object straddles a page boundary with m_errno on one page and m_value on the other (invalid) page. (We saw last time how this can be solved with speculative loads, but let's not add that to the mix yet.)

Returning m_errno is a non-obvious dependency. We'll see why later. For now, note that the return value came from a memory access, which means that if the caller of the function tries to use the return value, it may stall waiting for the result to arrive from the memory controller.

When you issue a read on Itanium, the processor merely initiates the operation and proceeds to the next instruction before the read completes. If you try to use the result of the read too soon, the processor stalls until the value is received from the memory controller. Therefore, you want to put as much distance as possible between the load of a value from memory and the attempt to use the result.

Let's see what we can do to parallelize this function. We'll perform the increment of m_readCount and the fetch of m_value simultaneously.

// we are a leaf function, so no need to use "alloc" or to save rp.
// on entry: r32 = this, r33 = value
        addl r30 = 08h, r32        // calculate &m_errno
        addl r29 = 04h, r32 ;;     // calculate &m_readCount
        ld4 ret0 = [r30] ;;        // load m_errno
        cmp4.eq p6, p7 = ret0, r0  // p6 = m_errno == 0, p7 = !p6
(p7)    br.ret.sptk.many rp        // return m_errno if there was an error
        ld4 r31 = [r32]            // load m_value (at offset 0)
        ld4 r28 = [r29] ;;         // preload m_readCount
        addl r28 = 01h, r28        // calculate m_readCount + 1
        st4 [r33] = r31 ;;         // store m_value to *value
        st4 [r29] = r28            // store updated m_readCount
        br.ret.sptk.many rp        // return (answer already in ret0)

We've basically rewritten the function as

int32_t SomeClass::getValue(int32_t *value)
{
    int32_t local_errno = m_errno;
    if (!local_errno) {
        int32_t local_readCount = m_readCount;
        int32_t local_value = m_value;
        local_readCount = local_readCount + 1;
        *value = local_value;
        m_readCount = local_readCount;
    }
    return local_errno;
}

This time we loaded the return value from m_errno long before the function ends, so when the caller tries to use the return value, it will definitely be ready and not incur a memory stall. (If a stall were needed, it would have occurred at the cmp4.) And we've also shortened the dependency chain significantly in the second half of the function.

addl r30 = 08h, r32 ↓ ld4 ret0 = [r30] ↓ cmp4.eq p6, p7 = ret0, r0 addl r29 = 04h, r32 ↙ ↓ ↘ ↓ (p7) br.ret.sptk.many rp ld4 r31 = [r32] ld4 r28 = [r29] ↓ ↓ st4 [r33] = r31 addl r28 = 01h, r28 ↓ ↓ ↓ st4 [r29] = r28 ↓ ↓ br.ret.sptk.many rp

This works great until somebody does this:

int32_t SomeClass::Haha()
{
    return this->tryGetValue(&m_readCount);
}

or even this:

int32_t SomeClass::Hoho()
{
    return this->tryGetValue(&m_errno);
}


Let's look at Haha. Suppose that our initial conditions are m_errno = 0, m_value = 42, and m_readCount = 0.

Original:
        if (!m_errno)                  // true
        *value = m_value;              // m_readCount = 42
        m_readCount++;                 // m_readCount = 43
        return m_errno;                // 0

Optimized:
        local_errno = m_errno;
        if (!m_errno)                  // true
        readCount = m_readCount;       // 0
        *value = m_value;              // m_readCount = 42
        m_readCount = readCount + 1;   // m_readCount = 1
        return errno;                  // 0

The original code copies the value before incrementing the read count. This means that if the caller says that m_readCount is the output variable, the act of copying the value modifies m_readCount. This modified value is then incremented. Our optimized version does not take this case into account and sets m_readCount to the old value incremented by 1.

We were faked out by pointer aliasing!

(A similar disaster occurs in Hoho.)

Now, whether the behavior described above is intentional or desirable is not at issue here. The C++ language specification requires that the original code result in the specified behavior, so the compiler is required to honor it. Optimizations cannot alter the behavior of standard-conforming code, even if that behavior seems strange to a human being reading it.

But we can still salvage this optimization by handling the aliasing case. The processor contains support for aliasing detection via the ld.a instruction.

// we are a leaf function, so no need to use "alloc" or to save rp.
// on entry: r32 = this, r33 = value
        addl r30 = 08h, r32        // calculate &m_errno
        addl r29 = 04h, r32 ;;     // calculate &m_readCount
        ld4 ret0 = [r30] ;;        // load m_errno
        cmp4.eq p6, p7 = ret0, r0  // p6 = m_errno == 0, p7 = !p6
(p7)    br.ret.sptk.many rp        // return m_errno if there was an error
        ld4 r31 = [r32]            // load m_value (at offset 0)
        ld4.a r28 = [r29] ;;       // preload m_readCount
        addl r28 = 01h, r28        // calculate m_readCount + 1
        st4 [r33] = r31            // store m_value to *value
        chk.a.clr r28, recover ;;  // recover from pointer aliasing
recovered:
        st4 [r29] = r28 ;;         // store updated m_readCount
        br.ret.sptk.many rp        // return

recover:
        ld4 r28 = [r29] ;;         // reload m_readCount
        addl r28 = 01h, r28        // recalculate m_readCount + 1
        br recovered               // recovery complete, resume mainline code

The ld.a instruction is the same as an ld instruction, but it also tells the processor that this is an advanced load, and that the processor should stay on the lookout for any instructions that write to any bytes accessed by the load instruction. When the value is finally consumed, you perform a chk.a.clr to check whether the value you loaded is still valid. If no instructions have written to the memory in the meantime, then great. But if the address was written to, the processor will jump to the recovery code you provided. The recovery code re-executes the load and any other follow-up calculations, then returns to the original mainline code path.

The .clr completer tells the processor to stop monitoring that address. It clears the entry from the Advanced Load Address Table, freeing it up for somebody else to use.

There is also a ld.c instruction which is equivalent to a chk.a that jumps to a reload and then jumps back. In other words,

ld.c.clr r1 = [r2]

is equivalent to

        chk.a.clr r1, recover
recovered:
        ...
recover:
        ld r1 = [r2]
        br recovered

but is much more compact and doesn't take branch penalties. This is used if there is no follow-up computation; you merely want to reload the value if it changed.

As with recovery from speculative loads, we can inline some of the mainline code into the recovery code so that we don't have to pad out the mainline code to get recovered to sit on a bundle boundary. I didn't bother doing it here; you can do it as an exercise.

The nice thing about processor support for pointer aliasing detection is that it can be done across functions, something that cannot easily be done statically. Consider this function:

int32_t accumulateTenTimes(int32_t (*something)(int32_t), int32_t *victim)
{
    int32_t total = 0;
    for (int32_t i = 0; i < 10; i++) {
        total += something(*victim);
    }
    *victim = total;
}

int32_t negate(int32_t a) { return -a; }

int32_t value = 2;
accumulateTenTimes(negate, &value);
// result: value = -2 + -2 + -2 + ... + -2 = -20

int32_t value2 = 2;

int32_t sneaky_negate(int32_t a) { value2 /= 2; return -a; }

accumulateTenTimes(sneaky_negate, &value2);
// result: value2 = -2 + -1 + -0 + -0 + ... + -0 = -3

When compiling the accumulateTenTimes function, the compiler has no way of knowing whether the something function will modify *victim, so it must be conservative and assume that it might, just in case we are in the sneaky_negate case.

Let's assume that the compiler has done flow analysis and determined that the function pointer passed to accumulateTenTimes is always within the same module, so it doesn't need to deal with gp. Since function descriptors are immutable, it can also enregister the function address.

        // 2 input registers, 6 local registers, 1 output register
        alloc r34 = ar.pfs, 2, 6, 1, 0
        mov r35 = rp               // save return address
        mov r36 = ar.lc            // save loop counter
        or r37 = r0, r0            // total = 0
        ld8 r38 = [r32]            // get the function address
        or r31 = 09h, r0 ;;        // r31 = 9
        mov ar.lc = r31            // loop nine more times (ten total)
again:
        ld4 r39 = [r33]            // load *victim for output
        mov b6 = r38               // move to branch register rp = b6 ;;  // call function in b6
        addl r37 = ret0, r37       // accumulate total
        br.cloop.sptk.few again ;; // loop 9 more times
        st4 [r33] = r37            // save the total
        mov ar.lc = r36            // restore loop counter
        mov rp = r35               // restore return address
        mov ar.pfs = r34           // restore stack frame
        br.ret.sptk.many rp        // return

Note that at each iteration, we read *victim from memory because we aren't sure whether the something function modifies it. But with advanced loads, we can remove the memory access from the loop.

        // 2 input registers, 7 local registers, 1 output register
        alloc r34 = ar.pfs, 2, 7, 1, 0
        mov r35 = rp               // save return address
        mov r36 = ar.lc            // save loop counter
        or r37 = r0, r0            // total = 0
        ld8 r38 = [r32]            // get the function address
        or r31 = 09h, r0 ;;        // r31 = 9
        mov ar.lc = r31            // loop nine more times (ten total)
        ld4.a r39 = [r33]          // get the value of *victim
again: r39 = [r33]         // reload *victim if necessary
        or r40 = r39, r0           // set *victim as the output parameter
        mov b6 = r38               // move to branch register rp = b6 ;;  // call function in b6
        addl r37 = ret0, r37       // accumulate total
        br.cloop.sptk.few again ;; // loop 9 more times
        invala.e r39               // stop tracking r39
        st4 [r33] = r37            // save the total
        mov ar.lc = r36            // restore loop counter
        mov rp = r35               // restore return address
        mov ar.pfs = r34           // restore stack frame
        br.ret.sptk.many rp        // return

We perform an advanced load of *victim in the hope that the callback function will not modify it. This is true if the callback function is negate, but it will trigger reloads if the callback function is sneaky_negate.

Note here that we use the .nc completer on the ld.c instruction. This stands for no clear and tells the processor to keep tracking the address because we will be checking it again. When the loop is over, we use invala.e to tell the processor, "Okay, you can stop tracking it now." This also shows how handy the ld.c instruction is: we can do the reload inline rather than having to write separate recovery code and jump out and back.

(Processor trivia: We do not need a stop after the You are allowed to consume the result of a check load in the same instruction group.)

In the case where the callback function does not modify *victim, the only memory accesses performed by this function and the callback are loading the function address, loading the initial value from *victim, and storing the final value to *victim. The loop body itself runs without any memory access at all!

Going back to our original function, I noted that we could also add speculation to the mix. So let's do that. We're going to speculate an advanced load!

// we are a leaf function, so no need to use "alloc" or to save rp.
// on entry: r32 = this, r33 = value r31 = [r32]       // speculatively preload m_value (at offset 0)
        addl r30 = 08h, r32        // calculate &m_errno
        addl r29 = 04h, r32 ;;     // calculate &m_readCount r28 = [r29]       // speculatively preload m_readCount
        ld4 ret0 = [r30] ;;        // load m_errno
        cmp4.eq p6, p7 = ret0, r0  // p6 = m_errno == 0, p7 = !p6
(p7)    invala.e r31               // abandon the advanced load
(p7)    invala.e r28               // abandon the advanced load
(p7)    br.ret.sptk.many rp        // return false if value not set
        ld4.c.clr r31 = [r32]      // validate speculation and advanced load of m_value
        st4 [r33] = r31            // store m_value to *value
        ld4.c.clr r28 = [r29]      // validate speculation and advanced load of m_readCount
        addl r28 = 01h, r28 ;;     // calculate m_readCount + 1
        st4 [r29] = r28            // store updated m_readCount
        br.ret.sptk.many rp        // return

To validate a speculative advanced load, you just need to do an ld.c. If the speculation failed, then the advanced load also fails, so all we need to do is check the advanced load, and the reload will raise the exception.

The dependency chain for this function is even shorter now that we were able to speculate the case where there is no error. (Since you are allowed to consume an ld4.c in the same instruction group, I combined the ld4.c and its consumption in a single box since they occur within the same cycle.) r31 = [r32] addl r30 = 08h, r32 addl r29 = 04h, r32 ↓ ↓ ↓ ↓ ld4 ret0 = [r30] r28 = [r29] ↓ ↓ ↓ ↓ cmp4.eq p6, p7 = ret0, r0 ↓ ↓ ↙ ↓ ↘ ↓ ld4.c st4 [r33] = r31 invala.e r31 invala.e r28 br.ret rp ld4.c addl r28 = 01h, r28 ↓ ↓ ↓ st4 [r29] = r28 ↓ ↓ br.ret.sptk.many rp

Aw, look at that pretty diagram. Control speculation and data speculation allowed us to run three different operations in parallel even though they might have dependencies on each other. The idea here is that if profiling suggests that the dependencies are rarely realized (pointers are usually not aliased), you can use speculation to run the operations as if they had no dependencies, and then use the check instructions to convert the speculated results to real ones.

¹ Note the absence of a stop between the cmp4 and the br.ret. That's because of a special Itanium rule that says that a conditional branch is permitted to use a predicate register calculated earlier within the same instruction group. (Normally, instructions within an instruction group are not allowed to have dependencies among each other.) This allows a test and jump to occur within the same cycle.

Using Excel Surveys to test student understanding and collect feedback data

MSDN Blogs - Wed, 08/05/2015 - 06:45

Teachers need efficient ways to collect student data. A flexible, powerful tool that can be used for a variety of different purposes, Excel Surveys provide an easy way for teachers to quickly and easily create and distribute feedback and assessment surveys from directly within the Office 365 environment.

Once the survey has been sent out, the results are automatically collected in a spreadsheet, giving teachers the opportunity to even graph that data in order to provide additional insight into student progress. This can help with quickly ascertaining who in the class has understood the lesson content, and likewise who might need more help.

In the following video we hear from Canadian MIEE James Pedrech, who talks us through the basics of creating an Excel Survey from OneDrive, and looks at some of the ways that teachers can utilise the responses of their students to provide more targeted and tailored tutoring to the individuals or groups in their class that need it the most:


Excel is free with Office 365, which is also available to students and teachers at no cost through their academic institution’s existing Microsoft Education Subscription. You can check your eligibility at

How to import payroll transactions into a general journal?

MSDN Blogs - Wed, 08/05/2015 - 05:25

Hello everyone,

This time we want to show you how to import payroll transactions into a general journal, a feature that has existed since NAV 2013 R2.

I will walk through an example in NAV 2015 using the following variable-format file, which I created manually with a few records to import.

Keep in mind that the flexibility of this feature lets you configure the application according to the file you are going to import.

The first thing to bear in mind is that this functionality is not exposed by default, since no predefined import is supplied. We therefore have to show the related configuration on the following pages:
On the General Ledger Setup card, using page personalization, we show the "Payroll Transaction Import" drop-down.

Once that is done, the General Ledger Setup card looks as follows, with a new tab:

In the "Payroll Transaction Import Format" field we select a data exchange definition of type Payroll Import, which we will configure later.

Additionally, on the General Journal page, the ribbon needs to be modified to show the menu item for importing payroll transactions:


Once all the necessary menu items are visible, we are ready to configure the Data Exchange Definition card.
Keep in mind that this is a variable-width text format, which requires different settings from the fixed-width file type covered in the previous blog post "How to import a bank statement file in .txt format?"
I highlight them below:

Now we configure the record columns:

And map the fields that will be shown in NAV:


Finally, we import the file into the General Journal:

NOTE: It is important to use a journal batch that does not have a balancing account configured if the file already contains the balancing accounts.

With this, our payroll records are ready to post.

I hope this information has been useful and helps you configure NAV 2015 so you can import payroll transactions into a general journal.
If you have any comments, do not hesitate to let us know.

Stay in touch
Happy summer

The Microsoft Dynamics NAV Support Finance team.

Spark for Azure HDInsight and Power BI – Part 2

MSDN Blogs - Wed, 08/05/2015 - 05:16

I am pleased to publish on this blog the second part of the post written by Romain Casteres, Microsoft Premier Field Engineer (PFE) - SQL Server & BI at Microsoft France and also a board member of the French-speaking SQL Server user group (GUSS).

After a first part dedicated to Apache Spark for Azure HDInsight, released in public preview on July 11, this second installment looks at using Power BI with Spark.

Enjoy reading this very interesting post, and don't hesitate to check out all the other posts already published on Romain's blog! ;-) You can of course also find Romain on Twitter and LinkedIn.



Power BI with Spark

Microsoft Power BI is a set of online services and features that let you find and visualize data, share discoveries, and collaborate in new, intuitive ways.

Since July 24, the latest version of Power BI has been generally available; I invite you to try the Designer, the new portal, and the mobile and desktop applications.

Here is the Power BI portal:

From the portal you can get a dataset from:

  • Your organization
  • External services such as GitHub, MailChimp, Google Analytics, etc.
  • Local files, OneDrive
  • Services such as Azure SQL Database, Azure SQL Data Warehouse, SQL Server Analysis Services, and an HDInsight Spark cluster (via the Spark ODBC driver)

So I will connect to the HDInsight Spark cluster from the Power BI portal:

After saving the report, its various elements can be published to a dashboard:

In conclusion

It is becoming easier and easier to analyze large volumes of data, and with ever shorter execution times!

HDInsight Spark rounds out the Big Data services in Azure; it should be seen as a complement to, not a replacement for, HDInsight Hadoop. With Hadoop you store all your semi-structured data in HDFS and take advantage of the flexibility of MapReduce to query it. HDInsight Spark, for its part, takes advantage of in-memory processing to run data mining algorithms, perform interactive analyses, or process streaming data.

Here is a summary of the tools discussed and their uses:

  • Parallel task execution: Map Reduce or Tez
  • SQL-style task execution
  • Unstructured data storage: HDFS (via Hadoop), Azure Blobs
  • NoSQL storage: DocumentDB
  • Machine Learning: Spark MLlib, Azure ML
  • Streaming data: Spark Streaming, Stream Analytics

And a few resources for the road ;-)

Here are some resources on the topics covered:

Spark for Azure HDInsight and Power BI – Part 1

MSDN Blogs - Wed, 08/05/2015 - 05:03

As we have already pointed out, this blog is meant to be a place for exchange and sharing. So today I am pleased to publish this post written by Romain Casteres.

I take this opportunity to thank him very sincerely for this new contribution, which arrives, it must be said, at just the right time for our blog, with the release of Apache Spark for Azure HDInsight in public preview on July 11. (You can read the details in the announcement post Announcing Spark for Azure HDInsight public preview and find the key pointers in the post Microsoft delivers interactive analytics on Big Data with the release of Spark for Azure HDInsight.)

The same goes for all the Power BI developments we have covered recently, and of course the general availability of Power BI on July 24 (with some details on this announcement in the post Over 500,000 unique users from 45,000 companies across 185 countries helped shape the new Power BI).

Romain is now also a colleague, as he has just joined Microsoft France as a Microsoft Premier Field Engineer (PFE) - SQL Server & BI. Let's take this opportunity to welcome him and wish him good luck in his new role. For the record, Romain is also a board member of the French-speaking SQL Server user group (GUSS).

I hope Romain will come back regularly to share on this blog in his own name. In the meantime, enjoy reading this very interesting post, and don't hesitate to check out all the other posts already published on Romain's blog! ;-) You can of course also find Romain on Twitter and LinkedIn.

The post is split into two parts: the first dedicated to Spark for HDInsight and the second to Power BI with Spark.



The Azure HDInsight family is growing: there are now four cluster configurations in Azure, in addition to the ability to customize them with scripts!

Here are the four versions of HDInsight:

  1. Hadoop: An adaptation of the famous Hadoop Big Data framework. The most recent version today is HDInsight 3.2; it is based on the Hortonworks Data Platform 2.2 distribution (Hadoop 2.6) and is available on Ubuntu 12.04 (with the Ambari UI!) as well as on Windows Server 2012 R2!
  2. HBase: Apache HBase is an open source NoSQL database built on Hadoop. In its latest available version (HDInsight 3.2), the cluster includes HBase 0.98.4.
  3. Storm: Apache Storm is a distributed, real-time computation system for quickly processing large volumes of data. In its latest available version (HDInsight 3.2), the cluster includes Storm 0.9.3.
  4. Spark: The Apache Spark framework offers a simpler programming model than Hadoop's and execution times up to 100 times shorter. This version is currently in public preview; it includes Spark 1.3.1 on an HDInsight 3.2 cluster and is currently only available on Windows Server 2012 R2.

Let's look at the worldwide trend for these tools since 2004:

(Embedded Google Trends chart comparing Apache Hadoop, Apache Spark, Apache Storm, Apache HBase, and HDInsight.)

Zooming in on the last year:

It is clear that the buzz around Spark is now greater than the buzz around Hadoop!

In this post I will therefore introduce Spark. I will highlight the differences from Hadoop, along with its strengths and weaknesses, then create an HDInsight Spark cluster in order to test it. Finally, we will analyze the cluster's data with Power BI!

Présentation de Spark

Originellement développé par AMPLab en 2009 dans l’Université UC Berkeley, Spark est rendu Open Source par Apache en 2010.

Au même titre que Hadoop, Spark est un Framework Big Data. Il peut s’exécuter au-dessus du Framework Hadoop ou en mode Standalone. Il ne possède pas d’infrastructure de fichiers distribués comme HDFS (Hadoop Distributed File System), c’est donc l’une des raisons pour l’exécuter au-dessus de Hadoop (Windows Azure Storage via l’Api HDFS dans Azure). Spark dispose de son propre gestionnaire de ressources (Standalone Scheduler) mais supporte aussi d'autres gestionnaires de ressources comme Mesos ou Yarn. Dans HDInsight, Spark utilise son propre gestionnaire de ressources et non Yarn.

Spark maintient les résultats intermédiaires en mémoire plutôt que sur disque, ce qui améliore les performances, en particulier lorsqu’il est nécessaire de travailler à plusieurs reprises sur le même jeu de données. Cela se rapproche d’un projet d’ores déjà disponible dans Hadoop : Tez.

Il nécessite cependant beaucoup de mémoire, lorsque la volumétrie des données à traiter excède la mémoire cumuler des nœuds du cluster, il bascule les données sur disque ce qui ralentit les traitements. Spark utilise les évaluations paresseuses (Lazy Evalutation), cela lui permet de lire le minimum nécessaire pour terminer la requête.

Spark manipule des RDDs (Resilient Distributed Datasets), ils représentent des ensembles de données d'objets qui sont distribués sur les nœuds du cluster et sont tolérants aux pannes. Comme Hadoop, Spark utilise des opérations de Shuffle, dans le cas de HDInsight Spark les données intermédiaires sont écrites sur le disque local des VM et non le Windows Azure Storage. Enfin Spark (développé en Scala) dispose d'un riche choix de langage de programmation : Java, Scala, Python, R (Spark 1.4), etc.

The Spark ecosystem:

  • Spark SQL: a SQL-like module that can use the Hive metastore, SerDes, and UDFs.
  • Spark Streaming: a module for continuous processing of data streams.
  • MLlib: a machine learning module.
  • GraphX: a module for graph computation.

Unlike Hadoop, where the various abstraction layers over the MapReduce programming model were developed by different companies (for example Pig by Yahoo!, Hive by Facebook, and so on), the Spark components were developed together, which gives them better interoperability.

Rather than seeing Spark as a replacement for Hadoop, it is more accurate to see it as an alternative to MapReduce or to Tez. I recommend watching the Hortonworks presentation comparing the performance of Hive on Tez, Spark SQL, and Hive on Spark.

Spark for HDInsight

You can customize an HDInsight cluster when creating it and install Spark on an HDInsight Hadoop cluster; more information here.

However, you would not benefit from the advantages offered by the HDInsight Spark service:

  • Zeppelin and Jupyter notebooks: preconfigured for interactive data processing and visualization, available from the cluster dashboard.
  • Spark Job Server: a REST API server that lets users remotely submit and monitor running jobs.
  • Concurrent query support: multiple queries from the same user, or queries from different users and applications, can share the same cluster resources.
  • Caching on SSDs: you can choose to cache data in memory or on the SSDs attached to the cluster nodes.
  • Connector for Azure Event Hubs: customers can build streaming applications using Event Hubs, in addition to Kafka.
  • Connectors for the most widely used BI tools such as Power BI and Tableau.
  • Preinstalled Anaconda libraries: Anaconda provides nearly 200 libraries for machine learning, data analysis, visualization, and more.
  • Scaling of the number of cluster nodes.
  • 24/7 technical support.
  • Etc.

In this example I am going to provision an HDInsight Spark cluster and take advantage of these services. After signing in to the Azure portal: create an HDInsight service; Custom Create:

For cost reasons, I limited the number of worker nodes in the cluster to 1 and chose the North Europe region.

After about ten minutes, my HDInsight Spark cluster is available :-)

Once the cluster has been created, you have access to several administration and development consoles. The Spark dashboard gives an overview of the cluster and lets you manage its resources, access the notebooks (Jupyter or Zeppelin), browse the file system, or run HiveQL queries:

The Spark cluster's resource manager lets you manage resources such as the CPUs and RAM used by the cluster services (spark.executor.memory, the Java Xmx setting, and so on).

To test HDInsight Spark, I downloaded an open data set published by SNCF, "SNCF safety incidents since 2014", and then uploaded the .CSV file to a new Blob Storage container named "sncf":

To download the file, go here:

About the data: only safety events involving a malfunction of the rail system, whether of internal or external origin, are listed in this file. The comments associated with the safety incidents come from information gathered in the heat of the moment, often before the investigation is complete. The data set also flags safety incidents classified as "notable safety events" (ESR). An ESR is a safety incident related to the actual running of a train that endangers, or risks endangering, the lives of the people being transported and of those near the railway installations (including staff, employees of service providers, and subcontractors).


From the HiveQL query window, I created the external table SNCF:
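The DDL itself appeared as a screenshot in the original post. As a rough sketch of what such an external table definition could look like (the column list, the ';' delimiter, and the storage account name below are assumptions, not the author's exact statement):

CREATE EXTERNAL TABLE sncf (
    Date STRING,
    Type STRING,
    Localisation STRING,
    Commentaire STRING       -- assumed columns; the real list comes from the SNCF CSV header
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ';'   -- assumed delimiter
STORED AS TEXTFILE
LOCATION 'wasb://sncf@<storage-account>.blob.core.windows.net/';   -- <storage-account> is a placeholder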



For optimization purposes, I created the table "sncf_parquet" using the PARQUET format (a columnar storage format):
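Again, the exact statement was shown as a screenshot; a plausible sketch, reusing the same assumed columns, would be:

CREATE TABLE sncf_parquet (
    Date STRING,
    Type STRING,
    Localisation STRING,
    Commentaire STRING       -- same assumed columns as the external table above
)
STORED AS PARQUET;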



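-- The WHERE clause below filters out the CSV header row, whose Date field contains the literal text 'Date'.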
INSERT OVERWRITE TABLE sncf_parquet SELECT * FROM sncf WHERE Date <> 'Date';

Zeppelin is a web service for interactive data analysis and querying. You can connect to it from the cluster's Notebooks tab:
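For example, a simple aggregation over the Parquet table could be run from a note like the following (a sketch only: %sql assumes Zeppelin's Spark SQL interpreter, and 'Type' is an assumed column name):

%sql
-- Hypothetical query over the assumed schema; adjust column names to the actual CSV header.
SELECT Type, COUNT(*) AS nb_incidents
FROM sncf_parquet
GROUP BY Type
ORDER BY nb_incidents DESC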

Response times are very good, since this small data volume fits in memory.

This concludes the first part. The second part looks at using Power BI with Spark before wrapping up.

Das Data Exchange Framework verstehen Teil 2

MSDN Blogs - Wed, 08/05/2015 - 04:07

The second part covers importing data from a CSV file into a journal. General explanations of the Data Exchange Framework can be found in Part 1. As you saw in the first part, there is a menu item directly in the bank account reconciliation and also in the payment reconciliation journal for importing data. The format is specified on the bank account.

A similar option also exists for the journals, but it is hidden by default because no predefined imports are shipped. A few steps are needed to make it visible.

First, you can add the "Payroll Transaction Import" FastTab via the General Ledger Setup. This is best done via Customize / Customize This Page. Add the FastTab and, importantly, also customize the FastTab itself and add the field. That is where the format is stored once it has been defined.

In the general journal there is an action "Import Payroll Transactions" that is also not visible by default. You can add it via Customize Ribbon as well.

The following CSV file serves as an example for the import. You can simply copy the rows into a text file and save it with either a .txt or a .csv extension.
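The sample rows from the original post are not reproduced here; a purely hypothetical file with the six columns described below (date, document number, amount, account number, cost center, cost object), with all values invented, could look like this:

01.07.2015;LOHN-0715;1250,00;6000;VERW;
01.07.2015;LOHN-0715;830,50;6010;VERT;PROJ1
15.07.2015;LOHN-0715;975,25;6020;;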


The file has 6 columns (date, document number, amount, account number, cost center, cost object). As the name suggests, the import is intended for payroll data and focuses on G/L accounts. If you need other account types, perhaps also a bank account, an import via the bank account reconciliation or the payment reconciliation journal is a better fit. There you can not only read into a table but also have journal lines created at the same time by setting up multiple posting exchange mappings.

When we now create a new data exchange definition, it is important to set the "Type" to Payroll Import. Only data exchange definitions of type Payroll Import can be selected in the General Ledger Setup. As the file type we use Variable Text in this example, and the column separator ";" is also important. There are no header lines, and of all the possible codeunits and XMLports we only need XMLport 1220, as its name "Data Exch. Import – CSV" suggests.

In the line definitions you specify that the file has 6 columns; you can read the column definitions from the screenshot below.

The length does not have to be specified because we are using variable field lengths. You can of course choose the names freely; in the next step we will set up the field mapping. So in the Line Definitions section, choose Field Mapping and create a new one. The Table ID is 81, which corresponds to the general journal lines.

We do not have to set the multiplier in this case, because the amounts are already correctly formatted in the file. For the dimensions I checked "Optional", since they are not always present. That is everything needed to read the file. Remember to assign the format in the General Ledger Setup. After the import, the journal looks like this:

I imported into the default journal batch here. It is important to keep in mind that our file has no balancing account and that the entries are provided as split postings, so we have no mapping for the balancing account. If I now select, say, a journal batch that has a balancing account in the journal batch name, it is added to the postings, just as if I were entering a posting manually. That can be a desired effect, for example if everything is to be posted through a transfer account, or it can be a source of errors.

There was already an article on payroll data import and on the Data Exchange Framework in general on the main blog; it was published right after the functionality shipped and therefore refers to Dynamics NAV 2013 R2, Cumulative Update 1. The screenshots in this article are from Dynamics NAV 2015.

Best regards

Andreas Günther

Microsoft Dynamics Germany

Eye of the Intern: Sneaking into //oneweek, TechReady, Imagine Cup, HoloLens and the Guinness Book of World Records

MSDN Blogs - Wed, 08/05/2015 - 04:00

Last week your favourite Technical Evangelist Intern took a workation at the mothership (aka Redmond) and got involved in a bunch of life-changing events. In this blog post, he discusses some of those events as experienced during one of the busiest weeks on campus this year. Follow Mansib’s journeys on Eye of the Intern as he tries to navigate the intricacies and adversities of interning at one of the world’s largest tech companies and recounts his mistakes in excruciating detail so that you don't have to.

Last week, I had the immeasurable pleasure of being shipped to Redmond for a week of merriment at my employer’s expense. Well, at least I figured it would be merriment.

If you’re anything like me and by that I mean a 20 year old college student who’s trying to stave off loans, you looked at this and thought “Wow Mansib, you’re getting paid to eat all day!” Well, I’m not going to deny that there was food and plenty of it. The food was actually pretty good too. They had a friggin ice cream bar. We had about 3 buffet meals per day, if not more. No, the food was pretty solid.

These cupcakes were pretty solid. (well actually they were really soft, but how do you explain that to a post-millennial?)

So why wasn’t the week pure merriment? Well, within a few hours in Redmond you realize why you’re always getting an opportunity to stuff cake into your face. When a couple hundred of the world’s most elite student hackers and thousands of Microsoft technical folk saturate every square millimeter of your retinas for 16-17 hours per day for a week straight, cake becomes your only refuge from a cruel world where you always happen to be the least intelligent and least accomplished person in the room. Seriously, around every bend stood someone I had read about or watched on the internet, across every table was a corporate vice president or another intern who wrote way more blog posts than you. If you’re not careful, this really starts to eat at you. How do you do good in a place where everything has already been done? I’m not sure yet. In the end, it helps to remember that being the dumbest person in a room full of geniuses still makes you pretty smart.

Cupcakes, my only respite at the MSP Summit.

Anyway, I was in a very unique position, because I attended as a Technical Evangelist intern, but also as a participant in the Microsoft Student Partner summit, yet not privy to a selection of MSP activities and permitted into otherwise restricted employee events and areas because I was a blue badge. Overall, this is pretty dandy and kind of like being the little kid who gets to go to all the older kid house parties because you’re somehow friends with them. That is, until you find out all the kids your age get to go go-karting and you’re not allowed to go because you’re an “old kid” and all you’re really allowed to do now is drink. When translated into the actual world, this meant that the MSPs got Surface Pro 3s, Xboxes, Microsoft Bands for doing twitter contests and all I got was access to the //oneweek beer garden.

This led me to visit several related but separate events during the week. The mains were //oneweek, TechReady, the Worldwide MSP Summit and Imagine Cup. Inside the WW MSP Summit we also had a several-hour HoloLens demo and the Imagine Code Camps where we broke a world record. Yes, that HoloLens.

A lot of the events overlapped with each other, so you’d have a //oneweek hackathon beer zone at a TechReady event and an MSP event done together with the Imagine Cup folks.

//oneweek
//oneweek is a totally rad assortment of events that has happened annually ever since Satya became the totally rad leader of Microsoft. The predecessor to //oneweek was some form of potentially-not-as-fun annual company meeting, but now it’s separated into three parts. There’s still the meeting, but we also now have a hackathon and an expo/product fair. There’s also a huge beer garden. It was a full-time-employee-exclusive event and as an MSP I would normally have no access to it… but luckily, thanks to my badge, I was able to abscond here for a few moments.

The expo at //oneweek. Almost certain the umbrellas are there because they are the ideal operating environment for Microsoft Lumia phones.

I didn’t attend the hackathon or the meeting because I arrived late (that’s a crummy excuse because the hackathon was international), but I was able to visit the expo. Basically what happens at the expo is that all the product and some of the research and design groups come out to that big tent and talk about what they’re working on. You also have some external presenters like certain Microsoft venture startups and snow cone making people. A series of Microsoft employees and externals give talks throughout the day and sign copies of their books which you can conveniently buy at the book tent for 10 USD. There’s a metric ton of demos and games as well as employee only contests. Like this one. Tweet that thing like fire.

Under the tent.

A Microsoft Band golf tracking demo.

Internals from the Xbox entertainment system.

Also there’s free food samples everywhere. Literally enough different ones to make a full day’s meal. (I gained 8 pounds after the summit if you were wondering). There was one booth that gave away unlimited Godiva truffles. They called this portion of the event “Byte of Microsoft”. Good to know I don’t have the worst puns out there.

The event culminated on the 31st with a live Q&A with Satya. Unfortunately, the contents of the Q&A are confidential so I shouldn’t be sharing anything with you. Just because I like you so much however, I’ll tell you a bit of what I know… ready? Here it is: “ “. Yeah sorry, I didn’t watch the Q&A so I don’t know what went on. Reliable author much? Anyway, I had a good excuse. I was partying away at an event called…

TechReady
TechReady is insane. Basically, it is a semi-annual one week conference where evangelists and other technical people from the various Microsoft subs meet in Redmond to get up to date on all the latest Microsoft products and technology. It’s also an excuse to drink. Senior execs come by during the event to present their vision for Microsoft going into the next year. Much like //oneweek, it is broken up into different components. You have the presentations and meeting parts which I skipped entirely. There’s a night called Ask the Experts where they set up dozens of tables on a conference floor and give each a sign with a topic such as “Hybrid Azure Security” or “Diversity and Equality”. Basically you go up to any desk of your choice, sit with a beer and chat about the topic listed on the sign. You were sure to find experts in any field walking about. I was quite astonished to see Scott Gu himself just walking around talking to various experts.

Interestingly enough, MSPs were given booths this year to demonstrate any interesting projects they had. Most were pretty keen on the immense amount of free food they were giving out during the night though.

After work, the evangelists will get together with their long lost buddies and go out for drinks or in the case of the Canadian evangelists, make their own beer (I didn’t participate.)

The reason why I say TechReady is insane however is actually because of the last day of the conference: The TechReady Party. You won’t see Microsoft employees in the same light after that night. Microsoft rented out CenturyLink Field for the event (home of the Seattle Seahawks, why didn’t they run the ball?) There were essentially restaurants set up at the field where you could eat all you want, for free. Copious amounts of alcohol were passed out (with proper ID of course), but this was all well deserved after a week or rather more like a year of hard work. Xboxes, soccer balls, bocce balls and pigskins were left out for everyone to play with.

Being a patriotic Canadian, I am always looking for an opportunity to espouse the values of my great country. When I saw a random fence and a bunch of red and white cups, my fellow Canadian MSP James and I took no rest to complete the semblance of our glorious flag on the fence. The other subs soon tried to copy us, but luckily we had used up all the red cups and the Dutch, French and Russian flags had to make do with splotches of brown on their red parts because only brown cups were left.

I wish I had more pictures to show, but I refrained from taking pictures later in the night because I didn’t want to be that loser intern who had his phone out during a rave. It got pretty hectic by the end when a quasi-mosh pit arose at one end of the field. Let’s just say I’ve never seen so many Microsofties drop to the bass.

Imagine Cup

For the last 2 years, being at the Imagine Cup had probably been my greatest wish. It was far more important to me than my dreams of being accepted into med school or finding myself in a stable relationship (FYI I ended up with neither). My participation in the competition eventually led me to becoming a coder and ultimately landed me this job. I had won the competition last year in the World Citizenship category in Canada, but the Innovation project from Canada last year was a heck of a lot better project so they ended up being the national winners who got sent to Redmond for the finals (Wow, I’m totally cringing just glancing over what I wrote back then in that blog post. A true #flashbackfriday for me. Everything I wrote then still applies though!).

I figured I’d never get a chance to be at the Imagine Cup since I was now an employee and therefore ineligible to participate but boy does destiny like to prove me wrong. I was pretty close to being front and center to the whole Imagine Cup experience even if I wasn’t a competitor. Looking at all the epic projects and hard work the competitors put in, I was very envious and was wishing I was a competitor throughout.

The Imagine Cup was a multiday event (well, technically multimonth if you go prior to the finals) and so the MSP experience differs a bit from the competitors. One of the first things I recall is being greeted by a red carpet and a live band.

Microsoft sure has a peculiar intern benefit package.

We got to see dozens of amazing presentations. I was left dumbfounded after so many of them. For example, team Japan made a TV that could detect air pressure… it could measure how hard you breathe against it! Never mind the business case or the use case, it was cool just to see these random innovations.

Team Canada was a group of students hailing from Queen’s. They made an app called Walkly that helps friends and loved ones ensure each other’s safety.

Team Canada presenting

Throughout the Imagine Cup, the MSPs were privy to many guest speakers and lectures. One that just blew my mind was when Giorgio Sardo, senior Windows evangelist, took the stage. He told all Microsoft employees to leave the room (I didn’t disclose my affiliation) and pretty much just straightforwardly answered any question the MSPs had. He didn’t use any canned statements; he pretty much disclosed any trade secret the MSPs inquired about. Why was Windows 10 named Windows 10? He told us the real reason. Unfortunately, I can’t disclose any of it here. Maybe you should become an MSP and try to attend the Worldwide Summit.

Giorgio spilling the beans.

The MSPs also partook in a lot of discussions to share which activities worked and which didn’t. I didn’t expect to see such a difference between the programs, but the various subs sure set the record straight. MSPs in Nepal for example, spent a lot of time lending out Skype enabled mobile phones to people affected by the earthquake.

The MSPs hard at work at making me feel useless.

The finalists were announced on Thursday and Friday we went to the Seattle Convention center to see who would be crowned the winner of the 2015 Imagine Cup. We had quite the celebrity tech judge line-up this time around. We had the venerable Alex Kipman, the progenitor of the Kinect and HoloLens, we had Jens Bergensten aka jeb_ of Minecraft fame and finally the personable Thomas Middleditch aka Richard Hendricks of Silicon Valley and fellow Canadian. Of course most of us spent their introduction speeches gleefully grabbing footage of them for our Snapchat stories.

Snapchat caption omitted.

After 3 presentations and much deliberation, team eFitFashion of Brazil took the cup, $50K USD and the selfie with Nadella. It was well deserved. The team patented a software algorithm and system that allowed tailors to drastically cut the time of custom clothing jobs by creating the patterns electronically.

I like the unicorn shirt, but maybe Satya could still use a bit of eFitFashion sense.

And just as it had all started, within moments the elusive magical code wizard Satya disappeared.

HoloLens
Yeah, so this section won’t get a photo treatment. The HoloLens experience was understandably a highly catered one and no effort was spared to present a certain desired impression of the device. In fact, I was almost certain that the presenters had their spoken words penned out by Alex Kipman himself. As was the case with the few journalists who got to try the HoloLens at //build/ and E3, all our electronic devices were confiscated and I was only let in with the clothes on my back (they permitted me to keep my Microsoft band, so I could have technically recorded audio had I somehow hacked the device, but I decided to hand it to them anyway).

Speaking of //build/ and E3, this was the first time since those events that anyone internal or external other than Microsoft executives and HoloLens development collaborators has been able to try the HoloLens. I don’t have any hard numbers, but that would put me squarely in the first 100, or maybe even first 50 Canadians to have tried out the device. At Microsoft Canada, I’m the first in my org, Developer Experience (DX), and maybe even the first throughout the subsidiary. After remembering that I’m really no one special, you begin to see just how much Microsoft values its interns… or at the very least, how especially magnanimous my manager Tommy and my supervisor Susan are.

Anyway, as you can imagine, the HoloLens is a most excellent device. I’ll be straight up honest, the field of view issue is certainly there, though it’s nowhere near as debilitating as certain reviewers would have you believe. Otherwise, the device functions entirely as you’d expect. The picture is crisp, the sound is clear, the environment scanning is impressive and the ergonomics of the device are well thought out. Being a developer however, for me, the actual HoloLens augmented reality experience didn’t hold a candle to actually developing for the HoloLens. Being a huge Unity fan, it was amazing to see that the HoloLens was ultimately a tool to break Unity’s 4th wall and provide it with a canvas that had all three dimensions. Developing for the HoloLens involved very few nuances beyond developing traditional Unity applications for something like Windows or Android. The main thing to remember is that you’re no longer creating a simulation where an artificial character moves about a virtual world, but rather one where a virtual world moves about a real character which is yourself. And of course you have to include your using HoloToolkit; directives.

The whole workshop was very regimented. Any attempt to fiddle with the workshop project assets was curtailed and it was impossible to pry much information from the HoloLens personnel onsite (including on whether they had any evangelism positions they were hiring for!) Of course, that didn’t deter this intern from veering off course and hacking to his heart’s content on the device. There was a set of premade scripts and prefabs for us to progressively add to a Unity scene until we completed the scenario and made an entire virtual world whilst sampling each one of HoloLens’ key features. I completed them quickly without waiting for the instructors to go through each step in detail and invariably this led to much unnecessary debugging, but this experience was ultimately a desired one because it granted me better insights into using the actual HoloLens SDK. With the time I saved, I was able to experiment with other things, like creating a rudimentary soccer game which you could play with your feet and trying to have the Game of Thrones theme song play in the background. Although it’s dead easy to play .wav files in HoloLens, for reasons I’m choosing not to disclose, playing the GOT theme didn’t work out so well.

In the end, my partner and I had physically broken 3 HoloLens units, but the HoloLens folks took it in stride. The device was by no means flimsy, but it does require proper care and handling. Overall, all my expectations were met with the HoloLens demo and I’m looking forward to seeing it released to the public soon (I don’t know the release date or timeframe). I have no reason to expect another chance to try out the HoloLens before release, but because I enjoy your patronage so much, I promise I’ll find a way to secure another hacking session with it so that I can give you more details and hopefully a picture.

Breaking a Guinness World Record

As a part of the Worldwide MSP summit, we were asked to participate in breaking a world record. Believe it or not, this is actually the first world record I’ve willingly and knowingly participated in breaking. The record we aimed to break was most people taught to code in 8 hours. We set the new record at something like 1337 people (I’m not kidding, I think the actual number is something like that).

As you can imagine, this was a pretty arduous task. Hundreds of kids were ushered into Microsoft computer labs to partake in the event. We had a skeleton crew of MSPs who were responsible for coaching these hundreds of children and every hour they’d bring in a new batch. We gave out some cool prizes such as Raspberry Pi 2s to encourage the kids to keep coding. By the end I was so tired that I was falling asleep on a laptop and repeatedly unwittingly hitting the keyboard with my face. After a while, a commotion had woken me up and I got up with the intention of finding a more comfortable sleeping spot, but lo and behold, we had done it!

Getting pics of VP Guggs as if I’ll never get to see him in person again. Well maybe I won’t.

Guinness record plaques make handy umbrellas in a pinch.

All in all pretty satisfied.

In Retrospect

It’s not every day you get an all-inclusive trip to meet some of the smartest and most influential people in the world. Once it all happened, I was quite overwhelmed and disappointed with myself. Why didn’t I go introduce myself to Guggs or Scott Gu? Why didn’t I run up to take a picture with Alex Kipman like some of the MSPs managed to? Why did I constantly stuff my face? Why didn’t I bring a project to demo? Why didn’t I share my new idea for Azure DreamSpark with Satya? Admittedly, these are some of the things I won’t forget for a while. For every experience I had lived during that week, I missed another. But it’s important to look at it in reverse. For every opportunity I missed that week, I lived another. For a week I got to breathe and live as a genuine Redmonder. I got to try HoloLens and break a world record. These are things most other people won’t get to do for the time being.

The most important lessons I learned from this trip are to be grateful and to be aware of your shortcomings. I’m grateful, because I could have learned these lessons on the last day of my internship and never have had a fighting chance to accomplish any of my goals.

But now I know. If I didn’t make the effort to shake hands with Satya last time, I need to work my way up and make that opportunity again.

Microsoft Breaks a World Record at Imagine Coding Camps

MSDN Blogs - Wed, 08/05/2015 - 02:15

The following is a repost from the Microsoft Student Developer Blog:

We did it!! We broke the Guinness World Record® for the “Most People Trained in Computer Programming in 8 Hours."

Kids from all over the Seattle area, including from the Boys & Girls Club of King County, arrived at the Microsoft campus today for Imagine Coding Camps. In four bustling rooms throughout Building 92, children learned how to code in the best way possible, by having fun.

This is not your ordinary coding camp. Not only was it offered free to the public, it also marked our attempt to break a Guinness World Record® for the most people taught computer programming in 8 hours.

Students danced to music playing in the rooms. Microsoft Student Partners and Imagine Cup competitors ran all over the rooms answering questions and passing out raffle tickets, clearly jazzed to help out.

Students performed two coding exercises. The first one, “Color Customization,” teaches the basics of TouchDevelop and how to navigate the program. The second exercise, “Piñata Breaker,” challenges them to change backgrounds, graphics, add their own sounds and change the speed of their piñata.

To keep the kids super motivated, we hosted raffles. Every time a kid completed a new challenge, they waved their hand to get a raffle ticket. These tickets enter them into a drawing to win prizes like a Raspberry Pi developer board, Minecraft key chains or a t-shirt signed by Jens Bergensten, Lead Developer of Minecraft.

We asked local parent Melody Murdock why she brought her kids to coding camp. “I signed my kids up for the Imagine Coding Camp because I wanted them to get first-hand experience coding and the exposure to a company like Microsoft in general. I feel to be competitive in the future, these kids have to know about technology and will most likely be expected to know how to code at some level—no matter what field they go into.”

Melody enrolled her 7-year-old son, Everett and 10-year-old daughter, Sedona. “I was especially excited to take my daughter. I feel females don’t always get the encouragement or exposure to pursue interests or careers in the high-tech sector. I want her to know that’s an option.”

It worked. Sedona begged her mom, “Can we do this again next year, please, please?”

Larrisa Jarvis told us about her son’s experience. “My kids were a little grumpy this morning. But once the camp got going, they really lit up, engaged. Raising their hands, psyched about the raffle ticket prizes, and getting the games to work!”

For several kids, this camp was the first time they had ever coded. On the way out, they were smiling and almost giddy about how cool it was. Dylan Peay exclaimed, “I wish that coding camp went all day!” His cousin, Everett, said it all, “I didn’t know video games were designed by using code.”

Find coding kits similar to “Piñata Breaker” over on Microsoft Imagine, which connects you with the tools and knowledge you need to create, code and develop your ideas. So whether you’re new to coding, studying it in school, or planning for your career, you can dream big, build creatively, and boldly bring your ideas to life.

Excite Translate (エキサイト翻訳) adopts Microsoft Translator!

MSDN Blogs - Wed, 08/05/2015 - 01:03
MSDN Blogs - Wed, 08/05/2015 - 01:03
Starting today, August 5, Excite Co., Ltd.'s "Excite Translate" (エキサイト翻訳) has begun offering a service that uses our Translator API, one of the Bing Solutions. In this collaboration, our Translator API is used to translate the following 23 languages: Arabic, Indonesian, Ukrainian, Urdu, Estonian, Dutch, Swedish, Slovak, Slovenian, Thai, Czech, Turkish, Hungarian, Hindi, Finnish, Bulgarian, Vietnamese, Hebrew, Persian, Polish, Latvian, Lithuanian, and Romanian. For details, please see the press release from Excite Co., Ltd. below. Through this collaboration, we will meet the translation needs of even more customers, and our Microsoft Translator...(read more)

