Feed aggregator

Coda

MSDN Blogs - Fri, 11/28/2014 - 21:12

Over 15 years ago, I joined Microsoft, and I have worked in and around Visual Studio ever since. It seems like eons ago given how much things have changed in this industry, this company, my life, and the world over the past 5716 days. It amuses me that my earliest projects live on in MSDN Library: the Online Bookstore sample and an article about coding techniques and programming practices. Fast forward through Visual Studio .NET 2002, Visual Studio .NET 2003, Visual Studio 2005, Visual Studio 2008, Visual Studio 2010, Visual Studio 2012, Visual Studio 2013, and now Visual Studio 2015 Preview, and you end up where we are today. When I joined Microsoft, .NET 1.0 was in development and wasn't even public - now it's open source.

Late November is when many Americans celebrate Thanksgiving and give thanks for what they have. I'm thankful for my time at Microsoft and the many awesome and interesting people who I worked and traveled with over the years. When she was little, my daughter, Stephanie, used to draw on my whiteboard (at least the part she could reach), and now she's a college graduate and engaged to be married. I'm thankful for what my career and benefits at Microsoft have afforded her. And most of all, I'm thankful to my wife, Nicole, for putting up with the countless times I worked weekends and holidays, brought a work laptop on vacation, missed dinner with the family, and pulled all-nighters readying for another product launch.

All the journeys of this great adventure
It didn't always feel that way
I wouldn't trade them because I made them
The best I could, and that's enough to say

Some days were dark
I wish that I could live it all again
Some nights were bright
I wish that I could live it all again

Walt Disney is quoted as saying, “I only hope that we don't lose sight of one thing - that it was all started by a mouse.” Let’s not lose sight of one thing – that Micro Soft was all started by a developer tool.

The future disappears into memory,
With only a moment between,
Forever dwells in that moment,
Hope is what remains to be seen.


Use Remote PowerShell to Manage Your Azure PaaS Compute Instances

MSDN Blogs - Fri, 11/28/2014 - 16:35

The Windows Remote Management (WinRM) service implements the WS-Management protocol in Windows. Many remote management tasks are made possible with WinRM, as IT administrators are well aware. There are both command-line and PowerShell ways to accomplish most of these tasks. But what does this have to do with Azure web roles and Azure worker roles?

Imagine for a moment that your API is running in an Azure web role.  You just updated it, and for some odd reason it’s not working right.  But you’ve seen this before.  The AppPool needs to be restarted.  On each instance.  There are 10 of them.  You don’t want to, but you’re going to have to log in to each instance and manually restart the AppPools.  Or, maybe your app manages some local storage.  Periodically you need to check to see how full it is, perhaps delete old log files, whatever.  Wouldn’t it be great if you could write a script that runs on your desktop that would loop through each instance and do the task?

Here’s my technical tooling configuration:

  • Visual Studio 2013 Update 4
  • Azure SDK 2.4
  • Windows 8.1 laptop
  • Azure web role on OS Family 4 (Windows Server 2012 R2)

Here are the steps to accomplish this:

  • Add a startup task to your web or worker role that does the following:
    • create a user with administrator rights (so your workstation can authenticate into the instance)
    • create a WinRM listener on HTTPS and the default port 5986
  • Add an SSL certificate (if you don’t already have one) to the solution
  • Add an InstanceInputEndpoint to the Service Definition of your role.
  • Deploy your service to Azure
  • From PowerShell, Enter-PSSession to each instance by port number, execute whatever task you need to, and Exit-PSSession.

A key feature of the Azure platform that makes this possible is the InstanceInputEndpoint.  This feature makes it possible to direct communications to a specific instance of a multi-instanced PaaS role – either web or worker role.  Here's what it looks like (the Service Definition XML appears below).

And note: the InstanceInputEndpoint is in addition to other endpoints that you have defined.  So, you can have port 80 or 443 for your web traffic, PLUS you can define an InstanceInputEndpoint that arrives at port 5986 of each compute instance. That's great, because it allows me to use WinRM in its default configuration.

And finally, the details:  (Indentation to accommodate long lines.)

Startup task.  Add the following lines to the Service Definition file.  The <WebRole line is only present for context. 

<WebRole name="WebRole1" vmsize="Small">
  <Startup>
    <Task commandLine="EnableWinRM.cmd" executionContext="elevated" taskType="simple" />
  </Startup>

EnableWinRM.cmd:

net user winrm_user > nul && echo User Exists || (
    net user winrm_user S3cr3tl7 /add >> EnableWinRMLog.txt
    net localgroup administrators winrm_user /add >> EnableWinRMLog.txt
    echo User Created >> EnableWinRMLog.txt
)
PowerShell -command Set-ExecutionPolicy -ExecutionPolicy Unrestricted >> EnableWinRMLog.txt
PowerShell .\EnableWinRM.ps1 >> EnableWinRMLog.txt
exit /B 0

EnableWinRM.ps1:  (the thumbprint is that of the SSL cert that you uploaded to the Azure cloud service)

$thumbprint = 'XXXXXXXC8131944706F7DDD7001DCA8F'
$certId = '<your hostname>'
winrm create winrm/config/listener?Address=*+Transport=HTTPS `@`{Hostname=`"($certId)`"`; CertificateThumbprint=`"($thumbprint)`"`}
Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 2000

And now the Service Definition for the InstanceInputEndpoint.

<Endpoints>
  <InputEndpoint name="Endpoint1" protocol="http" port="80" />
  <InstanceInputEndpoint name="WinRM" localPort="5986" protocol="tcp">
    <AllocatePublicPortFrom>
      <FixedPortRange min="30000" max="30100"/>
    </AllocatePublicPortFrom>
  </InstanceInputEndpoint>
</Endpoints>

Note that I show the definition of Endpoint1 on port 80, but just for context; it's not necessary for the InstanceInputEndpoint definition to be valid.  And the name 'WinRM' is arbitrary.  What's not arbitrary is the localPort, nor the FixedPortRange.  Be sure you have enough ports defined for all of your instances.  After deploying your service to Azure, use RDP to verify that all's well.  In e:\approot\bin\EnableWinRMLog.txt you should see favorable-looking messages.

Now for the fun part!

Here’s a PowerShell script that will restart the web role’s AppPool.  Run it on your workstation. 

# Run this line by itself. A dialog will ask for the
# userid/password in EnableWinRM.cmd
$cred = Get-Credential

# Then run this loop
for ($i = 0; $i -lt 3; $i++)
{
    $port = 30000 + $i
    $uri = "https://golive-XXXX.cloudapp.net:$port"
    $psopt = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
    Enter-PSSession -ConnectionUri $uri -Credential $cred `
        -SessionOption $psopt `
        -Authentication Negotiate
    Import-Module WebAdministration
    $pool = Get-ChildItem IIS:\AppPools | Where-Object {$_.Name -NotLike '.NET*'}
    Restart-WebAppPool $pool.Name
    Exit-PSSession
}
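As a hedged aside (my sketch, not from the original post): Enter-PSSession is primarily an interactive cmdlet, so for fully unattended scripts the usual alternative is Invoke-Command, which runs the script block remotely without entering a session. The URI, port range, and instance count below are the same assumptions as above.

$cred = Get-Credential
$psopt = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
for ($i = 0; $i -lt 3; $i++)
{
    $uri = "https://golive-XXXX.cloudapp.net:$(30000 + $i)"
    Invoke-Command -ConnectionUri $uri -Credential $cred `
        -SessionOption $psopt -Authentication Negotiate `
        -ScriptBlock {
            Import-Module WebAdministration
            # Restart every AppPool except the built-in .NET ones
            Get-ChildItem IIS:\AppPools |
                Where-Object { $_.Name -notlike '.NET*' } |
                ForEach-Object { Restart-WebAppPool $_.Name }
        }
}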

Cheers!

Managing Azure VMs with Azure PowerShell-Part 1

MSDN Blogs - Fri, 11/28/2014 - 16:33


While creating some VMs in my lab environment, I thought: why not post the steps as a reference for others?

Service Configurations
1. Storage Accounts: satishlocalstorage, satishvnet1sa
2. Cloud Service: satishcloud, satishVnet1cloudsvc
3. VNET: Satishnetwork (infrasubnet, websubnet, SQLsubnet), Satishvnet1 (infrasubnet1, appsubnet1)


Goal

1. Deploy VMs in a specific storage account, cloud service and VNET
2. Setting a static IP address for your existing VM
3. Create a New VM and configure a static IP during the VM deployment
4. Capturing a Sysprep'ed / generalized VM testvm2 as a template for future deployments & deploy the image
5. Capturing a running VM image for a quick snapshot & redeploy the VM to a good point in time from the latest capture

1. Deploy VMs in a specific storage account, cloud service and VNET
Associate the subscription with the storage account:
PS C:\> Set-AzureSubscription -SubscriptionName "yoursubscriptionname" -CurrentStorageAccountName "satishlocalstorage"
(All VMs will now be created in the "satishlocalstorage" account by default.)
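As a quick sanity check (my addition, not part of the original walkthrough), you can confirm which storage account is now the default:

PS C:\> (Get-AzureSubscription -Current).CurrentStorageAccountName

(This should print satishlocalstorage.)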

Create a VM from the latest image release date (best practice):
PS C:\> $image=(get-azurevmimage | where {$_.imagefamily -eq "windows Server 2012 R2 DataCenter"} | sort publisheddate -descending | select-object -first 1).imagename

PS C:\> New-AzureVMConfig -Name "testvm1" -InstanceSize "Basic_A1" -imagename $image | add-azureprovisioningconfig -windows -adminusername userid -Password yourpassword | set-azuresubnet -subnetnames "infrasubnet" | New-AzureVM -ServiceName "satishcloud" -vnetname "satishnetwork"

VM status:
PS C:\> Get-AzureVM -ServiceName "satishcloud" -name "testvm1"  | select-object name, ipaddress, powerstate, servicename

Name        IpAddress      PowerState     ServiceName
----        ---------      ----------     -----------
testvm1     10.101.2.4     Started        satishcloud

2. Setting a static IP address for your existing VM

Configure VM "testvm1" with a static IP address of 10.101.2.99 or 10.101.2.5 (whichever is available):

PS C:\> Test-AzureStaticVNetIP -vnetname "satishnetwork" -IPAddress "10.101.2.5"

PS C:\> Test-AzureStaticVNetIP -vnetname "satishnetwork" -IPAddress "10.101.2.99"

IsAvailable          : True
AvailableAddresses   : {}
OperationDescription : Test-AzureStaticVNetIP
OperationId          : d497b983-d876-3c22-a266-c4eb0c0c773e
OperationStatus      : Succeeded
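A side note of mine, not in the original: when an address is already in use, IsAvailable comes back False and AvailableAddresses suggests free addresses nearby, which is handy in scripts:

PS C:\> (Test-AzureStaticVNetIP -VNetName "satishnetwork" -IPAddress "10.101.2.4").AvailableAddresses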

PS C:\> get-azurevm -servicename 'satishcloud' -name 'testvm1' | Set-AzureStaticVNetIP -IPAddress "10.101.2.99" | Update-AzureVM
(This command will restart and update the VM with the new static IP address.)

PS C:\> Get-AzureVM -ServiceName "satishcloud" -name "testvm1"  | select-object name, ipaddress, powerstate, servicename

Name        IpAddress       PowerState     ServiceName
----        ---------       ----------     -----------
testvm1     10.101.2.99     Started        satishcloud

PS C:\> $staticipvm=get-azurevm -servicename 'satishcloud' -name 'testvm1'
PS C:\> Get-AzureStaticVNetIP -vm $staticipvm

IPAddress
---------
10.101.2.99

If you want to remove the static IP for a VM, run:
Get-AzureVM -ServiceName "satishcloud" -Name "testvm1" | Remove-AzureStaticVNetIP | Update-AzureVM

3. Create a new VM with a static IP address

PS C:\> $image=(get-azurevmimage | where {$_.imagefamily -eq "windows Server 2012 R2 DataCenter"} | sort publisheddate -descending | select-object -first 1).imagename

PS C:\> New-AzureVMConfig -Name "Testvm2" -InstanceSize "Basic_A1" -imagename $image | add-azureprovisioningconfig -windows -adminusername username -Password yourpassword | set-azuresubnet -subnetnames "infrasubnet" | Set-AzureStaticVNetIP -IPAddress "10.101.2.100" | New-AzureVM -ServiceName "satishcloud" -vnetname "satishnetwork"

4. Capture a Sysprep'ed/generalized VM "testvm2" as a template for future deployments and deploy the image

PS C:\>  Save-AzureVMImage -ServiceName satishcloud -Name testvm2 -ImageName webtemplate1 -ImageLabel webtemplate1 -OSState Generalized
(This command will delete the VM testvm2 and create an image from it.)

Deploy a new VM called webvm1 from the captured generalized image webtemplate1:

PS C:\> New-AzureVMConfig -Name "webvm1" -InstanceSize "Basic_A1" -imagename webtemplate1 | add-azureprovisioningconfig -windows -adminusername userid1 -Password yourpassword | set-azuresubnet -subnetnames "infrasubnet" | Set-AzureStaticVNetIP -IPAddress "10.101.2.100" | New-AzureVM -ServiceName "satishcloud" -vnetname "satishnetwork"

PS C:\> get-azurevm -ServiceName "satishcloud" -name "webvm1" | select-object name, ipaddress, status
Name       IpAddress       Status
----       ---------       ------
webvm1     10.101.2.100    ReadyRole


5. Capture a running VM "webvm1" for a quick snapshot and redeploy the VM to a good point in time from the latest capture


PS C:\> Save-AzureVMImage -ServiceName satishcloud -Name webvm1 -ImageName webvm1snapshot -ImageLabel webvm1snapshot -OSState Specialized
PS C:\> Get-AzureVMImage | where {$_.imagename -like "web*"}  |select-object imagename,category,RoleName
                         

ImageName         Category    RoleName
---------         --------    --------
webtemplate1      User        Testvm2
webvm1snapshot    User        webvm1

(Next I deleted the VM webvm1 and redeployed it; all previous configurations were intact, including domain join in my case.)

PS C:\> Remove-AzureVM -ServiceName "satishcloud" -name "webvm1" -DeleteVHD
PS C:\> New-AzureVMConfig -Name "webvm1" -InstanceSize "Basic_A1" -imagename webvm1snapshot | set-azuresubnet -subnetnames "infrasubnet" | Set-AzureStaticVNetIP -IPAddress "10.101.2.100" | New-AzureVM -ServiceName "satishcloud" -vnetname "satishnetwork"

Note: when the VM is back online and you log in for the first time, you will see a dialog stating that the VM was not shut down properly. This is expected behavior, as we captured a running VM.

Finally, stop all VMs in cloud service "satishcloud":

PS C:\> get-azurevm -ServiceName "satishcloud" | stop-azurevm
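To bring the whole lab back up later, the same pipeline works in reverse (my addition; the final select just confirms the power state):

PS C:\> get-azurevm -ServiceName "satishcloud" | start-azurevm
PS C:\> get-azurevm -ServiceName "satishcloud" | select-object name, powerstate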

DISCLAIMER: This posting is provided "AS IS" with no warranties and confers no rights 

 

 

Sometimes the site Administrator is unable to change the Regional Settings of a site collection.

MSDN Blogs - Fri, 11/28/2014 - 15:29

The site Administrator is unable to change the regional settings for a site collection. Whenever they try to do so, a message similar to the following appears:

"The column"Workflow History Parent Instance" in the list or library "CreateDateExpiredTasks History" has been marked for indexing. Please turn off all indexed columns before changing the collation of this site. You may re-index those columns after the collation of the site has been changed."

Or

"The column "Invitation" in the list or library "correlation ID number" has been marked for indexing. Please turn off all indexed columns before changing the collation of this site. You may re-index those columns after the collation of the site has been changed."

Or 

"The site regional settings cannot be changed because one or more columns has been marked for indexing";

You could try to get rid of the aforementioned columns manually, but that could take hours on a medium to large site.

The best way I have found so far to be able to change the Regional Settings under this condition is to make sure the "Sort Order" option is set to "General" on the Regional Settings page.

 I hope this helps.

 

Have a nice and productive day !

How to delete a problematic External User account from a SharePoint Online site Users List.

MSDN Blogs - Fri, 11/28/2014 - 15:03

As this issue keeps showing up again and again in support requests, I thought it is worth mentioning here.

The problem is that the External User account somehow gets associated with a different UPC in the SharePoint Online User List, or gets corrupted in some other way. Either way, the first thing to do is find it and get rid of it so that we can recreate a healthy account in the User List.

I would like to mention that I have seen cases where the Administrator executed the Get-SPOExternalUser command in the PowerShell administration console but the External User account was not found using this method.

The solution in this case was to look for the External User account in the Users List page:

<URL>/_layouts/15/people.aspx?MembershipGroupId=0

If the account appears listed there, just delete it and then re-invite the user. This time around it should work.

I hope this works for you if you are facing a similar problem.

Have a nice and productive day !

Fight for your right to monkey

MSDN Blogs - Fri, 11/28/2014 - 14:05
The gentle start to a rant...

In any software system, especially but certainly not just in the cloud, things can and will go horribly wrong in a variety of ways given enough time/opportunity. Some of those are unanticipated disasters: bugs, floods, hurricanes, etc. But a good number of those are just tradeoffs that are consciously made by the developers of the system in order to make other gains:

  • a NoSQL system that anticipates some data inconsistency or even data loss to gain scale/performance,
  • a cloud system that periodically reboots your VMs to gain automatic/unmanaged updates and security,
  • ...

I'm not a purist, and I appreciate those tradeoffs and have made them myself many times, so I don't think these are necessarily bad or evil; but of course I'm writing this post to complain/rally about something, and it's this: if you have a platform/building block where you've made such a tradeoff and anticipate some way of things going wrong, you better give me a way to induce this failure mode to test the rest of the system with it. In other words: saying "my system occasionally decides to run around the room like a mad man to vent steam and you should design for that" is an appreciated but an almost useless statement if you don't give me a way to induce this running around mode in a controlled environment as I'm designing the room to make sure that when it happens it doesn't set the rest of the room on fire.

Doing right

For a decent example of doing this right, look at VM reboots in Microsoft Azure. When you have a Cloud Service in Azure, the system reserves the right to reboot or even reimage any of your role instances (VMs) at any time to deal with hardware failures, updates, etc. Your system should be resilient to that, usually by having redundant instances spread across fault domains. This is well documented in multiple places, but even more importantly: Azure gives you REST APIs, PowerShell cmdlets and Portal UI buttons to reboot, reimage or rebuild any of your role instances, which is a great way to test that your application is indeed resilient to this. So you can use something like WazMonkey to randomly disturb your staging (or hey, if you're brave enough, even your production) environment and make sure you're all good, or you can do more targeted testing to e.g. make sure your startup scripts can survive being interrupted by a reboot without ill effects.
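To make that concrete, here is a minimal sketch (mine, not from the post) of inducing the failure on demand with the Azure Service Management PowerShell cmdlet; the service and instance names are placeholders:

# Reboot one role instance of a cloud service on purpose
Reset-AzureRoleInstance -ServiceName "myservice" -Slot "Staging" `
    -InstanceName "WebRole1_IN_0" -Reboot

# Or exercise the harsher failure mode
Reset-AzureRoleInstance -ServiceName "myservice" -Slot "Staging" `
    -InstanceName "WebRole1_IN_0" -Reimage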

Note: I don't think this example is perfect. Ideally the Azure team would have a standard supported test kit that induces all known failure modes into your application (reboot, reimage with/without data loss, etc.) and stand behind it. But as far as things go, I feel good about putting this as my "good" example.

You done me wrong

So what are examples of getting this wrong? Alas, there are too many of those...

Let me pick on the SQL Azure team first (since I worked there, so I am as much to blame as anyone): SQL Azure very much retains the right to drop your connection at any time for a variety of reasons and encourages good client behavior to tolerate this (with the various client SDKs doing a decent job of abstracting this resilience), and yet I don't know of any way to induce such connection drops in a reasonable way.

Another big example is the networking stack: everyone knows that TCP/IP and our networking infrastructure is notoriously noisy and error-prone, and yet it's surprisingly hard to induce all these failures (packet drops, latency, etc.).

Rallying cry

For the most part I've achieved my goal with this post, which is to cathartically rant about this so I can steel myself for the next smug "you should've designed for that" or "we don't support that" statement from a platform developer. But other than ranting, a side goal is to get anyone reading this to be less tolerant of such unsupported statements, and start demanding "OK where's my test kit" every time they hear it. If enough of us start demanding this especially from the most basic building blocks in our computer systems, I sincerely believe our systems will be significantly and noticeably more stable. In a way my attitude towards this is an extension of my attitude towards all software I write: I never say I've actually written my software to perform X until I see it performing X in practice, so I can never sincerely say I've designed/written my software to be resilient to failure Y until I see it actually handle Y in practice. So if I can't induce Y, I'll have to let my software just hang out in the wild until Y inevitably happens. Which is a crazy slow way to test software, and unfortunately the way many of us do it today.

I am now helping out a little bit with Hotmail and outlook.com

MSDN Blogs - Fri, 11/28/2014 - 13:03

One of the projects I will be working on going forward is helping out with some of the filtering with outlook.com.

In case you haven’t heard, over the past few months Microsoft has merged together the spam filtering units responsible for protecting Office 365 (also known as Exchange Online Protection (EOP), formerly known as Forefront Online Protection for Exchange (FOPE), previously known as Frontbridge) and outlook.com (formerly known as Hotmail). Instead of two different teams with some data sharing, it will be one team with lots of data sharing although not necessarily the same filters – consumer email and enterprise email are different.

To that end, I will be taking over some duties for Hotmail that also show up in Office 365. For example, Hotmail supports both DKIM and DMARC, so the equivalent feature in Office 365 will be the same one Hotmail uses once it moves over to the Office 365 infrastructure (I am working on the Office 365 version of DKIM and DMARC). Similarly, the Boomerang feature in Office 365 is the same one that Hotmail currently uses and will use.

The main piece I will be inheriting is some of the Safety UX (user experience) for both systems' web interfaces. You may have noticed that Hotmail shows a green shield next to trusted users in its webmail.

We are looking to ensure that both Office 365 and outlook.com – on web, mobile, and tablet – all show the same thing for both trusted senders and spoofed messages. Thus, if you're an outlook.com user and you're used to seeing a red line when a message is spoofed, when your business goes to Office 365 you'll see the same thing (this does not necessarily mean that the Outlook client will show the same thing; that's not decided yet, and I don't know if my scope of responsibilities includes it).

So, my component is figuring out how it looks in some of the web clients and how (if?) we can make it better than it is today; what criteria to use to show these properties; and how it all ties together with spam, phishing, and authentication.

Before you get too excited that “Finally! Someone I know works in Hotmail and I can get them to change Feature X so I can deliver to Customer Y!” let me say this – I don’t work on the general spam filter in Hotmail, nor work with their deliverability team. I know people there but don’t have that much input. So, I can’t help you much in that regard.

But at least you know what I’ll be working on for the next little while.

Transformation of the IT function and the importance of the IT Pro

MSDN Blogs - Fri, 11/28/2014 - 11:10

 

In recent years we have experienced a major transformation of the IT platform: physical servers have been left behind, and now we talk not only about virtualization but about cloud computing.

This transformation brings new challenges for IT professionals, who must now specialize in one or two solutions that enable the business.

To support IT Pros in their development, I want to invite you to take a series of courses that will help you better understand the role of the IT Pro in the evolution toward cloud computing.

I also know that self-study is not everything, so if you complete one of the courses and send your evidence following the steps described below, I will invite you to a live online session, to be held in January 2015, with one of the IT industry's experts, who will share their knowledge about the challenges and opportunities facing today's IT Pro.

All you have to do is:

  1. Take the course in the academy about the Evolution of the IT Pro to the cloud here
  2. Watch all the videos from beginning to end
    *This is monitored through the user account linked to the platform, so watching the videos only partially will not count.
  3. Pass the course at 100%. Send a screenshot showing that it was completed successfully, with the name of the course and your platform user name visible.
  4. Send that evidence by email to itmvamx@microsoft.com using the subject line: MVA IT PRO
    -It is very important to use this subject line; otherwise the message may get lost in the inbox.

For an example of the evidence you should send, see the following link.

By return mail you will receive a notification before December 14 letting you know whether you have won one of the prizes*, along with the invitation to the live session with Microsoft experts.
Note: those who do not win a prize but complete the course and send their evidence will still receive the invitation to the session with the experts.

Taken from the blog of Rubén Colomo (Infrastructure Specialist at Microsoft Mexico)

 

* Prizes will be awarded as follows to the first people who send correct evidence:
First 10 winners: Arc Keyboard
Next 40 winners: Black Arc Mouse
Next 20 winners: Black Wenger backpack
Next 15 winners: LifeCam Cinema
Next 100 winners: A T-shirt

Valid only for residents of Mexico.

Good luck!
See the terms and conditions of this initiative here.

Yo-Yo Simulations - Small Basic Featured Programs

MSDN Blogs - Fri, 11/28/2014 - 11:09

In the November 2014 challenges...

 

LitDev included this Physics challenge:

Physics Challenge

Write a program to model the movement of a yo-yo.  Try to get the rotation consistent with the up and down movement, and perhaps even add the jerk required when the yo-yo is fully extended to keep it going.

The idea is not necessarily the exact physics, but something that unmistakably looks like a yo-yo in action, maybe even some tricks.

 


Let's look at some of the results!

  

From Nonki Takahashi:

This is my solution for physics challenge: WWD539.

Use up arrow key to jerk the yo-yo.

 

From jalpc:

My solution for physics challenge: JRT022

 

  

Thank you to our community contributors!

   - Ninja Ed

TEMPDB files fail to be created with error: "CREATE FILE encountered operating system error 32"

MSDN Blogs - Fri, 11/28/2014 - 10:59
The TEMPDB database is a unique database, as its files are deleted and recreated every time you restart the SQL Server services. If all the files of TEMPDB fail to be recreated for some reason, the SQL Server service won't be able to start. In a specific scenario, the error we saw was: CREATE FILE encountered operating system error 32(The process cannot access the file because it is being used by another process.) while attempting to open or create the physical file '<folder\tempdb.mdf>...(read more)

Sample chapter: Decision and Loop Statements in Microsoft Visual C++

MSDN Blogs - Fri, 11/28/2014 - 08:00

In this chapter from Microsoft Visual C++/CLI Step by Step, you will see how to use these statements to control the flow of execution through a C++/CLI application.

After completing this chapter, you will be able to:

  • Make decisions by using the if statement.

  • Make multiway decisions by using the switch statement.

  • Perform loops by using the while, for, and do-while statements.

  • Perform unconditional jumps in a loop by using the break and continue statements.

All high-level languages provide keywords with which you can make decisions and perform loops. C++ is no exception. C++ provides the if statement and the switch statement for making decisions, and it provides the while, for, and do-while statements for performing loops. In addition, C++ provides the break statement to exit a loop immediately and the continue statement to return to the start of the loop for the next iteration.

In this chapter, you will see how to use these statements to control the flow of execution through a C++/CLI application.

Read the complete chapter here: https://www.microsoftpressstore.com/articles/article.aspx?p=2222444.

A user's SID can change, so make sure to check the SID history

MSDN Blogs - Fri, 11/28/2014 - 07:00

It doesn't happen often, but a user's SID can change. For example, when I started at Microsoft, my account was in the SYS-WIN4 domain, which is where all the people on the Windows 95 team were placed. At some point, that domain was retired, and my account moved to the REDMOND domain. We saw some time ago that the format of a user SID is

S-1-     version number (SID_REVISION)
-5-      SECURITY_NT_AUTHORITY
-21-     SECURITY_NT_NON_UNIQUE
-w-x-y-  the entity (machine or domain) that issued the SID
-z       the unique user ID for that entity

The issuing entity for a local account on a machine is the machine to which the account belongs. The issuing entity for a domain account is the domain.

If an account moves between domains, the issuing entity changes, which means that the old SID is not valid. A new SID must be issued.

Wait, does this mean that if my account moves between domains, then I lose access to all my old stuff? All my old stuff grants access to my old SID, not my new SID.

Fortunately, this doesn't happen, thanks to the SID history. When your account moves to the new domain, the new domain controller remembers all the previous SIDs you used to have. When you authenticate against the domain controller, it populates your token with your SID history. In my example, it means that my token not only says "This is user number 271828 on the REDMOND domain", it also says "This user used to be known as number 31415 on the SYS-WIN4 domain." That way, when the system sees an object whose ACL says, "Grant access to user 31415 on the SYS-WIN4 domain," then it should grant me access to that object.

The existence of SID history means that recognizing users when they return is more complicated than a simple EqualSid, because EqualSid will say that "No, S-1-5-21-REDMOND-271828 is not equal to S-1-5-21-SYS-WIN4-31415," even though both SIDs refer to the same person.

If you are going to remember a SID and then try to recognize a user when they return, you need to search the SID history for a match, in case the user changed domains between the two visits. The easiest way to do this is with the AccessCheck function. For example, suppose I visited your site while I belonged to the SYS-WIN4 domain, and you remembered my SID. When I return, you create a security descriptor that grants access to the SID you remembered, and then you ask AccessCheck, "If I had an object that granted access only to this SID, would you let this guy access it?"

(So far, this is just recapping stuff I discussed a few months ago. Now comes the new stuff.)

There are a few ways of building up the security descriptor. In all the cases, we will create a security descriptor that grants the specified SID some arbitrary access, and then we will ask the operating system whether the current user has that access.

My arbitrary access shall be

#define FROB_ACCESS 1 // any single bit less than 65536

One way to build the security descriptor is to let SDDL do the heavy lifting: Generate the string D:(A;;1;;;⟨SID⟩) and then pass it to StringSecurityDescriptorToSecurityDescriptor.
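As an aside (my addition, not part of the original article): if you want to sanity-check such an SDDL string before using it, .NET can parse it, for example from PowerShell; the SID here is a placeholder.

$sid = 'S-1-5-21-0-0-0-1001'   # placeholder SID
$sd = New-Object System.Security.AccessControl.RawSecurityDescriptor "D:(A;;1;;;$sid)"
$sd.DiscretionaryAcl | Format-List   # shows the single access-allowed ACE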

Another is to build it up with security descriptor functions. I defer to the sample code in MSDN for an illustration.

The hard-core way is just to build the security descriptor by hand. For a security descriptor this simple, the direct approach involves the least amount of code. Go figure.

The format of the security descriptor we want to build is

struct ACCESS_ALLOWED_ACE_MAX_SIZE
{
    ACCESS_ALLOWED_ACE Ace;
    BYTE SidExtra[SECURITY_MAX_SID_SIZE - sizeof(DWORD)];
};

The ACCESS_ALLOWED_ACE_MAX_SIZE structure represents the maximum possible size of an ACCESS_ALLOWED_ACE. The ACCESS_ALLOWED_ACE leaves a DWORD for the SID (SidStart), so we add additional bytes afterward to accommodate the largest valid SID. If you wanted to be more C++-like, you could make ACCESS_ALLOWED_ACE_MAX_SIZE derive from ACCESS_ALLOWED_ACE.

struct ALLOW_ONLY_ONE_SECURITY_DESCRIPTOR
{
    SECURITY_DESCRIPTOR_RELATIVE Header;
    ACL Acl;
    ACCESS_ALLOWED_ACE_MAX_SIZE Ace;
};

const ALLOW_ONLY_ONE_SECURITY_DESCRIPTOR c_sdTemplate = {
    // SECURITY_DESCRIPTOR_RELATIVE
    {
        SECURITY_DESCRIPTOR_REVISION,                    // Revision
        0,                                               // Reserved
        SE_DACL_PRESENT | SE_SELF_RELATIVE,              // Control
        FIELD_OFFSET(ALLOW_ONLY_ONE_SECURITY_DESCRIPTOR,
                     Ace.Ace.SidStart),                  // Offset to owner
        FIELD_OFFSET(ALLOW_ONLY_ONE_SECURITY_DESCRIPTOR,
                     Ace.Ace.SidStart),                  // Offset to group
        0,                                               // No SACL
        FIELD_OFFSET(ALLOW_ONLY_ONE_SECURITY_DESCRIPTOR,
                     Acl),                               // Offset to DACL
    },
    // ACL
    {
        ACL_REVISION,                                    // Revision
        0,                                               // Reserved
        sizeof(ALLOW_ONLY_ONE_SECURITY_DESCRIPTOR) -
        FIELD_OFFSET(ALLOW_ONLY_ONE_SECURITY_DESCRIPTOR, Acl), // ACL size
        1,                                               // ACE count
        0,                                               // Reserved
    },
    // ACCESS_ALLOWED_ACE_MAX_SIZE
    {
        // ACCESS_ALLOWED_ACE
        {
            // ACE_HEADER
            {
                ACCESS_ALLOWED_ACE_TYPE,                 // AceType
                0,                                       // flags
                sizeof(ACCESS_ALLOWED_ACE_MAX_SIZE),     // ACE size
            },
            FROB_ACCESS,                                 // Access mask
        },
    },
};

Our template security descriptor says that it is a self-relative security descriptor with an owner, group and DACL, but no SACL. The DACL consists of a single ACE. We set up everything in the ACE except for the SID. We point the owner and group to that same SID. Therefore, this security descriptor is all ready for action once you fill in the SID.

BOOL IsInSidHistory(HANDLE Token, PSID Sid)
{
    DWORD SidLength = GetLengthSid(Sid);
    if (SidLength > SECURITY_MAX_SID_SIZE)
    {
        // Invalid SID. That's not good.
        // Somebody is playing with corrupted data.
        // Stop before anything bad happens.
        RaiseFailFastException(nullptr, nullptr, 0);
    }

    ALLOW_ONLY_ONE_SECURITY_DESCRIPTOR Sd = c_sdTemplate;
    CopyMemory(&Sd.Ace.Ace.SidStart, Sid, SidLength);

As you can see, generating the security descriptor is a simple matter of copying our template and then replacing the SID. The next step is performing an access check of the token against that SID.

    const static GENERIC_MAPPING c_GenericMappingFrob = {
        FROB_ACCESS,
        FROB_ACCESS,
        FROB_ACCESS,
        FROB_ACCESS,
    };
    PRIVILEGE_SET PrivilegeSet;
    DWORD PrivilegeSetSize = sizeof(PrivilegeSet);
    DWORD GrantedAccess = 0;
    BOOL AccessStatus = 0;
    return AccessCheck(&Sd, Token, FROB_ACCESS,
                       const_cast<PGENERIC_MAPPING>(&c_GenericMappingFrob),
                       &PrivilegeSet, &PrivilegeSetSize,
                       &GrantedAccess, &AccessStatus) && AccessStatus;
}

So let's take this guy out for a spin. Since I don't know what is in your SID history, I'm going to pick something that should be in your token already (Authenticated Users) and something that shouldn't (Local System).

// Note: Error checking elided for expository purposes.
void CheckWellKnownSid(HANDLE Token, WELL_KNOWN_SID_TYPE type)
{
    BYTE rgbSid[SECURITY_MAX_SID_SIZE];
    DWORD cbSid = sizeof(rgbSid);
    CreateWellKnownSid(type, NULL, rgbSid, &cbSid);
    printf("Is %d in SID history? %d\n", type,
           IsInSidHistory(Token, rgbSid));
}

int __cdecl wmain(int argc, wchar_t **argv)
{
    HANDLE Token;
    ImpersonateSelf(SecurityImpersonation);
    OpenThreadToken(GetCurrentThread(), TOKEN_QUERY, TRUE, &Token);
    RevertToSelf();
    CheckWellKnownSid(Token, WinAuthenticatedUserSid);
    CheckWellKnownSid(Token, WinLocalSystemSid);
    CloseHandle(Token);
    return 0;
}

Related reading: Hey there token, long time no see! (Did you do something with your hair?)

Join us for Microsoft Visual Studio vNext & Azure event with live Q&A with Product Team

MSDN Blogs - Fri, 11/28/2014 - 06:09

A few weeks ago Microsoft hosted an event in New York called Connect(); where some key announcements were made around .NET, mobile, cloud development and software engineering practices. We will be re-delivering the same event locally in Johannesburg, where you will have the opportunity to get an insider's view of the announcements made, including a preview of the upcoming releases of Visual Studio, the .NET Framework and Microsoft Azure. You will also have the opportunity to participate in a live Q&A with the same Microsoft executives who presented at the New York event, who will be joining us online. The Microsoft executives include:

  • Brian Harry - Corporate Vice President // Developer Services
  • Scott Hanselman - Principal Program Manager // Web Platform Team
  • Amanda Silver - Principal Director, Program Management // Client Platform Tools
  • Jay Schmelzer - Partner Director, Program Management // Cloud Platform Tools
  • Ryan Salva - Principal Program Manager // Client Platform Tools

 

Come learn about Microsoft's vision for developer platforms, tools and technologies that bring developers to the centre of the mobile-first, cloud-first era. We hope to see you there! Limited seats, so register NOW!

Date:  4 December 2014
Time:  15:30 - 19:30
Venue: Auditorium 2, Microsoft Johannesburg, 3012 William Nicol Drive, Bryanston

Click here to register

Should you not be able to join us in person, you can join us online for the live Q&A with the Microsoft executives at 17h45. Join Lync Meeting

Join by phone +27113617099

Conference ID: 78740011

Kindly note we can only allow 250 delegates online, so access will be on a first come, first served basis.

Note that only the Q&A session, not the keynote, will be made available through the conference call.

Visual Studio 2015 and pricing of VMs

MSDN Blogs - Fri, 11/28/2014 - 05:44

So instead of ranting about the joys of PaaS, let's discuss Azure development environments and cost.

I really wanted to try the new ASP.NET vNext, and maybe even the possibility of running the application on Linux/Mac as well. So the first thing I need is a copy of Visual Studio 2015 Preview.
I don't want to hose my workstation by installing weird versions of VS, so I'd rather just get another computer... hmm, let's think about that... maybe a virtual one from Azure!

OK, I go to Azure and provision a new A2 machine: 2 cores, 3,5 GB of memory. It provisions very fast, no problems there. I log into the VM with Remote Desktop, fire up IE, go to Google, install Chrome, and install the new Visual Studio: http://www.visualstudio.com/en-us/downloads/visual-studio-2015-downloads-vs
This phase actually takes some time since it's like 10G of stuff that gets installed.

So while I'm waiting for the install to happen, I start thinking about the cost of this test...
Looking at the pricing at http://azure.microsoft.com/en-us/pricing/calculator/?scenario=virtual-machines, it says that a Basic A2 machine will cost me 0.111 € per hour.
That's not very much. So if I were to use this machine 8 hours per day for 20 days a month, it would cost me 17,76 € per month.
And that is definitely more than I will actually be using this machine. In reality, doing only VS testing, the real number would be closer to 4 hours a day for 5 days (about 20 hours), which makes about 2,22 €!

Think about it: I don't have to screw up my real desktop computer for this test, I get to utilize another processor for compiling and running in a totally separate environment, and it only costs me 2,22 €!
That is, if I remember to shut the VM down every day when I'm done with it. I will definitely try to.
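A minimal sketch of that daily ritual with the Azure PowerShell module (my addition; the service and VM names are placeholders). A VM only stops accruing compute charges once it is stopped and deallocated; the -Force switch skips the confirmation you get when stopping the last VM in a deployment:

# Stop and deallocate the dev VM at the end of the day
Stop-AzureVM -ServiceName "devsvc" -Name "devvm" -Force

# Next morning, start it again
Start-AzureVM -ServiceName "devsvc" -Name "devvm"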

This can be taken so much further... I could be using pre-seeded VM images or using Azure DSC tools to create specific configurations with minimal effort on my part.
I could have Visual Studio do stress testing on finished projects (there's gonna be a writeup on that later).
So many possibilities, and the cost is ridiculously small.

One would have to be a complete moron not to do this.

 

Lessons from the Media – teaching inspired by popular content

MSDN Blogs - Fri, 11/28/2014 - 05:30

This is a guest post from Hélène Fyffe, an undergraduate starting her final year at Edinburgh Napier University, having spent a year on placement with Microsoft UK Education as part of her course.

Over the next few weeks I am going to be releasing the 'Lessons from the Media' series of blogs, providing primary teachers with ideas for lesson plans based around popular media content. The aim of the blogs is to highlight learning opportunities that can be drawn from #ontrend media productions, around which teachers can create fun and engaging exercises using Microsoft Education technology to develop essential skills such as analysis, writing and creativity.

You might be questioning whether popular media content is appropriate content to use in the classroom so I'll attempt to dispel any concerns by painting a research-strewn backdrop regarding different learning styles and the role of technology in learning.

Engaging multiple learning styles

We are all well versed in the fact that extensive research is being carried out by academics to establish how teachers can better engage students. A famous study conducted by Honey and Mumford showed that we all have different learning styles, which they found had a direct influence on whether teaching methods motivated students to learn. To illustrate in the classroom context: some learners are engaged by jumping straight into a practical experience, such as learning through games (activist). Some students are more engaged when they can learn by having the time to review something and think about it, for example hearing and observing a teacher demonstrate a maths concept (reflector). Other students prefer learning by fathoming how something fits into a concept, for example by reading through a case study or problem and deducing their own conclusions (theorist). And others are motivated to learn by applying a learning outcome to the real world, such as carrying out a science experiment for themselves (pragmatist). Whilst Honey and Mumford highlighted these styles individually, it is important to recognise that many students adopt a mix of a few learning styles, leaning either towards a more cognitive learning preference or a practical learning preference.

Hopefully this helps understand why the traditional (antiquated) teaching method of lecturing a class during school lessons then giving out practical homework exercises doesn't resonate with all students, as this teaching strategy is catered towards the reflectors and theorists and would be very dull to the pragmatists and activists in the class. The challenge this poses to teachers is how to engage everyone? It may seem slightly idealistic and unrealistic, the notion of teaching in a style that will suit everyone in the class equally. However, an increasing volume of research is showing that teachers can engage more learners by adopting a 'flipped learning' approach to teaching. Flipped learning enables students to read about a topic before class and then practice the concept in class with individual support from the teacher at their disposal, which is a more holistic learning experience and caters more equally to all the learning styles. Technology has been proven to really bring flipped learning to life, with more schools adopting tablets, apps and media solutions in class to inspire students to accomplish exercises in a fun environment.

Engaging learners through relevant technology platforms

Indeed, adopting technology-enhanced learning has been proven to have a direct positive influence on motivating students due to the range of exercises and activities that can be carried out to match the multiple learning styles.

Recent research has shown that students are highly engaged when they can be taught with mediums from their own worlds that they resonate with and understand. A couple of examples of such teaching strategies are more and more teachers using YouTube very effectively as a 21st century learning tool, and of course the Skype in the Classroom phenomenon which has been seen to be adopted by over 95,000 teachers around the globe.

So where does 'lessons from the media' fall into the equation?

By basing a melange of learning exercises on the back of popular media content, I argue that teachers will be providing a fun experience that learners can relate with and get really involved with. As the research has shown, if you can connect with young people through mediums they know and understand, you are more likely to provide a really engaging learning experience.

I imagine that a concern for some of our readers may be that basing a lesson plan on the back of for example, a television programme could exclude students who don't perhaps have access to television at home, but I am going to be exploring 'in-class' solutions that will use mediums that aren't dependent on students having watched the content at home.

Essentially, the series of blogs will explore rich, vibrant and fun media productions that are popular today and will provide teachers with ideas to manipulate the content into interactive learning opportunities.

Make sure to tune into the first blog, Monty the Penguin!

App spotlight: Voyo.cz

MSDN Blogs - Fri, 11/28/2014 - 04:40
The VOYO.cz app offers you the chance to watch Czech and foreign films and series, attractive sports broadcasts, and TV Nova programmes, legally and without limits, not only on your computer and smart TV but also on Windows devices. For a monthly fee you get unlimited access to all the films and series on offer, which you can play anywhere and anytime –...(read more)

Internet of Things: My first experience with Galileo 2 (part 4)

MSDN Blogs - Fri, 11/28/2014 - 04:30

Today I am going to continue my series about the Galileo 2 board for beginners, and I will discuss analog inputs. But before that, I want to discuss resistors.

We already used several resistors in our projects in previous posts, but it's still hard to read the value of a particular resistor. There are two ways to do it. First, we can use a multimeter. You can buy one in any electronics shop. If you want to find a 330 Ohm resistor, you set the multimeter dial to 20 KOhm and touch the red and black probes to the two legs of the resistor. You will see something around 0.330 for a 330 Ohm resistor.

At the same time, it's not easy to find the right resistor among many types of them. Here is my box with different types of resistors, and only about 4 of them are 330 Ohm. I would probably spend up to an hour checking all of them.

So, in order to find the right resistor, you need to use the second way: you need to understand the color bands. It's not too hard. Usually you will find resistors with 4 or 5 bands. Based on these bands you can calculate the resistance of a particular resistor. Here is a small table, which will help:

Color    1st Band   2nd Band   3rd Band   Multiplier   Tolerance
Black    0          0          0          1 Ohm
Brown    1          1          1          10 Ohm       1%
Red      2          2          2          100 Ohm      2%
Orange   3          3          3          1 KOhm
Yellow   4          4          4          10 KOhm
Green    5          5          5          100 KOhm     0.5%
Blue     6          6          6          1 MOhm       0.25%
Violet   7          7          7          10 MOhm      0.10%
Grey     8          8          8                       0.05%
White    9          9          9
Gold                                      0.1          5%
Silver                                    0.01         10%
So, if you have a resistor with 4 bands, you don't use the 3rd Band column. Just find the digit for the first band and the digit for the second band, combine them into one number like <first digit><second digit>, and multiply that number by the multiplier to get the final resistance. That's why a resistor banded <Orange, Orange, Brown> gives us 330 Ohm: 33 x 10 = 330.

Let's start to create a project based on sensors, which can send inputs to our board. I wanted to use a flame sensor in the first project, but today I woke up at 4 A.M. due to a fire alarm in my building. Probably somebody forgot to turn off a stove or something like that, but the buzzer's sound was terrible, and I decided that it was a sign to stop my experiments with flame. So, we will use a photoresistor sensor instead.

If you have a modern car, you should have something like an "Auto" mode for your lights. This mode switches your lights on and off based on the amount of ambient light. We will try to emulate the same behavior in our project with a photoresistor and an LED.

Depending on your kit, you will find many types of photoresistors. In my case I have an analog sensor, which comes as a separate small board.

It has three legs, which connect the sensor to power, ground, and an analog input pin on the board. Check the datasheet for your sensor to understand how to connect it.

Since we are using an analog sensor, it should be connected to an analog pin. A photoresistor works like a variable resistor whose resistance depends on the amount of light: if you have a lot of light, the resistor will pull your voltage down. Galileo converts the analog input to a digital value, which tells us the quantity of light. Here is simple code you can use to test the project:

int led = 8;
int photo = A0;

void setup()
{
    pinMode(led, OUTPUT);
    pinMode(photo, INPUT);
}

void loop()
{
    int res = analogRead(photo);
    if (res <= 700)
    {
        digitalWrite(led, LOW);
    }
    else
    {
        digitalWrite(led, HIGH);
    }
    delay(1000);
}

The Galileo converter turns the analog input into a number between 0 and 1023. In this code I decided that if the number is 700 or below (the resistor drops enough voltage), we switch our LED off, because there is enough light. If the resistor doesn't drop enough voltage (it's dark), we switch our LED on. Based on this example you can run many experiments with light and analog input.

Since we are using Visual Studio, it's easy to launch our projects in debug mode. When you stop debugging, Visual Studio kills your process automatically. Visual Studio also allows us to set breakpoints, review variable values, and so on. You may use the Log method to print something to the output screen in debug mode. If you launch your application without debug mode, you need to kill the process using Telnet, as I mentioned in previous articles.

So, Visual Studio makes our life better, but there is still a question: how do we launch our project right after the operating system starts? This matters once you have the final version of your project. Because we are using Windows, it's easy to do in two steps. First, navigate to the following folder: \\mygalileo\c$\Windows\System32\Boot (you will need to enter your credentials). Next, edit the autorun.cmd file there and add one more command, like start YourFolder\YourApp.exe. That's all; your application will launch on system startup, as the sketch below illustrates.
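A hypothetical sketch of those two steps from a PowerShell prompt on the development machine (the app folder and file name are placeholders; the admin share is the one mentioned above):

# Copy the app to the board over the admin share
New-Item -ItemType Directory -Path \\mygalileo\c$\MyApp -Force | Out-Null
Copy-Item .\MyGalileoApp.exe \\mygalileo\c$\MyApp\ -Force

# Append the start command to autorun.cmd
Add-Content \\mygalileo\c$\Windows\System32\Boot\autorun.cmd "start C:\MyApp\MyGalileoApp.exe"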

That's it for today. Next time I am planning to show more advanced projects.

New PaaS blog

MSDN Blogs - Fri, 11/28/2014 - 03:08

Hello world,

this is a blog that I keep to document and organize my adventures in Microsoft Azure and its many services.

I am an avid supporter of a PaaS-oriented view of the world, so most of my ramblings center around PaaS-related issues and software development in the Azure environment.

So please excuse me for being a little enthusiastic from time to time.

regards,

petri.

 

 

Deploying a private Docker registry on Azure

MSDN Blogs - Fri, 11/28/2014 - 03:00
This post is a translation of "Deploying Your Own Private Docker Registry on Azure", published on November 11. Microsoft is moving at full speed on Docker support, including the ability to manage your own Docker registry on Microsoft Azure. In June, we added support for Docker containers on Linux VMs in Microsoft Azure, making it easy to deploy Dockerized applications to the cloud. About two weeks ago, we announced Windows Server containers and support for Docker's open orchestration APIs on Azure. And at the recent TechEd Europe, Azure CTO Mark Russinovich gave a live demo of the Docker Client for Windows, which will soon be released as open source. Containers...(read more)

Developing Windows Store/Windows Phone apps with the Windows Azure Media Service REST API, Part 3: Uploading media to WAMS

MSDN Blogs - Fri, 11/28/2014 - 02:52

In Part 1 of this series (the introduction), I covered the basics of developing applications against Windows Azure Media Service (WAMS) and the overall workflow involved. In Part 2 (initial setup and connecting to WAMS), I showed how to connect to WAMS using the REST API. Building on those articles, this post explains in detail how to upload media to the media service.

Uploading a media file to the media service with the REST API involves several steps. The main flow is:

  1. Create an asset
  2. Encrypt the asset (optional; not covered in this article)
  3. Upload the media file to blob storage

Note that when accessing the media service through the REST API, you must add the required headers below to every HTTP request. The root URI of WAMS is https://media.windows.net/; after successfully connecting to that URI, you will receive a "301 redirect" response from which you extract a new media service URI, and all subsequent calls must be made against that new URI (see Part 2 for details).

According to the documentation on setting up for Media Services REST API development, every call to WAMS must include the following required headers:

Header                  Type      Value
------                  ----      -----
Authorization           Bearer    Bearer is the only accepted authorization mechanism. The value must also include the access token provided by ACS.
x-ms-version            Decimal   2.7
DataServiceVersion      Decimal   3.0
MaxDataServiceVersion   Decimal   3.0

 

As described in Part 1, an asset is a logical entity containing media information; it may contain one or more digital files to be processed, such as audio or video. The asset entity is an abstraction over the asset and carries a series of properties such as Id and State. For convenience, we define an Asset class to represent an asset in the media service.

Creating an asset: create a new asset in the media service

EndPoint:              https://media.windows.net/Assets (or <redirected URI>/Assets)

HTTP Method:           POST

Request Headers:       DataServiceVersion: 3.0
                       MaxDataServiceVersion: 3.0
                       x-ms-version: 2.7
                       Authorization: Bearer <ACS token>

Request Content Type:  application/json;odata=verbose

Request Body Format:   {
                           "Name": "<Asset Name>"   (usually the media file name; encryption is omitted in this article)
                       }

                       e.g.
                       {
                           "Name": "azure"
                       }

 

An asset is usually created from the media file name. The following code creates the corresponding asset entity in the media service based on the file name; once it succeeds, you can extract the Id value, e.g. nb:cid:UUID:636363d9-7c66-42ca-add4-0a8c4d4464f6.

(For ease of description, this article assumes we are uploading a file named azure.wmv from the media library, building on the Part 2 code that connects to the media service.)
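Before the full C# listing, here is a compact sketch of the same POST in PowerShell (my illustration, not part of the original series; $acsToken and $wamsEndpoint are assumed to come from the Part 2 connection step):

$headers = @{
    "Authorization"         = "Bearer $acsToken"
    "x-ms-version"          = "2.7"
    "DataServiceVersion"    = "3.0"
    "MaxDataServiceVersion" = "3.0"
}
$body = '{ "Name": "azure" }'
$asset = Invoke-RestMethod -Uri ($wamsEndpoint + "Assets") -Method Post `
    -Headers $headers -Body $body -ContentType "application/json;odata=verbose"
$asset.d.Id   # e.g. nb:cid:UUID:...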

private string AssetId;
private string AccessPolicyId;
private string UploadUrl;
private string UploadEndpoint;
private static readonly string mediaFileName = "azure.wmv";

private async void ConnectMediaService(string acsToken)
{
    AuthenticationHeaderValue header = CreateBasicAuthenticationHeader(acsToken);

    // Disable auto-redirect so we can capture the redirected Media Services URI ourselves.
    var handler = new HttpClientHandler()
    {
        AllowAutoRedirect = false
    };
    HttpClient httpClient = new HttpClient(handler);
    httpClient.MaxResponseContentBufferSize = int.MaxValue;

    httpClient.DefaultRequestHeaders.Add("Authorization", header.ToString());
    httpClient.DefaultRequestHeaders.Add("x-ms-version", "2.7");
    httpClient.DefaultRequestHeaders.Add("DataServiceVersion", "3.0");
    httpClient.DefaultRequestHeaders.Add("MaxDataServiceVersion", "3.0");

    HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Get, WAMSEndpoint);
    request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

    var response = await httpClient.SendAsync(request);

    // After connecting to https://media.windows.net, you will receive a 301 redirect
    // specifying another Media Services URI. You must make subsequent calls to the new URI.
    if (response.StatusCode == HttpStatusCode.Moved || response.StatusCode == HttpStatusCode.MovedPermanently)
    {
        // The redirect page body contains the new endpoint as an <a href="..."> link;
        // parse it out with HtmlAgilityPack.
        var htmlcontent = await response.Content.ReadAsStringAsync();
        HtmlDocument htmlDocument = new HtmlDocument();
        htmlDocument.LoadHtml(htmlcontent);
        var inputs_a = htmlDocument.DocumentNode.Descendants("a");

        string newLocation = null;
        foreach (var input in inputs_a)
        {
            newLocation = input.Attributes["href"].Value;
        }

        if (newLocation != null)
        {
            WAMSEndpoint = newLocation;
        }

        string mediaName = Path.GetFileNameWithoutExtension(mediaFileName);
        CreateAsset(acsToken, WAMSEndpoint, mediaName);
    }
}
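As an aside, scraping the HTML body works because the 301 page contains the new endpoint as a link, but the redirect target is normally also present in the response's Location header; a simpler variant, sketched under that assumption:

    // Alternative sketch: take the redirected Media Services URI from the
    // Location header instead of parsing the HTML body (assumes the 301
    // response carries a Location header).
    if (response.StatusCode == HttpStatusCode.Moved || response.StatusCode == HttpStatusCode.MovedPermanently)
    {
        Uri location = response.Headers.Location;
        if (location != null)
        {
            WAMSEndpoint = location.AbsoluteUri;
        }
    }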

 

 

private async void CreateAsset(string acsToken, string wamsEndPoint, string mediaName)
{
    AuthenticationHeaderValue header = CreateBasicAuthenticationHeader(acsToken);

    HttpClient httpClient = new HttpClient();
    httpClient.MaxResponseContentBufferSize = int.MaxValue;

    httpClient.DefaultRequestHeaders.Add("Authorization", header.ToString());
    httpClient.DefaultRequestHeaders.Add("x-ms-version", "2.7");
    httpClient.DefaultRequestHeaders.Add("DataServiceVersion", "3.0");
    httpClient.DefaultRequestHeaders.Add("MaxDataServiceVersion", "3.0");

    // Build the JSON payload directly; without an Accept header the service
    // answers in Atom/XML, which is why the response is parsed with XmlDocument below.
    string requestBody = "{ \"Name\" : \"" + mediaName + "\" }";
    HttpContent body = new StringContent(requestBody, Encoding.UTF8, "application/json");

    var response = await httpClient.PostAsync(wamsEndPoint + "Assets", body);

    if (response.IsSuccessStatusCode)
    {
        var content = await response.Content.ReadAsStringAsync();
        XmlDocument xmlDoc = new XmlDocument();
        xmlDoc.LoadXml(content);

        // Extract the new asset's Id (e.g. nb:cid:UUID:...) from the response.
        XmlNodeList elemList = xmlDoc.GetElementsByTagName("d:Id");
        foreach (var ele in elemList)
        {
            AssetId = ele.InnerText;
        }

        CreateAccessPolicy(acsToken, WAMSEndpoint);    // create AccessPolicy
    }
    else
    {
        // other ops
    }
}

 

 

As described in Part 1, every WAMS account has one or more associated Azure Storage accounts, which store the media content controlled by that WAMS account. Uploading a media file to Blob Storage requires setting the appropriate write permission on the asset, then retrieving the Storage URL used for the upload, and finally transferring the file with the Azure Storage REST APIs.

Before uploading any file to blob storage, you must set an access policy permission granting write access to the asset. To do so, send an HTTP POST request to the AccessPolicy entity set:

Create an AccessPolicy:

EndPoint:              https://media.windows.net/AccessPolicies (or <redirected URI>/AccessPolicies)
HTTP Method:           POST
Request Headers:
    DataServiceVersion: 3.0
    MaxDataServiceVersion: 3.0
    x-ms-version: 2.7
    Authorization: Bearer <ACSToken>
Request Content Type:  application/json;odata=verbose
Request Body Format:
    {
        "Name" : "NewUploadPolicy",
        "DurationInMinutes" : "300",
        "Permissions" : 2
    }
    (Permissions = 2 grants Write access.)

 

The code is as follows; on success, the AccessPolicyId is extracted from the response, e.g. "nb:pid:UUID:58fcf51e-5219-4fea-8536-6c295d2c388a".

 

private async void CreateAccessPolicy(string acsToken, string wamsEndPoint)
{
    AuthenticationHeaderValue header = CreateBasicAuthenticationHeader(acsToken);

    HttpClient httpClient = new HttpClient();
    httpClient.MaxResponseContentBufferSize = int.MaxValue;

    httpClient.DefaultRequestHeaders.Add("Authorization", header.ToString());
    httpClient.DefaultRequestHeaders.Add("x-ms-version", "2.7");
    httpClient.DefaultRequestHeaders.Add("DataServiceVersion", "3.0");
    httpClient.DefaultRequestHeaders.Add("MaxDataServiceVersion", "3.0");

    // A write policy valid for 300 minutes (Permissions = 2 means Write).
    String requestBody = "{ \"Name\" : \"" + "NewUploadPolicy" + "\"," +
                           " \"DurationInMinutes\" : \"" + "300" + "\", " +
                           " \"Permissions\" : " + "2" + "}";

    HttpContent body = new StringContent(requestBody, Encoding.UTF8, "application/json");

    var response = await httpClient.PostAsync(wamsEndPoint + "AccessPolicies", body);

    if (response.IsSuccessStatusCode)
    {
        var content = await response.Content.ReadAsStringAsync();
        XmlDocument xmlDoc = new XmlDocument();
        xmlDoc.LoadXml(content);

        // Extract the new policy's Id (e.g. nb:pid:UUID:...) from the response.
        XmlNodeList elemList = xmlDoc.GetElementsByTagName("d:Id");
        foreach (var ele in elemList)
        {
            AccessPolicyId = ele.InnerText;
        }

        CreateUploadURL(acsToken, WAMSEndpoint);
    }
    else
    {
        // return null;
    }
}

 

After creating the AccessPolicy, link its Id to a Locator entity; the Locator provides the URL path used to upload the file to blob storage.

Create the upload URL:

EndPoint:              https://media.windows.net/Locators (or <redirected URI>/Locators)
HTTP Method:           POST
Request Headers:
    DataServiceVersion: 3.0
    MaxDataServiceVersion: 3.0
    x-ms-version: 2.7
    Authorization: Bearer <ACSToken>
Request Content Type:  application/json;odata=verbose
Request Body Format:
    {
        "AccessPolicyId" : "<AccessPolicyId>",
        "AssetId" : "<AssetId>",
        "StartTime" : "<DateTime.UtcNow.AddMinutes(-5)>",   (must be in YYYY-MM-DDTHH:mm:ss format; set 5 minutes in the past to allow for clock skew)
        "Type" : 1
    }
    (Type = 1 creates a SAS locator.)

    e.g.
    {
        "AccessPolicyId" : "nb:pid:UUID:58fcf51e-5219-4fea-8536-6c295d2c388a",
        "AssetId" : "nb:cid:UUID:636363d9-7c66-42ca-add4-0a8c4d4464f6",
        "StartTime" : "2014-11-28T10:39:58",
        "Type" : 1
    }

The code is as follows. On success, the upload URL is constructed from the response; note that the name of the file being uploaded must be inserted into the URL that is returned, e.g.: https://myappsstorage.blob.core.windows.net/asset-636363d9-7c66-42ca-add4-0a8c4d4464f6/azure.wmv?sv=2012-02-12&sr=c&si=8561f64e-0392-47f6-a178-8397f6ee4352&sig=JjDaBdvxHqWsA%2FWrLMLkAqQjl92v03vwdOrqVIAjkV8%3D&st=2014-11-28T10%3A39%3A58Z&se=2014-11-28T15%3A39%3A58Z

 

private async void CreateUploadURL(string acsToken, string wamsEndPoint)
{
    AuthenticationHeaderValue header = CreateBasicAuthenticationHeader(acsToken);

    HttpClient httpClient = new HttpClient();
    httpClient.MaxResponseContentBufferSize = int.MaxValue;

    httpClient.DefaultRequestHeaders.Add("Authorization", header.ToString());
    httpClient.DefaultRequestHeaders.Add("x-ms-version", "2.7");
    httpClient.DefaultRequestHeaders.Add("DataServiceVersion", "3.0");
    httpClient.DefaultRequestHeaders.Add("MaxDataServiceVersion", "3.0");

    // StartTime is set 5 minutes in the past to allow for clock skew; Type 1 = SAS locator.
    String requestBody = "{ \"AccessPolicyId\" : \"" + AccessPolicyId + "\"," +
                   " \"AssetId\" : \"" + AssetId + "\", " +
                   " \"StartTime\" : \"" + DateTime.UtcNow.AddMinutes(-5).ToString("yyyy'-'MM'-'dd'T'HH':'mm':'ss") + "\", " +
                   " \"Type\" : " + "1" + "}";

    HttpContent body = new StringContent(requestBody, Encoding.UTF8, "application/json");
    var response = await httpClient.PostAsync(wamsEndPoint + "Locators", body);

    if (response.IsSuccessStatusCode)
    {
        var content = await response.Content.ReadAsStringAsync();
        XmlDocument xmlDoc = new XmlDocument();
        xmlDoc.LoadXml(content);

        // BaseUri is the asset container URL; ContentAccessComponent is the SAS query string.
        XmlNodeList elemListBase = xmlDoc.GetElementsByTagName("d:BaseUri");
        XmlNodeList elemListSV = xmlDoc.GetElementsByTagName("d:ContentAccessComponent");
        string baseUrl = null;
        string sv = null;
        foreach (var ele in elemListBase)
        {
            baseUrl = ele.InnerText;
        }
        foreach (var ele in elemListSV)
        {
            sv = ele.InnerText;
        }

        // Insert the file name between the container URL and the SAS query string.
        UploadUrl = baseUrl + "/" + Path.GetFileName(mediaFileName) + sv;
        UploadEndpoint = baseUrl.Substring(0, baseUrl.IndexOf("/asset"));

        var isUploaded = await PerformFileUpload(acsToken, UploadEndpoint, UploadUrl);
    }
    else
    {
        // return null;
    }
}

 

With the AccessPolicy and the upload URL in place, you can upload the actual file to the Azure blob storage container using the Azure Storage REST API.

Upload the file:

EndPoint:              the upload URL (blob storage SAS locator)
HTTP Method:           PUT
Request Headers:
    x-ms-version: 2011-08-18
    x-ms-date: 2011-01-17
    x-ms-blob-type: BlockBlob
Request Content Type:  application/octet-stream
Request Body Format:
    <binary data of the media file>

 

The code is as follows. Note how the binary file content is supplied: this example simply uses a media file from the local Videos library, but you can adapt it, for example to let the user pick the file to upload (see the picker sketch after this method) or to use video captured with the device camera. On success, the service returns 201 Created.

private async Task<bool> PerformFileUpload(string acsToken, string uploadEndPoint, string uploadUrl)
{
    HttpClient httpClient = new HttpClient();
    httpClient.MaxResponseContentBufferSize = int.MaxValue;

    // Note: the sample hardcodes x-ms-date; with a SAS URL, authentication is
    // carried by the URL's query string rather than these headers.
    httpClient.DefaultRequestHeaders.Add("x-ms-version", "2011-08-18");
    httpClient.DefaultRequestHeaders.Add("x-ms-date", "2011-01-17");
    httpClient.DefaultRequestHeaders.Add("x-ms-blob-type", "BlockBlob");

    // Read the media file from the Videos library into a byte array.
    StorageFolder library = Windows.Storage.KnownFolders.VideosLibrary;
    var videoFile = await library.GetFileAsync(mediaFileName);

    byte[] fileBytes;
    using (IRandomAccessStream stream = await videoFile.OpenReadAsync())
    {
        using (DataReader reader = new DataReader(stream.GetInputStreamAt(0)))
        {
            await reader.LoadAsync((uint)stream.Size);
            fileBytes = new byte[stream.Size];
            reader.ReadBytes(fileBytes);
        }
    }

    // PUT the raw bytes as the request body; wrapping them in multipart form
    // content would corrupt the stored blob with multipart boundaries.
    var fileContent = new ByteArrayContent(fileBytes);
    fileContent.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

    var response = await httpClient.PutAsync(uploadUrl, fileContent);

    return response.IsSuccessStatusCode;
}
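As noted above, the sample hardcodes a file from the Videos library; to let the user choose the file, a FileOpenPicker could replace the GetFileAsync call. A minimal sketch (the PickVideoFileAsync helper is hypothetical, not part of the original sample):

    // Hypothetical helper: let the user pick the video to upload instead of
    // reading a fixed file from the Videos library.
    private async Task<StorageFile> PickVideoFileAsync()
    {
        var picker = new Windows.Storage.Pickers.FileOpenPicker();
        picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.VideosLibrary;
        picker.FileTypeFilter.Add(".wmv");
        picker.FileTypeFilter.Add(".mp4");
        // PickSingleFileAsync returns null if the user cancels.
        return await picker.PickSingleFileAsync();
    }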

 

With the file uploaded, you can move on to subsequent media operations such as encoding, which will be covered in detail in upcoming articles. Stay tuned.
