MSDN Blogs

from ideas to solutions

Video: Identity management in the cloud, with Infinite Square - Part 2

Tue, 05/13/2014 - 02:27

Each week, discover a new video on the blog produced by our partner Infinite Square. Service by service, they will walk you through the full set of Microsoft Azure features.

 

This week we focus on Azure Active Directory, which provides identity management in the cloud, with more advanced scenarios than Microsoft Access Control Service, particularly in terms of integration with the internal information system.

Azure Active Directory makes it possible, among other things, to build Single Sign-On (SSO) scenarios: the user does not need to type their password every time they use the application.

 

Watch the video to discover:

  • The use of Azure Active Directory in stand-alone mode: the Active Directory tenant exists only in Azure
  • The use of Azure Active Directory in a hybrid scenario: when you already have an on-premises Active Directory and want to synchronize it with Azure AD
  • And, finally, the difference between Azure Active Directory and a domain controller

You can watch the video.

Questions? A project? Contact Infinite Square: contact@infinitesquare.com

 

 

Azure SQLDB Basic, Standard and Premium Tiers - Quick glimpse and first impressions - Russian version

Tue, 05/13/2014 - 02:25

In the previous article we looked at the general (and in places advanced) considerations that can become decisive when choosing between running SQL Server as PaaS and hosting it on a virtual machine. This article is a translation of the post by my colleague abroad, Igor Pagliai, "Azure SQLDB Basic, Standard and Premium Tiers - Quick glimpse and first impressions". In it we dig into what the new Microsoft Azure SQL Database service tiers really are.

To summarize the previous article:

Choose SQL Server on a VM if:

You need full compatibility with the functionality of on-premises SQL Server

Making serious changes to the project is not feasible

You need flexible control over the underlying infrastructure

Choose Microsoft Azure SQL Database if:

You are building a completely new application

You need to reduce the risks and costs associated with deploying and then managing the underlying infrastructure

As a consequence of the previous point, you want to concentrate fully on the project itself

You are satisfied with a level of control that does not include the underlying infrastructure

Now, on to the article.

Azure SQL Database - the new service tiers

http://blogs.msdn.com/b/windowsazure/archive/2014/04/24/azure-sql-database-introduces-new-service-tiers.aspx

After reading the article at the link above, two things immediately stand out - predictable performance and improved high availability plus disaster recovery. In my personal "most wanted" list, built from my experience working with ISVs, the announced functionality addresses the very top priorities. But what, for the end user, has changed compared to the traditional Web/Business editions? Three aspects need to be considered here - cost, performance, and HA&DR. Let's start with the table.

Note: if you used Premium before and are surprised by the absence of the P4 level, don't worry - P4 has been renamed to P3.

Cost

With the announcement of the Basic, Standard and Premium tiers, Azure SQL DB billing has conceptually shifted from a pay-per-size model to a model whose components are the various levels of guaranteed performance and HA&DR. An example: with Web/Business, having 100 databases of 1 GB each was comparable in cost to having one large 100 GB database. In the new tiers the situation is different - you pay for the capabilities of the database, which include, but are not limited to, the maximum database size. Over the past two years I have seen many partners and customers use multi-tenancy and sharding on a 1 DB <=> 1 tenant basis; now a new conceptual approach is needed - once you have decided to use a particular tier, for example Standard, to optimize cost you should consolidate as much data (tenants, users) as possible in order to approach the maximum database size of 250 GB (if, of course, the solution allows it). All of this is reflected on the official Azure SQL DB pricing page:

http://azure.microsoft.com/en-us/pricing/details/sql-database/#basic-standard-and-premium

After studying that page I noticed an interesting billing detail that is specific not to Azure SQL DB but to Azure as a whole:

Why this region selector? It turns out that Azure prices can now vary depending on the region where your resources are located. This raises several interesting considerations. Let's move on to the cost considerations. If you look closely at the Premium pricing, the price may surprise you. If you think it is too high, especially compared to SQL Server on a virtual machine, weigh the points below and think again:

  • Every Azure SQL DB database is replicated three times to provide high availability and fault tolerance. The replicas live in the same datacenter, but you pay for only one.
  • To host SQL Server on virtual machines in an HA configuration, you need to create at least 3 virtual machines for an AlwaysOn availability group, plus two virtual machines for Active Directory.
  • Every Azure SQL DB database is automatically backed up with up to 35 days of retention (depending on the tier), with the backup storage free of charge, whereas in IaaS you have to provide all of this yourself.
  • With Azure SQL DB there is no management overhead to pay for - Azure takes care of practically everything.
  • At the moment it is not possible to create an AlwaysOn availability group whose resources span multiple datacenters.
  • A ready-to-use infrastructure with Azure SQL DB can be deployed in a few minutes, whereas manually configuring AlwaysOn takes hours and requires specialized knowledge.

For the Standard tier I ran a comparison and found it somewhat cheaper, which is good - for less money I get more features, including new ones. Keep in mind, however, that the new tiers are currently billed at Preview rates.

Performance

The first question that may come up, especially if you already have a database in the Web/Business edition, is: how much faster are the new tiers? What matters here is not the raw compute power of the new tiers, but the fact that you now have a guarantee of a certain level of resources and performance that did not exist before. I have talked with several partners and found a common misunderstanding, so let's sort this question out, because I consider it important. First, let's look at sessions and worker threads, described at the link below.

Azure SQL Database Resource Governance

http://msdn.microsoft.com/en-us/library/azure/dn338078.aspx

As you can see in the article, in the Web/Business editions the maximum number of concurrent requests is 180; exceed it and you get error 10928. The key point is that 180 is a maximum - if the system is too busy, the value you actually get can, in practice, be ZERO. In other words, there is no guarantee of resources, and this is where the new tiers behave differently - you can now consume the amount of resources your tier guarantees, that is, Minimum = Maximum. An important consequence of the new resource governance approach is that your project will have a more predictable performance level, with more consistent query execution and response times.

Performance levels are a new concept introduced in Azure SQL DB to solve users' problem #1 - the ability to have a predictable level of performance, thereby minimizing the effects of working in a multi-tenant environment where you have to share resources with neighbors. Below is a table describing what these tiers offer us (http://msdn.microsoft.com/library/azure/dn741336.aspx):

One very interesting thing is missing from the table - figures for memory, CPU and IOPS. You might think they were simply forgotten, but there is another explanation, not backed by any official statement, that I find interesting. Even if those figures existed, when creating a new Azure SQL DB database in the Premium tier we would see that the level is expressed in terms of DTU - and nothing else.

The link below explains that DTU stands for "Database Throughput Units", and in my view this is a good way to abstract database performance away from the underlying Azure SQL DB infrastructure. Even if that underlying infrastructure changes in the future, the performance level allocated to you will stay the same, expressed in a logical unit that measures what actually matters to us - transactions - rather than how much CPU/RAM our server has.

Azure SQL Database Benchmark Overview

http://msdn.microsoft.com/en-us/library/azure/dn741327.aspx
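As a hands-on aside: once a database runs on one of the new tiers, you can get a feel for how close it runs to the tier's limits by querying the sys.dm_db_resource_stats view that shipped alongside the new tiers. A minimal sketch, assuming the Invoke-Sqlcmd cmdlet from the SQL Server PowerShell tools and placeholder server, database and credential names:

# Placeholder server/database/credentials - replace with your own.
$server   = "myserver.database.windows.net"
$database = "mydb"

# The DMV keeps a short history of recent resource usage; selecting
# everything and inspecting the columns avoids guessing exact names.
Invoke-Sqlcmd -ServerInstance $server -Database $database `
    -Username "sqladmin" -Password "<password>" `
    -Query "SELECT TOP 5 * FROM sys.dm_db_resource_stats ORDER BY end_time DESC"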

As a final point about performance, here is how the Performance SLA is expressed in the new tiers:

  • Basic: designed for applications with light transactional demands. Performance is expressed as a predictable hourly transaction rate.
  • Standard: designed for business applications in the cloud. A middle tier of performance and business functionality. Expressed as a predictable per-minute transaction rate.
  • Premium: designed for mission-critical databases. Premium provides the highest level of guaranteed performance and access to advanced features. Expressed as a per-second transaction rate.

With Premium you can have a database of up to 500 GB.

HA&DR

Let me first introduce the new 99.95% SLA for all the new tiers. This increase, small at first glance, is very important for matching the HA SLA offered by competitors and for aligning the SLA with other Azure services. Personally, I hope that Azure Storage will follow the same road in the future and raise its SLA, which currently stands at 99.90%. A very important feature of the new tiers is self-service database restore, which can be performed from the portal, from PowerShell, or via the API.

Azure SQL Database Backup and Restore

http://msdn.microsoft.com/en-us/library/azure/jj650016.aspx

Note two important things. First, the tiers have different retention periods. Second, only Standard/Premium provide point-in-time restore; Basic has no incremental backups, so you can only restore to the most recent full copy. I also noticed something interesting - whenever you test something new, you start spotting interesting features:

With this feature you can restore deleted tables free of charge - a kind of Recycle Bin. I will describe this feature in more detail later, but for now note that the point you can restore to depends on the tier.
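As an illustrative sketch, the self-service restore could be driven from the Azure PowerShell module of that period; this assumes the Start-AzureSqlDatabaseRestore cmdlet and hypothetical server and database names:

# Hypothetical server/database names - replace with your own.
# Restores a new copy of the database as of the given point in time
# (point-in-time restore applies to Standard/Premium; Basic can only
# go back to its most recent full backup).
Start-AzureSqlDatabaseRestore -SourceServerName "myserver" `
    -SourceDatabaseName "payrollDB" `
    -TargetDatabaseName "payrollDB-restored" `
    -PointInTime "2014-05-12T10:00:00"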

The last new, and very important, HA&DR feature is active geo-replication.

Active Geo-Replication for Azure SQL Database

http://msdn.microsoft.com/en-us/library/azure/dn741339.aspx

You can now asynchronously replicate the transactions of one database to another, on a different server or even in a different datacenter. The nice part is the guaranteed RPO < 5 minutes, but remember that this feature is available only in Premium. It is very useful not only for disaster recovery but also for offloading read-only operations, since you can have up to 4 secondary replicas. Also keep in mind that the secondary replicas are billed. Automatic failover is not available, since this feature provides DR, not HA.
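A hedged sketch of how such a continuous copy relationship could be started with the Azure PowerShell cmdlets of that period (cmdlet and parameter names assumed, hypothetical server names):

# Hypothetical source/target names - replace with your own.
# -ContinuousCopy keeps the secondary in sync (active geo-replication,
# Premium only), as opposed to a one-off database copy.
Start-AzureSqlDatabaseCopy -ServerName "sourceserver" `
    -DatabaseName "payrollDB" `
    -PartnerServer "targetserver" `
    -ContinuousCopy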

Summary

Over my time with the new tiers I have collected several interesting observations:

  • I am very glad that customers with a paid Azure support plan or Premier Support can now get support for the Azure SQL DB tiers while they are in preview - Basic, Standard and Premium. It is nice to see the policy change: customers can adopt updated services right away with proper support, although you should understand that there is no SLA guarantee until general availability.
  • Elasticity is a cornerstone of any cloud service, and in Azure SQL DB you can move from one tier to another, but be careful - the time this operation takes can affect performance if the database is in production. Moreover, a tier change may require moving an internal copy of your database from one server to another, and there is no SLA on how long that may take.
  • During my tests I immediately noticed a limit on the number of tier changes per day - the tier can be changed up to 4 times a day, with Web/Business changes not counted.
  • Billing is assessed once a day per database, based on the highest tier used that day - meaning that if you switched from Basic to Premium P1 during the day, you will be billed at the Premium P1 rate, not for both tiers.
  • By default there is a limit of two databases per Azure SQL DB server, but, as you can see below, you can raise a support request to increase this limit.

Finally, a few words about the future of Web/Business - support for these editions will be discontinued one year after April 24, 2014. More details at the link below.

Web and Business Edition Sunset FAQ

http://msdn.microsoft.com/library/azure/dn741330.aspx

That link also contains important information about the retirement of the Federations feature:

The current implementation of Federations will be retired with Web and Business edition

Why does this matter? Everyone - both inside and outside Microsoft - had already realized this was coming, but there was no official announcement; now there is. The important question now is what customers and partners should replace Federations with. It appears there are no options yet, and unfortunately that is true. I am sure, however, that Microsoft will come up with something new before support ends!

Igor Pagliai,

The author's Twitter: @igorpag.

Download the Windows Driver Kit (WDK) 8.1

Tue, 05/13/2014 - 01:30
Windows Driver Kit (WDK) 8.1

“The Windows Driver Kit (WDK) 8.1 is a collection of tools you can use to build, test, debug, and deploy drivers for Windows 8.1. “

And you can download it at Download the Windows Driver Kit (WDK) 8.1.

Rob


Service Management Automation: Migrating Simple (single .net activity) Orchestrator runbook to SMA

Tue, 05/13/2014 - 01:12

 

Hello Readers,

Service Management Automation is an IT process automation solution for Windows Azure Pack for Windows Server. It enables you to automate the creation, monitoring, and deployment of resources in your Windows Azure Pack environment.

This blog is meant for IT teams who are trying to migrate their traditional data center Orchestrator runbooks to SMA. I am targeting simple runbooks here which contain a single .NET Script activity (though the PowerShell script inside the .NET Script activity can be complex :) ), something like the sample runbook shown in Step 1 below.

Before jumping into the code details of the Orchestrator and SMA runbooks, please make sure you have working Orchestrator and SMA environments. Below are the main requirements for this scenario:

1. Prepare SMA environment

You can install SMA using this article on TechNet. There is also another very simple article on the internet that gives a step-by-step process for installing SMA.

2. System Center Orchestrator 2012 RTM or above

Follow this link to prepare Orchestrator environment.

3. Windows Azure Pack

Follow this link for more details of WAP.

4. System Center Virtual Machine Manager 2012 RTM or above.

Follow this link to prepare the SCVMM environment. We need VMM because we are using a sample Orchestrator runbook that changes the memory or CPU count of virtual machines using VMM.

 

For SMA to Orchestrator integration, i.e. invoking an SMA runbook from Orchestrator and an Orchestrator runbook from SMA, please refer to this article written by Chris Sanders.

We will achieve the following in this blog:

1. Creating a sample Orchestrator Runbook. (ChangeVM)

2. Step by step process of converting Orchestrator Runbook to SMA (Invoke-ChangeVM)

You can also use your already-created runbooks for step 1, and you can jump directly to Step 2 to see the conversion steps. :)

These are the requirements for this scenario:

- The VMM server and SMA server should be in the same domain.

- The VMM server must be reachable from the SMA server (connectivity). Follow the steps below to test this (a sketch of both checks follows the requirements list):

· Ping the VMM server and confirm it responds.

· Create a PSSession to the VMM server and confirm it works.

- The ExecutionPolicy should be set to ‘RemoteSigned’ on both the SMA and VMM servers.

- The VM needs to be in either Power Off or Stored State mode before SCVMM can perform the specified hardware changes.

- The memory value given to the ChangeVM runbook should be in increments of 2 MB. Odd values are not accepted when assigning memory.
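A minimal sketch of both connectivity checks, assuming a hypothetical VMM server name (vmmserver.contoso.com):

# Hypothetical VMM server name - replace with your own.
$vmmServer = "vmmserver.contoso.com"

# 1. Ping the VMM server and confirm it responds.
Test-Connection -ComputerName $vmmServer -Count 2

# 2. Create a PSSession to the VMM server, run a trivial command, and clean up.
$session = New-PSSession -ComputerName $vmmServer
Invoke-Command -Session $session -ScriptBlock { hostname }
Remove-PSSession -Session $session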

Step 1: Creating a Sample Orchestrator Runbook (ChangeVM)

You can follow this free ebook written by Andreas Rynes on designing Orchestrator runbooks. This ebook provides a framework for designing runbooks and IT process automation to help the IT pro get the most out of their System Center 2012 Orchestrator implementation, and to utilize Orchestrator in a very modular way, focusing on small, focused pieces of automation.

Let’s focus on the runbook we are using as an example in this blog:

The ‘Change VM’ Orchestrator runbook is responsible for changing the configuration (memory) of an existing VM. The runbook accepts the parameters below:

1. $VMAction – Action to be performed on existing VM e.g. IncreaseMemory, DecreaseMemory.

2. $VMValue – Value of memory to be increased or decreased; VMValue should be in increments of 2 MB.

3. $VMMServerName – Name of the SCVMM Server.

4. $VMName – Name of the VM that needs to be configured.

This runbook creates a remote session to the SCVMM server and uses the Set-VM cmdlet to set the memory. The values of VMName, VMMServerName, VMAction and VMValue are read from the ‘Initialize Data’ activity.

The runbook script is pasted below:

# Get Input parameters
$VMName="\`d.T.~Ed/{3128930E-D53C-4129-BBA2-16EBC4A724D4}.{A389F8BA-E2E4-465E-9470-00F85094E07C}\`d.T.~Ed/"
$VMMServerName="\`d.T.~Ed/{3128930E-D53C-4129-BBA2-16EBC4A724D4}.{D7D3C839-97AE-4940-88C7-09FE4C3212BB}\`d.T.~Ed/"
$VMAction="\`d.T.~Ed/{3128930E-D53C-4129-BBA2-16EBC4A724D4}.{CCEF9043-CE8B-4D4E-9276-85CD768C77BE}\`d.T.~Ed/"
$VMValue="\`d.T.~Ed/{3128930E-D53C-4129-BBA2-16EBC4A724D4}.{05AF33B3-CE05-444A-A09D-0E2D2B7B608A}\`d.T.~Ed/"

try
{
    Write-Output 'Create Session to the Remote Computer'

    $Session = New-PSSession -ComputerName $VMMServerName
    $Parameters = @{
        VMMServerName = $VMMServerName
        VMName        = $VMName
        VMAction      = $VMAction
        VMValue       = $VMValue
    }
    $ReturnArray = Invoke-Command -Session $Session -ArgumentList $Parameters -ScriptBlock {
        Param ( $Parameters )

        try
        {
            Write-Output "Beginning action : Change VM `r`n"

            Import-Module VirtualMachineManager | Out-Null

            $VMMServerName = $Parameters.VMMServerName
            $VMName        = $Parameters.VMName
            $VMAction      = $Parameters.VMAction
            $VMValue       = $Parameters.VMValue
            Get-VMMServer -ComputerName $VMMServerName | Out-Null
            $vm = Get-SCVirtualMachine -Name $VMName
            if ($vm -eq $null)
            {
                Throw "Error: VM Name is invalid"
            }
            [int]$memoryMB = $vm.Memory
            $Trace += "Current Memory : '$memoryMB' `r`n"

            if ($VMAction -eq "IncreaseMemory")
            {
                [int]$newMemoryMB = $memoryMB + $VMValue
            }
            elseif ($VMAction -eq "DecreaseMemory")
            {
                [int]$newMemoryMB = $memoryMB - $VMValue
            }
            Set-VM $vm -MemoryMB $newMemoryMB | Out-Null
        }
        catch
        {
            Write-Output "Exception caught in remote Action"
            Throw $Error[0].Exception.ToString()
        }
    }
    Remove-PSSession -Session $Session
}
catch
{
    Write-Output "Error changing VM"
}

Step 2: Converting the Orchestrator Runbook into an SMA Workflow (Invoke-ChangeVM)

This section details the step-by-step process of converting the Orchestrator runbook ‘ChangeVM’ into an SMA workflow.

High Level Steps

Following are the generalized steps for converting any existing Orchestrator runbook into an SMA workflow:

1. Create an empty PowerShell Workflow container.

2. Add the existing PowerShell script inside Workflow container.

3. Convert the Initialize Data Parameters read by script, if any, to workflow parameters.

4. Add InlineScript activity inside the workflow after the parameters and move the code inside the InlineScript activity.

5. Reassign the parameters read in the workflow inside InlineScript using the ‘$Using:’ prefix, as the scope is different and the parameters are not directly available.

6. If you are creating a PSSession inside the script, make sure you create the session with the required credentials if the current SMA user does not have access to the remote machine.

7. Import the workflow on the SMA server.

8. Run the Workflow and verify the output.

Each step is shown in detail with the help of an example in the section below.

“Converting the ‘Change VM’ Orchestrator Runbook to an SMA Workflow”

This example will show the steps followed while converting the Component Runbook ‘Change VM’ to SMA Runbook ‘Invoke-ChangeVM’.

Steps followed

1. Create an empty PowerShell workflow container named ‘Invoke-ChangeVM’.

2. Add the Orchestrator .NET Script activity's PowerShell code inside the PowerShell workflow.

3. Convert the input parameters from the Initialize Data activity to SMA workflow parameters, as shown in lines 3-11 below. Two more parameters, ‘UserName’ and ‘Password’, are added in this workflow; they are used when creating the session to the VMM server.

4. Move all the code in the workflow, except the parameters, into an InlineScript activity, as shown in line 13 below.

The InlineScript activity runs commands in a shared Windows PowerShell session. You can include it in a workflow to run commands that share data and commands that are not otherwise valid in a workflow.

The InlineScript script block can include all valid Windows PowerShell commands and expressions. Because the commands and expressions in an InlineScript script block run in the same session, they share all state and data, including imported modules and the values of variables. More details for InlineScript can be found at http://technet.microsoft.com/en-us/library/jj649082.aspx

5. Reassign the parameter values inside InlineScript using the ‘$Using:’ keyword, as shown in lines 15-20 below. PowerShell workflow parameters are not available inside the InlineScript scope. More details on using variables in PowerShell workflows can be read at http://technet.microsoft.com/en-us/library/jj574187.aspx

6. Add credentials while creating the PSSession to the remote SCVMM server, as the user currently logged in on the SMA server might not have access to the remote VMM server; see lines 26-28 below. This step is optional in case the logged-in SMA user already has the required access on the SCVMM server.

7. The final SMA workflow script will look like this:

1: workflow Invoke-ChangeVM
2: {
3:     param
4:     (
5:         [String]$VMAction,
6:         [String]$VMValue,
7:         [String]$VMMServerName,
8:         [String]$VMName,
9:         [String]$UserName,
10:         [String]$Password
11:     )
12:
13:     InlineScript
14:     {
15:         $VMAction = $Using:VMAction
16:         $VMValue = $Using:VMValue
17:         $VMMServerName = $Using:VMMServerName
18:         $VMName = $Using:VMName
19:         $UserName = $Using:UserName
20:         $Password = $Using:Password
21:
22:         try
23:         {
24:             Write-Output 'Create Session to the Remote Computer'
25:
26:             $pass = ConvertTo-SecureString $Password -AsPlainText -Force
27:             $cred = New-Object System.Management.Automation.PSCredential ($UserName, $pass)
28:             $Session = New-PSSession -ComputerName $VMMServerName -Credential $cred
29:
30:             $Parameters = @{
31:                 VMMServerName = $VMMServerName
32:                 VMName = $VMName
33:                 VMAction = $VMAction
34:                 VMValue = $VMValue
35:             }
36:
37:             $ReturnArray = Invoke-Command -Session $Session -ArgumentList $Parameters -ScriptBlock {
38:                 Param ( $Parameters )
39:
40:                 try
41:                 {
42:                     Write-Output "Beginning remote action `r`n"
43:
44:                     Import-Module VirtualMachineManager | Out-Null
45:
46:                     $VMMServerName = $Parameters.VMMServerName
47:                     $VMName = $Parameters.VMName
48:                     $VMAction = $Parameters.VMAction
49:                     $VMValue = $Parameters.VMValue
50:
51:                     Get-VMMServer -ComputerName $VMMServerName | Out-Null
52:                     $vm = Get-SCVirtualMachine -Name $VMName
53:
54:                     if ($vm -eq $null)
55:                     {
56:                         Throw "Error: VM Name is invalid..`r`n"
57:                     }
58:
59:                     Write-Output 'Change current memory setting'
60:
61:                     [int]$memoryMB = $vm.Memory
62:
63:                     if ($VMAction -eq "IncreaseMemory")
64:                     {
65:                         [int]$newMemoryMB = $memoryMB + $VMValue
66:                     }
67:                     elseif ($VMAction -eq "DecreaseMemory")
68:                     {
69:                         [int]$newMemoryMB = $memoryMB - $VMValue
70:                     }
71:
72:                     Set-VM $vm -MemoryMB $newMemoryMB | Out-Null
73:                     # Getting the new updated memory
74:                     $vm = Get-SCVirtualMachine -Name $VMName
75:                     [int]$memoryMB = $vm.Memory
76:                     Write-Output "Updated Memory : '$memoryMB' `r`n"
77:
78:                     Write-Output "Successfully completed change VM...`r`n"
79:                 }
80:                 catch
81:                 {
82:                     Write-Output $Error[0].Exception.ToString()
83:                 }
84:             }
85:             Remove-PSSession -Session $Session
86:
87:         }
88:         catch
89:         {
90:             Write-Output $Error[0].Exception.ToString()
91:         }
92:     }
93: }

8. Import the runbook on the SMA server.

9. Run the SMA workflow and enter the parameter values.

10. The output of the SMA runbook can be seen in the Output pane.

Summary

I hope the above content is useful to you and gives you a first step toward converting traditional Orchestrator runbooks to SMA runbooks.

In my next blog, I will discuss how you can move complex runbooks to SMA.

Till then, happy automating! :)

FE Reimagined - New eBook

Tue, 05/13/2014 - 00:30

Our new eBook, FE Reimagined, makes the case for a radical rethink of how learning should be delivered within FE. It asks questions about leadership and the capacity of our sector to reap the rewards for learners that technology can potentially deliver.

The eBook has been produced in association with the Gazelle Group and can be viewed/downloaded in full below via our SlideShare channel.

FE Reimagined (eBook) from Microsoft Education UK

If you have any questions or feedback, we would love to hear from you via the comments below.


How we see different error messages from the web browser

Tue, 05/13/2014 - 00:02

We often see different error messages in the browser. There are four kinds of error pages, listed below. Here I will clarify which kind of error message is returned under each configuration.

1. Custom page

2. IIS error info

3. ASP.NET error info

4. Hide detailed error

The <httpErrors> element allows you to configure custom error messages for your web site or application at the IIS level. Custom error messages let you provide a friendly or more informative response by serving a file, returning another resource, or redirecting to a URL when visitors to your site cannot access the content they requested. Please refer to http://www.iis.net/configreference/system.webserver/httperrors

customErrors is the corresponding configuration at the ASP.NET level. Please refer to http://msdn.microsoft.com/en-us/library/h0hfz6fc(v=vs.100).aspx

customErrors controls only whether the detailed error message from ASP.NET is returned: when custom errors are enabled, we will not see the detailed ASP.NET error info.

httpErrors controls whether the custom error info or the detailed error info is returned, and its existingResponse setting controls whether the IIS error info or the ASP.NET error info is returned. The possible combinations are listed below for your reference.
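To make the two levels concrete, here is a minimal illustrative web.config sketch (the pages and status codes are just example values) showing where each element lives:

<configuration>
  <system.web>
    <!-- ASP.NET level: when enabled, detailed ASP.NET errors are hidden
         and the friendly page is shown instead -->
    <customErrors mode="On" defaultRedirect="GenericError.htm" />
  </system.web>
  <system.webServer>
    <!-- IIS level: serve a custom page for 404s; existingResponse decides
         whether a response already generated (e.g. by ASP.NET) is replaced -->
    <httpErrors errorMode="Custom" existingResponse="Replace">
      <remove statusCode="404" />
      <error statusCode="404" path="/404.html" responseMode="ExecuteURL" />
    </httpErrors>
  </system.webServer>
</configuration>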

Best Regards,

Charles Liang from GBSD DSI Team

In App Purchases Really Work for Developers and Users

Tue, 05/13/2014 - 00:00
In the Beginning

I started out developing for Windows Phone way back when Windows Phone 7 was the big thing.  Your options for monetization were pretty much advertising or charging for your app.  One great innovation that Microsoft provided to developers (and phone owners) was the concept of "try it before you buy it", and it did actually help out quite a bit.  With other platforms at that time (and some still today), if you, as the developer, wanted customers to be able to try your app before paying for it, you needed to create a special "free" version and publish it alongside the "Pro" or "Paid" version that costs money, which created lots of extra work and management.  With the advent of Windows Phone 7 you could publish a single app with a bit of code inside that let a downloader play with it in a feature- or time-limited fashion and then, when satisfied with its quality and usefulness, just tap a button and update the app in-place to the full paid version.  All the work they had previously done was preserved, because it actually was the same app with the light switch turned on.

This was all well and fine but as with most "innovations" it tended to be a good starting point.  Over the years of smart phones and app stores, many end users kind of got trained to look for "free" apps and, I think, had become averse to paying for apps.  Not all users, but certainly many stayed well clear of paid apps not realizing there was a try before you buy model.  I found this from personal experience. 

The coming of Windows Phone 8

Microsoft was listening.  In my opinion the single best part of Windows Phone 8, and there were lots of great bits, was the ability to do in-app purchases.  In-App purchases gave developers yet another way to monetize their apps, maybe the single best way if you ask many developers.  In-App purchases allowed a developer to release a clearly free app.  No Trial necessary.  However they could hold back some features like perhaps cloud storage or no advertising or coins in a game and allow the owner of the free app the option to get these neat features (for a small price) to make their favorite app even better.

Mock or Not

For developers there were only two things you needed to learn.  First, when developing, add the Mock In-App Purchase Library to your app.  This library, available from the Windows Phone Dev Center, lets you simulate the purchase of in-app purchase features.  This, in turn, lets the developer test whether their in-app purchase features are working without having to deploy to the Store first.  For example, if a user of my app purchases the removal of advertising for $0.99, I can run my app in the debugger to make sure that I have properly disabled the advertising feature once it is purchased.  Also, while you need to check "durable" licenses each time you connect, if there is no network connection you will need to have stored the license information locally so the features are available anyway.  Basically you have an #if DEBUG statement that switches between the mock API and the Store features depending on how you built the app.  Second, to implement in-app purchases for real you will need to become familiar with the "Windows.ApplicationModel.Store" library; it is within the CurrentApp class that you will find everything you need to purchase in-app products and then, later, verify the purchase.  Once you have it working in test mode you will need to become friends with the "beta" system in the Store: deploy your app as a beta, create the beta in-app purchase bits in the beta store, purchase them (for free) and then make sure everything is working properly.  Store betas are critical to good apps in the long run.

Durable or Consumable

Microsoft provided options for in-app purchases that let you be flexible.  Those purchases can be durable, like removing advertising, where once done it is done forever, or consumable, like buying coins in your favorite game to help you get past those nasty bad guys.  Even time-expiring in-app purchases are available, so you can provide a subscription model in your apps: each year, the phone owner pays some amount to keep using those fine features.  The possibilities are endless, and I think, depending on the app, that a mixture is a good thing.  With Windows Phone 8.1 Microsoft gave developers the ability to synchronize in-app purchases across devices and platforms.  So, if I wish, I can add the ability to purchase something in my phone app, and in the Windows 8.1 version of the same app that purchase would also be valid.  This opens up many other options, which I’ll reserve for a follow-up post.

The Math

In-app purchases make sense even when you think you might be cannibalizing your existing revenue streams.  For example, you have a free app that has been downloaded 10,000 times and you are making a bit of money from advertising, but really not enough yet, so you decide to add an in-app purchase option to remove advertising, as you have had a number of customer requests for this through the "email me" feature in your app (you have one of those, right?).  But you are concerned that if you offer a durable in-app purchase to remove advertising you will get some up-front revenue but the ongoing revenue from ads will dry up.  The thing is, it won't.  If you have 10,000 (and growing) users of your app, the majority don't really care about advertising or don't want to spend their own money.  If you managed to get just 10% to remove the ads, you have added an extra $1,000 or so to your revenue, but you still have 90% of your advertising revenue coming in, with more being added all the time.  This gives you the option to continue a revenue stream from those that don't want to pay or can't afford to pay, while offering a premium option for those that wish to pay.  Everybody ends up loving you.  That's living the dream, baby!  Remember, you could even charge $0.99 to remove the advertising for just 1 year; then they have to renew each year or the ads come back.  I haven't seen this done much yet, but the year is still young.
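Put as a quick worked example using the numbers above: 10,000 users × 10% = 1,000 buyers, and 1,000 × $0.99 ≈ $990 of up-front revenue, while the remaining 9,000 users (90%) keep generating ad impressions exactly as before.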

The Results

Because of Microsoft’s constant growth of the Windows Phone OS, the options for making money from your app development continue to grow, and the outlook for making that money continues to get brighter.  Combine that with the growth in market share we are seeing for Windows Phone, add in some great ideas, some great marketing and a little bit of creativity, and you might have to put on a fancy pair of sunglasses.  It's never going to be easy and you will have some wins and losses.  Learn from what people like.  Listen to what they want.  And then charge them for what they want using in-app purchases.

Hybrid Connections (Preview)

Mon, 05/12/2014 - 23:22

We're introducing Hybrid Connections, a cool and easy new way to build hybrid applications on Azure. Hybrid Connections are a feature of Azure BizTalk Services, enabling your Azure Website or Mobile Service to connect to on-premises data & services with just a few simple gestures right from the Azure portal. We're also introducing a free tier of Azure BizTalk Services, exclusively to make it easy for you to try this new hybrid connectivity.

Hybrid Connections will support all frameworks supported by Websites (.NET, PHP, Java, Python, node.js) and Mobile Services (node.js, .NET). Various Microsoft & non-Microsoft LOB applications are supported, including many application-specific protocols, with a few caveats. Hybrid Connections does not require changes to the network perimeter (such as configuring VPN gateways or opening firewall ports to incoming traffic).  It offers enterprise admins control over and visibility into resources accessed by the hybrid applications.

With Hybrid Connections, Azure websites and mobile services can access on-premises resources as if they were located on the same private network. Application admins thus have the flexibility to simply lift and shift specific front-end tiers to Azure with minimal configuration changes, extending their enterprise apps for hybrid scenarios.

To connect your website to an on-premises resource using a hybrid connection:

  • From the Azure Preview Portal for your website, select the Hybrid Connections tile in the Operations lens, and click on 'Add' 

 

  • Select an existing hybrid connection or create a new hybrid connection:
    • Enter a hybrid connection name, as well as the hostname and port for the on-premises resource
    • Use an existing BizTalk Service instance or create a new one
  • Click 'OK'

Once the connection has been created, its status will show as "Not Connected". To complete the connection with a single click from any on-premises Windows host:

  • Select the hybrid connection
  • Click on 'Listener Setup'
  • In the Hybrid Connection properties blade, choose 'Install and configure' - this will ask you for permission to set up the hybrid connection
  • Granting this permission will complete the hybrid connection setup

The status for the hybrid connection in the portal should now show as "Connected". That's it – your website is now connected to your on-premises server.

 

For Mobile Services, the configuration is just as simple using the Azure Management Portal:

  • Create a new BizTalk Service if you don’t already have one. Navigate to the ‘Hybrid Connections’ tab in your BizTalk Service and add a new Hybrid Connection.

 

  • Create a Hybrid Connection from the BizTalk Services or the WebSites portal as described above
  • Navigate to the Configuration tab for your mobile service and scroll to the Hybrid Connections section
  • Click on 'Add Hybrid Connection' and select a hybrid connection to use with your Mobile Service

 

Using Hybrid Connections, you can now use the same application connection string and APIs in your Azure Website or Mobile Service that you would normally use if these were hosted locally on your private network. For instance, if you are connecting to an on-premises SQL server ‘payrollSQL.corp.contoso.com’, you can use the SQL connection string “Data Source=payrollSQL.corp.contoso.com;Initial Catalog=payrollDB;User ID=<user>;Password=<password>” on your Azure Website or your Mobile Service.
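As a minimal sketch of that idea from PowerShell (placeholder credentials and a hypothetical table name), the connection string is used unchanged, exactly as it would be on the private network:

# Placeholder connection string - the on-premises hostname resolves through
# the hybrid connection exactly as it would on the private network.
$connStr = "Data Source=payrollSQL.corp.contoso.com;Initial Catalog=payrollDB;User ID=<user>;Password=<password>"

$conn = New-Object System.Data.SqlClient.SqlConnection($connStr)
$conn.Open()

$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT COUNT(*) FROM Employees"   # hypothetical table
$cmd.ExecuteScalar()

$conn.Close()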

 

To learn more about Hybrid Connections, visit:

 

Call for Content Ideas for reIMAGINE 2014 conference

Mon, 05/12/2014 - 23:00

As mentioned in my previous post, the November 10-13 Partner Event in Fargo, ND, USA will be called reIMAGINE 2014.

Now, the team organising the conference is asking for feedback so that they can make sure the conference covers all the topics the partner community want to learn about.

Please use the link below to access the Survey and provide your feedback.

If you want to see content from me on the Support Debugging Tool, or development techniques, or if you would like me to conduct a training course on development or the SDT, please make sure you ask.

 

Also please see Pam's post: Call for Content for reIMAGINE 2014.

David

News from TechEd 2014: Portability and flexibility with ASP.NET vNext

Mon, 05/12/2014 - 22:41

Outside, the Houston weather for the opening day of TechEd 2014 is a little gloomy, but inside the George R. Brown Convention Center it’s a bright day for Openness at Microsoft with the announcement of ASP.NET vNext! Check out the full details on the MS Open Tech Blog.

Microsoft Partner Network Partner Training News. Up-skill. On-sell (May 13 Update)

Mon, 05/12/2014 - 21:40

Below is a list of all the Australian training sessions to help you and your team skill up on the latest technology, solutions and news. This training is designed to help you stay current on Microsoft technologies and strategies through in person training, virtual instructor led training and on demand training.

Dynamics

Microsoft Dynamics Hour

 

 

Type: Sales & Marketing

Product:  Dynamics

Location & Month:  Virtual - Live Meeting (May)

Duration: 60 minutes

Fee: No Charge

 

Connect with the Dynamics team at The Dynamics Hour.  These sessions are scheduled quarterly for the Dynamics partner community. The Dynamics Hour will focus on a general business review, followed by key topic updates from keynote speakers, including news and events, and will finish with critical actions for the month.

Dynamics AX

Data Import  & Export Framework for Dynamics AX 2012 (NEW)

 

Type:  Technical (Level 300)

Product:  Dynamics AX

Location & Month:  Sydney (May)

Duration: 2 Days

Fee: Chargeable at 4 Partner Advisory Hours (PAH) per seat. Contact Yiming Li to purchase Partner Advisory Hours. Any travel expenses must be borne by the attendee.

 

Exclusively for Microsoft Dynamics AX partners, the objective of this 2-day onsite training is to educate you on the features of the tool, take you through guided examples of the pre-built features, and show how to leverage the framework for custom data migration activities. The workshop includes hands-on activities that involve solving typical data migration business scenarios.

Prerequisites: Attendees are required to have a minimum of Level 200 knowledge of Dynamics AX 2012 Architecture, familiarity with table structures and building integrations within Dynamics AX 2012 as well as experience in ETL processes.

Education Industry

Get Ready "Education Edition"

 

 

Type:  Sales & Marketing

Product:  Education Industry

Location & Month: Virtual -  Live Meeting (April &  May)

Duration: 60 minutes

Fee: No Charge

 

With a live webcast devoted exclusively to education, this invaluable and open monthly forum provides an overview of the various sales, marketing, licensing and training tools available to help you sell to the education industry. It’s also your opportunity to ask those burning questions - and get brilliant answers.

 

 

Edu DevCamp

 

 

 

Type:  Technical (Level 200)

Product:  Education Industry

Location & Month: In Person (May)

Duration: 2 Day

Fee: No Charge

 

Work alongside Microsoft experts and learn how to integrate your solution with Office and SharePoint, achieve Single Sign-On with Office 365 through Azure Active Directory, and build apps for the Office Marketplace. The new Office has a focus on cloud, making it easier for our partners to build integration that adds value for existing education customers and to bring to market apps which drive business growth.

Cloud Technologies

Cloud Champion Webinars

 

 

Type:  Sales & Marketing

Product:  Cloud Technologies

Location & Month:  Virtual - Live Meeting (April & May)

Duration: 60 minutes

Fee: No Charge

 

To get your head in the game around Cloud Technology, join our 12 week Cloud Coaching Class. As your Cloud Coach, we'll equip you with all the skills you need to come out fighting - and win.

 

 

Get Ready “The Azure Roadmap”

 

 

Type:  Sales & Marketing

Product:  Cloud Technologies

Location & Month:  Virtual - Live Meeting (April, May & June)

Duration: 60 minutes

Fee: No Charge

 

Providing an overview of the various sales, marketing, licensing and training resources available to help you sell Microsoft Cloud solutions, this one-hour monthly live webcast is also a great forum for discussion and a chance to ask any tough questions you may have.

 

 

SQL Server, SharePoint Server, System Centre, Windows Server & Hyper-V

 

SQL 2014 and Power BI Training

 

 

 

 

Type:  Technical (Level 300)

Product:  SQL 2014

Location & Month: Perth, Brisbane (May)

Duration: 3 Days

Fee: $399 AUD

 

With the commercial availability of SQL Server 2014, a new set of technologies are about to land that can transform your data platform and business intelligence projects.  This three day course will help you build your expertise.  You will leave this training with new skills to implement the latest features of this expanding product set. This course will cover Microsoft SQL Server 2014, Power BI & Windows Azure.

 

 

Get Ready “Big Data”

 

 

Type:  Sales & Marketing (Level 100)

Product:  SQL Server

Location & Month:  Virtual - Live Meeting (May & June)

Duration: 60 minutes

Fee: No Charge

 

This 45-minute monthly live webcast provides an overview of the various sales, marketing, licensing and training resources available to help you sell Microsoft SQL Server solutions.  This is also a forum for discussion, to ask questions/get answers, and generally to find out more information about Microsoft SQL 2012 Server, and how to position it with your customers.

 

 

Grow Your Business Delivering BI Solutions to SMBs (NEW)

 

 

Type:  Technical (Level 300)

Product:  SQL Server

Location & Month:  Melbourne, Sydney, Adelaide (June)

Duration: 1 Day

Fee: No Charge

 

SQL Server 2014 offers small and midsize organizations a fast, highly available database—and a cloud-ready data platform that will grow with their businesses. The newest course in the Ahead of the Game Technical Series, Be Lean & Stay Lean with SQL Server 2014, provides training, demonstrations, and hands-on instruction on how to use the latest SQL Server capabilities to deliver database applications both on-premises and in the cloud. This free one-day course covers a range of important technology areas, including In-Memory OLTP performance, hybrid database scenarios, enhanced high availability, and more.

 

Issues with Visual Studio Online version control - Mitigated

Mon, 05/12/2014 - 21:29

Update: Tuesday, 13 May 2014, 04:50 AM UTC

Our DevOps teams have worked together in identifying the root cause of the issue & put a fix in place for the affected customers. The root cause was a bad SQL query plan that caused querying changeset changes to take longer than 5 minutes. The issue was mitigated at 04:32 AM UTC and any affected customers are asked to retry their commands.

We sincerely thank you for your patience and apologize for the inconvenience this may have caused you.

-----

Initial Update: Tuesday, 13 May 2014, 04:30 AM UTC

We are currently investigating performance issues with TFS version control. A small percentage of customers may be experiencing slow performance while querying changeset changes. Git operations are not impacted by this issue. The root cause is understood and our DevOps team is working to mitigate the issue. We apologize for the inconvenience this may have caused.

VS Online Service Delivery Team

Internet of things: Fab, Fritzing and First

Mon, 05/12/2014 - 21:10
Check out the way to build boards using Fritzing! It is cool and free: http://www.fritzing.org

Here is my first circuit that I built using the Fritzing tool, and the Visual Micro Arduino code follows.

Schematic:

Arduino Code: the Visual Micro code here looks like the “blink” example, and it is, with just a few small changes.

int led1 = 13;

void setup() {
    /* add setup code here */
    // initialize the digital pin as an output.
    pinMode(led1, OUTPUT);
...(read more)

Planning for WPC 2014?

Mon, 05/12/2014 - 20:07

The WPC 2014 session catalogue is now live! Learn more about the specific content that will be available at WPC, including exciting opportunities to grow leadership potential, then register for WPC.

DEVICES AND DEPLOYMENT – MAY 2014 Readiness Update

Mon, 05/12/2014 - 20:00

May 8: Defence in Depth: Windows 8.1 Security
Take a deep dive into an advanced defence strategy to fortify your Windows experience. Sign up for this session to learn the five steps every hacker follows, harden your Windows Enterprise architecture against advanced attacks, and hear how to stop malware engineers. Bring your questions!

Drive sales with Windows 8.1 Update

On April 2, the release of the Windows 8.1 Update was announced publicly. The Update brings new navigation features which significantly improve the experience of users migrating from previous versions of Windows. Here are the latest resources, by partner type:

For small and medium businesses (SMB):

For enterprise:

New and updated Partner Marketing Center campaigns:

Infra Day RJ – Influencers come together for a full day of sessions and technology

Mon, 05/12/2014 - 19:30
MVPs, MTAC members, MSPs and industry professionals will present a variety of infrastructure talks this Saturday. Rio de Janeiro will host the Microsoft Infra-Day event this Saturday, May 17, starting at 8:30 AM - a technology event presented by influencers and industry professionals, where topics such as Windows Server, System Center, Azure and Windows 8.1 features will be covered in a day of talks and plenty of networking. Take the opportunity to register for free by clicking here. Learn...(read more)

How to Manage GPP settings for IE10+ clients on windows 2008R2 DC

Mon, 05/12/2014 - 18:32

You have probably noticed that on a domain controller running a relatively older version of Windows (such as Windows 2008 R2), IE10/11 does not appear in the IE version list of GPP, and clients with IE10/11 will never apply those GPP IE settings from a DC that does not have Windows 2012 or Windows 8 installed. This is by design, as GPP settings for IE10/11 need to be configured on a Server 2012 DC, or by means of the RSAT tools on a Windows 8/2012 client machine, to get applied.

What if no Windows 2012 or Windows 8 machines are ready yet?

One alternative option is to use GPP registry keys or Administrative Templates, which get upgraded and can be applied once the latest IE has been installed on the DC.
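For example, the home-page setting used later in this post maps to a well-known registry value; a minimal sketch of the equivalent client-side write (illustrative only - in practice you would deliver it via a GPP registry item or Administrative Template):

# Sets the IE home page for the current user - the same value a GPP
# registry item or Administrative Template policy would deliver.
Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Internet Explorer\Main' `
    -Name 'Start Page' -Value 'http://www.bing.com'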

Still want to use GPP? Here is a quick workaround for your reference, although it is provided with no warranties and confers no rights.

===============================

Workaround

================================

1. Edit the targeted GPO on the DC in the Group Policy Management Editor;

2. Expand User Configuration-> Preferences-> Control Panel Settings-> Internet Settings;

3. Right-click -> New -> Internet Explorer 8;

4. (Here I will take setting the home page as an example.) Set the home page to www.bing.com, press F5 to enable the setting, and click OK.

5. Find the InternetSettings.xml for the targeted GPP IE setting (generally located at ..\sysvol\domainname\Policies\{GPOID}\User\Preferences\InternetSettings\)

a. Go to User Configuration-> Policies-> Windows Settings-> Scripts (Logon/Logoff);

b. Double click Logon;

c. Click 'Show Files...', which will open a folder whose path ends with ..\user\scripts\Logon;

d. Go to the user folder; there you can see a Preferences folder;

e. Under the ..\USER\Preferences\InternetSettings folder, you can see a file named InternetSettings.xml; // you can copy it somewhere as a backup first.

6. Edit InternetSettings.xml, change the max version to 99.0.0.0, and save it (see the illustrative fragment after these steps).

7. Then we should be able to apply the GPP settings to clients with IE11 installed.
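A hypothetical before/after fragment of the version filter inside InternetSettings.xml (the element and attribute names in your file may differ - look for the attributes carrying the min/max browser version; other attributes are elided):

<!-- Before: the Internet Explorer 8 preference item stops matching at IE9 -->
<FilterFile min="8.0.0.0" max="9.0.0.0" ... />

<!-- After: raising max lets IE10/11 clients match and apply the setting -->
<FilterFile min="8.0.0.0" max="99.0.0.0" ... />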

Hope it helps.

Wenbo Fu from GBSD DSI Team
