
Feed aggregator

[Sample of Sept. 23] New C# 7.0 features in Visual Studio 15 Preview release

MSDN Blogs - Fri, 09/23/2016 - 02:47
Sept. 23

Sample : https://code.msdn.microsoft.com/Introduce-new-C-70-features-c639ed88

This sample demonstrates 7 new C# 7.0 features.
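The linked sample itself is not reproduced here, but as a rough, hedged illustration of the kind of features it covers, a snippet like the following (my own sketch, not code from the sample) exercises out variables, tuples with deconstruction, pattern matching, local functions and binary literals:

    using System;

    class Demo
    {
        static void Main()
        {
            // Out variables: declare the variable right at the call site.
            if (int.TryParse("42", out int parsed))
                Console.WriteLine(parsed);

            // Tuples and deconstruction (needs the System.ValueTuple package on older frameworks).
            var (min, max) = MinMax(new[] { 3, 1, 4, 1, 5 });
            Console.WriteLine($"{min}..{max}");

            // Pattern matching with "is".
            object o = 7;
            if (o is int n && n > 5)
                Console.WriteLine($"{n} is an int greater than 5");

            // Binary literals and digit separators.
            int mask = 0b0010_1010;
            Console.WriteLine(mask);
        }

        // Tuple return types plus a local function.
        static (int Min, int Max) MinMax(int[] values)
        {
            int lo = values[0], hi = values[0];
            foreach (var v in values) Track(v);
            return (lo, hi);

            void Track(int v)
            {
                if (v < lo) lo = v;
                if (v > hi) hi = v;
            }
        }
    }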

You can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If this is the first time you have heard of the Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.

Hybrid Learning Spaces: effective use of your existing labs with Cloud Virtual Machines

MSDN Blogs - Fri, 09/23/2016 - 01:39

 

For the last 10 years the concept of Hybrid Learning Spaces has been growing across academic institutions. This week I attended an Industrial Board meeting where one of the key debates was growth of the department, and how this was limited by computer lab facilities and simply how many PCs could be fitted into the lab.

Back in 1976, Maastricht University Library developed the concept of the Study Landscape to respond to students’ needs. It consists of a combination of study spaces and learning resources, supervised by skilled librarians and with generous opening hours.

Many UK universities have taken on this challenge; two I know very well are the University of Sheffield Information Commons and the University of Manchester Alan Gilbert Learning Commons. The Study Landscape contains the library, and all the necessary facilities are concentrated in one building.

These buildings are an amazing blend of learning technology and psychology which empowers students to get stuff done; organisations like Google and Microsoft have resource spaces of a similar format and characteristics.

So let's reflect this back on the changes that have occurred in computing, and on the fact that computer labs have remained computer labs for the past decade without any significant change other than the smart board.

 

So with the growing need for flexibility and mobility, and students who are 'digital natives', shouldn't we consider how we redesign labs and resources to make them more effective for what students need?

So we got into the debate: do we really need to provide PCs in the clusters, and how can we make the spaces scale to allow us to have more students?

Well, the key areas of the debate are:

1. Software provision – licensed and appropriate to course requirements and needs, so Windows, Linux and specialised software

2. The current lifetime of the PCs in the lab (i.e. refresh cycle)

3. Capacity in terms of seats, tables, power and connectivity (number of students)

4. Lab ownership – is this a departmental or a central IS-owned facility? (room booking, ownership)

5. CAPEX and OPEX costs of a lab rebuild or refresh, and total cost of ownership (budget ownership)

So with all these factors, many universities are moving to a hybrid learning approach with lab spaces, making them multi-discipline or multi-use rooms.

A key driver to this is costs and expansion plans of the faculty or department.

During the meeting we discussed how the use of cloud services can really advance this: all students now have their own PCs running Windows, Mac, Linux or a multiboot environment, they connect to JANET via eduroam, and institutions have all made significant investments in WiFi.

Therefore the only blocker to getting students to utilise their own PCs is the cost of licensing some products.

So what if the university simply took their existing desktop image, virtualised it and presented it as a virtual machine in the cloud?

Typical BSc courses are approx 280 credit points, so if 1 module credit is equivalent to 7.5 hours of teaching, a student studying a UG BSc in Computer Science undertakes 2,100 hours, or 87.5 days, of learning/direct engagement. Of course they are expected to do far more in their own time.

So let's look at some simple cost models of using Azure Virtual Machines to support teaching labs.

The way I want to present this is in terms of OPEX costs for the faculty, department or IT services. Microsoft has produced a great tool, the Azure pricing calculator: https://azure.microsoft.com/en-us/pricing/calculator/

We know from some scaling we have done across universities that 25 users are easily supported by an 8-core VM with 56 GB of memory; see the Data Science Virtual Machine post.

We know universities utilise Windows or Linux, and in some cases both, so let's look at the costs for each.

So if we look at the Azure pricing calculator for Windows costs (all costs as of 23/09/2016):

We then take this cost and simply factor it up to the 2,100 hours, so $6.83 x 2,100 = $14,343 for providing VMs to support 125 students, based on 25 students per VM with a total of 5 VMs supporting 125 users.

So now if we look at Linux

 

We then take this cost and simply factor it up to the 2,100 hours, so $4.69 x 2,100 = $9,849 for providing VMs to support 125 students, based on 25 students per VM with a total of 5 VMs supporting 125 students.

Now let's consider how many modules and how much time students actually need all that compute for: 2,100 hours is approx 88 days of activities, and if we say a third of the time is based on lab exercises, this equates to around 700 hours of compute time.

Windows

$4,781.00 USD = £3,672.67 GBP

Linux

$3,279.50 USD = £2,519.46 GBP

So that is under £4,000 of OPEX cost vs a CAPEX investment of 125 PCs at approx £530 per machine (£66,250), plus associated maintenance, support and image prep costs. VM labs can be started and closed within 15 minutes using custom scripts and Azure Resource Manager templates.
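To make the arithmetic above easier to replay with your own figures, here is a minimal sketch (my own, not from the post) of the OPEX calculation. The hourly rates, hour counts and PC price are the assumptions quoted above; because the rates are rounded, the Linux total comes out a few dollars different from the $3,279.50 quoted above.

    using System;

    class LabCostModel
    {
        static void Main()
        {
            // Assumptions taken from the post: combined hourly rate for the 5 VMs
            // (25 students per VM, 125 students in total) and ~700 hours of lab time
            // out of the 2,100 hours of direct engagement.
            const double windowsRatePerHour = 6.83;  // USD/hour, Windows, all 5 VMs
            const double linuxRatePerHour = 4.69;    // USD/hour, Linux, all 5 VMs
            const int labHours = 700;

            Console.WriteLine($"Windows OPEX for {labHours} lab hours: ${windowsRatePerHour * labHours:N2}");
            Console.WriteLine($"Linux OPEX for {labHours} lab hours:   ${linuxRatePerHour * labHours:N2}");

            // CAPEX comparison: 125 physical PCs at roughly £530 per machine.
            const int pcCount = 125;
            const double pcUnitCost = 530;           // GBP per PC
            Console.WriteLine($"Physical lab CAPEX: £{pcCount * pcUnitCost:N0} plus maintenance, support and imaging");
        }
    }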

I know there are many factors, but the purpose of this blog was to show the offset of infrastructure costs from CAPEX to OPEX and the flexibility this provides in utilising your teaching and learning spaces to grow student numbers and student satisfaction.

Please share your thoughts and comments.

Design Pattern: Security – Data Encryption

MSDN Blogs - Fri, 09/23/2016 - 01:00


Context: After applying Sensitive Data Encapsulation, all sensitive data is gathered in a known place in the database. This makes it possible to apply further protection best practices.

Problem: If any non-authorized actor manages to get access to a copy of the database, the sensitive data is immediately available in clear-text.

Forces:

  • Accessibility: anyone who managed to steal a copy of the database can at once read the sensitive information.

Solution: Encrypt sensitive data. Dynamics NAV offers a simple mechanism for data encryption, to be used by NAV developers.
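The NAV-specific encryption functions are described on the wiki linked below and are not reproduced here. Purely as a generic, hedged C# illustration of the pattern this post describes (encrypt the sensitive fields before they are persisted, so a stolen copy of the database holds only ciphertext), a sketch could look like this; key management is deliberately left out:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    // Generic illustration only; not the Dynamics NAV encryption mechanism.
    static class FieldProtector
    {
        // Encrypts one sensitive field value with AES; the random IV is returned
        // to the caller so it can be stored alongside the ciphertext.
        public static byte[] Encrypt(string plaintext, byte[] key, out byte[] iv)
        {
            using (var aes = Aes.Create())
            {
                aes.Key = key;
                aes.GenerateIV();
                iv = aes.IV;
                using (var encryptor = aes.CreateEncryptor())
                {
                    byte[] data = Encoding.UTF8.GetBytes(plaintext);
                    return encryptor.TransformFinalBlock(data, 0, data.Length);
                }
            }
        }

        // Decrypts a field value previously produced by Encrypt.
        public static string Decrypt(byte[] ciphertext, byte[] key, byte[] iv)
        {
            using (var aes = Aes.Create())
            {
                aes.Key = key;
                aes.IV = iv;
                using (var decryptor = aes.CreateDecryptor())
                {
                    byte[] data = decryptor.TransformFinalBlock(ciphertext, 0, ciphertext.Length);
                    return Encoding.UTF8.GetString(data);
                }
            }
        }
    }

    class Program
    {
        static void Main()
        {
            // The key would come from proper key management, not be generated ad hoc like this.
            byte[] key = new byte[32];
            RandomNumberGenerator.Create().GetBytes(key);

            byte[] cipher = FieldProtector.Encrypt("4111-1111-1111-1111", key, out byte[] iv);
            Console.WriteLine(FieldProtector.Decrypt(cipher, key, iv));
        }
    }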

 

Read more on NAV Design Patterns wiki site…

By Bogdana Botez at Microsoft Development Center Copenhagen

Azure ServiceBus Queue–reading query results from Stream Analytics

MSDN Blogs - Fri, 09/23/2016 - 00:26

This blog is part of an IoT series, where I am trying to build a few IoT devices that push events to Azure EventHub. From the EventHub, Azure Stream Analytics will execute my query to calculate average values for each individual device and publish these average values to Azure ServiceBus. From Azure ServiceBus, I am going to read the average values in an Azure Function App and save them into Azure Redis Cache. My Azure website will poll this Redis Cache and display the average values.

Here is the list of blog posts in this series:

      1. Azure IoT
      2. Azure EventHub–sending IoT device events to EventHub
      3. Azure ServiceBus Queue–reading query results from Stream Analytics
      4. Azure Stream Analytics–reading events from EventHub, running query and saving results to ServiceBus
      5. Azure Function Apps – reading events from ServiceBus and writing to Redis Cache

In this blog, I am going to show how to configure Azure ServiceBus.

    1. Log into Azure Portal
    2. Click on + New button
    3. In the Search, type ServiceBus
    4. Click Service Bus, and click create
    5. Provide name, resource group and pricing tier as shown below
    6. Once the ServiceBus is deployed, navigate to the newly created ServiceBus
    7. Click on the Queues, click on Add queue, provide queue name as shown below
    8. Now get the connection string for this ServiceBus Queue
    9. Click on Shared access policies, then click RootManageSharedAccessKey and copy the connection string as shown below
    10. Next we are going to create the Azure Stream Analytics job; please click here to continue (a quick code check of the new queue is sketched below)
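Not part of this walkthrough, but if you want to confirm the new queue and connection string work before wiring up Stream Analytics, a minimal C# smoke test along these lines can help (it assumes the WindowsAzure.ServiceBus NuGet package; the connection string and queue name are placeholders):

    using System;
    using Microsoft.ServiceBus.Messaging;

    class QueueSmokeTest
    {
        static void Main()
        {
            // Placeholders: paste the connection string copied in step 9 and the queue name from step 7.
            const string connectionString = "<ServiceBus connection string>";
            const string queueName = "<queue name>";

            var client = QueueClient.CreateFromConnectionString(connectionString, queueName);

            // Send a simple test message, then read it back to confirm the queue is reachable.
            client.Send(new BrokeredMessage("hello from the smoke test"));

            BrokeredMessage received = client.Receive(TimeSpan.FromSeconds(10));
            if (received != null)
            {
                Console.WriteLine(received.GetBody<string>());
                received.Complete();
            }

            client.Close();
        }
    }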

Sending encrypted workflow notification emails

MSDN Blogs - Fri, 09/23/2016 - 00:25

Does your business require sending encrypted workflow notification emails in line with your company's security requirements? The following steps describe how to do it using CRM Online and Office 365.

Office 365 Message Encryption requires the Azure Rights Management service. Once you have a subscription to this service, you can activate it as described in the following procedure. For more information about this requirement, see Prerequisites for using Office 365 Message Encryption.

Necessary steps:

  1. Server side synchronization configuration for your CRM Online instance
  2. Activate Azure Rights Management
  3. Set up Azure Rights Management for Office 365 Message Encryption
  4. Define rules to encrypt email messages
  5. Create a test CRM workflow and test it
1. Server side synchronization configuration for your CRM Online instance

It is assumed that server side synchronization is set up properly for your CRM Online instance using Exchange Online for outgoing emails. More information: Set up server-side synchronization of email, appointments, contacts, and tasks

2. Activate Azure Rights Management

Check whether you have an Azure Rights Management subscription:

Go to https://portal.office.com/AdminPortal/Home?switchtomoderndefault=true#/subscriptions within Office 365. You should see Azure Rights Management Premium among your subscriptions.

 

If your subscription does not include it, press the +Add Subscription button and select Azure Rights Management Premium (you can buy it or start a 30-day trial):

 

After a few minutes you should see this under Subscriptions in the Office 365 Admin Center:

 

More information: how to activate Azure Rights Management (https://docs.microsoft.com/en-us/rights-management/deploy-use/activate-service)

3. Set up Azure Rights Management for Office 365 Message Encryption

Once you have Azure Rights Management, the next step is to set up Azure Rights Management for Office 365 (Exchange Online) message encryption. We will use Windows PowerShell to connect to Exchange Online and accomplish this step. (More information: how to Connect to Exchange Online using PowerShell)

Open a PowerShell window as Administrator and execute the following PowerShell commands:

Set-ExecutionPolicy RemoteSigned

$UserCredential = Get-Credential

Enter your Office 365 Global Administrator user credentials.

$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection

Import-PSSession $Session

 

Configure the Rights Management Services (RMS) online key-sharing location in Exchange Online. Use the RMS key sharing URL corresponding to your location, as shown in this table:

European Union: https://sp-rms.eu.aadrm.com/TenantManagement/ServicePartner.svc
North America: https://sp-rms.na.aadrm.com/TenantManagement/ServicePartner.svc
South America: https://sp-rms.sa.aadrm.com/TenantManagement/ServicePartner.svc
Asia: https://sp-rms.ap.aadrm.com/TenantManagement/ServicePartner.svc

 

Since my tenant is located in the European Union, I use the following PowerShell command:

Set-IRMConfiguration -RMSOnlineKeySharingLocation "https://sp-rms.eu.aadrm.com/TenantManagement/ServicePartner.svc"

Run the following command to import the Trusted Publishing Domain (TPD) from RMS Online:

Import-RMSTrustedPublishingDomain -RMSOnline -Name "RMS Online"

To verify that you successfully configured IRM in Exchange Online to use the Azure Rights Management service, run the following command:

Test-IRMConfiguration -RMSOnline

Among other things, the command checks connectivity with the RMS Online service, downloads the TPD, and checks its validity. If everything is OK, you should see as result of the test: ‘Overall result: pass’.

Run the following commands to disable IRM templates from being available in OWA and Outlook and then enable IRM for your cloud-based email organization to use IRM for Office 365 Message Encryption.

To disable IRM templates in OWA and Outlook:

Set-IRMConfiguration -ClientAccessServerEnabled $false

To enable IRM for Office 365 Message Encryption:

Set-IRMConfiguration -InternalLicensingEnabled $true

To test the IRM functionality, run the following command, where you use your username instead of administrator@encryptedwfmail.onmicrosoft.com:

Test-IRMConfiguration -Sender administrator@encryptedwfmail.onmicrosoft.com

If everything is OK, you should see as result of the test: ‘Overall result: pass’.

More information how to Set up Microsoft Azure Rights Management for Office 365 Message Encryption.

4. Define rules to encrypt email messages

The next step is to define the conditions when we want to encrypt an email. In our case, email encryption is only needed when the body of the email contains the following phrase: ‘(This email was encrypted using Microsoft Office 365)’

It can be defined in the Exchange Admin Center (EAC), which can be accessed within Office 365 via Admin > Exchange:

From the EAC, go to mail flow > rules:

Select + > Create a new rule…

 

Enter the Name (for example ‘Encrypted CRMONL workflow email’) and click on the More options… button in the opening pop-up window as shown below:

 

 

Then specify when the rule should be applied. So select The subject or body > subject or body matches these text patterns:

 

Enter the phrase (for example: ‘(This email was encrypted using Microsoft Office 365)’), press the + sign and finally press Ok as shown below:

 

The last step is to set Office 365 Message Encryption by selecting Modify the message security… > Apply Office 365 Message Encryption as shown below, and then press Save:

 

 

More information how to Define rules to encrypt or decrypt email messages.

5. Create a test CRM workflow and test it

The final step is to create a workflow where we want to use the email encryption and test it in practice.

We are creating a simple workflow which is fired when an account is created and sends an encrypted notification email. Assuming that the reader is familiar with Dynamics CRM workflow basics, only the relevant parts are highlighted here.

When you define the workflow, specify the following:

  • Process Name: Account create – encrypted mail
  • Entity: Account
  • Category: Workflow
  • Start condition: Record is created
  • Step: Send email: Create new message

as shown below:

 

And add the email properties as follows:

 

The key element of the workflow notification email is the last sentence in the message body – (This email was encrypted using Microsoft Office 365) – which should be the same string as we defined in the message encryption rule.

Let’s Save and Activate the workflow, before we can test our work.

Finally, to test our work, first let’s create a new account in CRM:

 

Then we receive the encrypted account creation notification email, which actually arrives as an HTML attachment to the email:

 

When we try to open the HTML file in a browser, we have two options:

  • Sign in with our Office 365 credentials or
  • Use a one-time passcode

 

Now, using the one-time passcode option, we will get another email including the one-time passcode:

 

After specifying the passcode, we can read the workflow notification email:

 

 

– Miklos Hoffmann

 

 

Azure Function Apps – reading events from ServiceBus and writing to Redis Cache

MSDN Blogs - Fri, 09/23/2016 - 00:25

This blog is part of an IoT series, where I am trying to build a few IoT devices that push events to Azure EventHub. From the EventHub, Azure Stream Analytics will execute my query to calculate average values for each individual device and publish these average values to Azure ServiceBus. From Azure ServiceBus, I am going to read the average values in an Azure Function App and save them into Azure Redis Cache. My Azure website will poll this Redis Cache and display the average values.

Here is the list of blog posts in this series:

    1. Azure IoT
    2. Azure EventHub–sending IoT device events to EventHub
    3. Azure ServiceBus Queue–reading query results from Stream Analytics
    4. Azure Stream Analytics–reading events from EventHub, running query and saving results to ServiceBus
    5. Azure Function Apps – reading events from ServiceBus and writing to Redis Cache

In this blog, I am going to show how to configure Azure Function Apps to read from Azure ServiceBus and write them to Azure Redis Cache.

    1. Log into Azure Portal
    2. Click on + New button
    3. In the Search, type Function App
    4. Select Function App and provide name, resource group, storage account as shown below
    5. Click on Create button
    6. Once the Function App is deployed, browse to the newly created Function App
    7. In the Function App, click on the + New Function
    8. In the templates, select ServiceBusQueueTrigger – C# as shown below
    9. Now we need to add ServiceBus Connection String
    10. Click on new link next to the Service Bus Connection as shown below
    11. Provide the connection string name and connection string to Azure Service Bus. Click here to learn how to get connection string.
    12. Expand function and click on Develop node as shown
    13. Enter the code below in the run.csx file

      using System;
      using System.Threading.Tasks;
      using StackExchange.Redis;
      using Newtonsoft.Json;

      public static void Run(string myQueueItem, TraceWriter log)
      {
          log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");

          var msg = JsonConvert.DeserializeObject<MyOutput>(myQueueItem);

          if (msg == null)
          {
              log.Info("failed to convert msg to MyOutput");
              return;
          }

          IDatabase cache = Connection.GetDatabase();
          cache.StringSet(msg.devicename, myQueueItem);
      }

      private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
      {
          return ConnectionMultiplexer.Connect("redis connection string");
      });

      public static ConnectionMultiplexer Connection
      {
          get
          {
              return lazyConnection.Value;
          }
      }

      class MyOutput
      {

          public string devicename { get; set; }
          public double avgtemp { get; set; }
          public double avgspeed { get; set; }
      }

    14. Now we need to add the NuGet packages for Newtonsoft.Json and StackExchange.Redis
    15. Click on the View files and click + to add a new file
    16. Set the name of the new file to Project.json and add these lines:
      {
          "frameworks": {
              "net46": {
                  "dependencies": {
                      "StackExchange.Redis":"1.1.603",
                      "Newtonsoft.Json": "9.0.1"
                  }
              }
          }
      }

    17. Now try to run the Function: click on the Run button in the Run section as shown below
    18. Check for any compiler errors in the above Logs section
    19. Next, push a few events into the EventHub using the console app; the code is here
    20. These events will travel from EventHub to Stream Analytics to Service Bus Queue to Function App to Redis Cache
    21. To verify that the event reached Redis Cache, in the Azure Portal navigate to the Redis Cache, click on Console and execute these commands as shown, or check from code as sketched below
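If you prefer to verify from code instead of the Redis console, a small check like the following (not part of the original walkthrough; it assumes the StackExchange.Redis package, and the connection string and device name are placeholders) reads back the value the function cached:

    using System;
    using StackExchange.Redis;

    class RedisCheck
    {
        static void Main()
        {
            // Placeholder: the Azure Redis Cache connection string from the portal.
            var redis = ConnectionMultiplexer.Connect("<redis connection string>");
            IDatabase cache = redis.GetDatabase();

            // The function above stored the raw queue message under the device name as the key.
            string value = cache.StringGet("<devicename>");
            Console.WriteLine(value ?? "no entry found for that device");
        }
    }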
       

Azure Stream Analytics–reading events from EventHub, running query and saving results to ServiceBus

MSDN Blogs - Fri, 09/23/2016 - 00:24

This blog is part of an IoT series, where I am trying to build a few IoT devices that push events to Azure EventHub. From the EventHub, Azure Stream Analytics will execute my query to calculate average values for each individual device and publish these average values to Azure ServiceBus. From Azure ServiceBus, I am going to read the average values in an Azure Function App and save them into Azure Redis Cache. My Azure website will poll this Redis Cache and display the average values.

Here is the list of blog posts in this series:

      1. Azure IoT
      2. Azure EventHub–sending IoT device events to EventHub
      3. Azure ServiceBus Queue–reading query results from Stream Analytics
      4. Azure Stream Analytics–reading events from EventHub, running query and saving results to ServiceBus
      5. Azure Function Apps – reading events from ServiceBus and writing to Redis Cache

In this blog, I am going to show how to configure Azure Stream Analytics to read from Azure EventHub, run the query, and save the results to Azure ServiceBus.

    1. Login to Azure Portal (http://portal.Azure.com)
    2. Click on + New button
    3. In the Search, type Stream Analytics
    4. Select Stream Analytics Jobs and click Create button
    5. Provide a name and resource group as shown below
    6. Click on Create button
    7. Once this Stream Analytics Job is deployed, navigate to this newly created Stream Analytics Job
    8. Click on the Overview, select Input, click Add and in the New Input select the source as EventHub as shown below
    9. Now select the Output, click on the Add button and set the New Output source to ServiceBus Queue as shown below
    10. Now set the query: click on Query and, in the query editor, type the query below (a sketch of the event payload it assumes appears after this list)

        SELECT DeviceName, AVG(temperature) AS AvgTemp, AVG(speed) AS AvgSpeed
        FROM fromWabacEventHub TIMESTAMP BY datetime
        GROUP BY DeviceName, TumblingWindow(second, 10)


       

    11. Now start the Stream Analytics Job, by clicking on the Start button
    12. To check whether the query is being executed and the results are saved to ServiceBus, in the console app run these two functions: SendEvents(10) and CheckIfServiceBusGotEvents()
    13. Complete code is here
    14. Next, we will configure Azure Function Apps to read from Azure ServiceBus and write to Azure Redis Cache. Click here for the next blog
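For reference, the query in step 10 assumes each event arriving from the EventHub carries DeviceName, temperature, speed and datetime fields. A hedged C# sketch of such a payload is below; the real console-app code is linked in step 13, and the exact property names there may differ:

    using System;
    using Newtonsoft.Json;

    // Assumed shape of one telemetry event sent to the EventHub; the Stream Analytics
    // query groups by DeviceName and averages temperature and speed over a 10-second window.
    class DeviceEvent
    {
        public string DeviceName { get; set; }
        public double temperature { get; set; }
        public double speed { get; set; }
        public DateTime datetime { get; set; }
    }

    class Program
    {
        static void Main()
        {
            var evt = new DeviceEvent
            {
                DeviceName = "device-01",
                temperature = 21.5,
                speed = 42.0,
                datetime = DateTime.UtcNow
            };

            // Serialize to JSON before sending to the EventHub (the sending code itself is omitted here).
            Console.WriteLine(JsonConvert.SerializeObject(evt));
        }
    }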


주간닷넷 (The week in .NET) – August 30, 2016

MSDN Blogs - Fri, 09/23/2016 - 00:16

We are waiting for your active participation. If you have found or written an article, some source code, or a library that is too good to keep to yourself, let us know through Gist or the 주간닷넷 page. If you share news from .NET user groups, we will pass it on to everyone through 주간닷넷.

Community news of the week

Taeyo.NET is translating the ASP.NET Core documentation from http://docs.asp.net into Korean and publishing it as a series.

On .NET

In this week's On .NET, Phillip Carter introduced F# for beginners, with a fun demo.

https://sec.ch9.ms/ch9/fd0f/9e937d1c-b46f-40e8-8d97-73cc5151fd0f/onnet20160825phillipcarter_high.mp4

In next week's On .NET we will talk about .NET and RavenDB with Ayende Rahien of Hibernating Rhinos. Check it out on Channel 9 via the link!

Tool of the week – F# Data

The F# Data library, which brings all the advantages of F# type providers, lets you work with various data formats (CSV, JSON, XML) from F#. It also supports calling remote APIs over HTTP requests.

Tool of the week – NDepend

NDepend is a tool that finds architectural problems in your code base. It reports potentially problematic coupling and cyclic dependencies within your code's dependency structure, and it also lets you query your code with LINQ.

Game of the week – Mervils: A VR Adventure

Mervils: A VR Adventure is an open-world RPG with platformer elements. Together with the ancient wizard Merlin, the player sets out on an adventure to stop the return of the dreadful villain Balazar. With a little money and a great sword, you can vividly experience in VR the epic journey to defeat Balazar. Built for VR, the world of Mervils offers a rich story, beautiful environments and challenging puzzles as you complete your quest to save the world!

Mervils: A VR Adventure was created by VitruviusVR and developed with Unity and C#. It is currently playable on the Oculus Rift through Oculus Home and on the HTC Vive through Steam, and it will also come to PlayStation VR this fall.

.NET | ASP.NET | F# | Xamarin | Games

주간닷넷 is a translation of The week in .NET, published every week on the .NET Blog, and the Korean translation is produced with the help of Song Ki-su (송기수), Executive Director at OpenSG (오픈에스지).

Song Ki-su, Technical Executive Director, OpenSG
He is currently the technical director of OpenSG, a development consulting company, and works on projects across various industries. Before joining, he was a trainer running .NET developer courses at places such as the Samsung Multicampus education center, and since 2005 he has spoken at developer conferences including TechED Korea, DevDays and MSDN Seminar. These days he is a 'Happy Developer' who spends most of his working day with Visual Studio and believes he can be happy by writing about one book a year and giving a couple of lectures a month.

Guest post: Desktop Bridge – Expanding a Win32 application with the Universal Windows Platform

MSDN Blogs - Fri, 09/23/2016 - 00:00

This post was written by Matteo Pagani, Windows AppConsult Engineer at Microsoft

In the previous post we learned how to use the Desktop Bridge to convert an application manually, without going through the Desktop App Converter and an installer: this is the approach to take when the starting point of the conversion is an executable or, more generally, a Win32 application we developed ourselves and whose source code we own. In the previous posts we used a Windows Forms application as the starting point.

In this post we will build on the concepts from the previous post but, this time, we will start expanding the Win32 application to take advantage of some of the features offered by the Universal Windows Platform. A desktop developer might ask, at this point, why this approach should be interesting: because of its more locked-down security model, in many scenarios a UWP application has to live with more restrictions than a Win32 application, which can make the developer's job harder.

We already saw in the first post the many advantages brought by the Desktop Bridge conversion, such as a more efficient deployment mechanism (installation and uninstallation) and greater security, since the apps run inside a container that virtualizes many of the aspects of Windows (such as the file system or the registry) that the application could otherwise use as an attack vector for malicious purposes. On top of this, the IT world is going through a major shift and, as a consequence, more and more users no longer use only a computer but also devices such as smartphones and tablets, which they interact with in different ways: voice commands, digital pens, notifications and touch screens are just a few of the many examples. These are scenarios that fit perfectly with the philosophy of the Universal Windows Platform, and many of these features can be implemented much more simply and efficiently than we could in a traditional Win32 application.

Take notifications, for example: desktop applications have always been able to alert the user about an event, for instance by showing a popup or a message in the task bar. These approaches, however, do not integrate fully with the Windows ecosystem. A toast notification generated through the Universal Windows Platform, on the other hand, has a look & feel consistent with the operating system and the other applications; it is stored in the Action Center, so the user can read it even if they were not in front of the computer when it was sent; and it offers the ability to perform a set of actions (such as replying to a message) directly from the notification itself, without even opening the application. And what if you needed to receive notifications even when the application is not running? In that case, in the Win32 world you would have to find an alternative, such as using a Windows service or keeping the application always open but hidden in the system tray. Thanks to the Universal Windows Platform, instead, you have a much more efficient native mechanism for these scenarios: push notifications.

As you can see, there are dozens of scenarios where integrating Universal Windows Platform APIs into a Win32 application could make your life much easier. So let's do it! In this article we will start from the same Windows Forms application as in the previous post, but we will add a new feature: every time the text file is created on the desktop, we will show a toast notification to the user to confirm the outcome of the operation.

Accessing the Universal Windows Platform

As just mentioned, the starting point for our experiment will be the Windows Forms application created in the previous post, which you can download from GitHub at https://github.com/qmatteoq/DesktopBridge/tree/master/Step%201/AppConverter.Step1. Just a reminder: to work on this sample application you will need the Preview version of Visual Studio 15 with the Desktop Bridge extension installed; this makes it much easier to test and deploy the converted version of the application.

For the moment we will focus on the Windows Forms application, into which we need to integrate the UWP APIs. The first thing to do is make the Universal Windows Platform visible to the Windows Forms application: by default it runs on top of the .NET Framework and, as a consequence, it has no access to the UWP APIs. We therefore need to add a reference to the metadata that contains all the definitions of the supported APIs: the file is called Windows.winmd and you can find it in the folder C:\Program Files (x86)\Windows Kits\10\UnionMetadata.

The first step, therefore, is to right-click the Windows Forms project in Visual Studio, choose Add reference, press the Browse button and look for this file on your computer:

The only important thing to point out, as highlighted in the previous image, is to select All files in the drop-down menu that filters the displayed files. By default, the dialog for managing the references of a Win32 application only shows standard .NET components, such as DLLs or executables, while files with the .winmd extension belong to the Windows Runtime ecosystem, on which the Universal Windows Platform is built.

Now that we have access to the platform, we can start writing the code needed to generate a toast notification, which should look familiar if you already have experience developing UWP applications:

public void ShowNotification()
{
    string xml = @"<toast>
                     <visual>
                       <binding template='ToastGeneric'>
                         <text>Desktop Bridge</text>
                         <text>The file has been created</text>
                       </binding>
                     </visual>
                   </toast>";

    Windows.Data.Xml.Dom.XmlDocument doc = new Windows.Data.Xml.Dom.XmlDocument();
    doc.LoadXml(xml);
    ToastNotification toast = new ToastNotification(doc);
    ToastNotificationManager.CreateToastNotifier().Show(toast);
}

In the UWP world, toast notifications are represented by an XML payload that defines the structure and the content of the notification. In this example we used a very simple template that shows just a title and a message (through two text elements). After defining the XML we can start using the Universal Windows Platform APIs for real: you can see, in fact, that to create the notification we first have to convert the string into an object of type XmlDocument. This type, however, does not belong to the .NET Framework but to the Universal Windows Platform: you can tell from the fact that it lives in the Windows.Data.Xml.Dom namespace.

The next step is to create an object of type ToastNotification (again, a class specific to the Universal Windows Platform, belonging to the Windows.UI.Notifications namespace) and pass the XmlDocument object we just created as a parameter. As the last operation, we use the ToastNotificationManager class to create an object that exposes the Show() method, which we need to display the notification we just created.

Now that we have created this method, we are ready to change the code we wrote in the previous post to create the text file on the user's desktop:

private void OnCreateFile(object sender, EventArgs e)
{
    string userPath = Environment.GetFolderPath(Environment.SpecialFolder.DesktopDirectory);
    string fileName = $"{userPath}\\centennial.txt";
    File.WriteAllText(fileName, "This file has been created by a Centennial app");
    ShowNotification();
}

After the file has been created, we added a call to the ShowNotification() method we just wrote. At this point it is important to highlight one thing. Access to the Universal Windows Platform APIs is not tied only to the version of the operating system the app is running on (which, of course, must be Windows 10); it is made possible by the Universal Windows Platform container in which the Win32 application is wrapped when we convert it. We can see this dependency simply by launching the Windows Forms application directly from Visual Studio, without going through the deployment project. When we press the button in the form, the file is correctly created on the desktop, but immediately afterwards an exception appears when the ShowNotification() method is called:

The reason is that the application, running as native Win32, is not able to access the Universal Windows Platform APIs. If, instead, we set the deployment project (AppConverter.Step1.DeployToUWP) as the startup project, started debugging and pressed the button in the UI again, this time, besides the file being created on the desktop, we would correctly see the notification appear: the application is no longer running as native code, but inside the Universal Windows Platform container.

One project, two different worlds

The code we have seen so far has one drawback: it is exciting to be able to start using Universal Windows Platform APIs without having to rewrite our application from scratch as a native UWP app but, at the same time, we may not yet be ready to release an application that can only run on Windows 10. Windows 10 is currently the operating system with the fastest adoption rate in Microsoft's history, both in the consumer and in the enterprise world, but it is very likely (especially in the enterprise) that you still have to support customers running earlier versions of the operating system, such as Windows 7 or Windows 8.

In this scenario we would therefore have to maintain two separate branches of our Windows Forms application: one that simply writes the file to the desktop and another that also sends the toast notification. We have just seen, in fact, that launching the native Win32 application, without going through the Desktop Bridge container, raised an exception as soon as we tried to send the notification. This is the same experience a Windows 7 or Windows 8 user would have: the application would crash, since versions prior to Windows 10 do not include the Universal Windows Platform.

Fortunately, conditional compilation comes to the rescue here and lets us avoid maintaining two separate branches: we can keep a single shared code base and make sure the UWP APIs are used only when the application runs inside the container, while keeping the regular execution flow when the native Win32 version is used.

Let's see how. First of all we need to create a new build configuration, which we will use only when we have to compile the Win32 app to include it in the UWP deployment project. Open Visual Studio and, from the Build –> Configuration Manager menu, click the drop-down menu named Active solution configuration and select New:

 

Give the new configuration a meaningful name (for example, DesktopUWP) and choose from the drop-down menu to copy the same settings as the Debug configuration.

 

 

Now right-click the Windows Forms project and choose Properties. In the Build section, make sure you select the DesktopUWP configuration we just created from the Configuration drop-down menu and add, in the section named General, a new conditional compilation symbol (to keep things simple, we will use the same name as the configuration, that is DesktopUWP).

 

 

Now we need to slightly modify our code, adding a special attribute to the ShowNotification() method we created earlier:

[Conditional("DesktopUWP")]
public void ShowNotification()
{
    string xml = @"<toast>
                     <visual>
                       <binding template='ToastGeneric'>
                         <text>Desktop Bridge</text>
                         <text>The file has been created</text>
                       </binding>
                     </visual>
                   </toast>";

    Windows.Data.Xml.Dom.XmlDocument doc = new Windows.Data.Xml.Dom.XmlDocument();
    doc.LoadXml(xml);
    ToastNotification toast = new ToastNotification(doc);
    ToastNotificationManager.CreateToastNotifier().Show(toast);
}

Thanks to this attribute, the ShowNotification() method will be compiled and executed only when we build the application using the DesktopUWP configuration.

From now on:

  • Every time you need to create a version of the application to use on Windows 7 or Windows 8, compile the Windows Forms application using the Debug or Release configuration
  • Every time you need to create a new package using the Desktop Bridge, compile the Windows Forms application using the DesktopUWP configuration

There is one last step: when we created the DesktopUWP configuration, Visual Studio created a new subfolder inside the bin folder, where the build output (the executable and any DLLs) is placed. As a consequence, it no longer contains just the classic Debug and Release folders, but also a new folder called DesktopUWP:

 

It is the executable inside this new folder that contains the UWP APIs we used to send the toast notification, so it is the one that must be copied into the PackageLayout folder of the deployment project, from which the AppX is then generated. We therefore need to make a change to the AppXPackageFileList.xml file of the deployment project to reflect this new configuration:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="14.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <MyProjectOutputPath>$(PackageLayout)\..\..\AppConverter.Step2\bin</MyProjectOutputPath>
  </PropertyGroup>
  <ItemGroup>
    <LayoutFile Include="$(MyProjectOutputPath)\DesktopUWP\AppConverter.Step2.exe">
      <PackagePath>$(PackageLayout)\AppConverter.Step2.exe</PackagePath>
    </LayoutFile>
  </ItemGroup>
</Project>

As you can see, in the LayoutFile element we replaced the Debug folder with DesktopUWP. If you want to test how the conditional compilation behaves, just put a breakpoint inside the OnCreateFile() event handler tied to the button press:

  • If you set the native Windows Forms project as the startup project and start debugging, you will notice that the file is created on the desktop but the ShowNotification() method is skipped entirely.
  • If you set the UWP deployment project as the startup project and start debugging, you will notice this time that the ShowNotification() method is correctly invoked and the toast notification is shown on screen.

We have reached our goal! You now have exactly the same code base, able to run on any version of Windows, but which, at the same time, when it runs on Windows 10 as a UWP application, can take advantage of additional features of the operating system.

Using asynchronous methods

If you have even a little experience developing with the Windows Runtime or the Universal Windows Platform, you certainly know that one of the goals of these platforms is to make it easy to create fast and responsive applications: as a consequence, any API that takes more than 50 milliseconds to complete its work has been implemented using the async / await pattern. This makes it easier for the developer to write asynchronous code, that is, code that keeps the UI thread as free as possible (and therefore the user interface fluid and responsive), regardless of how long the operation the application has to perform takes.

If, however, you tried to use an asynchronous method exposed by the Universal Windows Platform in a Win32 application, you would notice that Visual Studio is not able to compile the project. Let's look at a real example by adding another feature to our application that the Universal Windows Platform makes very easy to implement: speech synthesis. Let's add another button to our Windows Forms application and handle its Click event with the following block of code:

private async void OnGenerateAudio(object sender, EventArgs e)
{
    SpeechSynthesizer speech = new SpeechSynthesizer();
    var result = await speech.SynthesizeTextToStreamAsync("Hello centennial");
    string userPath = Environment.GetFolderPath(Environment.SpecialFolder.DesktopDirectory);
    string fileName = $"{userPath}\\speech.wav";

    using (FileStream stream = File.Create(fileName))
    {
        await result.AsStreamForRead().CopyToAsync(stream);
        await stream.FlushAsync();
    }
}

As you can see, thanks to the UWP APIs it is very easy to reach our goal: we use the SpeechSynthesizer class (which belongs to the Universal Windows Platform namespace Windows.Media.SpeechSynthesis) and the SynthesizeTextToStreamAsync() method, which takes a string as input and returns its spoken rendering as an audio stream. The rest of the code uses standard .NET Framework APIs: we create a file on the user's desktop called speech.wav, open it for writing and copy into it the audio stream we just obtained. Notice how the method has been set up to use the async / await pattern: we added the async keyword to the event handler's signature, and we prefixed all the asynchronous methods we used, such as SynthesizeTextToStreamAsync() or CopyToAsync(), with await. You will, however, immediately notice that Visual Studio shows an error, complaining about the missing implementation of the GetAwaiter() method.

 

 

To solve this problem we need to add another reference to our Windows Forms application, which is not added by default since it is part of the .NET Core framework and not of the traditional one. The library is called System.Runtime.WindowsRuntime.dll and you can find it at C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETCore\v4.5.1

After adding this reference you will immediately see the error message disappear, and you will be able to compile the Windows Forms application without problems. If you want to test this feature, remember that in this case as well you cannot launch the native Windows Forms application, but must launch the UWP deployment project included in the solution, since we are using some Universal Windows Platform APIs. If you did everything correctly, at the end of the procedure you will find on your desktop an audio file called speech.wav, with a clip of a synthesized voice saying the phrase "Hello centennial".

In conclusion

In this article we saw how we can start integrating some features of the Universal Windows Platform into a Win32 application. Two important links worth pointing out:

In the next post we will take our application another step forward, integrating the Windows Forms application with a component specific to the UWP world: background tasks.

Happy coding!

Invitation: WUG Days 2016

MSDN Blogs - Thu, 09/22/2016 - 22:40

We invite you to the largest free educational conference in Brno, intended for administrators, developers and database specialists interested in Microsoft technologies. In 40 practically oriented sessions delivered by leading Czech experts you will learn not only about what's new in Windows Server 2016, .NET and SQL Server 2016, but also about best practices from various areas of managing and securing your IT infrastructure and of developing desktop, web and mobile applications.

Venue: FIT VUT, Božetěchova 1/2, Brno

Registration for the event is required here.

Visual Studio 2015 install failures (Android SDK Setup) behind a Proxy

MSDN Blogs - Thu, 09/22/2016 - 19:48

I was recently working with a customer trying to install Visual Studio 2015 in their corporate network, which happened to be behind a firewall.  Access to the internet was restricted and provided through a proxy server that would allow some requests through the firewall.  When they tried to install Visual Studio 2015 on a developer desktop with the proxy configured correctly, they were still seeing install failures.

The install failures showing in the UI were all the Android SDK API Levels.  These are installed for Cordova and Xamarin cross platform mobile development scenarios.

Looking deeper into the logs we would find errors like the following:

DownloadManager Error: 0 : Install return code for product 'Android SDK Setup (API Level 22)' is Failure (The following package(s) were not downloaded: build-tools-22.0.1 android-22 . Please check your internet connection and try again.)
DownloadManager Information: 0 : Product Android SDK Setup (API Level 22) done install completed
DownloadManager Information: 0 : Increasing current install to 2
DownloadManager Information: 0 : Using execute progress heuristic for: Android SDK Setup (API Level 23)
DownloadManager Error: 0 : Install return code for product 'Android SDK Setup (API Level 23)' is Failure (The following package(s) were not downloaded: build-tools-23.0.1 android-23 . Please check your internet connection and try again.)
DownloadManager Information: 0 : Product Android SDK Setup (API Level 23) done install completed
DownloadManager Information: 0 : Increasing current install to 3
DownloadManager Information: 0 : Using execute progress heuristic for: Android SDK Setup (API Level 19 and 21)
DownloadManager Error: 0 : Install return code for product 'Android SDK Setup (API Level 19 and 21)' is Failure (The following package(s) were not downloaded: platform-tools extra-android-support extra-android-m2repository build-tools-19.1.0 build-tools-21.1.2 android-19 android-21 sys-img-armeabi-v7a-android-19 sys-img-x86-android-19 addon-google_apis-google-19 . Please check your internet connection and try again.)
DownloadManager Information: 0 : Product Android SDK Setup (API Level 19 and 21) done install completed

It turns out the root cause of this issue is that the Android SDK Package Manager doesn't use the machine-wide proxy settings when accessing the internet.  The Android SDK Package Manager stores its own proxy configuration instead.  Visual Studio is able to install the package manager, but the package manager can't install any of the API levels because it doesn't have the proxy information needed to access the internet through the secure firewall.

As it turns out, there's an easy fix for this issue!  We just need to help the Android SDK Package Manager with the proxy settings.  The package manager stores its proxy settings in a specific file in the user profile, so if we put this file with the correct settings in place before installing Visual Studio, everything should work.

Here are the steps to fix this issue!
  1. BEFORE installing Visual Studio, create a file at this location: %USERPROFILE%\.android\androidtool.cfg
  2. In the androidtool.cfg file that you just created, place the following contents.  Make sure to update “http.proxyPort” and “http.proxyHost” in the file!
### Settings for Android Tool
#Fri Jan 08 02:53:27 UTC 2016
http.proxyPort=8888
sdkman.enable.previews=false
http.proxyHost=127.0.0.1
sdkman.ask.adb.restart=false
sdkman.show.update.only=true
sdkman.force.http=false
sdkman.use.dl.cache=true
  3. Install Visual Studio normally

 

Anyway, I hope that’s useful!  Post a comment below if this helped!

OMS Assessments and Performance data not showing

MSDN Blogs - Thu, 09/22/2016 - 19:20

I signed up for preview access to the SCOM Assessment in OMS to criticise it (let's be honest). As a PFE I do health assessments for SCOM all the time, and I was curious to see what I can now get for free using a trial subscription for OMS.

After configuring it as required, and waiting the 4+ hours to get my results, I still saw no data in my Assessment.

Investigating further, I saw a bunch of Event ID 4506 entries in my Operations Manager log. On a SCOM agent that normally means there is too much outstanding data in the workflow and it gets dropped. My events looked like this:

Log Name:     Operations Manager
Source:       HealthService
Date:         8/25/2016 3:16:40 PM
Event ID:     4506
Task Category: None
Level:         Error
Keywords:     Classic
User:         N/A
Computer:     SCOM.contoso
Description: Data was dropped due to too much outstanding data in rule "Microsoft.IntelligencePack.LogManagement.Collection.PerformanceCounter.ID0EZABAAA" running for instance "SCOM.contoso" with id:"{CF532894-BDE3-434F-9AB3-7135F94CDD7F}" in management group "LAB2".

 

The agent for OMS works the same way as the SCOM agent (in fact, it is the same, but the OMS agent is always more up to date). The SCOM assessment, though, uses the management server's Health Service. The workflow dropped, as stated in the event, is "Microsoft.IntelligencePack.LogManagement.Collection.PerformanceCounter.ID0EZABAAA", which is an OMS rule in one of the OMS management packs.
To resolve this I increased the queues for the OMS management group:

HKLM\SYSTEM\CurrentControlSet\Services\HealthService\Parameters\Persistence Checkpoint Depth Maximum
New Value = 104857600
Type = DWORD

HKLM\SYSTEM\CurrentControlSet\Services\HealthService\Parameters\Management Groups\AdvisorMonitorV2\MaximumQueueSizeKb
New Value = 204800
Type = DWORD

Then restart the health service/Microsoft Monitoring Agent.
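If you prefer to script the change rather than edit the registry by hand, a hedged C# sketch along these lines applies the same two values and restarts the agent (run it elevated; the AdvisorMonitorV2 management group name follows the registry path above and may differ in your environment):

    using Microsoft.Win32;
    using System.ServiceProcess;   // add a reference to System.ServiceProcess.dll

    class OmsQueueFix
    {
        static void Main()
        {
            const string parameters = @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HealthService\Parameters";

            // Increase the persistence checkpoint depth.
            Registry.SetValue(parameters, "Persistence Checkpoint Depth Maximum", 104857600, RegistryValueKind.DWord);

            // Increase the queue size for the OMS (AdvisorMonitorV2) management group.
            Registry.SetValue(parameters + @"\Management Groups\AdvisorMonitorV2", "MaximumQueueSizeKb", 204800, RegistryValueKind.DWord);

            // Restart the Health Service (Microsoft Monitoring Agent) so the new values take effect.
            using (var svc = new ServiceController("HealthService"))
            {
                svc.Stop();
                svc.WaitForStatus(ServiceControllerStatus.Stopped);
                svc.Start();
                svc.WaitForStatus(ServiceControllerStatus.Running);
            }
        }
    }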

P.S.: The SCOM OMS Assessment is awesome!

How Olo powers ordering apps for more than 150 restaurant brands with mobile DevOps

MSDN Blogs - Thu, 09/22/2016 - 19:00

Olo's Xamarin apps for customers and staff help restaurant brands deliver faster, more accurate, and more personalized service to their customers, building customer loyalty and improving restaurant operations. With a business spanning more than 150 brands and 250,000 users, mobile DevOps is critical to Olo's mobile strategy, and Xamarin helps the team deliver the highest-quality experience to their customers around the world.

Today we invited Greg Shackles, Principal Engineer at Olo and a long-time member of the Xamarin community, to share his experience with mobile development and his advice for aspiring app developers.

Tell us a little about your company and your role. What is Olo and who are your customers?

I'm the Principal Engineer at Olo, which means I'm involved a little in everything we do.

Think of Olo as the digital bridge between restaurants and this fast-changing market. Whether it's takeout, delivery, or a simple phone order, we integrate directly with the brand's existing systems, including POS and loyalty management systems, to bring digital commerce into their existing operations.

We generally work with large multi-location brands, usually with more than 40 locations. If you have used the Five Guys app, called 1-800-CHIPOTLE, or ordered Wingstop online, then you have used Olo.

Tell us about your apps.

We publish fully branded iOS and Android apps for restaurants, so their customers can interact directly with the brand, customize their order to get exactly what they want, on whatever platform they want, at their own pace, while ensuring the restaurant receives the order correctly.

With the app, customers can easily find the location nearest to them, redeem loyalty rewards, browse the menu, view recent or favorite orders, order ahead so they can Skip The Line® when they arrive, or even have their order delivered. We also open up our framework so customers' own teams can build on our APIs, as we have seen with Sweetgreen.

 

What role does mobile play in your company's strategy? Has your mobile offering evolved over time?

From the very beginning, and throughout the past decade, mobile has been fundamental to Olo.

Noah Glass, our founder and CEO, got tired of standing in line every morning for his coffee. He saw mobile's potential to solve that problem and founded Olo. In 2005 we launched SMS ordering for regular mobile phones; remember, that was a few years before the first iPhone, and mobile remains the cornerstone of our platform to this day.

While we no longer support SMS ordering, our customers take and receive orders from many different platforms, including the web, iOS, Android, the phone, and even chat interfaces on Facebook and Twitter.

Why did you choose Xamarin?

Olo's app platform wasn't always built with Xamarin. It was originally built on another framework with JavaScript, which was good enough for a while but eventually left us stuck. We found ourselves fighting the framework's abstraction between us and the platforms, and sometimes it took weeks to finish a simple feature. That was very frustrating for our developers, but more importantly it meant we couldn't deliver the experience we wanted for our customers.

Any engineer knows that rewriting a platform is never easy, but choosing Xamarin to rebuild it was a smart choice. We were already a .NET and C# shop, so being able to bring some of our existing great engineers into the mobile world and make full use of their existing skills was a fantastic opportunity. With the new platform we can deliver a stable, fully native experience to our users while keeping our engineers happy.

When you hear "native app", what do you think of?

To me it means an app that fully uses the native UI framework underneath it, rather than an abstraction, delivering the experience users are used to and expect on that platform. Native apps need to feel native and perform like it. They need to be able to use what makes each platform unique and interesting, not just build to the lowest common denominator across all of them.

Xamarin gives us the best of both worlds: we can use all of the native APIs directly, and we can still bring in our own abstractions in whatever way we want.

 

How much code do you typically share across platforms?

There is a qualifier to that question that is often overlooked: how much of the code that should be shared is actually shared across platforms? Sharing code is great, but that doesn't mean you should avoid writing non-shared, platform-specific code if it makes your app better on that platform. It's our job to let our app developers write the code that gives our users the best experience.

We benefit from sharing a lot of code at several different layers. At the bottom, a core Portable Class Library contains the real substance of the application: services, database access, networking and API calls, application flow, the logic behind every screen, and so on. We have a large number of unit tests at this layer and can run them on each platform to verify compatibility.

On top of that, we implement our shared iOS and Android layers as shared projects, where we define all the platform-specific components used to drive the app on each platform, compatible with any particular brand.

For a specific brand's app, we have a lot of tooling on top of the build and scaffolding pipeline that lets us generate the individual projects.

Describe your development process – how do you use mobile DevOps / CI?

For a platform like ours that has to maintain so many different apps, automation and CI are extremely important. Every commit on every branch builds every project, runs the unit tests, and builds the brand-specific apps, making sure the whole pipeline stays solid.

Through HockeyApp we can deploy any brand's app version to any of our environments at the click of a button. We have automated processes for app store provisioning, packaging, and certificates, and we also use Xamarin.UITest to automatically generate screenshots.

 

How do you test your apps? Why does mobile quality matter?

Just like automation, testing is critical. We take great pride in delivering apps that are not only native and high-performing but also extremely stable. Unlike the web, where you can ship a fix quickly and users just refresh to see it, the bar for shipping an app update is higher, so the more issues you can catch before the app reaches users' hands, the better.

We get a lot of wins from unit tests. Because so much of our app is driven from that layer, we can write tests that run quickly and exercise specific parts of the app's behaviour. Without going through the UI, we can test everything from network calls to how the app flows from one screen to the next.

We also love using Xamarin.UITest and Xamarin Test Cloud to make sure everything works correctly. Unit tests are great, but what matters most is the user experience.

We have gone through some big design changes in recent months, and being able to run our apps in Xamarin Test Cloud to see how different devices handle (or don't handle) the less glamorous details has been valuable. We can quickly and easily catch issues on arbitrary Android devices that we might otherwise never have found.

How do you use HockeyApp?

We use HockeyApp purely for beta distribution, both internally and externally. For analytics we use quite a few different technologies, from tools like Google Analytics and Google Tag Manager to tools we designed ourselves to monitor how our apps are performing.

My session at Xamarin Evolve 2016 covered how we monitor app performance on the client in real time. One interesting thing about a platform like Olo is that we can see how a wide range of brands and users interact with our apps, and that gives us an advantage in learning and continuously improving our processes and the overall user experience.

What do your customers and their users think of your apps?

We are always tracking the feedback customers give us and running user testing. Overall the feedback has been very positive. Every time someone leaves a review of one of our apps, good or bad, it is sent to the app team's Slack channel for everyone to see.

It is exciting when a brand launches an app with us and their customers go crazy for it. If you want to know what I mean, search Twitter for "Wingstop app".

And on a Tuesday, God said, “let there be a #Portillos app!” #Dope

— Ricky Orozco (@RealRickyOrozco) August 9, 2016

 

What advice do you have for developers just getting started in mobile development? Any great resources?

Between blogs, video services like Pluralsight, and the documentation on the Xamarin website, there is a huge amount of content available. That is great, but for a developer who is just getting started it can feel a bit overwhelming.

The main advice I give new mobile developers is to start small. Don't begin by throwing a pile of abstractions between you and the platform; things like big MVVM frameworks add a lot of value, but also a lot of magic. There is no denying that platforms like iOS and Android, while they have a lot in common, are ultimately different. Spend some time learning the basics first, and then dig into the abstraction layers once you understand what they are actually doing.

You can learn more about how our customers are transforming customer relationships and driving internal productivity on the Xamarin Customers webpage and Microsoft's Mobile DevOps Solution.

This article is translated from How Olo Powers 150+ Restaurant Ordering Apps with Mobile DevOps.

If you have any questions about the technologies and products above, we are happy to help! Please contact the Microsoft Taiwan developer tools service desk – MSDNTW@microsoft.com / 02-3725-3888 #4922

Let's try HDInsight, the Hadoop distribution on Microsoft Azure! (2)

MSDN Blogs - Thu, 09/22/2016 - 18:00

Microsoft Japan Data Platform Tech Sales Team

高木 英朗

 

In the previous article we introduced an overview of HDInsight and how to deploy it from the Azure portal. This time we will look at how to access HDInsight and how to run Hive and Spark.

Let's try things out using the environment we deployed in the previous article.


Accessing Ambari
Once deployment completes you can access the management blade, where you can manage the cluster, for example changing its scale or deleting the instance. Opening the cluster dashboards menu from [Quick links] on the management blade shows links for managing and operating HDInsight. Cluster management for HDInsight uses Apache Ambari; click [HDInsight cluster dashboard] to access Ambari.



Log in with the username and password you set in the cluster credentials.



After logging in, the Ambari dashboard is displayed. From here you can monitor the cluster, change its configuration, run Hive queries, manage the YARN scheduler, and more.



Let's also try access over SSH. Go back to the HDInsight blade in the Azure portal and open the [SSH (Secure Shell)] menu to see the SSH endpoint. Log in to this endpoint with the credentials you configured (password or public key authentication).



In this example we log in with PuTTY. The host name contains "hn0", which shows that we are connected to the head node (HeadNode0). You can also see that the current version (HDI 3.4) runs on Ubuntu 14.04.5 LTS.



Note that, at the time of writing, even if you deploy to a Japanese region the system locale, time zone, and so on are not configured for Japan.

    Locale



    Time zone




HDInsight architecture
When you deploy HDInsight, servers with various roles, networking, databases and so on are provisioned. The figure below shows the (rough) architecture when a Spark cluster is deployed inside a virtual network.

*This is the architecture at the time this article was posted; it may change in the future.

  • Head nodes (Head0, Head1)

  • The HDFS NameNode, YARN ResourceManager, Ambari Server, HiveServer2 and so on run here. Some services, such as HDFS and YARN, are always "active" on one head node and "standby" on the other; services such as HiveServer2 are "active" on both head nodes.


  • Worker nodes (Worker)

  • The nodes that run jobs. The DataNode and NodeManager are placed here.


  • Gateway

  • Acts as a reverse proxy when the head nodes are accessed over HTTPS, and also handles authentication of the admin account.


  • ZooKeeper

  • Configured here to support the active / standby arrangement of the head nodes.


  • Azure SQL Database

  • Used by default as the metastore for Hive and Oozie; this can be changed in the configuration.


  • Azure Storage

  • HDInsight uses Azure Storage (Blob) as its storage. Azure Storage combines large capacity, low cost and high reliability, and also has advantages for data exchange with other services and for performance. You can use HDFS by specifying it explicitly, or switch the main storage to HDFS by changing the configuration; HDFS is built on the worker nodes' local disks.



Accessing each node
Every node inside the virtual network except the Gateway can be reached over SSH. To discover the internal hosts, the Ambari REST API is convenient, as shown below.

curl --user '<admin username>:<password>' https://<clustername>.azurehdinsight.net/api/v1/hosts

The request returns JSON that includes the FQDN of each node in the cluster. The private IP address scheme depends on the subnet settings of the virtual network.
Below is part of an example response.

{ "href": "http://10.254.1.19:8080/api/v1/hosts", "items": [ { "href": "http://10.254.1.19:8080/api/v1/hosts/hn0-hdi160.dom4e4qycx5e1ii2ryvsfy0hnc.lx.internal.cloudapp.net", "Hosts": { "cluster_name": "hdi1609", "host_name": "hn0-hdi160.dom4e4qycx5e1ii2ryvsfy0hnc.lx.internal.cloudapp.net" } }, (後略)

There is a lot of information, which makes it hard to find what you need, but jq makes handling the JSON structure easy.
Below is an example using jq. The host name prefix hn indicates a head node, wn a worker node, and zk a ZooKeeper node.

hdiopadmin@hn0-hdi160:~$ curl -s --user '<admin username>:<password>' https://<clustername>.azurehdinsight.net/api/v1/hosts |./jq '.items[].Hosts.host_name' "hn0-hdi160.dom4e4qycx5e1ii2ryvsfy0hnc.lx.internal.cloudapp.net" "hn1-hdi160.dom4e4qycx5e1ii2ryvsfy0hnc.lx.internal.cloudapp.net" "wn0-hdi160.dom4e4qycx5e1ii2ryvsfy0hnc.lx.internal.cloudapp.net" "wn1-hdi160.dom4e4qycx5e1ii2ryvsfy0hnc.lx.internal.cloudapp.net" "wn2-hdi160.dom4e4qycx5e1ii2ryvsfy0hnc.lx.internal.cloudapp.net" "wn3-hdi160.dom4e4qycx5e1ii2ryvsfy0hnc.lx.internal.cloudapp.net" "zk0-hdi160.dom4e4qycx5e1ii2ryvsfy0hnc.lx.internal.cloudapp.net" "zk2-hdi160.dom4e4qycx5e1ii2ryvsfy0hnc.lx.internal.cloudapp.net" "zk4-hdi160.dom4e4qycx5e1ii2ryvsfy0hnc.lx.internal.cloudapp.net"

*The Gateway, the metastores and so on are not managed by Ambari, so they do not appear here.
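
If you prefer a script to curl and jq, a rough Python sketch along the following lines (the cluster name and credentials are placeholders) retrieves the same host list through the Ambari REST API:

import requests

CLUSTER = "<clustername>"                  # placeholder: your HDInsight cluster name
AUTH = ("<admin username>", "<password>")  # placeholder: cluster login credentials

# Same endpoint as the curl example above, exposed through the HDInsight gateway.
resp = requests.get(
    "https://{0}.azurehdinsight.net/api/v1/hosts".format(CLUSTER),
    auth=AUTH,
)
resp.raise_for_status()

# Each item carries the internal FQDN; the hn/wn/zk prefixes identify the node role.
for item in resp.json()["items"]:
    print(item["Hosts"]["host_name"])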


These nodes have only private IP addresses and internal FQDNs within the virtual network, so they cannot be reached directly from outside; you need to log in to HeadNode0 over SSH first and access them from there. The same SSH user and password are deployed to every node, but if you use public key authentication you also need key authentication on each node; with PuTTY it works well to forward key authentication using Pageant.
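
If you would rather script this hop than use PuTTY, a minimal Python sketch with paramiko (all host names and credentials below are placeholders, and password authentication is assumed) tunnels through the head node to reach an internal worker node:

import paramiko

# Connect to the cluster's public SSH endpoint (the head node).
jump = paramiko.SSHClient()
jump.set_missing_host_key_policy(paramiko.AutoAddPolicy())
jump.connect("<clustername>-ssh.azurehdinsight.net", username="<ssh user>", password="<password>")

# Open a direct-tcpip channel from the head node to an internal worker node's SSH port.
channel = jump.get_transport().open_channel(
    "direct-tcpip", ("<worker node internal FQDN>", 22), ("127.0.0.1", 0))

# SSH to the worker node over that channel and run a command there.
target = paramiko.SSHClient()
target.set_missing_host_key_policy(paramiko.AutoAddPolicy())
target.connect("<worker node internal FQDN>", username="<ssh user>", password="<password>", sock=channel)
stdin, stdout, stderr = target.exec_command("hostname -f")
print(stdout.read())
target.close()
jump.close()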


Running Hive
Let's give Hive a quick try. From the SSH session on HeadNode0, run hive.

hdiopadmin@hn0-hdi160:~$ hive WARNING: Use "yarn jar" to launch YARN applications. Logging initialized using configuration in file:/etc/hive/2.4.2.4-5/0/hive-log4j.properties hive> show databases; OK default Time taken: 1.233 seconds, Fetched: 1 row(s)

Check the tables.

hive> use default; OK Time taken: 0.114 seconds hive> show tables; OK hivesampletable Time taken: 0.108 seconds, Fetched: 1 row(s)

A table named hivesampletable was found, so let's take a look at it.

hive> desc hivesampletable; OK clientid string querytime string market string deviceplatform string devicemake string devicemodel string state string country string querydwelltime double sessionid bigint sessionpagevieworder bigint Time taken: 0.44 seconds, Fetched: 11 row(s)

So HDInsight comes with a sample table already created. Let's use it to try a query.

hive> SELECT country, count(country) AS cnt FROM hivesampletable GROUP BY country; Query ID = hdiopadmin_20160918073336_a1c6deb3-22dc-4d1d-9672-4664ba956433 Total jobs = 1 Launching Job 1 out of 1 Status: Running (Executing on YARN cluster with App id application_1474176033094_0004) -------------------------------------------------------------------------------- VERTICES STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED -------------------------------------------------------------------------------- Map 1 .......... SUCCEEDED 1 1 0 0 0 0 Reducer 2 ...... SUCCEEDED 1 1 0 0 0 0 -------------------------------------------------------------------------------- VERTICES: 02/02 [==========================>>] 100% ELAPSED TIME: 11.07 s -------------------------------------------------------------------------------- Status: DAG finished successfully in 11.07 seconds METHOD DURATION(ms) parse 28 semanticAnalyze 1,465 TezBuildDag 542 TezSubmitToRunningDag 97 TotalPrepTime 3,350 VERTICES TOTAL_TASKS FAILED_ATTEMPTS KILLED_TASKS DURATION_SECONDS CPU_TIME_MILLIS GC_TIME_MILLIS INPUT_RECORDS OUTPUT_RECORDS Map 1 1 0 0 2.45 3,230 29 59,793 88 Reducer 2 1 0 0 1.43 2,170 0 88 0 OK Antigua And Barbuda 11 Argentina 3 Australia 73 Austria 9 Bahamas 25 Bangladesh 2 (中略) Viet Nam 7 Virgin Islands (U.S.) 1 Time taken: 14.766 seconds, Fetched: 88 row(s)

As you can see, Hive is ready to use as soon as deployment finishes, which is very convenient. In addition, Tez is used as the Hive execution engine, so queries run faster than with plain Hive. You can check the Hive configuration in Ambari or in /etc/hive/conf/hive-site.xml.

The relevant entry in /etc/hive/conf/hive-site.xml:

<property> <name>hive.execution.engine</name> <value>tez</value> </property>

Running Spark SQL
We deployed a Spark cluster in this walkthrough, so let's finish by trying Spark briefly. You can run it from spark-shell, but here we use the Jupyter Notebook included with HDInsight.

Jupyter Notebook can be reached from [Cluster dashboard] on the HDInsight blade in the Azure portal.
The top page looks like the figure below.



To create a new notebook, choose [PySpark] (for Python) or [Spark] (for Scala) from [New] at the top right, according to the language you want to use.
*Since we only run Spark SQL here, either one is fine.


After creating the notebook, first check the Hive tables. Place the cursor in a cell and specify %%sql as shown below to write Spark SQL, then write SHOW TABLES. When you have finished typing, press [Shift] + [Enter]. The cell runs, and after a moment the results appear, showing that the Hive table we saw earlier is available.



Enter the following in the next cell and press [Shift] + [Enter]; the Spark SQL runs and the results are displayed.
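
The screenshots from the original post are not reproduced in this feed. As a rough sketch only, and assuming the PySpark kernel (which exposes a ready-made sqlContext in HDInsight notebooks), an equivalent cell written in plain Python rather than with the %%sql magic might look like this, reusing the query from the Hive section above:

# Run the same aggregation as the earlier Hive example through Spark SQL.
# 'sqlContext' is provided by the HDInsight PySpark notebook kernel.
df = sqlContext.sql(
    "SELECT country, count(country) AS cnt "
    "FROM hivesampletable GROUP BY country"
)
df.show(10)   # print the first rows; the notebook can also chart the result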



If you change the Type in the displayed results, Jupyter Notebook can render a simple chart from them. As you can see, the Spark environment is also ready to use right away.



When you have finished your analysis, choose [Close and Halt] from the [File] menu to end the session.



That concludes this introduction to HDInsight, the Hadoop service on Microsoft Azure.
If you are interested in Hadoop or want to build an analytics platform in the cloud, please give it a try.


Related articles

Understanding the requirements for SeSecurityPrivilege to SQL setup account on remote fileserver when default backup folder is set to UNC path

MSDN Blogs - Thu, 09/22/2016 - 17:26

One of the actions of SQL Server setup is to configure appropriate permissions on the binaries, data, log, tempdb and backup folders so that, post-installation, the SQL Service account has all the required permissions to read, write and execute from these folders without any errors. In order for SQL Server setup to be able to assign appropriate permissions to the SQL Service account, it requires SeSecurityPrivilege on the server where these folders are created. One of the prerequisites for running SQL Server setup is that the setup account should be an administrator on the server, and by default in Windows administrators are granted SeSecurityPrivilege unless overridden by group policy. Hence, when installing SQL Server on a local server with the setup account as a local administrator, DBAs may never have felt the need to assign this privilege explicitly to the setup account. If the setup account is missing this privilege, you are likely to hit one of the scenarios discussed in KB 2000257.

One of the common scenarios when installing SQL Server is to set the default backup directory to an SMB fileshare, so that the backup files are stored offsite as per the recommended practice. In this scenario, if the admins fail to grant SeSecurityPrivilege on the remote fileserver to the setup account, the SQL Server setup validation will fail with the following error, not allowing you to proceed with the installation.

SQL Server setup account does not have the SeSecurityPrivilege privilege on the specified file server in the path <<network share>>. This privilege is needed in folder security setting action of SQL Server setup program. To grant this privilege, use the Local Security Policy console on this file server to add SQL Server setup account to “Manage auditing and security log” policy. This setting is available in the “User Rights Assignments” section under Local Policies in the Local Security Policy console.

This is by-design behavior of SQL setup, to ensure setup doesn't fail later during the installation while trying to set appropriate permissions on the SMB fileshare. These prerequisites are also documented in the MSDN article here, but that article primarily talks about Data and Log folders on an SMB fileshare, so the same requirement for the default backup directory on the fileshare may not be so apparent.

My organization doesn't allow SeSecurityPrivilege to be granted on the remote fileserver, and I have already assigned FULL CONTROL permissions on the SMB fileshare to the setup account and the SQL Service account, but without SeSecurityPrivilege, SQL setup doesn't allow me to proceed with the installation. How can I overcome this?

In this scenario, you can set the default backup folder to a local directory during setup, which allows the SQL Server installation to proceed as desired. Post-installation, the default backup directory can be changed using SSMS or PowerShell. Unlike setup, SSMS and PowerShell do not configure permissions on the backup folder or perform any validation, so the default backup directory can be changed without requiring SeSecurityPrivilege on the fileserver. However, in this case the onus of granting the SQL Service account FULL CONTROL on the default backup directory on the SMB fileshare lies with the admins. If the DBA fails to assign these permissions, then later, when a backup is taken to the default backup location, it will fail with an 'access is denied' error.

Hope this clarifies the requirements for SeSecurityPrivilege on the remote fileserver for the SQL setup account, and the options to work around it.

Parikshit Savjani
Senior Program Manager (@talktosavjani)

Migrating Azure Services between different Azure Subscriptions

MSDN Blogs - Thu, 09/22/2016 - 13:56

 

One of the key issues students and academics face during projects or assignments is how to move resources and data between Azure subscriptions.

The link https://azure.microsoft.com/en-gb/documentation/articles/resource-group-move-resources/ has a full walkthrough of the various scenarios.

The migration feature is changing at a fast rate (on the order of weeks), so what you can move depends on when you try; new features are added to Azure approximately every 3 weeks.

At present (22/09/2016) not every service can be moved, but the plan is for all Azure services eventually to support one-click migration.

The services that currently do not support moving are:

  • Application Gateway
  • Application Insights
  • Express Route
  • Recovery Services vault – also do not move the Compute, Network, and Storage resources associated with the Recovery Services vault, see Recovery Services limitations.
  • Virtual Machines Scale Sets
  • Virtual Networks (classic) – see Classic deployment limitations
  • VPN Gateway

Simply put, open the resource group containing the resources you want to move, then go to the Edit subscription link (a pen icon – No. 1 below), where you can change the subscription.

Within the resource group it tells you what you CAN and CANNOT move (Nos. 2 and 3 in the image below).

 

Scenarios for Big Data–Capture, Curate and Consume

MSDN Blogs - Thu, 09/22/2016 - 13:39

Step 1.  Ingest Tweets with python scripts on the Data Science Virtual Machine
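
The original post does not include the script itself, so the following is only a sketch of what step 1 might look like; it assumes the tweepy library and placeholder Twitter API credentials, and writes one JSON document per line to a local file, ready to be copied to Blob storage in step 2.

import json
import tweepy

# Placeholder credentials from a Twitter developer application (assumption: tweepy is used).
auth = tweepy.OAuthHandler("<consumer key>", "<consumer secret>")
auth.set_access_token("<access token>", "<access token secret>")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Collect a batch of tweets for a search term and store them as JSON lines.
with open("tweets.json", "w") as out:
    for tweet in tweepy.Cursor(api.search, q="#Azure", lang="en").items(500):
        out.write(json.dumps(tweet._json) + "\n")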

Step 2. Move the Tweets into a blob in Azure Storage using Azure Storage Explorer

Step 3. Run Azure Data Lake Copy to move data from Blob storage to Azure Data Lake

The easiest way to create a pipeline that copies data to/from Azure Data Lake Store is to use the Copy data wizard. See Tutorial: Create a pipeline using Copy Wizard for a quick walkthrough on creating a pipeline using the Copy data wizard.

The following examples provide sample JSON definitions that you can use to create a pipeline by using the Azure portal, Visual Studio, or Azure PowerShell. They show how to copy data to and from Azure Data Lake Store and Azure Blob Storage. However, data can be copied directly from any of the supported sources to any of the supported sinks by using the Copy Activity in Azure Data Factory.

Sample: Copy data from Azure Blob to Azure Data Lake Store

The following sample shows:

  1. A linked service of type AzureStorage.
  2. A linked service of type AzureDataLakeStore.
  3. An input dataset of type AzureBlob.
  4. An output dataset of type AzureDataLakeStore.
  5. A pipeline with a Copy activity that uses BlobSource and AzureDataLakeStoreSink.

The sample copies time-series data from Azure Blob Storage to Azure Data Lake Store every hour. The JSON properties used in these samples are described in sections following the samples.

Azure Storage linked service:

{ "name": "StorageLinkedService", "properties": { "type": "AzureStorage", "typeProperties": { "connectionString": "DefaultEndpointsProtocol=https;AccountName=<accountname>;AccountKey=<accountkey>" } } } Step 4. Run a U- SQL Script and create outputs in Azure Data Lake Analytics

You must create an Azure Data Lake Analytics account before creating a pipeline with a Data Lake Analytics U-SQL Activity. To learn about Azure Data Lake Analytics, please see Get started with Azure Data Lake Analytics.

Please review the Build your first pipeline tutorial for detailed steps to create a data factory, linked services, datasets, and a pipeline. Use the JSON snippets with Data Factory Editor or Visual Studio or Azure PowerShell to create the Data Factory entities. For details on running U-SQL see https://azure.microsoft.com/en-us/documentation/articles/data-factory-usql-activity/ 
Step 5. Produce a Power BI dashboard on the data contained in the Azure Data Lake

 

Creating a custom Virtual Machine for deployment on Azure

MSDN Blogs - Thu, 09/22/2016 - 13:32

 

I was speaking at a university this week that wanted to deploy a specific Linux machine for its labs, based on a particular build of Red Hat Linux. The university already had a custom build for its lab PCs but wanted to make this available as an Azure VM.

The team already had PCs on campus with the base image. The following is a short guide to using the Azure ARM CLI to take an existing image to Azure; a comprehensive guide is available at https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-cli-deploy-templates/

The following is a quick walkthrough of how to get started building and using a Custom VM for Azure

Alternatively, select an existing custom VM at https://azure.microsoft.com/en-us/marketplace/virtual-machines

Step 1 Prepare VM image for Azure

It is very important to follow the pre-configuration steps here for Hyper-V or VMware to prepare the image.

You can also convert a VHDX to VHD using a tool here.

Step 2 Install Azure CLI if you haven’t already.  The Azure CLI bits are available here.

Step 3 Open a command prompt, terminal, or Bash shell.

Step 4 Connect to Azure via CLI. 

Use the command azure login and enter your Azure AD credentials. It should return an 'OK' if successful.

Run azure help to see the CLI version and available commands; see the version requirements at https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-cli-deploy-templates/

For a full list of the CLI ARM commands see here.

Step 5 Switch to ARM mode: azure config mode arm

View a list of available Azure CLI ARM commands here

Step 6 Prepare Azure for the Red Hat VHD upload

Create a Resource Group if needed

azure group create <resourcegroupname> <location>

Create a storage account if needed

azure storage account create -l <location> -g <resourcegroupname> --type <storagetype> <storageaccountname>

Get your storage URI for the VHD upload

Step 7 Upload the Red Hat VHD to Azure

There are a few ways to accomplish this via Azure CLI or via Storage Explorer. Here is how to upload the VHD via the Azure CLI:

azure storage blob upload "<local path to RHEL vhd>" <containername> -t page -a <storage account name> -k <storage account key>

For more documentation on uploading via CLI see here.

Here is a VHD tool for the CLI that you can also leverage if your on-premises VM has dynamic disks; it converts them to fixed-size disks, which Azure VM uploads require.
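
If you prefer to script the upload in Python rather than use the CLI, a rough sketch with the azure-storage package (account, container, and path names are placeholders) uploads the fixed-size VHD as a page blob, which Azure requires for VM disks:

from azure.storage.blob import PageBlobService

# Placeholder storage account details; the container must already exist.
service = PageBlobService(account_name="<storageaccountname>", account_key="<storage account key>")

# VHDs must be uploaded as page blobs, and the file must be a fixed-size VHD.
service.create_blob_from_path(
    container_name="<containername>",
    blob_name="rhel-custom.vhd",
    file_path="/path/to/rhel-custom.vhd",
)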

Step 8 Create a new Azure VM based on the Red Hat VHD image

azure group deployment create <resourcegroupname> <deploymentname> --template-file <path to local JSON file>

Here is a sample JSON template I used:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "customVmName": {
      "type": "string",
      "metadata": {
        "description": "This is the name of the your Red Hat VM"
      }
    },
    "userImageStorageAccountName": {
      "type": "string",
      "metadata": {
        "description": "This is the name of the your storage account of the RHEL vhd location"
      }
    },
    "osDiskVhdUri": {
      "type": "string",
      "metadata": {
        "description": "Uri path to the uploaded Red Hat VHD image"
      }
    },
    "dnsLabelPrefix": {
      "type": "string",
      "metadata": {
        "description": "DNS Label for the Public IP. Must be lowercase. It should match with the following regular expression: ^[a-z][a-z0-9-]{1,61}[a-z0-9]$ or it will raise an error."
      }
    },
    "adminUserName": {
      "type": "string",
      "metadata": {
        "description": "User Name for the Red Hat Virtual Machine"
      }
    },
    "adminPassword": {
      "type": "securestring",
      "metadata": {
        "description": "Password for the Red Hat Virtual Machine"
      }
    },
    "osType": {
      "type": "string",
      "allowedValues": [
        "Linux"
      ],
      "metadata": {
        "description": "Red Hat OS"
      }
    },
    "vmSize": {
      "type": "string",
      "metadata": {
        "description": "This is the size of your VM e.g. Standard_A9"
      }
    },
    "newOrExistingVnet": {
      "allowedValues": [ "new", "existing" ],
      "type": "string",
      "metadata": {
        "description": "Select if this template needs a new VNet or will reference an existing VNet"
      }
    },
    "newOrExistingVnetName": {
      "type": "string",
      "defaultValue": "",
      "metadata": {
        "description": "New or Existing VNet Name"
      }
    },
    "newOrExistingSubnetName": {
      "type": "string",
      "defaultValue": "Subnet1",
      "metadata": {
        "description": "Subnet Name"
      }
    }
  },
  "variables": {
    "publicIPAddressName": "userImagePublicIP",
    "vmName": "[parameters('customVmName')]",
    "nicName": "[concat(parameters('customVmName'),'Nic')]",
    "publicIPAddressType": "Dynamic",
    "apiVersion": "2015-06-15",
    "templatelink": "[concat('https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/101-vm-from-user-image/',parameters('newOrExistingVnet'),'vnet.json')]"
  },
  "resources": [
    {
      "apiVersion": "2015-01-01",
      "name": "vnet-template",
      "type": "Microsoft.Resources/deployments",
      "properties": {
        "mode": "incremental",
        "templateLink": {
          "uri": "[variables('templatelink')]",
          "contentVersion": "1.0.0.0"
        },
        "parameters": {
          "virtualNetworkName": {
            "value": "[parameters('newOrExistingVnetName')]"
          },
          "subnetName": {
            "value": "[parameters('newOrExistingSubnetName')]"
          }
        }
      }
    },
    {
      "apiVersion": "[variables('apiVersion')]",
      "type": "Microsoft.Network/publicIPAddresses",
      "name": "[variables('publicIPAddressName')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "publicIPAllocationMethod": "[variables('publicIPAddressType')]",
        "dnsSettings": {
          "domainNameLabel": "[parameters('dnsLabelPrefix')]"
        }
      }
    },
    {
      "apiVersion": "2016-03-30",
      "type": "Microsoft.Network/networkInterfaces",
      "name": "[variables('nicName')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]",
        "Microsoft.Resources/deployments/vnet-template"
      ],
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "publicIPAddress": {
                "id": "[resourceId('Microsoft.Network/publicIPAddresses',variables('publicIPAddressName'))]"
              },
              "subnet": {
                "id": "[reference('vnet-template').outputs.subnet1Ref.value]"
              }
            }
          }
        ]
      }
    },
    {
      "apiVersion": "[variables('apiVersion')]",
      "type": "Microsoft.Compute/virtualMachines",
      "name": "[variables('vmName')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
      ],
      "properties": {
        "hardwareProfile": {
          "vmSize": "[parameters('vmSize')]"
        },
        "osProfile": {
          "computerName": "[variables('vmName')]",
          "adminUsername": "[parameters('adminUsername')]",
          "adminPassword": "[parameters('adminPassword')]"
        },
        "storageProfile": {
          "osDisk": {
            "name": "[concat(variables('vmName'),'-osDisk')]",
            "osType": "[parameters('osType')]",
            "caching": "ReadWrite",
            "createOption": "FromImage",
            "image": {
              "uri": "[parameters('osDiskVhdUri')]"
            },
            "vhd": {
              "uri": "[concat(reference(concat('Microsoft.Storage/storageAccounts/', parameters('userImageStorageAccountName')), variables('apiVersion')).primaryEndpoints.blob, 'vhds/',variables('vmName'), uniquestring(resourceGroup().id), 'osDisk.vhd')]"
            }
          }
        },
        "networkProfile": {
          "networkInterfaces": [
            {
              "id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nicName'))]"
            }
          ]
        },
        "diagnosticsProfile": {
          "bootDiagnostics": {
            "enabled": "true",
            "storageUri": "[concat(reference(concat('Microsoft.Storage/storageAccounts/', parameters('userImageStorageAccountName')), variables('apiVersion')).primaryEndpoints.blob)]"
          }
        }
      }
    }
  ]
}

Sample of running the script with the JSON above:

For more on deploying JSON templates via CLI see here.

Step 9 – Verify the RHEL image works properly

Use SSH to connect to the RHEL VM running in Azure for verification:

New Office training courses from LinkedIn Learning

MS Access Blog - Thu, 09/22/2016 - 13:00

Today’s post was written by Peter Loforte, general manager for the Office Modern User Assistance and Localization team.

In our continuing effort to provide better help to our customers, we’ve partnered with LinkedIn to provide an array of new LinkedIn Learning training courses to help you get more out of Office.

The new courses are available today and focus on helping you get the most out of Outlook 2016 and Excel 2016. Learn how to use Outlook 2016 to set up an email account; send, receive and organize messages; add a signature; manage contacts and calendars; manage meetings; and collaborate and share with others. Get started with Excel 2016 by learning how to enter and organize data, create formulas and functions, build charts and PivotTables, and use other time-saving features.

These courses are freely available and can be found alongside the hundreds of courses and videos in the Office Training Center.

If you're interested in exploring the breadth of content from LinkedIn Learning (formerly Lynda.com), you can get one month of free, special access to LinkedIn Learning, which includes thousands of courses and videos.

Here’s a peek at some of the new Outlook courses in the Office Training Center:

As always, we’d love to hear your feedback in the comment section below.

—Peter Loforte

The post New Office training courses from LinkedIn Learning appeared first on Office Blogs.
