Feed aggregator

System Center Configuration Manager announcements at the Ignite conference

MSDN Blogs - Mon, 05/04/2015 - 09:16

Microsoft's Ignite conference starts today, one week after the Build conference. During Brad Anderson's session, numerous announcements were made about System Center Configuration Manager (ConfigMgr) and relayed on the ConfigMgr team blog.

A long time ago, Microsoft announced it would stop using service packs as a vehicle for bringing new features to a product. That stance has already suffered several exceptions in the recent past, and I won't complain about another one.

ConfigMgr is currently supported in two versions: System Center 2012 Configuration Manager Service Pack 1 and System Center 2012 R2 Configuration Manager. With today's announcements, each version receives a new service pack: SP2 for ConfigMgr 2012 and SP1 for ConfigMgr 2012 R2, expected to ship next week.

The point of shipping a service pack is to allow a quick update of an infrastructure already in use without revisiting earlier choices. At most, you might reconsider some architecture choices given the performance gains or, if you are still designing your new infrastructure, you might look into doing without a Central Administration Site (CAS). The limit on the number of systems managed by a hierarchy with a CAS rises to 600,000, while a hierarchy based on a standalone primary site now supports up to 150,000 systems. It is also now possible to use SQL Server 2014 as the database engine for primary sites.

A few highlights of this service pack:

The main benefit of this service pack is support for Windows 10. For operating system deployment, the Windows 8.1 Assessment and Deployment Kit (ADK) is still a prerequisite, but the Technical Preview of the Windows 10 ADK is also supported. For software deployment, this service pack adds support for Windows 10 apps.

More generally (not tied only to Windows 10), driver management improves both at import time and in day-to-day handling. For example, you can hide unsigned drivers or be warned when importing a 32-bit driver into a 64-bit image. During operating system installation, logging is improved to better track the operations performed.

In response to various past incidents, you can now be warned when attempting to deploy an operating system to the All Systems collection.

Task sequences will support deploying updates that require multiple restarts (http://support.microsoft.com/kb/2894518).

Task sequence media are supported up to 32 GB.

For the integration of Microsoft Intune into ConfigMgr, capabilities supported by Microsoft Intune will become accessible quickly. It will thus be possible to restrict access to Exchange, SharePoint Online, or OneDrive to managed devices, or even to devices compliant with a given set of criteria. This service pack also adds support for the Apple Device Enrollment Program (DEP).

Among the evening's other announcements, a future version of ConfigMgr will be available for the release of Windows 10. A "Technical Preview" version is available today for testing and for planning the arrival of that version.

That version will include the new features of the service packs above, but its other novelty is the ability to run all or part of the ConfigMgr infrastructure on virtual machines hosted on Microsoft Azure, much as you could already do with virtual machines under Hyper-V.

The Windows 10 in-place upgrade model, which keeps the applications already installed, is supported by this ConfigMgr Technical Preview.

You will also be able to take advantage of the native MDM (Mobile Device Management) features of Windows 10 to manage, via the integration of Microsoft Intune into ConfigMgr, computers that cannot connect to the Internet.

//build Tour - New York

MSDN Blogs - Mon, 05/04/2015 - 09:12

If you are anything like me, you are probably very excited after seeing all of the announcements at Microsoft's Build conference last week.  So many BIG announcements: the ability to port iOS and Android apps to Windows while keeping the same native languages, the ability to extend your phone to a desktop with Continuum, Windows universal apps that span phone, tablet, PC, and Xbox... just to name a few.  Again, tons of awesome stuff.  You can find a recap of some of the big announcements from a colleague, Joe Healy, here.

While Joe did a great job providing links to summaries of the talks from Build, wouldn't it be nice if you could see the highlights in person?  What if someone could walk you through all of the big announcements?  What if you could view all of the new innovations just feet away?  Well, guess what... YOU CAN.  Microsoft is coming to New York City on May 18th for an all-day event just for you!  You can find the registration page here.

In addition to New York City, Microsoft will be visiting many more cities around the world.  You can find the list and link below!  Hope to see you in NYC or another city soon!

 

https://www.build15.com/?CR_CC=200617198

  • London, England (May 18)
  • New York, USA (May 18)
  • Atlanta, USA (May 20)
  • Sao Paulo, Brazil (May 21)
  • Berlin, Germany (May 22)
  • Mexico City, Mexico (May 27)
  • Singapore, Singapore (May 28)
  • Austin, USA (May 29)
  • Auckland, New Zealand (May 30)
  • Sydney, Australia (Jun 1)
  • Seoul, Korea (Jun 1)
  • Paris, France (Jun 1)
  • Shanghai, China (Jun 3)
  • Amsterdam, Netherlands (Jun 3)
  • Beijing, China (Jun 5)
  • Prague, Czech Republic (Jun 5)
  • Mumbai, India (Jun 8)
  • Milan, Italy (Jun 10)
  • Bangalore, India (Jun 10)
  • Chicago, USA (Jun 10)
  • Johannesburg, South Africa (Jun 12)
  • Toronto, Canada (Jun 12)
  • Los Angeles, USA (Jun 15)

ebook deal of the week: Windows Server 2012 R2 Inside Out Volume 1: Configuration, Storage, & Essentials

MSDN Blogs - Mon, 05/04/2015 - 09:00

List price: $47.99  
Sale price: $19.99
You save 58%

Buy

This offer expires Sunday, May 10 at 7:00 AM GMT.

This supremely organized reference packs hundreds of timesaving solutions, troubleshooting tips, and workarounds for Windows Server 2012 R2 - with a focus on configuration, storage, and essential administrative tasks. Learn more

Terms & conditions

Each week, on Sunday at 12:01 AM PST / 7:01 AM GMT, a new eBook is offered for a one-week period. Check back each week for a new deal.

The products offered as our eBook Deal of the Week are not eligible for any other discounts. The Deal of the Week promotional price cannot be combined with other offers.

PowerShell to Deploy an SSA Across Multiple Servers (v2.0)

MSDN Blogs - Mon, 05/04/2015 - 08:44
A few years back, I published a PowerShell script to deploy a SharePoint Search Service Application across multiple servers, but it admittedly had a few quirks with regard to the Index path location… until now. Below, I’ve attached the new, updated version 2.0 of the deploy script (I will also upload it to the TechNet Script Center soon) along with a screenshot. In the screenshot, notice the prompts in yellow that occurred when I configured a path for my Index that did not exist...(read more)

GUEST POST: Interoperability between Microsoft and Open Source: what it is and how you can take advantage of it

MSDN Blogs - Mon, 05/04/2015 - 08:32

This post was written by Ivan Fioravanti, co-founder and CTO of 4ward Srl

The starting point...

What do I think of the new Microsoft and its “Openness” vision? DISRUPTIVE!

For someone like me, who took his first professional steps under the banner of the Open world and has tried everything since the Commodore VIC20, the idea of finally combining the best of IT, without fanaticism and without preconceptions, on a single platform like Microsoft Azure is a dream come true: a true cloud for everything and everyone, backed by a company dedicated body and soul to supporting this cloud vision with people, services, and support unmatched in the market.

Microsoft Azure is the foundation for anything that crosses your mind, and the ease with which you can build architectures at the edge of reality is incredible, as are the final products you can create and the market you can reach in a flash: the world.

This was the starting point for our company 4ward's new cloud vision, and also the inspiration for the May 12 live event on Microsoft Virtual Academy with Paola Presutto.

A journey together with Azure and Open Source

My goal in this article is to convey the power and simplicity of Microsoft Azure and its total, real integration with the Open Source world, all by showing a real example running in production.
To prove my point I have prepared a few “magic tricks”, based on the “G”-series virtual machines, which, as I like to say, are blazingly fast. You can repeat the whole procedure yourself by following the steps on GitHub:

https://github.com/ivanfioravanti/easy-azure-opensource

The only prerequisites are:

First magic trick: a MongoDB ReplicaSet on Azure in 5 minutes

A ReplicaSet is a cluster of at least 3 servers, one acting as primary and the others as secondaries, and with the Azure command line you can install it in a flash starting from plain Ubuntu machines. You can create as many machines as you want simply by setting your parameters in the common.conf file and then running the command

initAzure.sh

The real trick lies entirely in the last part of the following command

azure vm create … --custom-data ./configureMongoDB.sh

This parameter lets you take advantage of cloud-init, a process native to Ubuntu images, to run whatever commands you want on the newly created virtual machine. For more information you can read this article.

To finish the cluster configuration, just connect to MongoDB's MMS service and you will see all your servers ready to be placed in whatever configuration you want.

 

Second magic trick: a MongoDB Sharded Cluster in even less time

This second trick helps you create a multi-node MongoDB cluster on a single Azure server. It is very useful for developing and testing your applications in sharded environments (data partitioned across multiple nodes). I often use this cluster to show the real power of the G-series machines live, using the Siope data behind the site http://soldipubblici.gov.it, which represent the income and expenses of every Italian public body from 2013 to today.

I converted the data from relational to document form and then arranged it as time series, grouping income and expenses by body and by month; I load it onto the MongoDB sharded cluster and run a few queries such as:

  • 2.5 seconds to rank the bodies that spent the most on School Services in 2014
  • 1.6 seconds to group all the costs of all bodies for 2014 by body and see who spent the most
  • 65 ms to group all the expenses of the City of Milan in 2014 by category.

The power of the G-series machines allows real-time analysis of the Siope data from any angle! I have not yet made this data public because I first want to finish the automated weekly import procedure. As soon as it is ready, you will be able to use it to play with real power combined with Open Data of great interest. It would be nice to also hook up the Azure Machine Learning side, but we will leave that for future talks.

 

A real-world example: 4ward365

4ward365 is the 4ward product into which we have poured everything we have learned in recent years; combined with Azure, it has allowed us to create a platform with virtually infinite scalability and, above all, one that a global market can use. In my latest talk at the Microsoft Azure Open Days I urged the audience to see Azure as a way to push their companies' business beyond Italy's borders, and I assure you that at that moment I thought: “it's really true, I hope I can prove it during this talk.” This is the real power of Microsoft Azure: think of Microsoft as your technology and business partner, supporting you in creating and distributing your ideas all over the world!

4ward365 is a platform for managing, monitoring, and analyzing Office 365, in which we used everything Azure makes available:

  • Web/Worker Roles
  • WebJobs
  • Linux and Windows VMs
  • Virtual Network
  • Traffic Manager
  • CDN
  • Service Bus
  • Storage (Blob and Queue)
  • Redis Cache (a spectacular product, Made in Italy!)
  • Scheduler
  • Machine Learning
  • Azure Marketplace

To try it on your Office 365, ask the Microsoft Partner who supports you, or sign up directly at http://www.4ward365.com/signup/

I look forward to seeing you with Paola on May 12 to talk about it live!



Ivan Fioravanti



 

Learning Redis - Part 3: Advanced Data Structures with Redis

MSDN Blogs - Mon, 05/04/2015 - 08:30

Want to bring more performance, speed, and scalability to your website? Or scale your sites for real-time services or message passing? Learn how, and get practical real-world tips in this exploration of Redis, part of a series on choosing the right data storage.

Steven Edouard and I (Rami Sayar) show you how to get up and running with Redis, a powerful key-value cache and store. In this tutorial series, you can check out a number of practical and advanced use cases for Redis as cache, queue, and publish/subscribe (pub/sub) tool, look at NoSQL and data structures, see how to create list sets and sorted sets in the cache, and much more. You can watch the course online on Microsoft Virtual Academy.

Level: Beginner to Intermediate.

Objectives

By the end of this installment, you will:

  • Learn about advanced data structures in Redis
  • Learn how to use hashes to store objects and other data models
  • Learn about sets and sorted sets
  • Learn about bitmaps and HyperLogLogs

Hashes

Redis is often called a data structure store because it builds on top of the concept of key-value pairs to provide more advanced data structures like lists, hashes, and sets. We saw in the previous installment that Redis supports lists as a value data type and includes several specialized commands for dealing with lists.

As we saw in the first installment, we can decompose objects and complex data models into a series of key-value pairs by using several indices in keys. You may have noticed, however, how tedious it is to execute multiple commands just to work with a single object. Luckily, Redis supports hashes, which are the subject of this section.

Hashes are collections of key-value pairs. A hash maps a series of string keys to their associated string values, which makes it a perfect representation of objects in your key-value store.

The commands to get and set hashes follow a very similar pattern to what we have seen before. To create a hash, you can use the HSET command with a key for your hash followed by a key-value pair (use HSETNX to set a field only if it doesn't already exist). To get a value in the hash, you can use the HGET command with the key for your hash and the key for the value. You can also get and set multiple key-value pairs in your hash with the HMGET and HMSET commands respectively, and use HGETALL to get all the key-value pairs in your hash.

Here is an example with the same data structure as in Part 1.

> HSET person:0 first_name "Rami"
(integer) 1
> HSET person:0 last_name "Sayar"
(integer) 1
> HMSET account:0 type "Investment" balance "1234" currency "USD"
OK
> HGET account:0 currency
"USD"
> HGETALL account:0
1) "type"
2) "Investment"
3) "balance"
4) "1234"
5) "currency"
6) "USD"

You can use HINCRBY or HINCRBYFLOAT to increment a string integer or float in a hash respectively. You can use HLEN to get the number of keys in a hash. You can use HKEYS or HVALS to get all the keys or values in a hash respectively.

> HINCRBY account:0 balance 100
(integer) 1334
> HLEN account:0
(integer) 3
> HKEYS account:0
1) "type"
2) "balance"
3) "currency"
> HVALS account:0
1) "Investment"
2) "1334"
3) "USD"

To delete a field in a hash, you can use the command HDEL with the key of the hash. To check if a field in a hash already exists, you can use HEXISTS.

> HEXISTS account:0 balance
(integer) 1
> HDEL account:0 balance
(integer) 1
> HEXISTS account:0 balance
(integer) 0

Sets

Redis sets are unordered collections of strings. They are similar to lists but have the desirable property of ensuring every member is unique. You can add the same element multiple times without first checking whether it already exists; if it does, nothing happens. You can add, remove, and test for the existence of members in constant time.

Sets allow you to do interesting operations with other sets, such as unions, intersections, and differences. If you are familiar with set theory, these are the same mathematical operations. You can perform them directly in the database rather than in application code.

The commands to create sets differ a little from the previous pattern. To add a member to a set, use the SADD command; to remove a member, the SREM command; to check whether a member already exists, the SISMEMBER command; and to get the members of a set, the SMEMBERS command.

> SADD countries "Canada"
(integer) 1
> SADD countries "Canada"
(integer) 0
> SADD countries "USA"
(integer) 1
> SMEMBERS countries
1) "USA"
2) "Canada"
> SREM countries "USA"
(integer) 1
> SMEMBERS countries
1) "Canada"

To get the difference between multiple sets, you can use the SDIFF command; to compute the difference and store the result in a new set, use SDIFFSTORE. To find out whether multiple sets intersect, you can use the SINTER command; the equivalent command that also stores the result is SINTERSTORE.

You can also get a random member from the set by using SRANDMEMBER. To get a member and also remove it from the set, you can use SPOP.
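Since this set algebra mirrors the built-in set operations of most languages, the following Python sketch shows what SADD, SREM, SDIFF, and SINTER compute. It is a stand-in using Python's set type, not a Redis client; all function and key names here are illustrative.

```python
# A tiny in-memory stand-in for Redis set commands, built on Python's
# set type. It does not talk to Redis; it only mirrors the semantics.

store = {}  # key -> set of members

def sadd(key, *members):
    """Add members to the set at `key`; return how many were newly added."""
    s = store.setdefault(key, set())
    added = sum(1 for m in members if m not in s)
    s.update(members)
    return added

def srem(key, member):
    """Remove `member`; return 1 if it was present, 0 otherwise."""
    s = store.get(key, set())
    if member in s:
        s.remove(member)
        return 1
    return 0

def sdiff(key, *others):
    """Members of `key` present in none of `others` (like SDIFF)."""
    return store.get(key, set()).difference(*(store.get(k, set()) for k in others))

def sinter(key, *others):
    """Members common to `key` and all `others` (like SINTER)."""
    return store.get(key, set()).intersection(*(store.get(k, set()) for k in others))

sadd("countries", "Canada", "USA")
sadd("visited", "Canada")
```

With this data, sdiff("countries", "visited") yields {"USA"} and sinter("countries", "visited") yields {"Canada"}, matching what SDIFF and SINTER would return against a real server.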

Read more here.

Sorted Sets

Redis sorted sets are similar to regular sets with one major difference: each member of the set is given a rank that determines its position in the set. This immediately lets you use Redis for scenarios like game leaderboards or task priority queues that other databases handle poorly.

In sorted sets, members are automatically sorted by their rank, and although the members must be unique, their ranks need not be. Since members are ordered as they are added to the set, adding, removing, or updating a member is a fast O(log(n)) operation. You can retrieve elements by position or by rank.

The commands to create sorted sets differ a little from the previous pattern but are similar to the set commands. To add a member to a sorted set, use the ZADD command along with a score; to remove a member, the ZREM command; to check whether a member already exists or to get its score, the ZSCORE command; and to get the members of a set, the ZRANGE command.

You can also run operations like unions and intersections.

Sorted sets have the unique ability to retrieve and remove elements based on their lexicographical ordering, their rank, or their score. You can read more about it here.
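As a rough illustration of these semantics (not of Redis's actual skip-list implementation), here is a Python sketch in which a plain dict holds each member's score and a ZRANGE-style read sorts on demand; all names are illustrative.

```python
# A stand-in for Redis sorted-set commands: a dict maps each member to
# its score. Redis keeps members ordered as they are added (O(log n)
# updates); this sketch simply re-sorts on every read to show semantics.

scores = {}  # member -> score

def zadd(member, score):
    """Add a member with a score; adding again just updates the score."""
    scores[member] = score

def zscore(member):
    return scores.get(member)

def zrange(start, stop):
    """Members ordered by score (ties broken lexically), like ZRANGE.
    As in Redis, `stop` is inclusive and may be negative (-1 = last)."""
    ordered = sorted(scores, key=lambda m: (scores[m], m))
    if stop < 0:
        stop += len(ordered)
    return ordered[start:stop + 1]

# A tiny leaderboard:
zadd("bob", 95)
zadd("alice", 120)
zadd("carol", 120)
```

With these scores, zrange(0, -1) returns ["bob", "alice", "carol"]: bob has the lowest score, and the alice/carol tie is broken lexicographically, just as Redis orders members with equal scores.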

Bitmaps

Bitmaps are not so much a data type in Redis as a family of bit operations you can run on a string. You can count the number of bits set to 1, perform AND, OR, XOR, and NOT operations, and find the first bit with a specific value. To set and get bits, you can use the SETBIT and GETBIT commands respectively; both take an offset identifying the bit you are operating on. To perform a bit-wise operation, you can use the BITOP AND, BITOP OR, BITOP XOR, and BITOP NOT commands, with a destination key as the first parameter followed by the source keys, e.g. BITOP AND destkey srckey1 ... srckeyN. BITCOUNT counts the number of bits set to 1. BITPOS finds the first bit with a value of 0 or 1.

The support for bitmaps and the subsequent operations you can perform on bits are typically not found in databases.
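To make the offset convention concrete, here is a Python sketch of the SETBIT, GETBIT, and BITCOUNT semantics on a bytearray. This is plain Python rather than Redis; as in Redis, offset 0 addresses the most significant bit of the first byte, and the value grows as needed.

```python
# Bit operations on a bytearray, mirroring Redis's SETBIT/GETBIT/BITCOUNT.
# Offset 0 is the most significant bit of byte 0, exactly as in Redis.

def setbit(buf, offset, value):
    """Set the bit at `offset` to 0 or 1, growing the buffer as needed."""
    byte, bit = divmod(offset, 8)
    if byte >= len(buf):
        buf.extend(b"\x00" * (byte - len(buf) + 1))
    mask = 1 << (7 - bit)  # high bit first
    if value:
        buf[byte] |= mask
    else:
        buf[byte] &= ~mask & 0xFF

def getbit(buf, offset):
    """Return the bit at `offset`, or 0 if the buffer is too short."""
    byte, bit = divmod(offset, 8)
    if byte >= len(buf):
        return 0
    return (buf[byte] >> (7 - bit)) & 1

def bitcount(buf):
    """Number of bits set to 1 across the whole value, like BITCOUNT."""
    return sum(bin(b).count("1") for b in buf)

flags = bytearray()
setbit(flags, 7, 1)   # flags is now b"\x01"
setbit(flags, 0, 1)   # flags is now b"\x81"
```

A BITOP AND of two such values would then be just a byte-wise AND over the two buffers, and BITPOS a scan for the first matching bit.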

HyperLogLogs

HyperLogLogs are an interesting data structure designed to count unique elements. Counting unique elements is incredibly important for analytics, e.g. web analytics. The magic of HyperLogLogs is that you do not need to keep a copy of all the members to avoid counting a member multiple times. A HyperLogLog gives you an estimate along with a standard error. You can thus use HyperLogLogs as a running estimate of your analytics in a cache while offloading the exact calculations to another system or database to be performed over time.

The commands to use with HyperLogLogs are fairly straightforward. You use PFADD to count new elements and PFCOUNT to retrieve the current approximation of unique elements. That's it.

P.S. Currently, the standard error estimation is close to 1%.
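To give a feel for how an estimate with a known standard error can be computed without storing any members, here is a toy Python version of the estimator. The register layout and the alpha constant follow the published HyperLogLog algorithm; Redis's production implementation uses many more registers plus further corrections, and all names below are illustrative.

```python
import hashlib
import math

def _hash64(value):
    """A stable 64-bit hash (a stand-in for Redis's internal hash)."""
    return int.from_bytes(hashlib.sha1(value.encode()).digest()[:8], "big")

class TinyHLL:
    def __init__(self, p=10):
        self.p = p
        self.m = 1 << p              # number of registers
        self.registers = [0] * self.m

    def add(self, value):
        x = _hash64(value)
        j = x & (self.m - 1)         # low p bits pick a register
        w = x >> self.p              # remaining 64 - p bits
        rank = (64 - self.p) - w.bit_length() + 1  # position of first 1-bit
        self.registers[j] = max(self.registers[j], rank)

    def count(self):
        alpha = 0.7213 / (1 + 1.079 / self.m)
        raw = alpha * self.m * self.m / sum(2.0 ** -r for r in self.registers)
        zeros = self.registers.count(0)
        if raw <= 2.5 * self.m and zeros:
            return self.m * math.log(self.m / zeros)  # small-range correction
        return raw
```

With p=10 (1,024 registers, about 1 KB of state) the expected standard error is roughly 1.04/sqrt(1024), about 3%. Adding the same value twice changes nothing, since it can only re-set the same register to the same rank, which is exactly why no member list is needed.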

Stay Tuned!

Stay tuned for next installment of this tutorial series. You can stay up to date by following @ramisayar and @sedouard.

Using C# to help the colour blind! Meet the winners of the Canadian Imagine Cup 2015 World Citizenship competition

MSDN Blogs - Mon, 05/04/2015 - 08:30

Imagine Cup challenges students to do great things with technology. The World Citizenship category, sponsored by Taking IT Global, challenges students to use technology to help others. Team Eye3 from Queen's University was inspired by the challenges faced by a colour-blind teammate to build a better solution for all!

The problem!

700 million people are colour-deficient worldwide. Colour-blindness affects approximately 1 in 12 men, and 1 in 200 women in the world. The condition is X-linked, or can be caused by certain health factors such as diabetes and multiple sclerosis.

We use colour in charts, pictures, graphics, and clothing to convey information. These cues are often lost on colour-blind people. If these individuals could somehow glean this information, it would enrich their day-to-day lives and solve a whole host of problems.  We want to provide them with a real-time visual overlay for their desktop computers, mobile tablets, and smartphones. The overlay works by translating hard-to-see colours into visually equivalent ones that are easier for a colour-blind individual to identify. By using a camera application with this overlay enabled, the person can bring the same kind of functionality to the real world.

Team Eye3’s core concept was originally conceived by one of its founding members, Zaeem Anwar. As a diagnosed protanopic individual himself, he recognizes the troubles associated with colour-blindness firsthand. Many of the problems Ciris solves derive from the troubles Zaeem has faced throughout his life as a colour-blind person.

The solution!

Ciris aims to be a multi-platform digital colour augmentation app that is free and user-friendly. In almost every aspect, competing products are costly, slow, or require a high skill level to use effectively. Additionally, the lack of real-time augmentation in many solutions makes usage a hassle, discouraging people from using them at all.

The current application is built on top of the Windows Magnification API. The API is called from a C# front end which handles the user’s options and settings. Once the application is running, it sits in the user’s system tray, where it can be opened at any time to configure its settings. Within its settings, users are able to select their type of colour-blindness and adjust the intensity of the overlay. When the application is toggled on, the overlay is seamlessly placed on top of the user’s display. Users may also use the application shortcut (Shift + Windows key + C) to toggle the overlay at will. The toggle enables the application when it is of use, and quickly disables it when it is not.

The application is extremely lightweight and consumes minimal system resources. There is no latency when navigating websites, watching videos, or playing video games. We have demonstrated the application to numerous colour-blind individuals, to very positive feedback. It has been inspiring and heartwarming to see the smiles on people’s faces as they saw the application transform their world.

The team!

This project was conceived and built by a talented team of students from Queen's University: Eddie Wang from the Bachelor of Commerce (Honours) program, Zaeem Anwar from the Bachelor of Computing (Honours) program in Biomedical Computing, and Jake Alsemgeest from the Bachelor of Computing (Honours) program in Software Design.

What happens next for our winners?

They will get some advice from Canadian philanthropist Michael Furdyk of Taking IT Global, and then they go head-to-head against the winner of the Canadian Imagine Cup Innovation category to determine who represents Canada at the Imagine Cup World Finals! The World Finals will take place this July in Seattle, Washington, home of Microsoft's headquarters. At the World Finals they will compete against 9 other teams from around the world for a chance to win $50,000 USD and a meeting with Microsoft CEO Satya Nadella! The world finalists will also get to participate in a HoloLens hackathon!

Here’s a little teaser of what the team might expect on the first day of the World Finals if they are selected. Good luck, team Eye3!

Raspberry Pi 2 and Windows 10: My first experience

MSDN Blogs - Mon, 05/04/2015 - 08:30

Finally, Microsoft has published a Windows 10 preview build for Raspberry Pi 2 (and not just for Raspberry). So, if you have a Raspberry, you can visit https://dev.windows.com/en-US/iot and download the build and setup instructions.

Of course, it’s easy to prepare an SD card and put it into the Raspberry, but pay special attention to the following:

  • You need to use a class 10 microSD card. I missed this requirement and was very upset when I found that my Raspberry was not even trying to start. When I checked the manual again, I looked at my SD card and found that it was class 4! I replaced it and everything started working fine;
  • The first start takes a lot of time to finish everything, so you need to be patient;
  • You need to connect your board to the network using a network cable. It's a preview, and many popular WiFi adapters don't work on Raspberry right now. So, if your application connects to the Internet you will need to use a network cable all the time, but even if you only want to create a simple LED project you need network access at least for deploying your project;

If everything is OK, you will see the Raspberry image and important information like the device name and IP address on the screen. Your device is ready, and it's time to establish a connection between your PC and the Raspberry. To do that, you can visit the following page to set up the connection using PowerShell. I tried it several times and discovered some issues on my PC. Because I am not an admin, it was very hard to understand what was happening, so I want to share all the blockers I hit:

  • Set-Item issue – when I ran this command I got an exception with a complicated message about private and public networks and advice to change the network from Public to Private. When I checked my network settings (Network and Sharing Center) I found that my network was already Private, but I had a Virtual Ethernet Adapter that had been created by the Visual Studio installation. I simply disabled it, and the exception was gone;
  • Enter-PsSession – when I ran this command I got one more exception: “The WinRM client cannot process the request. If the authentication scheme is different from Kerberos…” I spent a lot of time trying to understand how to set up WinRM, but my knowledge in this area was not enough, so I chose the simplest way: I ran Gpedit.msc and navigated to Local Computer Policy -> Computer Configuration -> Administrative Templates -> Windows Components -> Windows Remote Management -> WinRM Client. There I enabled Allow unencrypted traffic and Trusted Hosts and, additionally, allowed all hosts. The problem was gone. I hope you can find a better way to configure this; I simply didn't have much time to test all the combinations because I wanted to start developing!;

Once you connect to your Raspberry you can change the password, change the device name, run configuration cmdlets, and so on. Note that Raspberry supports two modes: headed and headless (with and without a GUI). You can read more about the modes here.

Right after you establish the connection between your PC and the Raspberry, you can try to develop something. Thanks to Visual Studio it's very easy to develop, deploy, and debug solutions on the Raspberry. The Raspberry runs the Remote Debugger by default, so no additional configuration is needed.

To start, you need to select a language. You can choose between Node.js, Python, and the standard languages for Universal applications, like C#. Of course, I decided to use C#, but you can easily install the Python or Node.js tools from the Connect site. So, to get going, create a simple Universal application, change the platform to ARM, and select Remote Machine for deploying and debugging.

 

Finally, I developed and deployed my first application for Raspberry:

 

The Innovation Revolution (A Time of Radical Transformation)

MSDN Blogs - Mon, 05/04/2015 - 07:44

It was the best of times, it was the worst of times …

It’s not A Tale of Two Cities.   It’s a tale of the Innovation Revolution.

We’ve got real problems worth solving.  The stakes are high.  Time is short.  And abstract answers are not good enough.

In the book Ten Types of Innovation: The Discipline of Building Breakthroughs, Larry Keeley, Helen Walters, Ryan Pikkel, and Brian Quinn explain how it is like A Tale of Two Cities in that it is the worst of times and it is the best of times.

But it is also like no other time in history.

It’s an Innovation Revolution … We have the technology and we can innovate our way through radical transformation.

The Worst of Times (Innovation Has Big Problems to Solve)

We’ve got some real problems to solve, whether it’s health issues, poverty, crime, or ignorance.  Duty calls.  Will innovation answer?

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“People expect very little good news about the wars being fought (whether in Iraq, Afghanistan, or on Terror, Drugs, Poverty, or Ignorance).  The promising Arab Spring has given way to a recurring pessimism about progress.  Gnarly health problems are on a tear the world over--diabetes now affects over eight percent of Americans--and other expensive disease conditions such as obesity, heart disease, and cancer are also now epidemic.  The cost of education rises like a runaway helium balloon, yet there is less and less evidence that it nets the students a real return on their investment.  Police have access to ever more elaborate statistical models of crime, but there is still way too much of it.  And global warming steadily produces more extreme and more dangerous conditions the world over, yet according to about half of our elected 'leaders,' it is still, officially, only a theory that can conveniently be denied.”

The Best of Times (Innovation is Making Things Happen)

Innovation has been answering.  There have been amazing innovations heard round the world.  It’s only the beginning for an Innovation Revolution.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“And yet ...

We steadily expect more from our computers, our smartphones, apps, networks, and games.  We have grown to expect routine and wondrous stories of new ventures funded through crowdsourcing.  We hear constantly of lives around the world transformed because of Twitter or Kahn Academy or some breakthrough discovery in medicine.  Esther Duflo and her team at the Poverty Action Lab at MIT keep cracking tough problems that afflict the poor to arrive at solutions with demonstrated efficacy, and then, often the Gates Foundation or another philanthropic institution funds the transformational solution at unprecedented scale.

Storytelling is in a new golden age--whether in live events, on the radio, or in amazing new television series that can emerge anywhere in the world and be adapted for global tastes.  Experts are now everywhere, and shockingly easy and affordable to access.

Indeed, it seems clear that all the knowledge we've been struggling to amass is steadily being amplified and swiftly getting more organized, accessible, and affordable--whether through the magic of elegant little apps or big data managed in ever-smarter clouds or crowdfunding sites used to capitalize creative ideas in commerce or science.”

It’s a Time of Radical Transformation and New, More Agile Institutions

The pace of change and the size of change will accelerate exponentially as the forces of innovation rally together.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“One way to make sense of these opposing conditions is to see us as being in a time of radical transformation.  To see the old institutions as being challenged as a series of newer, more agile ones arise.  In history, such shifts have rarely been bloodless, but this one seems to be a radical transformation in the structure, sources, and nature of expertise.  Indeed, among innovation experts, this time is one like no other.  For the very first time in history, we are in a position to tackle tough problems with ground-breaking tools and techniques.”

It’s time to break some ground.

Join the Innovation Revolution and crack some problems worth solving.

You Might Also Like

How To Get Innovation to Succeed Instead of Fail

Innovation Life Cycle

Management Innovation is at the Top of the Innovation Stack

The Drag of Old Mental Models on Innovation and Change

The Myths of Business Model Innovation

Introducing the Visual Studio ALM Rangers – Ajay Bhosle

MSDN Blogs - Mon, 05/04/2015 - 07:23

Why do you want to join the ALM Rangers:

The ALM Rangers are doing a fantastic job of building the overall competence and maturity of development processes for teams the world over. Microsoft is embracing the Open Source community, and the Rangers are backing this brilliantly by supplementing it with guidance and real-world examples to simplify adoption. This guidance has proven invaluable for ALM and Agile consultants like me. As a way to give back to this community, I would like to volunteer to collaboratively build such valuable guidance and help improve its reach. Another reason is connecting with community leaders, which helps me know and understand the industry's vision and focus, and align my actions and thoughts with newer industry trends and insights.

I believe I have a strong background in ALM-driven projects, and my experience in this domain will be useful in supplementing the ALM Rangers guidance with more real-world scenarios.

Who are you?

Traditionally a .NET developer and more recently an ALM SME, I have significant experience consulting across multiple domains. I have worked closely with small and large delivery projects at the Avanade India Delivery Center, providing them with assessment and guidance to improve their overall ALM and Agile maturity and to enhance team productivity, collaboration and quality. I am also part of the capability team at Avanade, and assist with building tools and guidance for ALM and DevOps. I take pleasure in interacting with the local developer community and advocating ALM best practices, and I recently started to increase my community reach through my blog, tweets, community forums and developer conferences. I am currently focusing on DevOps: creating POCs, processes and guidance for Avanade, and gradually building expertise on the Microsoft stack as well as some open source tools. I started as a developer for custom .NET apps back in 2005, and joined the VSTS journey in 2007. I hold a Bachelor's degree in Electrical Engineering.

What makes you “tick”?

  • Learning & implementing new strategies that help improve developer productivity & efficiency
  • Sharing my knowledge, giving back to the community, and contributing to the community growth
  • Travel, Music, Food & Photography

Where do you live?

I live in Mumbai, India’s financial capital.

Where do you call Home?

Home is where I enjoy some quality family time; a place where I can socialize with like-minded people; A place which gives me access to some quiet private space, uninterrupted personal time & a good internet connection to hone my knowledge & skills.

Why are you active in the Ranger Program?

I am yet to get started, but when I do, it will be for advanced learning and sharing. I would love to create tools, guidance, POCs and more that help the developer community be more productive and efficient.

What is the best Rangers project you worked in and why?

I have not worked on any project yet; I wasn't really aware I could do that without being a part of the team. Nevertheless, I found the Quick Reference guides really to the point and useful: the Branching guide, the ALM Assessment guide, the Build Customization guide, and the Reporting guidance.

Microsoft Ignite Starts Today!

MSDN Blogs - Mon, 05/04/2015 - 07:00

Whether you’re a senior decision maker, IT professional or enterprise developer, you’ll be inspired by our vision of where technology is headed. Tailor your learning experience in this one-of-a-kind conference designed to fuel your business and give you a glimpse into the future.

If you cannot be there, be sure to tune into the live stream starting today (May 4) @ 9am CDT.

Creating a window that can be resized in only one direction

MSDN Blogs - Mon, 05/04/2015 - 07:00

Today's Little Program shows a window that can be resized in only one direction, say vertically but not horizontally.

Start with the scratch program and make these changes:

UINT OnNcHitTest(HWND hwnd, int x, int y)
{
    UINT ht = FORWARD_WM_NCHITTEST(hwnd, x, y, DefWindowProc);
    switch (ht) {
    case HTBOTTOMLEFT:  ht = HTBOTTOM; break;
    case HTBOTTOMRIGHT: ht = HTBOTTOM; break;
    case HTTOPLEFT:     ht = HTTOP;    break;
    case HTTOPRIGHT:    ht = HTTOP;    break;
    case HTLEFT:        ht = HTBORDER; break;
    case HTRIGHT:       ht = HTBORDER; break;
    }
    return ht;
}

    HANDLE_MSG(hwnd, WM_NCHITTEST, OnNcHitTest);

We accomplish this by removing horizontal resize behavior from the left and right edges and corners. For the corners, we remove the horizontal resizing, but leave the vertical resizing. For the edges, we remove resizing entirely by reporting that the left and right edges should act like an inert border.

Wait, we're not done yet. This handles resizing by grabbing the edges with the mouse, but it doesn't stop the user from hitting Alt+Space, followed by S (for Size), and then hitting the left or right arrow keys.

For that, we need to handle WM_GETMINMAXINFO.

void OnGetMinMaxInfo(HWND hwnd, LPMINMAXINFO lpmmi)
{
    RECT rc = { 0, 0, 500, 0 };
    AdjustWindowRectEx(&rc, GetWindowStyle(hwnd), FALSE, GetWindowExStyle(hwnd));

    // Adjust the width
    lpmmi->ptMaxSize.x =
    lpmmi->ptMinTrackSize.x =
    lpmmi->ptMaxTrackSize.x = rc.right - rc.left;
}

    HANDLE_MSG(hwnd, WM_GETMINMAXINFO, OnGetMinMaxInfo);

This works out great, except for the case of being maximized onto a secondary monitor, because we run into the mixed case of being smaller than the monitor in the horizontal direction but larger than the monitor in the vertical direction.

void OnGetMinMaxInfo(HWND hwnd, LPMINMAXINFO lpmmi)
{
    RECT rc = { 0, 0, 500, 0 };
    AdjustWindowRectEx(&rc, GetWindowStyle(hwnd), FALSE, GetWindowExStyle(hwnd));

    // Adjust the width
    lpmmi->ptMaxSize.x =
    lpmmi->ptMinTrackSize.x =
    lpmmi->ptMaxTrackSize.x = rc.right - rc.left;

    // Adjust the height
    MONITORINFO mi = { sizeof(mi) };
    GetMonitorInfo(MonitorFromWindow(hwnd, MONITOR_DEFAULTTOPRIMARY), &mi);
    lpmmi->ptMaxSize.y = mi.rcWork.bottom - mi.rcWork.top
                       - lpmmi->ptMaxPosition.y + rc.bottom;
}

The math here is a little tricky. We want the window height to be the height of the work area of the window monitor, plus some extra goop in order to let the borders hang over the edge.

The first two terms are easy to explain: mi.rcWork.bottom - mi.rcWork.top is the height of the work area.

Next, we want to add the height consumed by the borders that hang off the top of the monitor. Fortunately, the window manager told us exactly how much the window is going to hang off the top of the monitor: It's in lpmmi->ptMaxPosition.y, but as a negative value since it is a coordinate that is off the top of the screen. We therefore have to negate it before adding it in.

Finally, we add the borders that hang off the bottom of the work area.

Yes, handling this mixed case (where the window is partly constrained and partly unconstrained) is annoying. Sorry.

How to write and submit Hive queries using Visual Studio

MSDN Blogs - Mon, 05/04/2015 - 06:55

HDInsight Tools for Visual Studio now supports generic Hadoop clusters, so you can connect the tools to a generic Hadoop cluster and do the following:

  • write a Hive query with enhanced IntelliSense/auto-completion support
  • connect to your cluster, view all the jobs and associated resources (queries, job output and job logs) in your cluster with an intuitive UI
  • In the future we plan to bring more Hive performance investigation capabilities!

This blog describes how to connect HDInsight Tools for Visual Studio to your generic Hadoop cluster. It can be an on-premises cluster or a cloud-hosted cluster (as long as you have access to several endpoints; more details below). Please note that this feature is currently in preview, so it only supports Basic Auth (a username/password combination); it does not support Kerberos yet.

Basically there are two steps we need to take:

  1. Configure your Hadoop cluster
    1. Make sure your Hadoop cluster is reachable from client
    2. Make sure your cluster has the right configurations
  2. Configure HDInsight Tools for Visual Studio

I will describe each step in detail.

Step 1: Configure your Hadoop Cluster

Step 1.1 Make sure your Hadoop cluster is reachable from the client
  1. If you are using a cloud based Hadoop cluster (for example Hortonworks Sandbox on Azure), you should make sure that the ports of the Azure VMs are reachable from Visual Studio. Normally the cloud service provider will block the Hadoop ports (for example, WebHCat by default is using port 50111) so you must open the ports in the VM configuration page.
  2. Different services might run on different machines in your Hadoop cluster, so please make sure that you know exactly which machine is running WebHCat, WebHDFS and HiveServer2. Normally you can get this information from Ambari (adopted by Hortonworks HDP) or Cloudera Manager (adopted by Cloudera CDH), or by consulting your system administrator.
  3. Having confirmed #1 and #2 above, you should open at least three service ports: the WebHCat service (by default port 50111), used to submit queries and list jobs; HiveServer2 (by default port 10000), used to preview tables; and WebHDFS (by default port 50070 on the Name Node and 50075 on Data Nodes), used to store queries.
  4. Please make sure that all the HDFS Data Node ports are accessible to HDInsight Tools for Visual Studio. The default value is 50075. The reason is that when HDInsight Tools for Visual Studio writes a file to HDFS, it uses the WebHDFS APIs, which require a two-phase write. Generally speaking, HDInsight Tools for Visual Studio first contacts the Name Node, the Name Node returns the address of the Data Node to write to, and the tool then reaches the corresponding Data Node and writes the file. For more details about WebHDFS please refer to the document here.
  5. The Data Node address is not configurable in HDInsight Tools for Visual Studio, and sometimes the address returned by the Name Node might not be reachable directly by HDInsight Tools for Visual Studio due to the two-phase write using the WebHDFS API. You might need to modify the hosts file of the development machine on which HDInsight Tools for Visual Studio is running in order to redirect that address to the real IP address. For example, if you are using the Hortonworks Sandbox on Azure, the Data Node address returned by the Name Node is sandbox.hortonworks.com (you can get the Data Node host address from Ambari or other management tools). You will see an error like:

This address (sandbox.hortonworks.com) does not actually exist and is not reachable by HDInsight Tools for Visual Studio, so you need to edit the hosts file to make sandbox.hortonworks.com point to the correct public IP address.
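As a sketch, the redirect is a single line in the hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows); the IP address below is a placeholder, not a real value:

```
# Map the Data Node hostname returned by the Name Node to a reachable address.
# 203.0.113.10 is a placeholder; substitute your VM's public IP.
203.0.113.10    sandbox.hortonworks.com
```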

 

 

Step 1.2 Make sure the Hadoop cluster has the right configurations
  1. Make sure the user that will be used in Visual Studio can be impersonated by WebHCat, since HDInsight Tools for Visual Studio uses WebHCat to submit jobs. You need to update the hadoop.proxyuser.hcat.groups configuration in core-site.xml to reflect this. For example, if you want to use user foo in Visual Studio, and foo belongs to a group named grpfoo, then grpfoo should be part of hadoop.proxyuser.hcat.groups. Also, make sure that the IP address you are using appears in hadoop.proxyuser.hcat.hosts (or set it to * to allow all hosts to access your cluster).
  2. Make sure the user that will be used in Visual Studio can access a folder named /Portal-Queries under the root folder of HDFS. HDInsight Tools for Visual Studio uses that folder to store the queries you submit. There are several ways to do this:
    1. Recommended: create a folder named /Portal-Queries under the HDFS root folder and set its permission to 777 (i.e., everyone can write to that folder)
    2. Add the user which will be used in Visual Studio to HDFS supergroup (you could find the configurations in dfs.permissions.superusergroup in hdfs-site.xml)
    3. Turn off the HDFS security check (set dfs.permissions.enabled to false in hdfs-site.xml), which might make your HDFS insecure.
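
For example, the proxy-user entries in core-site.xml might look like the following sketch (user foo in group grpfoo, as in the example above; adjust the names to your environment):

```xml
<!-- Allow WebHCat (the hcat user) to impersonate members of grpfoo -->
<property>
  <name>hadoop.proxyuser.hcat.groups</name>
  <value>grpfoo</value>
</property>
<!-- Hosts from which impersonated requests may originate; * allows all -->
<property>
  <name>hadoop.proxyuser.hcat.hosts</name>
  <value>*</value>
</property>
```
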
Step 2: Configure HDInsight Tools for Visual Studio

After installation, open Visual Studio and click VIEW > Server Explorer. Then right-click the “HDInsight” node and select “Connect to a Hadoop Cluster (Preview)”.

  1. Configure the required endpoints (WebHCat, WebHDFS, and HiveServer2) of your Hadoop cluster. If you have gateways, be sure to input the correct address that routes through the gateway.
  2. After adding the Hadoop cluster you can see it in Server Explorer (under the HDInsight node). You can right-click the cluster to see all the job histories, or write queries with word completion and IntelliSense support. You can also create a Hive project and use version management tools to cooperate within teams. For more information on how to use HDInsight Tools for Visual Studio, refer to the user manual here.
  3. Note: the user name and password are stored in clear text under the folder: C:\Users\<your user name>\AppData\Roaming\Microsoft\HDInsight Tools for Visual Studio\Local Emulator
Feedback

If you have any suggestions or feedback on the tool, feel free to reach us at hdivstool at microsoft dot com! We also have preview bits with several cool features, such as Hive on Tez and an improved Hive authoring experience, so if you are interested in trying them, please email us!

Welcome, iOS, Android, Mac and Linux Developers!

MSDN Blogs - Mon, 05/04/2015 - 05:19

On Wednesday 29 April, at Build 2015 in San Francisco, we announced the launch of a set of SDKs that will help developers on any platform build websites, apps and applications for any platform, as well as new Azure data services for intelligent applications. We also announced Visual Studio and .NET tools and runtimes for Windows, Mac and Linux, along with APIs that let developers build rich applications with Office 365.

We presented several new Windows 10 features, from new capabilities for scaling applications across devices to new ways for developers to build code for Windows 10. We also showed how developers can create a single app that runs on all Windows 10 devices, automatically adapting to different screen sizes. With the Universal Windows Platform, developers can tailor their apps to the capabilities of each device, integrate Cortana and Xbox Live into their apps, offer trusted commerce features, create holograms and publish their apps in the Windows Store.

The Windows Store will also offer a unified experience for all Windows 10 users, whatever device they use, while making it easier to search across apps, games, music, videos and other content.

We welcome all developers to the Universal Windows Platform: the four new software development kits make it easy to bring code for the web, .NET, Win32, iOS and Android to the Windows Store while keeping code changes to a minimum.

During the conference, we announced Microsoft Edge, the new browser for Windows 10 (previously named Project Spartan), and shared more details about HoloLens, the world's first untethered holographic computer running Windows 10.

We also presented a preview of the Azure SQL Database elastic database, which lets ISVs and software-as-a-service developers pool the capacity of thousands of databases to benefit from efficient resource consumption and the best price-performance in the public cloud. To help developers process gigantic datasets, Microsoft introduced Azure SQL Data Warehouse, the first enterprise cloud data warehouse as a service that can grow, shrink and pause in seconds. Microsoft also announced Azure Data Lake, an open, highly scalable, petabyte-capacity data store that provides high-speed integration with Azure HDInsight, Azure Machine Learning, Cloudera and Hortonworks to deliver ultra-fast insight into large amounts of data.

And above all, do not think we forgot Office, not at all! We are giving developers new ways to reach 1.2 billion Office users, including the new Office Graph API, expanded capabilities for the iPad and Outlook, and unified APIs.

Stay tuned, we are entering an exciting new era!

 

Discover the main announcements from the Build 2015 conference

Youtube: Build 2015 Keynote Highlights

Or watch the full keynote here:

Channel9 MSDN: Build 2015, Keynote Stream

Welcome, iOS, Android, Mac and Linux Developers!

MSDN Blogs - Mon, 05/04/2015 - 04:59

On Wednesday, 29 April, at Build 2015 in San Francisco we announced a set of SDKs that will help developers on any platform build websites, apps and applications for any platform, and new Azure data services for intelligent applications; Visual Studio and .NET tools and runtimes for Windows, Mac and Linux; and APIs that enable developers to build rich apps with Office 365.

 

We showcased several new features in Windows 10, from new capabilities to scale applications across devices to new ways for developers to build code for Windows 10. Furthermore, we showed how developers can create a single app that scales across all Windows 10 devices, automatically adapting to different screen sizes. With the Universal Windows Platform, developers can tailor their apps to the unique capabilities of each device, integrate Cortana and Xbox Live into their apps, offer trusted commerce, create holograms, and publish their apps into the Windows Store.

The Windows Store will also offer a single unified experience for Windows 10 customers across devices and make finding great content easier than ever — across apps, games, music, video and other content.

We are welcoming all developers to the Universal Windows Platform: the four new software development toolkits will make it easy to bring code for the Web, .NET, Win32, iOS and Android to the Windows Store with minimal code modifications.

 

We unveiled Microsoft Edge, the new browser for Windows 10 (formerly known as Project Spartan) and provided news about HoloLens — the world’s first untethered holographic computer powered by Windows 10.

 

We showed a preview of Azure SQL Database elastic database, which allows ISVs and software-as-a-service developers to pool capacity across thousands of databases, enabling them to benefit from efficient resource consumption and the best price and performance in the public cloud. To help developers manage massive datasets, Microsoft introduced Azure SQL Data Warehouse, the industry’s first enterprise-class cloud data warehouse as a service that can grow, shrink and pause in seconds. Microsoft also announced Azure Data Lake, an open and massively scalable data repository that supports petabyte-size files and provides high-speed integration with Azure HDInsight, Azure Machine Learning, Cloudera and Hortonworks to quickly derive insights from vast amounts of data.

And if you thought we forgot about Office, well, we have not. We are introducing new ways for developers to reach 1.2 billion Office users, including the new Office Graph API, expanded add-in capabilities for the iPad and Outlook, and unified APIs.

Stay tuned, we are entering an exciting era!

 

Find an overview of the Build 2015 highlights in this video:

Youtube: Build 2015 Keynote Highlights

Or watch the full stream here:

Channel9 MSDN: Build 2015, Keynote Stream

Welcome, iOS, Android, Mac and Linux Developers!

MSDN Blogs - Mon, 05/04/2015 - 04:51

On Wednesday 29 April, at Build 2015 in San Francisco, we announced a set of SDKs that help developers on all platforms build websites, apps and applications for any platform; new Azure data services for intelligent applications; Visual Studio and .NET tools and runtimes for Windows, Mac and Linux; and APIs with which developers can build rich apps with Office 365.

We presented various new features in Windows 10, from new capabilities for scaling cross-device applications to new ways of building code for Windows 10. We also showed how developers can create a single app that scales across all Windows 10 devices and automatically adapts to different screen sizes. With the Universal Windows Platform, developers can tailor their apps to the specific capabilities of each device, integrate Cortana and Xbox Live into their apps, offer trusted commerce, create holograms and publish their apps in the Windows Store.

The Windows Store will likewise give Windows 10 users a consistent, cross-device experience from a single source and make it easier than ever to find top content, across the whole spectrum of apps, games, music, videos and other content.

We also invite all developers to the Universal Windows Platform: with the four new software development toolkits it is easy to adapt code for the web, .NET, Win32, iOS and Android for the Windows Store with minimal modifications.

We unveiled Microsoft Edge, the new browser for Windows 10 (previously known as Project Spartan), and shared news about HoloLens, the world's first untethered holographic computer for Windows 10.

We also demonstrated a preview of the Azure SQL Database elastic database, with which ISVs and software-as-a-service developers can pool capacity across thousands of databases and so profit from efficient resource management and the best price and performance in the public cloud. To support developers in managing massive datasets, Microsoft introduced Azure SQL Data Warehouse, the industry's first enterprise-class cloud data warehouse available as a service, which can grow, shrink and pause in seconds. Microsoft also presented Azure Data Lake, an open and massively scalable data repository that supports petabyte-size files and, through high-speed integration with Azure HDInsight, Azure Machine Learning, Cloudera and Hortonworks, enables fast insight into large data holdings.

And in case you thought we had forgotten Office: we have not. We are introducing new ways for developers to reach 1.2 billion Office users, including the new Office Graph API, expanded add-in capabilities for the iPad and Outlook, and unified APIs.

Stay tuned: we are entering interesting new territory!

Get an overview of the Build 2015 highlights: Youtube: Build 2015 Keynote Highlights. Or watch the complete stream here:

Channel9 MSDN: Build 2015, Keynote Stream

Mobile Device Management Built into Office 365

MSDN Blogs - Mon, 05/04/2015 - 04:28

Did you know that Mobile Device Management (MDM) is built directly into Office 365? Probably not, but Microsoft has already taken the first step toward MDM built directly into Office 365. The functionality is divided into three categories:

Conditional Access: You can apply security policies to devices, ensuring that mail and documents in Office 365 can only be accessed by devices that are managed by the organization and that comply with the security policies.

Device management: You can set and administer security policies such as PIN lock and jailbreak detection to prevent unauthorized users from gaining access to data if a device is lost or stolen.

Selective wipe: You can easily remove the organization's data from an employee's device while leaving their personal data untouched. This becomes increasingly relevant as "bring your own device" (BYOD) is adopted.

The features above can be applied to groups of users, so that they apply to employees, for example, while the requirements are not imposed on pupils/students.

If you need to know more, look here. And if that is not enough for you, we are planning to hold a session on this together with ProActive soon.

 

 

Adding a new subscriber to the existing peer to peer replication setup

MSDN Blogs - Mon, 05/04/2015 - 03:39

 

Adding a new subscriber to the existing peer to peer replication setup using GUI

Distributor  : Server1\IN03   ****Already Exists****
Publisher    : Server1\IN01   ****Already Exists****
Subscriber   : Server1\IN02   ****Already Exists****

Subscriber1  : Server1\IN04   ****Adding it newly****

=> Refer to this "article" for setting up Peer to Peer replication.

=> We will add a new subscriber onto the existing setup. 

=> The existing setup is as below,

=> Now we will add Server1\IN04 in this setup. 

=> Backup the database Test_DB using the below command on your publisher database server Server1\IN01. 
backup database Test_DB to disk='c:\temp\Test_db.bak'

=> Restore the database Test_DB on your subscriber server Server1\IN04 using the above backup.

=> After the restore, log in to the publisher database instance, expand the Replication folder, right-click the publication, and click “Configure Peer-to-Peer Topology...”.

=>  Now add the new subscriber server as below,

=> Connect to the subscriber server and do the below,

 

=> You will see a screenshot as below,

=> Select the backup file restored in the initialization page as below,

=> Click on finish and complete rest of the wizard.

=> Try inserting data on both publisher and subscriber and check that the data is replicated.
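As a quick sanity check for that last step, a sketch along these lines can be used (the table test1 comes from the articles in this publication; the column list in the INSERT statements is assumed for illustration, so adjust it to your actual schema):

```sql
-- On Server1\IN01 (existing publisher): insert a marker row
INSERT INTO Test_DB.dbo.test1 VALUES (1, 'from IN01');

-- On Server1\IN04 (newly added peer): insert a different marker row
INSERT INTO Test_DB.dbo.test1 VALUES (2, 'from IN04');

-- After giving the distribution agents a moment, run this on every node;
-- both rows should be present everywhere:
SELECT * FROM Test_DB.dbo.test1;
```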

Adding a new subscriber to the existing peer to peer replication setup using TSQL

On Server1\IN01

exec sp_addpullsubscription
        @publisher = N'Server1\IN04'
      , @publication = N'TEST'
      , @publisher_db = N'Test_DB'
      , @independent_agent = N'True'
      , @subscription_type = N'pull'
      , @description = N''
      , @update_mode = N'read only'
      , @immediate_sync = 0 

exec sp_addpullsubscription_agent
        @publisher = N'Server1\IN04'
      , @publisher_db = N'Test_DB'
      , @publication = N'TEST'
      , @distributor = N'Server1\IN03'
      , @distributor_security_mode = 1
      , @distributor_login = N''
      , @distributor_password = null
      , @enabled_for_syncmgr = N'False'
      , @frequency_type = 64
      , @frequency_interval = 0
      , @frequency_relative_interval = 0
      , @frequency_recurrence_factor = 0
      , @frequency_subday = 0
      , @frequency_subday_interval = 0
      , @active_start_time_of_day = 0
      , @active_end_time_of_day = 235959
      , @active_start_date = 20141110
      , @active_end_date = 99991231
      , @alt_snapshot_folder = N''
      , @working_directory = N''
      , @use_ftp = N'False'
      , @job_login = null
     , @job_password = null
      , @publication_type = 0 

use [Test_DB]
declare @lsn int
set @lsn = (select originator_lsn from [Server1\IN04].Test_DB.dbo.mspeer_lsns
where originator = 'Server1\IN01' and originator_db = 'Test_DB'
and originator_publication = 'TEST' and originator_db_version>0) 

exec sp_addsubscription
  @publication = N'TEST'
, @subscriber = N'Server1\IN04'
, @destination_db = N'Test_DB'
, @subscription_type = N'pull'
, @update_mode = N'read only'
, @subscriber_type = 0
, @article = N'all'
, @sync_type = N'initialize from lsn'
, @subscriptionlsn = @lsn
GO

On server Server1\IN04

use master
exec sp_replicationdboption @dbname = N'Test_DB', @optname = N'publish',
@value = N'true'
GO

exec [Test_DB].sys.sp_addlogreader_agent @job_login = null,
@job_password = null, @publisher_security_mode = 1
GO
exec [Test_DB].sys.sp_addqreader_agent @job_login = null,
@job_password = null, @frompublisher = 1
GO
-- Adding the transactional publication
use [Test_DB]
exec sp_addpublication @publication = N'Test',
@description = N'Transactional publication of database.',
@sync_method = N'native', @retention = 0, @allow_push = N'true',
@allow_pull = N'true', @allow_anonymous = N'false',
@enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true',
@compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous',
@allow_subscription_copy = N'false',@add_to_active_directory = N'false',
@repl_freq = N'continuous', @status = N'active', @independent_agent = N'true',
@immediate_sync = N'true', @allow_sync_tran = N'false',
@autogen_sync_procs = N'false', @allow_queued_tran = N'false',
@allow_dts = N'false', @replicate_ddl = 1, 
@allow_initialize_from_backup = N'true', @enabled_for_p2p = N'true',
@enabled_for_het_sub = N'false', @p2p_conflictdetection = N'true',
@p2p_originator_id = 3
GO
exec sp_addpublication_snapshot @publication = N'Test',
@frequency_type = 1, @frequency_interval = 0,
@frequency_relative_interval = 0,
@frequency_recurrence_factor = 0, @frequency_subday = 0,
@frequency_subday_interval = 0, @active_start_time_of_day = 0,
@active_end_time_of_day = 235959, @active_start_date = 0,
@active_end_date = 0, @job_login = null, @job_password = null,
@publisher_security_mode = 1
GO
exec sp_grant_publication_access @publication = N'Test',
@login = N'sa'
GO
exec sp_grant_publication_access @publication = N'Test',
@login = N'NT AUTHORITY\SYSTEM'
GO
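To confirm the publication was created with the expected options, sp_helppublication can be run in the publication database. A sketch (on SQL Server 2008 and later, the result set includes an enabled_for_p2p column, which should show 1 here):

```sql
use [Test_DB]
exec sp_helppublication @publication = N'Test'
GO
```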
-- ... (remaining sp_grant_publication_access calls omitted)
-- Adding the transactional articles
use [Test_DB]
exec sp_addarticle @publication = N'Test', @article = N'test1',
@source_owner = N'dbo', @source_object = N'test1',
@type = N'logbased', @description = N'', @creation_script = N'',
@pre_creation_cmd = N'drop', @schema_option = 0x000000000803509F,
@identityrangemanagementoption = N'manual',
@destination_table = N'test1', @destination_owner = N'dbo',
@status = 24, @vertical_partition = N'false',
@ins_cmd = N'CALL [dbo].[sp_MSins_dbotest1]',
@del_cmd = N'CALL [dbo].[sp_MSdel_dbotest1]',
@upd_cmd = N'SCALL [dbo].[sp_MSupd_dbotest1]'
GO
use [Test_DB]
exec sp_addarticle @publication = N'Test', @article = N'test2',
@source_owner = N'dbo', @source_object = N'test2',
@type = N'logbased', @description = N'', @creation_script = N'',
@pre_creation_cmd = N'drop', @schema_option = 0x000000000803509F,
@identityrangemanagementoption = N'manual',
@destination_table = N'test2', @destination_owner = N'dbo',
@status = 24, @vertical_partition = N'false',
@ins_cmd = N'CALL [dbo].[sp_MSins_dbotest2]',
@del_cmd = N'CALL [dbo].[sp_MSdel_dbotest2]',
@upd_cmd = N'SCALL [dbo].[sp_MSupd_dbotest2]'
GO
use [Test_DB]
exec sp_addarticle @publication = N'Test', @article = N'test3',
@source_owner = N'dbo', @source_object = N'test3',
@type = N'logbased', @description = N'', @creation_script = N'',
@pre_creation_cmd = N'drop', @schema_option = 0x000000000803509F,
@identityrangemanagementoption = N'manual',
@destination_table = N'test3', @destination_owner = N'dbo',
@status = 24, @vertical_partition = N'false',
@ins_cmd = N'CALL [dbo].[sp_MSins_dbotest3]',
@del_cmd = N'CALL [dbo].[sp_MSdel_dbotest3]',
@upd_cmd = N'SCALL [dbo].[sp_MSupd_dbotest3]'
GO
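After the articles are added, their settings can be verified with sp_helparticle. A sketch for the first article (repeat per article as needed):

```sql
use [Test_DB]
exec sp_helparticle @publication = N'Test', @article = N'test1'
GO
```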

-- Adding the transactional subscriptions

exec sp_addpullsubscription
      @publisher = N'Server1\IN01'
      , @publication = N'TEST'
      , @publisher_db = N'Test_DB'
      , @independent_agent = N'True'
      , @subscription_type = N'pull'
      , @description = N''
      , @update_mode = N'read only'
      , @immediate_sync = 0

exec sp_addpullsubscription_agent
        @publisher = N'Server1\IN01'
      , @publisher_db = N'Test_DB'
      , @publication = N'TEST'
      , @distributor = N'Server1\IN03'
      , @distributor_security_mode = 1
      , @distributor_login = N''
      , @distributor_password = null
      , @enabled_for_syncmgr = N'False'
      , @frequency_type = 64
      , @frequency_interval = 0
      , @frequency_relative_interval = 0
      , @frequency_recurrence_factor = 0
      , @frequency_subday = 0
      , @frequency_subday_interval = 0
      , @active_start_time_of_day = 0
      , @active_end_time_of_day = 235959
      , @active_start_date = 20141110
      , @active_end_date = 99991231
      , @alt_snapshot_folder = N''
      , @working_directory = N''
      , @use_ftp = N'False'
      , @job_login = null
      , @job_password = null
      , @publication_type = 0

use [Test_DB]
exec sp_addsubscription
  @publication = N'TEST'
, @subscriber = N'Server1\IN01'
, @destination_db = N'Test_DB'
, @subscription_type = N'pull'
, @update_mode = N'read only'
, @subscriber_type = 0
, @article = N'all'
, @sync_type = N'replication support only'
GO
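On the subscriber side, the pull subscription just created can be inspected with sp_helppullsubscription, run in the subscription database. A sketch:

```sql
use [Test_DB]
exec sp_helppullsubscription @publisher = N'Server1\IN01',
     @publisher_db = N'Test_DB', @publication = N'TEST'
GO
```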

=> After this, the new node's subscription is in place (original screenshot not reproduced here).

Run it on Server1\IN02 

-- Adding the transactional subscriptions
exec sp_addpullsubscription
      @publisher = N'Server1\IN04'
      , @publication = N'TEST'
      , @publisher_db = N'Test_DB'
      , @independent_agent = N'True'
      , @subscription_type = N'pull'
      , @description = N''
      , @update_mode = N'read only'
      , @immediate_sync = 0 

exec sp_addpullsubscription_agent
      @publisher = N'Server1\IN04'
      , @publisher_db = N'Test_DB'
      , @publication = N'TEST'
      , @distributor = N'Server1\IN03'
      , @distributor_security_mode = 1
      , @distributor_login = N''
      , @distributor_password = null
      , @enabled_for_syncmgr = N'False'
      , @frequency_type = 64
      , @frequency_interval = 0
      , @frequency_relative_interval = 0
      , @frequency_recurrence_factor = 0
      , @frequency_subday = 0
      , @frequency_subday_interval = 0
      , @active_start_time_of_day = 0
      , @active_end_time_of_day = 235959
      , @active_start_date = 20141110
      , @active_end_date = 99991231
      , @alt_snapshot_folder = N''
      , @working_directory = N''
      , @use_ftp = N'False'
      , @job_login = null
      , @job_password = null
      , @publication_type = 0 

use [Test_DB]
exec sp_addsubscription
  @publication = N'TEST'
, @subscriber = N'Server1\IN04'
, @destination_db = N'Test_DB'
, @subscription_type = N'pull'
, @update_mode = N'read only'
, @subscriber_type = 0
, @article = N'all'
, @sync_type = N'replication support only'
GO 

Run it on Server1\IN04 

-- Adding the transactional subscriptions
exec sp_addpullsubscription
      @publisher = N'Server1\IN02'
      , @publication = N'TEST'
      , @publisher_db = N'Test_DB'
      , @independent_agent = N'True'
      , @subscription_type = N'pull'
      , @description = N''
      , @update_mode = N'read only'
      , @immediate_sync = 0 

exec sp_addpullsubscription_agent
      @publisher = N'Server1\IN02'
      , @publisher_db = N'Test_DB'
      , @publication = N'TEST'
      , @distributor = N'Server1\IN03'
      , @distributor_security_mode = 1
      , @distributor_login = N''
      , @distributor_password = null
      , @enabled_for_syncmgr = N'False'
      , @frequency_type = 64
      , @frequency_interval = 0
      , @frequency_relative_interval = 0
      , @frequency_recurrence_factor = 0
      , @frequency_subday = 0
      , @frequency_subday_interval = 0
      , @active_start_time_of_day = 0
      , @active_end_time_of_day = 235959
      , @active_start_date = 20141110
      , @active_end_date = 99991231
      , @alt_snapshot_folder = N''
      , @working_directory = N''
      , @use_ftp = N'False'
      , @job_login = null
      , @job_password = null
      , @publication_type = 0

use [Test_DB]
exec sp_addsubscription
  @publication = N'TEST'
, @subscriber = N'Server1\IN02'
, @destination_db = N'Test_DB'
, @subscription_type = N'pull'
, @update_mode = N'read only'
, @subscriber_type = 0
, @article = N'all'
, @sync_type = N'replication support only'
GO
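Once every node has its subscriptions, the peer-to-peer topology can be health-checked by posting a peer response request and then listing which peers have answered. A hedged sketch (sp_requestpeerresponse writes a request that each peer acknowledges as it catches up; sp_helppeerresponses reports the replies):

```sql
-- Run at any publisher, in the publication database
use [Test_DB]
declare @request_id int
exec sp_requestpeerresponse @publication = N'Test',
     @description = N'post-setup health check',
     @request_id = @request_id output
-- After giving the topology time to converge, list the responses
exec sp_helppeerresponses @request_id
GO
```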

=> The setup is now complete, with Server1\IN01, Server1\IN02, and Server1\IN04 as peers and Server1\IN03 as the distributor (original topology diagram not reproduced here).

=> Try inserting data on any node and check that it replicates to the other peers. Please let me know if there are any issues in the code or if the scripts
do not work as expected. Thank you,
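That final check can be sketched as follows. Note that the article never shows the table definitions, so the column name [id] below is hypothetical; substitute a real column from your test1 table:

```sql
-- On Server1\IN01: insert a marker row ([id] is a hypothetical column)
use [Test_DB]
insert into dbo.test1 (id) values (999)
GO
-- After the distribution agents have run, on Server1\IN02 and Server1\IN04:
use [Test_DB]
select * from dbo.test1 where id = 999
GO
```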
Vivek Janakiraman

Disclaimer:
The views expressed on this blog are mine alone and do not reflect the views of my
company or anyone else. All postings on this blog are provided “AS IS” with no
warranties and confer no rights.

Build 2015 blew my mind!

MSDN Blogs - Mon, 05/04/2015 - 03:18
[This blog post was originally published on Medium.] It is always exciting to sit down in front of a Microsoft Build keynote, at least if your day-to-day work revolves around building solutions on the Microsoft platform. This year the event touched every kind of developer, whether they develop on Windows, Mac OS, or Linux, and whether they use Objective-C, C++, Python, PHP, NodeJS, Java, R, or C# as their favorite language. All of this on the very 40th...(read more)

Improving the Visual Studio Account Management Experience

MSDN Blogs - Mon, 05/04/2015 - 03:00

If you build applications in Visual Studio using multiple services (for example, Visual Studio roaming settings, accessing Azure services in Server Explorer, or using the Windows Store), you have probably experienced what we call "sign-in Whack-A-Mole": prompts popping up when we least expect them. In Visual Studio 2015 we introduced an Account Manager to reduce how often Visual Studio needs to prompt for credentials, and to let users switch easily between accounts inside the IDE.

You will see the Account Manager surface in different places in the Visual Studio 2015 UI, but its main home is File - Account Settings.

Let's take a closer look at how the Account Manager works.

Multiple services, multiple accounts

Previously, we saw two common types of authentication workflows in Visual Studio:

    • Multiple services. Online services such as Azure, Office 365, and Visual Studio Online each manage their own identity tokens. Moreover, re-entering your credentials only refreshes the token for that one feature, leaving the others unauthenticated. This means the number of sign-ins grows with the number of services you use, even if you sign in to all of them with the same account.
    • Multiple accounts. Things get even more complicated if you use more than one account (for example, separate accounts for work and home, or for development and testing): switching between those accounts means signing out and signing back in.

Visual Studio 2015 helps with both.

First, it unifies the sign-in experience so that authenticating an account grants access to all services used with that account. For example, if you sign in to the IDE with your Microsoft account, and that account is also the administrator of an Azure subscription, Visual Studio will also authenticate you for Azure services in Server Explorer. Moreover, if you recently refreshed your credentials for one service (for example, an Azure subscription), they are refreshed for all services. The same applies when adding Application Insights from the New Project dialog, or adding Mobile Services or Storage from the Add Connected Service dialog. Better still, single sign-on spans the whole Visual Studio family, including Blend. Visual Studio manages access tokens across applications, so once you authenticate an account in one application, tokens are refreshed uniformly for all of them.

Second, we added a system for centrally managing your accounts in Visual Studio, so you have one place to view, add, or remove the accounts the IDE knows about. You can view and manage accounts in the Account Settings dialog under the File menu. All account-related features in Visual Studio use the same account management, even if they appear in different parts of the UI. With the new UI you can easily switch between accounts or add new ones, just as you can when using dedicated services such as Application Insights or Azure Mobile Services.

An important note: Visual Studio does not store your raw credentials. This AAD quick start is a good reference on Azure Active Directory and the Active Directory Authentication Library (ADAL), which powers Visual Studio's new account management component. In short: when you sign in to an account in Visual Studio, you are authenticating against a web-based identity provider (most likely Azure Active Directory or the Microsoft account provider). If authentication succeeds, the identity provider issues an authentication token. The Visual Studio 2015 account management system simply handles and stores these authorization tokens conveniently and securely.

Further improvements

We are not done yet. For example, you will see Visual Studio ask you to re-authenticate your Microsoft account after just 12 hours. We want to improve this, but in the meantime there are workarounds. For example, if you create a "work or school account" (formerly an organizational account) and make it the administrator of your Azure subscription for accessing Azure resources, you will get a better single sign-on experience in Visual Studio. Unfortunately, you cannot sign in to the IDE itself with an organizational ID (for example, for roaming settings and personalizing the IDE). We are working on fixing that.

Beyond the preview, more features such as Team Explorer, Office 365, and ASP.NET project creation will move to the new shared account store and management service.

We also plan to roam the list of user accounts as part of your personalization settings (but never passwords or authentication tokens) to help you get started quickly on a new device. We will also wire up the account picker so it behaves more consistently across products.

As always, we appreciate you sharing feedback, suggestions, and any other thoughts on UserVoice and through the Send-a-Smile UI in the product. We will monitor the comments on this post for a few weeks, but if you find a bug it is best to log it directly on the Connect site, or try reporting it straight from the IDE via the Send-a-Frown UI that John mentioned.

Thanks!

Ji Eun Kwon
