With the commercial availability of SQL Server 2014 comes a new set of technologies that can transform your data platform and business intelligence projects. Technologies such as in-memory OLTP, updateable column store, resource governor, Power BI on O365, and Azure are a leap forward in enabling data insights and mission-critical scenarios.
To help you build your expertise on these technologies, we are delivering a Level 300, three-day course for data platform and business intelligence technology partners. You will leave this training with new skills to implement the latest features of the expanding product set. The three days cover the following technologies:
What Will You Learn?
During this 3-day, instructor-led course, we will help students understand how to put the latest technologies to work with hands-on labs as well as instructor-led materials, including:
WHO SHOULD ATTEND?
This course is targeted at technologists with data platform and business intelligence expertise as well as Pre-sales Engineers who are interested in updating and expanding their skills. Familiarity with SQL Server is required as this is not an introductory course. Some familiarity with Windows Azure will aid the attendee.
COST: $499 per person
COURSE LEVEL: 300 (Technical)
WHEN & WHERE:
For the best training experience, please come with your Windows Azure account. We will demonstrate the latest features for Virtual Machines, HDInsight (Hadoop on Azure) and PowerShell for Azure.
Also, feel free to explore Power BI at http://www.powerbi.com prior to attending to get more from the deep technical view we'll be providing.
Questions: If you have any questions, please send an email to firstname.lastname@example.org
This morning Satya and Scott presented on Microsoft’s strategy for cloud. If you missed it you can watch the recording here:
You can also read more about the announcements that were made at the following locations:
The Imagine Cup competition runs practically year-round, so now is the time to start thinking about which project to submit this year. While we are holding a series of Imagine Cup hackathons across Russia to help students come up with and implement their ideas, registration for the competition is already open, and some of the interesting international contests have already begun!
As in previous years, the competition finals will be held in the USA. Russia will be represented at the international stage by the best team, selected at the Russian national finals in Moscow in spring 2015.
Applications will be accepted until February 2015 (we will announce the exact date separately), after which we will determine the best format for the regional selection of teams for the national finals. You can already apply online by filling out the form at http://www.imaginecup.ru.
As last year, the competition is held in three categories: games, innovation, and social projects.
International Online Competitions
As last year, you can also take part in a number of international online contests, competing directly with teams and participants from around the world. These contests are structured to help you at each stage of your project's development: by entering each contest in turn, you will eventually build a working prototype of your product. The online contests mirror the main stages of a project's development:
The set of required materials differs slightly for each of the main categories (games, innovation, and social projects). Details are available on the international competition website.
The Code Hunt Challenge
Imagine Cup also includes a contest for those looking for an easier path. It is easy to take part, spending just half an hour, and along the way you will pick up useful skills and have fun. Last year that contest was Brain Games; this year we are introducing a brand-new individual contest: the Code Hunt Challenge.
In Code Hunt you work inside a special training environment that shows you a fragment of a buggy C# program. Your task is to find the bug and fix it so that the program produces the required output. Starting with very simple tasks, the contest gradually becomes more and more engaging...
Code Hunt competitions are held once a month over two days, on the third weekend of the month (GMT). That means the next competition will run November 15-16. On other days you can prepare by working through the practice exercises at www.codehunt.com.
Imagine Cup: A Competition and an Educational Program
It is important to note that Imagine Cup is not just a competition that takes place every spring and summer. It is also an educational program: hackathons, online courses on technology and entrepreneurship, webinars, and much more. It is a year-long activity that you should start right now. Join the Imagine Cup group on VKontakte to stay up to date on activities, follow the news on the blog, be active, and be happy!
Our team is responsible for developing and maintaining all of the Azure Management SDKs, which span quite a few languages and tools. For example, we own the .NET SDK (https://github.com/Azure/azure-sdk-for-net), the PowerShell toolset (https://github.com/Azure/azure-sdk-tools-pr), the Java SDK (https://github.com/Azure/azure-sdk-for-java), the Node.js SDK (https://github.com/Azure/azure-sdk-for-node), the XPlat CLI toolset (https://github.com/Azure/azure-sdk-tools-xplat), and the PHP SDK (https://github.com/Azure/azure-sdk-for-php), as well as a fairly complex internal code-generation platform. That said, we are a rather small team and until recently had at most one or two developers expert in each language or platform. That obviously presented a few issues. To gain maximum efficiency, our PMs always had to give us a diverse mix of tasks spanning all of the languages and platforms. Whenever there was a big event such as //build or TechEd and the deliverables piled up (usually PowerShell and the XPlat CLI got hit the worst), some team members had to work late hours while others were not really able to help. Also, the daily scrums were kind of boring, since people could not really relate to what everyone else was doing.
So, to make a long story short, at our last retrospective we discussed that our team would probably benefit from XP-style pair programming (we had recently moved into an open space, and the room setup was well suited to pairing). Since we didn't have any XP practitioners on the team, I shot an email to our in-house Agile guru Arlo Belshee (http://arlobelshee.com/), who generously agreed to come over and coach the team.
Here is what we have decided to do:
1. Do two pairing sessions a day: 90 minutes in the morning and 90 minutes in the afternoon. This arrangement gives us enough time to answer email and review GitHub pull requests, while letting us focus 100% during the pairing sessions. By the way, pairing is intense; doing eight hours a day of straight pairing will fry your brain!
2. Use the thinker/typist approach for coding tasks (I'm not sure what the official XP name for this is): one person types and one person thinks. The person typing is not allowed to make changes on his or her own; the moment the typist wants to contribute, he or she needs to push the keyboard away. If both people want to "think", they need to discuss. This is by far the best pairing strategy we've found, because it prevents the person who is not typing from falling asleep.
3. Change partners every session. That works for us.
4. Change work items every session. One person from the pair stays on the work item (if it takes more than one session to complete) and the other rotates off. Another rule: a person can't stay on a single task for more than two sessions!
5. Pick the right pairs. Initially we agreed to mix an expert with someone new to the area; for example, someone who knows PowerShell pairs with a Java developer on a PowerShell task. This enables the fastest knowledge dissemination. Later on, as the knowledge becomes more even, so will the pairs.
6. Use the whiteboard to track work items and tasks. Before the session begins, we write all the work items that are up for pairing on the board and discuss exactly which steps need to be done. People volunteer for pairs, and the names are written on the board next to the items. Finally, as work gets done, the tasks are crossed out one by one.
7. Pick easily “pairable” items, expanding the set of “pairable” items as we go. Here is a diagram that Arlo drew:
Initially we took just small, simple coding problems that we knew how to solve, keeping the more complicated items for individual work. As time went by, we started picking more complex items as well as non-coding research tasks. The goal is to make all work items "pairable".
8. Do a brief retrospective on Tuesdays and Thursdays to micro-adjust our new process.
So far we've been following this approach for 3.5 weeks, and based on team feedback it's working great for everyone. By the way, if you are going to try pairing, make sure to try it for an entire month before making a decision (Arlo's suggestion). The first few days you will see dramatic improvements. In the second and third weeks you may start to hate it (the brain rebels against someone always telling it what to do and just shuts down). The third and fourth weeks are when you get into the rhythm and wonder how you were ever able to code without a partner.
So here is where we are today. The team is now able to work on literally any of our codebases. Some quotes from the retrospectives: "this is the most fun I've had coding in the past 6 months" and "I learned more in one session than I have in a long time being stuck doing X". By the way, as you may imagine, the main fear of our PMs was that while we were learning we would stop delivering as many story points (two people on one keyboard). The reality has been quite the opposite. We started pairing in Sprint 39 and finished more points than in Sprint 38. Now we are in Sprint 40 with one week left to go; based on the velocity chart, we should complete at least as many story points as in the last sprint, if not more. Also, our flow (we're doing a mix of Scrum and Kanban) has become much more even: we have at most 4-5 items in the Active state, as opposed to 10-15 in the past.
So overall I would say this has been quite a successful experiment. Feel free to post comments or email me if you’d like more details on how pairing is working out for us.
We asked our talented intern, Steven, to work on several hardware projects this summer and he agreed to share some of his experiences with Windows hardware development using Sharks Cove hardware development board through a series of blog posts. Below is his first blog detailing his experience installing and debugging a driver using Sharks Cove.
Sharks Cove is a hardware development board that you can use to develop hardware and drivers for Windows. The Intel Sharks Cove board supports driver development for devices that use a variety of interfaces, including GPIO, I2C, I2S, UART, SDIO, and USB. You can also use the Sharks Cove board to develop drivers for cameras and touch screens.
For information about the Sharks Cove board, see www.sharkscove.org.
For details about Windows compatible hardware development boards, see www.msdn.microsoft.com/hardwaredevboard
Here’s the blog in his own words.
In this post, I wanted to provide more of a walkthrough on the process of setting up a Sharks Cove for installing and debugging a driver. This post is a direct result of my experience working with the board. Currently, there is a step-by-step guide to setting up and using Sharks Cove on MSDN, so I will not cover everything in detail. Instead, I'd like to look at each of the steps and provide the details of my experience.
Sharks Cove setup
You can skip this section if you already have the board set up and provisioned, or if the link above is sufficient. This section merely provides clarification from my experience setting up the board. Starting from the top, you will need to gather your hardware. My setup was as follows:
After initial setup, you can get away without a monitor for the Sharks Cove, since you can simply remote into it from your host machine. The next step involves setting up your host machine; I think the documentation for this section covers it. You simply need Visual Studio Express, which is free, if you don't already have it. Then you'll need to download and install the WDK 8.1 Update. This kit installs all of the files necessary to write drivers for Windows 8.1 Update, 8.1, 8, and 7. The kit is integrated into Visual Studio, so you can develop and deploy drivers from within Visual Studio. My experience with this integration was positive; in general, the workflow is very fluid compared to debugging separately through WinDbg or KD.
Now we have arrived at setting up the Windows image and installing it onto the dev board. To me, this step looked overwhelming, but I assure you it is fairly simple and the documentation has you covered. Once you have successfully created an image, keep it around on a USB drive in case you ever need to reimage the board. The documentation directs users to download the Windows Embedded 8.1 Industry Pro Evaluation. This is a trial OS option available to developers, but in my experience the board can run any version of the Windows client image you want to install.
Next, we need to provision the target system. In this scenario, your host machine is the machine you set up earlier with the WDK and Visual Studio, and the target machine is the dev board. The documentation has a link for setting up a target, and that should be enough, because provisioning the Sharks Cove is just like provisioning any other target. Here's a short version of the process, explicitly stating what happens on each machine.
The documentation gives an example of the ASL entry for an ADXL345 accelerometer. This is the part I worked with to test the board, so I'll talk about using it through the remainder of this post. To attach the part, I used a breadboard, but with the proper cables you can connect it directly. The pin mappings can be found in this guide (SpbAccelerometer driver cookbook). Once connected, just follow the steps detailed in Step 6 of the Sharks Cove guide we've been following. The ASL entry I used during development looked slightly different: Device(ADXL) instead of Device(SPBA), and Name(_HID, "ADXL345") instead of Name(_HID, "SpbAccelerometer"). Pay attention to these fields in whatever version of the ASL entry is available at the time you read this; they will be important when installing your driver.
Using the sample driver linked at the bottom of the page, you can test your part. After opening the solution, be sure to set the configuration platform in Visual Studio; the instructions are on the sample's documentation page. Be sure to set it to a debug configuration, or you may encounter signing issues. The simplest method of building and testing the driver is through Visual Studio itself. Right-click the package project and select Properties. Under Driver Install > Deployment, enable deployment, set your target machine to the Sharks Cove, and select Install and Verify. Click OK, and you can begin debugging by clicking Debug > Start Debugging. This should build the driver, connect to the Sharks Cove, copy over the driver files, install the driver, and finally attach to it for debugging. If the build is successful, you should see various cmd windows popping up and closing on the Sharks Cove. After they are finished, if successful, a Debugger Immediate Window will appear in Visual Studio, already broken in upon driver load. You can then set breakpoints, continue, break, and step through code as you normally would in Visual Studio.
The alternative to using Visual Studio to deploy and debug is manual deployment and debugging with WinDbg (included in the WDK). Instructions for manual deployment can be found on the sample's documentation page. Before you install the driver with devcon, you will need to do some setup so that you can attach to the driver on load with WinDbg. First, copy wdfverifier.exe from …\Windows Kits\8.1\Tools\x86\ on the host to your Sharks Cove. This tool allows you to set WUDFHost.exe, the process that hosts the driver, to break when your driver is loaded. Run it on the Sharks Cove and go to the Global WDF Settings tab. Where it says, "At [driver load\start], Host Process will wait  seconds for user-mode debugger," set the dropdown to driver load and the time to 10 or more seconds. Check the kernel debugger checkbox below this if you plan on kernel debugging. When you install your driver, WUDFHost.exe will break upon your driver loading and wait as long as you specified. At this point you need to attach a debugger. For user-mode debugging, run WinDbg as an administrator on the Sharks Cove, click File > Attach to Process, find WUDFHost.exe, and attach. If there are multiple WUDFHost.exe processes, use tasklist /M SpbAccelerometer.dll from the command line to get the PID of the correct one. You can now attach a remote WinDbg session to this one from your host machine if you prefer.
To do this, simply run .server npipe:pipe=<pipename> in the WinDbg session on your Sharks Cove, where pipename is whatever you prefer. Next, start a WinDbg session on your host and click File > Connect to a Remote Session; you will need to type the output of the .server command after -remote (e.g. npipe:pipe=examplepipe,server=examplehost). Lastly, you can kernel debug through USB if it is set up: start WinDbg on your host, click File > Kernel Debug, select the COM tab, and change the port to comX, where X is the number you found for the target when provisioning. Once attached, in all cases you will need the symbols for your driver: the .pdb file in the binary output of your Visual Studio project. You can either copy it to your Sharks Cove for local debugging or keep it in place and attach remotely to the Sharks Cove. Whichever method you choose, in WinDbg (on the host, unless you are debugging locally on the board) select File > Symbol File Path and add the full path to the directory containing the .pdb file. If you'd like to look at or step through source code, add the full path to the directory containing the driver's source files under File > Source File Path. Paths are separated by semicolons. Links to debugging with WinDbg can be found in the Sharks Cove documentation.
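The remote-debugging flow described above can be condensed into a short WinDbg command sequence. This is only a sketch of the steps already covered; the pipe name, machine name, and directory paths are placeholder examples, not values from the documentation:

```
$$ On the Sharks Cove, in the WinDbg session attached to WUDFHost.exe:
.server npipe:pipe=examplepipe          $$ start a debugging server

$$ On the host, connect to that session
$$ (File > Connect to a Remote Session, or from a command prompt):
windbg -remote npipe:pipe=examplepipe,server=SHARKSCOVE

$$ In the attached session, point WinDbg at your symbols and source
$$ (hypothetical paths; use your project's actual output directories):
.sympath+ C:\Projects\SpbAccelerometer\Debug
.srcpath+ C:\Projects\SpbAccelerometer\src
.reload
g                                       $$ let the driver continue loading
```

From here, breakpoints and stepping work exactly as in a local session.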
I wanted to close with a few things that I did to make my life easier when working with Sharks Cove. There’s nothing fancy here, just stuff that you may or may not find useful or already thought of yourself.
So there it is: pretty much my knowledge of and experience with the Sharks Cove. I hope this is helpful to those getting started, because I want to be able to help those who are having experiences similar to mine.
You may already have an account, but before moving your company's code, work items, etc., it is important to know what is possible and what the terms and conditions are. Below I have collected some important links:
SLA & Terms of Service:
Integration & Service Hooks:
Visual Studio Online utilizes SQL Azure, which provides high availability and disaster recovery. This is described here: http://msdn.microsoft.com/en-us/library/azure/jj650016.aspx
Test & Build – On Premise:
If you don't want to, or can't, use the build & test features provided by the service, you can connect both on-premises build and test controllers: http://nakedalm.com/connect-a-test-controller-to-team-foundation-service/ and http://myalmblog.com/2014/04/configuring-on-premises-build-server-for-visual-studio-online/
Feel free to suggest more and I will add them :-)
Microsoft Scripting Guy Ed Wilson here. This week, the week of October 19, I begin a special series of Hey, Scripting Guy! blog posts where we will be talking about why you should upgrade to Windows PowerShell 4.0. This series will be way cool, and I have enlisted help from various MVPs as well as from the PowerShell team. So, what is in store for you? Here is the lineup:
On Sunday, I talk about the parade of features we have added to PowerShell over the various versions. The point of the article is to try to get ahead of the learning curve.
On Monday PowerShell MVP Richard Siddaway talks about some of the considerations involved in an upgrade.
On Tuesday PowerShell MVP Teresa Wilson, aka Scripting Wife, talks about how PowerShell 4.0 makes her life easier in day to day usage.
On Wednesday PowerShell MVP Jeff Wouters provides a history lesson with Windows PowerShell and concludes that PowerShell 4.0 Rocks.
On Thursday IIS MVP Terri Donahue talks about some of the changes brought about by PowerShell 4.0 that compelled her to upgrade.
On Friday PowerShell MVP Dave Wyatt says the big reason for upgrading is DSC, DSC and DSC.
On Saturday, Windows PowerShell team member John Slack rounds out the week by providing his unique insider's perspective.
In all, it is a powerful week of articles that you will not want to miss. Hope to see you over there.
Microsoft Scripting Guy
Follow us on Twitter: http://twitter.com/scriptingGuys
We hear the catchphrase "change the world" so often that it sometimes loses its meaning. We don't realize that it is precisely the small initiatives that help push the wheel that makes the world turn.
So, I would like to introduce you to the .NET clubs: they are born from the spirit of passionate students whose "only" wish is to enjoy what they love most: technology. And yet, they surprise us time and again.
Instituto Tajamar (Madrid)
Universidad de Albacete
Universidad de Alcalá
Universidad de Alicante
Universidad Autónoma de Barcelona
Universidad Autónoma de Madrid
Universidad de Cantabria
Universidad Complutense de Madrid
Universidad de A Coruña
Universidad de Málaga
Universidad Oberta de Catalunya
Universidad de Oviedo
Universidad Politécnica de Madrid Campus Sur
Universidad Politécnica de Madrid IEEE
Universidad Rey Juan Carlos
Universidad de Salamanca
This is the 45th in our series of guest posts by Microsoft Most Valued Professionals (MVPs). You can click the “MVPs” tag in the right column of our blog to see all the articles.
Since the early 1990s, Microsoft has recognized technology champions around the world with the MVP Award. MVPs freely share their knowledge, real-world experience, and impartial and objective feedback to help people enhance the way they use technology. Of the millions of individuals who participate in technology communities, around 4,000 are recognized as Microsoft MVPs. You can read more original MVP-authored content on the Microsoft MVP Award Program Blog.
This post is by Lync MVP Desmond Lee. Thanks, Desmond!
PowerShell and SQL Stored Procedure
PowerShell is the window into managing many aspects of a Lync Server 201x environment. Coupled with the deployment of the Monitoring Server service (a separate role in Lync Server 2010), extensive data can be collected to support operational and troubleshooting demands. By installing the optional Monitoring Reports on selected SQL Server Reporting Services (SSRS) instances, common types of reports are readily available at your disposal.
As Lync deployments become ever more popular and widespread, it has become evident that the Monitoring Reports shipped with the product do not address certain reporting requirements needed in the field. Since persistent and dynamic Lync data are stored in various SQL databases on the backend as well as on each Front End Server, firing up SQL Server Management Studio, connecting to the right server, and executing a SQL query will let you put together a user-defined report. You can find many excellent blog posts that walk you through the intricacies of constructing the often complicated-looking SQL statements.
Microsoft discourages building SQL queries that pull information directly from the underlying database tables, for good reason. Besides the inherent complexity and tedium involved (such as the use of multiple table JOINs), the database schema, table relationships, and naming conventions may change in future updates and product versions, so the risk of breaking customized SQL queries is very real.
In this compact article, I will show you how to utilize the built-in SQL stored procedures associated with the Lync databases. Think of a stored procedure as a function that comprises one or more SQL statements and is capable of accepting input parameters and returning one or more values. The stored procedures are found under the rtc / Programmability / Stored Procedures node of the rtclocal instance.
One popular report missing from the Monitoring Reports is a list of the various Lync client versions deployed in the environment. We shall use the common skeleton code framework to build and run SQL queries in PowerShell (see the code listing for the complete script).
To begin, instead of constructing a SQL query similar to the following:
SELECT rtc.dbo.Resource.UserAtHost as 'SIP Address',
CAST(rtcdyn.dbo.RegistrarEndpoint.ClientApp as varchar(100)) as 'Client Version'
FROM rtcdyn.dbo.RegistrarEndpoint
INNER JOIN rtc.dbo.Resource
ON rtcdyn.dbo.RegistrarEndpoint.OwnerId = rtc.dbo.Resource.ResourceId
WHERE IsServerSource = 0
You simply specify the name of the stored procedure and assign it to CommandText:
$SqlCmd.CommandText = "DiagShowEndpointsByClientApp"
The result, stored in the first table of the dataset, is a summary of the range of Lync client versions, all the way from desktop to mobile apps, which are currently in use in the environment.
If the stored procedure expects one or more parameters, you can supply them as well. Here, we are interested in confining our list to users connecting with Lync Mobile on Android devices:
$clientVersion = "Android"
The second table (index 1) of the dataset contains the results matching the given criteria.
To discover whether parameters are optional, go ahead and inspect the stored procedure with the help of SQL Server Management Studio.
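Putting the pieces together, a minimal sketch of the skeleton framework might look like the following. Note that the server and instance names, the use of System.Data.SqlClient, and the @ClientApp parameter name are my assumptions for illustration, not the article's exact listing; inspect the stored procedure in SQL Server Management Studio for the real parameter names.

```powershell
# Minimal sketch; server/instance names below are placeholders.
$SqlConn = New-Object System.Data.SqlClient.SqlConnection
$SqlConn.ConnectionString = "Server=lyncfe01\rtclocal;Database=rtcdyn;Integrated Security=True"

$SqlCmd = New-Object System.Data.SqlClient.SqlCommand
$SqlCmd.Connection = $SqlConn
$SqlCmd.CommandType = [System.Data.CommandType]::StoredProcedure
$SqlCmd.CommandText = "DiagShowEndpointsByClientApp"

# Optional parameter: confine results to a given client version string.
# "@ClientApp" is a hypothetical parameter name; check the stored procedure.
$clientVersion = "Android"
$null = $SqlCmd.Parameters.AddWithValue("@ClientApp", $clientVersion)

$Adapter = New-Object System.Data.SqlClient.SqlDataAdapter $SqlCmd
$DataSet = New-Object System.Data.DataSet
$null = $Adapter.Fill($DataSet)

$DataSet.Tables[0] | Format-Table -AutoSize   # summary of client versions
$DataSet.Tables[1] | Format-Table -AutoSize   # rows matching the parameter
```

The same skeleton works for any of the built-in stored procedures: only the CommandText and parameters change.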
Using the readily available SQL stored procedures makes it easy to harness the power of the work already done by Microsoft to generate reports absent from the out-of-box Lync Monitoring Reports, while future-proofing your PowerShell scripts against underlying database changes and sparing you the agony of building your own complex SQL queries. I hope this article helps you put the vast potential of PowerShell and stored procedures to good use.
List price: $39.99
Sale price: $19.99
You save 50%
Conquer SharePoint 2013—from the inside out! You're beyond the basics, so dive right into SharePoint 2013 and really put your business collaboration platform to work! This supremely organized reference packs hundreds of timesaving solutions, troubleshooting techniques, and workarounds. It's all muscle and no fluff. Discover how the experts facilitate information sharing across the enterprise, and challenge yourself to new levels of mastery.
Terms & conditions
Each week, on Sunday at 12:01 AM PST / 7:01 AM GMT, a new eBook is offered for a one-week period. Check back each week for a new deal.
The products offered as our eBook Deal of the Week are not eligible for any other discounts. The Deal of the Week promotional price cannot be combined with other offers.
While working on a TMG 2010 issue, we were getting an error when generating any kind of report. The error is shown below.
System.Web.Services.Protocols.SoapException: The item '/ISA2008 Reports/Summary_ServerParticipation' cannot be found. --->
item '/ISA2008 Reports/Summary_ServerParticipation' cannot be found.
Report, String HistoryID, ExecutionInfo2& executionInfo)
Report, String HistoryID, ExecutionInfo& executionInfo).
The error occurred on object 'Reports' of class 'Reports Configuration' in the
scope of array <Servername>
It is always a good idea to check whether the report "Summary_ServerParticipation" actually exists in SQL or not.
So we opened the Reporting Services Configuration Manager on the TMG 2010 Server.
Clicked the Highlighted link below.
This opened an IE window, and we could not find any report in the location mentioned.
We compared this to a healthy TMG 2010 server and could see a bunch of reports present. Highlighted is the report for which we are getting the error:
So our task now was to restore the database holding the report shown above from a healthy TMG 2010 server to the TMG 2010 server we are facing the issue with.
Details of the TMG servers.
TMG2 – TMG 2010 Server with the issue.
TMG1 – Healthy TMG 2010 Server.
Take a backup before we proceed.
Based on the complexity of the action plan, we will accomplish this task in phases:
Phase 1: Taking Backup of the Database from TMG 2:
How to: Back Up a Database (SQL Server Management Studio)
Databases that we need to back up:
Phase 2: Backing Up the Encryption Key from Reporting Services Configuration Manager:
Backing up the Encryption Key.
Back up encryption keys -Reporting Services Configuration Manager (Native Mode)
Now that we are done backing up the databases from TMG2, we need to perform the same steps to export the database from the working TMG1 server.
At the end we will have three files that we need:
EK – Encryption Key
ReportServer$ISARS – Report Server database
ReportServer$ISARSTempDB – Report Server temporary database
Phase 3: Deleting the database from TMG2 and importing a healthy database from TMG1:
On TMG2, we need to delete only ReportServer$ISARS from SQL Server Management Studio.
We delete only this database because it is the one that contains our report, Summary_ServerParticipation, and we want to replace ReportServer$ISARS with a healthy copy.
How to: Restore a Backup from a Device (SQL Server Management Studio)
We only need to import ReportServer$ISARS, as that is what we are recovering.
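For reference, the restore can also be performed in T-SQL rather than through the Management Studio UI. This is only a sketch; the backup file path is a placeholder for wherever you saved the TMG1 backup:

```sql
-- Restore the healthy TMG1 copy of the Report Server database on TMG2.
-- The .bak path is hypothetical; adjust it to your backup location.
RESTORE DATABASE [ReportServer$ISARS]
    FROM DISK = N'C:\Backups\ReportServer_ISARS_TMG1.bak'
    WITH RECOVERY;
```

Either method produces the same result; use whichever you are more comfortable with.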
Phase 4: Importing the Encryption Key onto the TMG 2 server that we exported from the TMG 1 Server:
Restore encryption keys -Reporting Services Configuration Manager (Native Mode)
It will take a little while to complete this process; do watch out for the message in the Results section.
Phase 5: Deleting the TMG1 (working server) instance from the scale-out deployment.
Delete the working server instance (TMG1); after deleting, we should see only one server, which is TMG2.
How to Delete TMG1 Instance:
On the ReportServer$ISARS database, execute the query below:
delete from keys where machinename='TMG1'
To confirm the exact machine name, execute the query below against the dbo.Keys table in the ReportServer$ISARS database:
Select [MachineName] from [ReportServer$ISARS].[dbo].[Keys]
After deleting the TMG1 instance, visit the Scale-out Deployment status page.
We should see only the TMG2 server listed there.
This completes the process. To confirm that we can now see the reports on TMG2, follow the same procedure mentioned at the start of the article.
Or you can directly generate a report and see whether the error still occurs. If the steps were followed exactly, the issue should be rectified.
I hope this article comes in handy in solving these kinds of reporting issues while working on Forefront Threat Management Gateway.
Microsoft Security Support Engineer
Microsoft Security Support Escalation Engineer
So it was about time that someone wrote an article about the Advanced Features option on the Rights Management page.
But first of all what is Azure Rights Management?
Almost every organization is Internet-connected these days, with users bringing personal devices to work, accessing company data on the road and at home, and sharing sensitive information with important business partners. As part of their daily work, users share information by using email, file-sharing sites, and cloud services. In these scenarios, traditional security controls (such as access control lists, NTFS permissions, and firewalls) have limited effectiveness if you want to protect your company data while still empowering your users to work efficiently.
How do we enable it on the Office 365 Portal?
Going a little deeper, you can create and manage custom templates in the Azure Management Portal. You can do this directly from the Azure Management Portal, or you can sign in to the Office 365 admin center and choose the advanced features for Rights Management, which then redirects you to the Azure Management Portal.
More about the Advanced Features for Rights Management: http://technet.microsoft.com/en-in/library/dn642472.aspx
I am not here to talk about the advanced features themselves, though, but rather about what happens when we decide to try them out and end up with an error.
No Subscriptions Found when we click on the Advanced Feature button on the Rights Management Page
When we click on the advanced features button on the rights management page (Figure 1) we get a message like the one in Figure 2.
So what happened here?
I am logged on to Office 365 with email@example.com, so the tenant here is kausd.onmicrosoft.com.
Office 365 account: firstname.lastname@example.org
What is a Tenant?
As cloud users, we hear the term “tenant” a lot. According to the Oxford dictionary, a tenant is: a person who occupies land or property rented from a landlord.
In the same way, in Office 365 it means your own space, and in general it is denoted by .onmicrosoft.com.
So if one company has the Office 365 namespace email@example.com, then no other company can use that same namespace.
Now, when I click the advanced features button, it takes us to https://account.activedirectory.windowsazure.com/.
This page lands us on the Azure Management Portal once it verifies that there is a subscription associated with the account I used to log in to Office 365.
For example, Azure will check whether my account Kaustubh@kausd.onmicrosoft.com has a subscription associated with it; if one is found, it will then check whether my account is a service administrator or a co-administrator.
If my account fails either of these checks, it throws a message like the one in Figure 2.
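You can perform the first of these checks yourself with the classic Azure PowerShell module; a sketch, using the account from this article's example:

```powershell
# Sign in with the same account used for Office 365.
Add-AzureAccount

# List the subscriptions visible to this account. An empty result here
# corresponds to the "No Subscriptions Found" message in Figure 2.
Get-AzureSubscription |
    Format-Table SubscriptionName, DefaultAccount, IsDefault
```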
What is Microsoft Azure?
What is Azure? In short, it’s Microsoft’s cloud platform: a growing collection of integrated services—compute, storage, data, networking, and app—that help you move faster, do more, and save money. But that’s just scratching the surface.
If either of those checks fails, we will get the message we saw in Figure 2.
Now let’s look at how to resolve this kind of issue.
Some users are unsure whether or not they have an Azure account.
So let's check by visiting https://manage.windowsazure.com and seeing whether we can log in.
Log in with the same account you use for Office 365. At this point you will be logged out of Office 365 if the login to the Azure Portal succeeds.
Under the Settings section, check whether there is a subscription associated with the directory you intend to use (Figure 3). In my case it would be kausd.onmicrosoft.com.
Initially, I could never log on to the Azure Portal with Kaustubh@kausd.onmicrosoft.com, because kausd.onmicrosoft.com was not associated with any subscription.
I have multiple directories (Figure 3), and the default directory was the one that had the subscription associated with it. So I first had to log in to the portal as a global administrator (Figure 4), using the account firstname.lastname@example.org.
email@example.com was the account through which I first signed up for my Azure account.
Under the Settings section, click Edit Directory (Figure 5 and Figure 6).
NOTE: You must be a global administrator to make this change. Once you associate the directory, you need to make Kaustubh@kausd.onmicrosoft.com either a service administrator or a co-administrator.
For this, go to the Active Directory section and select the directory that the account (firstname.lastname@example.org) belongs to.
Under the Users tab, find the account; once you open it, you will see a section where a role can be assigned (Figure 7). Again, you must be a global administrator to do this.
Here I have made Kaustubh@kausd.onmicrosoft.com the service administrator.
Once these steps are complete, we should be all set.
If we don't have an Azure subscription, we can always sign up for a trial.
Click Sign up for Windows Azure, and then proceed with the procedure in Scenario 1.
I hope you find the above information helpful. Welcome to the world of Microsoft Rights Management.
Microsoft Security Support Engineer
Microsoft Security Support Escalation Engineer
To demonstrate how SharePoint and SQL Server AlwaysOn work together, especially with a failover, I did a quick video to show it in action. Here it is, enjoy (https://www.youtube.com/watch?v=se_M1vdriMA if the embedded video doesn’t load):
SQL Server AlwaysOn is a key tool in maintaining a high-availability SharePoint solution. More information on how to implement it @ http://blogs.msdn.com/b/sambetts/archive/2014/05/16/sharepoint-2013-on-sql-server-alwayson-2014-edition.aspx
// Sam Betts
Today I am going to talk about a new feature that the Office 365 team has introduced: “Office 365 Message Encryption”.
Office 365 Message Encryption depends on Microsoft Azure Rights Management (previously known as Windows Azure Active Directory Rights Management). To use this encryption service, you must have an Office 365 organization that includes an Exchange Online or Exchange Online Protection subscription, which in turn includes an Azure Rights Management subscription.
Office 365 Enterprise E3 includes:
So when creating a transport rule for message encryption, we may receive the following error.
You can't create a rule containing the ApplyOME or RemoveOME action because IRM licensing is disabled
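For context, a rule of the following shape is what triggers the check; the rule name and condition here are hypothetical examples:

```powershell
# Attempting to create an encryption rule while IRM internal licensing
# is disabled fails with the error shown above.
New-TransportRule -Name "Encrypt tagged mail" `
    -SubjectOrBodyContainsWords "Encrypt" `
    -ApplyOME $true
```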
After doing some research, I found that this was because internal licensing for IRM was not enabled.
Please refer to the TechNet article below, which helps explain and enable internal licensing for IRM.
Note: Follow all six steps in the article mentioned below.
Also, to connect to Exchange Online, we can use PowerShell from any machine and connect as described in the article below.
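Put together, the fix looks roughly like this in a remote Exchange Online PowerShell session; the sender address below is a placeholder:

```powershell
# Connect to Exchange Online via remote PowerShell.
$Cred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "https://outlook.office365.com/powershell-liveid/" `
    -Credential $Cred -Authentication Basic -AllowRedirection
Import-PSSession $Session

# Enable internal licensing so the ApplyOME/RemoveOME actions are allowed.
Set-IRMConfiguration -InternalLicensingEnabled $true

# Verify the IRM configuration end to end for a sender in the tenant.
Test-IRMConfiguration -Sender admin@contoso.com

Remove-PSSession $Session
```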
Microsoft Security Support Engineer
Microsoft Security Support Escalation Engineer