
Feed aggregator

Unable to start debugging on the web server. Operation not supported. Unknown error. 0x80004005

MSDN Blogs - 11 hours 11 min ago

I was trying to run my application hosted in IIS from Visual Studio. On pressing F5, it gave me the following error: “Unable to start debugging on the web server. Operation not supported. Unknown error. 0x80004005”

 

 

Go to the application pool in IIS under which your application is running. Right click -> Advanced Settings, then set Enable 32-Bit Applications to True.
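If you prefer to script the change, below is a minimal PowerShell sketch using the WebAdministration module that ships with IIS; the pool name "MyAppPool" is a placeholder for the application pool that hosts your site, and the commands assume an elevated session.

Import-Module WebAdministration

# "MyAppPool" is a placeholder; use the application pool that hosts your application.
Set-ItemProperty -Path 'IIS:\AppPools\MyAppPool' -Name enable32BitAppOnWin64 -Value $true

# Recycle the pool so the new setting takes effect.
Restart-WebAppPool -Name 'MyAppPool'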

 

Adding/Updating SharePoint O365 Calendar Event

MSDN Blogs - 11 hours 12 min ago

To add or update a calendar event in SharePoint O365:

First, connect to the SharePoint site.

 

Now, if the listItemCollection already has data, update the existing item; otherwise insert a new calendar event.
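The code samples from the original post did not survive aggregation, so here is a minimal CSOM (C#) sketch of the idea; the site URL, credentials, list name "Events" and the CAML query are assumptions to adjust for your environment.

using System;
using System.Security;
using Microsoft.SharePoint.Client; // SharePoint Online CSOM

class CalendarSample
{
    static void Main()
    {
        // Assumed credentials; use your own account.
        var password = new SecureString();
        foreach (char c in "password-here") password.AppendChar(c);

        using (var ctx = new ClientContext("https://contoso.sharepoint.com/sites/team"))
        {
            ctx.Credentials = new SharePointOnlineCredentials("user@contoso.onmicrosoft.com", password);

            // "Events" is the default calendar list title; change it if yours differs.
            List calendar = ctx.Web.Lists.GetByTitle("Events");

            // Look for an existing event with a known title.
            var query = new CamlQuery
            {
                ViewXml = "<View><Query><Where><Eq><FieldRef Name='Title'/>" +
                          "<Value Type='Text'>Team Sync</Value></Eq></Where></Query></View>"
            };
            ListItemCollection items = calendar.GetItems(query);
            ctx.Load(items);
            ctx.ExecuteQuery();

            ListItem item = items.Count > 0
                ? items[0]                                             // update the existing event
                : calendar.AddItem(new ListItemCreationInformation()); // or insert a new one

            item["Title"] = "Team Sync";
            item["EventDate"] = DateTime.Today.AddDays(1).AddHours(9); // start
            item["EndDate"] = DateTime.Today.AddDays(1).AddHours(10);  // end
            item.Update();
            ctx.ExecuteQuery();
        }
    }
}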

 

 

BizTalk 2013: Suspended Error Message – The message found multiple request response subscriptions. A message can only be routed to a single request response subscription

MSDN Blogs - 11 hours 29 min ago

BizTalk 2013: If you find the suspended messages below in your application:

  1. The message found multiple request response subscriptions. A message can only be routed to a single request response subscription.

  2. This service instance exists to help debug routing failures for instance “{84685AE1-3D71-49E0-BB16-87B6A3049AFD}”. The context of the message associated with this instance contains all the promoted properties at the time of the routing failure.

This can happen when multiple ports or orchestrations are trying to listen to the same request message. Check whether a previous version of the orchestration in your application is in the "Stopped" state or is still running; it should be unenlisted, or the old versions should be removed. Also check the receive ports and send ports for any ports that are listening to the same type of message.

Froggy goes to Seattle: the World Championship and getting ready to head home

MSDN Blogs - Fri, 07/29/2016 - 23:01

On the final day of Imagine Cup 2016, the first-place winners from each category competed again in the World Championship, which selects a single overall winner, the World Champion. The three teams presented in turn: the Games winner from Thailand, the Innovation winner from Romania, and the World Citizenship winner from Greece.

Three judges were tasked with choosing the World Champion: Jennifer Tang (Imagine Cup 2014 World Champion), Kasey Champion (Software Engineer at Microsoft) and John Boyega (the actor who plays Finn in Star Wars: The Force Awakens). The judges chose ENTy from Romania as the World Champion!

With the World Champion chosen, the entire Imagine Cup 2016 season around the world has come to a close. Now we await the official announcement of Imagine Cup 2017.

After the World Championship ended, the None Developers team took some time to walk around downtown Seattle and shop for souvenirs. The whole group then returned to the dorms and attended the Imagine Cup 2016 Closing Party, together with all the World Finalists.

 

 

Tonight, the whole group will start packing, because the main party departs on a 9 AM flight, which means leaving for the airport by bus at 6 AM. The second group will follow by bus at 10 AM for a 2 PM flight. Please wish us a safe and smooth journey home!

Using SQL Server Stored Procedures to Dynamically Drop and Recreate Indexes

MSDN Blogs - Fri, 07/29/2016 - 21:36

Recently, I’ve been working on a project that of necessity involves periodically updating data in some reasonably large tables that exist in an Operational Data Store (ODS). This particular ODS is used for both reporting via SQL Server Reporting Services and staging data for use in a SQL Server Analysis Services database. Since a number of the tables in the ODS are used for reporting purposes, it’s not entirely surprising that the report designers have created a few indexes to help report performance. I don’t have a problem with indexes, but as any experienced DBA is well aware, the more and larger the indexes the greater the impact on the performance of inserts and updates.

The magnitude of the performance impact was really brought home when a simple update on a 12 million row table that normally completed in roughly three minutes had to be killed at the two hour mark. On further investigation, it was found that over 30 indexes had been added to the table in question. So to address the immediate problem and allow the update to complete in a reasonable time period, DROP INDEX and CREATE INDEX commands were scripted out and added to a stored procedure which would first drop the indexes then run the update statement and finally recreate the indexes. That worked well for a couple of days and then performance again began to degrade. When this episode of performance degradation was investigated, it was found that of the indexes that had been scripted out and added to the stored procedure only one remained and several additional indexes had been created.

Not wishing to revise a rather lengthy stored proc on an almost daily basis, after a brief bit of research, I found a blog posting by Percy Reyes entitled Script out all SQL Server Indexes in a Database using T-SQL. That was great, but only covered NONCLUSTERED indexes and since we were seeing both CLUSTERED and NONCLUSTERED indexes, it would need a bit of revision. Coincidentally, at about the same time there was some serious talk of adding COLUMNSTORE indexes on one or two of these tables, which would essentially cause any update statement to fail. The possibility of having to contend with COLUMNSTORE indexes in addition to CLUSTERED and NONCLUSTERED indexes would necessitate a reasonably significant revision to the T-SQL presented in Percy’s blog, especially since it would be necessary to dynamically drop and then recreate indexes. With those bits of information, it was time to formulate a plan, which would mean accomplishing the following:

  1. Capturing and storing the names of tables, their associated indexes and the definitions of those indexes
  2. Dropping the indexes after the definitions had been safely stored
  3. Recreating the indexes from stored definitions using correct syntax

A relatively simple three-step task, the first of which was to create a stored proc that would capture the names of the tables, associated indexes and the definitions of those indexes. That led to the creation of the following SQL Server stored procedure:

CREATE PROCEDURE [dbo].[sp_GetIndexDefinitions]
as

IF NOT EXISTS (SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME='WORK')
BEGIN
EXEC sp_executesql N'CREATE SCHEMA WORK'
END

IF OBJECT_ID('[WORK].[IDXDEF]','U') IS NOT NULL DROP TABLE [WORK].[IDXDEF]

CREATE TABLE WORK.IDXDEF (SchemaName NVARCHAR(100), TableName NVARCHAR(256), IndexName NVARCHAR(256), IndexDef NVARCHAR(max))

DECLARE @SchemaName VARCHAR(100)
DECLARE @TableName VARCHAR(256)
DECLARE @IndexName VARCHAR(256)
DECLARE @ColumnName VARCHAR(100)
DECLARE @is_unique VARCHAR(100)
DECLARE @IndexTypeDesc VARCHAR(100)
DECLARE @FileGroupName VARCHAR(100)
DECLARE @is_disabled VARCHAR(100)
DECLARE @IndexOptions VARCHAR(max)
DECLARE @IndexColumnId INT
DECLARE @IsDescendingKey INT
DECLARE @IsIncludedColumn INT
DECLARE @TSQLScripCreationIndex VARCHAR(max)
DECLARE @TSQLScripDisableIndex VARCHAR(max)

DECLARE CursorIndex CURSOR FOR
SELECT schema_name(st.schema_id) [schema_name], st.name, si.name,
CASE WHEN si.is_unique = 1 THEN 'UNIQUE ' ELSE '' END
, si.type_desc,
CASE WHEN si.is_padded=1 THEN 'PAD_INDEX = ON, ' ELSE 'PAD_INDEX = OFF, ' END
+ CASE WHEN si.allow_page_locks=1 THEN 'ALLOW_PAGE_LOCKS = ON, ' ELSE 'ALLOW_PAGE_LOCKS = OFF, ' END
+ CASE WHEN si.allow_row_locks=1 THEN  'ALLOW_ROW_LOCKS = ON, ' ELSE 'ALLOW_ROW_LOCKS = OFF, ' END
+ CASE WHEN INDEXPROPERTY(st.object_id, si.name, 'IsStatistics') = 1 THEN 'STATISTICS_NORECOMPUTE = ON, ' ELSE 'STATISTICS_NORECOMPUTE = OFF, ' END
+ CASE WHEN si.ignore_dup_key=1 THEN 'IGNORE_DUP_KEY = ON, ' ELSE 'IGNORE_DUP_KEY = OFF, ' END
+ 'SORT_IN_TEMPDB = OFF'
+ CASE WHEN si.fill_factor>0 THEN ', FILLFACTOR =' + cast(si.fill_factor as VARCHAR(3)) ELSE '' END  AS IndexOptions
,si.is_disabled , FILEGROUP_NAME(si.data_space_id) FileGroupName
FROM sys.tables st
INNER JOIN sys.indexes si on st.object_id=si.object_id
WHERE si.type>0 and si.is_primary_key=0 and si.is_unique_constraint=0 --and schema_name(tb.schema_id)= @SchemaName and tb.name=@TableName
and st.is_ms_shipped=0 and st.name<>'sysdiagrams'
ORDER BY schema_name(st.schema_id), st.name, si.name

open CursorIndex
FETCH NEXT FROM CursorIndex INTO  @SchemaName, @TableName, @IndexName, @is_unique, @IndexTypeDesc, @IndexOptions, @is_disabled, @FileGroupName

WHILE (@@fetch_status=0)
BEGIN
DECLARE @IndexColumns VARCHAR(max)
DECLARE @IncludedColumns VARCHAR(max)

SET @IndexColumns=''
SET @IncludedColumns=''

DECLARE CursorIndexColumn CURSOR FOR
SELECT col.name, sic.is_descending_key, sic.is_included_column
FROM sys.tables tb
INNER JOIN sys.indexes si on tb.object_id=si.object_id
INNER JOIN sys.index_columns sic on si.object_id=sic.object_id and si.index_id= sic.index_id
INNER JOIN sys.columns col on sic.object_id =col.object_id  and sic.column_id=col.column_id
WHERE si.type>0 and (si.is_primary_key=0 or si.is_unique_constraint=0)
and schema_name(tb.schema_id)=@SchemaName and tb.name=@TableName and si.name=@IndexName
ORDER BY sic.index_column_id

OPEN CursorIndexColumn
FETCH NEXT FROM CursorIndexColumn INTO  @ColumnName, @IsDescendingKey, @IsIncludedColumn

WHILE (@@fetch_status=0)
BEGIN
IF @IsIncludedColumn=0
SET @IndexColumns=@IndexColumns + @ColumnName  + CASE WHEN @IsDescendingKey=1  THEN ' DESC, ' ELSE  ' ASC, ' END
ELSE
SET @IncludedColumns=@IncludedColumns  + @ColumnName  +', '

FETCH NEXT FROM CursorIndexColumn INTO @ColumnName, @IsDescendingKey, @IsIncludedColumn
END

CLOSE CursorIndexColumn
DEALLOCATE CursorIndexColumn
SET @IndexColumns = substring(@IndexColumns, 0, len(ltrim(rtrim(@IndexColumns))))
SET @IncludedColumns = CASE WHEN len(@IncludedColumns) >0 THEN substring(@IncludedColumns, 0, len(@IncludedColumns)) ELSE '' END

SET @TSQLScripCreationIndex =''
SET @TSQLScripDisableIndex =''
SET @TSQLScripCreationIndex='CREATE '+ @is_unique  + @IndexTypeDesc + ' INDEX ' +QUOTENAME(@IndexName)+' ON ' +QUOTENAME(@SchemaName) +'.'+ QUOTENAME(@TableName)+
CASE WHEN @IndexTypeDesc = 'NONCLUSTERED COLUMNSTORE' THEN ' ('+@IncludedColumns+') '
WHEN @IndexTypeDesc = 'CLUSTERED COLUMNSTORE' THEN ' '
ELSE ' ('+@IndexColumns+') '
END  +
CASE WHEN @IndexTypeDesc = 'NONCLUSTERED COLUMNSTORE' and len(@IncludedColumns)>0 THEN ''
when @IndexTypeDesc = 'CLUSTERED COLUMNSTORE' THEN ''
ELSE
CASE WHEN LEN(@IncludedColumns)>0 THEN CHAR(13) +'INCLUDE (' + @IncludedColumns+ ')' ELSE '' END
END  +
CASE WHEN @IndexTypeDesc not like ('%COLUMNSTORE%') THEN CHAR(13) + 'WITH (' + @IndexOptions + ') ' + ' ON ' + QUOTENAME(@FileGroupName) ELSE '' END  + ';'

INSERT INTO [WORK].[IDXDEF] (Schemaname,TableName,IndexName,IndexDef) values (@SchemaName, @TableName, @IndexName, @TSQLScripCreationIndex)

FETCH NEXT FROM CursorIndex INTO  @SchemaName, @TableName, @IndexName, @is_unique, @IndexTypeDesc, @IndexOptions, @is_disabled, @FileGroupName

END
CLOSE CursorIndex
DEALLOCATE CursorIndex

When that tested out, it was time for the next step of the task: dynamically dropping the indexes. But I wanted to ensure that when the indexes were dropped, the index definitions would be safely stored (like most other DBAs, I rather enjoy being employed). That resulted in the creation of the following stored proc:

CREATE PROCEDURE [dbo].[sp_DropIndexes] as

EXEC sp_GetIndexDefinitions

DECLARE @DropIndex NVARCHAR(max)
DECLARE @SchemaName NVARCHAR(256)
DECLARE @TableName NVARCHAR(256)
DECLARE @IndexName NVARCHAR(256)
DECLARE CursorIDXDrop CURSOR FOR
SELECT DISTINCT ss.name AS schemaname, st.name AS tblname, si.name AS indexnname
FROM sys.tables st
INNER JOIN sys.schemas ss ON st.schema_id=ss.schema_id
INNER JOIN sys.indexes si ON st.object_id=si.object_id
WHERE si.type<>0 AND st.is_ms_shipped=0 AND st.name<>'sysdiagrams' AND
(is_primary_key=0 AND is_unique_constraint=0)

OPEN CursorIDXDrop
FETCH NEXT FROM CursorIDXDrop INTO @SchemaName, @TableName, @IndexName
WHILE @@FETCH_STATUS=0
BEGIN
SET @DropIndex= 'DROP INDEX ' + QUOTENAME(@IndexName) + ' ON ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName)
EXEC sp_executesql @DropIndex
FETCH NEXT FROM CursorIDXDrop INTO  @SchemaName, @TableName, @IndexName
END
CLOSE CursorIDXDrop
DEALLOCATE CursorIDXDrop

After that worked, with the index definitions safely stored so that I could manually re-create them if necessary, it was time to move on to the third step of dynamically re-creating the indexes. That resulted in creation of the following stored proc:

CREATE PROCEDURE [dbo].[sp_RebuildIndexes]
as

IF OBJECT_ID('[WORK].[IDXDEF]','U') IS NOT NULL
BEGIN

DECLARE @Command nvarchar(max)
DECLARE @IndexCmd nvarchar(max)
DECLARE @NumRows int
DECLARE CursorIndexes CURSOR for
SELECT DISTINCT CAST(IndexDef as nvarchar(max)) as IndexDef from work.IDXDEF
SET @NumRows=0
OPEN CursorIndexes
FETCH NEXT FROM CursorIndexes into @Command
WHILE @@FETCH_STATUS=0
BEGIN
EXEC sp_executesql @Command
FETCH NEXT FROM CursorIndexes into @Command
SET @NumRows=@NumRows+1
END;
CLOSE CursorIndexes
DEALLOCATE CursorIndexes
--PRINT LTRIM(RTRIM(CAST(@NumRows as varchar(10)))) + ' Indexes Recreated from stored definitions'
END

By having my update script call sp_DropIndexes before the update and sp_RebuildIndexes afterwards, it was very easy to drop the indexes, run the updates, and then re-create the indexes in a reasonable time period without having to continually revise code.
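To make the pattern concrete, here is a minimal sketch of what such an update script might look like; the table, columns and filter are made-up placeholders, not from the original post.

-- Hypothetical update script: ods.FactSales and its columns are placeholders.
EXEC dbo.sp_DropIndexes;        -- index definitions are captured into WORK.IDXDEF, then the indexes are dropped

UPDATE ods.FactSales
SET    UnitPrice = UnitPrice * 1.02
WHERE  LastModified >= DATEADD(DAY, -1, GETDATE());

EXEC dbo.sp_RebuildIndexes;     -- indexes are re-created from the stored definitions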

The Imagine Cup 2016 World Finals Have Come to a Close!

MSDN Blogs - Fri, 07/29/2016 - 19:10

Although the Japanese team unfortunately missed out on a prize, we came away realizing that Imagine Cup is not just about impressive technology; the business elements matter just as much: How do you turn the technology into a business? What is the go-to-market strategy? What makes the technology better than existing alternatives? How large is its market potential, and how many people can it help?

Asakura-san and Uehara-san from the University of Tsukuba declared, "We'll be back for revenge next year!" We hope they keep at it and make it to the final stage next year!

 

After the award ceremony, the Imagine Cup competitors and the students who attended the MSP Summit went to the Holographic Academy to try HoloLens. They came back thrilled, saying it was amazing!

 

At Friday's World Championship, the winning teams from the Games, Innovation and World Citizenship categories each gave a three-minute pitch on stage. The final showdown, held at Garfield High School in Seattle, was greeted by Star Wars characters, and the ceremony featured Microsoft Executive Vice President Judson Althoff; Corporate Vice President of Developer Platform & Evangelism and Chief Evangelist Steve Guggenheimer; 2014 Imagine Cup winner and Microsoft Computer Science Curriculum Developer Kasey Champion; and a Hollywood star. The winner was ENTy, the Romanian team from the Innovation category, which developed a simple medical device that monitors body balance and posture in real time. Their presentation also made a strong point of the fact that the device is already in use by several doctors and hundreds of patients.

After the award ceremony, the MSPs also did a great job as mentors for local children at the Robo World Cup Hackathon venue!

To the Biomachine Industrial team from the University of Tsukuba, Japan's Imagine Cup representatives, and to our two MSPs: thank you for all your hard work!!

Packaging issues with Visual Studio Team Services – 7/30 – Investigating

MSDN Blogs - Fri, 07/29/2016 - 19:07

Initial Update: Saturday, 30 July 2016 02:01 UTC

We are actively investigating issues with Visual Studio Team Services. Some customers may see a build failure with NuGet errors if both of the conditions below apply.

1) If you have at least one VSTS Nuget package source
2) If you have more than one NuGet restore task in the build definition.

The symptom is that the second NuGet restore build task fails.

You will see an error message like the one below:
Unable to find version '1.8.1' of package 'Microsoft.Cosmos.Client'.
##[debug]rc:1
##[debug]success:false
##[error]Error: C:\BA\agent\Worker\Tools\nuget.exe failed with return code: 1
##[error]Packages failed to install
##[debug]task result: Failed

Workaround: If you hit the above repro, go to your agent pool in the web UI, right-click, and choose "Update All Agents".

Next Update: Before 30 July 2016 06:00 UTC

We are working to resolve this issue and apologize for any inconvenience.

Sincerely,
Bapayya

.NET 4.6.2 and long paths on Windows 10

MSDN Blogs - Fri, 07/29/2016 - 18:21

The Windows 10 Anniversary update is almost out the door. .NET 4.6.2 is in the update (as we’ve looked at in the past few posts). I’ve talked a bit about what we’ve done in 4.6.2 around paths, and how that is targeted at both allowing access to previously inaccessible paths and opens up the door for long paths when the OS has support. Well, as people have discovered, Windows 10 now has started to open up support. In this post I’ll talk about how to enable that support.

Enabling Win32 Long Path Support

Long paths aren’t enabled by default yet. You need to set a policy to enable the support. To do this you want to “Edit group policy” in the Start search bar or run “gpedit.msc” from the Run command (Windows-R).

In the Local Group Policy Editor navigate to “Local Computer Policy: Computer Configuration: Administrative Templates: All Settings“. In this location you can find “Enable Win32 long paths“.

Enabling Win32 long paths in the policy editor.

After you’ve turned this on you can fire up a new instance of PowerShell and free yourself from the constraints of MAX_PATH! The key File and Directory Management APIs respect this and now allow you to skip the check for MAX_PATH without having to resort to using the "\\?\" prefix (look back to my earlier posts on path formats to understand how this works). This is also possible as PowerShell has opted into the new .NET path support (being that it is a .NET application).

If you look carefully at the description in the setting you’ll see “Enabling Win32 long paths will allow manifested win32 applications…”. That’s the second gate to getting support: your app must have a specific manifest setting. You can see what this is by opening C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe in Visual Studio or some other manifest viewer. Doing so you’ll see the following section in its manifest:

<application xmlns="urn:schemas-microsoft-com:asm.v3">
  <windowsSettings>
    <longPathAware xmlns="http://schemas.microsoft.com/SMI/2016/WindowsSettings">true</longPathAware>
  </windowsSettings>
</application>

These two gates will get you the native (Win32) support for long paths. In a managed app you’ll also need the new behavior in .NET. The next section covers this.

Configuring a Simple Long Path .NET Console App

This example uses a new C# Console Application in Visual Studio 2015.

The first thing to do after creating a new console app is edit the App.Config file and add the following after the <startup> end tag:

<runtime>
  <AppContextSwitchOverrides value="Switch.System.IO.UseLegacyPathHandling=false;Switch.System.IO.BlockLongPaths=false" />
</runtime>

Once the 4.6.2 Targeting Pack is released you can alternatively select 4.6.2 as your target framework in the project properties instead of using the app.config setting. The defaults for these two values are true if the target framework is 4.6.1 or earlier.

The second thing to do is add the Application Manifest File item to your project. After doing so, add the windowsSettings block I shared above. In the default template there is already a commented-out section for windowsSettings; you can uncomment it and add this specific longPathAware setting.

Here is a sample block to add to your Main() method to test it out:

string reallyLongDirectory = @"C:\Test\abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
reallyLongDirectory = reallyLongDirectory + @"\abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
reallyLongDirectory = reallyLongDirectory + @"\abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

Console.WriteLine($"Creating a directory that is {reallyLongDirectory.Length} characters long");
Directory.CreateDirectory(reallyLongDirectory);

You can open up PowerShell and go and look at the directory you created! Yayyyyyy!

This is the start of what has been a very long journey to remove MAX_PATH constraints. There is still much to do, but now the door is finally open. The rest can and will come; keep your feedback coming in to keep us on track!

Note that in this initial release CMD doesn’t support long paths. The Shell doesn’t add support either, but previously had limited support utilizing 8.3 filename trickery. I’ll leave it to the Windows team for any further details.

SQL Updates Newsletter – July 2016

MSDN Blogs - Fri, 07/29/2016 - 18:03
Recent Releases and Announcements

 

Recent Whitepapers/E-books/Training/Tutorials

 

Monthly Script Tips

 

Windows Server 2016 – Get Started

 

Issue Alert

 

Fany Carolina Vargas | SQL Dedicated Premier Field Engineer | Microsoft Services

 

 

 

What’s in a PDB file? Use the Debug Interface Access SDK

MSDN Blogs - Fri, 07/29/2016 - 17:37

It’s easy to use C# code and MSDia140.dll from the Debug Interface Access SDK to examine what’s inside a PDB.

A PDB (Program Database) file is generated when an executable such as an EXE or DLL is built. It includes a lot of information about the file that is very useful to a debugger, including the names and addresses of symbols.

Managed code PDB contents are somewhat different from native code: a lot of the managed code information can be obtained from other sources. For example, the Type of a symbol can be obtained from the Metadata of the binary.

Below is some sample code that uses the DIA SDK to read a PDB and display its contents.

See also

Write your own Linq query viewer

Use DataTemplates and WPF in code to create a general purpose LINQ Query results display

using Dia2Lib;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;

// File->New->Project->C# Windows WPF Application.
// Replace MainWindow.Xaml.cs with this content
// add a reference to c:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Packages\Debugger\msdia140.dll
namespace WpfApplication1
{
    public partial class MainWindow : Window
    {
        class SymbolInfo
        {
            public int Level { get; set; } // recursion level
            public string SymbolName { get; set; }
            public uint LocationType { get; set; }
            public ulong Length { get; set; }
            public uint AddressOffset { get; set; }
            public uint RelativeAddress { get; set; }
            public string SourceFileName { get; set; }
            public uint SourceLineNo { get; set; }
            public SymTagEnum SymTag { get; set; }
            public string SymbolType { get; set; }
            public override string ToString()
            {
                return $"{SymbolName} {SourceFileName}({SourceLineNo}) {SymbolType}";
            }
        }

        public MainWindow()
        {
            InitializeComponent();
            this.Loaded += (ol, el) =>
            {
                try
                {
                    this.WindowState = WindowState.Maximized;
                    var pdbName = System.IO.Path.ChangeExtension(Assembly.GetExecutingAssembly().Location, "pdb");
                    this.Title = pdbName;
                    var lstSymInfo = new List<SymbolInfo>();
                    using (var diaUtil = new DiaUtil(pdbName))
                    {
                        Action<IDiaEnumSymbols, int> lamEnum = null; // recursive lambda
                        lamEnum = (enumSym, lvl) =>
                        {
                            if (enumSym != null)
                            {
                                foreach (IDiaSymbol sym in enumSym)
                                {
                                    var symbolInfo = new SymbolInfo()
                                    {
                                        Level = lvl,
                                        SymbolName = sym.name,
                                        Length = sym.length,
                                        LocationType = sym.locationType,
                                        SymTag = (SymTagEnum)sym.symTag,
                                        AddressOffset = sym.addressOffset,
                                        RelativeAddress = sym.relativeVirtualAddress
                                    };
                                    var symType = sym.type;
                                    if (symType != null)
                                    {
                                        var symtypename = symType.name;
                                        symbolInfo.SymbolType = symtypename;
                                    }
                                    lstSymInfo.Add(symbolInfo);
                                    if (sym.addressOffset > 0 && sym.addressSection > 0 && sym.length > 0)
                                    {
                                        try
                                        {
                                            IDiaEnumLineNumbers enumLineNums;
                                            diaUtil._IDiaSession.findLinesByAddr(sym.addressSection, sym.addressOffset, (uint)sym.length, out enumLineNums);
                                            if (enumLineNums != null)
                                            {
                                                foreach (IDiaLineNumber line in enumLineNums)
                                                {
                                                    var linenumber = line.lineNumber;
                                                    symbolInfo.SourceFileName = line.sourceFile.fileName;
                                                    symbolInfo.SourceLineNo = line.lineNumber;
                                                    break;
                                                }
                                            }
                                        }
                                        catch (Exception) { }
                                    }
                                    switch (symbolInfo.SymTag)
                                    {
                                        case SymTagEnum.SymTagFunction:
                                        case SymTagEnum.SymTagBlock:
                                        case SymTagEnum.SymTagCompiland:
                                            IDiaEnumSymbols enumChildren;
                                            sym.findChildren(SymTagEnum.SymTagNull, name: null, compareFlags: 0, ppResult: out enumChildren);
                                            lamEnum.Invoke(enumChildren, lvl + 1);
                                            break;
                                    }
                                }
                            }
                        };
                        /* query by table of symbols
                        IDiaEnumTables enumTables;
                        diaUtil._IDiaSession.getEnumTables(out enumTables);
                        foreach (IDiaTable tabl in enumTables)
                        {
                            var tblName = tabl.name;
                            if (tblName == "Symbols")
                            {
                                IDiaEnumSymbols enumSyms = tabl as IDiaEnumSymbols;
                                lamEnum.Invoke(enumSyms, 0);
                            }
                        }
                        /*/
                        // query by global scope
                        var globalScope = diaUtil._IDiaSession.globalScope;
                        IDiaEnumSymbols enumSymGlobal;
                        globalScope.findChildrenEx(SymTagEnum.SymTagNull, name: null, compareFlags: 0, ppResult: out enumSymGlobal);
                        lamEnum.Invoke(enumSymGlobal, 0);
                        //*/
                    }
                    var gridvw = new GridView();
                    foreach (var mem in typeof(SymbolInfo).GetMembers().Where(m => m.MemberType == MemberTypes.Property))
                    {
                        var gridCol = new GridViewColumn();
                        gridvw.Columns.Add(gridCol);
                        gridCol.Header = new GridViewColumnHeader() { Content = mem.Name };
                        var template = new DataTemplate(typeof(SymbolInfo));
                        var factTblk = new FrameworkElementFactory(typeof(TextBlock));
                        factTblk.SetBinding(TextBlock.TextProperty, new Binding(mem.Name));
                        // for wide columns let's set the tooltip too
                        factTblk.SetBinding(TextBlock.ToolTipProperty, new Binding(mem.Name));
                        factTblk.SetValue(TextBlock.MaxWidthProperty, 300.0);
                        var factSP = new FrameworkElementFactory(typeof(StackPanel));
                        factSP.SetValue(StackPanel.OrientationProperty, Orientation.Horizontal);
                        factSP.AppendChild(factTblk);
                        template.VisualTree = factSP;
                        gridCol.CellTemplate = template;
                    }
                    var lv = new ListView() { ItemsSource = lstSymInfo, View = gridvw };
                    lv.DataContext = lstSymInfo;
                    this.Content = lv;
                }
                catch (Exception ex)
                {
                    this.Content = ex.ToString();
                }
            };
        }
    }

    public class DiaUtil : IDisposable
    {
        public IDiaDataSource _IDiaDataSource;
        public IDiaSession _IDiaSession;
        public DiaUtil(string pdbName)
        {
            _IDiaDataSource = new DiaSource();
            _IDiaDataSource.loadDataFromPdb(pdbName);
            _IDiaDataSource.openSession(out _IDiaSession);
        }
        public void Dispose()
        {
            Marshal.ReleaseComObject(_IDiaSession);
            Marshal.ReleaseComObject(_IDiaDataSource);
        }
    }
}

Tips and Tricks on Doing “Lift and Shift” On-Prem Systems to Azure

MSDN Blogs - Fri, 07/29/2016 - 17:14

While Microsoft Azure offers an open and flexible platform for PaaS solutions, customers and partners usually take a “Lift and Shift” approach to moving their existing apps to Azure; that is, they try to keep and run the systems as-is or with minimal changes. The reason they take this approach is rather obvious, whether it is for a proof of concept (POC), a pilot or a full migration. Most of these on-prem systems have dependencies on other internal or external systems, and any change to the infrastructure configuration, not to mention source code changes, requires further testing, which takes time and people. With this approach, they are also interested in evaluating the overall cost as compared to on-prem hosting. Working with several ISV partners, I have discovered and learned 12 important lessons, most of which relate to manageability and security and should apply to both hybrid and cloud-only migrations, and I would like to share them here.

Working with Azure Resource Group

A resource group is a container that holds related resources for an application, along with role-based access controls. It’s up to you to determine how many resource groups you want as you create VMs, networks, etc. to support one or many apps on Azure. While it is not wrong to create multiple resource groups for your apps, with one VNET within each resource group, you will discover very quickly that doing so requires a fair amount of configuration if you have to enable communications between these VNETs.

It is common, however, to create one dedicated resource group for Azure networking, that is, the Azure VNET and subnets, and to grant read-only, contributor or custom role permissions to the group of people who are responsible for managing the networking at your organization.

Creating Windows Active Directory AD Domain

When you have multiple Windows Active Directory (AD) domains, you may be thinking about whether or not to consolidate the domains to simplify AD management. On the other hand, you may be wondering if making such a change would break existing administrative boundaries among teams. The general rule of thumb is to make no or little change during the initial phase of “lift and shift” unless the benefits of making changes outweigh the no-change option.

It is worth noting that the Windows AD database and logs must be placed on Azure data disks (the same disk or separate disks) and that the host cache preference setting on those data disks must be set to NONE. Here is why: Active Directory Domain Services (AD DS) uses update sequence numbers (USNs) to keep track of replication of data between domain controllers. Failure to disable write caching may, under certain circumstances, introduce USN rollback, resulting in lingering objects and other problems. For more info, read the AD documentation.

To further protect your AD identity systems, you can implement the Tier model.

Considering Custom AD Domain with Azure DNS

You can use Azure DNS or your own DNS. If you use custom domains or subdomains, e.g. mycompanydomain.com, for public accessible URLs, you 

Configuring VNET

You can choose to have one or many VNETs. My colleague Igor has put together a nice blog post explaining how to configure communications between these VNETs. It is not uncommon that you go with one VNET with multiple subnets and place them in one separate resource group. You then grant appropriate permissions to users from other resource groups.

Leveraging Network Security Group and User Defined Routes

To protect your resources in Azure, you can use NSG to set up access controls and UDR to route traffic flows.
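As a rough sketch (not from the original post) of what this looks like with the AzureRM PowerShell cmdlets of the time, assuming made-up resource group, names, address prefixes and appliance IP:

# NSG rule: allow RDP only from an assumed management subnet.
$rdpRule = New-AzureRmNetworkSecurityRuleConfig -Name 'Allow-RDP-From-Mgmt' `
    -Protocol Tcp -Direction Inbound -Priority 100 -Access Allow `
    -SourceAddressPrefix '10.0.1.0/24' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange '3389'

$nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName 'ContosoNetworkRG' `
    -Location 'East US' -Name 'AppSubnetNsg' -SecurityRules $rdpRule

# UDR: send all outbound traffic through a virtual appliance at an assumed IP.
$route = New-AzureRmRouteConfig -Name 'DefaultViaAppliance' -AddressPrefix '0.0.0.0/0' `
    -NextHopType VirtualAppliance -NextHopIpAddress '10.0.2.4'

$routeTable = New-AzureRmRouteTable -ResourceGroupName 'ContosoNetworkRG' `
    -Location 'East US' -Name 'AppRouteTable' -Route $route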

Adding Virtual Appliances to meet network requirements

Virtual appliances are typically Linux or FreeBSD-based VMs on Azure that perform specific network functions, including security (firewall, IDS, IPS), routing/VPN, application delivery control and WAN optimization. They are available through partner solutions on the Azure Marketplace and can be used to meet on-prem network requirements.

Setting up jump box for secure remote access

Despite different views on their benefits, as mentioned in this skyport blog post, jump boxes are used today to provide secure remote access for administrators. In conjunction with Azure NSGs, UDRs and virtual appliances, jump boxes (two for high availability) can be configured behind the virtual appliances (two for high availability) with no public IP. This way, only authorized administrators can get on the jump box through the virtual appliance and then RDP to internal resources.

Providing multi-factor authentication on remote access

You can easily enable MFA through Azure AD premium. In addition, you can add MFA to RDP servers. For more info on the latter, read the white paper “Secure RDP Connection to on premise servers using Azure MFA – Step by Step Guide“.

Storing keys and secrets in Key Vault

You can use Azure Key Vault to create and store keys and passwords using PowerShell or the CLI. There is no portal UI at the moment, but one will be added. Also, there are currently no notifications/alerts for keys that are due to expire, but this is a known, common feature request.
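For example, a minimal PowerShell sketch might look like the following; the vault, resource group, secret and key names are assumptions.

# Assumed names; adjust to your environment.
New-AzureRmKeyVault -VaultName 'ContosoLiftShiftKV' -ResourceGroupName 'ContosoRG' -Location 'East US'

# Store a password as a secret.
$secret = ConvertTo-SecureString -String 'P@ssw0rd!123' -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName 'ContosoLiftShiftKV' -Name 'SqlAdminPassword' -SecretValue $secret

# Create a software-protected key.
Add-AzureKeyVaultKey -VaultName 'ContosoLiftShiftKV' -Name 'ContosoAppKey' -Destination 'Software'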

Dealing with backup and DR issues

You can use the Azure Backup service to back up files. For a BitLocker-protected volume, the volume must be unlocked before the backup can occur. More info is in the Azure Backup service FAQ.

You can use Azure Site Recovery Service (ASR) to migrate an on-prem system to a secondary site or to Azure. However, site to site within Azure is not supported currently.

Working around the Linux cluster issue

Linux cluster requires shared access to a shared disk, which is not currently supported on Azure. There are some workarounds that you can find from the Linux community. For example, this blog post, “Step-By-Step: How to configure a Linux failover cluster in Microsoft Azure IaaS without shared storage #azure #sanless” walks you through all steps required to configure a highly available, 2-node MySQL cluster (plus witness server) on Azure VMs.

Monitoring your Azure environments with OMS

Azure Operations Management Suite (OMS) is your best bet when it comes to monitoring the health of your systems on Azure. Keep in mind that capabilities that are not available today, such as backup, are being added to the suite very rapidly.

Guest Blog: Paul Woods – Adopt & Embrace. Reflections on my first time at the Worldwide Partner Conference

MSDN Blogs - Fri, 07/29/2016 - 15:44

It has been almost two weeks since I landed back in Brisbane after a whirlwind week and a bit in North America for the Microsoft Worldwide Partner Conference: my, and Adopt & Embrace’s, first WPC! I think I am finally over the jet lag. But the work has only just begun!

In my final guest post on the Microsoft Australia Partner Blog about my “First Time Attendee” experience at WPC, I wanted to take some time to reflect on the experience and share with you my key takeaways, actions, progress against the goals I set etc.

As a general statement before I get into the detail – if someone asked me whether or not WPC was a worthwhile investment as an emerging Microsoft Partner… my answer would be a resounding YES.

There are a couple of different ways I can unpack that very quick answer.

WPC enabled me to work on the business, not in it

Like many smaller Microsoft partners, I wear many hats in the business. One of the interesting things about being away from the business for a week at WPC (well as much as I could be) is that it gave me a little bit of breathing, and more importantly thinking space. I could take most of those hats off! From when I stepped on the plane in Brisbane until when I landed back in Brisbane 9 days later… I was working “on my business, not in it”. WPC was a great catalyst to enable me to get out of the minutiae of customer meetings, proposals, billable work etc and really think about how we can improve our organisation and better align within the Microsoft eco-system. There are already a number of decisions we have made in the business that we would not have made (or had the chance to even think about) if we didn’t “pull the rip cord” and land at WPC for the week.

WPC enabled me to establish and/or reinforce key relationships across Microsoft

Being a Partner Seller based in Brisbane, we have reasonably good access to the local branch. However, Microsoft doesn’t start and finish at 400 George Street. There are plenty of Microsoft stakeholders that we have established ‘virtual’ relationships over time that we very rarely get face time with. Whether it was members of the Office Business Group, the Partner team, Sales Managers or the Executive team, WPC gave me the opportunity to have a meal, a social drink, or a formal meeting with people we don’t normally get to see all that often. To put it in perspective the conversations and catch ups that we managed to have in just one week in Toronto was the equivalent of 3 or 4 trips to Sydney to engage with the same people. But in a more relaxed, but also more focused environment.

Park Microsoft Australia for a second though… the real advantage of WPC was being able to establish or reinforce existing relationships with Microsoft stakeholders from APAC, or Redmond. Be it Ananth Lazurus, the APAC Partner Lead out of Singapore… Brian Kealey, the Country Manager for Sri Lanka and the Maldives… Steven Malme, the Senior Director for Corporate Accounts in the Asia time zone… Cyril Belikoff from the Microsoft Office team in Redmond who has responsibility for the O365 active usage number globally… to be honest it would be possible to drop even more names. Beyond name dropping, we have follow up actions and activities with most of those stakeholders, all focused on mutual “win/win” outcomes. Would that be possible without going to WPC – well yes. However, it would be difficult to achieve so much progress so quickly without everyone being at WPC at the same time.

WPC enabled me to connect with other partners from around the world and learn from their success

Calling WPC a “melting pot” of Microsoft Partners from around the world may sound a bit clichéd, but it really is. Formally, I was able to sit down with a number of partners who traditionally would be considered competitors, except that geography means we are realistically not competitors but organisations focused on serving our own markets better. For example, I had a great one-hour meeting in the WPC Connect meeting zone with Richard and Kanwal from 2ToLead. We explored opportunities to deliver services on each other’s behalf for our respective (and growing) lists of customers with international operations.

Other partnership opportunities emerged between partners focused on serving different markets here in Australia as well. Whether it was exploring how to assist LSPs broaden their customer conversation to accelerate active usage, or help solution partners broaden their offering to existing customers, there were a number of very fruitful discussions.

RE: the Australia specific conversations – yes they could have taken place at the Australia Partner Conference, but the key difference with WPC is that because only 250-300 people from Australia attend (and not 2000) there is a very high likelihood you are having those conversations with a decision maker who can act (or delegate action) based on the outcomes of your meeting. Another big tick for WPC!

WPC enabled me to learn some new things

The opportunity to learn is everywhere! The content of the keynotes and the handful of breakout sessions I managed to attend was valuable. Even more valuable were the conversations, questions and answers you overhear throughout the conference. What are the questions or concerns that the delegates from New Signature (one of the more successful Office 365 partners in North America) are raising at the end of the session? Do they apply to my business? Is that answer relevant to my customers?

Then there are the war stories you get caught up in over a few beers at the networking events like the Australian trip to Niagara Falls. My biggest regret of my experience at WPC was that I didn’t get into “Sponge Mode” fast enough and didn’t realise the useful and actionable knowledge that was being shared right from the start.

WPC has enabled us to establish more authority with our customers

All this week the customers I have talked to have been very keen to learn more about what WPC was like, what the key takeaways were, and what it means for their business. It is a great ice breaker for an authentic conversation where we can deliver more value by interpreting and contextualising the key announcements and news from WPC for our customers.

So how did we go with regard to our WPC goals?

If you think back to my first guest post in June, there were three key goals we set for WPC

  1. Connect with key stakeholders within Microsoft Australia. Specifically, those that are goaled/targeted on Office 365 active usage / adoption / consumption. We will ensure they are familiar with what personal and professional value Adopt & Embrace can deliver to them and the customers in their territory. We will do this during informal conversations using customer evidence from engagements over the past 6 months.
  2. Similarly connect with key Microsoft Partners within Australia. Specifically, those that are considering augmenting their traditional business with high value advisory services, managed services or IP focused on user adoption. We will ensure they are familiar with our channel friendly approach that enables them to resell Adopt & Embrace’s capability to unlock additional value for their customers. We will do this using customer evidence from through partner engagements over the past 6 months
  3. Finally connect with forward thinking international partners. Specifically, those that have dipped their toe in the water of delivering services around Office 365 adoption / change management / value realisation. Beyond sharing war stories, we want to ensure they are familiar with our “Lean User Adoption” methodology and discuss the potential for them to leverage our ‘secret sauce’. We will do this using customer evidence from Lean User Adoption based engagements over the past 6 months

So how did we go? Compared to many attendees (both partners and Microsoft), I had a relatively relaxed week. I didn’t have many meetings pre-scheduled apart from a handful that I had arranged via email or the meeting scheduling tool. This meant that I needed to actively seek out the people I wanted to engage with on site. Across the week there were around 15 meetings that would fall into one of the three buckets listed above. Some of those meetings were 5-minute catch-ups with emails exchanged and a list of actions. Some were an hour long and covered a lot of ground before agreeing on next steps.

If I were to have my time again, I think a little more time planning ahead of arriving on site, and organising a few more scheduled meetings would have helped us unlock more from WPC. That being said, having flexibility in the schedule also meant that we could quickly react and meet with people when the opportunity arose. That careful balance between a calendar jam packed with meetings, and enabling the serendipity of connections at the event.

What about the other side of WPC…

“What happens at WPC stays at WPC” :) Just kidding. What I can say is that there is never nothing to do at the Worldwide Partner Conference. You can’t attend every party. You can’t attend every lunch or breakfast event. The key is to prioritise and think about what value you can extract from each event.

The partner celebration on the other hand was amazing. As per my last guest blog post – Icona Pop were great, Gwen Stefani was sensational. Nothing like a concert to really take your mind off work for a little while.

So what is next?

For me, I have a long list of actions from WPC, be it follow-up activity based on meetings held on site, or changes in our business, our approaches, and how we communicate about what we do with customers. I am slowly working through the list; I almost need another week off to kick-start all the execution we need to do off the back of what we learned at the Microsoft Worldwide Partner Conference.

I think we will have all of that under control just in time for the Australian Partner Conference in September. Wow… only 40 days to go! I have registered, you should make sure you get your ticket today.

Entity Framework Core 1.1 Plans

MSDN Blogs - Fri, 07/29/2016 - 13:57

Now that Entity Framework Core (EF Core) 1.0 is released, our team is beginning work on bug fixes and new features for the 1.1 release. Keep in mind that it’s early days for this release; we’re sharing our plans in order to be open, but there is a high chance things will evolve as we go.

High level goals

Our goal with the 1.1 release is to make progress on the items that are blocking folks from using EF Core.

  • Fix a lot of the bugs reported on the 1.0 release in order to improve stability
  • Tackle a number of critical O/RM features that are currently not implemented in EF Core
Timeframe

EF Core 1.1 is scheduled for Q4 2016 / Q1 2017. We’ll have a more exact date as we get closer to the release and decide where to draw the line on features to be included.

What features are we working on

Over the next couple of months we are beginning work on a number of features. Some we expect to include in 1.1, others we expect to deliver in a future release.

Features we expect to ship in 1.1

Following is the list of features that our team is expecting to include in the 1.1 release.

  • LINQ improvements
    • Improved translation to enable more queries to successfully execute, with more logic being evaluated in the database (rather than in-memory).
    • Queries for non-model types allows a raw SQL query to be used to populate types that are not part of the model (typically for denormalized view-model data).
  • DbSet.Find provides an easy way to fetch an entity based on its primary key value.
  • Explicit Loading allows you to trigger population of a navigation property on an entity that was previously loaded from the database (both are sketched in the example after this list).
  • Additional EntityEntry APIs from EF6.x such as Reload, GetModifiedProperties, GetDatabaseValues etc.
  • Connection resiliency automatically retries failed database commands. This is especially useful when connecting to SQL Azure, where transient failures are common.
  • Pluralization support for reverse engineering will result in singularized type names and pluralized DbSet property names, regardless of whether the table name is singular or plural.
  • Stable release of tools – whilst the runtime has reached RTM, the tooling for EF Core (plus .NET Core and ASP.NET Core) is still pre-release.
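As a rough illustration of how the DbSet.Find and explicit loading items above are expected to look in application code (the shapes mirror the EF6.x APIs they are modeled on, so treat this as provisional; Blog, Post and BloggingContext are made-up model types):

using (var db = new BloggingContext())
{
    // DbSet.Find: fetch an entity by its primary key value.
    var blog = db.Blogs.Find(1);

    // Explicit loading: populate a collection navigation on an already-loaded entity.
    db.Entry(blog).Collection(b => b.Posts).Load();

    // Reference navigations work the same way.
    var post = db.Posts.Find(42);
    db.Entry(post).Reference(p => p.Blog).Load();
}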
Other features we’re starting on

We’re also planning to start work on the following features, but do not expect them to be ready in time for inclusion in the 1.1 release. These features will require a longer time to implement, stabilize, and gather feedback from the community and therefore they will have a longer pre-release cycle.

  • Complex/value types are types that do not have a primary key and are used to represent a set of properties on an entity type.
  • Simple type conversions such as string => xml.
  • Visual Studio wizard for reverse engineering a model from an existing database.
Will EF Core replace EF6.x after the 1.1 release?

The short answer is no. EF6.x is still the mature, stable data access stack and will continue to be the right choice for many applications when EF Core 1.1 is available. Along with EF Core 1.1, we are also starting work on the EF6.2 release – we’ll share our plans on that shortly. That said, EF Core will be a viable choice for more applications once 1.1 is released. Our documentation has guidance on choosing between EF6.x and EF Core, and this same guidance will apply when 1.1 is released.

Notable exclusions

The full backlog of features that we want to add to EF Core is too long to list here, but we wanted to call out a few features that we know are critical to a lot of applications and will not be worked on in the 1.1 timeframe. This is purely a matter of not being able to work on every feature at the same time, and the order in which things must be implemented. As an example, Lazy Loading will build on top of some of the feature work we are doing in the 1.1 timeframe (such as Explicit Loading).

While we realize this list will cause frustration for some folks, we want to be as transparent as possible so that you all have the information required to make an informed decision about when EF Core would be the right choice for you.

  • Many-to-many relationships without join entity. You can already model a many-to-many relationship with a join entity, see Relationships for details.
  • Alternate inheritance mapping patterns for relational databases, such as table per type (TPT) and table per concrete type (TPC). Table per hierarchy (TPH) is already supported.
  • Lazy loading enables navigation properties to be automatically populated from the database when they are accessed. Some of the features we implement in 1.1 may enable rolling your own lazy loading, but it will not be a first class feature in 1.1.
  • Simple command interception provides an easy way to read/write commands before/after they are sent to the database.
  • Stored procedure mapping allows EF to use stored procedures to persist changes to the database (FromSql already provides good support for using a stored procedure to query, see Raw SQL Queries for details).
  • Spatial data types such as SQL Server’s geography & geometry. The type conversion work we are starting may enable some spatial scenarios, but it will not be a complete solution.
  • Seed data allows you to easily specify a set of data to be present in the database. This is useful for populating lookup tables etc. and for inserting test data.
  • GROUP BY translation will move translation of the LINQ GroupBy operator to the database when used with an aggregate function (i.e. when all underlying rows do not need to be returned to the client).
  • Update model from database allows a model that was previously reverse engineered from the database to be refreshed with changes made to the schema.
  • Model visualization allows you to see a graphical representation of your model.

DSC Resource Kit Community Call August 3

MSDN Blogs - Fri, 07/29/2016 - 13:22

We will be hosting a community call for the DSC Resource Kit 1-2PM on Wednesday, August 3 (PDT).
Call in to ask questions or give feedback about the DSC Resource Kit!

How to Join Skype for Business

Join Skype Meeting
This is an online meeting for Skype for Business, the professional meetings and communications app formerly known as Lync.

Phone

+14257063500 (USA – Redmond Campus) English (United States)
+18883203585 (USA – Redmond Campus) English (United States)
Find a local number

Conference ID: 88745041
Forgot your dial-in PIN? | Help

Agenda

The community call agenda is posted on GitHub here.

Backup Migrated Mobile Service (Node.Js backend)

MSDN Blogs - Fri, 07/29/2016 - 12:48

You may have gotten this error when trying to back up a migrated Mobile Service built with a Node.js backend:

Database connection string not valid for database MS_TableConnectionString (SQLAzure). Keyword not supported: ‘driver’.
Reason:

Backup uses ADO.NET internally. The Backup feature uses the application setting MS_TableConnectionString to configure the database backup, and this legacy setting holds the Node.js driver-style connection string.
Fix:

Don’t use the MS_TableConnectionString.  Instead create a new ADO.Net connection string for your Database.  Configure the Database Backup settings so it picks up this new string, and save your changes.
Walkthrough of the fix: Get the ADO.Net connection string you need

In your migrated Azure Mobile Service, find the name of your Mobile Service DB in your Application Settings, MS_TableConnectionString.

Example: Driver={SQL Server Native Client 10.0};Server={tcp:aooc2mj075.database.windows.net,1433};Database=jsanderswestmobileservice_db;Uid=AvGdMxxLogin_jsandersmigratenodewest@aooc2mj075.database.windows.net;Pwd=Ns36hxx8663zZ$$;

 

Go to the SQL Databases tab, find the Database and click on it

When it opens, click on the Database Connection Strings.

Copy the ADO.NET connection string and save it

Example: Server=tcp:aooc2mj075.database.windows.net,1433;Initial Catalog=jsanderswestmobileservice_db;Persist Security Info=False;User ID={your_username};Password={your_password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;

 

Replace the User ID and Password with the Db Admin or equivalent login

Important: The UID and PWD from your old connection string do NOT have sufficient privileges to do the backup.

Example: Server=tcp:aooc2mj075.database.windows.net,1433;Initial Catalog=jsanderswestmobileservice_db;Persist Security Info=False;User ID=dbadmin;Password=dbadminpwd;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;

 
Create a new BackupConnection with the new value

Go back to your Migrated Mobile Service, App Settings, Connection Strings, View the connection strings and create a new Connection String called BackupConnection with the new connection string you created from the ADO.NET connection string.  Ensure the type is SQL Database.

Tab out of the field and save your changes.

 
Reset the Database configuration in Backup

Go to the Backups setting and hit the Configure Icon

Click on Database Settings, toggle the check icon on MS_TableConnectionString to turn it off, select the new BackupConnection, then hit OK and Save (this forces the configuration to update).

Test your Backup by Manually kicking it off.

 
Summary

It is not complicated to enable backups this way, but it is a ‘gotcha’, so please let me know if this post helped you out!

BizTalk Boot Camp 2016: September 22 – September 23, 2016

MSDN Blogs - Fri, 07/29/2016 - 11:38

The BizTalk Boot Camp is a free open-to-the-public technical event that provides a deep-dive into our integration story. In this Boot Camp, our focus is on:

  • BizTalk Server 2016 and the new stuff, including using the new Logic Apps adapter
  • Logic Apps
  • Microsoft Flow

The itinerary and hands-on sessions are being created. More specific details will be added as they are finalized.

Requirements
  • In-person event; Skype is not offered
  • Laptop: Many discussions include hands-on activities
  • Non-disclosure agreement (NDA): An NDA is required and is available to sign upon arrival. Many companies already have an NDA with Microsoft. When you register, we’ll confirm whether one already exists.
Location

Microsoft Corporation
8055 Microsoft Way
Charlotte, NC 28273

Bing map  Google map

Check-in is required. If you are driving to the event, car registration is also required (download and complete Vehicle and Customer Registration).

Registration

Registration will open the week of August 1, 2016. Attendance is limited. The event is targeted at a technical audience, including administrators and developers. Registration includes:

  • Attendance both days
  • Breakfast and lunch both days
  • Access to the Microsoft company store
Hotel

Details coming soon.

Questions

Ask in this post or contact me: mandi dot ohlinger at microsoft dot com.

We hope to see you here!
Mandi Ohlinger

Performance issues with Visual Studio Team Services – 7/29 – Investigating

MSDN Blogs - Fri, 07/29/2016 - 11:08

Initial Update: Friday, 29 July 2016 18:06 UTC

We are actively investigating performance and availability issues with Visual Studio Team Services in multiple geo locations. Some customers may experience issues accessing their Team Services accounts.

  • Next Update: Before 19:00 UTC

We are working to resolve this issue and apologize for any inconvenience.

Sincerely,
Sri Harsha

SSH build task

MSDN Blogs - Fri, 07/29/2016 - 10:32

On the request of our Linux customers, we have shipped a new SSH build task that allows running commands or scripts on a remote server. This task is available on Visual Studio Team Services and will be available in the next release of Team Foundation Server for our on-premises customers.

Using the task is simple. You specify your SSH connection information in an SSH service endpoint, ensure the public key is set up on the remote machine, and provide a script with your deployment or configuration steps to run on the remote machine. The task also allows specifying commands directly instead of a script file.

Check out the documentation and demo video!

Try it out and let us know if you have any feedback on our GitHub repository.

To learn more about cross platform development with Team Services, visit http://java.visualstudio.com/

Thanks!

MS Bot Framework & Ngrok

MSDN Blogs - Fri, 07/29/2016 - 10:12

Configuring your bot for local debug development can be a bit confusing for new bot developers. Ngrok offers the ability to create a secure tunnel to your localhost environment. Here’s a quick checklist of the steps to configure your new bot with it:

1. Create a bot on MS Bot Framework
2. Run your bot website on localhost with a known port eg: 3978
3. Run ngrok against that port (ngrok http 3978)
4. Update the messaging endpoint of your bot in Step #1 to the generated ngrok endpoint (see the example below)
5. Chat to your bot using the channel you are developing against eg: Skype/Slack etc..
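For example (the ngrok subdomain below is made up; ngrok assigns a random one on each run, and /api/messages is the default messaging route used by the Bot Builder templates):

# Step 3: tunnel the local bot (assuming it listens on port 3978)
ngrok http 3978

# ngrok prints a forwarding address such as:
#   Forwarding   https://abc123.ngrok.io -> localhost:3978

# Step 4: set the bot's messaging endpoint in the Bot Framework portal to:
#   https://abc123.ngrok.io/api/messages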

Enhancing information rights management in Word, Excel and PowerPoint mobile apps

MS Access Blog - Fri, 07/29/2016 - 10:00

Finding the balance between protection and productivity is critical to any organization. With the increased distribution of data, organizations need sensitive data to be born protected. This is why we invest in Azure Rights Management to help you protect information in today’s mobile-first, cloud-first world.

Information rights management (IRM) is now supported everywhere in Office Mobile as we are pleased to announce that we are extending Azure Rights Management to the Word, Excel and PowerPoint mobile apps for Android. You are now able to open, read and review rights-protected emails and Office documents on any device—whether it runs Windows, Mac, iOS or Android.

Other upcoming enhancements

We are hard at work building several other new features and enhancements to make the IRM experience even better for Office 365 subscribers in future updates.

These planned updates include:

  • Document tracking and revocation with Azure Rights Management Premium—Azure Rights Management Premium users will be able to track usage of and revoke access to documents that were protected with rights management services (RMS). We’ll deliver this first for Office for Windows, followed by Office for Mac and Office Mobile for iOS.
  • Single sign-on and multiple accounts in Office 2016 for Mac—We are making changes to support single sign-on in Office 2016 for Mac, which means you won’t need to sign in again to view an RMS-protected document if you’re already signed in. This will work for any Office 365 account that you’re signed in to—even if you have more than one account. We’re also removing the limitation where you have to view an RMS-protected document first before you are able to protect new documents with RMS.
  • Improved user experience in Office 2016 for Windows—We’re making targeted improvements to our error-handling and authentication mechanisms to make reading and authoring RMS-protected documents and emails more seamless. If you are unable to read RMS-protected content because, for example, you aren’t signed in to Office or you don’t have permission to read the content with any of your signed-in accounts, we will clearly explain why and offer options to resolve the issue.
  • Open legacy file formats—The Office apps for Windows Universal and Android will support opening RMS-protected documents that were saved in legacy formats, like .xls, .doc, and .ppt. Office apps for iPhone and iPad already support this.

Visit the Azure Rights Management website and read the product documentation to learn more. If you already use Azure Rights Management, make sure you update your Android devices with the latest versions of Word, Excel and PowerPoint today so you get all the new functionality we have released.

The post Enhancing information rights management in Word, Excel and PowerPoint mobile apps appeared first on Office Blogs.
