Feed aggregator

The sign-up procedure for the Dynamics CRM Online trial has been updated!!

MSDN Blogs - Wed, 08/20/2014 - 18:00

Hello, everyone.

The sign-up procedure for the Microsoft Dynamics CRM Online trial has changed.
This post walks through the steps from sign-up to configuration.

1. Open the Microsoft Dynamics CRM home page.
http://www.microsoft.com/ja-jp/dynamics/default.aspx

2. Click the 30-day trial tile.

3. The free trial description page opens. Click the free trial link.

4. The 30-day free trial start page opens. Select Japan in the Country/Region field at the top.
From the Language field at the bottom, select Japanese or English as the language in which you would like to receive information from us.
Note: The language of the CRM organization you will use can be selected in more detail later.

5. Enter the required fields such as last name, first name, and email address.

6. Click [Next]. Previously you entered the verification code in the next step before entering
your account information, but now the account information comes before the verification code.

7. For the phone verification, specify a phone number that can receive a text message or a call,
and enter the verification code.

8. After the account is created and the "You're all set" (準備が整いました) message is displayed, click the right-arrow
icon.

9. The CRM Online administration center opens. Select the language and currency for your
organization here.

10. When the settings are complete, click Continue Setup.

11. The following screen appears; wait until the process completes.

 

12. When it finishes, the following screen is displayed.

 

That completes the sign-up procedure.

The Office 365 admin center can now be accessed at the new address below:
https://portal.office.com/

Summary

The screen design has changed significantly this time. Also, the step for entering account
information now comes before the phone verification step. Please give it a try.

- Premier Field Engineering, 河野 高也

 

Game Development

MSDN Blogs - Wed, 08/20/2014 - 17:00

Hello! My name is Fabricio Catae, and I am a technical evangelist for games.

After working with SQL Server databases for many years, I decided to change the direction of my career. Today I work in development, helping to create games. I can assure you it is a very fun field!

This year, my areas of interest are:

  • C# programming with Unity 3D
  • Games based on HTML5 and JavaScript
  • Game engines in general (Cocos2d, Corona, Construct2, GameMaker, MonoGame, etc.)

We are working on technical content and publishing it on the following channels:

Microsoft Virtual Academy – Games
http://www.microsoftvirtualacademy.com/training-topics/games

Games Brasil (Channel 9)
http://aka.ms/gamesbr

If you are interested in games, don't hesitate to contact me (@fcatae).

Reusing ViewModels in a Universal App – Part 1

MSDN Blogs - Wed, 08/20/2014 - 16:52

There is an increasing trend on Microsoft platforms to build what is called a “Universal App”.  Ideally, it is an application that you code once and that then runs on desktop, tablet, and phone.

This opportunity introduces new challenges, the main one being that the platforms differ: in APIs, in capabilities, and in display size and orientation.

All of these threaten the ideal of reusing most, if not all, of the code between the platforms.  Less code shared between the platforms means:

  • A higher cost for development – meaning either more resources needed, or more time required
  • More code that needs to be maintained
  • Higher probability of bugs, especially bugs that impact one platform but not another

The Model-View-ViewModel pattern has the potential to increase code reuse across the platforms.  This can be done by minimizing logic in the View and designing a View Model that is agnostic of the details of the View.  This will allow us to develop a different View for the different platforms, while sharing the code in the ViewModel and Model. 

To illustrate this I’ve created a Windows Store application, called My Calc, which is purposely coded in a naive, View-centric way. As it stands, making the application available on phone would carry a large incremental cost.

MyCalc is a simple stack calculator application. One can push numbers onto the stack. Operations, such as Add, take their operands from the stack and replace them with the resulting value. The set of operations implemented is deliberately basic, as the objective is to focus on ViewModel development, not on building a fully featured calculator app.

In the various parts of this blog we’ll first refactor the application to have a ViewModel, and end with porting the application to phone by coding up a phone specific View.

Attached is our starting solution.  It consists of 3 projects:

  • MyCalc.Shared
    • Contains code shared across the platforms
  • MyCalc.Windows
    • Contains code specific to Windows
  • MyCalc.WindowsPhone
    • Contains code specific to phone

The shared project contains a StackCalculator class.  This is our Model, or business logic.  It doesn’t know anything about the UI.
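
For orientation, a stack calculator Model along these lines would match that description; the member names used here (Values, Push, Add) are assumptions for illustration rather than the exact API of the attached solution.

using System.Collections.Generic;

// Sketch of a View-agnostic stack calculator Model: it holds the stack and the
// operations, and knows nothing about buttons, text boxes or XAML.
public class StackCalculator
{
    private readonly Stack<double> _stack = new Stack<double>();

    // The current stack contents, for whatever UI wants to display them.
    public IEnumerable<double> Values
    {
        get { return _stack; }
    }

    public void Push(double value)
    {
        _stack.Push(value);
    }

    // Add pops its two operands and pushes the resulting value back, as described above.
    public void Add()
    {
        if (_stack.Count < 2) return;
        double right = _stack.Pop();
        double left = _stack.Pop();
        _stack.Push(left + right);
    }
}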

You can ignore the WindowsPhone project for now, as all it has in it is the boilerplate code that Visual Studio put in when the project was created.

The Windows project has the MainWindow.  If you’ve done UI development in XAML, WPF or Silverlight then likely you’ve seen (and written) code similar to this.

Here is a snippet of the XAML:

<Button
x:Name="SixButton"
Grid.Column="4"
Grid.Row="2"
Style="{StaticResource ButtonStyle}"
Content="6"
Click="SixButton_Click"
/>

<Button
x:Name="ThreeeButton"
Grid.Column="4"
Grid.Row="3"
Style="{StaticResource ButtonStyle}"
Content="3"
Click="ThreeeButton_Click"
/>
</Grid>

<Grid
x:Name="StackGrid"
Grid.Column="1"
Grid.Row="1"
>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="Auto" />
<ColumnDefinition Width="Auto" />
</Grid.ColumnDefinitions>
<Grid.RowDefinitions>
<RowDefinition Height="*" />
<RowDefinition Height="Auto" />
</Grid.RowDefinitions>

<ListBox
x:Name="StackBox"
Grid.Column="0"
Grid.ColumnSpan="2"
Grid.Row="0"
Margin="4"
FontSize="32"
ItemContainerStyle="{StaticResource ListBoxItemStyle}"
/>

<TextBox
x:Name="EnterBox"
Grid.Column="0"
Grid.Row="1"
Margin="4"
Width="400"
FontSize="32"
TextChanged="EnterBox_TextChanged"
/>

There are a few things I’d like to point out about this code:

  • The elements (buttons, ListBox, TextBox, etc…) have names so we can reference them in the code behind
  • Events are wired up to methods in the code behind, such as the Click and TextChanged events.

These are indicators that this UI was coded up in a traditional way without a ViewModel (or at least not a well designed one).  When we look at the code we get that confirmed with code like this:

private void EnterBox_TextChanged(object sender, TextChangedEventArgs e)
{
UpdateEnterBoxButtons();
}

private void UpdateEnterBoxButtons()
{
if (EnterBox.Text.Length == 0)
{
EnterButton.IsEnabled = false;
BackspaceButton.IsEnabled = false;
}
else
{
EnterButton.IsEnabled = true;
BackspaceButton.IsEnabled = true;
}
}

Here we see that the logic in the View is very tightly coupled with the XAML markup, since the View not only responds to events from the controls, but directly updates the state of various controls.

While one could theoretically compile this code against two sets of markup (and there are some challenges to just making that happen), we are quite limited in the changes that could be made beyond layout
and style changes.  For example, the View code expects both an Enter and a Backspace button. But what if, due to space constraints, on phone we would rather use the keys on the soft keyboard than have such buttons in our main UI?
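
As a preview of where the refactoring in the following parts is heading, here is a minimal sketch of how this logic could move into a View-agnostic ViewModel. The type and property names (CalculatorViewModel, EnterText, CanAcceptInput) are illustrative assumptions, not names taken from the attached solution; a phone View with no Enter or Backspace buttons would simply not bind to CanAcceptInput.

using System.ComponentModel;

// Sketch: the "enter box" state exposed as bindable properties instead of
// being pushed into named controls from code behind.
public class CalculatorViewModel : INotifyPropertyChanged
{
    private string _enterText = string.Empty;

    public event PropertyChangedEventHandler PropertyChanged;

    public string EnterText
    {
        get { return _enterText; }
        set
        {
            if (_enterText == value) return;
            _enterText = value;
            OnPropertyChanged("EnterText");
            // Enter/Backspace availability is derived state; a View that has such
            // buttons binds their IsEnabled to CanAcceptInput, and a View without
            // them simply ignores the property.
            OnPropertyChanged("CanAcceptInput");
        }
    }

    public bool CanAcceptInput
    {
        get { return _enterText.Length > 0; }
    }

    private void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}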

 

 

Microsoft Small Basic Guru Awards - July 2014 WINNERS!!!

MSDN Blogs - Wed, 08/20/2014 - 16:41

Nonki and I were the only contributors this month, but a special thanks goes out to Nonki for really knocking it out of the park with a lot of fantastic articles!

 

  

Small Basic Technical Guru - July 2014

  • Ed Price - MSFT: Small Basic Survival Guide
    Michiel Van Hoorn: "Oh the overload!"
    RZ: "Very nicely categorized and organized. Way to go!"

  • Nonki Takahashi: Small Basic: Character Set - Unicode
    RZ: "Very well explained. I wager that many programmers can't quite explain what's unicode and utf-8 :)"
    Michiel Van Hoorn: "Great article on this fundamental element in programming"

  • Nonki Takahashi: Small Basic: Controls
    Michiel Van Hoorn: "Practical topic with good examples"

Also worth a mention were the other entries this month:

 

If you haven't contributed an article for August, and you think you have a good wiki article in mind, here's your chance to show it off! :D

 

Thanks!

   - Ninja Ed

Azure Network Maintenance Impacting Application Insights Services 8/23 - 8/25 UTC

MSDN Blogs - Wed, 08/20/2014 - 15:35

This is a proactive notification to inform our customers that our Azure partners are planning a network maintenance window in several Microsoft Azure regions and datacenters.  Application Insights is hosted in datacenters that will be affected on the weekend of 8/23 - 8/25.

We are taking steps to minimize impact to our live services, but customers may experience broad, intermittent service unavailability during the Azure maintenance window. 

Maintenance Window - 8/23 00:00 UTC to 8/25 00:00 UTC

For broader details about what this Azure maintenance might mean to you, please see the Azure blog post here:

http://social.msdn.microsoft.com/Forums/windowsazure/en-US/home?forum=WAVirtualMachinesforWindows&announcementId=791785c1-84d1-4e8e-9c1e-d09cf27f8692

We apologize for any inconvenience.

-Application Insights Service Delivery Team

Visual Studio 2010 uninstall utility back online

MSDN Blogs - Wed, 08/20/2014 - 14:56

Four years ago I published a utility to help perform a clean uninstall of Visual Studio 2010. Before we added package reference counting and related bundles to Visual Studio setup, we couldn’t always be sure which products were still required, so not everything was removed. This utility will remove everything when you pass it one of the few command line options documented in the original article.

Unfortunately, the old site where the utility was hosted was archived, and some readers recently pointed this out. I’ve since reposted the utility on the Microsoft Download Center and created a short URL (http://aka.ms/vs2010uninstall) in case you want to remove older Visual Studio bits; by now I’m sure you’re enjoying newer versions!

Programming and writing are similar things

MSDN Blogs - Wed, 08/20/2014 - 14:39

At least, that’s my opinion. They’re both ways of recording and communicating ideas, and they’re both best when they’re clear and brief and unambiguous. I do believe that similar skills go into each. But they’re for different audiences and those different audiences have different natures. So maybe that’s why we think of programming and writing as being so different, and we say that someone who’s good at one isn’t necessarily good at the other. But I feel that if you genuinely are good at the one then you would be good at the other if you tried it and gave it as much time and energy.

The audience of source code is a compiler (sure, it’s also you when you have to re-read your own code in a month’s time, but that’s secondary). The audience of the written word is people.

A compiler always knows when it has misunderstood you. People don’t always know when they’ve misunderstood you. But in either case the fault lies primarily with the communicator. Still, it’s durned convenient to always know when you’ve been misunderstood. Imagine the disasters that would have been averted in history if people gave compiler errors and warnings.

When a compiler knows it's misunderstood you, it will tell you so. When a person knows they've misunderstood you, they won’t always tell you. But they should. Perhaps they feel it’d be impolite to do so; perhaps they’re in a hurry; perhaps they don’t think it matters; perhaps they think it’ll become clearer over time. Perhaps.

So there you go. If from time to time you think your compiler is a PITA, just try writing for people! :P

-Steve

VSO is Happy to See You! Project Welcome Pages

MSDN Blogs - Wed, 08/20/2014 - 14:09

The August 18th news article on the Visual Studio site announced a fun new addition to VSO: Project Welcome pages.

Think of Welcome pages as documentation, a greeting, or basic contextual information for the Team Project.  You can use Welcome pages for things like:

  • Describing the purpose/business value of the project.
  • Basic tips and tricks for navigating the VSO project.
  • Project-specific nomenclature or acronyms
  • Project sponsors or contacts
  • You get the idea.. whatever!

The implementation of these pages is surprisingly simple.  Pages are really just Markdown files (.md) which are checked in/committed to the root of your project.  The default page is named “readme.md”.  For example, in my “Awesome Calculator” project, I checked in a “readme.md”:

Now if I go to my project’s homepage, I see a “Welcome” tab.  If I click on that, I get to any/all of my Welcome pages:

Adding additional Welcome pages is simple as well.  Just check in/commit more markdown files! 

My new markdown file, “The Truth.md”, then renders like this:

If you’re not familiar with markdown, don’t fear.  It’s a simple and fast markup language.  VSO utilizes GitHub Flavored Markdown, a common convention already used in some open source version control systems, based on the “marked” open source library. You can use virtually any editor (they are just text files) to work on your markdown files, including Visual Studio, MarkdownPad, and others.

For additional details, please read Martin Woodward’s post on the Visual Studio ALM blog.

Enjoy!

WebDeploy using publish settings

MSDN Blogs - Wed, 08/20/2014 - 13:25

In the new version of Web Deploy 3.6 you can now use the publish settings files that were created within Visual Studio. So instead of doing this:

msdeploy.exe -verb:sync -source:contentPath=d:\web\wwwroot -dest:contentPath=siteName,computername=https://dev/msdeploy.axd?site=siteName,username=myUser,password=myPass,authtype=basic

you can now do this:

msdeploy.exe -verb:sync -source:contentPath=d:\web\wwwroot -dest:contentPath=siteName,publishsettings=d:\dev.PublishSettings

Happy days!

Series: Running Your Own Applications on Microsoft Azure - Part 1: Overview

MSDN Blogs - Wed, 08/20/2014 - 13:21

Most people who take an interest in cloud computing have one scenario in mind first: running their own application in the cloud. Whether it is an existing application or one still to be developed, in both cases you quickly reach the point where you notice that Microsoft Azure – unlike a classic hoster or many other cloud providers – offers several options for this. Microsoft Azure not only offers hosting of classic virtual machines in the sense of an Infrastructure-as-a-Service approach, but also more specialized hosting variants that are optimized for the rapid deployment of web applications or the flexible scaling of multi-tier applications. This blog series is about choosing the execution model best suited to a given scenario. It is divided into:

  • Part 1: Overview of the execution options
  • Part 2: Virtual Machines
  • Part 3: Cloud Services
  • Part 4: Websites
  • Part 5: Mobile Services
  • Part 6: Choosing the best execution model
  • Part 7: Switching between execution models

The following options are available for running your own applications on Microsoft Azure:

This may sound confusing at first, but in the end the question of the best alternative can be answered fairly quickly. In addition, the options can be combined with one another. That is, it is entirely possible for parts of an application to run as a Website while other parts (for example the database) use services running in a Virtual Machine. Furthermore, switching between the options is possible, so you lose nothing by starting with one option (for example Websites) and later moving to another (for example Cloud Services).

The four alternatives differ primarily in how many aspects of the environment are predefined by the execution model and therefore managed automatically by Azure (applying patches and updates, failover management, and so on), and which aspects can be controlled by the user. The more aspects you can control yourself, the fewer aspects Azure manages. Conversely, the more Azure manages automatically, the less influence the user has on the environment. Mobile Services offer the highest degree of automation (but the least control over the environment). Virtual Machines, at the other end, offer the greatest control over the environment, but the lowest degree of automation. The two other alternatives, Cloud Services and Websites, lie in between. The following table gives an overview of the alternatives and their degree of automation and of control over the environment.

                         Virtual Machines   Cloud Services   Websites   Mobile Services
Degree of automation     ●                  ●●               ●●●        ●●●●
Degree of control        ●●●●               ●●●              ●●         ●

The table already gives the most important hint: whenever possible, applications should be run as Mobile Services or Websites. There the degree of automation, and thus the savings compared to running the application yourself or in a virtual machine, is highest. Only if the requirements call for specific configurations of the environment, that is, if the environment has to be heavily customized, should Virtual Machines be used.

For some scenarios, an initial recommendation for the execution option can be given quite easily:

  • Hosting an ASP.NET- or PHP-based website – suggested option: Websites. Websites offer the highest degree of automation, and the provided environment is very well suited to running scalable web applications.
  • Running an existing Linux-based LOB app – suggested option: Virtual Machines. Virtual Machines are the only execution option that offers a Linux-based environment. It can be configured individually, and the user can log on to it as an administrator.
  • Providing a REST backend for mobile applications – suggested option: Mobile Services. With Mobile Services, backends for mobile applications can be provisioned in a very short time, and the services can easily be extended with user authentication, push notifications, and data storage.
  • A cloud application that needs to access resources on the local corporate network – suggested option: Cloud Services or Virtual Machines. Cloud Services and Virtual Machines offer the ability to connect to an on-premises datacenter via a virtual network.

This table gives only an initial hint at a suitable alternative. Before a concrete implementation, the choice should be examined more closely. The subsequent parts of this blog series will therefore present the individual alternatives in more detail.

Further Information

Fun with the Interns: Shaurya Arora on Designing .NET for NuGet

MSDN Blogs - Wed, 08/20/2014 - 13:18

A few weeks ago when I was up in Redmond I had the pleasure of interviewing some interns on the .NET team to talk about their experience as an intern at Microsoft and to show off the projects they are working on.

In this interview I sit down with Shaurya (a.k.a Shaun) Arora, a Program Manager Intern on the .NET Ecosystem team, and we chat about his internship experience and summer project.

You’ve probably noticed that we're releasing more and more .NET framework functionality via NuGet. Moving forward, we intend to bring the two even closer together. Shaun spent a lot of time thinking about this problem space and helped us shape our thoughts and design some ideas. Shaun is also a very talented developer and designer and helped us to build a catalog of all the .NET features we shipped since .NET 4. Check it out!

Watch: Fun with the Interns: Shaurya Arora on Designing .NET for NuGet

And for all those students out there pursuing a career in computer science, you should consider an internship at Microsoft. You can help build real software that helps millions of people! Learn more about the Microsoft internship program here.

Enjoy!

Announcing the release of Visual F# Tools 3.1.2

MSDN Blogs - Wed, 08/20/2014 - 12:23

We are happy to announce the availability of Visual F# Tools 3.1.2, the latest update to the Visual F# compiler and F# IDE tooling for Visual Studio.  You can download the installer package here.

Building upon Visual F# 3.1, and the first out-of-band update Visual F# Tools 3.1.1, Visual F# Tools 3.1.2 is a standalone, F#-only installer that packages the latest updates to the F# compiler, interactive, runtime, and Visual Studio integration.  The IDE updates apply to Visual Studio 2013 Pro+, Desktop Express, and Web Express.

A release by and for the F# community

To begin with, it’s worth noting that this is the first Visual F# Tools release since we expanded our open source efforts and began taking contributions, and therefore the first release to include direct contributions from the F# community! We would like to thank all of the developers who worked with us by opening bugs, sending pull requests, and participating in discussions.

Commits from the following community F# developers are included in Visual F# Tools 3.1.2: antofik, Don Syme, Jon Harrop, Robert Jeppesen, Steffen Forkmann, Vladimir Matveev, Will Smith, Xiang Zhang.

What’s new?

Expanded portable library support, including Windows Phone 8.1

We continue to expand .NET portable library support in Visual F#.  New in this release, developers will find F# project templates, along with compiler and runtime support, for two additional portable profiles:

  • Profile 78 (.NET Framework 4.5, Windows 8, Windows Phone Silverlight 8)
  • Profile 259 (.NET Framework 4.5, Silverlight 5, Windows 8, Windows Phone 8.1)

This is in addition to the two profiles previously supported – Profile 47 (.NET Framework 4.5, Silverlight 5, Windows 8) and Profile 7 (.NET Framework 4.5, Windows 8).

Issues with adding project references between C# and F# portable libraries have also been addressed in this release. Altogether, this means that for the first time you can add an F# portable library project directly to a Windows Phone 8.1 solution!

  Non-locking assembly references in FSI

F# developers love the quick-iteration development workflow enabled by F# Interactive – “edit-compile-test” is more or less distilled to “edit-test” – and tend to dislike anything that slows them down.

Oddly enough, one of the most common speed bumps in this loop is actually F# Interactive itself! Adding a #r assembly reference causes the referenced assembly to be locked on disk. If that assembly is part of your solution, and you want to rebuild it, make sure to restart FSI first!  If you don’t, your build will fail with the dreaded “The process cannot access the file 'bin\Debug\ClassLibrary1.dll' because it is being used by another process.”

Those errors should no longer be an issue. F# Interactive now supports non-locking #r references via shadow copy.

This is a configurable behavior.  Non-locking references are enabled by default in the hosted Visual Studio F# Interactive, and configurable via the F# Interactive options dialog. On the command line, the default behavior for fsi.exe has not been changed, but you can easily enable the new behavior via the switch --shadowcopyreferences+

Support for Publish in web and Azure projects

If you are using F# for web development, perhaps utilizing one of the excellent community project templates that are available, you have likely noticed that the Visual Studio “Publish” action has not worked for F# projects in the past.  This long-standing issue has finally been addressed; you can now publish F# web and Azure projects directly from Visual Studio!

Minor language changes

Improved *nix compatibility with #! in F# scripts

The F# compiler now allows for the shebang (#!) directive to exist at the beginning of F# source files, and will treat such a line as a comment.  This allows F# scripts to be compatible with the Unix convention whereby a script can indicate to the OS what interpreter to use by providing the path to that interpreter on the first line, following the #! directive.

This change does not affect how F# file associations are configured on Windows.

Compiler support for high-dimensional slicing

F# supports slicing syntax for working with arrays and other multi-dimensional data types, and developers can define custom slicing behavior for their own types by implementing appropriate overloads of the GetSlice and SetSlice methods.

In previous releases, the compiler enforced a limit of 4 slicing dimensions.  This limit has been lifted; developers are now free to define arbitrary-dimensional slicing semantics in their code.

Spaces in active pattern case identifiers

There was previously a restriction preventing space characters from being used in active pattern case identifiers.  This restriction has been relaxed; space characters may now be used in ``  ``-escaped case identifiers.

And beyond

In addition to what’s mentioned here, Visual F# 3.1.2 also includes a number of bug fixes, a couple of performance improvements, and even some optimized code generation for struct processing. A full list of changes and bug fixes is in our Codeplex repo.

More Information

For more information about F# generally, please visit http://fsharp.org. If you are interested in contributing to the Visual F# Tools, please visit our open source home at https://visualfsharp.codeplex.com/

This item cannot be declared a record because it is checked out.

MSDN Blogs - Wed, 08/20/2014 - 12:05

A customer recently asked me to help with a problem in a record center in SharePoint 2010. They had one specific record that seemed like it was stuck in some state where it couldn't be modified. The UI showed either of the following two messages when we tried to delete or modify it.

This item cannot be declared a record because it is checked out.

The item cannot be deleted, moved, or renamed because it is either on hold or is a record which blocks deletion.

Digging into the ULS logs, we found the record was set to locked as read-only, even though the properties of the document showed it wasn't read only.

01/01/2014 11:11:11.11    w3wp.exe (SERVERNAME)    0x044C    SharePoint Foundation    General    8kh7    High    This item cannot be updated because it is locked as read-only.    bad285fd-e35c-4e8b-8e1f-a61589a4e689
01/01/2014 11:11:11.11    w3wp.exe (SERVERNAME)    0x044C    SharePoint Foundation    General    8kh7    High    Stack trace: onetutil.dll: (unresolved symbol, module offset=00000000000A22A1) at 0x000007FEEC2322A1 onetutil.dll: (unresolved symbol, module offset=00000000000A3461) at 0x000007FEEC233461 owssvr.dll: (unresolved symbol, module offset=0000000000009002) at 0x000007FEE56A9002 owssvr.dll: (unresolved symbol, module offset=0000000000037941) at 0x000007FEE56D7941 mscorwks.dll: (unresolved symbol, module offset=00000000002BB727) at 0x000007FEF8F1B727 Microsoft.SharePoint.Library.ni.dll: (unresolved symbol, module offset=00000000000DFA24) at 0x000007FEE3D6FA24 Microsoft.SharePoint.ni.dll: (unresolved symbol, module offset=0000000001AFA6DD) at 0x000007FEE8BCA6DD Microsoft.SharePoint.ni.dll: (unresolved symbol, module offset=0000000001C68563) at 0x000007FEE8D38563 Microsoft.SharePoint.ni.dll: (unresolved symbol, module offset=0000000001C686C3) at 0x000007FEE8D386C3    bad285fd-e35c-4e8b-8e1f-a61589a4e689
01/01/2014 11:11:11.11    w3wp.exe (SERVERNAME)    0x044C    SharePoint Foundation    General    2mx9    Verbose    Ignore exception 'Microsoft.SharePoint.SPException: This item cannot be updated because it is locked as read-only. ---> System.Runtime.InteropServices.COMException (0x81020089): This item cannot be updated because it is locked as read-only.    
at Microsoft.SharePoint.Library.SPRequestInternalClass.DeleteItem(String bstrUrl, String bstrListName, Int32 lID, UInt32 dwDeleteOp, Guid& pgDeleteTransactionId)    
at Microsoft.SharePoint.Library.SPRequest.DeleteItem(String bstrUrl, String bstrListName, Int32 lID, UInt32 dwDeleteOp, Guid& pgDeleteTransactionId)     --- End of inner exception stack trace ---    
at Microsoft.SharePoint.SPGlobal.HandleComException(COMException comEx)     at Microsoft.SharePoint.Library.SPRequest.DeleteItem(String bstrUrl, String bstrListName, Int32 lID, UInt32 dwDeleteOp, Guid& pgDeleteTransactionId)    
at Microsoft.SharePoint.SPListItem.DeleteCore(DeleteOp deleteOp)     at Microsoft.SharePoint.SPListItem.Recycle()    
at Microsoft.SharePoint.SPListItem_Proxy.InvokeMethod(Object obj, String methodName, XmlNodeList xmlargs, ProxyContext proxyContext, Boolean& isVoid)    
at Microsoft.SharePoint.Client.ClientMethodsProcessor.InvokeMethod(Object obj, String methodName, XmlNodeList xmlargs, Boolean& isVoid)    
at Microsoft.SharePoint.Client.ClientMethodsProcessor.ProcessMethod(XmlElement xe)    
at Microsoft.SharePoint.Client.ClientMethodsProcessor.ProcessOne(XmlElement xe)    
at Microsoft.SharePoint.Client.ClientMethodsProcessor.ProcessStatements(XmlNode xe)    
at Microsoft.SharePoint.Client.ClientMethodsProcessor.ProcessExceptionHandlingScope(XmlElement xe)' when executing '<ExceptionHandlingScopeSimple Id="0" xmlns="http://schemas.microsoft.com/sharepoint/clientquery/2009"><ObjectPath Id="3" ObjectPathId="2" /><ObjectPath Id="5" ObjectPathId="4" /><ObjectPath Id="7" ObjectPathId="6" /><ObjectPath Id="9" ObjectPathId="8" /><ObjectPath Id="11" ObjectPathId="10" /><Query Id="12" ObjectPathId="10"><Query SelectAllProperties="false"><Properties><Property Name="FileLeafRef" ScalarProperty="true" /></Properties></Query></Query><Method Name="Recycle" Id="13" ObjectPathId="10" /></ExceptionHandlingScopeSimple>'.    bad285fd-e35c-4e8b-8e1f-a61589a4e689

To fix this issue, I found some code that helped remove the lock. Take a look at the sample code, and hopefully it can fix the issue for you.

We leveraged the UndeclareItemAsRecord function.
http://technet.microsoft.com/en-us/subscriptions/microsoft.office.recordsmanagement.recordsrepository.records.undeclareitemasrecord(v=office.14).aspx 

private static void UndeclareDeclare()
{
string recordCenterRootWebURL = "http://sp2010:1123/sites/records";
using (SPSite site = new SPSite(recordCenterRootWebURL))
{
using (SPWeb web = site.OpenWeb())
{
for (int i = 0; i < 300; i++)
{
string listName = "Administration";
SPList list = web.Lists[listName];
int listItemId = 4; // ID of the list item declared as record
SPListItem item = list.GetItemById(listItemId);
// Undeclare item as record
Microsoft.Office.RecordsManagement.RecordsRepository.Records.UndeclareItemAsRecord(item);
item = list.GetItemById(listItemId);
// Declare item as record
Microsoft.Office.RecordsManagement.RecordsRepository.Records.DeclareItemAsRecord(item);
Console.WriteLine(i);
}
}
}
}



If you can't undeclare the item as a record, it might be possible to bypass the lock. There's a method in the records management code called Records.BypassLocks that might do the trick.
http://technet.microsoft.com/en-us/subscriptions/microsoft.office.recordsmanagement.recordsrepository.records.bypasslocks(v=office.14).aspx

private static void TouchItem()
{
// same sample site, list, and item as in the snippet above
string recordCenterRootWebURL = "http://sp2010:1123/sites/records";
string listName = "Administration";
int listItemId = 4; // ID of the list item declared as record
using (SPSite site = new SPSite(recordCenterRootWebURL))
{
using (SPWeb web = site.OpenWeb())
{
SPList list = web.Lists[listName];
SPListItem item = list.GetItemById(listItemId);
Records.BypassLocks(item, delegate(SPListItem item1)
{
item1.File.CheckOut();
item1.Update();
item1.File.CheckIn("hello");
});
//Console.WriteLine("Done update");
}
}
}

There are a bunch of reasons why locks can get stuck, and some have to do with configuration and the way records management works. Please make sure you understand your scenario before applying a fix like this.

Log reader agent may fail to start on SQL Server 2012 Service Pack 1 Update version 11.0.3460.0

MSDN Blogs - Wed, 08/20/2014 - 11:18
Yesterday we confirmed a SQL transactional replication problem in SQL Server 2012 version 11.0.3460.0 where running a second instance of the log reader agent for a different publication will fail with the error “Another logreader agent for the subscription or subscriptions is running, or the server is working on a previous request by the same agent.” We are working on releasing a SQL Server 2012 Service Pack 1 hotfix next week to fix this problem. For an immediate fix, customers can...(read more)

Using the Correct Version of Web Deploy

MSDN Blogs - Wed, 08/20/2014 - 10:27

 

If you have encountered the error “Using a 64-bit source and a 32-bit destination with provider is not supported”:

 

This occurs when a package is created on a 64-bit system (source) with the 64-bit version of Web Deploy and is then deployed to a 64-bit system (target) that has the 32-bit version of Web Deploy installed.

Resolution:

Install the 64-bit version of Web Deploy on the target server to deploy the package.

Remote Debugging Node.js on Azure (Node.js Tools for Visual Studio Edition)

MSDN Blogs - Wed, 08/20/2014 - 10:15
Environment: Visual Studio 2013 Update 2, Node.js Tools for Visual Studio 1.0 Beta 2. Hello. Last time I introduced remote debugging of Node.js using node-inspector. This time, as another way to do remote debugging, I will introduce remote debugging with Node.js Tools for Visual Studio, which is currently available as a beta release. (That is why I took the liberty of retitling the previous post the "node-inspector edition".) In real-world development, it is quite common that you cannot debug an app until it has actually been deployed to the cloud – for example, code that embeds authentication for Facebook, Google, or Live Services (OneDrive, Outlook.com) and only works from an approved URI, or code that uses the Twilio API...(read more)

Music industry professional Brian Mulligan helps Jamestown Revival fans connect via universal Windows app built with App Studio

MSDN Blogs - Wed, 08/20/2014 - 10:13

The Agency Group is a global talent agency that services top talent from numerous industries, including music. Brian Mulligan, assistant to the senior vice president, manages bands’ tours, and their show schedules. In the course of his work, he became concerned about the fragmentation of his bands’ identity across the myriad social media properties fans visit today. To solve the problem and encourage interaction with fans, Mulligan built a Windows and Windows Phone app for the group Jamestown Revival. He developed the app himself using Windows App Studio Beta, and now, he talks about his inspiration for and experience building his app.

To read the story in its entirety, head on over to http://blogs.windows.com/buildingapps/2014/08/20/developer-speak-music-business-professional-brian-mulligan-helps-jamestown-revival-fans-connect-via-universal-windows-app-built-with-app-studio/

SharePoint & SQL Server AlwaysOn vs Standalone Performance

MSDN Blogs - Wed, 08/20/2014 - 09:56

How to set up SharePoint with SQL Server AlwaysOn has been covered nicely by now, but I’ve not covered the performance hit that setting up such a system will incur.

For now we’ll benchmark just synchronous-commit AlwaysOn as that’s the safest yet slowest way of operating a SQL Server AlwaysOn cluster for SharePoint, even though some (most) databases support asynchronous commits.

Test Scripts

This isn’t going to be the be-all-and-end-all of experiments; it’s just to give an idea of the performance gap when implementing AlwaysOn with SharePoint. Each test is measured with a System.Diagnostics.Stopwatch and was run several times to get an average, discounting the 1st run each time to make sure caches were warmed up, etc. Here are said tests + scripts:

Create team-site site-collection

Simple new site-collection + feature activation.

$siteURL = "http://sp15/sites/perftest"

$template = Get-SPWebTemplate "STS#0"

New-SPSite -Url $siteURL -OwnerAlias "sfb-testnet\root" -Template $template

Create custom list and insert 1000 items

While loop to insert one-by-one a bunch of simple items.

$web = Get-SPWeb $siteURL

$listTemplate = $web.ListTemplates["Custom List"]

$list = $web.Lists.Add("List", "Test list", $listTemplate)

$i = 1

do {

$newItem = $list.Items.Add()

$newItem["Title"] = "AutoItem " + $i

$newItem.Update()

$i++

}

while ($i -le 1000)

Read 4,999 items

It’s 4,999 because that’s one less than the maximum that the query throttle will allow (by default).

$web = Get-SPWeb $siteURL

$list= $web.Lists["List"]

Write-Host ($list.GetItems()).Count "items read from list."

Test Hardware

Nothing special really. Everything is hosted on the same Hyper-V machine with 24 cores, so there is plenty of CPU muscle to handle any background noise. All virtual machines use real memory, neither shared nor dynamic.

SQL Server

4 CPUs, 4GB RAM. AlwaysOn cluster of x2 machines on the same subnet; single instance on its own, on the same subnet.

Nothing fancy about the disk setup in either the standalone or AlwaysOn servers – data on the OS disk to make it an equally terrible setup in both instances :)

SharePoint Server

4 CPUs, 8GB RAM. Also on the same subnet as the SQL boxes for lowest latency.

Just with the WFE roles installed – no search, UPA, AppFabric or anything else on each farm to avoid extra SQL traffic that’s not related to our PowerShell scripts.

AlwaysOn vs. Standalone Test Results

All results are in seconds elapsed taken from the PowerShell output.

Test                     Standalone   AlwaysOn
Create site collection   30.9         52.3
Insert items             28           56
Read items               0.84         0.85

Reading is pretty much identical on both setups. Here’s that data in graphical format:

The slowdowns pretty much only happen for write operations.

Performance Conclusions

It’s pretty clear from this that writing suffers a lot more of a performance hit with AlwaysOn than reading. That makes sense given there’s no synchronous blocking for read – it’ll come from a SQL node without bothering the others.

Writing data, on the other hand, takes nearly twice as long with synchronous AlwaysOn commits enabled. This should improve with asynchronous commits of course, but that is for another day.

Cheers,

Sam

Simple inheritance with JavaScript

MSDN Blogs - Wed, 08/20/2014 - 09:51

A lot of my friends are C# or C++ developers. They are used to using inheritance in their projects, and when they want to learn or discover JavaScript, one of the first questions they ask is: “But how can I do inheritance with JavaScript?”.

Actually, JavaScript uses a different approach than C# or C++ to create an object oriented language. It is a prototype-based language. The concept of prototyping implies that behavior can be reused by cloning existing objects that serve as prototypes. Every object in JavaScript has a prototype, which defines a set of functions and members that the object can use. There is no class. Just objects. Every object can then be used as a prototype for another object.

This concept is extremely flexible and we can use it to simulate some concepts from OOP like inheritance.

Implementing inheritance

Let’s image we want to create this hierarchy using JavaScript:

First of all, we can create ClassA easily. Because there are no explicit classes, we can define a set of behavior (a “class”, so to speak) by just creating a function like this:

var ClassA = function() { this.name = "class A"; }

This “class” can be instantiated using the new keyword, and we can add a print function to its prototype:

var a = new ClassA();
ClassA.prototype.print = function() { console.log(this.name); }

And to use it on our object:

a.print();

Fairly simple, right?

The complete sample is just 8 lines long:

var ClassA = function() {
    this.name = "class A";
}

ClassA.prototype.print = function() {
    console.log(this.name);
}

var a = new ClassA();
a.print();

Now let’s add a tool to create “inheritance” between classes. This tool has to do just one thing: clone the prototype:

var inheritsFrom = function (child, parent) {
    child.prototype = Object.create(parent.prototype);
};

This is exactly where the magic happens! By cloning the prototype, we transfer all members and functions to the new class.

So if we want to add a second class that will be child of the first one, we just have to use this code:

var ClassB = function() {
    this.name = "class B";
    this.surname = "I'm the child";
}

inheritsFrom(ClassB, ClassA);

Then because ClassB inherited the print function from ClassA, the following code is working:

var b = new ClassB(); b.print();

And produces the following output:

class B

We can even override the print function for ClassB:

ClassB.prototype.print = function() {
    ClassA.prototype.print.call(this);
    console.log(this.surname);
}

In this case, the produced output will look like this:

class B
I’m the child

The trick here is to go through ClassA.prototype to get the base print function. Then, thanks to the call function, we can invoke that base function on the current object (this).

Creating ClassC is now obvious:

var ClassC = function () {
    this.name = "class C";
    this.surname = "I'm the grandchild";
}

inheritsFrom(ClassC, ClassB);

ClassC.prototype.foo = function() {
    // Do some funky stuff here...
}

ClassC.prototype.print = function () {
    ClassB.prototype.print.call(this);
    console.log("Sounds like this is working!");
}

var c = new ClassC();
c.print();

And the output is:

class B
I’m the child

Sounds like this is working!

A few comments about prototype mutability

As you can see, we can easily simulate inheritance thanks to the flexibility of prototyping. However, the inheritance is static. We clone the prototype at a certain point in time. Because all objects in JavaScript are mutable, you can add more functions later on the root prototype. These functions won’t be propagated to the child.

To conclude, I just want to clearly state that JavaScript is not C# or C++. It has its own philosophy. If you are a C++ or C# developer and you really want to embrace the full power of JavaScript, the best tip I can give you is: Do not try to replicate your language into JavaScript. There is no best or worst language. Just different philosophies!

How to use PowerShell to Build Azure Storage Connection Strings

MSDN Blogs - Wed, 08/20/2014 - 09:34

I needed to create a bunch of connection strings to use in my code. But this can be error prone and painful, especially if you have many data centers. In addition, you may decide to regenerate the primary storage key, in which case the whole nightmare would need to be repeated.

In my previous post, I illustrated how to provision a bunch of Azure storage accounts, so this post continues from there to retrieve the primary storage keys. PowerShell syntax is really simple, so it doesn't require much explanation.

Figure 1: Powershell output showing my connection strings

Now I can just paste the output into my app.config file for my Visual Studio project.

Figure 2: The Visual Studio solution that will use my connection string

Figure 3: Updated App.config file

Getting Storage Account Keys

# A list of storage account names
$DataCenterList = @(
            ,@("terkalysoutheastasia", "Southeast Asia")
            ,@("terkalyenortheurope", "North Europe")
            ,@("terkalyeastus", "East US")
            ,@("terkalywestus", "West US")
            ,@("terkalyjapaneast", "Japan East")
            ,@("terkalycentralus", "Central US")
        )

# Loop through the storage accounts and get the primary storage key
ForEach($DC in $DataCenterList) {


    Start-Sleep -s 1

    # get the storage key using the storage account name
    $myStoreKey = (Get-AzureStorageKey -StorageAccountName $DC[0]).Primary
    # build the pieces of an <add key="..." value="..."/> entry for app.config
    $part1 = "<add key=""";
    $part2 = """";
    $part3 = " value=""DefaultEndpointsProtocol=https;AccountName=";
    $part4 = ";AccountKey=";
    $part5 = """/>";
    # the connection string name is the same as the storage account name
    Write-Host($part1 + $DC[0] + $part2 + $part3 + $DC[0] + $part4 + $myStoreKey + $part5);
  
}
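
To double-check the generated entries from code, here is a minimal C# sketch of how one of them could be consumed. It assumes the entries end up in the appSettings section of app.config, that the project references System.Configuration, and that the WindowsAzure.Storage client library is installed; "terkalyeastus" is just one of the storage account names used above.

using System.Configuration;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class Program
{
    static void Main()
    {
        // Read the connection string emitted by the PowerShell script.
        string connectionString = ConfigurationManager.AppSettings["terkalyeastus"];

        // Parse it and touch the Blob service to prove the string is valid.
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("smoke-test");
        container.CreateIfNotExists();
    }
}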
