Feed aggregator

Multi-monitor docking with Surface Pro 3 and Lenovo Yoga 3 Pro

MSDN Blogs - 3 hours 24 min ago

[My article on dpi-scaling tweaks generated a ton of interest. Several enthusiasts asked questions on the forums about how to choose the best multi-monitor docking setup for their Surface Pro and other high-end laptops. This article provides an analysis of some available options and weighs them against a small set of criteria common for enterprise and consumer settings.]

Intro

Suppose you have a premium laptop like the Surface Pro 3 or Lenovo Yoga 3 Pro and you’d like to use it as a desktop replacement with two external monitors. What accessories should you buy? To enable your laptop to fully replace a desktop computer, you need connections - lots of them. Desktops typically include 4 or more USB ports, 2 or more monitor ports, audio, Ethernet, and many other options. Some ports like Ethernet are essential in enterprise settings but not so much in consumer settings. When choosing accessories to provide the ports you need, consider the following factors:

  • cost
  • # of USB ports
  • # of monitor ports
  • monitor resolutions supported
  • other ports (audio/Ethernet)
  • convenience of a single-step docking action vs. manually plugging in several cables

There are a few key accessory categories that provide these ports:

  • Factory Dock option (Surface Pro 3)
  • DisplayLink-based USB 3.0 docks
  • DisplayLink-based USB graphics adapters
  • DisplayPort MST hubs
  • USB 3.0 hubs

 

Case Study: Lenovo Yoga 3 Pro

The Lenovo Yoga 3 Pro is a wonderfully versatile laptop that already includes a lot of connections such as 2 USB 3.0 ports and a micro-HDMI monitor output. So out of the box you can hook up a big monitor (2560x1440@60Hz) and a full size keyboard/mouse without any additional accessories. But what if you need to add a hard drive, second monitor, USB memory stick, Ethernet, USB 3-D printer, etc.?

[Need a cool picture of a Yoga+ Thinkpad dock + 2 monitors. If you have one, please drop me a note as I no longer have a Yoga to play with. Bonus points if you're using the hinge in a creative way.]

 

Lenovo Enterprise scenario

For Enterprise, a DisplayLink-based dock that provides two monitor outputs, Ethernet, and several USB 3.0 ports is probably the best option. Lenovo makes its own Thinkpad USB 3.0 dock that includes 5 USB ports, 2 DVI monitor connections, Ethernet, and audio. Other brands like Plugable and Targus provide docks with similar functionality at different price points, but if you are purchasing for enterprise and already have a supplier that works with Lenovo, it may be simpler to get the same brand. Using these docks is simple: plug in a single USB cable plus the power cable that came with your laptop (two cables total) and you’re good to go. The only drawback of these USB-based docks is that they are not natively supported by Windows; they piggyback on the Intel, AMD, or NVIDIA GPU device driver outside the best practices documented on MSDN for GPU devices. For most environments they work just fine, but they may not be as robust as a dedicated GPU running the monitor directly. Because the graphical output is managed by an additional software layer, some CPU clock cycles are used; on modern laptops this usage is not noticeable to end users and would only show up in performance benchmarks. Because many enterprises encourage their employees to use two monitors for productivity and ergonomic reasons, and the laptop itself only has a receptacle for one monitor, a USB dock is the obvious choice.

 

Lenovo Consumer scenario

The Enterprise solution works fine for consumers too, but if you’re on a budget you can skip the full dock and get a simple 4-port USB 3.0 hub and a micro-HDMI->HDMI adapter. This will let you use a single large monitor and up to 5 USB devices with your laptop (4 plugged into the hub and one plugged directly into the laptop). You’ll need to manually plug in 3 cables with this setup: power, USB, and HDMI. If you need a second external monitor, it’s probably best to just get a full USB dock like the Plugable UD-3900.

Case Study: Surface Pro 3

The official Surface Blog provides a lot of info on this already so I won’t try to duplicate it.

 

[Image courtesy Surface Pro blog]

 

Surface Enterprise scenario

For enterprise, a DisplayLink dock works well, and for budget-minded organizations the Plugable UD-3900 is probably the best choice. However, there is also the option of using the Surface-brand docking station. The Surface dock provides 5 USB connections, Ethernet, audio, etc., just like the DisplayLink docks, but it does not use a DisplayLink chipset; it uses the built-in Intel GPU for all monitors, so there is no CPU-usage penalty and no potential compatibility concern. It is also a more premium device with excellent build quality, and with integrated power you don’t need to remember your power brick or plug in power separately. The dock provides one mini-DisplayPort connection and the Surface tablet provides a second mini-DisplayPort connection, which allows 2 monitors to be connected with no additional devices. For customers that prefer a docking experience where cables don’t need to be plugged in manually, there are a few options:

  • If your monitors support DisplayPort MST then you can daisy-chain one monitor to the next so that all the monitors are connected through one DisplayPort cable attached to the dock. Most customers do not have these monitors and it is silly to go buy them just for this feature if you already have working monitors.
  • You can add an MST hub which allows connecting 2 or 3 monitors via a single DisplayPort cable. (Some MST hubs from 2012-2013 had hardware flaws which blocked using 2 or more monitors. The current models have corrected these issues and support 2 or 3 monitors just fine. If you happen to buy a used MST hub where the 2nd monitor doesn’t work, contact the vendor for a replacement.) These hubs need additional cables and some require an external power supply so you may have to deal with a mess of cables.
  • You can add a DisplayLink USB Graphics adapter. These are less expensive than MST hubs but have the same limitations as the USB docks as described above. They don’t need external power so there is much less cable clutter compared to the MST hubs.
Surface Consumer scenario

For consumer use, the DisplayLink docks and the Surface-brand dock work well, but they can be a little expensive. If you don’t need a premium experience, you can make do with a 4-port USB 3.0 hub and a mini-DisplayPort->HDMI adapter to plug into your regular monitor. If you need a second monitor, you can get an MST hub, a DisplayLink-based dock, or one of the USB graphics adapters. The most economical choice is, again, the Plugable UD-3900.

Product Summary Table

 

Dock/adapter                             | Price  | Monitors added*    | Connectivity Ports
Surface Pro 3 Docking Station            | $150   | 1x 2560x1440@60Hz  | 5 USB, combined audio in/out jack, Ethernet
Plugable UD-3900                         | $100   | 2x 1920x1200@60Hz  | 6 USB, audio in, audio out, Ethernet
Thinkpad USB 3.0 Dock 0A33970            | $150   | 2x 1920x1200@60Hz  | 5 USB, audio in, audio out, Ethernet
Targus ACP77USZ DV2K                     | $275   | 2x 2560x1440@60Hz  | 5 USB, audio in, audio out, Ethernet
DisplayPort MST hubs                     | ~$100  | 2x 2560x1440@60Hz  | None
DisplayLink USB Graphics adapter         | $35-60 | 1x 1920x1200@60Hz  | None
Micro-HDMI->HDMI adapter                 | $6     | 1x 2560x1440@60Hz  | None
Surface mini DisplayPort->HDMI adapter   | $40    | 1x 2560x1440@60Hz  | None
4-port USB 3.0 hub                       | $15    | N/A                | 4 USB

  * Note: many of these devices support alternate monitor resolutions such as 4K@30Hz, but I’ve listed the most popular premium 60Hz resolutions that folks actually use at work and home. If you have a specific monitor you’d like to use, check the specs of the device carefully to ensure it works at your desired resolution and refresh rate.

 


Security: 5 URLs that I found for K-12 programs from undercover agencies

MSDN Blogs - 3 hours 36 min ago
Security is a big deal right now, and there will likely be money for research, training, and high school programs through various agencies. I'm not saying these are going to give you money, but if you are a non-profit trying to get funding for your CS program, they might be another source. You might also want to check with the most recent corporation that got hacked. Sometimes you can jump in and get money, or more likely curriculum at no charge, from the various “undercover” agencies.  And here are the...(read more)

Networking infrastructure setup for Microsoft Azure as a disaster recovery site

MSDN Blogs - 10 hours 52 min ago

Prateek Sharma, Senior Program Manager, Cloud + Enterprise

Azure Site Recovery (ASR) lets you use Microsoft Azure as a disaster recovery site for your virtual machines.

When administrators think about adding disaster recovery capabilities to an application, they need to consider the networking infrastructure as a whole and plan accordingly, so that a failover can be completed with minimal downtime and users can continue to reach the application. This article walks through an example of how to set up the necessary networking infrastructure. We first introduce the application, then look at how the network is set up on-premises and in Azure, and finally cover how to perform a test failover and a planned failover.

1. The application

In this example we use a two-tier IIS-based application that uses SQL Server as its backend.

In the diagram above, an organization named Contoso has Active Directory installed, along with a two-tier IIS web application that uses SQL Server as its backend. SQL Server authenticates through Windows authentication, and all virtual machines are joined to the contoso.com domain. The application is accessed by office users inside the organization as well as by employees using mobile devices. Employees on the road connect to the corporate network over VPN.

2. On-premises network

The on-premises infrastructure for contoso.com is managed by a VMM 2012 R2 server. A VLAN-based logical network named Application Network has been created on the VMM server, and a VM network named Application VM Network has been created on top of it. All virtual machines in the application use static IPs, so a static IP pool is also defined for the logical network. Note: if the virtual machines are configured to use DHCP, the static IP pool is not needed.

 

All three virtual machines (the domain controller, the SQL backend, and the IIS frontend) are connected to the VM network described in the previous step. The static IPs assigned to each virtual machine are:

  • Active Directory and DNS – 192.168.0.3
  • SQL backend – 192.168.0.21
  • IIS frontend – 192.168.0.22
3. Azure network

Read the Azure virtual network overview to learn the basics of Azure virtual networks. An Azure virtual network named AzureNetwork has already been created in Microsoft Azure. When this network was created, the IP of the on-premises DNS server was specified as the DNS server IP (192.168.0.3 in this example). Point-to-site and site-to-site connectivity are also enabled on the network.

AzureNetwork uses the address range 10.0.0.0 – 10.0.0.255.

Note that we use an address range different from the on-premises address range for two main reasons:

  • We need to establish site-to-site connectivity with the on-premises network, and an S2S gateway cannot use the same IP range on both ends of the connection
  • The on-premises site may be running multiple applications, and we may want to fail over only some of the applications rather than the entire subnet

Note: if neither of the above requirements applies, we could define an address range identical to the on-premises range.
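
To confirm the DNS server and address range configured on the virtual network, something like the following can be run with the Azure (Service Management) PowerShell module of that era; the subscription name below is a placeholder, not from the article:

Add-AzureAccount                                            # sign in interactively
Select-AzureSubscription -SubscriptionName "Contoso DR"     # hypothetical subscription name
Get-AzureVNetSite -VNetName "AzureNetwork" | Format-List *  # inspect DNS servers and address space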

4. Set up site-to-site connectivity and AD replication

The network in Azure should simply be an extension of the on-premises network, so that the application can move seamlessly from one site to the other. Azure lets you add site-to-site connectivity to a virtual network created in Azure. You can add the site-to-site connection while creating the virtual network, or later. Refer to the step-by-step guide on adding site-to-site connectivity to an Azure virtual network at creation time; the steps to add it later are similar.

Once connectivity between the two sites is established, you can create an Active Directory and DNS server in Azure. This keeps applications running in Azure from having to go back to the on-premises AD and DNS for every name lookup and authentication request. Follow these steps to create Active Directory in Azure:

  1. It is recommended to create a separate site, AzureSite, in the on-premises Active Directory using Active Directory Sites and Services
  2. Create an IaaS VM on the network created in step 3
  3. Install the Active Directory Domain Services and DNS Server roles using Server Manager
  4. When promoting the server to a domain controller, specify the name of the on-premises domain, contoso.com. The IaaS virtual machine should be able to resolve contoso.com through DNS, since we specified the IP of the on-premises DNS server in step 3 (a minimal PowerShell sketch of steps 3 and 4 follows this list)
  5. Add this domain controller to the Active Directory site named AzureSite (if you created it)
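
A minimal PowerShell sketch of steps 3 and 4, assuming the ADDSDeployment module that ships with Windows Server 2012 R2 and an elevated session on the IaaS VM (the credential shown is a placeholder):

Install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools
Install-ADDSDomainController `
    -DomainName "contoso.com" `
    -SiteName   "AzureSite" `
    -InstallDns `
    -Credential (Get-Credential "CONTOSO\Administrator")
# The command prompts for the Directory Services Restore Mode password and reboots the server when done.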

Since a DNS server is now running in Azure, it is best to use that server for the IaaS virtual machines created from now on. To do so, go to AzureNetwork and modify the DNS server IP to the IP of the virtual machine created in the previous step.

AD replication frequency: you can change how often DNS records replicate by using Active Directory Sites and Services. You can schedule replication between Active Directory sites by following this article.

Is it mandatory to replicate Active Directory and DNS to Azure? If you are planning for a complete site disaster, Active Directory must be replicated to Azure. But if you expect to perform planned failovers of only some applications at any given time, and those applications do not talk to Active Directory and DNS very frequently, you can also choose not to replicate Active Directory and DNS to Azure. In that case, you can provide the IP of the on-premises DNS server in the network created in Azure.

5. Set up point-to-site connectivity

After the application fails over to Azure, we still want employees on mobile devices to be able to reach it. To achieve this, we need to create a point-to-site connection to AzureNetwork. The linked article provides a step-by-step guide to establishing a point-to-site VPN connection to an Azure virtual network. Once this connection is set up, the network environment looks like the diagram below:

 

6. Create a test network

Azure Site Recovery gives you the ability to perform a test failover without affecting the production workload. For this you must create an additional Azure virtual network. Create another network using the same IP range as the network created in step 3; we will call it AzureTestNetwork. Just do not add site-to-site or point-to-site connectivity to this network.

7. Set up Azure Site Recovery

Now that the infrastructure setup is complete, the following steps must be carried out in ASR:

  1. Configure the clouds
  2. Map “Application VM Network” to AzureNetwork
  3. Enable protection for the following:
    1. Active Directory – although we use AD replication to replicate AD to Azure for production, we still need an AD instance during a test failover. That is why we also enable protection on AD using ASR. How AD is used in a test failover is covered in section 9 later in this article
    2. IIS frontend
    3. SQL backend

These steps are explained in detail in the ASR getting started guide and the ASR deployment guide, so they are not repeated here. Once this setup is complete, the network environment looks like the diagram below:

8. Create a recovery plan

When creating the recovery plan, we add the two virtual machines, IIS frontend and SQL backend. We then customize the recovery plan by adding another group and moving the IIS frontend virtual machine into Group 2. We want the SQL backend virtual machine to fail over first, with the IIS frontend virtual machine starting afterwards, so that the IIS application can start successfully. The recovery plan should look like this:

9. Perform a test failover or DR drill

Organizations must perform test failovers, or disaster recovery drills, at regular intervals to verify that their DR preparations are complete and to meet compliance requirements. ASR gives you the ability to perform a test failover without affecting the production workload. For the test failover or DR drill we use the test network from step 6, following these steps:

  1. Go to the AD virtual machine in ASR and perform a test failover into AzureTestNetwork
  2. Once the IaaS virtual machine for AD is created in AzureTestNetwork, check the IP assigned to it
  3. If that IP differs from the DNS IP specified for AzureTestNetwork, change the DNS IP to the IP the AD VM actually received. Azure assigns IPs starting from the 4th IP of the range defined in the virtual network, so if the range added to the network is 10.0.0.0 – 10.0.0.255, the first virtual machine created in the network gets 10.0.0.4. Since AD will be the first virtual machine failed over in the DR drill, you can predict the IP it will get and add it as the DNS IP in AzureTestNetwork accordingly
  4. Go to the recovery plan created in step 8 and perform a test failover into AzureTestNetwork. As the virtual machines fail over and start in Azure, they reach the DNS server that has already failed over into AzureTestNetwork and register themselves. From then on the AD-DNS VM running in AzureTestNetwork uses the updated IPs for the two virtual machines in the recovery plan. Note that these IPs may differ from the on-premises IPs even if the virtual machines previously used static IPs
  5. Create an IaaS virtual machine in AzureTestNetwork
  6. You should now be able to reach the IIS application from this IaaS VM using http://iisfrontend/
  7. Once testing is complete, you can mark the test failover as complete from the Jobs view in ASR. This deletes the virtual machines that were created in AzureTestNetwork

10. Perform a planned failover

To perform a planned failover of the application, go to the recovery plan created in step 8 and execute a planned failover. Once the planned failover completes and the virtual machines start in Azure, they reach the Azure server running DNS and update their IPs. The new IPs propagate to the DNS running on-premises at the frequency you may have configured in step 4. You can also choose to replicate the DNS records on demand by going to Active Directory Sites and Services, expanding a site, then the domain controller, and then NTDS Settings. Right-clicking NTDS Settings lets you choose replication targets and sources for the selected domain controller:
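
As an alternative to the Sites and Services UI, replication can also be triggered on demand from a command line with the repadmin tool (included with the AD DS management tools); for example:

# Push replication of all partitions to all partners, across sites, reporting servers by distinguished name
repadmin /syncall /A /d /e /P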

Once the DNS records have replicated and their time-to-live (TTL) has expired, the application should be reachable from on-premises clients, or from clients connected to AzureNetwork over the P2S VPN:

In this article we reviewed how to set up disaster recovery for a two-tier IIS-based web application that uses SQL Server as its backend. We covered how to set up a domain controller and DNS to run a DR drill or a planned failover. We also saw how to use Azure virtual networks to establish site-to-site and point-to-site connectivity so that end users can keep reaching the application seamlessly even after a failover.

If you have other questions, visit the Azure forums on MSDN for more information and to engage with other customers.

You can also check out more product information, or sign up for a free Azure trial to start trying out Microsoft Azure with Azure Site Recovery.

If you have any questions, visit the MSDN community, where experts can answer your Windows Azure technical questions, or call the 21Vianet customer service hotline at 400-089-0365 / 010-84563652 for information about the various services.

This article was translated from: http://azure.microsoft.com/blog/2014/09/04/networking-infrastructure-setup-for-microsoft-azure-as-a-disaster-recovery-site/

Azure Site Recovery enables one-click orchestrated failover of virtual machines to Azure

MSDN Blogs - 11 hours 42 min ago

Ruturaj Dhekane, Program Manager, Cloud + Enterprise

Azure Site Recovery can now protect your workloads and recover them as IaaS virtual machines in Azure with its "disaster recovery to Azure" capability. Brad Anderson announced the availability of disaster recovery to Azure for global Azure on October 2. Since then we have seen a large number of customers adopt this capability to protect and recover their virtual machines in Azure. If you haven't started using the service yet, watch the TechEd video and sign up for the service.

In this short blog post you will learn how to use an Azure Site Recovery feature called recovery plans to achieve disaster recovery to Azure in a consistent, accurate, repeatable, and automated way. The post also briefly covers how to build a recovery plan and provides guidance on a few things to keep in mind when failing over to Azure virtual machines.

Azure as a target site changes the way users look at and interact with failed-over virtual machines. However, the Site Recovery team has worked to keep the experience of failing over to Azure intuitive and simple. Our goal remains the same: one-click disaster recovery. We have also made sure that the experience of disaster recovery to Azure stays similar to recovering an application to another VMM site. Naturally, Azure recovery plans can be used for planned failovers, unplanned failovers, and test failover DR drills.

Azure recovery plans continue to address the following user needs:

  1. Define a group of virtual machines that fail over together.
  2. Define dependencies between virtual machines so that the application is recovered accurately.
  3. Automate the recovery, along with customized manual actions, so that tasks other than virtual machine failover can also be completed

Creating a recovery plan

To get familiar with the experience, let's take a simple three-tier virtualized application as an example. The application consists of BackendSQL, MiddlewareApp, and FrontendIIS. Let's create a recovery plan so that the application can be recovered to Azure when needed. We will name it FinanceAppRecovery.

 

  • Create a new recovery plan.
  • Give the recovery plan a name.
  • Specify the source as the VMM server and the target as Microsoft Azure.
  • Add the application's virtual machines to the recovery plan.

Customizing the recovery plan

A simple three-tier application might look like this: the frontend server depends on the middleware and SQL servers, and the middleware server also depends on the SQL server. For the application to work correctly, you need to make sure these dependencies are modeled accurately. Our service will recover the virtual machines according to the dependencies you model.

Using groups, you can define these dependencies within the application. In this demo scenario, the user creates three groups, one per application tier, and moves each virtual machine into the right group: BackendSQL into Group 1, MiddlewareApp into Group 2, and FrontendIIS into Group 3. This ensures that when the on-premises application is shut down, the Group 3 virtual machines are shut down first, followed by Group 2, and finally Group 1, so no data is lost during the shutdown.

  • Use the "Group" button on the toolbar to add a new group
  • After adding a group, select a virtual machine and choose "Move Virtual Machine" to move it into the desired group

The startup order follows the group numbering: Group 1 virtual machines start first, then Group 2, then Group 3. This ensures that backend virtual machines start before the virtual machines that depend on them. In this example, by the time the FrontendIIS virtual machine in Group 3 starts, the BackendSQL and MiddlewareApp virtual machines it depends on are already started and running in Azure.

The figure above shows what the recovery plan looks like once it is created for the three-tier application.

Initiating a failover

When you initiate a failover, the plan recovers the on-premises virtual machines as IaaS virtual machines in Azure. The plan also ensures that the dependencies are respected and that the virtual machines start in the right order. The virtual machines are attached to virtual networks based on the network mapping input.

After initiating the failover, you can go to the Jobs page to track progress. The Jobs page shows fine-grained progress and tells you whether any errors occurred during execution.
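
Job progress can also be inspected from PowerShell. The sketch below assumes the Azure Site Recovery cmdlets that shipped with the Azure PowerShell module at the time and that the vault settings have already been imported; the cmdlet name and output properties may differ in your module version:

# Assumption: Get-AzureSiteRecoveryJob is available in the installed Azure module
Get-AzureSiteRecoveryJob | Format-List *   # inspect the state and any errors of the failover jobs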

Once the failover completes, you can see that a FinanceAppRecovery cloud service has been created for you. If you browse the instances deployed in it, you can see that the three virtual machines have been recovered as IaaS virtual machines and are ready to go.

The virtual machines are now recovered in Azure, and you can interact with them as IaaS VMs.

The recovery plan is now in a "Waiting to be committed" state. Make sure you commit the plan to complete the failover.

When you are ready, check out the Azure Site Recovery product page and sign up for an Azure trial to get started. If you run into any issues or want to engage with other users, visit the Azure Site Recovery forum on MSDN. We keep improving and shipping new features, and we are listening!

If you get stuck, or want to understand why something happens, take a look at the FAQ below.

Question 1: When creating a recovery plan I get a warning: "The name can contain only letters, numbers, and hyphens. It should start with a letter and end with a letter or a number." I never got this warning when creating plans that recover to another VMM site.

Answer: When a recovery plan fails over to Azure, a cloud service with the same name as the recovery plan is created, and the recovered virtual machines are deployed into it. The virtual machines are created in the same affinity group as the network to which the virtual machines in the plan belong. The cloud service name is a global public endpoint that you can use to reach the virtual machines. If the recovery plan is named "FinanceApplication", the cloud service gets a financeapplication.cloudapp.net subdomain. When you fail the recovery plan back to VMM, the cloud service is deleted.

Question 2: Is there a limit on the number of virtual machines in a recovery plan?

Answer: A cloud service cannot hold more than 50 virtual machines, so the number of virtual machines you can add to a recovery plan is also limited to 50.

Question 3: Each subscription is limited to 20 cloud services. Does that mean I cannot create more than 20 recovery plans?

Answer: The limit on the number of cloud services does not limit the number of recovery plans you can create, but no more than 20 recovery plans can actually be failed over. You can raise a request with Azure customer support to increase the number of cloud services.

Question 4: I have a virtual machine that serves web pages on port 80, but after failing over to Azure I cannot connect to the virtual machine on port 80, nor can I connect over RDP. What do I need to do to enable those ports?

Answer: Virtual machines failed over to Azure are not directly reachable from the Internet. You need to configure endpoints to access them. See this tutorial to learn how to configure endpoints.
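
For example, with the Azure (Service Management) PowerShell cmdlets an HTTP endpoint can be added like this; the cloud service and VM names below are placeholders:

Get-AzureVM -ServiceName "FinanceAppRecovery" -Name "FrontendIIS" |
    Add-AzureEndpoint -Name "HTTP" -Protocol tcp -LocalPort 80 -PublicPort 80 |
    Update-AzureVM   # persists the endpoint change on the VM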

Question 5: I have configured a Remote Desktop (RDP) endpoint but still cannot access the virtual machine. What is wrong?

Answer: Was the RDP port enabled on the virtual machine before you performed the failover? If this was not configured in the Windows settings, you will not be able to access the virtual machine. Make sure you have allowed other computers to connect to your computer using Remote Desktop.

If you have any questions, visit the MSDN community, where experts can answer your Windows Azure technical questions, or call the 21Vianet customer service hotline at 400-089-0365 / 010-84563652 for information about the various services.

This article was translated from: http://azure.microsoft.com/blog/2014/08/05/azure-site-recovery-enables-one-click-orchestrated-failover-of-virtual-machines-to-azure/

Known issues with BHM v2

MSDN Blogs - 15 hours 49 min ago

This post lists the known issues reported in the latest version of BizTalk Health Monitor (currently v2) and, when possible, their workarounds.
You can leave a comment on this post if you want to report new issues.
We will update BHM periodically to fix them and also bring new features.

 

Known issues:

 


When a user account is specified to schedule a BHM collection, the creation of the task returns the following error: "A specified logon session does not exist. It may already have been terminated".

The policy "Do not allow storage of passwords and credentials for network authentication" is maybe enabled preventing to create a task under a specific user account.
If it is a local policy and if you can disable it, follow these steps :

- Open the 'Local Security Policies' MMC
- Expand 'Local Policies' and select 'Security Options'
- Disable that property - the system must reboot for the change to take effect



BHM hangs and crashes on a localized OS when creating a new monitoring profile.

The root cause was identified and fixed. We will soon release a post-V2 update fixing that problem.
Users who want a temporary fix can contact JP (jpierauc@microsoft.com).



When creating a monitoring profile that targets a BizTalk group the logged-on user cannot access, the console crashes.

During the creation of a profile, BHM connects to the management database of the targeted group to retrieve the list of BizTalk servers of that group; this list is used in the performance nodes.
This connection attempt, made under the interactively logged-on user account, fails because that user does not have rights on the management database, and the error is not handled well.
We will soon release a post-V2 update that allows specifying the user account in the new-profile dialog box and handles connection errors better.

You should be able to work around the issue by manually editing an XML profile file, following the steps below (a scripted sketch of the same edits appears after the steps):

- Duplicate an existing profile using the new "duplicate profile" menu item

- Open a CMD window with elevated admin privileges

- In this CMD window, change the folder to "C:\ProgramData\Microsoft\BizTalk Health Monitor"

- Open in Notepad the XML file corresponding to the duplicated profile

- In this XML file, modify the values of the MGMTDBSERVER and MGMTDBNAME properties to specify the location of the new management database you want to target

- Modify the value of the "DEFAULTBTS" property, entering the name of one BizTalk server of the newly targeted group

- Modify the value of the "ALLBTS" property, entering this same BizTalk server or the complete list of all BizTalk servers of the newly targeted group, e.g. Value="BTSRV1:True:True:True, BTSRV2:False:False:True"

- Save the XML file

- Open the BHM console: you should then see the new profile listed

- Display the Settings dialog box of the new profile and specify the user account to run the collection as
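
For those who prefer scripting these edits, here is a rough PowerShell sketch of the same changes. The file name and the assumption that each property is stored as an XML node carrying Name and Value attributes are guesses; open your duplicated profile in Notepad first and adjust the XPath to match its actual layout:

$profilePath = "C:\ProgramData\Microsoft\BizTalk Health Monitor\DuplicatedProfile.xml"   # hypothetical file name
[xml]$bhmProfile = Get-Content -Path $profilePath -Raw

# Properties to retarget; the values below are examples for the new BizTalk group
$newValues = @{
    MGMTDBSERVER = "NEWSQLSRV"
    MGMTDBNAME   = "BizTalkMgmtDb"
    DEFAULTBTS   = "BTSRV1"
    ALLBTS       = "BTSRV1:True:True:True, BTSRV2:False:False:True"
}

foreach ($entry in $newValues.GetEnumerator()) {
    # Assumes nodes look like <Property Name="MGMTDBSERVER" Value="..."/>
    $node = $bhmProfile.SelectSingleNode("//*[@Name='$($entry.Key)']")
    if ($node -ne $null) { $node.SetAttribute("Value", $entry.Value) }
}

$bhmProfile.Save($profilePath)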

Door 20: Videos from Connect() are available on-demand!

MSDN Blogs - 16 hours 52 min ago
MSDN Blogs - 16 hours 52 min ago Welcome to door 20 of Microsoft's advent calendar for developers! Remember that every door you share on Twitter or Facebook gives you a chance to win a Surface Pro 3, but only 1 share per door in each medium counts! The big event of the fall, Connect(), took place in November. If you missed the live broadcasts, don't worry: the keynotes and the other technical videos are available via Channel 9. Fill your cup with mulled wine, bring out the gingerbread...(read more)

Merry Christmas

MSDN Blogs - 17 hours 52 min ago

How to capture a Fiddler trace for Git for Windows

MSDN Blogs - Fri, 12/19/2014 - 21:27

Prerequisites to download/install:

Steps:

  1. Run gitfiddler.cmd in a new command prompt.  This script configures git.exe to use Fiddler as a proxy, and then waits for a key press.  Once a key is pressed, the script clears the settings that it set, and then exits. (A sketch of the proxy settings such a script typically toggles appears after these steps.)
    • Leave the script running for now.
    • READ the script before running it!
  2. Start Fiddler
  3. Run your git commands
  4. Select the captured requests in Fiddler and save them in a Session Archive Zip (*.saz) file.
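
For reference, here is a minimal sketch (not the actual gitfiddler.cmd) of the proxy settings such a script typically toggles, assuming Fiddler is listening on its default port 8888:

# Point git at the local Fiddler proxy
git config --global http.proxy  http://127.0.0.1:8888
git config --global https.proxy http://127.0.0.1:8888
# For HTTPS remotes, Fiddler re-signs traffic with its own root certificate, so you either
# trust that certificate or (less safely, for debugging only) disable verification:
# git config --global http.sslVerify false

Read-Host "Run your git commands, then press Enter to restore the settings"

# Clear the temporary settings
git config --global --unset http.proxy
git config --global --unset https.proxy
# git config --global --unset http.sslVerify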

Building Mission Critical Systems using Cloud Platform Services

MSDN Blogs - Fri, 12/19/2014 - 20:42
Overview

Mission critical systems will often have higher SLA requirements than those offered by the constituent cloud platform services. It is possible to build systems whose reliability and availability are higher than the underlying platform services by incorporating necessary measures into the system architecture. Airplanes, satellites, nuclear reactors, oil refineries and other mission critical real world systems have been operating at higher reliabilities in spite of the constant failures of the components. These systems employ varying degrees of redundancy based on the tolerance for failure. For instance, Space Shuttle used 4 active redundant flight control computers with the fifth one on standby in a reduced functionality mode. Through proper parallel architecture for redundancy, software systems can attain the necessary reliability on general purpose cloud platforms like Microsoft Azure. The SLA numbers used in this document are theoretical possibilities and the real SLA numbers depend on the quality of the application architecture and the operational excellence of the deployment.

Reliability of Software Systems

Any system that is composed of other sub systems and components exhibits a reliability trait that is the aggregate of all of its constituent elements. This is true for airplane rudder control, deployment of solar panels in a satellite, or the control system that maintains the position of control rods in a nuclear reactor. Many of these systems include both electronic and mechanical components that are prone to fail, yet these systems apparently function at a high degree of reliability. These real world systems attain high reliability through parallel architecture for the components in the critical path of execution. The parallel architecture is characterized by varying degrees of redundancy based on the reliability goals of the system.

The redundancy can be seen as heterogeneous or homogeneous; in heterogeneous redundancy the hardware and software of the primary and standby components will be made by entirely independent teams. The hardware may be manufactured by two independent vendors, and in the case of software two different teams may write and test the code for the respective components. The previously mentioned Space Shuttle flight control system is an example of heterogeneous redundancy where the 5th standby flight control computer runs software written by an entirely different team.

Not every system can afford heterogeneous redundancy because the implementation can get very complex and expensive. Most commercial software systems use homogeneous redundancy where same software is deployed for both primary and redundant components. With homogeneous redundancy it is difficult to attain the same level of reliability as its heterogeneous counterpart due to the possibility of the correlated failures resulting from the software bugs and hardware faults replicated through homogeneity. For the sake of simplifying the discussion we will ignore the complexity of correlated failures in commercial systems caused by homogeneous deployments.

Impact of Redundancy on Reliability

Borrowing from control system reliability concepts, if there are n components in the execution path and each component has a probability of success Ri, the overall system reliability is the product of all the success probabilities, as shown by the following equation:

Rsystem = R1 x R2 x ... x Rn

If a system has three independent components in the execution path the reliability of the system can be expressed by the following equation:

Rsystem = R1 x R2 x R3

This equation assumes that the reliability of the individual component doesn’t depend on the reliability of the other participating components in the system. Meaning that there are no correlated failures between the components due to the sharing of the same hardware or software failure zones.

System with no Redundancy

The following is a system with three components with no redundancy built into the critical path of execution. The effective reliability of this system is the product of R1, R2 and R3 as shown by the above equation:

Since no component can be 100% reliable, the overall reliability of the system will be less than the reliability of the most reliable component in the system. We will use this in the context of a cloud hosted web application that connects to cloud storage through a web API layer. The server side architecture schematic with hypothetical SLAs for the respective components is shown below:

Figure 1: Multi-tiered web farm with no redundancy

The SLA of the Web Farm and the Web API is 99.95 and that of Cloud Storage is 99.9; hence the overall system reliability is given by the following equation:

Rsystem = 0.9995 x 0.9995 x 0.9990 = 0.998

The system’s reliability went down to 0.998 in spite of each component operating at a higher level of individual reliability. Complex systems tend to have more interconnected components than shown in this example and you can imagine what it does to the overall reliability of the system.

Let us see how redundancy can help us reaching our reliability objective of, say 99.999, through the progressive refinement of the above application architecture.

Double Modular Redundancy (DMR)

In a Double Modular Redundancy (DMR) implementation, two active physical components combined with a voting system will form the logical component. If one of the components were to fail, the other will pick up the workload, and the system will never experience any downtime. Proper capacity planning is required to accommodate the surge in workload due to the failover in an active-active system. Active-passive systems tend to have identical capacity for both the execution paths and hence may not be an issue. The application outage can only occur if both the physical components fail.

Component #2 in the above system is duplicated so that if one instance fails the other will take over. The requests from component #1 will go through both the paths and the voting system will decide which output of the components is the correct one to be accepted by component #3. This typically happens in a control system circuitry, but similar concepts can be applied to software systems. To attain higher reliability in this configuration, it is absolutely critical for both the instances of component #2 to be active. In control system circuitry, DMR is not sufficient; if two outputs of component #2 disagree there is no way to verify which component is correct. DMR can effectively be employed in scenarios where component #1 can decide if the connection to component #2 is successful through the execution status codes known a priori.

An example of DMR in the computer networking space is the creation of redundancy (active-standby mode) with network load balancers to prevent single point of failure. In this case, the voting system typically is the heartbeat from master to the standby (e.g. Cisco CSS) system. When the master fails the standby takes over as the master and starts processing the traffic. Similarly DMR can be applied to web farms where HTTP status codes form the implicit voting system.

The reliability of a component with DMR applied can be computed by subtracting the product of the failure probabilities (since both instances have to fail at the same time) from 1:

Rcomponent = (1 - (1 - R1) x (1 - R2))

If the individual load balancer’s SLA is 99.9, the redundant deployment raises its reliability to 99.9999.
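
As a quick sanity check on these numbers, the serial and parallel formulas can be expressed in a few lines of PowerShell (not from the article, just a sketch of the arithmetic):

# Reliability of components in series: every component in the execution path must succeed
function Get-SerialReliability {
    param([double[]]$R)
    $ok = 1.0
    foreach ($r in $R) { $ok *= $r }
    return $ok
}

# Reliability of redundant (parallel) instances: the component fails only if every instance fails
function Get-ParallelReliability {
    param([double[]]$R)
    $failAll = 1.0
    foreach ($r in $R) { $failAll *= (1 - $r) }
    return 1 - $failAll
}

Get-ParallelReliability -R 0.999, 0.999         # DMR on a 99.9% load balancer -> 0.999999
Get-SerialReliability -R 0.9995, 0.9995, 0.999  # Figure 1, no redundancy -> ~0.998
Get-SerialReliability -R 0.9995, (Get-ParallelReliability -R 0.9995, 0.9995), 0.999   # Figure 2 -> ~0.9985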

Now we will work on improving the availability of the deployment shown in Figure 1 through DMR. After applying redundancy to the Web API layer the systems looks like Figure 2 shown below:

Figure 2: Double Modular Redundancy for the Web API layer

Reliability of this system can be computed using the following equation:

Rsystem = 0.9995 x (1 – (1 - 0.9995) x (1 - 0.9995)) x 0.999 ≃ 0.9985

Redundancy of component #2 only marginally improved the system’s reliability due to the availability risk posed by the other two components in the system. Now let us implement redundancy for the storage layer which gives us the system reliability of 99.95 as shown by the following equation:

Rsystem = 0.9995 x (1 – (1 - 0.9995) x (1 - 0.9995)) x (1 – (1 - 0.999) x (1 - 0.999)) = 0.9995

Redundancy with stateful components like storage is more complex than the stateless components such as web servers in a web farm. Let us see if we can take the easy route to get acceptable reliability by adding redundancy at the web frontend level and removing it at the storage level. The schematic looks like the one below:

Figure 3: Double Modular Redundancy for the Web and the Web API layers

 

Rsystem = (1 – (1 - 0.9995) x (1 - 0.9995)) x (1 – (1 - 0.9995) x (1 - 0.9995)) x 0.999 ≃ 0.999

This still did not give us the target rate of, say, 0.99999. The only option we have now is to bring back the redundancy to storage as shown below:

Rsystem = (1 – (1 - 0.9995) x (1 - 0.9995)) x (1 – (1 - 0.9995) x (1 - 0.9995)) x (1 – (1 - 0.999) x (1 - 0.999)) ≃ 0.999999

The following table shows the theoretical reliabilities that can be attained by applying redundancy at various layers of the schematic in Figure 1:

 

Redundancy Model | Web App (SLA = 99.95) | Web API (SLA = 99.95) | Azure Table (SLA = 99.9) | Attainable SLA
Deployment #1    | NO  | NO  | NO  | 99.80
Deployment #2    | YES | NO  | NO  | 99.85
Deployment #3    | NO  | YES | NO  | 99.85
Deployment #4    | NO  | NO  | YES | 99.90
Deployment #5    | YES | YES | NO  | 99.90
Deployment #6    | NO  | YES | YES | 99.95
Deployment #7    | YES | YES | YES | 99.99999

After applying redundancy across all system components, we finally reached our goal of 99.999. Now we will look at the implementation strategies of this scheme on Azure. As mentioned previously the homogeneity of the hardware and software for commercial systems may impact the reliability numbers shown in the above table. See the Wikipedia article “High Availability” for the downtime numbers of various nines of availability.

DMR within an Azure Region

When redundancy is being designed into the system, careful consideration has to be given to make sure that both the primary and the redundant components are not deployed to the same unit of fault isolation. Windows Azure infrastructure is divided into regions where each region is composed of one or more data centers. In our discussion we will use “region” and “data center” interchangeably based on the physical or logical aspects of the context.

Failures in any given data center can happen due to the faults in the infrastructure (e.g. Power and cooling), hardware (e.g. networking, servers, storage), or management software (e.g. Fabric Controller). Given that the first two are invariant for a DC, software faults can be avoided if applications are given a chance to deploy to multiple fault isolation zones. In the context of Azure, compute cluster and storage stamp are such fault isolation zones.

Each Windows Azure data center is composed of clusters of compute nodes, and each cluster contains approximately 1,000 servers. The first instinct is to deploy multiple cloud services for compute redundancy; however, at the time this article was written, the ability to override Azure's decisions on compute cluster selection when provisioning PaaS or IaaS VMs was not exposed through the portal or through the REST APIs. So there is no guarantee that any two cloud services, either within the same Azure subscription or across multiple subscriptions, will be provisioned on two distinct compute clusters.

Similarly, Azure Storage uses clusters of servers called storage stamps, each of which is composed of approximately 1,000 servers. Developers can't achieve redundancy at the storage layer by placing two storage accounts on two different storage stamps in a deterministic manner.

Since developers have no control over increasing the redundancy for single region deployments, the model in Figure 3 can’t be realized unless multiple regions are considered. The system shown in Figure 1 is what can be attained realistically with Windows Azure through a single region deployment which caps the maximum achievable system reliability at 0.998.

DMR through Multi-Region Deployments

The implementation shown in Figure 3 can only be realized through multiple Azure regions because of the lack of control over compute cluster and storage stamp selection in a single Azure region. The only reliable way of getting the desired redundancy is to deploy the complete stack in each region, as shown in Figure 5.

Figure 5: DMR architecture deployed to two Azure regions

Affinity-less interaction between application layers in an active-active deployment shown above is too complex to implement due to the near real time multi-master replication requirements of the persistent application state (e.g. Azure Table) resulting from the arbitrary cross DC access of the application layers. No matter which storage technology is used, changes to the business entities have to be written to storage tables in both the data centers. Preventing arbitrary layer interaction between data centers reduces the state complexity issues and hence the architecture can be simplified as shown below:

Figure 6: DMR architecture deployed to two Azure regions

Let us describe the changes that we made from the hypothetical system in Figure 5 to an implementable model.

  • The inter-layer communications between the data centers is removed due to the complexities involved with the multi-master write activities. Instead of an active-active model we selected an active-passive model where the secondary is in a warm standby mode.

  • Azure Traffic Manager is replaced with a custom traffic manager (e.g. an on-premise distributed traffic manager or a client rotating through a list of URLs) that can have higher availability than Azure Traffic Manager’s SLA of 99.99. With Azure Traffic Manager usage, the max reliability of even the multi-region deployment will mathematically be <= 99.99.

  • Since Azure Table doesn’t support multi-master replication and doesn’t give storage-account-level control over failover (the entire stamp has to fail [6]) to the geo-replicated secondary for writes, application-level writes are required. In order for this to work, a separate storage account is required in the secondary data center.

  • Web API level Azure Table replication requires local writes to be asynchronously queued (e.g. storage or service bus queue) to the remote storage table because otherwise it will reduce the SLA due to the tight coupling of the remote queuing infrastructure.

The above implementation can turn into an active-active setup if the storage state can be cleanly partitioned across multiple deployments. The transactional data from such partitions can be asynchronously brought into a single data warehouse for analytics [4].

Content-driven applications like media streaming, e-Books, and other services can easily build systems with 99.999 on Azure by distributing content through multiple storage accounts located in multiple Azure regions. So, with a shared-nothing architecture, the setup in Figure 3 can easily be translated into an active-active deployment.

Triple Modular Redundancy (TMR)

Triple modular redundancy, as the name indicates, involves the creation of 3 parallel execution paths for each component that is prone to fail. TMR can dramatically raise the reliability of the logical component even if the underlying physical component is highly unreliable. For most software systems DMR is good enough, unless the systems (such as industrial safety systems and airplane systems) can potentially endanger human life. In fact, the Space Shuttle [2] employed quadruple redundancy for its flight-control systems.

The Azure architecture shown in Figure 3 is morphed into the Figure 7 shown below for TMR:

Figure 7: TMR architecture deployed to three Azure regions

The theoretical availability for such a system is as shown below:

Rsystem = (1 – (1-0.998) x (1-0.998) x (1-0.998)) = 0.99999999

99.999999 is unthinkable for most cloud-hosted systems because it is an extremely complex and expensive undertaking. The same architecture patterns that are applied to DMR can easily be applied to TMR, so we will not rehash the strategies here.

CAP Theorem and High Availability of Azure Deployments

Distributed multi-region deployment is a key success factor for building highly available systems on cloud platforms including Microsoft Azure. Per the CAP theorem [8], you can only guarantee two of the three (C – Consistency, A – Availability, P – network Partition tolerance) systemic qualities in applications that use state machines. Given that multi-region deployments on Azure are already network partitioned, the CAP theorem can be reduced to the inverse relationship of C and A. Consequently it is extremely hard to build a highly consistent system that is also highly available in a distributed setting. Availability in the CAP theorem refers to the availability of the node of a state machine, and in our case Azure Table is such a logical node. In this case, the overall deployment's Availability and Consistency are correlated to the A and C of Azure Table. Azure Table can be replaced with any state machine (e.g. MongoDB, Azure SQL Database, or Cassandra) and the above arguments still hold.

High availability at the order of 99.99 and above requires the relaxation of system’s dependency on state consistency. Let us use a simple e-commerce scenario where shoppers can be mapped to their respective zip codes. We needed TMR due to the availability requirement of 99.999 or above for this hypothetical system. We can safely assume that the reference data like product catalog is consistently deployed to all the Azure regions.

Figure 8: Embarrassingly parallel data sets for high availability

This multi-region deployment requires identifying embarrassingly parallel data sets within the application state and mapping each deployment as the primary for mutability of its respective data sets. In this case, the deployment in each region is mapped to a set of zip codes, and all the transactional data from the users belonging to those zip codes is written to the respective regional database partition (e.g. Azure Table). All three regions serve the same application in an active-active setting. Let us assume that the replication latency is 15 min for an entity write. We could set up the following mapping for primary and fail-over operations:

Zip Code Range  | Primary   | Secondary | Tertiary
10000 to 39999  | Region #1 | Region #2 | Region #3
40000 to 69999  | Region #2 | Region #3 | Region #1
70000 to 99999  | Region #3 | Region #1 | Region #2

If Region #1 is down, users can be sent to the other regions with an appropriate annotation that their shopping carts and order status are not current. Users can continue browsing the catalog, modify the shopping cart (albeit an older one), and view the delayed order status. Given that this only happens at the failure of a region, which is expected to be a rare event, users may be more forgiving than with the dreaded "System is down, come back later" message.
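
A tiny, hypothetical sketch of the routing logic implied by the table above: given a zip code and the set of regions currently down, pick the first healthy region in the preferred order (the region names and the health input are illustrative, not an Azure API):

function Get-ServingRegion {
    param(
        [int]$ZipCode,
        [string[]]$DownRegions = @()   # regions currently unavailable
    )
    # Preference order per zip range, taken from the mapping table above
    if     ($ZipCode -le 39999) { $order = 'Region1', 'Region2', 'Region3' }
    elseif ($ZipCode -le 69999) { $order = 'Region2', 'Region3', 'Region1' }
    else                        { $order = 'Region3', 'Region1', 'Region2' }
    # Serve from the first preferred region that is not down
    $order | Where-Object { $DownRegions -notcontains $_ } | Select-Object -First 1
}

Get-ServingRegion -ZipCode 10001 -DownRegions 'Region1'   # -> Region2 (with possibly stale cart data)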

Tying this back to the CAP theorem, we attained higher availability through an active-active TMR deployment at the expense of the overall consistency of the state data. If the system functionality can't tolerate the data inconsistency resulting from replication latency, one can't use an eventual consistency model. A strong consistency implementation between Azure regions can result in reduced availability due to the cascading effect of a region's failure because of the tight coupling with other regions.

Summary

Given the current SLA model of Azure managed services such as compute, storage, networking, and other supporting services, architecting systems with high availability of 99.99 and beyond requires a multi-region deployment strategy. In most cases the ceiling for a single-region Azure deployment is 99.95 or below due to the compounding effect of the SLAs of the multiple services used in a typical cloud service. Redundancy with a shared-nothing architecture is the key success factor for achieving high system reliability. Building highly reliable systems of 99.99 or above is an expensive undertaking that requires deliberate design and architecture practices regardless of whether it is for the cloud or on-premises. Factoring the trade-offs between consistency and availability into the application architecture will produce the desired solution outcome with manageable complexity.

Thanks to Mark Russinovich, Patrick Chanezon, and Brent Stineman for helping me in improving this article.

References
  1. "Redundant System Basic Concepts." - National Instruments. N.p., 11 Jan. 2008. Web. 30 Mar. 2014. http://www.ni.com/white-paper/6874/en/.
  2. "Computers in the Space Shuttle Avionics System." Computer Synchronization and Redundancy Management. NASA, n.d. Web. 30 Mar. 2014. http://www.hq.nasa.gov/office/pao/History/computers/Ch4-4.html.
  3. Russinovich, Mark. "Windows Azure Internals." Microsoft MSDN Channel 9. Microsoft, 1 Nov. 2012. Web. 30 Mar. 2014. http://channel9.msdn.com/Events/Build/2012/3-058.
  4. McKeown, Michael, Hanu Kommalapati, and Jason Roth. "Disaster Recovery and High Availability for Windows Azure Applications." Msdn.microsoft.com. N.p., n.d. Web. 30 Mar. 2014. http://msdn.microsoft.com/en-us/library/windowsazure/dn251004.aspx.
  5. Mercuri, Marc, Ulrich Homann, and Andrew Townsend. "Failsafe: Guidance for Resilient Cloud Architectures." Msdn.microsoft.com. Microsoft, Nov. 2012. Web. 30 Mar. 2014. http://msdn.microsoft.com/en-us/library/jj853352.aspx.
  6. Haridas, Jai, and Brad Calder. "Windows Azure Storage Redundancy Options and Read Access Geo Redundant Storage." Microsoft MSDN. Microsoft, 11 Dec. 2013. Web. 30 Mar. 2014. http://blogs.msdn.com/b/windowsazurestorage/archive/2013/12/04/introducing-read-access-geo-replicated-storage-ra-grs-for-windows-azure-storage.aspx.

Sending to an Event Hub with JMS

MSDN Blogs - Fri, 12/19/2014 - 17:19

Newly posted on GitHub, a sender sample for use with the Qpid JMS AMQP 1.0 client. Note that the ability to set a partition key on a message requires a feature not in the currently released version of the client (0.30) but which should be available in the next release. The other ways of sending a message (locking all messages to one partition, or letting Service Bus round-robin messages across partitions) both work with the current released client.

There will be a matching receiver sample soon.

New Features of Power BI

MSDN Blogs - Fri, 12/19/2014 - 14:49

Today we are previewing new features for Power BI, our self-service business intelligence solution designed for everyone. Power BI reduces the barriers to deploying a business intelligence environment to share and collaborate on data and analytics from anywhere.

You can try out what is coming next here: http://www.powerbi.com/dashboards

More information available in our blog post here: http://blogs.msdn.com/b/powerbi/archive/2014/12/18/new-power-bi-features-available-for-preview.aspx

Alex.

  

 

SSIS Catalog and Project Deployment with PowerShell

MSDN Blogs - Fri, 12/19/2014 - 14:46

This may be my shortest blog post ever as I get ready to sign off from work for the next three weeks. But before I do, I wanted to share a quick script to automate deployment for SSIS 2012 (and 2014). I can’t take full credit for this script as the foundation was taken from Matt Masson’s post over on MSDN (HERE).

Overview

A brief summary of the script below:

  1. Checks for the catalog and creates it if it doesn’t exist
  2. Checks for a project folder in the catalog, creating it if it doesn’t exist
  3. Deploys the project from the ISPAC file
  4. Creates an environment (again, if it doesn’t already exist) in the project folder and then adds a reference to it from the project
  5. Adds a variable programmatically to the environment
  6. Configures a package parameter within the project to use the environment variable

Without further ado, the script is provided below:

Script

$ServerName = "localhost"
$SSISCatalog = "SSISDB"
$CatalogPwd = "P@ssw0rd1"
$ProjectFilePath = "C:\Dev\SSISDeploymentDemo\SSISDeploymentDemo\bin\Development\SSISDeploymentDemo.ispac"
$ProjectName = "SSISDeploymentDemo"
$FolderName = "Deployment Demo"
$EnvironmentName = "Microsoft"

# Load the IntegrationServices assembly
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.IntegrationServices")

# Store the IntegrationServices assembly namespace to avoid typing it every time
$ISNamespace = "Microsoft.SqlServer.Management.IntegrationServices"

Write-Host "Connecting to server ..."

# Create a connection to the server
$sqlConnectionString = "Data Source=$ServerName;Initial Catalog=master;Integrated Security=SSPI;"
$sqlConnection = New-Object System.Data.SqlClient.SqlConnection $sqlConnectionString
$integrationServices = New-Object "$ISNamespace.IntegrationServices" $sqlConnection

$catalog = $integrationServices.Catalogs[$SSISCatalog]

# Create the SSIS catalog if it does not exist
if (!$catalog)
{
    # Provision a new SSIS Catalog
    Write-Host "Creating SSIS Catalog ..."
    $catalog = New-Object "$ISNamespace.Catalog" ($integrationServices, $SSISCatalog, $CatalogPwd)
    $catalog.Create()
}

$folder = $catalog.Folders[$FolderName]
if (!$folder)
{
    # Create a folder in SSISDB
    Write-Host "Creating Folder ..."
    $folder = New-Object "$ISNamespace.CatalogFolder" ($catalog, $FolderName, $FolderName)
    $folder.Create()
}

# Read the project file, and deploy it to the folder
Write-Host "Deploying Project ..."
[byte[]] $projectFile = [System.IO.File]::ReadAllBytes($ProjectFilePath)
$folder.DeployProject($ProjectName, $projectFile)

$environment = $folder.Environments[$EnvironmentName]
if (!$environment)
{
    Write-Host "Creating environment ..."
    $environment = New-Object "$ISNamespace.EnvironmentInfo" ($folder, $EnvironmentName, $EnvironmentName)
    $environment.Create()
}

$project = $folder.Projects[$ProjectName]
$ref = $project.References[$EnvironmentName, $folder.Name]
if (!$ref)
{
    # Make the project refer to this environment
    Write-Host "Adding environment reference to project ..."
    $project.References.Add($EnvironmentName, $folder.Name)
    $project.Alter()
}

# Add a variable to our environment
# Constructor args: variable name, type, default value, sensitivity, description
$customerID = $environment.Variables["CustomerID"]
if (!$customerID)
{
    Write-Host "Adding environment variables ..."
    $environment.Variables.Add("CustomerID", [System.TypeCode]::String, "MSFT", $false, "Customer ID")
    $environment.Alter()
    $customerID = $environment.Variables["CustomerID"]
}

# Configure the package parameter to reference the environment variable
$package = $project.Packages["Package.dtsx"]
$package.Parameters["CustomerID"].Set(
    [Microsoft.SqlServer.Management.IntegrationServices.ParameterInfo+ParameterValueType]::Referenced,
    $customerID.Name)
$package.Alter()

Wrap-Up

I hope this script is useful in helping you automate your SSIS deployments so that they are as pain-free as possible. Feel free to drop any questions or comments you may have below.

Until next time and wishing you all a Happy Holidays!

Chris

SPSCC Artist and Lecture Series Brings Author Terry McMillan

SPSCC Posts & Announcements - Fri, 12/19/2014 - 14:34

South Puget Sound Community College welcomes best-selling author Terry McMillan as part of the college’s 2014-15 Artist and Lecture Series. McMillan will be on campus Thursday, Feb. 5 at 7:30 p.m. on the Kenneth J. Minnaert Center for the Arts Mainstage for “An Evening with Terry McMillan.”

McMillan has authored many books, including Disappearing Acts, How Stella Got Her Groove Back, and Waiting to Exhale (the latter two became award-winning Hollywood blockbusters). Several of her novels spent significant time at No. 1 on the New York Times’ Bestsellers list. McMillan’s latest book, Who Asked You?, takes an intimate look at the burdens and blessings of family and speaks to trusting your own judgment even when others don’t agree.

The South Puget Sound Community College Artist and Lecture Series brings together a diverse group of distinguished scholars, activists and artists under a common theme: “Reflections.” For 2014-2015, we celebrate the role and power of women. Our presenters are recognized nationally or internationally for their work. Our hope is that they will initiate courageous and purposeful discussion within our community concerning critical and contemporary issues.

Tickets for “An Evening with Terry McMillan” are $10 for general admission, and $8 for non-SPSCC students with ID (both prices include a $3 Washington Center service fee), and the event is free to all SPSCC staff, faculty and students. Tickets are available online at OlyTix.org or by calling The Washington Center for the Performing Arts box office at (360) 753-8586. For more information about the Artist and Lecture Series at South Puget Sound Community College, visit www.spscc.edu/ALSeries.

Little Rock Nine’s Ernest Green to Speak at SPSCC

SPSCC Posts & Announcements - Fri, 12/19/2014 - 14:22

Ernest Green, one of the storied Little Rock Nine, will visit South Puget Sound Community College as part of the 2014-15 Artist and Lecture Series.

Green will keynote the college’s Martin Luther King, Jr. banquet hosted in partnership between SPSCC and the Thurston Group of Washington State. The event takes place Saturday, Jan. 17 at 6 p.m. in the Student Union Building. Olympia Federal Savings is a presenting sponsor of the evening.

Green earned his high school diploma from Central High School in Little Rock, Arkansas. He and eight other black students were the first to integrate Central High, following the 1954 US Supreme Court decision in Brown v. Board of Education that declared segregation illegal. They later became known as the "Little Rock Nine." Green went on to receive his bachelor’s in social science and master’s in sociology from Michigan State University. He also received honorary doctorates from Michigan State University, Tougaloo College, and Central State University.

Tickets for the banquet are $40 per person and are available online at www.spscc.edu/MLKtix, or by calling (360) 596-5334. For more information about the Artist and Lecture Series at South Puget Sound Community College, visit www.spscc.edu/ALSeries.

Project 2013 CU page added–and Blog post roundup -

MSDN Blogs - Fri, 12/19/2014 - 14:01

I managed to get the second page of my Cumulative Update project live today – the Cumulative Updates for Microsoft Project 2013 (this covers both Standard and Professional) – now just the 2010 products to go.  That will take me a little longer as there is a bit more history there!  Please be patient.  It is live at Project 2013 Cumulative Updates and there is a link from both the main page and individual blog post pages.

That screenshot also highlights a recent post you might want to read if Firefox is your browser of choice. You might want to avoid spaces in your Project names - http://blogs.technet.com/b/projectsupport/archive/2014/12/18/project-online-opening-projects-from-project-center-with-firefox.aspx.

Other news from around the Project Blogosphere:

Alex Burton gave a good overview of the new Nav Bar and App Launcher - Customising the Nav Bar & App Launcher in Office 365. Sorry, the ‘Projects’ link still only takes you to the default …/sites/pwa though.

Nenad introduces some new terms - Crashing and Fast Tracking in MS PROJECT 2013

Looks like the other MVPs are either too busy in the run-up to the holidays – or have already switched off!  If there are any blog posts out there from the last few weeks that I should be noticing, then let me know!


New Mobile App Samples

MSDN Blogs - Fri, 12/19/2014 - 14:00

We’re happy to release a collection of mobile app SDK samples to help deliver mobile-first solutions with Dynamics CRM. These starter apps for iOS, Android, and Windows platforms are for mobile developers looking to get started with Dynamics CRM as well as for seasoned Dynamics CRM developers exploring mobile development.

The sample apps leverage the Dynamics CRM web service and showcase patterns and practices for data connectivity with OData (REST) and SOAP endpoints as well as standards-based authentication with OAuth in mobile scenarios. You can download the source code for the apps and related documentation here:

ActivityTracker is a reference scenario for the sample apps. ActivityTracker helps a user quickly search for contacts, access recent contacts and easily report ‘check-in’ activities in CRM. It is designed for sales and customer service professionals to quickly access and update information on the go. With the published source code, the app can be easily modified by developers for your own scenarios and requirements.

 

Please don’t forget the tremendous improvements we’ve made in our first party Dynamics CRM tablet apps and phone apps for iOS, Android and Windows devices that are highly customizable and ready-to-use out of the box. Custom apps built using our SDK complement our first party apps in improving mobile productivity.  These samples provide a great starting point for developing role-tailored advanced mobile solutions that target specific scenarios and personas. We would love to hear more about cool mobile solutions you deliver and we look forward to showcasing your mobile success stories.

Happy Holidays!

Set your default dashboard in Microsoft Dynamics CRM

MSDN Blogs - Fri, 12/19/2014 - 13:49

In Microsoft Dynamics CRM, each time you sign in to the system you’ll see the dashboard, which gives you easy-to-read charts and graphs that help you see how you and your team are doing with key metrics (also known as key performance indicators, or KPIs).

The system comes with several different dashboard layouts to help you highlight the data and performance metrics you’re most interested in.

Find a dashboard layout you like…

The best way to find one you like is to take a look at a few. After you settle on a favorite, you can make it your default dashboard so that you see it each time you first sign in.

  • To see the different dashboard layouts, choose the down arrow next to the name of the dashboard, and then select the layout you want to check out.

              

    …and then make it your default dashboard

When your system is set up, the system administrator picks a default dashboard layout that everyone sees when they first sign in. If you want to see a different dashboard, you can override the system-wide default.

  • Display the dashboard you want, and then choose Set as Default at the top of the screen.

             

More resources for creating or using dashboards

Your system administrator can set up custom dashboard layouts that anyone in your organization can use. Check out this eBook: Create or customize system dashboards for details.

If you want to stay in tune with what’s trending on social media, you may be able to add social listening charts and graphs to your own dashboard. (Whether you have access to Microsoft Social Listening depends on your license.) For ideas and steps, see this eBook: Microsoft Social Listening for CRM.

More resources for CRM training

Looking for more self-paced training for Microsoft Dynamics CRM? Check out the CRM Basics eBook for a quick run-down of essentials for new users. 

Or, if you’re responsible for training in your organization, you can use an editable Word version of the eBook as your starting point, and customize it to create your own instructor-led training. The Training and Adoption Kit also includes other great content that you can customize to fit your needs.

 

This article was adapted from eBook: CRM Basics

Shelly Haverkamp
Senior Content Developer / User BFF

Office 365 increases its malicious URL coverage

MSDN Blogs - Fri, 12/19/2014 - 13:39

Over the past two weeks, Office 365 (Exchange Online Protection) has improved its detection of spam, phishing, and malware by increasing the number of URLs in its reputation lists. Two months ago we were at 750,000 URLs; we are now at 1.7 million, more than double our previous coverage!

Second, we decreased the refresh interval, that is, the time between when we download a new list and when it is first replicated across the network. I don’t have the exact before-and-after numbers (I could be off by a wide margin), but it’s roughly this: we used to be at 30-45 minutes, and now we are at 15-17 minutes. We are going to shrink that window even further.

If you’re a customer, you’ll notice the change immediately by seeing fewer spam messages in your inbox. You may have even noticed it a couple of weeks ago.

Those aren’t the only changes coming, though:

  • Even more URL reputation lists.

    We’re at 1.7 million URLs, and we’re always checking to see if new lists can help us even more.


  • Reducing the replication time even more.

    We’ve made great strides in how fast we can distribute new lists, and we want to push out the data even faster to shrink the window between when a new malicious URL appears and when our customers are protected.


  • Changes to the email client to identify phishing and malware.

    One of the things we are working on is making the mail client (e.g., Outlook, Outlook Web Access) work better with the spam filter. One of the problems that companies face is that even when a message is detected as spam or phishing, users can still dig into their junk folders or spam quarantines, think the message is real but mistakenly marked as spam, and then take action on it. “Why is that message from Bank of America in my junk folder? I better check it out.”

    Well, it turns out there is something we can do. Two of the URL lists we use – Spamhaus’s Domain Block List (DBL) and the SURBL list – divide their entries into categories, and both have sub-categories for malware and phishing. We can make the mail client understand that the spam filter believes a message contains malware or phishing links, and then disable those links in the message (a rough sketch of the idea appears after this list).

    Outlook and OWA already disable links in messages that are marked as spam and sent to quarantine, but you can still rescue and inspect them. With this change, users can still go into their junk folders and quarantines and rescue a message, yet the mail client will keep its links disabled, as if to say, “We know you think it’s legitimate, but trust us – it’s not.”

    This is still in the early stages, but we think that getting your email client to work together with the mail filter will add another layer of security that better protects users and organizations.
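
To make the link-disabling idea concrete, here is a minimal sketch of the behavior described in the list above. It is not the actual Outlook/OWA implementation; the header name carrying the filter's verdict and the message format are invented for the example.

```python
import re
from email import message_from_string

# Hypothetical header; the real filtering verdict is stamped by the service
# in its own headers, not necessarily under this name.
VERDICT_HEADER = "X-Example-Filter-Verdict"

def disable_links_if_flagged(raw_message: str) -> str:
    """Return the HTML body with anchors neutralized when the filter's
    verdict says the message contains phishing or malware links."""
    msg = message_from_string(raw_message)
    verdict = (msg.get(VERDICT_HEADER) or "").lower()
    body = msg.get_payload()
    if verdict not in ("phish", "malware"):
        return body
    # Replace each <a href="..."> element with inert text so the link
    # cannot be clicked, even after the message is rescued from quarantine.
    return re.sub(
        r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>',
        r"\2 [link disabled: \1]",
        body,
        flags=re.IGNORECASE | re.DOTALL,
    )

sample = (
    f"{VERDICT_HEADER}: phish\n"
    "Content-Type: text/html\n"
    "\n"
    '<p>Please <a href="http://example.test/login">verify your account</a>.</p>\n'
)
print(disable_links_if_flagged(sample))
```

A real client would rewrite the rendered view rather than the raw MIME body, but the control flow is the same: trust the filter's category, and keep the links inert even if the user rescues the message.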


Those are some of the recent changes to Office 365, as of December 2014. As always, if you have problems or want to say “Hey, good to see this!”, let us know.

 

Resource Management - Configure the Synchronization Service to manage a New Resource

MSDN Blogs - Fri, 12/19/2014 - 13:15
Now that you have created a new resource, possibly with the assistance of the post Schema Management - Creating a New Resource, you may wish to synchronize this resource with an external data source. Before you can synchronize data from external data sources to the FIM Portal via the Synchronization Service, we need to do a few things: add the new resource to the Synchronization Filter, to allow external resources to be synchronized to the new resource in the FIM Portal; prepare the FIM MA...(read more)

The magic of Christmas, Kinect style

MSDN Blogs - Fri, 12/19/2014 - 13:00

Every December, British shoppers look forward to the creative holiday ad campaign from John Lewis, a major UK department store chain. It’s been a tradition for a number of years, is seen by millions of viewers in the UK annually, and won a coveted IPA Effectiveness Award in 2012. The retailer’s seasonal campaign traditionally emphasizes the joy of giving and the magic of Christmas, and this year’s ads continue that tradition, with a television commercial that depicts the loving relationship between a young boy and his pet penguin, Monty.

[Video] The 2014 Christmas advertisement from John Lewis tells the story of a boy and his penguin—and the magic of giving just the right gift.

But the iconic British retailer has added a unique, high-tech twist to the 2014 campaign: Monty’s Magical Toy Machine, an in-store experience that uses the Kinect for Windows v2 sensor to let kids turn their favorite stuffed toy into an interactive 3D model. The experience deftly plays off the TV ad, whose narrative reveals that Monty is a stuffed toy that comes alive in the boy’s imagination.

Monty’s Magical Toy Machine experience, which is available at the John Lewis flagship store on London’s Oxford Street, plays to every child’s fantasy of seeing a cherished teddy bear or rag doll come to life—a theme that runs through children’s classics from Pinocchio to the many Toy Story movies. The experience has been up and running since November 6, with thousands of customers interacting with it to date. Customers have until December 23 to enjoy the experience before it closes.

The toy machine experience was the brainchild of Microsoft Advertising, which had been approached by John Lewis to come up with an innovative, technology-based experience based on the store’s holiday ad. “We actually submitted several ideas,” explains creative solutions specialist Art Tindsley, “and Monty’s Magical Toy Machine was the one that really excited people. We were especially pleased, because we were eager to use the new capabilities of the Kinect v2 sensor to create something truly unique.”

John Lewis executives loved the idea and gave Microsoft the green light to proceed. “We were genuinely excited when Microsoft presented this idea to us,” says Rachel Swift, head of marketing for the John Lewis brand. “Not only did it exemplify the idea perfectly, it did so in a way that was both truly innovative and charming.”

Working with the John Lewis team and creative agency adam&eveDDB, the Microsoft team came up with the design of the Magical Toy Machine: a large cylinder, surrounded by three 75-inch display screens, one of which is topped by a Kinect for Windows v2 sensor. It is on this screen that the animation takes place.



The enchantment happens here, at Monty's Magical Toy Machine. Two of the enormous display screens can be seen in this photo; the screen on the left has a Kinect for Windows v2 sensor mounted above and speakers positioned below.

The magic begins when the child’s treasured toy is handed over to one of Monty’s helpers. The helper then takes the toy into the cylinder, where, unseen by the kids, it is suspended by wires and photographed by three digital SLR cameras. The cameras rotate around the toy, capturing it from every angle. The resulting photos are then fed into a customized computer running Windows 8.1, which compiles them into a 3D image that is projected onto the huge screen, much to the delight of the toy’s young owner, who is standing in front of the display. The whole process takes less than two minutes.



Suspended by wires, the toy is photographed by three digital SLR cameras (two of which are visible here) that rotate around the toy and capture its image from every angle.

The Kinect for Windows v2 sensor then takes over, bringing the toy’s image to life by capturing and responding to the youngster’s gestures. When a child waves at the screen, their stuffed friend wakens from its inanimate slumber—magically brought to life and waving back to its wide-eyed owner. Then, when the child waves again, their toy dances in response, amazing and enchanting both kids and parents, many of whom cannot resist dancing too.

The Kinect for Windows SDK 2.0 plays an essential role in animating the toy. Having rigged the toy’s 3D image with a skeleton, the developers used the SDK to identify key sequences of movements, enabling the toy to mimic the lifelike poses and dances of a human being. Because the actions map to those of a human figure, Monty’s Magical Toy Machine works best on toys like teddy bears and dolls, which have a bipedal form like that of a person. It also works best with soft-textured toys, whose surface features are captured more accurately in the photos.
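
As a rough illustration of that retargeting step (this is not Kinect SDK code), the sketch below copies a few tracked joint angles onto correspondingly named bones of a toy rig; the joint names, the single-angle rotation representation, and the clamping range are all invented for the example.

```python
# Illustrative only: map tracked human joint rotations onto a toy's rig.
# The real pipeline consumes Kinect body-tracking data and drives a full
# 3D animation system; here a pose is just a dict of angles in degrees.

HUMAN_TO_TOY_BONES = {
    "ShoulderLeft": "toy_arm_left",
    "ShoulderRight": "toy_arm_right",
    "HipLeft": "toy_leg_left",
    "HipRight": "toy_leg_right",
    "SpineBase": "toy_body",
}

def retarget_pose(tracked_joints: dict) -> dict:
    """Copy each tracked joint's angle to the matching toy bone, clamping
    to a range that stubby toy limbs can plausibly reach."""
    toy_pose = {}
    for joint, angle in tracked_joints.items():
        bone = HUMAN_TO_TOY_BONES.get(joint)
        if bone is None:
            continue  # joints with no toy equivalent (fingers, etc.) are ignored
        toy_pose[bone] = max(-60.0, min(60.0, angle))
    return toy_pose

# One frame of (invented) tracked angles, roughly a right-arm wave.
frame = {"ShoulderRight": 85.0, "ShoulderLeft": -10.0, "SpineBase": 2.0}
print(retarget_pose(frame))
```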

The entire project took two months to build, reports Tindsley. “We began by scanning a real toy with Kinect technology, mapping it to create a surface representation (a mesh), then adding texture and color. We then brought in a photogrammetry expert who created perfect 3D images for us to work with,” Tindsley recalls.

Then came the moment of truth: bringing the image to life. “In the first trials, it took 12 minutes from taking the 3D scans of the toy to it ‘waking up’ on the screen—too long for any eager child or parent to wait,” says Tindsley. “Ten days later, we had it down to around 100 seconds. We then compiled—read choreographed and performed—a series of dance routines for the toy, using a combination of Kinect technology and motion-capture libraries,” he recounts.

[Video] A teddy bear named Rambo Ginge comes to life through Monty's Magical Toy Machine, and, as the video shows, even adults are enraptured to see their priceless toys come alive.

None of this behind-the-scenes high tech matters to the children, who joyfully accept that their favorite stuffed toy has somehow, miraculously, come to life. Their looks of surprise and wonder are priceless.

And the payoff for John Lewis? Brand loyalty and increased traffic during a critical sales period. As Rachel Swift notes, “The partnership with Microsoft allowed us to deliver a unique and memorable experience at a key time of year. But above all,” she adds, “the reward lies in surprising and delighting our customers, young and old.” Just as Monty receives the perfect Christmas gift in the TV ad, so, too, do the kids whose best friends come to life before their wondering eyes.

The Kinect for Windows Team
