Feed aggregator

Running cross-browser tests with Coded UI and Selenium integration

MSDN Blogs - 1 hour 47 min ago

I believe an excellent reason to use Selenium integrated with Coded UI for automated tests is the incredible speed at which the tests run. On top of that, we also get the benefit of being able to run them in other browsers such as Firefox, Chrome, and so on.

To record the test scenario I will use the Selenium IDE plug-in for Firefox. I could, however, use the Coded UI Test Builder itself to record and then have the test run in another browser.

After installing the Selenium components for Coded UI Cross Browser Testing, I will create a new project of type Coded UI Test Project, since my intention is to use MSTest and this project type already brings in some of the required references. If you intend to use xUnit or NUnit, you could use a Class Library instead. NUnit can actually be simpler for anyone who wants to record tests with Selenium, because you can export directly from the Selenium IDE to C# NUnit and run it in Visual Studio with little effort.

 

 

As soon as the project is created, the window for recording scenarios with the Test Builder opens; I will ignore it and cancel.

 

 

The next step is to add the Selenium assemblies and drivers to the Coded UI project using the NuGet package manager and add their references to the class. I installed only the Chrome and IE drivers, because the Firefox driver is already part of the core framework and does not need to be installed separately.

 

 

References:

using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.IE;
using OpenQA.Selenium;


Since I am using the default class created along with the Coded UI project, it has a commented-out section called “Additional test attributes” that I will use to prepare the driver for my tests. The TestInitialize method runs before each test method. The nice thing about creating the driver here is that it is centralized in a single place for all the tests in this class. Since I will use the same browser and URL for these tests, I will also place the call to the site under test here and perform the navigation.

 

#region Additional test attributes

// You can use the following additional attributes as you write your tests:

// Use TestInitialize to run code before running each test
[TestInitialize()]
public void MyTestInitialize()
{
    // To generate code for this test, select "Generate Code for Coded UI Test"
    // from the shortcut menu and select one of the menu items.
    driver = new ChromeDriver();
    baseURL = "https://blogs.msdn.microsoft.com/";
    driver.Navigate().GoToUrl(baseURL);
    driver.Manage().Window.Maximize();
    driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(30));
}

// Use TestCleanup to run code after each test has run
[TestCleanup()]
public void MyTestCleanup()
{
    // To generate code for this test, select "Generate Code for Coded UI Test"
    // from the shortcut menu and select one of the menu items.
    try
    {
        driver.Quit();
    }
    catch (Exception)
    {
        // Ignore errors if unable to close the browser
    }
}

#endregion

 

The TestCleanup method runs after each test. In this case I am cleaning up the driver after each run.

When I ran the default test method, I got the following error message:

 

Result Message: Initialization method TestesAutomatizados.Selenium.TestandoComSelenium.MyTestInitialize threw exception. OpenQA.Selenium.DriverServiceNotFoundException: OpenQA.Selenium.DriverServiceNotFoundException: The chromedriver.exe file does not exist in the current directory or in a directory on the PATH environment variable. The driver can be downloaded at http://chromedriver.storage.googleapis.com/index.html..


This happened because the tests are not being run from the project folder. To fix it, I had to “tell” the test framework to copy chromedriver.exe, which was added to the project when I installed the NuGet package, to the folder where the tests are executed. I used the DeploymentItem attribute on the test class.


namespace TestesAutomatizados.Selenium
{
    /// <summary>
    /// Summary description for CodedUITest1
    /// </summary>
    [CodedUITest]
    [DeploymentItem("chromedriver.exe")]
    public class TestandoComSelenium …


Now the tests can run! Since we have not recorded any scenario yet, only the browser will open at the default address set in MyTestInitialize. So let's record our first scenario!

I will use the Selenium IDE to record the test scenario. After installation, opening the IDE shows the following screen:

 

 

The goal of my test scenario is to open my blog, go into one of my posts, and leave a comment. Since we have already programmed the browser to open at the right address, we will record from that point forward. After opening the browser and entering the application, I will open the Selenium IDE, start recording, and perform the actions as if I were browsing as a regular user.

Steps:

  1. Open the website
  2. Click a post
  3. Fill in personal information (name, e-mail, website)
  4. Write a comment
  5. Publish the comment

This is the result of the recording in the Selenium IDE:

 

 

Now let's export this test case: File –> Export Test Case As… –> C# / NUnit / WebDriver:

 

 

I am not using NUnit, but I will reuse part of the generated class in the test we configured earlier. Keeping only the element calls from the class exported by the Selenium IDE and adding them to my Coded UI test method, it ended up like this:

[TestMethod]
[TestCategory("UI")]
public void abrir_post_e_fazer_um_comentario()
{
    driver.Navigate().GoToUrl(baseURL + "/luizmacedo/");
    driver.FindElement(By.LinkText("Livro: Desenvolvimento efetivo na plataforma Microsoft")).Click();
    driver.FindElement(By.Id("author")).Clear();
    driver.FindElement(By.Id("author")).SendKeys("Luiz");
    driver.FindElement(By.Id("email")).Clear();
    driver.FindElement(By.Id("email")).SendKeys("luizmacedo@outlook.com");
    driver.FindElement(By.Id("url")).Clear();
    driver.FindElement(By.Id("url")).SendKeys("https://blogs.msdn.microsoft.com/luizmacedo");
    driver.FindElement(By.Id("comment")).Clear();
    driver.FindElement(By.Id("comment")).SendKeys("Meu primeiro comentário utilizando testes automatizados!!!");
    driver.FindElement(By.Id("submit")).Click();
}


The test scenario is finished and the Selenium integration is working. Now just run the test and wait for the result. In this case I am using Google Chrome, but it could just as well run in Firefox, IE, or any other browser that has a Selenium WebDriver available.
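To make that cross-browser idea concrete, here is a minimal sketch (not from the original post) of how the driver creation in MyTestInitialize could be parameterized. The "browser" value and the DriverFactory class are hypothetical names, and the sketch assumes the Chrome, Firefox, and IE driver packages from NuGet are installed.

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.IE;

// Minimal sketch: choose the WebDriver implementation from a test setting,
// so the same tests can run in Chrome, Firefox, or IE.
public static class DriverFactory
{
    public static IWebDriver Create(string browser)
    {
        switch (browser)
        {
            case "firefox":
                return new FirefoxDriver();
            case "ie":
                return new InternetExplorerDriver();
            default:
                return new ChromeDriver(); // same driver used in MyTestInitialize above
        }
    }
}

With something like this, MyTestInitialize could call DriverFactory.Create(...) instead of newing up ChromeDriver directly, and the browser choice could come from a test settings file or an environment variable.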



 

Test executed successfully!

Our blog has moved!

MSDN Blogs - 2 hours 6 min ago

Hi! If you want to keep receiving news about Office 365 in the Danish education sector, you can follow along on this site.

You can also sign up for our newsletter on the site and receive news, events, and much more directly in your inbox every month.

 

Announcing Dv2-Series Virtual Machines in US Gov – Iowa

MSDN Blogs - 3 hours 14 min ago

In the last few weeks, we have made Dv2-Series VMs available in the US Gov – Iowa region for Azure Government. This means that both Azure Government regions now have access to D-Series VMs.  US Gov – Virginia has the original D-Series VMs.

Dv2-Series VMs (a follow-on to the original D-Series VMs) feature similar memory and hard disk configurations, but more powerful CPUs. Based on the latest generation 2.4 GHz Intel Xeon E5-2673 v3 (Haswell) processors, Dv2 VMs are designed to offer an optimal configuration for running workloads that require increased processing power and fast local disk I/O.

Currently D1 through D14 are enabled in both US Gov – Iowa (Dv2-Series) and US Gov – Virginia (D-Series). The largest (D15) instance type is not yet available.  We are working on enabling this largest instance type in the coming months.

To summarize the virtual machine types currently available in Azure Government:

Region               A-Series    D-Series
US Gov – Iowa        A1 to A7    D1 to D14 (v2)
US Gov – Virginia    A1 to A7    D1 to D14 (v1)

For more information, see the Image Specification Details in the Azure Public pricing details page.

Vendor catalogs in Dynamics AX

MSDN Blogs - 4 hours 2 min ago
Vendor catalogs import

In Microsoft Dynamics AX, purchasing professionals can create and maintain catalogs for company employees to use when they order items and services for internal use. When you create a procurement catalog you can add the items and services that you make available to employees, either by importing the vendor catalog data or by adding the vendor catalog data to the product master manually. If you use the catalog import process the vendor can send you the product catalog data and you can upload it by using the Microsoft Dynamics AX client.

The product data that the vendor submits to you, in the form of a catalog maintenance request (CMR) file, must be in XML file format. The CMR file should contain all of the details for the products that the vendor supplies to your company.

Importing vendor catalog data

To import vendor catalog data, you must complete the following tasks:

  1. Set up a project in the data management workspace. This is where you define your data mapping rules.
  2. Set up a procurement category hierarchy, and assign your vendors to procurement categories. If you use commodity codes, add the commodity codes to the procurement categories.
  3. Configure the vendor for catalog import.
  4. Configure workflow for catalog import.
  5. Create a CMR file template and share this with your vendor.
  6. Create a vendor catalog. The catalog maintenance request (CMR) files that you receive from your vendor are grouped in this catalog.
  7. Upload the CMR file.
  8. Review, approve, or reject the products in the vendor catalog. Details that can be reviewed include the product name, description, pricing, or order quantity requirements. Approved products are added to the product master and are released to the selected legal entities. Only approved products can be added to the procurement catalog. The products are now automatically mapped to the procurement categories in AX.
Overview

For the current version of Microsoft Dynamics AX, we are using the Data Import/Export Framework and a predefined composite entity (you can read more on data entities here).

The product data that the vendor submits should still come in the form of a catalog maintenance request (CMR) file, and it must be in XML format.

Set up the system to import vendor catalog data

You will need to set up the system to support the vendor catalog import scenario, by creating an import job for vendor catalogs.

  1. On the main dashboard, click the Data management tile to open the data management workspace.
  2. Click the Import tile to create a new data project.
  3. Enter a valid job name, source data format, and entity name.
    Note that for the source data format, only XML-element or XML-attribute is supported. The entity name should be “Vendor catalogs”.

    Upload an XML mapping data file for the vendor catalogs data entity.
    The XML mapping data file defines which fields you expect to import from your vendor CMR files. These fields will appear as your source, and you will need to map them to the internal Dynamics AX schema, which is reflected by the staging schema.

In this way you can map the XML schema template that you provide to your vendors to the internal schema defined in Dynamics AX. Only one mapping can be defined at a time.

The internal schema we currently support is shown below. It reflects a catalog maintenance request entity that includes product detail entities with the associated product pricing, order quantity requirements, and detailed product description entities.

This is how the initial XML file should look:

<?xml version="1.0" encoding="utf-8"?>
<Document>
  <CatVendorCatalogMaintenanceRequestEntity>
    <UploadDateTime></UploadDateTime>
    <CatVendorProductCandidateEntity>
      <ProductCategoryHierarchyName></ProductCategoryHierarchyName>
      <ProductCategoryName></ProductCategoryName>
      <ActionType></ActionType>
      <ProductNumber></ProductNumber>
      <ProductSubtype></ProductSubtype>
      <SearchName></SearchName>
      <BarCode></BarCode>
      <ProductColorId></ProductColorId>
      <ProductConfigurationId></ProductConfigurationId>
      <ProductSizeId></ProductSizeId>
      <ProductStyleId></ProductStyleId>
      <CatVendorProductCandidatePurchasePriceEntity>
        <CurrencyCode></CurrencyCode>
        <UnitSymbol></UnitSymbol>
        <Price></Price>
      </CatVendorProductCandidatePurchasePriceEntity>
      <CatVendorProductCandidateSalesPriceEntity>
        <CurrencyCode></CurrencyCode>
        <UnitSymbol></UnitSymbol>
        <Price></Price>
        <SuggestedPrice></SuggestedPrice>
      </CatVendorProductCandidateSalesPriceEntity>
      <CatVendorProductCandidateDefaultOrderSettingsEntity>
        <UnitSymbol></UnitSymbol>
        <LeadTime></LeadTime>
        <OrderQuantityMultiples></OrderQuantityMultiples>
        <MaximumOrderQuantity></MaximumOrderQuantity>
        <MinimumOrderQuantity></MinimumOrderQuantity>
        <StandardOrderQuantity></StandardOrderQuantity>
      </CatVendorProductCandidateDefaultOrderSettingsEntity>
      <CatVendorProductCandidateTranslationEntity>
        <ProductName></ProductName>
        <Description></Description>
        <LanguageId></LanguageId>
      </CatVendorProductCandidateTranslationEntity>
    </CatVendorProductCandidateEntity>
  </CatVendorCatalogMaintenanceRequestEntity>
</Document>

In the mapping file you have to include the UploadDateTime field at the level of CatVendorCatalogMaintenanceRequestEntity (see the example above). This field is used internally to track the upload time for the catalog maintenance request, and it should not be included in the actual CMR files.

You can also include the ActionType field at the level of CatVendorProductCandidateEntity. This field can be used to explicitly specify the type of action you want to take for the product: add, update, or delete. The values supported for this field in the CMR file are Add, Update, and Delete.

If you decide to use your own schema, you will get warnings when you upload the mapping file:

Go to View map, where you will see the vendor catalog composite entity as well as the individual entities defined in the composite entity: CatVendorCatalogMaintenanceRequestEntity, CatVendorProductCandidateEntity, CatVendorProductCandidatePurchasePriceEntity, CatVendorProductCandidateSalesPriceEntity, CatVendorProductCandidateDefaultOrderSettingsEntity, and CatVendorProductCandidateTranslationEntity.

Make sure that each entity is added, and that all errors are fixed.

You can click each entity data card to set up, review, or modify field maps, and to set up XSLT-based transforms that must be applied to inbound data.
  • Source – These are the inbound CMR files. Typical data formats include CSV, XML, and tab-delimited; in this scenario it is the XML schema you use for your CMR files.
  • Staging – These are auto-generated tables that map very closely to the data entity. When “data management enabled” is true, staging tables are generated to provide intermediary storage. This enables the framework to do high-volume file parsing, transformation, and some validations.

Note: Mandatory fields, marked with a red star, must always be mapped to a field.

To map one of your own fields to an existing field in the fixed schema, drag and drop from the source field to the destination field and then click Save.

For example, here we use VendorProductCode instead of ProductNumber.

You can proceed when the import job mapping icon shows no error.

Note: The mapped fields are case-sensitive, which means the CMR files must use the exact field names.

Also, any field defined in the mapping is expected to be present in the input, even if it is not mandatory. For example, if BarCode was present in the initial mapping file, we expect to see it in the XML file even if it doesn't contain any value.

  • Click Save. At this point your system is configured to support importing vendor catalog files. Currently the product-related data we support is: product name, description, translations, pricing, and order quantity requirements.
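Since the mapped field names are case-sensitive and every mapped field must be present in the CMR file even when empty (see the notes above), a small pre-check before uploading can save a failed run. The following is an illustration only, not part of the product; the field list and file path are hypothetical examples.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Minimal sketch: verify that every element name from the mapping appears in a CMR file.
// The comparison is ordinal (case sensitive), matching the behavior described above.
class CmrPreCheck
{
    static void Main()
    {
        // Hypothetical subset of the mapped field names.
        string[] mappedFields = { "ProductCategoryHierarchyName", "ProductCategoryName", "ProductNumber", "BarCode" };

        XDocument cmr = XDocument.Load("cmr-file.xml"); // example path
        var presentNames = new HashSet<string>(cmr.Descendants().Select(e => e.Name.LocalName));

        foreach (string field in mappedFields.Where(f => !presentNames.Contains(f)))
        {
            Console.WriteLine("Missing mapped element: " + field);
        }
    }
}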

 

NOTE: You should not import data directly from the data management workspace. The vendor catalog maintenance request files should be imported in the context of a vendor catalog, which also enforces a review and approval process.

 

Set up a procurement category hierarchy

You won’t be able to import products from categories where the vendor is not approved for procurement. To approve the vendor:

  1. Go to Procurement and sourcing > Procurement categories.
  2. Select the category. Add the vendor to the list of approved vendors in the Vendors fast tab.
Configure the vendor for catalog import

To import catalogs for a particular vendor, the vendor has to be enabled for catalog import. There are two ways to achieve this:

  • For a specific vendor, go to Procurement > Setup > Configure vendor for catalog import.
  • If you don't do this, you will be prompted to enable the vendor for catalog import when you create a new catalog for that vendor.
Set up workflow

After a catalog maintenance request (CMR) file has been successfully uploaded, the purchasing professional can review the product details in the file. The vendor indicates whether the product is new, modified, or must be deleted. Information about the product pricing, product descriptions, product attributes, and order quantity requirements is also included in the CMR file. As an approver, you can indicate whether products are made available to selected legal entities and approve or reject the products in the file. Approved products are added to the product master and are released to the selected legal entities.

To support the approval process we are leveraging another powerful feature of AX: workflow processing.

You can set up rules for automated approval of vendor catalogs and specify one or more reviewers if manual approval is required. To enable the vendor catalog import functionality, you must set up two types of workflows: Catalog import product approval (line level) and Catalog import approval (catalog level). You should always define both workflows, because Catalog import approval always relies on Catalog import product approval to approve products, either manually or automatically.

Catalog import product approval

This type of workflow processes all the products that are included in the CMR file. Completion of all of the individual line-level workflows completes the overall catalog import workflow. To create a product approval workflow:

  1. Click Procurement and sourcing > Setup > Procurement and sourcing workflows.
  2. On the Action Pane, click New.
  3. Select Catalog import product approval and then click Create workflow.
General setup

The common catalog import product approval workflow should look like this:

Set up approvers
  1. Double-click the Catalog import product approval element.
  2. Click the Step 1 element.
  3. Click Assignment on the Action Pane.
  4. The simplest assignment would be User -> Admin.
Set up automatic actions

Automatic actions allow the workflow framework to automatically approve or reject the products in the imported vendor catalog that meet certain conditions. To set up an automatic action:

  1. Select the Catalog import product approval element.
  2. Click Automatic actions in the action pane.
  3. Select the Enable automatic actions check box.
  4. Set up the conditions for automatic approval or rejection. There is one type of condition I would like to focus on: you can specify Product candidate.Price delta as a parameter of the automatic action. The price delta is calculated as a ratio: (new price – old price) / old price. So if you want to make sure the price delta stays within 20%, set the condition to Product candidate.Price delta <= 0.2 (see the sketch after this list).
  5. Select the type of automatic action (approve or reject).
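As a quick illustration of that ratio (example numbers only, not taken from the product), a price moving from 50.00 to 57.50 gives a delta of 0.15, which is within a 0.2 threshold:

using System;

// Minimal sketch of the price delta check behind the automatic action:
// delta = (new price - old price) / old price, compared against a threshold.
class PriceDeltaExample
{
    static void Main()
    {
        decimal oldPrice = 50.00m;   // example values
        decimal newPrice = 57.50m;
        decimal threshold = 0.2m;    // "within 20%"

        decimal delta = (newPrice - oldPrice) / oldPrice;   // 0.15
        Console.WriteLine(delta <= threshold ? "auto-approve" : "manual review");
    }
}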

You can also set up automatic approval with a condition on whether IsAutomatedApproval is enabled.

The IsAutomatedApproval setting is controlled directly on the vendor catalog page. On the Action Pane, on the Catalogs tab, in the Maintain group, click Enable automated approval. This sets the Automated approval field to Enabled.

Catalog import approval

This type of workflow is used for setting up the rules for approving the whole catalog. When you configure this workflow, you can reference the Catalog import product approval workflow that you configured earlier. The common setup would be to automatically approve the whole catalog import after all the products have been approved:

In the properties of the Vendor catalog lines (products) element you need to reference the catalog import product approval that you created earlier.

Import a catalog from a vendor

First you have to set up a catalog for your vendor. The catalog maintenance request (CMR) files that you receive from your vendor are grouped in this catalog. After you set up the catalog, you can upload CMR files for the vendor. You can also view the details and the event log for new or existing CMR files that have been added to the vendor catalog.

Use this procedure to create a new vendor catalog. This is the catalog to which you upload a catalog maintenance request (CMR) file for a vendor. If you delete the vendor catalog, you can no longer import CMR files to it. If you still want to import catalog data from the vendor, you must create a new catalog for the vendor.

  1. Go to Procurement and sourcing > Catalogs > Vendor catalogs and create a new catalog.
  2. In the New catalog form, in the Vendor field, select the vendor that you are creating the catalog for.
  3. Enter a name and description for the catalog, and then click Save.

Upload a catalog maintenance request (CMR) file: go to the Catalog file history tab and click Upload file.

  1. In the Upload file dialog box, browse to the location of the CMR file that you created. The maximum file size allowed to upload is 10 MB per file.
  2. Enter an effective date and an expiration date. These dates define the date range in which the pricing for the products in the CMR file is valid. When trade agreements are created, they will include these dates.
  3. Select one of the following update types for the CMR file:
    1. Add updates to the existing vendor catalog – Add product updates to an existing catalog.
    2. Replace the existing vendor catalog with a new catalog – Add a new catalog, or replace an existing catalog with a new catalog.
  4. If you are creating a new catalog to replace an existing catalog, all items and services that are in that catalog are overwritten by the matching items and services in the new CMR file. Any existing items and services that are not in the new CMR file are deleted. Any items and services that are included in the new CMR file, but that are not included in the existing catalog, are treated as new products.
  5. Select the category hierarchy type that you want to associate the product data with. You can select either the procurement category hierarchy or the commodity code hierarchy.
  6. Click OK to start the upload process for the CMR file.

When you upload the CMR file, the file is validated against the selected category hierarchy type and the categories the vendor is approved for in procurement. If validation fails because some of the data is invalid, the catalog upload process will fail. You can view the details of the CMR file upload status in the event log. After the CMR file is uploaded successfully, you can review the details of the CMR file in the vendor catalog. The statuses that can appear are:

  • Start processing – The CMR file has been submitted for import.
  • Start catalog upload – The CMR file has been submitted and is in process.
  • Catalog upload failed – An error occurred after the file was submitted for processing, and the CMR file was not imported.
  • Catalog upload complete – The CMR file was successfully uploaded.
  • Invalid CMR – The CMR file does not comply with the current schema for catalog import. If the CMR file was created by using an outdated schema, the CMR file must be recreated by using the current schema. You can also use advanced troubleshooting to narrow down the cause of the failure.
  • Pending approval – The CMR file is in review and waiting for approval by the purchasing agent.
  • Product rejections – Products in the CMR file were rejected for import into the procurement catalog. Product rejections are indicated by a warning during the import process, and the file continues to be processed for products that are approved.
  • Approval complete – The purchasing professional has completed the review of the products and images that are contained in the CMR file.
  • Finish processing – Approved products have been added to the product master in Microsoft Dynamics AX and trade agreement journals have been created. The data in the CMR file has been passed to the archive folder for the vendor.

After the products in the CMR file are reviewed and approved, you can release the approved products to the legal entities in which the vendor is authorized to supply products, and appropriate trade agreements can be created.

 

Validate and approve imported catalogs
  1. Click Procurement and sourcing > Common > Catalogs > Vendor catalogs. On the Vendor catalogs list page, double-click a vendor catalog.
  2. In the Update catalog form, on the Catalog file history fast tab, select a CMR file, and then click Details.
  3. In the Catalog approval page form, review the product details for the products that are included in the CMR file. You can use the Item status field to view all items or only items that have a selected status. Select one of the following options:
    1. All items – View all products that are included in the CMR file. This is the default setting.
    2. Add – View only new products. After a new product is approved, it is added to the product master. When the product is released to the legal entity, the corresponding trade agreement is created and the product appears on the Released products list page.
    3. Update – View only existing products that are being modified. After a product is approved, the modifications are applied to the existing product in the product master. New trade agreements are created only if there are price changes for the product.
    4. Delete – View only products that are no longer offered by the vendor and should be deleted. After the product request is approved, the product is no longer available for purchase.
  4. To filter product changes by legal entity, select a legal entity in the Buying legal entity field. You can view the price, name and description, and purchase quantity changes by legal entity.
    Note: A legal entity appears in the list only if vendor products that have been imported are approved and released to the legal entity. If no products have been released to a legal entity for the vendor, the list is empty.
  5. In the lower pane, on the Price tab, review the pricing for a selected product. The current price and new price are displayed in the currency and unit of measure in which the product is offered.
  6. On the Name and description tab, review the name and description of a selected product in specified languages.
  7. On the Purchase quantity tab, review changes to the purchase order quantity requirements and purchase lead time for the selected product.
  8. In the upper pane, select the products that you want to approve, and in the workflow message bar, click Actions > Approve. Approved products are added to the product master.
  9. In the upper pane, select the products that you want to reject, and in the workflow message bar, click Actions > Reject. Products that are rejected are not added to the product master.
  10. Click Release products to legal entity, and then in the Release products to legal entity form, select the legal entities in which the vendor’s approved products will be available. Corresponding trade agreements will be created for the products in these legal entities. Products must be added to the product master and be available in the legal entities before they can appear in the procurement catalog.
    Note: If you do not release the approved products to the legal entities by using the Catalog approval page form, you can release approved products to legal entities by using either the Vendor catalogs list page or the Update catalog form.

 

Advanced troubleshooting

We have a first level of logs to see if anything failed during the import of a file.

However, we don't yet expose all of the detailed information about data management framework processing here.

For this, there is now a more advanced way to see details when an import fails: look at the execution details of the import job in the data management workspace.

For the failed job, I can drill through and see more details.

By going further into View staging data and viewing the data, you can identify that there is a problem, in this case with the product data:

Signin issues with Visual Studio Team Services – 5/25 – Investigating

MSDN Blogs - 4 hours 23 min ago

Initial Update: Wednesday, 25 May 2016 10:23 UTC

We are actively investigating issues with Visual Studio Team Services. A small subset of users may face intermittent failures while accessing Git repositories.

Workaround: Customers can use a personal access token (PAT) to access Git repositories.

We are working to resolve this issue and apologize for any inconvenience.

Sincerely,
Zainudeen

 

 

 

Oracle Client for SQL Server

MSDN Blogs - 4 hours 49 min ago

Oracle Client for SQL Server

For each Oracle client version, the supported Windows versions and the documentation / download links are listed below.

Oracle 12c for Microsoft Windows x64 (64-bit)

  Supported Windows versions:
  • Windows Server 2008 x64 and Windows Server 2008 R2 x64 – Standard, Enterprise, Datacenter, Web, and Foundation editions
  • Windows 7 x64 – Professional, Enterprise, and Ultimate editions
  • Windows 8 x64 and Windows 8.1 x64 – Pro and Enterprise editions
  • Windows Server 2012 x64 and Windows Server 2012 R2 x64 – Standard, Datacenter, Essentials, and Foundation editions

  Client Quick Installation Guide, 12c Release 1 (12.1) for Microsoft Windows x64 (64-bit):
  http://docs.oracle.com/database/121/NXCQI/toc.htm

  Oracle Database 12c Release 1 (12.1.0.2.0):
  http://www.oracle.com/technetwork/database/enterprise-edition/downloads/database12c-win64-download-2297732.html

Oracle 11g R2 for Microsoft Windows x64 (64-bit)

  Supported Windows versions:
  • Windows Server 2003 – all x64 editions
  • Windows Server 2003 R2 – all x64 editions
  • Windows XP Professional x64
  • Windows Vista x64 – Business, Enterprise, and Ultimate editions
  • Windows Server 2008 x64 – Standard, Enterprise, Datacenter, and Web editions
  • Windows Server 2008 R2 x64 – Standard, Enterprise, Datacenter, Web, and Foundation editions
  • Windows 7 x64 – Professional, Enterprise, and Ultimate editions
  • Windows 8 x64 – Pro and Enterprise editions
  • Windows 8.1 x64 – Pro and Enterprise editions
  • Windows Server 2012 x64 and Windows Server 2012 R2 x64 – Standard, Datacenter, Essentials, and Foundation editions

  Client Quick Installation Guide, 11g Release 2 (11.2) for Microsoft Windows x64 (64-bit):
  http://docs.oracle.com/cd/E11882_01/install.112/e49700/toc.htm

  Oracle Database 11g Release 2 (11.2.0.1.0):
  http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-win64soft-094461.html

Oracle 10g R2 for Microsoft Windows x64 (64-bit)

  Supported Windows versions:
  • Windows Server 2003, Standard x64 Edition
  • Windows Server 2003, Enterprise x64 Edition
  • Windows Server 2003, Datacenter x64 Edition
  • Windows XP Professional x64 Edition
  • Windows Vista x64, Service Pack 1 – Business, Enterprise, and Ultimate editions
  • Windows Server 2008 x64 – Standard, Enterprise, Datacenter, Web, Standard without Hyper-V, Enterprise without Hyper-V, and Datacenter without Hyper-V editions
    (The specific operating system components that are not supported are Windows Server 2008 x64 Hyper-V and Server Core.)

  Database Client Installation Guide for Microsoft Windows (x64):
  http://docs.oracle.com/cd/B19306_01/install.102/b15684/toc.htm

 

 

 

 

 

Creating a Reusable Linux VM Image (Linux Deprovision)

MSDN Blogs - 5 hours 11 min ago

In this post, we will walk through the Linux deprovision process. If you create an image without going through this process and use it to create a new VM, you may run into problems like the one below.

Although the newly created VM was named linux-copied-vm, because the image was captured without the deprovision step, you can see that the hostname did not change.

We will create an Ubuntu Server 14.04 LTS VM, deprovision it, capture an image, and then create a new VM from that image. This post is based on the classic (old) Azure portal, and you can also check the official guide provided on the Azure website through the link.

Step 1. Set up the VM

1. Create a VM
: After signing in to the Azure portal, select “New -> Compute -> Virtual Machine -> Quick Create” in the lower left, and then enter the information for the VM you want to create.

2. Connect to the VM
: When the VM's status changes to “Running”, the VM is ready. To connect to a Linux VM you need a separate program such as PuTTY. Download and run it, then connect to the VM using its IP address.

You can find the VM's IP address on the “Dashboard”.

Log in with the username and password you entered when creating the VM.

Step 2. Create a Reusable Linux VM Image (Linux Deprovision)

1. Run deprovision
While connected to the VM, run the command “sudo waagent -deprovision+user” and answer “y” to continue.

2. Capture the VM image
Before capturing the VM image, shut down the VM that is currently running.

Once the VM has shut down, capture the Linux VM image using the “Capture” button at the bottom.

Write a short description of the VM, make sure to select the “I have run waagent -deprovision on the virtual machine” check box, and then click OK.

Once this is done, a VM image is created with the same environment as the original VM but with its system information removed; note that the original VM is deleted at the same time, so please be careful.

Step 3. Create a New VM from the Captured Image

1. Create a new VM from the captured image
: Now it's time to create a VM from the captured image. Select “New -> Compute -> Virtual Machine -> From Gallery”.

From the menu on the left, select “My Images”, and then select the VM image you captured.

Enter the virtual machine name, size, new user name, and password for the VM that will be created from the image, and then go to the next step.

Double-check the DNS name of the new VM, and then continue.


2. Check the configuration of the new VM
: All that's left is to verify that the deprovision worked. Open the “Dashboard” of the newly created VM and check that the hostname you entered has been applied.

Alternatively, you can connect to the VM with PuTTY, as mentioned above, and check it by running the “hostname” command.

In the next post, we will cover the Windows Sysprep process using PowerShell instead of the Azure portal.
Thank you.

Using Technology For Good: RefME for Word Helps Students Succeed

MSDN Blogs - 5 hours 47 min ago

An important part of projects undertaken in further and higher education is the referencing of sources and acknowledging the work of others who have gone before. Citations are therefore expected in any works of length, with the dreaded dissertation being the obvious example that should be familiar to all.

However, for many students, activities such as referencing and citing can often become incredibly arduous and time consuming, ending up just as challenging as writing the main body of text.

The following post is from Microsoft in Education Partner RefME, announcing the launch of RefME for Word – a tool for students and researchers.

Using Technology For Good: RefME for Word Helps Students Succeed

RefME are on a mission to transform education one citation at a time – with the help of Microsoft. Together, with the launch of RefME for Word, we are helping students all over the world improve their writing and research journey.

RefME is available for Word 2016 on Windows, Mac and Word for iPad.

The integration sets an example for future educational tools. Sharing expertise and collaborating on a joint-effort towards improving educational utilities for students is investing in the future, in our opinion. Both Microsoft and RefME offer resources for students that are unique on their own, but together, they’re even more transformative. We’re offering a tool and service that better serves the needs of students and researchers; ultimately: using technology for good.


The technology integration introduces a unique multi-platform tool that empowers students, researchers and educators to broaden their research sources and generate accurate citations in Microsoft Word as you write. The goal of educational utilities for RefME and Microsoft is to provide additional functionality outside of the ability to create a citation – this means, we’ve worked hard to make the integration easy to use, easy to teach and easy to adopt. Microsoft and RefME believe that educational tools should be accessible and encourage students to do more, learn more and cite more.

The Microsoft Word 2016 citation add-in integrates seamlessly and is available via the Office Store with a RefME Plus subscription.

“RefME wants to improve the way it helps serve the research needs of students globally and enhancing our offering through integration with Microsoft Word is a critical step towards that.” - Tom Hatton, CEO and co-founder, RefME

The new Microsoft features are also available through RefME’s institute edition called RefME Institute. Developed to support the whole university community, institutions can gain customised referencing styles, library support, single sign-on authentication and access to all the premium features, including Word 2016 for Windows, Mac and Word for iPad.

The new RefME Plus features include:
  • RefME for Word: cite as you write and manage your bibliography from within Microsoft Word 2016 for Windows, Mac, or Word for iPad
  • Photo Quotes app feature that uses OCR technology to turn printed text into digital text with a smartphone camera and save it as a quote
  • Folders (coming soon): an organisational feature that enables users to rearrange projects or drag-and-drop them into folders to keep citations organised.

“By working with RefME, in support of their Microsoft Word 2016 citation add-in, we are helping millions of students around the world to seamlessly integrate RefME’s citation web service directly into Microsoft Word via the Office Store. And with RefME for Word being developed in JavaScript, the same add-in works in Microsoft Word 2016 for Windows, Mac and iOS.” - Liam Kelly, General Manager, Developer Experience, Microsoft UK.

Solving the referencing problem through the use of technology and helping students discover tools that improve their development, research skills and knowledge is putting education on the pedestal that it deserves.


Post Event Download Links: Technology Outlook : Build

MSDN Blogs - 6 hours 6 min ago
About the Event

This afternoon puts the spotlight on the announcements from the Build 2016 conference, with a focus on the Windows 10 platform.

The rapidly growing Windows 10 install base allows ISVs to benefit from new innovations and offer their customers added value. Alongside Windows Universal Apps and the Windows Store for Business, Ronnie Saurenmann will present, among other things, the new Desktop App Converter, a technology that makes it possible to publish traditional Win32 apps in the Windows Store.

The Surface Hub offers an entirely new user scenario. Having shown a prototype at the last event, we now have the final product for you to try hands-on. Dario Trindler will show us how it makes meetings significantly more efficient.

In the third part we will learn how Xamarin can be used to develop apps quickly and natively for different mobile platforms. Our partner Noser Engineering, a certified Xamarin Premier Consulting Partner, will share its experience based on projects it has delivered.

 

 Agenda

15:00  Registration / coffee
15:30  Welcome and news from the Build conference – Olaf Feldkamp, Business Development Manager, Microsoft Switzerland
15:40  Windows 10 Anniversary Update and Universal Windows Apps: what's new for ISVs – Ronnie Saurenmann, Principal Technical Evangelist, Microsoft Switzerland
16:20  Surface Hub and demo – Dario Trindler, Solution Specialist Devices, Microsoft Switzerland
16:50  Break
17:10  Xamarin in business solutions: benefits and lessons learned in practice – Mark Allibone, Senior Consultant, Noser Engineering
17:45  Apéro riche and networking

 

Want to learn more about the topic?

Check out the following Microsoft Virtual Academy and Channel 9 content:

Downloads

[Sample Of May. 25] How to get the location of the custom Task Pane in Word when it is floating

MSDN Blogs - Tue, 05/24/2016 - 23:47
May. 25

Samples:

This sample demonstrates how to get the location of a custom task pane in Word when it is floating.

You can find more code samples that demonstrate the most typical programming scenarios by using Microsoft All-In-One Code Framework Sample Browser or Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If it is the first time that you hear about Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.

Installing the Front End Server and Back End Server (SQL Server) on the Same Server

MSDN Blogs - Tue, 05/24/2016 - 22:09

This is the Skype for Business support team.

When planning a Skype for Business Server deployment, we are sometimes asked whether the Front End Server and the Back End Server can be installed on the same server in order to minimize the number of servers and reduce cost.
Unfortunately, as explained below, this is not possible: the Front End and Back End have different, mutually complementary roles, so they cannot be installed on the same server.

In both Skype for Business Server 2015 and the previous version, Lync Server 2013, the Front End Server and Back End Server roles cannot be deployed on the same server.
If you try to install both on the same server, an error occurs when you publish the topology during setup; the installation is blocked so that the two roles cannot accidentally end up on one server.

The reason is that the Front End Server and the Back End database each play a different role as a data store, and some of the information also serves a backup purpose, so they need to be placed on separate servers. That is why the product is implemented this way.

Therefore, please be aware in advance that each role must be deployed on its own server.

 

The contents of this post (including attachments and links) are current as of the date of writing and are subject to change without notice.

Are You an Employee or a Founder?

MSDN Blogs - Tue, 05/24/2016 - 21:48

If you search on the differences between a “founder” and an “employee”, you will see a lot of articles on start-ups, shares of stock, being an entrepreneur, etc.  And if we look at the definition of a founder, it is someone who establishes or creates.  Ironically, the verb form of the word means to fill with water and sink.  I’m sure there are some founders of start-ups that first established and created, and then sank, but let’s not go there.

The definition of an employee is someone who does work for a business or another person for pay.  So with those two definitions in mind, let’s take this in a different direction and see how to apply this to corporate life and being an engineer.

Well, the first question is, are you an employee or a founder?  If you work in a business of any size, your answer is probably that you are an employee.  So the more interesting question is, do you act like an employee or a founder?  And more importantly, how do you think your answer affects your career trajectory?

Many of us are employees by definition and we like the benefits of being employees in a business that is not our own.  We don’t need to find customers, we don’t need to worry about sales and marketing, how to distribute paychecks, and all the other business aspects that you would have to consider when running your own business.  But in a corporation, if you truly want to see your career grow, you need to stop thinking like an employee and start thinking like a founder.  Treat your company like it is your very own.  Now I don’t mean go make crazy decisions at work because you are pretending you are in control of the company.  But what I do mean is that you should take more interest in the direction of the company.  A founder cares about the products that get produced, they care about their customers, they care about spending company money, and their work is a part of their life.  They don’t separate out their work time from the time they aren’t at work.  In their minds, founders are always thinking about their work, how to make their products better, how to make their customers happier.  And then when they are at work, they go make those things happen.

If you think like you are only an employee, you are limiting your potential.  You’ll approach problems as a follower of others instead of a leader.  You will miss opportunities where you could step up and be accountable for something important.  As an employee, you truly go to work to get paid and to do what you are tasked to do.  And maybe for some folks, that is enough.  But if you truly want to differentiate yourself from your peers, act like a founder.  Care more about the success of the company and not the thickness of your wallet or the time spent doing “work”.  If you have a difficult time wanting to act like a founder, then maybe you aren’t in the right role or even at the right company.  If you don’t have the passion to make the business that you are working in great, you may want to reconsider your career direction and make a shift in your job (and therefore your perspective) so that you can find an opportunity that will give you the desire to go from an employee to a founder.

In the next day, watch how you do your work.  How do you approach it?  Look for places where you respond to problems like an employee and think about how you may change your response or jump more deeply into a discussion when you act like a founder.  Having that added commitment to do quality work and do it for the sole reason of making the company great should help you grow in your career.  Therefore, remember to help establish and create like a founder so that you don’t fill with water and sink.

The week in .NET (주간닷넷) – May 16, 2016

MSDN Blogs - Tue, 05/24/2016 - 21:38

The RC2 versions of .NET Core, ASP.NET Core, and Entity Framework Core, along with Preview 1 of the SDK, have finally been released. With interest in .NET Core growing, Taeyo.NET has started an ASP.NET Core tutorial series. In addition, a Xamarin seminar is scheduled at the Microsoft office this Friday, so we hope those who are interested will attend. We look forward to your active participation: if you have found or written an article, source code, or library that is too good to keep to yourself, please let us know through Gist or the 주간닷넷 page. If you send us news from .NET community groups, we will share it with everyone through 주간닷넷.

ASP.NET Core RC2, .NET Core RC2, and Preview 1 of the SDK have finally shipped, together with Entity Framework Core RC2. The easy-to-remember dot.net website, where you can catch up on .NET news, has also launched.

Community news of the week

Taeyo.NET has started translating and publishing the ASP.NET Core documentation from http://docs.asp.net in Korean.

On.NET

In last week's On.NET interview, we talked about .NET Core RC2 and the Preview 1 tooling released on 5/15.
In this week's On.NET interview, we welcomed back Miguel de Icaza, who was the guest of the very first On.NET interview.

Tool of the week – Web Accessibility Checker

Checking the accessibility of a website by hand is not easy. But wouldn't it be convenient if your code could be checked against web accessibility standards and any violations showed up in the Visual Studio Error List? Try Web Accessibility Checker, built by Mads Kristensen, and check out the post introducing the tool.

Xamarin app of the week – Xactware

Xactware provides software solutions for every kind of building and repair work; it handles about 80% of housing-related claims disputes in the US and is valued at roughly 417 trillion won. Using Xamarin, Xactware built Xactimate, a mobile application for iOS and Android based on its existing C# code. With it, users can review jobs, create 3D sketches, produce estimates, and send all the data they need immediately while staying on site.

Game of the week – Dex

Dex is a modern, very well crafted 2D side-scrolling action/RPG with beautiful backdrops and lifelike dialogue. The comic-book-style cutscenes that appear throughout fit the overall story very well.

Players arrive in the fictional cyberpunk city of Harbor Prime and roam the streets completing missions. Depending on your level, you can choose a character that fits your play style. For example, when you run into an enemy, you can choose to approach quietly and talk, or open fire. You can even digitize the player's consciousness to infiltrate the enemy's electronic defense systems and carry out missions.

Dex was created by Dreadlocks LTD and developed with Unity and C#. It is currently available on Linux, Mac, and Windows, with support for Good Old Games (GoG), Xbox One, and PlayStation 4 coming soon. You can find more details on the Made With Unity page.

.NET news | ASP.NET news | Xamarin news | F# news | Games
  • Build A Unity Game Part 2 (video): Stacey Haffner, developer and co-founder of What Up Games, shows how to build a game in Unity.
  • Unity and IPv6 Support: Mantas Puida explains Unity's IPv6 support and how to work around anticipated issues.

주간닷넷 is a translation of The week in .NET, published weekly on the .NET Blog. The Korean translation is produced with the help of Ki-Su Song, technical director at OpenSG.

Ki-Su Song, Technical Director, OpenSG
He is currently the technical director at the development consulting company OpenSG and works on projects across a range of industries. Before that he was a trainer running .NET developer courses at the Samsung Multicampus training center and elsewhere, and since 2005 he has been a speaker at developer conferences such as TechED Korea, DevDays, and MSDN Seminar. These days he spends most of his working hours with Visual Studio, and he is a “Happy Developer” who believes he can stay happy by writing about one book a year and giving a couple of talks a month.

Support for Interaction Centric Forms within Unified Service Desk

MSDN Blogs - Tue, 05/24/2016 - 21:20

Unified Service Desk 2.1.0 is released now, and is available for download here. Among other things, this release is primarily focused on getting the same level of support for the CRM Interaction centric (IC) forms as it had for the CRM web client forms.

Before continuing on the Unified Service Desk support for IC forms, it would help to understand some important concepts about IC forms.

In This Post

Interaction Centric Forms: Introduction

Configuration Experience

Important considerations for changing the hosted control type to host interaction centric forms

Interaction Centric Forms: Introduction

IC forms are used by the Interactive Service Hub. The Interactive Service Hub is a single-page application, which means there are no popups and there is only one navigation stack regardless of the number of browser instances in which IC pages are loaded. This poses a challenge when it is loaded in an application like Unified Service Desk, which relies on popup/multi-page windows and supports independent navigation stacks for each hosted browser window.

In Microsoft Dynamics CRM 2016 Online Update 1 and CRM 2016 Service Pack 1 (on-premises) release, we updated Interactive Service Hub to support multiple navigation stacks and also support URL addressability to IC forms. This gives Unified Service Desk the ability to load IC forms inside a tab and make them participate in Unified Service Desk specific capabilities such as actions, events, sessions, and navigation rules.

NOTE: As we rely on the infrastructure support from the Interactive Service Hub, this particular feature of Unified Service Desk is supported only in CRM 2016 Online Update 1 and CRM 2016 Service Pack 1 (on-premises) and later. Customers currently using the Interactive Service Hub in earlier versions of CRM won’t be able to take advantage of this feature.

Configuration Experience

The configuration experience for working with IC forms is similar to that of CRM web pages. A new hosted control type called Interactive Service Hub Page is available in the Unified Service Desk 2.1.0 release, which provides similar actions and events as the CRM Page hosted control type. Changing the type of a Unified Service Desk hosted control from CRM Page to Interactive Service Hub Page will make the hosted control load IC forms instead of the CRM web forms. Most of the actions of the CRM Page hosted control type also work for the Interactive Service Hub Page type, so changing the hosted control type will take care of most of the configuration changes automatically.

Important considerations for changing the hosted control type to host interaction centric forms

These are the few important changes you need to consider before changing your CRM Page type of hosted control instances to the Interactive Service Hub Page type.

  • GoBack action for Interactive Service Hub Page acts differently than CRM Page.

In the CRM Page hosted control, the GoBack action is effectively similar to a window.history.back() JavaScript call or pressing the back button in the browser. In the Interactive Service Hub, pressing back in the browser or executing window.history.back() takes the user back to the main dashboard rather than to the previous page on the navigation stack. There is a special back button in ISH that navigates to the previous page on the navigation stack.

In the Interactive Service Hub Page hosted control, the GoBack action works like the back button within the Interactive Service Hub application until the navigation stack is empty, and then switches to the browser back functionality.

This is needed for the user to go back in the history, especially if they are navigating from non-Interactive Service Hub Page hosted control to an Interactive Service Hub Page hosted control.

  • The GoForward action for ISH page does not exist.

There is no concept of going forward in ISH and this action does not make any sense in the context of IC forms.

  • The PageLoadComplete event does not exist.

NOTE: You have to redefine your logic if you are using the PageLoadComplete event in your CRM Page hosted control (you should not be, unless you need to perform actions for specific frame), and want to change the type to the Interactive Service Hub Page hosted control.

The PageLoadComplete event of the CRM Page control fires for each frame load of the CRM page, so for each CRM record this event fires multiple times with different URL and frame data. IC forms have a different page construct than CRM web pages; they also load differently and not necessarily frame by frame. So the PageLoadComplete event does not make sense for IC forms.

  • The BrowserDocumentComplete event does not exist.

BrowserDocumentComplete is one of the most useful events of the CRM Page hosted control type. This event is fired when the CRM record has loaded completely and Unified Service Desk has finished scanning the data on that page. Although this is an extremely useful event, the name is quite misleading and will be changed in the future for the CRM Page hosted control. The Interactive Service Hub Page control introduces a new event called DataReady, which effectively does the same thing but with a more appropriate name.

NOTE: When you change the type of a hosted control from CRM Page to Interactive Service Hub Page, you have to move any action calls configured for the BrowserDocumentComplete event to the DataReady event.

  • Open_Crm_Page and New_Crm_Page actions on the Global Manager hosted control will continue to open CRM web pages and there are no equivalent actions on Global Manager to open IC forms.

Is Pluto a Planet?

MSDN Blogs - Tue, 05/24/2016 - 20:07

Is Pluto a planet?

And how could that question be related to Data Science?

Well, neither the physical properties of Pluto nor the ways we study it change if we call it a “planet”, a “dwarf planet”, a “candelabrum”, or a “sea cow”. Pluto stays the same Pluto regardless. For astronomy or physics, the class name does not really matter.

Yet we don’t expect to find publications on Pluto in the Journal of Marine Biology. They belong to astronomy or planetary science. So, when we talk about information storage and retrieval, proper classification does matter, and matters a lot.

From that standpoint, the distinction is material and reflects the physical world. When we study “real” planets like Mars or Venus, we often mention features that only or mostly “real” planets have, such as atmospheres, past or present tectonics, or internal differentiation. Small asteroids and “boulders” rarely if ever have those features. That difference should show up in the vocabulary.

So it may make sense to classify something as a “planet” if language use with respect to that object follows the same patterns as language use for “real” planets, simply because it is easier to store, search and use information when it’s organized that way.

That calls out for an experiment:

  1. Collect a body of texts concerning an object that we definitely consider a planet — say, Mars.
  2. Collect a body of texts (preferably from the same source) associated with “definitely not a planet” — say, 67P Churyumov-Gerasimenko comet or an asteroid.
  3. Do the same for Pluto.
  4. Using text mining algorithms, compute the vocabulary similarity between each pair of subjects and check whether the language of Pluto’s texts lands closer to Mars — or to a non-planetary object.

That was what I did.

 

Algorithm Outline

[Skip to tests if not interested] Assuming the documents are represented as a collection of .txt files:

  1. Read all text from the file.
  2. Split it into tokens using a set of splitters such as ‘ ‘ (whitespace), ‘.’ (dot) or ‘-‘ (dash) (42 in total; see the code for the full list).
  3. Drop empty tokens.
  4. Drop tokens that are too short. In these experiments I used 2 as the threshold, but got very similar results with 1 or 3.
  5. Drop tokens consisting only of digits (e.g., “12345”).
  6. Drop the so-called “stop words” — very common terms like “a” or “the” that rarely carry significant information. I used the list from Wikipedia, extended with a few words like “fig” (short for “figure”), or “et” and “al”, which are common in scientific texts.
  7. Run the Porter suffix-stemming algorithm on each token. The algorithm maps grammatical variations of the same word onto a common root (e.g., “read”, “reads”, “reading” -> “read”). Published in 1980, the algorithm is still considered a “gold standard” of suffix stemming, not least because of its high performance. While the idea of the algorithm is relatively straightforward, the implementation is quite sensitive to getting all the details exactly right, so I used the publicly available C# implementation by Brad Patton from Martin Porter’s page (greatest thanks to the author!).
  8. Again drop tokens that are too short — if any.
  9. Add the resulting keywords to the collection of word counts maintained for each data source (e.g., “Mars” or “Pluto”). So, if A is such a collection, then A[a] is the count of word a in A.
  10. While performing steps 1-8 above, also keep a count of documents that use each word. Intuitively, it makes sense to base the similarity measure on less common words that carry more information (e.g., “exosphere” vs. “small”) — and that is what the so-called “tf-idf” trick accomplishes. After a few experiments, I arrived at the following definition of the IDF weight per word: [IDF weight formula]. While the extra length term deviates from more traditional definitions, I found the discriminative power of this approach to be better than several other variations tested.
  11. For each pair of word collections A = {<ai, count(ai)>} and B = {<bi, count(bi)>}, compute the cosine similarity: [cosine similarity formula]. [Again, several other variations were tested, with this one producing the best class discrimination in tests.] Reference forms of both formulas are sketched below.

The resulting metric, J(A,B), is the vocabulary similarity between the sets of documents A and B, ranging from 0 (totally different) to 1 (exactly the same).
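
Since the formula images from the original post are not reproduced above, here is a hedged reference version using the traditional definitions only; the author's actual IDF weight adds an extra length term that is not recoverable from the text. With N the total number of documents and n_w the number of documents containing word w:

    \mathrm{idf}(w) = \log\frac{N}{1 + n_w}

    J(A,B) = \frac{\sum_{w \in A \cap B} \mathrm{idf}(w)^{2}\, A[w]\, B[w]}
                  {\sqrt{\sum_{w \in A} \big(\mathrm{idf}(w)\, A[w]\big)^{2}}\;
                   \sqrt{\sum_{w \in B} \big(\mathrm{idf}(w)\, B[w]\big)^{2}}}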

The code is here if you want to take a look, except for the Porter algorithm, which, as I mentioned, was adopted from Brad Patton’s C# implementation with no change.
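
For readers who want a feel for the mechanics without opening that link, here is a rough, self-contained sketch; it is my own illustration rather than the author's code, it omits the Porter stemmer, and it uses a plain idf weight rather than the length-adjusted variant described above:

using System;
using System.Collections.Generic;
using System.Linq;

static class VocabularySimilarity
{
    // A tiny subset of the 42 splitters and of the stop-word list used in the post.
    static readonly char[] Splitters = { ' ', '.', ',', '-', ';', ':', '(', ')', '\r', '\n', '\t' };
    static readonly HashSet<string> StopWords =
        new HashSet<string> { "a", "an", "the", "and", "of", "in", "to", "fig", "et", "al" };

    // Steps 1-8: raw text -> filtered tokens (stemming omitted here; the original
    // used Brad Patton's C# port of the Porter stemmer at this point).
    public static IEnumerable<string> Tokenize(string text, int minLength = 2) =>
        text.ToLowerInvariant()
            .Split(Splitters, StringSplitOptions.RemoveEmptyEntries)
            .Where(t => t.Length >= minLength)
            .Where(t => !t.All(char.IsDigit))
            .Where(t => !StopWords.Contains(t));

    // Step 9: per-source word counts, i.e. A[a] = count of word a in A.
    public static Dictionary<string, int> CountWords(IEnumerable<string> tokens)
    {
        var counts = new Dictionary<string, int>();
        foreach (var t in tokens)
            counts[t] = counts.TryGetValue(t, out var c) ? c + 1 : 1;
        return counts;
    }

    // Step 11: idf-weighted cosine similarity between two word-count collections,
    // ranging from 0 (totally different) to 1 (exactly the same).
    public static double Similarity(
        Dictionary<string, int> a, Dictionary<string, int> b, Dictionary<string, double> idf)
    {
        double W(string w) => idf.TryGetValue(w, out var v) ? v : 0.0;

        double dot   = a.Keys.Intersect(b.Keys).Sum(w => W(w) * W(w) * a[w] * b[w]);
        double normA = Math.Sqrt(a.Sum(kv => Math.Pow(W(kv.Key) * kv.Value, 2)));
        double normB = Math.Sqrt(b.Sum(kv => Math.Pow(W(kv.Key) * kv.Value, 2)));
        return (normA == 0 || normB == 0) ? 0.0 : dot / (normA * normB);
    }
}

The document frequencies feeding the idf map would be accumulated in the same pass over the input files (step 10), one count per document that contains the word.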

OK, the code is written. But before we run it on anything real, how do we know it works?

That’s why we have testing.

 

Test 1. Classical Music vs. Chemistry.

For this test, two bodies of text were used. The first corpus consisted of the Wikipedia articles on the inert gases Helium, Neon, Argon, Krypton and Xenon. The sections starting after the table of contents and continuing up to (but not including) “References” or “See Also” were used.

The second corpus included articles on the classical music composers Bach, Beethoven, Mozart and Rachmaninoff, similarly pre-filtered.

The two test subjects were the Wikipedia article about Oxygen (the gas) and the Wikipedia article about Oxygene, a famous musical album by the composer Jean-Michel Jarre.

The question was: can this method properly attribute the latter two articles to their respective classes (gases vs. music)?

After fixing a couple of bugs, some fine-tuning and experimentation, the code kicked in and produced the following results:

                   Gases           Composers       Oxygen (gas)    Oxygene (album)
  Gases            100% (0.237)    2.33% (0.092)   23.3% (0.197)   1.38% (0.244)
  Composers        2.33% (0.092)   100% (0.225)    3.61% (0.152)   5.74% (0.348)
  Oxygen (gas)     23.3% (0.197)   3.61% (0.152)   100% (0.347)    2.25% (0.189)
  Oxygene (album)  1.38% (0.244)   5.74% (0.348)   2.25% (0.189)   100% (0.587)

 

The percentage is the degree of similarity J(A, B). The number in parentheses is a support metric: how many unique words entered the intersection set, relative to the word count of the smaller set.
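
Read literally (this formula is my own assumption, not one given in the post), that support number would be roughly:

    \mathrm{support}(A,B) = \frac{\lvert \operatorname{keys}(A) \cap \operatorname{keys}(B) \rvert}{\min\big(\lvert \operatorname{keys}(A) \rvert,\ \lvert \operatorname{keys}(B) \rvert\big)}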

In the original post, the largest non-self similarity value in each row is highlighted. As you can see, the method correctly attributed Oxygen (the gas) to gases, and Oxygene (the album) to music/composers.

And yes, I’m explicitly computing the distance matrix twice — from A to B and from B to A. That is done on purpose as (essentially) a built-in test for implementation correctness.

OK, this was pretty simple. Music and chemistry are very different. Can we try it with subjects that are more closely related?

 

Test 2. Different Types of Celestial Bodies, from Wikipedia again.

Now we will use seven articles (67P, Betelgeuse, Halley’s Comet, Mars, Pluto, Sirius, and Venus), with all the text up to the “References” or “See also” sections:

The resulting similarity matrix:

               67P             Betelgeuse      Halley Comet    Mars            Pluto           Sirius          Venus
  67P          100% (0.481)    0.76% (0.259)   2.8% (0.23)     1.78% (0.286)   0.696% (0.234)  0.297% (0.202)  1.13% (0.252)
  Betelgeuse   0.76% (0.259)   100% (0.307)    1.27% (0.183)   2.66% (0.133)   1.66% (0.159)   6.9% (0.204)    2.86% (0.167)
  Halley Comet 2.8% (0.23)     1.27% (0.183)   100% (0.393)    2.05% (0.195)   2.12% (0.159)   1.48% (0.15)    1.97% (0.164)
  Mars         1.78% (0.286)   2.66% (0.133)   2.05% (0.195)   100% (0.289)    3.87% (0.175)   1.76% (0.19)    9.11% (0.191)
  Pluto        0.696% (0.234)  1.66% (0.159)   2.12% (0.159)   3.87% (0.175)   100% (0.326)    1.07% (0.155)   2.68% (0.15)
  Sirius       0.297% (0.202)  6.9% (0.204)    1.48% (0.15)    1.76% (0.19)    1.07% (0.155)   100% (0.448)    2.01% (0.171)
  Venus        1.13% (0.252)   2.86% (0.167)   1.97% (0.164)   9.11% (0.191)   2.68% (0.15)    2.01% (0.171)   100% (0.334)

 

What do we see? Stars are correctly matched against stars. Comets — against comets (although Halley’s comet classification wasn’t very strong). “True” planets are matched to “true” planets with much greater confidence. And Pluto? Seems like the closest match among the given choices is… Mars. With the next being… Venus!

So at least Wikipedia, when discussing Pluto, uses more “planet-like” language rather than “comet-like” or “star-like”.

To be scientifically honest — if I add Ceres to that set, the method does correctly group Ceres with Pluto, sensing the “dwarfness” in both. But the next classification choice in each case was still strongly a “real” planet rather than anything else. So at least from the standpoint of this classifier, a “dwarf planet” is a tight subset of a “planet”, rather than a “comet”.

Now let’s put it to real life test.

 

Test 3. Scientific Articles

The 47th Lunar and Planetary Science Conference held in March 2016 featured over two thousand great talks and poster presentations on the most recent discoveries about numerous bodies of the Solar System, including Mars, Pluto, and 67P/Churyumov–Gerasimenko comet. The program and abstracts of the conference are available here. What if we use them for a test?

This was more difficult than it might seem. For ethical reasons I did not want to scrape the whole site, preferring to use a small number (12 per subject) of randomly chosen PDF abstracts instead. Since the document count was low, I decided not to bother with a PDF-parsing IFilter and to copy the relevant texts manually. That turned out to be a painful exercise requiring great attention to detail: I needed to exclude author lists (to avoid accidentally matching on people or organization names) and references, and manually fix random line-breaking hyphens in some texts. For any large-scale text retrieval system, this process would definitely need serious automation to work well.

But finally, the results were produced:

           67P             Mars            Pluto
  67P      100% (0.252)    17.9% (0.126)   19.7% (0.132)
  Mars     17.9% (0.126)   100% (0.224)    21% (0.111)
  Pluto    19.7% (0.132)   22% (0.111)     100% (0.21)

 

The differences are far less pronounced, probably because the texts have very similar origins and format restrictions, and use the same highly specialized scientific vocabulary. Yet still, within this data set, Pluto is slightly more of a planet than a comet.

 

Closing Remarks

Should those results be taken seriously? By all means, no. They were obtained on a small data set, with a home-grown classifier, and with relatively modest differences detected. Yet they show the possibility of a Data Science driven approach to the question of which objects are more naturally classified as “planets”.

Thank you for reading,

Eugene V. Bobukh

How to call REST APIs and parse JSON with Power BI

MSDN Blogs - Tue, 05/24/2016 - 18:55

Just finished this week’s Power BI community webinar, Data Preparation is the Keystone by Reza Rad, and one of the questions asked was:

Can you use Power BI to call REST APIs and parse JSON?

As luck would have it, I was actually playing with doing exactly that and promised to do a blog post on it.

When I took the 400-line “getting started” sample from the Visual Studio documentation down to 6 lines, I didn’t think it could be made simpler.

The fact that one of my readers was still confused showed there was still room for improvement!

The fact that Power BI takes this to 0 lines and renders the resulting JSON into a beautiful set of visuals with a flexible data model, I am pretty certain, sets the standard for the easiest way to call REST APIs and parse JSON.

For this sample I have decided to call a REST API close to my heart – the Power BI UserVoice data at https://ideas.powerbi.com

As with most REST API samples, you start by enabling access. With UserVoice, this is done with tokens assigned to an admin (in the call below I have removed my token and replaced it with XXXXXXXXXXXXXX’s).

For more information about using tokens with the UserVoice API, see https://developer.uservoice.com/docs/api/technical-details/. To grant a client token, go to https://YOURDOMAIN.uservoice.com/admin/settings/api.

http://powerbi.uservoice.com/api/v1/forums/1/suggestions.json?client=XXXXXXXXXXXXXXXXXXXXXX  (note this won’t run as it doesn’t have a valid token!)

At this point you can execute the call directly in a browser… but, just like my Visual Studio REST sample, the result is not very pretty or useful!

Now that we have access comes the fun part – putting it into Power BI!

1. Choose Get Data from the start screen or the Home Tab.

 

2.  Select the “Other” category and select Web

3.  Paste/type in the REST call you want to make.  

 

4.  Depending on the call you are making, this may take a while and return different JSON schemas.

 

 

5.  At this point you will want to explore the schema. You can expand the elements one at a time, and if an element isn’t the data you are interested in, simply remove it from the query properties.

In this case it is a pretty straightforward schema: a record with the metadata about the data, called “Response_data” (see image below), and the list of elements (columns and their corresponding rows – which is what we want!).

6.  Expanding out the Suggestions > List gives us the following… So close!

7.  Right-click on the “List” column heading to expand this list into a table.

8.   At this point we just need to expand the columns to have the REST API feed the data model directly in a format we can build reports on.

 

9.  Pretty amazing how easy this was compared to writing the code to parse this yourself! (A rough code comparison appears after this list.)

 

10.   Of course, now that I have this data constantly available to Power BI, creating reports is a breeze!
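
For comparison only, here is a rough sketch of the do-it-yourself route that Power BI removes. This is my own illustration rather than code from the post, and the "title" property name is an assumption about the shape of the UserVoice response:

// Hypothetical C# equivalent of what Power BI does for you:
// call the UserVoice REST endpoint and parse the returned JSON.
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class UserVoiceSample
{
    static async Task Main()
    {
        // Replace the X's with a valid client token, as in the browser example above.
        var url = "http://powerbi.uservoice.com/api/v1/forums/1/suggestions.json?client=XXXXXXXXXXXXXX";

        using var http = new HttpClient();
        string json = await http.GetStringAsync(url);

        using JsonDocument doc = JsonDocument.Parse(json);

        // "suggestions" matches the list expanded in step 6; "title" is assumed for illustration.
        foreach (JsonElement suggestion in doc.RootElement.GetProperty("suggestions").EnumerateArray())
        {
            Console.WriteLine(suggestion.GetProperty("title").GetString());
        }
    }
}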

 

(In a future post i will need to do show pagination which Uservoice requires and setting up data model relationships with other REST API calls)

Group Managed Service Accounts (gMSA) and SQL Server 2016

MSDN Blogs - Tue, 05/24/2016 - 18:28

This post comes from another colleague of mine, Norm Eberly.  Norm is a dedicated Premier Field Engineer for Microsoft.  An overt anglophile and avid Alaskan angler, he lives near Seattle and has been working with SQL Server since 1994.  His experience includes database administration, external storage subsystems, consulting, and support engineering.  Norm’s areas of expertise are performance tuning, operational excellence, high availability, and functional business knowledge.

 

By Norm Eberly

There are two main drivers behind the development of Group Managed Service Accounts (gMSAs) for services such as SQL Server:

  1. They remove the overhead of managing service account passwords.
  2. Service Principal Name (SPN) registration can be done automatically.

Managed Service Accounts (MSAs) are also designed to address these two issues. However, MSAs are limited to a single computer account – they cannot be used as the service account for a SQL Server failover clustered instance which can run across multiple Windows servers.

Group Managed Service Accounts extend MSA functionality to cover multiple servers.

See the following reference for a more detailed discussion about gMSAs: https://technet.microsoft.com/en-us/library/hh831782.aspx

A major pain point in environments with a large number of SQL Server instances deployed is managing the service accounts according to published best practice guidelines, especially when the service accounts are domain accounts:

  • Each service should be using a different service account (to prevent the compromise of all services using the same service account if one service account is compromised).
  • Each service account should have its passwords managed in accordance with domain account policies (changing every 90 days for example)

Imagine the administrative overhead of having to manage 1000 separate domain accounts and their passwords. While some of the tasks can be automated, there is still overhead and coordination required to ensure passwords meet complexity requirements as well as usage-history rules. A single PowerShell script would have to connect remotely to all of the relevant servers and use the WMI service on each one to change the passwords for the services programmatically.
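
To illustrate that overhead, here is a minimal sketch (my own, not from the original post) showing that even just auditing which account each SQL Server service runs under already means a remote WMI query per server; the server names are hypothetical:

// Minimal sketch: list SQL Server services and their logon accounts via WMI
// (Win32_Service). A real password-rotation script would additionally have to
// change the password on each service and restart it, on every server.
using System;
using System.Management;   // add a reference to System.Management.dll

class ServiceAccountAudit
{
    static void Main()
    {
        string[] servers = { "SQLSERVER01", "SQLSERVER02" };   // hypothetical server names

        foreach (var server in servers)
        {
            var scope = new ManagementScope($@"\\{server}\root\cimv2");
            scope.Connect();

            var query = new ObjectQuery(
                "SELECT Name, StartName FROM Win32_Service WHERE Name LIKE 'MSSQL%'");

            using var searcher = new ManagementObjectSearcher(scope, query);
            foreach (ManagementObject service in searcher.Get())
            {
                Console.WriteLine($"{server}: {service["Name"]} runs as {service["StartName"]}");
            }
        }
    }
}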

Under normal circumstances, it is not unusual for domain service accounts and their passwords to be known by the service administrators; after all, they are usually the people responsible for setting and maintaining them. And at some level, having even administrators know these accounts and passwords may be considered a security vulnerability.

Likewise the overhead of SPN registration and management is tangible. SPNs can only be registered by accounts with Domain Admin level permissions – although it is possible to delegate the specific permission required. Many environments are not keen on delegating this permission, especially when there are hundreds to thousands of domain accounts involved.

In response to the challenges with password management, many environments compromise by using a single domain account as the service account for all of their SQL Server instances. Many also take a different approach to password policy for service accounts (perhaps they allow the same password for 12 months rather than 90 days, etc.).

Group MSAs address both of these:

  1. By automating the password management process within Active Directory. Passwords are very complex and changed automatically as often as desired (by default, every 30 days). The passwords are cryptographically random and 240 bytes long. In addition, they cannot be used to log on interactively, nor can they be locked out. There is also no longer a need to restart the SQL Server service after a service account password reset, which prevents downtime.
  2. By delegating the SPN registration permission to the gMSA, there is no vulnerability associated with a human using the service account to cause problems by registering duplicate or even bogus SPNs.

 

This is a step-by-step implementation of Group Managed Service Accounts (gMSAs) for use as the service account for SQL Server 2016.

This implementation is done using Windows Server 2012 Active Directory domain controllers (DCs), all servers running Windows Server 2012 or Windows Server 2012 R2, and SQL Server 2016 CTP 3.2.

 

 

Prerequisites

In order to utilize gMSA accounts, there must be at least one Windows Server 2012 (or R2) DC in the domain. There is no forest or domain functional level requirement.

The Key Distribution Services (KDS) Root Key needs to be created before a gMSA can be created. This is done via a PowerShell command (the Add-KdsRootKey cmdlet) and requires Domain Administrator or Enterprise Administrator level privileges.

  • See https://technet.microsoft.com/en-us/library/jj128430.aspx for details and steps
  • Note that there is a 10-hour lag between the time the KDS root key is created and the time a gMSA can be created. This allows full replication between the Windows Server 2012 DCs so that password retrieval works as expected.
  • There are steps at the above reference to allow the use of the root key immediately for testing purposes.
  • This requires a 64-bit environment, but only has to be done on one Windows Server 2012 R2 DC.

 

gMSA Implementation for SQL Server 2016

This section was developed using the steps outlined in the following blog post: http://blogs.technet.com/b/askpfeplat/archive/2012/12/17/windows-server-2012-group-managed-service-accounts.aspx

It was not necessary to perform every step in the blog post, and this section discusses those areas where necessary.

 

Create a Global Security Group in Active Directory Users and Computers.
  • This step is actually optional, but it allows for easier management of the rights required to use the gMSA on the member servers.
  • Note also that the member servers added to this security group will require a reboot.

In Active Directory Users and Computers, under the domain where the gMSA is to be created, right-click Computers, then select New and Group. This opens the New Object – Group dialog:

 

 

  • Enter a Group name; the Group scope should be Global and the Group type should be Security. In this demo, we will use SQLServers.
  • Open the newly created security group by double-clicking it and go to the Members tab, or right-click the security group, go to Properties, and then the Members tab.

{Note – Click on Images to Expand}

 

 

 


 

  • Click Add and add the domain member servers that will be hosting the SQL Server instances that will be using the gMSA. In this demo, I added all of the member servers that will be running SQL Server:

Note that these servers will require a reboot in order for their tokens to pick up membership in the group.

This group will be granted specific rights that allow its member servers to retrieve the gMSA password.

 

 

 

 

 

Create the gMSA account
  • This must be done with a PowerShell script in a PowerShell session that also has the Windows Server 2012 AD cmdlets available (it does not need to be done on a DC). See https://technet.microsoft.com/en-us/library/dd378937(v=ws.10).aspx for guidance on installing the AD Powershell module.
  • The command that creates the gMSA will also grant the right to retrieve the account’s password to the members of the security group created earlier.
  • This is the PowerShell command used:

New-ADServiceAccount -name gMSAsqlservice -DNSHostName gMSAsqlservice.contoso.com -PrincipalsAllowedToRetrieveManagedPassword SQLServers

 

Grant the gMSA account the “Validated write to service principal name” permission
  • One of the main benefits of MSA/gMSA is that these service accounts have the ability to register and deregister the SPNs for the services that use them. This is normally done with Domain Admin rights.
  • In order for the gMSA to be able to register/deregister SPNs, it needs the “Validated write to service principal name” permission granted to it.
  • This is an optional step for using gMSA service accounts. If Kerberos authentication is not going to be used, or if delegating this permission is not possible, then the steps below are not necessary.
    • In Active Directory Users and Computers, right click on the domain and go to Properties/Security.
    • Click Advanced and on the Permissions tab, click Add
    • At the top, click the “Select a principal” link to open the Select User, Computer, Service Account, or Group dialog box.
    • Click Object Types…, ensure Service Accounts is checked and hit OK.
    • Enter the gMSA account in Enter the object name to select and then Check Names, then click OK.
    • In Applies to: choose Descendent Computer objects.
    • Under Permissions, locate the “Validated write to service principal name” option and check the box.

 

 

 

 

  • Click OK three times to close all dialog boxes.

 

 

 

Configure and validate the gMSA service account on the member servers.
  • This is a step that might not need to be done: the accounts may already have been configured on the member servers when they were rebooted. However, the steps are quick, and validating the account should be considered necessary. These commands also require the AD module for PowerShell.
  • To configure the gMSA account, run the following PowerShell command on the member server: Install-ADServiceAccount gMSAsqlservice
  • To validate the gMSA account, run the following PowerShell command on the member server: Test-ADServiceAccount gMSAsqlservice
    • This should return True.
Configure SQL Server to use the gMSA service account
  • Start SQL Server Configuration Manager
  • Under SQL Server Services, right click the instance of SQL Server you want to assign the service account to and go to Properties
  • In Log On tab, choose “This account”.
  • In Account Name enter the domain account and include a “$” after the gMSA name:
    • contoso\gMSAsqlservice$
    • The “$” may automatically be added for you.
    • Do not enter a Password, it will be retrieved automatically from AD.
  • Start or Restart the SQL Server service

 

We can check that the SPN was correctly registered using the following command:

Setspn -L gMSAsqlservice

This should return two registered SPNs for the instance:

MSSQLSvc/W12R2-C3N1-S16.contoso.com:49514

MSSQLSvc/W12R2-C3N1-S16.contoso.com:I01

 

 

Using gMSA accounts during SQL Server 2016 installation

We can also designate the gMSA account during the SQL Server 2016 setup process. Just enter the domain account, in our example contoso\gMSAsqlservice$, as the Account Name on the Service Accounts page of the setup process. Setup does not prompt for a password; it checks with Active Directory for the correct authentication, and setup completes as expected.

Returning to Non-gMSA Service Accounts

To return to using non-gMSA service accounts, just use the SQL Server Configuration Manager to set the new service account and password. A SQL Server service restart will be required.

Note that this can, and likely will, unregister any SPN for the instance in AD. So if Kerberos authentication is required, the SPN will need to be re-registered.

Summary

With the release of SQL Server 2016, SQL Server service account management becomes much easier with Group Managed Service Accounts. Gone are the tedious planning and implementation phases of changing accounts and/or passwords, requiring SQL Server service restarts and then troubleshooting when things go wrong.

If your goal is to reduce management and administrative overhead while at the same time reducing security vulnerability, MSA and gMSA service accounts might well be worth the effort of evaluation.

 

Completely Off Topic

“Coppice” is a word that describes a growth of trees or shrubs that have been cut back periodically to stimulate growth and to harvest wood. Coppice is also used to describe the act of this periodic cutting. Coppicing continues to be practiced in many parts of the world for both gardening and commercial purposes.

View some coppiced woodlands

 

New book: Desenvolvimento efetivo na plataforma Microsoft

MSDN Blogs - Tue, 05/24/2016 - 17:28

New book written by the Microsoft Modern Apps Support Team:

Desenvolvimento efetivo na plataforma Microsoft

Como desenvolver e suportar software que funciona

Microsoft Modern Apps Support Engineers have the opportunity to work with mission-critical systems at the world’s largest companies across the most diverse industries. Over the years, these professionals have built expertise in development and support based on product recommendations and on good practices learned in the field, working side by side with customers, sharing knowledge with thousands of development teams, and helping every person and every organization achieve their full potential.

With a focus on DevOps, the .NET Framework, IIS (Internet Information Services), and Microsoft Azure, developers and architects will be able to improve the quality and availability of their software, raise their development maturity level, save time, and reduce costs.

The book is available at:

https://www.casadocodigo.com.br/products/livro-plataforma-microsoft

This is an independent, non-profit work. The money raised will go to the CDI project (Comitê para Democratização da Informática / http://www.cdi.org.br/). CDI is a social organization that uses technology for social transformation, strengthening communities and fostering entrepreneurship, education, and citizenship.

How to Correctly Create an Override for a SCOM Workflow

MSDN Blogs - Tue, 05/24/2016 - 16:36

 

Never store overrides in the “Default Management pack”. Don’t do it.

 

 In fact, rename the Default Management Pack right now to:

 DO NOT USE – Default Management Pack – DO NOT USE

 

 

If you store an override in the Default Management Pack then somewhere a puppy loses its ears and then Andres bursts a vessel.

 

What Not to Do

 

The screenshots below show examples of the opposite of “best practice” and are a great example of what I see in the field regularly. These are fantastic examples of what you should NOT do.

Notice there is an excessive number of unsealed management packs, apparently one for each and every override related to SPN alerts.  Additionally, there is an excessive number of generic and vaguely named unsealed packs.

 

 

 

 

As an example of why this is a bad idea, consider the following scenario for the fictitious company “DIY”:

Let’s say that you want to override the Logical Disk Free Space monitor for a server: MyServer01

Consider the following scenario:

MyServer01 is a virtual machine on a Hyper-V cluster which consists of two physical Dell servers.

MyServer01 is also a domain controller and standalone certificate authority (with IIS) hosting a certificate enrollment website.

 

The server in question (MyServer01) is:

·         an Active Directory server (domain controller)

·         living on a Dell host

·         a Hyper-V virtual machine

·         an IIS server

 

Let’s pretend that you have the following unsealed management packs in your environment:

DIY – Active Directory

DIY – Dell

DIY – Hyper-V 2008

DIY – IIS

DIY – SCOM Overrides

 

In which unsealed MP do you store the override for the monitor???

 

You could make an argument for each of the unsealed MPs listed above!  There are plenty of articles, blogs, books, and guides which suggest that you should create a general SQL unsealed pack for your “SQL-related” overrides, or perhaps a general AD unsealed pack for your Active Directory overrides. However, we can see from the example above how this is problematic. If it is left up to the casual admins to guess which pack to use, you will end up with a spaghetti clown mess of dependencies and an excessive number of unnecessary MPs, like the example screenshots above.  If you follow the recommended practice outlined below, there will never be any doubt or confusion, and you will have a tidy, clean, efficient environment. It requires a little more effort initially, but it will save you much pain down the road. Don’t be lazy!

 

In the example below I demonstrate how to override a SQL Server DB Engine monitor. The best process (yes, the best) for creating an override is as follows:

*Note: I’m sure there are sticklers out there who will point out a few exceptions, like the Exchange 2010 workflows. This article applies to normal MPs with normal workflows. It also doesn’t address overrides for custom groups; that will be covered in a later blog post.

 

Steps for Creating an Override Correctly

 

Identify the Base Sealed Pack

1.  First identify in which management pack the workflow (monitor, rule, or discovery) is defined. You can find this information on the General tab of the workflow properties. In the example screenshot below, an alert was selected, and then the Alert Monitor link was clicked to open the monitor’s Properties window. This is the Properties window of the actual workflow (a monitor in this case) that spawned the alert. In this window you can see the sealed management pack where the monitor is defined: “Microsoft SQL Server 2014 (Monitoring)”.

 

 

This monitor is defined in this sealed pack: “Microsoft SQL Server 2014 (Monitoring)”.

 

 

 

Create the Unsealed “Buddy Pack”

2.  Now that we know the sealed management pack name, we can store the override in an appropriately named “buddy pack”: an unsealed management pack that should contain only overrides that apply to workflows contained within the base sealed pack. In other words, any and all overrides that are applicable to workflows in this sealed base pack, “Microsoft SQL Server 2014 (Monitoring)”, should be stored in this unsealed “buddy pack”. The “buddy pack” should be named nearly identically to the base pack, but with an obvious visual identifier appended. Example: “Microsoft SQL Server 2014 (Monitoring) (OVERRIDES)”. The process is demonstrated in the screenshots below.

 

Note: These instructions assume the “buddy pack” does not already exist.

 

 

 

 

 

 

Now we have an unsealed pack in which to store the overrides related only to the sealed base pack of the workflow. With this “buddy pack” strategy there should never be any doubt or confusion about where to store an override related to any workflow in a sealed pack, as long as we stick to the formula: a consistent, repeatable process. We will also keep our overall dependency ratio to a minimum, which is a good thing. Our environment will be orderly and efficient. No clown spaghetti messes!

 

 

 

 

Document Your Changes

3.  Add a notation to the Enabled parameter with details about WHY you made the customization, along with your name and a date. It’s very helpful to yourself and others to know WHY you made this change, because you may not always be available to answer questions about the customization, especially at 3 AM. Save your changes with “OK”. (Selecting “Apply” is a waste of time for those with “admin OCD”.)

 

Note: It is possible to add a notation to each parameter that you have activated by checking its option box. However, it’s a little faster and less tedious to add all of your comments to just one line item. In this example, the only parameter being modified is the “Enabled” parameter.  For consistency, I suggest that you use the “Enabled” parameter as shown in the example above; practically every override set includes it. (I can’t think of an example right now where this parameter is not present.) Even if this parameter is already enabled by default, it’s OK to activate it, leave it set to “Enabled”, and then add your comments to it. Consistency is important.

 

Your unsealed override packs will appear neat, orderly, professional, and alphabetized. By simply looking at the name of the unsealed pack you will immediately know what it is used for.

 

Examples:
