Feed aggregator

Future Decoded: Tickets Now Available

MSDN Blogs - Mon, 09/29/2014 - 02:20

Join us on the 10th November for Future Decoded. We have assembled some of the world's most influential thought leaders to speak on how we need to evolve our thinking on the forces that are driving how economies are run, how societies develop and the pivotal role that transformational technology will play in making those changes happen.

Future Decoded is an event that does not attempt to predict the future but instead works to decipher it: it is a forum for us to discover, provoke and provide insight into all the uncertainty we may face in the years ahead.

Future Decoded is a free event being held at ExCeL London. In addition to hearing from Microsoft CEO Satya Nadella, who will deliver the keynote address and outline how Microsoft is creating the next generation of technology for business, you will also hear from the following inspirational and thought-provoking speakers.

  • Sir Bob Geldof, who will open our eyes to a different perspective on education in the 21st century
  • Jeremy Paxman will explore and discuss the crucial changes that are currently underway in our society and how they will influence our future
  • Dame Stella Rimington will share her unique views on leadership in chaotic times
  • Sara Murray OBE will talk of the new role for innovation at the heart of our people, not just our business
  • Sir Martin Sorrell, who will take us through his vision for what all this change will mean for business and creativity.

In the afternoon, you'll hear Microsoft customers share their experiences across four main theatres as well as have the opportunity to take part in discussions and network with the UK's top business leaders. There will also be a series of demonstrations showcasing some of the latest technology and innovations, enabling you to gain a head start on adapting to a fast-changing world.

Future Decoded will challenge you to think and work differently. It will arm you with the critical insights you need to prepare for the future. A future that, we all know, will look nothing like today.

Tickets are free of charge and subject to availability – Register Today.


The Cloud Platform competency

MSDN Blogs - Mon, 09/29/2014 - 02:12

Launching September 30, 2014, the new Cloud Performance Competencies deliver unparalleled value for Microsoft partners. Learn all you need to know in this four-post series.

  1. Ushering in a new era, with the Cloud Performance Competencies - post one
  2. The Small & Midmarket Cloud Solutions competency - post two
  3. The Cloud Productivity competency - post three
  4. The Cloud Platform competency - post four


The Cloud Platform competency: for partners who specialise in delivering infrastructure and SaaS solutions on Microsoft Azure

The Azure Circle program reborn: the Cloud Platform competency is tailor-made to help partners capitalise on the growing demand for IaaS and SaaS solutions in the public cloud. From Independent Software Vendors to Managed Service Providers and everyone in between, earning this competency places you amongst the Azure elite.

Qualify through your proven cloud performance, attain the Silver level at no cost and Gold at a reduced fee, and receive exclusive benefits in addition to the core Silver and Gold entitlement.


Silver Requirements

1. Meet the sales goal

Demonstrate US$25,000 Microsoft Azure customer consumption and/or Azure partner EA consumption within the previous 12 months

2. Pass the qualifying assessments

  • Learn more about the assessments and available readiness through this competency's Learning Path (you may need to copy & paste the URL into your browser)

One individual must pass one of the following (online, no cost) technical assessments:

  • Technical Assessment for Using Microsoft Azure for Datacenter Solutions
  • Technical Assessment for Datacenter and Data Platform
  • Technical Assessment for Using Microsoft Azure for Application Development

 

3. Provide customer evidence

Submit three customer references for deals involving Azure which occurred within the past 12 months

4. Pay the silver competency fee

To help you invest in your future, the silver Cloud Performance Competency fee has been waived for the first year (offer available until June 30, 2015)


Gold Requirements

1. Meet the sales goal

Demonstrate US$100,000 Microsoft Azure customer consumption and/or Azure partner EA consumption within the previous 12 months

2. Pass the qualifying assessments

  • Learn more about the assessments and available readiness through this competency's Learning Path (you may need to copy & paste the URL into your browser)

Two individuals must each pass one of the following (online, no cost) technical assessments:

  • Technical Assessment for Using Microsoft Azure for Datacenter Solutions
  • Technical Assessment for Datacenter and Data Platform
  • Technical Assessment for Using Microsoft Azure for Application Development

 

3. Provide customer evidence

Submit five customer references for deals involving Azure which occurred within the past 12 months

4. Pay the gold competency fee

To help you invest in your future, the gold Cloud Performance Competency fee has been reduced to $5,610 ($AUD, inc GST)

  • One gold fee is due once per year—no matter how many additional competencies your organisation earns


Exclusive Cloud Platform benefits

In addition to the core Silver and Gold competency entitlement*, Cloud Platform partners are eligible for:

*Please note: Cloud Performance Competencies attained through the no-cost Silver promotion are ineligible for the APC ticket entitlement.


Next steps

  • See the competency page for full benefits and requirements
  • Track your progress in the Online Services Dashboard
    • Seats not visible? Make sure you're listed as Partner Of Record
  • Already meet the requirements? Attain your competency on launch day (September 30, 2014) through the Partner Membership Centre (Global Administrator Rights required)
  • Looking to transition to this competency from Cloud Accelerate or Cloud Deployment? See this blog post for more


Partner resources

University IT Camps!

MSDN Blogs - Mon, 09/29/2014 - 02:04

Microsoft's technical experts are running a hands-on training session to help Universities extend the datacentre using Microsoft Azure! Spaces are filling up, secure your spot today.

Some of you reading this may have had an invitation to something called the University IT Camp sitting in your inbox and may be wondering what this event actually is. I have written this quick blog post to explain a little more about what it is, and why it might be of interest to you.

I’m Andrew Fryer and I’ll be your host on the day to guide you through how you can scale on-demand services, rapidly respond to your students and faculty, immediately address short term, short notice requests for research IT resources and maintain total control over your data at all times, through a Hybrid Cloud model infrastructure.

IT Camps are not your typical Microsoft event; in fact, they are quite the opposite. They are hands-on, giving you access to the technology and to the expertise needed to explain and answer your questions. We sit you around tables and encourage discussion among the audience as well as with the presenters, and within the broad theme of the day you agree the agenda. That sort of interaction is still needed, and it will be the format at Future Decoded (10-12 November at the ExCeL Centre in London).

IT Camps come in different flavours, and for the event in London on 17 October we are going to look at the hybrid cloud: keeping some of what you have and extending it into Microsoft Azure, in much the same way that some of you are making use of Office 365 while possibly keeping some Exchange Server in house, say for permanent staff.

We do have other venues, and we'll also be running camps based around Enterprise Mobility – the business of managing all of the devices, users and applications that connect to your resources – and you're also welcome to come to those if that is of interest. However, for 17 October we are restricting attendance to colleges and universities: although you might be in competition for students, grants and so on, the IT challenges you face are similar, and we think it would be good to air those and learn from each other as well as from us.

So hopefully we'll see you there; all you need is a device or two capable of accessing an HTML5 website, and an open mind.

If you cannot make it on 17 October, please feel free to register for another date or location. Below are the other camps scheduled over the next few months.

 

London 17 October - An IT Camp for Universities: Extend your Data Centre with Microsoft Azure

Birmingham 21 October - Implementing a Mobile-First World with Microsoft Enterprise Device Infrastructure

Birmingham 22 October - Extend your Datacentre with Microsoft Azure

Cardiff 25 November - Implementing a Mobile-First World with Microsoft Enterprise Device Infrastructure

Cardiff 26 November - Extend your Datacentre with Microsoft Azure

The Cloud Productivity competency

MSDN Blogs - Mon, 09/29/2014 - 01:59

Launching September 30, 2014, the new Cloud Performance Competencies deliver unparalleled value for Microsoft partners. Learn all you need to know in this four-post series.

  1. Ushering in a new era, with the Cloud Performance Competencies - post one
  2. The Small & Midmarket Cloud Solutions competency - post two
  3. The Cloud Productivity competency - post three
  4. The Cloud Platform competency - post four


The Cloud Productivity competency: for partners deploying Microsoft Office 365 for Enterprise customers

The evolution of the Cloud Deployment program, the Cloud Productivity competency demonstrates your best-of-breed Office 365 deployment capabilities, creating new opportunities through FastTrack 2.0 alignment and eligibility to participate in Office 365 Adoption Offers.

Qualify through your proven cloud performance, attain the Silver level at no cost and Gold at a reduced fee, and receive exclusive benefits in addition to the core Silver and Gold entitlement.


Silver Requirements

1. Meet the sales goal

500 seats of Exchange Online deployed (assigned) within the last 12 months

  • NOTE: when the data is available, we will transition the measurement from assigned users to active use.
  • Track your progress in the Online Services Dashboard
  • Seats not visible? Make sure you're listed as Partner Of Record for all Office 365 deals

2. Pass the qualifying exams

  • Learn more about the exams/certifications and available readiness through this competency's Learning Path (you may need to copy & paste the URL into your browser)

One individual must pass one of the following Office 365 core options:

Office 365 Core

  • Option 1
    • Exam 70-346: Managing Office 365 Identities and Requirements, and
    • Exam 70-347: Enabling Office 365 Services
  • Option 2
    • MCSA: Office 365

One individual must also pass one of the following workload options:

Messaging Workload

  • Option 1
    • Exam 70-341: Core Solutions of Microsoft Exchange Server 2013, and
    • Exam 70-342: Advanced solutions of Microsoft Exchange Server 2013
  • Option 2
    • MCSE: Messaging

3. Provide customer evidence

Submit three customer references for Office 365 deals closed within the past 12 months

4. Pay the silver competency fee

To help you invest in your future, the silver Cloud Performance Competency fee has been waived for the first year (offer available until June 30, 2015)


Gold Requirements

1. Meet the sales goal

1,500 seats of Exchange Online deployed (assigned) within the last 12 months

  • NOTE: when the data is available, we will transition the measurement from assigned users to active use.
  • Track your progress in the Online Services Dashboard
  • Seats not visible? Make sure you're listed as Partner Of Record for all Office 365 deals

2. Pass the qualifying exams

  • Learn more about the exams/certifications and available readiness through this competency's Learning Path (you may need to copy & paste the URL into your browser)

Two unique individuals must each pass one of the following Office 365 core options:

Office 365 Core

  • Option 1
    • Exam 70-346: Managing Office 365 Identities and Requirements, and
    • Exam 70-347: Enabling Office 365 Services
  • Option 2
    • MCSA: Office 365

The same two unique individuals must also pass one of the following workload options:

Messaging Workload

  • Option 1
    • Exam 70-341: Core Solutions of Microsoft Exchange Server 2013, and
    • Exam 70-342: Advanced solutions of Microsoft Exchange Server 2013
  • Option 2
    • MCSE: Messaging

 

3. Provide customer evidence

Submit five customer references for Office 365 deals closed within the past 12 months

4. Pay the gold competency fee

To help you invest in your future, the gold Cloud Performance Competency fee has been reduced to $5,610 ($AUD, inc GST)

  • One gold fee is due once per year—no matter how many additional competencies your organisation earns


Exclusive Cloud Productivity benefits

In addition to the core Silver and Gold competency entitlement*, Cloud Productivity partners are eligible for:

*Please note: Cloud Performance Competencies attained through the no-cost Silver promotion are ineligible for the APC ticket entitlement.


Next steps

  • See the competency page for full benefits and requirements
  • Track your progress in the Online Services Dashboard
    • Seats not visible? Make sure you're listed as Partner Of Record
  • Already meet the requirements? Attain your competency on launch day (September 30, 2014) through the Partner Membership Centre (Global Administrator Rights required)
  • Looking to transition to this competency from Cloud Accelerate or Cloud Deployment? See this blog post for more


Partner resources

The September 2014 Update for Lync 2013 Has Been Released

MSDN Blogs - Mon, 09/29/2014 - 01:55

Good evening. This is Kubo from Lync support.

The September 2014 update for the Lync 2013 client has been released.

http://support.microsoft.com/kb/2889860/ja

It fixes the following issues:

The invalid password count is incremented when you use the Lync 2013 VDI plug-in paired with a Lync 2013 client.

http://support.microsoft.com/kb/2992445

Lync 2013 crashes when a user switches a shared desktop between full-screen and actual-size views.

http://support.microsoft.com/kb/2992447

Problems with desktop sharing and application sharing during a conversation in Lync 2013

http://support.microsoft.com/kb/2992448

As a prerequisite, the MSO, MSORES and Lynchelp updates must be installed; however, because the latest MSO already includes the MSORES update, installing the latest MSO and Lynchelp is sufficient to apply this hotfix.

The new version is 15.0.4649.1000.
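As a rough illustration, checking whether an installed Lync 2013 build is at or above the 15.0.4649.1000 update can be sketched with a hypothetical helper (the function names are illustrative and not part of any Microsoft tooling):

```python
def parse_build(version: str) -> tuple:
    """Turn a dotted build string such as '15.0.4649.1000' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def has_september_2014_update(installed: str) -> bool:
    """True if the installed build is at or above 15.0.4649.1000."""
    return parse_build(installed) >= parse_build("15.0.4649.1000")
```

Tuple comparison handles multi-digit components correctly, which a naive string comparison would not (for example, "15.0.10000.0" sorts before "15.0.4649.1000" as a string).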

 

As always, enjoy your Lync life!

 


The Small and Midmarket Cloud Solutions competency

MSDN Blogs - Mon, 09/29/2014 - 01:39

Launching September 30, 2014, the new Cloud Performance Competencies deliver unparalleled value for Microsoft partners. Learn all you need to know in this four-post series.

  1. Ushering in a new era, with the Cloud Performance Competencies - post one
  2. The Small & Midmarket Cloud Solutions competency - post two
  3. The Cloud Productivity competency - post three
  4. The Cloud Platform competency - post four


The Small and Midmarket Cloud Solutions competency: for partners selling Microsoft Office 365 to Small and Midmarket customers.

The spiritual successor to the Cloud Accelerate program and the Small Business competency, the Small and Midmarket Cloud Solutions competency is the perfect way to capitalise on your expertise selling Office 365 into SMB. Qualify through your proven cloud performance, attain the Silver level at no cost and Gold at a reduced fee, and receive exclusive benefits in addition to the core Silver and Gold entitlement.


Silver Requirements

1. Meet the sales goal

150 seats of Office 365 sold with at least 10 new customers within the previous 12 months

  • Track your progress in the Online Services Dashboard
  • Seats not visible? Make sure you're listed as Partner Of Record for all Office 365 deals

2. Provide customer evidence

Submit three customer references for Office 365 deals closed in the past 12 months

3. Pay the silver competency fee

To help you invest in your future, the silver Cloud Performance Competency fee has been waived for the first year (offer available until June 30, 2015)

 


Gold Requirements

1. Meet the sales goal

300 seats of Office 365 sold with at least 30 new customers within the previous 12 months

  • Track your progress with the Online Services Dashboard
  • Seats not visible? Make sure you're listed as Partner Of Record for all Office 365 deals

2. Pass the qualifying exams

Two individuals must each pass Exam 70-347: Enabling Office 365 Services

  • Learn more about the exam and available readiness through this competency's Learning Path (you may need to copy & paste the URL into your browser)

3. Provide customer evidence

Submit five customer references for Office 365 deals closed within the past 12 months

4. Pay the gold competency fee

To help you invest in your future, the gold Cloud Performance Competency fee has been reduced to $5,610 ($AUD, inc GST)

  • One gold fee is due once per year—no matter how many additional competencies your organisation earns
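The silver and gold sales thresholds above (150 seats across at least 10 new customers, and 300 seats across at least 30 new customers, within the previous 12 months) can be sketched as a quick self-check. The function name is hypothetical, and the real attainment check of course happens in the Partner Membership Centre:

```python
def small_midmarket_level(seats_sold: int, new_customers: int) -> str:
    """Highest Small and Midmarket Cloud Solutions level these sales figures alone satisfy.

    Seats and customers are counted over the previous 12 months; exam,
    customer-reference and fee requirements still apply on top of this.
    """
    if seats_sold >= 300 and new_customers >= 30:
        return "Gold"
    if seats_sold >= 150 and new_customers >= 10:
        return "Silver"
    return "Not yet eligible"
```

Note that both conditions must hold at each level: 400 seats with only 12 new customers still qualifies for Silver, not Gold.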



Exclusive Small and Midmarket Cloud Solutions benefits

In addition to the core Silver and Gold competency entitlement*, Small and Midmarket Cloud Solutions partners are eligible for:

*Please note: Cloud Performance Competencies attained through the no-cost Silver promotion are ineligible for the APC ticket entitlement.


Next steps

  • See the competency page for full benefits and requirements
  • Track your progress in the Online Services Dashboard
    • Seats not visible? Make sure you're listed as Partner Of Record
  • Already meet the requirements? Attain your competency on launch day (September 30, 2014) through the Partner Membership Centre (Global Administrator Rights required)
  • Looking to transition to this competency from Cloud Accelerate or Cloud Deployment? See this blog post for more

 

    
Partner resources

Share your thoughts! Join the discussion in the Microsoft Australia Small Business Reseller LinkedIn Group.

Shadow Boxing - Featured Small Basic Program

MSDN Blogs - Mon, 09/29/2014 - 00:00

Today, I will introduce a Small Basic program, Shadow Boxing, written by NaochanON.

In this program, a stick man runs and does some light shadow boxing.

The program ID is JNL860. Have fun!

Thank you, NaochanON, for nominating your animation program.

By the way, got a game or program you made that you want us to review for being featured on this blog? Post it in the following thread to nominate it!

Nominate games (or other programs) here to get featured on our Blog! (PART 2)

Latest SQL Server Modules (September 2014)

MSDN Blogs - Sun, 09/28/2014 - 23:05

The latest SQL Server modules as of September 29, 2014.

Extended support for SQL Server 2000 ended on April 9, 2013. Thank you for your many years of use.
Mainstream support for SQL Server 2008 ended on July 8, 2014. CU17 is the last cumulative update.

Product | Service Pack | Update | Version | Released | Support status
SQL Server 2014 | RTM | KB 2984923 (CU3) | 12.0.2402.0 | 2014/8 | Mainstream support
SQL Server 2012 | SP2 | KB 2976982 (CU1) | 11.0.5532.0 | 2014/7 | Mainstream support
SQL Server 2008 R2 | SP3 | None | 10.50.6000.34 | 2014/9 | Mainstream support ended July 8, 2014
SQL Server 2008 | SP3 | KB 2958696 (CU17) | 10.00.5861 | 2014/5 | Mainstream support ended July 8, 2014
SQL Server 2005 | SP4 | KB 2598903 (OD); KB 2716427 Reporting Services (MS12-070) | 9.00.5295; 9.00.5324 | 2011/8; 2012/10 | Extended support (ends April 12, 2016)

RTM: Release To Manufacturing
SP: Service Pack
CU: Cumulative Update (released every two months)
OD: On-Demand (cumulative update released on demand)

For details on SQL Server updates, see "SQL Server updates". For information on mainstream and extended support, see the Microsoft Support Lifecycle.
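As a small illustrative sketch (the data is transcribed from the table above, and the helper name is hypothetical), a SQL Server ProductVersion string can be mapped back to its release line by its major.minor prefix:

```python
# Release lines keyed by version prefix, per the table above (September 2014).
RELEASE_LINES = {
    "12.0": "SQL Server 2014",
    "11.0": "SQL Server 2012",
    "10.50": "SQL Server 2008 R2",
    "10.00": "SQL Server 2008",
    "9.00": "SQL Server 2005",
}

def release_line(product_version: str) -> str:
    """Map a ProductVersion string (e.g. '11.0.5532.0') to its SQL Server release line."""
    major_minor = ".".join(product_version.split(".")[:2])
    return RELEASE_LINES.get(major_minor, "Unknown")
```

On a live server the version string would come from SELECT SERVERPROPERTY('ProductVersion'); here the lookup is shown stand-alone.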

Ushering in a new era, with the Cloud Performance Competencies

MSDN Blogs - Sun, 09/28/2014 - 22:06

Humble beginnings

The cloud incubation programs (Cloud Essentials, Cloud Accelerate, Cloud Deployment and Azure Circle) have always been among my Microsoft Partner Network favourites. Much like cloud technology itself, these programs levelled the playing field: if you had a desire to deliver public cloud solutions, Cloud Essentials provided you with everything needed to succeed. As your capabilities grew, new program tiers became available, with additional benefits to accelerate development and increase profitability.

If enrolments are anything to go by, the cloud programs were a partner favourite too.

Thousands of Australian organisations have taken part, with many of today's most successful cloud partners having leveraged program benefits to change the direction of their business or accelerate growth from humble beginnings. What's life like in the cloud today? To paraphrase IDC: "quite nice". Their recent study* shows that cloud-oriented partners make 1.5 times the gross profit, 1.6 times the recurring revenue and have a 1.3 times higher new-customer ratio compared to other partners.

*IDC. Successful Cloud Partners 2.0. 2014

Like the industry itself, the Microsoft Partner Network must continue to evolve, enabling new and existing partners to capitalise on emerging opportunities. At the end of the month we take the next step, farewelling some old favourites and welcoming three new ones.


The cloud goes mainstream

September 30th marks the day the cloud truly moves from the fringe, taking its place front and centre in the Microsoft Partner Network. The cloud programs will retire, whilst we introduce the long-awaited Cloud Performance Competencies.

  • Small and Midmarket Cloud Solutions: for partners selling Microsoft Office 365 to Small and Midmarket customers
  • Cloud Productivity: for partners deploying Microsoft Office 365 for Enterprise customers
  • Cloud Platform: for partners who specialise in delivering infrastructure and SaaS solutions on Microsoft Azure

Benefits to set you apart

Like other Microsoft competencies, Australian partners will enjoy benefits including:

1. Internal use rights (IUR) software

Run your business on the latest Microsoft technology

  • Silver = licences for up to a 25-person business
  • Gold = licences for up to a 100-person business

2. Partner Advisory Hours

Technical presales assistance and advisory services from expert Microsoft consultants

  • Silver = 20 hours
  • Gold = 50 hours

3. Tickets to the Australia Partner Conference

Microsoft Australia's premier partner event

  • Silver = n/a*
  • Gold = two tickets*

*APC tickets are an exclusive benefit for partners that pay competency fees in Australia. Paying the silver fee makes you eligible for one ticket; paying the gold fee makes you eligible for two. Cloud Performance Competencies attained through the no-cost Silver promotion will be ineligible for tickets.

In addition, partners who qualify will receive a host of exclusive benefits, including:

  • Unlimited Signature Cloud support
  • Internal-use rights and development and test environments
  • Channel incentives and access to special offers
  • Guaranteed assigned Microsoft contact


Rewarding your cloud success

Like the cloud incubation programs before them, the primary requirement for Cloud Performance Competency attainment is cloud performance. If you're actively selling or deploying Office 365, or delivering solutions based on Azure, you're halfway there! See if you're already eligible:

To sweeten the deal, the silver Cloud Performance Competency fee has been waived for the first year (offer available until June 30, 2015), whilst the gold fee has been reduced to $5,610 per year ($AUD, inc GST).

Grandfathering opportunities for Cloud Accelerate, Cloud Deployment and Azure Circle partners

Are you an existing Cloud Accelerate, Cloud Deployment or Azure Circle partner? If so, and you meet the required performance threshold as of September 29, you will be grandfathered into the relevant competency at the silver level and will have 12 months to meet the remaining requirements.

  • Cloud Accelerate and Cloud Deployment partners:
    • With 150 seats of Office 365 sold and at least 10 new customers within the previous 12 months will be grandfathered into the Silver Small and Midmarket Cloud Solutions competency
    • With 500 seats of Exchange Online deployed (assigned) within the previous 12 months will be grandfathered into the Silver Cloud Productivity competency
  • Azure Circle partners:
    • That demonstrate US$25,000+ Microsoft Azure customer consumption and/or Azure partner EA consumption within the previous 12 months will be grandfathered into the Silver Cloud Platform competency

    Next steps




    Handy resources for partners

    Project Siena Beta 3: Enabling business users to create custom apps to transform business processes

    MSDN Blogs - Sun, 09/28/2014 - 21:52

    [Original post] Project Siena Beta 3: Enabling business users to create custom apps to transform business processes

    [Originally published] 2014-07-14 5:45 AM

    We are delighted to announce that Beta 3 of Project Siena is now available. This new release makes it even easier for business experts, business analysts, and other app-building enthusiasts to create custom mobile apps. These powerful apps can connect to corporate services, SaaS offerings, and data from mainstream web and social services.

    Since we released Project Siena to the public, we have seen it embraced and widely used by our enterprise customers and solution partners. We have watched business people build powerful, custom, mobile-first applications, often within hours and without any programming. We have also seen mobile apps transform business processes by drawing on back-end services and signal-rich, media-rich SaaS, delivering experiences that were simply unimaginable on earlier mobile devices.

    Most importantly, though, we have seen the beginning of a shift in how people turn ideas into apps.

    Transforming business processes

    Toro, a leading global provider of innovative turf and landscape maintenance equipment, is a great example. Four weeks before the NASCAR race at the Talladega track, a marketing manager and a web services manager discovered Project Siena. They then conceived and built a compelling app that let attendees explore the Toro product catalog, complete with content and dynamics tailored to the race.

    Another example is Persistent Systems, a global company specializing in software products and technology services. Their business analysts and business architects used Project Siena to change the "app discovery" model they follow with customers: ideas are now turned into high-fidelity functional prototypes during the conversation itself, so validating and refining those ideas happens on the spot.

    Aditi Technologies is a global technology solutions partner focused on cloud services, IT outsourcing, and digital marketing. They have put Project Siena to work in their presales engagements. Because Project Siena lets them create apps in near real time, Aditi can adapt to changes on site, which shortens their sales cycle and builds stronger strategic relationships with customers, especially business decision makers and process owners.

    Siena Beta 3

    The Beta 3 release gives business experts and analysts even more reasons to use Project Siena. Its major new capabilities include:

    • One-click read/write connections to Yammer, Facebook, Twitter, Instagram, YouTube, Coursera, Bing Search, and Bing Translator.
    • Connections to a wider variety of RESTful services, with support for OAuth1 and OAuth2.
    • An ecosystem of WADL accelerators for quickly creating connections to RESTful services, coming in about four weeks.
    • Write-back to SharePoint lists.
    • Data visualization through interactive charts.
    • International language support across the entire user interface, including formulas and functions, coming in about four weeks.
    • Improvements across the board, including more controls for interactivity and design.

    In Beta 2 we introduced the concept, shown below, that non-programmers can use services as easily as they use Excel functions. Composing multiple services is as easy as chaining two Excel functions together.

    Beta 3 takes the ability to interact with services to another level: bringing a new service into play is now as simple as adding a PowerPoint slide template to your deck.

    Summary

    Project Siena Beta 3 is an important step in our mission to put the true power of mobile and the cloud into the hands of the most innovative business minds.

    Install the latest version of Project Siena from the Windows Store, learn more at http://microsoft.com/ProjectSiena, watch the video tutorials, download a sample app for inspiration, and then bring your own ideas to life in a Siena app.

    Thank you!

    How does BizTalk detect whether a host instance is "dead"?

    MSDN Blogs - Sun, 09/28/2014 - 21:08

    How does BizTalk detect whether a host instance is "dead"?

     

    This work is carried out jointly by the BizTalk host instance processes on the front end and the BizTalk SQL Server Agent job MessageBox_DeadProcesses_Cleanup_BizTalkMsgBoxDb on the back end.

    When a BizTalk host instance starts, it first calls the stored procedure bts_ProcessHeartbeat_<HostName>, like this:

    exec [dbo].[bts_ProcessHeartbeat_<HostName>] @uidProcessID=NULL,@dwCommand=1,@nHeartbeatInterval=60

    Here @dwCommand=1 indicates process startup.

    When a BizTalk host instance shuts down, it likewise calls the stored procedure bts_ProcessHeartbeat_<HostName>, like this:

    exec [dbo].[bts_ProcessHeartbeat_<HostName>] @uidProcessID=NULL,@dwCommand=2,@nHeartbeatInterval=60

    Here @dwCommand=2 indicates process shutdown.

     

     

    Looking at the code of bts_ProcessHeartbeat_<HostName>, you can see that when the parameter is dwCommand=1 (process startup) or dwCommand=2 (process shutdown), it calls another stored procedure, int_ProcessCleanup_<HostName>. That procedure is used internally to remove the records associated with a host instance process and to release the messages and service instances held by that process.

    Calling int_ProcessCleanup_<HostName> at shutdown is easy to understand: it releases the messages and service instances held by the process before the host instance stops, so that they can be picked up again when the instance restarts, or by other running host instances in a multi-instance BizTalk environment.

    If the held messages and service instances are released by int_ProcessCleanup_<HostName> at shutdown, why does the same stored procedure get called again at startup? For an instance that shut down normally, the call is indeed redundant, but a process may be terminated unexpectedly or crash, and so never get the chance to call the stored procedure before exiting. Killing the service process directly causes exactly this kind of abnormal shutdown: if a host instance is stopped with "kill /f", bts_ProcessHeartbeat_<HostName> is never called with dwCommand=2. Because there is no guarantee that the internal cleanup procedure ran at shutdown, calling it at startup is necessary, even though it is sometimes redundant.

    The next question: if a host instance process hangs, "dies", or crashes and fails to restart, how can that be detected so that the resources and service instance locks it holds are released? Let's keep looking.

    Behind every running BizTalk host instance process, you can see a thread like this:

    0:016> kc
    ntdll!NtWaitForSingleObject
    kernel32!WaitForSingleObjectEx
    kernel32!WaitForSingleObject
    BTSMessageAgent!CAdminCacheRefresh::OnCall
    BTSMessageAgent!CThreadPoolWrapper::ThreadWorker
    ntdll!RtlpWorkerCallout
    ntdll!RtlpExecuteWorkerRequest
    ntdll!RtlpApcCallout
    ntdll!RtlpWorkerThread
    kernel32!BaseThreadStart

     

    By default, this thread calls the stored procedure bts_ProcessHeartbeat_<HostName> every 60 seconds, passing @dwCommand=0. You can change this interval by modifying the value stored in the ConfigurationCacheRefreshInterval column of the adm_Group table in the BizTalkMgmtDb database, or through the BizTalk group settings in the administration client.

     

    exec
    [dbo].[bts_ProcessHeartbeat_Newhost] @uidProcessID=NULL,@dwCommand=0,@nHeartbeatInterval=60

     

    Here @dwCommand=0 indicates that the process is running.

     

    Looking at the code of bts_ProcessHeartbeat_<HostName>, we find that when dwCommand=0 (process running), the internal stored procedure int_ProcessCleanup_<HostName> is not called. bts_ProcessHeartbeat_<HostName> only updates the host instance's record in the ProcessHeartbeats table, inserting a new record if one does not exist.

     

    Let's look at the structure of the ProcessHeartbeats table:

    uidProcessID                           nvcApplicationName        dtCreationTime          dtLastHeartbeatTime     dtNextHeartbeatTime
    4d50992e-2ae1-4bbd-9d61-b712b052b99c   BizTalkServerApplication  11/25/2009 4:52:44 AM   11/25/2009 4:52:44 AM   11/25/2009 5:02:44 AM
     

    uidProcessID is the host instance's unique GUID. You can find it in the UniqueId column of the adm_HostInstance table in BizTalkMgmtDb, or by looking at the "Path to Executable" property of the host instance service in the Windows Service Control Manager.

    For example, here is the "Path to Executable" value for "BizTalkServerApplication" in my test environment; the ID we want follows the "-btsapp" option:

     "C:\Program Files (x86)\Microsoft BizTalk
    Server 2006\BTSNTSvc.exe" -group "BizTalk Group" -name
    "BizTalkServerApplication" -btsapp "{4D50992E-2AE1-4BBD-9D61-B712B052B99C}"

     

    You will notice that dtNextHeartbeatTime equals dtLastHeartbeatTime + 10 * HeartbeatInterval. Since the default HeartbeatInterval is 60 seconds (how to change it is described above), dtNextHeartbeatTime defaults to dtLastHeartbeatTime plus 10 minutes. If HeartbeatInterval were changed to 30 seconds, dtNextHeartbeatTime would be dtLastHeartbeatTime plus 5 minutes.

    Under normal conditions, every host instance in the ProcessHeartbeats table has its record updated every [HeartbeatInterval] seconds by its running host instance process calling the stored procedure bts_ProcessHeartbeat_<HostName>.

    The SQL Server Agent job MessageBox_DeadProcesses_Cleanup_BizTalkMsgBoxDb is scheduled to run the stored procedure bts_CleanupDeadProcesses once a minute.

    bts_CleanupDeadProcesses queries the ProcessHeartbeats table and checks whether any host instance's dtNextHeartbeatTime has already passed, that is, is earlier than the current time. If it has, at least 10 heartbeats from that host instance have been missed, so the instance can be considered "dead", and the job calls that instance's int_ProcessCleanup_<HostName> stored procedure to clean up the related records and release the messages and service instances it was holding.
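To make the timing rule concrete, here is a minimal C sketch of the liveness check described above (the function name and parameters are illustrative, not BizTalk's actual implementation):

```c
#include <stdbool.h>
#include <time.h>

/* An instance is considered dead once the current time has passed
 * dtNextHeartbeatTime = dtLastHeartbeatTime + 10 * HeartbeatInterval,
 * i.e. at least 10 consecutive heartbeats have been missed. */
bool is_dead(time_t last_heartbeat, int interval_seconds, time_t now)
{
    time_t next_heartbeat = last_heartbeat + 10 * interval_seconds;
    return now > next_heartbeat;
}
```

With the default 60-second interval, an instance whose last heartbeat is more than 10 minutes old is treated as dead.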

     

    If the SQL Server Agent job is disabled or runs into problems, BizTalk loses the ability to detect "dead" host instances.

     

     

    Best regards,

    Old Zhu

    Performance Quiz #14: Memory Locality, x64 vs. x86, Alignment, and Density

    MSDN Blogs - Sun, 09/28/2014 - 20:26

     

     It's been a very long time since I did a performance quiz and so it's only right that this one covers a lot of ground.  Before I take even one step forward I want you to know that I will be basing my conclusions on:

    1. A lot of personal experience
    2. A micro-benchmark that I made to illustrate it

    Nobody should be confused: it would be possible to get other results, especially because this is a micro-benchmark. However, these results line up pretty nicely with my own experience, so I'm happy to report them. Clearly the weight of the "other code" in your application would significantly change these results, and yet they illustrate some important points, as well as point out a mystery...  But I'm getting ahead of myself discussing the answers.... First, the questions:

     

    Q1: Is x64 code really slower than x86 code if you compile basically the same program and don't change anything just like you said all those years ago? (wow, what a loaded question)

    Q2: Does unaligned pointer access really make a lot of difference?

    Q3: Is it important to have your data mostly sequential or not so much?

    Q4: If x64 really is slower, how much of it relates to bigger pointers?

     

    OK kiddies... my answers to these questions are below.... but if you want to make your own guesses then stop reading now... and maybe write some code to try out a few theories.

     

     

     

     

     

    Keep scrolling...

     

     

     

     

     

     

     

     

     

     

     

     

     

     

    Are you ready?

     

     

     

     

    OK.

    To answer these questions I wrote a benchmarking program (see link at the bottom of this posting) that creates a data structure and walks it.  The primary thing it does is allocate an array of ever-increasing size and then build a doubly-linked list in it.  Then it walks that list forwards, then backwards.  The time it takes to walk the list is what is measured, not the construction.  The times reported are divided by the number of items, so in each case you see the cost per item.  Each item is of course visited twice, so, if you like, the numbers are scaled by a factor of two.  And the numbers reported are in nanoseconds.

    To make things more interesting, I also shuffle the items in the list so that they are not in their original order.  This adds some randomness to the memory access order.  To shuffle the data I simply exchange two randomly chosen slots a certain percentage of times, starting from 0% and then growing quickly to 100%.  At 100% shuffling the number of exchanges is equal to the number of items in the list; that's pretty thoroughly mixed.
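A minimal C sketch of that shuffling step (the names and details are mine, not the benchmark's actual code): for a given percentage, exchange two randomly chosen slots that many times, so 100% means as many exchanges as there are items.

```c
#include <stdlib.h>

/* Exchange two randomly chosen slots (n * percent / 100) times.
 * At 100%, the number of exchanges equals the number of items. */
void shuffle_percent(int *slots, int n, int percent)
{
    long long exchanges = (long long)n * percent / 100;
    for (long long i = 0; i < exchanges; i++) {
        int a = rand() % n;
        int b = rand() % n;
        int tmp = slots[a];
        slots[a] = slots[b];
        slots[b] = tmp;
    }
}
```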

    And as another datapoint, I run the same exact code (always compiled for maximum speed) on x64 and then on x86, on the exact same machine: my own home desktop, a hexacore high-end workstation.

    And then some additional test cases.  I do all that four different ways.  First with regular next and prev pointers and an int as the payload.  Then I add a bogus byte just to make the alignment totally horrible (by the way, it would be interesting to try this on another architecture where misalignment hurts more than it does on Intel, but I don't happen to have such a machine handy).  Then, to try to make things better, I add a little padding so that things still line up pretty well, and we see how that looks.  And finally I avoid all the pointer growth by using fixed-size array indices instead of pointers, so that the structure stays the same size when recompiled.

    And without further ado, here are the results.  I've put some notes inline. 

     


    Pointer implementation with no changes
    sizeof(int*)=4  sizeof(T)=12

    shuffle        0%     1%    10%    25%    50%   100%
    1000         1.99   1.99   1.99   1.99   1.99   1.99
    2000         1.99   1.85   1.99   1.99   1.99   1.99
    4000         1.99   2.28   2.77   2.92   3.06   3.34
    8000         1.96   2.03   2.49   3.27   4.05   4.59
    16000        1.97   2.04   2.67   3.57   4.57   5.16
    32000        1.97   2.18   3.74   5.93   8.76  10.64
    64000        1.99   2.24   3.99   5.99   6.78   7.35
    128000       2.01   2.13   3.64   4.44   4.72   4.80
    256000       1.98   2.27   3.14   3.35   3.30   3.31
    512000       2.06   2.21   2.93   2.74   2.90   2.99
    1024000      2.27   3.02   2.92   2.97   2.95   3.02
    2048000      2.45   2.91   3.00   3.10   3.09   3.10
    4096000      2.56   2.84   2.83   2.83   2.84   2.85
    8192000      2.54   2.68   2.69   2.69   2.69   2.68
    16384000     2.55   2.62   2.63   2.61   2.62   2.62
    32768000     2.54   2.58   2.58   2.58   2.59   2.60
    65536000     2.55   2.56   2.58   2.57   2.56   2.56

    Average      2.20   2.38   2.86   3.27   3.62   3.86
    Overall      3.03

     

    This is the baseline measurement.  You can see the structure is a nice round 12 bytes and it will align well on x86.  Looking at the first column, with no shuffling, as expected things get worse and worse as the array gets bigger until finally the cache isn't helping much and you have about the worst you're going to get, which is about 2.55ns on average per item.

    The results for shuffling are not exactly what I expected.  At small sizes, it makes no difference.  I expected this because basically the entire table is staying hot in the cache and so locality isn't mattering.  Then as the table grows you see that shuffling has a big impact at about 32000 elements.  That's 384k of data.  Likely because we've blown past a 256k limit.

    Now the bizarre thing is this: after this the cost of shuffling actually goes down, to the point that later on it hardly matters at all.  Now I can understand that at some point shuffled or not shuffled really should make no difference because the array is so huge that runtime is largely gated by memory bandwidth regardless of order.  However... there are points in the middle where the cost of non-locality is actually much worse than it will be at the endgame.

    What I expected to see was that shuffling caused us to reach maximum badness sooner and stay there.  What actually happens is that at middle sizes non-locality seems to cause things to go very very bad...  And I do not know why :)

    But other than that one anomaly things are going pretty much as expected.

    Now let's look at the exact same thing, only it's now on x64

     

    Pointer implementation with no changes

    sizeof(int*)=8  sizeof(T)=20

    shuffle        0%     1%    10%    25%    50%   100%
    1000         2.28   2.28   2.28   1.99   2.28   1.99
    2000         2.28   2.28   2.56   2.99   3.13   3.27
    4000         2.28   2.35   3.06   3.91   4.84   5.26
    8000         2.28   2.38   3.27   4.48   5.90   6.15
    16000        2.36   2.63   4.12   6.28   8.53  10.20
    32000        2.36   2.68   5.30   9.24  13.40  16.76
    64000        2.25   2.90   5.50   8.28  10.36  10.62
    128000       2.42   2.92   4.86   6.31   6.49   6.34
    256000       2.42   2.74   4.25   4.52   4.43   4.61
    512000       2.75   3.86   4.31   4.42   4.56   4.48
    1024000      3.56   4.82   5.42   5.42   5.28   5.21
    2048000      3.72   4.36   4.64   4.64   4.66   4.67
    4096000      3.79   4.23   4.20   4.23   4.20   4.23
    8192000      3.77   3.99   3.98   4.00   3.99   3.99
    16384000     3.75   3.88   3.87   3.87   3.89   3.89
    32768000     3.78   3.86   3.83   3.80   3.81   3.83
    65536000     3.74   3.80   3.79   3.81   3.83   3.82

    Average      2.93   3.29   4.07   4.83   5.50   5.84
    Overall      4.41
    X64/X86      1.46

     

    Well would you look at that... the increased data size has caused us to go quite a bit slower.  The average ratio shows that execution time is 1.46 times longer.  This result is only slightly larger than typical in my experience when analyzing data processing in pointer rich structures.

    Note that it doesn't just get bad at the end; it's bad all along.  There are a few weird data points, but this isn't an absolutely controlled experiment.  For instance, the 1.99 result for 1000 items isn't really indicating that it was better with more shuffling.  The execution times are so small that timer granularity is a factor, and I saw it switching between 1.99 and 2.28.  Things get a lot more stable as n increases.

    Now let's look what happens when the data is unaligned.

     

    Pointer implementation with bogus byte in it to force unalignment

    sizeof(int*)=4  sizeof(T)=13

    shuffle        0%     1%    10%    25%    50%   100%
    1000         1.99   1.99   1.99   1.99   2.28   1.99
    2000         2.13   2.13   2.13   2.13   2.13   2.13
    4000         2.13   2.13   2.49   3.06   3.70   3.91
    8000         2.10   2.17   2.88   3.88   4.76   5.33
    16000        2.10   2.20   3.08   4.21   5.40   6.17
    32000        2.17   2.39   4.21   6.92  10.10  12.83
    64000        2.16   2.46   4.50   6.74   8.18   8.62
    128000       2.14   2.45   4.13   5.19   5.40   5.41
    256000       2.14   2.41   3.61   3.78   3.77   3.77
    512000       2.18   2.51   2.97   3.12   3.16   3.11
    1024000      2.45   3.12   3.44   3.43   3.46   3.54
    2048000      2.76   3.30   3.36   3.35   3.37   3.36
    4096000      2.75   3.08   3.05   3.04   3.07   3.05
    8192000      2.75   2.90   2.88   2.90   2.90   2.90
    16384000     2.75   2.82   2.82   2.82   2.82   2.82
    32768000     2.74   2.78   2.77   2.79   2.77   2.78
    65536000     2.74   2.76   2.75   2.75   2.76   2.76

    Average      2.36   2.56   3.12   3.65   4.12   4.38
    Overall      3.37

     

    This data does show that things got somewhat slower.  But data size also grew by about 8%.  In fact, if you look at the first column and compare the bottom rows, you'll find that amortized execution at the limit grew by 7.4%, basically the same as the data growth.  On the other hand, the changes due to shuffling were greater, so the overall index grew by 8.3%.  But I think we can support the conclusion that most of the growth had to do with the fact that we read more memory, and only a small amount of it had to do with any extra instruction cost.

    Is the picture different on x64?

     

    Pointer implementation with bogus byte in it to force unalignment

    sizeof(int*)=8  sizeof(T)=21

    shuffle        0%     1%    10%    25%    50%   100%
    1000         2.28   2.28   2.28   2.28   2.28   2.28
    2000         2.42   2.42   2.84   3.27   3.70   3.84
    4000         2.42   2.49   3.34   4.48   5.55   6.12
    8000         2.56   2.52   3.70   5.23   6.40   7.15
    16000        2.61   2.81   4.85   7.36   9.96  12.02
    32000        2.53   2.86   5.80  10.18  15.25  18.65
    64000        2.53   2.94   5.88   9.14  11.33  11.64
    128000       2.53   2.94   5.41   7.11   7.09   7.09
    256000       2.57   3.09   5.14   4.96   5.07   4.98
    512000       3.21   3.58   5.29   5.05   5.14   5.03
    1024000      3.74   5.03   5.94   5.79   5.75   5.94
    2048000      4.01   4.84   4.96   4.93   4.92   4.96
    4096000      4.00   4.47   4.49   4.46   4.46   4.46
    8192000      3.99   4.21   4.21   4.21   4.06   4.21
    16384000     3.97   4.08   4.08   4.07   4.08   4.08
    32768000     3.96   4.02   4.02   4.03   4.03   4.03
    65536000     3.96   3.99   4.00   3.99   4.00   3.99

    Average      3.13   3.45   4.48   5.33   6.06   6.50
    Overall      4.83
    X64/X86      1.43

     

    The overall ratio was 1.43 vs. the previous ratio of 1.46.  That means the extra byte did not disproportionately affect the x64 build either, and in this case the pointers are really crazily unaligned.  The same shuffling weirdness happens as before.

    Unaligned pointers don't seem to be costing us much.

    What about if we do another control, increasing the size and realigning the pointers.

     

    Pointer implementation with extra padding to ensure alignment

    sizeof(int*)=4  sizeof(T)=16

    shuffle        0%     1%    10%    25%    50%   100%
    1000         1.99   1.99   1.99   1.71   1.99   1.99
    2000         1.99   1.99   2.13   2.13   2.13   2.13
    4000         2.28   1.99   2.49   3.34   3.70   4.05
    8000         1.99   2.06   2.74   3.66   4.59   5.08
    16000        2.04   2.26   3.16   4.18   5.32   6.06
    32000        2.04   2.35   4.44   7.43  10.92  14.20
    64000        2.04   2.38   4.60   7.03   8.74   9.11
    128000       2.03   2.37   4.24   5.42   5.58   5.59
    256000       2.05   2.36   3.66   3.84   3.83   4.07
    512000       2.22   2.59   3.15   3.37   3.10   3.39
    1024000      2.76   3.81   4.10   4.09   4.26   4.18
    2048000      3.03   3.66   3.83   3.82   3.78   3.78
    4096000      3.04   3.42   3.40   3.43   3.41   3.42
    8192000      3.06   3.23   3.24   3.23   3.24   3.24
    16384000     3.05   3.15   3.14   3.14   3.13   3.14
    32768000     3.05   3.10   3.10   3.09   3.10   3.09
    65536000     3.07   3.08   3.07   3.08   3.07   3.08

    Average      2.45   2.69   3.32   3.88   4.35   4.68
    Overall      3.56

     

    Well, in this result we converge at about 3.07, and our original code was at 2.55.  Re-aligning the pointers certainly did not help the situation: we're actually about 20% worse than the original number and 12% worse than the unaligned version.

    And let's look at x64...

     

    Pointer implementation with extra padding to ensure alignment

    sizeof(int*)=8  sizeof(T)=24

    shuffle        0%     1%    10%    25%    50%   100%
    1000         1.99   1.99   1.99   1.99   1.99   1.99
    2000         2.13   2.28   2.70   2.99   3.70   3.70
    4000         2.20   2.28   2.99   3.84   4.55   4.84
    8000         2.42   2.38   3.34   4.37   4.98   5.37
    16000        2.45   2.68   4.55   7.04   9.71  11.88
    32000        2.46   2.80   5.43   9.25  13.48  17.16
    64000        2.42   2.80   5.46   8.46  10.37  10.70
    128000       2.40   2.80   5.00   6.43   6.55   6.56
    256000       2.51   3.18   4.92   5.34   5.00   4.89
    512000       3.90   4.70   5.97   6.50   5.63   5.59
    1024000      4.15   5.24   6.34   6.28   6.24   6.33
    2048000      4.32   5.13   5.28   5.33   5.34   5.27
    4096000      4.32   4.78   4.77   4.81   4.78   4.79
    8192000      4.29   4.55   4.55   4.56   4.55   4.54
    16384000     4.28   4.42   4.42   4.43   4.42   4.42
    32768000     4.30   4.36   4.37   4.37   4.38   4.37
    65536000     4.23   4.38   4.35   4.34   4.34   4.33

    Average      3.22   3.57   4.50   5.31   5.88   6.28
    Overall      4.79
    X64/X86      1.35

     

    Now with the extra padding we have 8-byte-aligned pointers; that should be good, right?  Well, no, it's worse.  The top end is now about 4.3 nanoseconds per item, compared with about 4 nanoseconds before, or about 7.5% worse for having used more space.  We didn't pay the full 14% of data growth, so there are some alignment savings, but not nearly enough to pay for the space.  This is pretty typical.

     

    And last but not least, this final implementation uses indices for storage instead of pointers.  How does that fare?

     

    Standard index based implementation

     

    sizeof(int*)=4  sizeof(T)=12

    shuffle        0%     1%    10%    25%    50%   100%
    1000         3.41   3.70   3.41   3.41   3.41   4.27
    2000         3.41   3.56   3.41   3.41   3.41   3.98
    4000         3.41   3.48   3.63   3.98   4.41   4.62
    8000         3.41   3.59   3.88   4.62   5.23   5.76
    16000        3.43   3.48   4.02   4.80   5.76   6.31
    32000        3.50   3.64   5.10   7.20   9.80  11.99
    64000        3.48   3.74   5.41   7.26   8.52   8.88
    128000       3.49   3.72   5.10   5.98   6.17   6.18
    256000       3.48   3.70   4.66   4.82   4.83   4.82
    512000       3.52   3.72   4.13   4.24   4.14   4.30
    1024000      3.57   4.25   4.60   4.59   4.46   4.43
    2048000      3.79   4.23   4.37   4.35   4.36   4.34
    4096000      3.77   4.05   4.06   4.06   4.06   4.07
    8192000      3.77   3.91   3.93   3.92   3.91   3.93
    16384000     3.78   3.84   3.83   3.83   3.84   3.84
    32768000     3.78   3.80   3.80   3.80   3.80   3.79
    65536000     3.77   3.78   3.78   3.78   3.80   3.78

    Average      3.57   3.78   4.18   4.59   4.94   5.25
    Overall      4.39

     

    Well, clearly the overhead of computing the base plus offset is a dead loss on x86 because there is no space savings for those indexes.  They are the same size as a pointer so messing with them is pure overhead.

    However... let's look at this test case on x64...

     

    Standard index based implementation

    sizeof(int*)=8  sizeof(T)=12

    shuffle        0%     1%    10%    25%    50%   100%
    1000         3.41   3.41   3.41   3.98   3.41   3.41
    2000         3.41   3.41   3.70   3.41   3.41   3.41
    4000         3.41   3.48   3.63   3.98   4.34   4.76
    8000         3.45   3.45   3.84   4.48   5.33   5.69
    16000        3.48   3.57   3.98   4.78   5.71   6.28
    32000        3.48   3.64   5.11   7.16   9.69  11.99
    64000        3.48   3.73   5.37   7.20   8.47   8.84
    128000       3.48   3.72   5.10   5.96   6.25   6.14
    256000       3.49   3.69   4.66   4.83   4.82   4.88
    512000       3.52   3.72   4.22   4.22   4.22   4.24
    1024000      3.59   4.01   4.31   4.53   4.45   4.40
    2048000      3.80   4.27   4.33   4.25   4.35   4.38
    4096000      3.80   3.97   4.06   4.06   4.07   4.06
    8192000      3.79   3.92   3.92   3.93   3.93   3.91
    16384000     3.77   3.84   3.83   3.82   3.85   3.85
    32768000     3.76   3.81   3.81   3.80   3.80   3.81
    65536000     3.76   3.78   3.78   3.79   3.78   3.78

    Average      3.58   3.73   4.18   4.60   4.93   5.17
    Overall      4.37
    X64/X86      1.00

     

    And now we reach our final conclusion... At 3.76, the top end is in a dead heat with the x86 implementation; the raw x64 penalty in this case is basically zip.  This benchmark actually tops out at about the same cost per slot as the original pointer version while using quite a bit less space (40% space savings).  Sadly, the index manipulation eats up a lot of that savings, so in the biggest cases we only come out about 6% ahead.

     

     

    Now of course it's possible to create a benchmark that makes these numbers pretty much whatever you want them to be by simply manipulating how much pointer math there is, vs. how much reading, vs. how much "actual work".   

    And of course I'm discounting all the other benefits you get from running on x64 entirely, this is just a memory cost example, so take this all with a grain of salt.  If there's a lesson here it's that you shouldn't assume things will be automatically faster with more bits and bigger registers, or even more registers.

    The source code I used to create this output is available here

     

    *The "Average" statistic is the average of the column above it

    *The "Overall" statistic is the average of all the reported nanosecond numbers.

    *The x64/x86 ratio is simply the ratio of the two "Overall" numbers

    Announcing the 1.0.0-rc1 of Microsoft Azure WebJobs SDK

    MSDN Blogs - Sun, 09/28/2014 - 19:50

    This post is a translation of Announcing the 1.0.0-rc1 of Microsoft Azure WebJobs SDK, published on September 22.

    We have released a new version of the Microsoft Azure WebJobs SDK preview. Scott Hanselman introduced the WebJobs SDK here. For details on earlier versions, see the previous announcement posts.

    This release has the same core functionality as 0.6.0-beta, with bug fixes added.

    Downloading this release

    The WebJobs SDK is available from the NuGet gallery. Use the NuGet Package Manager Console to install or update the package:

    Install-Package Microsoft.Azure.WebJobs –Pre

    If you want to use Microsoft Azure Service Bus triggers, install the following package:

    Install-Package Microsoft.Azure.WebJobs.ServiceBus -Pre

    What is the WebJobs SDK?

    WebJobs in Microsoft Azure Web Sites make it easy to run programs such as services and background tasks on a web site. You can upload executables such as .exe, .cmd, and .bat files to a web site and run them, and a WebJob can run on a trigger or run continuously. Connecting and running background tasks without the WebJobs SDK would take a large amount of complex code; with the framework the SDK provides, common tasks can be accomplished with a minimum of code.

    The WebJobs SDK binding and trigger systems work with Service Bus as well as with blobs, queues, and tables in the Microsoft Azure Storage service. The binding system makes it easy to write code that reads and writes Microsoft Azure Storage objects. The trigger system calls the functions in your code whenever a queue or blob receives new data.

    WebJobs SDK scenarios

    Here are some typical scenarios that the Azure WebJobs SDK makes much easier to handle:

    • CPU-intensive work, such as image processing.
    • Long-running tasks executed on a background thread, such as sending email. This used to be impossible in ASP.NET because IIS recycles an application that has been idle for a certain period. With the introduction of AlwaysOn in Azure Web Sites, however, a site can be kept from going to sleep even when idle, so long-running tasks and services can now run as WebJobs using the WebJobs SDK.
    • Queue processing. A web front end commonly communicates with a back-end service through a queue; this is the classic producer-consumer pattern.
    • RSS aggregation. If you own a site that maintains a list of RSS feeds, a background process can fetch all the articles from those feeds.
    • File maintenance, such as aggregating or cleaning up log files.
    • Ingress processing: CSV readers, log parsing, storing data in tables, and so on.

    Goals of the SDK

    • Provide an easy way to do background processing with Azure Storage.
    • Make it easy for applications to use Azure Storage. With the SDK, you don't need to write the code that reads data from and writes data to storage.
    • Provide rich diagnostics and monitoring that work without writing any diagnostics or logging code.

    What's updated in this preview

    The only changes in this preview are bug fixes; the feature set is unchanged from the previous release. The following are the important behavioral changes introduced in this release.

    A FileAccess parameter is now required when binding a blob to a Stream

    Starting with this release, binding to a Stream using a Blob attribute requires a FileAccess parameter:

    public static void BindingToBlob(
    [BlobTrigger("container/input")]  Stream input,
    [Blob("container/output", FileAccess.Write)]  out Stream output
    )
    {

    }

    Change to the default value of MaxPollingInterval

    This setting specifies the maximum time to wait before checking a queue for messages.

    The default used to be 10 minutes; it is now 1 minute. This change is intended to make default queue processing more responsive.

    Service Bus triggers do not fire for messages created from a String rather than a Stream

    A function will not be triggered when a Service Bus message is created as follows:

    var triggerMessage = new BrokeredMessage("Text");

    Minimum JSON.NET version updated to 6.0.4

    Binding to the QueueTrigger or BlobTrigger message

    When a function is triggered by a queue or blob, the SDK lets you bind to the queue message or the blob path. The following code shows how to bind to queueTrigger or blobTrigger to access the message, and then use that message in another binding.

    The code shows a typical cleanup function that deletes blobs whose processing has finished. In the BlobToQueueForCleanup function, blobTrigger holds the blob path (the SDK populates blobTrigger with the full path of the blob). When the function completes, the blob path is sent to a queue named "deleteblob".

    When a new message is sent to "deleteblob", the Cleanup function is triggered and binds to queueTrigger (the SDK populates queueTrigger with the message content). It then binds to the blob itself and deletes it.

    public static void BlobToQueueForCleanup(
        [BlobTrigger("input/{name}")] TextReader reader, string blobTrigger,
        [Queue("deleteblob")] out string blobToDelete)
    {
        blobToDelete = blobTrigger;
    }

    public static void Cleanup(
        [QueueTrigger("deleteblob")] string blobToDelete, string queueTrigger,
        ICloudBlob blob)
    {
        blob.Delete();
    }

    Binding to a blob's name and extension

    When a function accesses a blob's name and extension as shown below, the name is everything up to the extension (including any subdirectories in the path).

    For example, if the blob path is "input/foo.bar.baz.txt", the name is "foo.bar.baz" and the extension is "txt". If the blob path is "input/foo/bar/baz.txt", the name is "foo/bar/baz" and the extension is "txt".

    public static void BlobBinding(
                [BlobTrigger("input/{name}.{extension}")] TextReader input, string name,string extension,
                [Blob("output/{name}_old.{extension}",FileAccess.Write)] TextWriter writer
            )
            {
                writer.Write(input.ReadToEnd());
            }

    Binding to entities that don't exist

    Starting with this release, binding to an entity that does not exist no longer throws; a null value is passed instead. For example, in the following code, reader is passed as null when the blob does not exist:

    public static void BlobBinding(
    [QueueTrigger("blobname")] string input, string queueTrigger,
    [Blob("input/{queueTrigger}", FileAccess.Read)] Stream reader
    )
    {
    // reader is null
    }

    Other examples of this same behavior:

    • Null references (Stream, TextReader, String) are always passed.
    • A null Stream is passed to ICloudBlobStreamObjectBinder when binding objects.
    • References are passed for the SDK types (CloudBlockBlob, CloudPageBlob, CloudTable, CloudQueue).
    • Tables, queues, and blob containers are created if they do not exist.
    • IQueryable returns an empty Queryable.

    Note: the following sections give an overview of the features available in this release.

    SDK features

    Triggers

    A function runs when new input is detected on a queue or blob.

    Bindings

    The SDK supports binding, performing model binding between C# primitive types and Azure Storage: blobs, tables, queues, and Service Bus. This makes reading and writing data in blobs, tables, and queues simple, without developers having to learn the code for reading from and writing to Azure Storage.

    • Convenience: choose the type that is easiest to work with, and the WebJobs SDK acts as the glue code. If you are doing string operations on a blob, you can bind directly to TextReader and TextWriter without worrying about how to convert to them.
    • Flush and close: the WebJobs SDK automatically flushes and closes outstanding outputs.
    • Unit testability: because the SDK lets you mock primitive types such as TextWriter rather than ICloudBlob, your code is unit testable.
    • Diagnostics: model binding surfaces real-time diagnostics of parameter usage in the dashboard.

    Bindings to Stream, TextReader/TextWriter, and String are currently supported, and support for binding to custom types and other Storage SDK types can be added.

    Azure Queues

    With the SDK, you can trigger a function when a new message is sent to a queue. You can easily access the message content by binding to String, POCO (plain old CLR object), byte[], or the Azure Storage SDK types. Here are some of the other key features available for queues.

    For details, see the 0.5.0-beta, 0.4.0-beta, and 0.3.0-beta announcement posts.

    • Trigger a function and bind the message content to String, POCO (plain old CLR object), byte[], or CloudQueueMessage.
    • Send one or more messages to a queue.
    • Parallel queue processing: within a single QueueTrigger, multiple queue messages are fetched in parallel. When a function is listening on a queue, a batch of 16 queue messages (the default) is retrieved for that queue in parallel, and the function executes in parallel as well.
    • Poison message handling for Azure queues.
    • Access to the queue message's DequeueCount property.
    • Improved polling logic for Azure queues: to reduce the impact that polling idle queues has on storage transaction costs, the SDK implements a random exponential backoff algorithm.
    • Fast-path notifications: when messages are sent to multiple queues through the SDK, the SDK fast-tracks them. In 0.3.0-beta, polling ran at roughly 2-second intervals, so if an app chained 20 functions (one function writes to a queue, which triggers the next function to write to a queue, which triggers the next, and so on), processing the 20 messages took about 40 seconds. With this change, it now takes about 8 seconds.
    • Queue polling configuration options: several options are available for configuring queue polling behavior.
      • MaxPollingInterval specifies the maximum time to wait before checking for messages when a queue stays empty. The default is 1 minute.
      • MaxDequeueCount specifies when a queue message is moved to the poison queue. The default is 5.
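The randomized exponential backoff mentioned above can be sketched like this in C (a hypothetical model, not the SDK's actual algorithm): each empty poll roughly doubles the wait, with jitter, capped at MaxPollingInterval.

```c
#include <stdlib.h>

/* Return the next polling delay after an empty poll: somewhere between the
 * current delay and double it, never exceeding max_ms.  A successful poll
 * would reset the delay to its minimum. */
int next_delay_ms(int current_ms, int max_ms)
{
    int doubled = current_ms * 2;
    if (doubled > max_ms) doubled = max_ms;
    /* jitter: pick uniformly in [current_ms, doubled] */
    return current_ms + rand() % (doubled - current_ms + 1);
}
```

The jitter keeps many idle listeners from polling storage in lockstep, which is what makes this cheaper than fixed-interval polling for transaction costs.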
    Azure Blobs

    With the SDK, you can trigger a function when a new blob is detected or an existing blob is updated. You can access the blob content by binding to Stream, String, POCO (plain old CLR object), byte[], TextReader, TextWriter, or the Azure Storage SDK types.

    For details, see the 0.5.0-beta, 0.4.0-beta, and 0.3.0-beta announcement posts.

    • BlobTrigger fires only when a new blob is detected or an existing blob is updated.
    • Blob retries and error handling: if an error occurs while processing a blob, the function is retried. BlobTrigger retries until a specified limit is reached (5 by default). Once the function has run 5 times and the threshold is reached, a message is sent to a queue named "webjobs-blobtrigger-poison". You can trigger a function on that queue with a QueueTrigger and perform custom error handling on the message.
    Azure Storage Tables

    With the SDK, you can bind to a table and perform read, write, update, and delete operations.

    For details, see the 0.6.0-beta, 0.5.0-beta, 0.4.0-beta, and 0.3.0-beta announcement posts.

    Ingress processing is a common scenario: parsing a file stored in a blob (with a CSV reader, for example) and storing the values in a table. In such cases, the ingress function may write a large number of rows, sometimes millions.

    The WebJobs SDK makes this capability easy to implement and adds real-time monitoring of things such as the number of rows written to the table, so you can track the ingress function's progress.

    The following function shows how to write 100,000 rows to Azure Storage Tables.

    public static class Program
    {
        static void Main()
        {
            JobHost host = new JobHost();
            host.Call(typeof(Program).GetMethod("Ingress"));
        }
        [NoAutomaticTrigger]
        public static void Ingress([Table("Ingress")] ICollector<Person> tableBinding)
        {
    // Use a loop to simulate ingesting a large number of rows.
    // Replace this with your own logic for reading from Blob storage
    // and writing to Azure tables.
            for (int i = 0; i < 100000; i++)
            {
                tableBinding.Add(
                new Person()
                { PartitionKey = "Foo", RowKey = i.ToString(), Name = "Name" }
                );
            }
        }
    }
    public class Person
    {
        public string PartitionKey { get; set; }
        public string RowKey { get; set; }
        public string Name { get; set; }
    }

    Azure Service Bus

    As with Azure queues, you can use the SDK to trigger a function when a new message is sent to a Service Bus queue or topic. You can easily access the message content by binding to String, POCO (plain old CLR object), byte[], or BrokeredMessage.

    For details, see the 0.3.0-beta announcement post.

    General

    Here are some of the SDK's other useful features:

    - Async support: async functions are supported.

    - CancellationToken: a function can take a CancellationToken parameter to receive cancellation requests from the host.

    - NameResolver: an extensibility layer in the SDK lets you specify where a queue or blob name comes from. For example, you can use it to read a queue name from a configuration file; see this sample.

    - WebJob shutdown notification: the graceful shutdown notification feature signals a WebJob when it is about to stop. The SDK conveys this notification to your functions through a CancellationToken, supporting graceful shutdown of the WebJob. The following function receives a cancellation request through its CancellationToken when the WebJob stops.

    public static void UseCancellationToken(
        [QueueTrigger("inputqueue")] string inputText,
        TextWriter log,
        CancellationToken token)
    {
    // A long-running function can be cancelled
          while (!token.IsCancellationRequested)
          {
              Thread.Sleep(2000);
              log.WriteLine("Not cancelled");
          }
          log.WriteLine("cancelled");
    }

    Dashboard for monitoring WebJobs

    While WebJobs are running, you get real-time monitoring for WebJobs of any type written in any language. You can check each WebJob's status (Running, Stopped, Successfully completed), when it last ran, and its execution logs. The following screen shows all the WebJobs running on your Websites.

    When you build a WebJob with this SDK, you can diagnose and monitor the functions in your program. Consider a WebJob named "ImageResizeAndWaterMark" that performs image processing. The flow is as follows.

    First, when a user uploads an image to a blob container named "images-input", the Resize function is triggered. Resize processes the image and writes it to the "images2-output" container. That write triggers the WaterMark function, which takes the resized image and writes it to a blob container named "images3-output". The following code shows the WebJob described above:

        public class ImageProcessing
        {
            public static void Resize(
                [BlobTrigger(@"images-input/{name}")] WebImage input,
                [Blob(@"images2-output/{name}")] out WebImage output)
            {
                var width = 80;
                var height = 80;
                output = input.Resize(width, height);
            }

            public static void WaterMark(
                [BlobTrigger(@"images2-output/{name}")] WebImage input,
                [Blob(@"image3-output/{name}")] out WebImage output)
            {
                output = input.AddTextWatermark("WebJobs", fontSize: 6);
            }
        }

        public class WebImageBinder : ICloudBlobStreamBinder<WebImage>
        {
            public Task<WebImage> ReadFromStreamAsync(Stream input, System.Threading.CancellationToken cancellationToken)
            {
                return Task.FromResult<WebImage>(new WebImage(input));
            }

            public Task WriteToStreamAsync(WebImage value, Stream output, System.Threading.CancellationToken cancellationToken)
            {
                var bytes = value.GetBytes();
                return output.WriteAsync(bytes, 0, bytes.Length);
            }
        }

    When the WebJob is running in Azure, open the Microsoft Azure Websites portal, go to the [WEBJOBS] tab, and click the logs link for "ImageResizeAndWaterMark" to open the WebJob dashboard.

    The dashboard is a SiteExtension, so you can reach it at https://<your site>.scm.azurewebsites.net/azurejobs. Accessing a SiteExtension requires your deployment credentials. For details on accessing SiteExtensions, read the Kudu project documentation:
    https://github.com/projectkudu/kudu/wiki/Accessing-the-kudu-service (英語)

    Function execution details

    When monitoring the "ImageResizeAndWaterMark" WebJob, you can see details such as the following about the function invocations in your program:

    • The types of the function's parameters
    • How long the function took to run
    • How long reads from the blob took, and how many bytes were read and written

    Invoke and replay

    In the example above, if the WaterMark function misbehaves for some reason, you can upload a new image or replay the Resize function; that triggers the execution chain and runs WaterMark again on the resized output. This is handy for diagnosing and debugging problems when functions are chained together in complex ways. You can also invoke functions directly from the dashboard.

    Function causality

    In the example above, when the Resize function writes to the blob, it triggers the WaterMark function. The dashboard shows this causality between functions. The causality graph is useful when you have many interdependent functions, each triggered whenever new input is detected.

    Blob search

    Click [Search Blobs] and search for a blob to see information about what happened to it. For example, in ImageResizeAndWaterMark, the blob is written when the WaterMark function runs. For more on searching blobs, see this post (English).

    Samples

    For WebJobs SDK samples, see:
    https://github.com/Azure/azure-webjobs-sdk-samples (English)

    • Examples of using triggers and bindings with blobs, tables, queues, and Service Bus.
    • A sample called PhluffyShuffy: an image-processing web site where a user uploads a photo, which triggers a function that processes the image from blob storage.

    Tutorial: Getting started with the Azure WebJobs SDK (English)

    Follow this tutorial to get started with the WebJobs SDK.

    Related resources

    Deploying WebJobs to Azure Websites with the SDK

    Visual Studio 2013 Update 3 and the Azure SDK 2.4 include Visual Studio tooling for publishing WebJobs to Azure Websites. For details, see How to deploy Azure WebJobs to Azure Websites (English).

    Known issues when migrating from 0.6.0-beta to 1.0.0-rc1

    A FileAccess parameter is required when binding to a blob Stream

    Starting with this release, you must specify a FileAccess parameter when binding to a Stream with the Blob attribute:

    public static void BindingToBlob(
        [BlobTrigger("container/input")] Stream input,
        [Blob("container/output", FileAccess.Write)] Stream output)
    {
    }

    Binding to Azure Storage SDK types (CloudBlockBlob, CloudPageBlob, ICloudBlob, and so on)

    The default FileAccess is ReadWrite, not Read. No other option can be selected.

    Feedback and help

    The WebJobs feature of Microsoft Azure Websites (English) and the Microsoft Azure WebJobs SDK are currently in preview. We look forward to your feedback on how to improve them.

    For questions not directly related to the tutorial, post to the Azure forums, the ASP.NET forums (English), or StackOverflow.com (English). On Twitter, use the hashtag #AzureWebJobs; on StackOverflow, use the tag "azure-webjobssdk".

    Campus Days is probably more like the big international conferences

    MSDN Blogs - Sun, 09/28/2014 - 16:10

    IT consultant Mikkel Andreasen of Miracle A/S enjoyed the Danish Developers Conference, but he also sees Campus Days as an "incredibly exciting" event, and probably the closest thing to TechEd on Danish soil.

    – The technical content of a conference is everything. But it is at least as important that the conference provides a meaningful setting for building and maintaining contact with developers at my own level. That is not just pleasant and instructive, which it certainly is, but also an incredibly useful network to be part of. For example when we need to find new colleagues with a very specific specialty, says Mikkel Andreasen.

    As an IT consultant at Miracle A/S he spends most of his working day on .NET development for backend and integration, while his spare time goes to, among other things, developing apps for Windows Phone and Windows 8.

    No longer quite so pinstriped

    Mikkel Andreasen therefore attended the now-defunct DDC (Danish Developer Conference) a couple of times, in Horsens and in Aarhus, where he works. Two years ago he attended Campus Days in Copenhagen for the first time, even though the conference's reputation leaned more toward the pinstriped crowd than toward developers.

    – But in 2012 Campus Days had a dedicated developer track for the first time, and it was a good experience with many good talks. The Campus Days concept also has the advantage that, unlike DDC, it runs over several days. That leaves more time both for networking and for going deep into the development-specific topics I care about, says Mikkel Andreasen.

    – At the same time, already in 2012 I found the conference far bigger than DDC, with more space and a range of exhibitor stands. Overall, the event felt much closer to the big international conferences such as TechEd, he says.

    "This is probably where you get a clear bearing on where Microsoft is heading."

    Mikkel Andreasen also plans to attend Campus Days this year, where the content has been expanded to three separate developer tracks running across all three days.

    – Campus Days looks incredibly exciting this year, with many interesting talks. There is little doubt that this is the place to be if you want a clear bearing on where Microsoft is heading, and thus on where it pays to invest your energy as a developer, says Mikkel Andreasen, whose bucket list includes talks on Azure DocumentDB, Azure Search, and what's new in C#.

    – To be honest, it is a bit sad that DDC is gone, because it was nice to have a conference in Jutland. But Campus Days is far bigger and has real potential. So when Microsoft also returns, as promised, with more activities outside Copenhagen, I don't think I will mind all that much after all, he says.

    Campus Days 2014 takes place on 25-27 November at CinemaxX at Fisketorvet in Copenhagen. Find all the details, the programme, and practical information at campusdays.dk.

    My personal Azure FAQ on Azure Networking SLAs, bandwidth, latency, performance, SLB, DNS, DMZ, VNET, IPv6 and much more

    MSDN Blogs - Sun, 09/28/2014 - 05:57

    In the last three years I have worked extensively on Azure IaaS, and during engagements with my partners I have been asked many questions about Azure networking in general, and more specifically about Virtual Networks (VNETs) and VMs. Providing adequate answers is not always easy: sometimes the documentation is unclear or short on details, while at other times you can work the answer out yourself, though that requires pretty good knowledge. Since I noticed some recurring patterns in these questions, I decided to write my personal FAQ list in this blog post. Just to be clear: even though I work for Microsoft and have inside knowledge of Azure, I am not going to reveal any reserved or secret information. Everything you read here can be found by experimenting directly with Azure or retrieved from official public documentation. If you have additional "nice" questions with non-trivial or hard-to-find answers, feel free to leave a comment below this post. Additionally, if you have proposals or feedback regarding new features that Azure networking should have, please use the link below to submit your ideas or vote for existing ones:

    FEEDBACK FORUM: Networking (DNS, Traffic Manager, VPN, VNET)

    http://feedback.azure.com/forums/217313-networking-dns-traffic-manager-vpn-vnet/category/77469-virtual-networks-vnet

    As usual, you can also follow me on Twitter at @igorpag. Regards.

     

    Does Azure provide SLA on VM network bandwidth?

    The answer is NO. Today Azure officially provides neither a minimum guaranteed network bandwidth nor a maximum cap. You may have found on the Internet some old tables and information about the maximum network bandwidth of various VM sizes; please don't trust them, since they are outdated and the Azure infrastructure has evolved. That information was valid in the past when Microsoft published those limits, but it has been retired because in some very specific situations the limits were not actually honored. You can still use that information to get a general idea, but you will need to wait for a new release of this piece of documentation.

     

    Does Azure provide SLA on VM-to-VM network latency?

    The answer is NO. To be honest, to my knowledge at least, no cloud vendor can guarantee a maximum network latency between two VMs backed by a formal SLA. It is easy to test yourself and get a good understanding, but you need to be careful with your test criteria. First, test at various hours of the day, covering peak and off-peak hours, and on various days of the week, covering weekends and working days; then calculate the average and take a percentile sample near the 95th to eliminate spikes and the occasional strange value. Additionally, don't assume latencies will be the same in all Azure datacenters. If you want to run your own tests, I recommend the PSPING tool from the SYSINTERNALS suite, which you can find here: http://technet.microsoft.com/en-us/sysinternals/bb896649.aspx. What is really nice about this tool is that it does not use ICMP, so it has no problem traversing firewalls and load balancers; you can choose which TCP port to use and, very importantly, you can test bandwidth as well as latency. You can read full details on using this tool to test Azure latency in one of my previous blog posts:

    Azure Network Latency & SQL Server Optimization

    http://blogs.msdn.com/b/igorpag/archive/2013/12/15/azure-network-latency-test-and-sql-server-optimization.aspx
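    The aggregation step just described (average plus a percentile cut near the 95th to discard spikes) can be sketched as follows; the sample values are hypothetical round-trip times in milliseconds, not real measurements:

    ```python
    # Summarize latency samples as described above: mean plus a high
    # percentile (~95th) so occasional spikes do not skew the picture.
    def summarize_latency(samples_ms, percentile=95):
        ordered = sorted(samples_ms)
        # Nearest-rank percentile: smallest value covering `percentile`% of samples.
        rank = max(1, round(percentile / 100 * len(ordered)))
        return {
            "mean_ms": sum(ordered) / len(ordered),
            "p95_ms": ordered[rank - 1],
        }

    # Hypothetical psping-style round-trip times collected over several runs.
    samples = [1.2, 1.3, 1.1, 1.4, 1.2, 9.8, 1.3, 1.2, 1.5, 1.3]
    stats = summarize_latency(samples)
    print(stats)
    ```

    Collect one such list per hour/day bucket and compare the summaries, rather than trusting a single run.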

    Another complicating factor is where the two VMs are placed inside the Azure datacenter: are they in the same Azure cluster or in different clusters? If they are in different clusters, latency is expected to be slightly higher because of an extra hop. Again, there is no official documentation and, even more interesting, you have no way to know which Azure cluster your VMs are located in, short of opening a case with Microsoft Support and asking. But there is an easy way to co-locate VMs in the same Azure cluster; you can read about it in a later section.

     

    Does Azure datacenter-to-datacenter traffic go through the public Internet?

    This is a difficult question. Microsoft owns its own dark fiber, but there is no guarantee that, for example, traffic between VNETs or VMs in different datacenters will use these private fibers.

    Is it possible to white-list Azure public IP addresses?

    YES, all Azure public IP ranges are published at the link below. If you want to white-list them on your on-premises network for security reasons, you can do so selectively by region, but not by service.

    Microsoft Azure Datacenter IP Ranges

    http://www.microsoft.com/en-us/download/details.aspx?id=41653
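    Once you have the published ranges, white-listing is just a subnet membership check per region. The sketch below uses Python's ipaddress module; the CIDR blocks are made up for illustration and merely stand in for the per-region ranges in the downloadable file:

    ```python
    import ipaddress

    # Hypothetical per-region CIDR blocks standing in for the ranges in the
    # "Microsoft Azure Datacenter IP Ranges" download.
    REGION_RANGES = {
        "europewest": ["191.233.64.0/19", "23.97.96.0/20"],
        "useast": ["23.96.0.0/17"],
    }

    def is_allowed(ip, regions):
        """Return True when `ip` falls inside any white-listed region range."""
        addr = ipaddress.ip_address(ip)
        return any(
            addr in ipaddress.ip_network(cidr)
            for region in regions
            for cidr in REGION_RANGES[region]
        )

    print(is_allowed("23.96.12.34", ["useast"]))  # inside 23.96.0.0/17
    print(is_allowed("8.8.8.8", ["useast"]))      # not in the listed ranges
    ```

    Remember to refresh the ranges regularly, since the published file changes over time.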

    If you rely on specific Azure resources, for example an Azure SQL Database instance or a blob storage account, be aware that the IP you see when resolving their DNS names may change without notice. IMPORTANT: don't rely on any geo-location service for the Internet IPs used by Azure, since they may report incorrect information; for more details see the blog post below:

    Microsoft Azure’s use of non-US IPv4 address space in US regions

    http://azure.microsoft.com/blog/2014/06/11/windows-azures-use-of-non-us-ipv4-address-space-in-us-regions 

    If you have created your own Cloud Service and want it to be verifiable through reverse DNS, you can enable PTR record registration (by Azure) for your Cloud Service VIP, as described at the link below:

    Announcing: Reverse DNS for Azure Cloud Services

    http://azure.microsoft.com/blog/2014/07/21/announcing-reverse-dns-for-azure-cloud-services

     

    Is it possible to co-locate VMs in order to reduce network latency?

    YES, it’s possible but the procedure has been changed with the recent introduction of “Regional Virtual Network” as described in the blog post below:

    Regional Virtual Networks

    http://azure.microsoft.com/blog/2014/05/14/regional-virtual-networks

    Essentially, you no longer need to create an Affinity Group (AG) bound to a Virtual Network (VNET); it is sufficient to specify the AG when creating the VMs, and you will obtain co-location in the same Azure cluster even if you are using a "Regional Virtual Network". Please note that every new VNET will be a "Regional Virtual Network" by default, and you can no longer create a "Local Virtual Network" in the Azure Portal. Additionally, existing VNETs will be migrated to "Regional Virtual Networks" automatically, with no user intervention required. At this point it is not clear to me whether the AG mechanism will be deprecated in the future, but today it is still available in the way I just described.

     

    What is “*.internal.cloudapp.net” DNS prefix in Azure VM?

    If you build an Azure VM, join it to an Azure Virtual Network, and try to ping it using the VM host name (the legacy NetBIOS name, to be clear), this is what you will see:

    As you can see in the picture, there are two strange parts in the FQDN: there is nothing odd about "cloudapp.net", but what are "internal" and "a3"? You can easily guess that "internal" is the default DNS sub-zone that Azure internal DNS (iDNS) uses to host records for VMs and resolve them to the internal VM IP (DIP). "a3" is a little more complex: in short, it describes the network zone, inside the specific Azure datacenter, where the VM is allocated. There is no official documentation or list of these zones, so don't ask for more, but note that the zone changes not only between Azure datacenters but also within a single datacenter.

     

    Is there any latency overhead in connecting VMs in different Cloud Services?

    The answer is NO, or at least it is negligible. In the recent past I worked with several partners who were concerned about placing the application VMs in one Cloud Service and the backend VMs (SQL Server) in a different Cloud Service. This is the typical scenario if you want to use the SQL Server AlwaysOn Availability Group (AG) mechanism. Even though the application VM connects to the backend VM through the Azure Load Balancer (SLB), the network latency overhead is really minimal thanks to an interesting network optimization in Azure. During the initial TCP connection handshake between the two VMs, Azure recognizes that the communication is between two internal resources and allows direct communication, just as with a network connection using DIPs inside the same Cloud Service. You can read more about measurements I made in the past in my blog post below:

    Azure Network Latency & SQL Server Optimization

    http://blogs.msdn.com/b/igorpag/archive/2013/12/15/azure-network-latency-test-and-sql-server-optimization.aspx

     

    Do the Ping and Tracert tools work inside an Azure Virtual Network?

    YES, they work perfectly inside a VNET, and even from on-premises to a VNET over a VPN connection. They will not work if you try to cross the Azure Load Balancer (SLB).

     

    Does the Azure Load Balancer (SLB) use a round-robin policy to distribute incoming connections?

    NO. This is a very common misunderstanding of how the SLB works. The Azure SLB is actually a Layer 4 software load balancer that uses a 5-tuple (source IP, source port, destination IP, destination port, protocol type) to calculate a hash, which is then used to map traffic to the available servers behind a VIP. The hash function is chosen so that the distribution of connections to servers is fairly random. Additionally, at least today, session affinity is not supported.

    Microsoft Azure Load Balancing Services

    http://azure.microsoft.com/blog/2014/04/08/microsoft-azure-load-balancing-services
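    To illustrate the idea (this is not the actual SLB hash function, which is not public), the sketch below hashes the 5-tuple and maps it onto hypothetical backend DIPs: the same connection always lands on the same server, while different connections spread out roughly evenly:

    ```python
    import hashlib

    def pick_server(src_ip, src_port, dst_ip, dst_port, proto, servers):
        """Map a connection 5-tuple onto one of the servers behind the VIP."""
        key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
        digest = hashlib.sha256(key).digest()
        index = int.from_bytes(digest[:4], "big") % len(servers)
        return servers[index]

    backends = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]  # hypothetical DIPs
    # The same 5-tuple always maps to the same backend ...
    a = pick_server("203.0.113.7", 50123, "191.239.0.1", 80, "tcp", backends)
    b = pick_server("203.0.113.7", 50123, "191.239.0.1", 80, "tcp", backends)
    print(a == b)  # True
    # ... while varying the source port spreads connections across backends.
    hits = {pick_server("203.0.113.7", p, "191.239.0.1", 80, "tcp", backends)
            for p in range(50000, 50100)}
    print(sorted(hits))
    ```

    Note how this also explains the lack of session affinity: a client reconnecting from a new source port presents a different 5-tuple and may land on a different server.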

     

    Does the Azure Load Balancer (SLB) support SSL termination?

    NO. Currently the Azure SLB does not support SSL termination at the edge; you have to handle HTTPS encryption and decryption within each VM or Web/Worker role instance. This is one of the network improvement areas under consideration by the Azure networking team.

    Allow SSL termination at the load balancer

    http://feedback.azure.com/forums/217313-networking-dns-traffic-manager-vpn-vnet/suggestions/4573108-allow-ssl-termination-at-the-load-balancer

     

    How many connections can a VNET-joined VM support?

    For Windows Server, a VM can support about 500k TCP connections, but you need to be careful about other potential limits that may come into play before that threshold: if you expose your VM to Internet traffic through an Azure Load Balancer endpoint, you may be limited by SLB capacity or DDoS protection mechanisms. There is currently no public documentation on SLB or endpoint limits, neither for the number of connections nor for network bandwidth, so you should conduct your own performance and scalability tests to ensure everything works correctly.

     

    Does Azure provide DDoS network protection?

    YES, the Azure infrastructure is designed to protect the network from DDoS attacks originating from the Internet and also, internally, from other tenants' VMs; you can read the details in the white paper below:

                    http://download.microsoft.com/download/4/3/9/43902EC9-410E-4875-8800-0788BE146A3D/Windows%20Azure%20Network%20Security%20Whitepaper%20-%20FINAL.docx

     Please be aware of the following important points:

    • Windows Azure’s DDoS defense system is designed not only to withstand attacks from the outside, but also from within.
    • Windows Azure monitors and detects internally initiated DDoS attacks and removes offending VMs from the network.
    • Windows Azure’s DDoS protection also benefits applications. However, it is still possible for applications to be targeted individually. As a result, customers should actively monitor their Windows Azure applications.

     

    Can I use IPv6 inside an Azure Virtual Network?

    NO, at the moment you cannot use IPv6 for your VMs inside an Azure Virtual Network (VNET); if you try to use it, communication will fail. IPv6 support in general has been confirmed as under development:

    Support IPv6 throughout the Azure Platform

    http://feedback.azure.com/forums/217313-networking-dns-traffic-manager-vpn-vnet/suggestions/4992369-support-ipv6-throughout-the-azure-platform

     

    Are UDP broadcast and multicast supported inside an Azure Virtual Network?

    NO, this type of communication is not allowed inside a VNET, nor across the Azure SLB.

    Support Multicast within Virtual Networks

    http://feedback.azure.com/forums/217313-networking-dns-traffic-manager-vpn-vnet/suggestions/3741233-support-multicast-within-virtual-networks

     

    Is it possible to have multiple NICs on a VM?

    NO, but this feature is under development and should be available soon.

    Multiple Network Interface Cards on VM

    http://feedback.azure.com/forums/217313-networking-dns-traffic-manager-vpn-vnet/suggestions/3144627-multiple-network-interface-cards-on-vm

     

    Is it possible to run network penetration tests against my Azure VMs?

    YES, you can, but it is highly recommended to follow the specific procedure below before running the test; otherwise Azure monitoring and defense systems will kick in and blacklist your connections, IPs, and/or VMs. Download the penetration test questionnaire from http://download.microsoft.com/download/C/A/1/CA1E438E-CE2F-4659-B1C9-CB14917136B3/Penetration%20Test%20Questionnaire.docx, fill in the required information, and then open a support ticket with Azure Customer Support, specifying "Support Type: Billing", "Problem type: Legal and Compliance", and "Category: Request for penetration testing".

     

    What is the guaranteed bandwidth for an Azure VPN?

    There is no minimum guaranteed bandwidth for an Azure VPN; the only SLA provided is for high availability (99.90%), and you can download the related document from the link below. If you search public content on the Internet, you may find several sources reporting various maximum bandwidth values, from 60 to 100 Mbit/s: this matches what I normally obtain in my own tests, but let me emphasize again that there is no guaranteed minimum. The maximum cap seems to be dictated by the size of the VM hosting the Azure side of the VPN software (Small); I have heard rumors about future enhancements, but nothing official yet.

    Microsoft Azure Cloud Services, Virtual Machines, and Virtual Network SLA

    http://www.microsoft.com/en-us/download/details.aspx?id=38427

     

    Does access to Azure Storage count toward the VM network bandwidth cap?

    NO. If your application, whether PaaS or installed inside an IaaS VM, accesses Azure tables, queues, or blobs, that traffic does not count against the VM's maximum network limit. It may seem a tricky question, but since persistent VM storage is networked, customers and partners ask it frequently. Remember that this per-VM-size limit is still pending publication. Additionally, if you use special features such as SQL Server 2014 Azure Blob storage integration or Azure Files (over SMB), that traffic does count toward your VM network bandwidth limits.

    UPDATED: New White-Paper on SQL Server 2014 and Azure Blob storage integration

    http://blogs.msdn.com/b/igorpag/archive/2014/06/22/update-new-white-paper-on-sql-server-2014-and-azure-blob-storage-integration.aspx

    Introducing Microsoft Azure File Service

    http://blogs.msdn.com/b/windowsazurestorage/archive/2014/05/12/introducing-microsoft-azure-file-service.aspx

     

    Can I use A8/A9 VM Infiniband NIC for my application traffic?

    It depends. I have been asked several times whether, for example, the InfiniBand NIC can be used for SQL Server AlwaysOn Availability Group replication traffic, since it provides a very high-bandwidth, low-latency connection and RDMA support. In this specific case the answer is NO: this NIC does not provide general TCP/IP connectivity, and the same applies to any application that cannot talk over the "Network Direct" interface and the MS-MPI protocol.

    New High Performance Capabilities for Windows Azure

    http://blogs.technet.com/b/windowshpc/archive/2014/01/30/new-high-performance-capabilities-for-windows-azure.aspx

     

    Can I create a DMZ in Azure?

    In the on-premises world, the concept of a demilitarized zone (DMZ) is widely used to expose resources to the Internet: security requirements dictate that the DMZ network be isolated from the inner on-premises network where the real services live, so that only the DMZ'ed servers can connect to them and proxy client requests. Today in Azure, the only way to create a real, full-featured DMZ is to use a virtual appliance such as the Barracuda Firewall:

    Microsoft Azure™ — Secured by Barracuda

    https://www.barracuda.com/programs/azure

    There is an alternative solution you can build very easily today. Essentially, you create two Cloud Services: one for the VMs exposed to the Internet, which act as proxies, and another for the internal VMs, which provide the inner services through specific published ports (endpoints). The trick is to use ACLs on the second Cloud Service's VIP and restrict access to the source VIP of the first Cloud Service. Be sure to use a "Reserved IP" for the first Cloud Service, since its VIP may otherwise change in the future and invalidate the ACLs on incoming connections to the second Cloud Service. Additionally, if you want the VMs in these two Cloud Services to be network isolated, do not place them in the same Azure Virtual Network (VNET); otherwise they will have open connectivity among all of them. Why is this not a real DMZ? Because the VIP of the second Cloud Service in my example is still exposed to the Internet, even if only connections from the first Cloud Service are allowed. There are many rumors about upcoming Azure networking features; what I would really like to see soon is the ability to have one front-end VM with multiple NICs, each connected to a different subnet with its own security ACLs: that way we could define subnet-level security and custom routing, isolating traffic inside the same VNET.

     

    Can the Azure ILB be used to augment network security?

    YES, but it is necessary to clarify what the security benefit is, since there is some confusion about how this feature works. Essentially, it can help security because it lets you define one or more load-balanced endpoints, as with the Azure SLB, that are not exposed to the Internet and are therefore more secure. Conversely, it is not an isolation mechanism: it does not segregate resources inside a VNET, and all VMs retain open, full connectivity to all other VMs even when ILB and ACLs are used. For the same reason, ILB cannot be used to build a DMZ.

     

     

    Can I include Azure PaaS services inside a Virtual Network?

    YES. Microsoft recently began adding PaaS platform services to VNETs: this gives you bi-directional access between your IaaS VMs and PaaS services. Today you can add Web Sites and HDInsight HBase clusters to an Azure VNET, with more to come in the future:

    Azure Websites Virtual Network Integration

    http://azure.microsoft.com/blog/2014/09/15/azure-websites-virtual-network-integration

    Provision HBase clusters on Azure Virtual Network

    http://azure.microsoft.com/en-us/documentation/articles/hdinsight-hbase-provision-vnet

     


    Microsoft Azure Media Services supports the RTMP protocol and live encoders

    MSDN Blogs - Sun, 09/28/2014 - 04:18

    Thanks to 劉建昌, a student at National Taipei University of Technology, for translating this article, published on September 18, 2014 by Cenk Dingiloglu of the Microsoft Azure Media Services team: http://azure.microsoft.com/blog/2014/09/18/azure-media-services-rtmp-support-and-live-encoders/

     

    The live streaming feature of Microsoft Azure Media Services recently entered technical preview and is open for public testing. The RTMP protocol used by the live streaming service is one of the ingest protocols supported by Microsoft Azure Media Services, and one of the protocols most commonly used today to capture and deliver multimedia content.

    Microsoft Azure Media Services lets you ingest a stream over the RTMP protocol and use dynamic packaging (Dynamic Packaging) to deliver it in different streaming formats (for example MPEG-DASH, Microsoft Smooth Streaming, Apple HLS, and Adobe HDS). RTMP is widely used for audio and video input and transport, and supporting it lets the Azure Media Services live streaming service take captured video and stream it out to devices and endpoints that consume different media formats, while remaining compatible with traditional players.

    For information on setting up a live channel (Live Channel) and a streaming endpoint in Azure Media Services, see the Microsoft Azure Chinese blog post on live streaming (Live Streaming) with Microsoft Azure Media Services.

    This article focuses on the RTMP ingest capability of Azure Media Services and on how to push multi-bitrate (multi-bitrate) streams at several qualities into an Azure Media Services channel (Channel) in real time over RTMP, using encoders such as Wirecast, Flash Media Live Encoder (FMLE), and FFmpeg.

    Live streaming basics and architecture

    The live streaming architecture consists of three main components: channels/programs (Channel/Program), streaming endpoints and streaming units (Streaming Endpoints), and storage (Storage).

    1. Channel/Program:

    • A channel (Channel) enables the live service and supports two ingest protocols: RTMP and MP4 (Smooth Streaming). A live encoder (Live Encoder) pushes the stream into the channel through an ingest point.
    • A program (Program) is a logical component within a channel. A program publishes the incoming stream and archives it as a VOD (Video On Demand) asset or a live playback window.

    2. Streaming Endpoints and Streaming Units:

    • A streaming endpoint (Streaming Endpoint) provides a URL from which you can retrieve your live stream or VOD (Video On Demand) assets; it also provides dynamic packaging and secure stream delivery.

    3. Storage

    • A program (Program) uses Azure Storage to store the live archive. VOD and the encoder service also rely on storage.

     

    Channels support the RTMP protocol

    An Azure Media Services channel accepts streams pushed over RTMP. It supports both single-bitrate (single bitrate) and multi-bitrate (multi-bitrates) input, but we strongly recommend multi-bitrate input so that each client can consume the bitrate that suits it best. A future release of Azure Media Services will offer a live transcoding service that can turn a single-bitrate input into a multi-bitrate output.

    To use RTMP ingest, the following requirements must be met:

    • An encoder that supports RTMP output
    • An encoder that can output H.264 video and AAC (Advanced Audio Coding) audio
    • GOP (Group of pictures) or key-frame alignment across the different video qualities
    • A key-frame interval of 2 seconds (with special settings you can use intervals of up to 6 seconds; see the advanced configuration described later in this article)
    • A unique name for each stream quality
    • Network connectivity (the required bandwidth is the sum of the video and audio bitrates)
    • CBR (constant bit rate) encoding is recommended to optimize adaptive streaming performance

    This article uses three video output qualities ingested into an Azure Media Services channel. You can use more qualities, but keep in mind that your output is limited by your machine's encoding power and your network bandwidth; if your bandwidth is limited, you may need to reduce the number of qualities and use lower encoding bitrates. When you try to output more qualities, remember that the required network bandwidth is the sum of the bitrates of all the qualities.
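    The bandwidth rule above can be checked with a few lines; the bitrates below are hypothetical, chosen only for illustration, and each quality is assumed to carry its own audio track:

    ```python
    # Hypothetical three-quality H.264 ladder plus AAC audio, in kilobits/second.
    video_kbps = [1000, 600, 300]
    audio_kbps = 96  # per quality

    # Required upload bandwidth is the sum of every stream being pushed.
    required_kbps = sum(v + audio_kbps for v in video_kbps)
    print(required_kbps)  # 2188
    ```

    If your uplink cannot sustain this total, drop a quality or lower the bitrates, as suggested above.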

    Note: whenever you reconfigure the encoder or re-establish the connection between the encoder and the channel, "Reset" the channel.

    Configuration when using Wirecast

    Wirecast is a commercial encoder application that supports the RTMP protocol. It can encode captured live streams in real time. You can download a Wirecast trial and find related information on the Telestream web site; the current version of Wirecast is version 5, and it can be used to test Azure Media Services.

    Input settings

    • Click the "+" button.

    • Select the camera icon; the capture devices currently connected to your computer are listed, and you can choose the one you need.

    • Once you have selected an available capture device, the camera output appears in the capture source. Click the output so that it shows in the "Preview" area of the user interface.

    Output settings

    • On the top toolbar, select "Output" -> "Output Settings".

    • In the "Select an Output Destination" dialog, choose the destination server, in this case RTMP Server.

    The output settings dialog then appears.

    • Name your first output quality level. In this example it is named "Azure Media Services Quality1".
    • Enter a unique stream name (myStream1). If you have several output qualities, each one needs a unique stream name.
    • Create a new encoding preset for your first output quality.
    • In the "Output Setting dialog box", select "New Preset" and enter a name for the new preset (MyQuality1).

    Note:

    1. When creating your own presets, the "Frames per second" and "Key frame every" values must be the same across all output qualities.

    2. All output qualities must use the same audio settings, and "Keyframe Aligned" must be set; otherwise the stream will not work or cannot be ingested into the channel.

    • After the steps above, your output configuration should look like the figure below.

    • Add the other output quality levels: click "Add" and follow the steps above for each new quality.

    Note:

    Again, when creating a new preset, the "Frames per second" and "Key frame every" values must be identical across output qualities. Also make sure "Keyframe Aligned" is set and that every stream has a unique name.

    • The figure below shows Wirecast configured with three output quality levels.

    Start encoding and ingest the stream into the channel

    • Next, click the "->" button to move the "Preview" feed to "Live"; the capture output now appears on screen.

    • Click the "Stream" button in the upper left; a red dot appears in the button to show that you are broadcasting live.

    Previewing the stream

    You can preview, and even publish, your stream from the Azure management portal.

    (For details on previewing and publishing a stream, see the Microsoft Azure Chinese blog post on live streaming (Live Streaming) with Microsoft Azure Media Services.)

    Alternatively, you can use http://amsplayer.azurewebsites.net/ to preview your stream with a choice of players.

    使用 Flash Media Live Encoder 時的設定

    FMLE 為 Adobe 公司發行的一個免費軟體。您可以至 http://www.adobe.com/products/flash-media-encoder.html 下載FMLE以及了解更多相關訊息。

    在預設的情況下,FMLE 支援 MP3 格式的音訊輸出。目前 Azure Media Service 並不提供即時轉碼服務,並且要求必須使用進階音訊編碼 ( AAC ) 以動態封裝串流到多種格式中 ( MPEG-DASH、Smooth Streaming、HLS )。因此,為了要在 Azure Media Service 上使用 Adobe FMLE,您會需使用額外的 ACC 插件 ( plugin ) 。在本篇文章中,我們將會使用一個由 Main Concept 所提供的 FMLE ACC 插件。(您可以從 http://www.mainconcept.com/eu/products/plug-ins/plug-ins-for-adobe/aac-encoder-fmle.html 下載與安裝此一 ACC 插件)

    設定 FMLE

    您需要做的第一件事情是設定讓 FMLE 使用 Network Time Protocol (NTP) 做為 RTMP 協定的時間標籤,請依照下列步驟 :

    1. 關閉您的編碼器

    2. 使用文字編輯器打開 FMLE 的組態檔 (config.xml)。

    1. 若您沒有更改預設的安裝路徑,則您可以在 C:\Program Files\Adobe\Flash Media Live Encoder 3.2\ 找到FMLE的組態檔。
    2. 若 Windows 作業系統為 x64 則是在 C:\Program Files (x86)\Adobe\Flash Media Live Encoder 3.2\ 找到FMLE的組態檔。
    3. 在 MAC OS 作業系統的預設安裝路徑為 HD:Applications:Adobe:Flash Media Live Encoder 3.2

    3. 將 streamsynchronization/enable 設定true,如以下範例 :

    <streamsynchronization>

    <!– “true” to enable this feature, “false” to disable.                

    <enable>true</enable>

    4. 儲存檔案,並且再次打開您的編碼器。

    • 在目前連接的設備清單中,選擇您要的攝影裝備。
    • 在編碼清單中選擇您要的預設編碼,或是自己建立一個預設編碼。

    若您要自己建立一個預設編碼,請參考本文 "通道支援 RTMP 協定" 章節。

    在本篇範例中,將使用

    "Multi Bitrate – 3 streams (1500) Kbps – H.264"

    此選項將採用H.264標準的多位元率編碼,並且輸出三種不同的輸出串流。

    • 在視訊編碼格式的進階設定中,設定 "Key Frame frequency" 為2秒。

    • 設定 "Frame rate" 為 30 fps。

    • 選擇您的音訊輸入裝置。
    • 設定音訊輸出格式為 AAC

    ( 注意 : 在預設的狀況下 ACC 和 HE-AAC 不能使用,因為 FMLE 只能夠輸出 MP3 格式的音訊檔,因此您需要通過外部的插件來讓 FMLE 使用 AAC 編碼 )

    • 設定所需的音訊頻寬,在本篇範例中,將使用 96Kbps 的位元率和 44100 Hz 的取樣率。

    • 在 Stream 欄位中輸入 "stream%i",這項設定可以讓 FMLE 將每一個輸出品質 ( Quality ) 命名為唯一的串流名稱

    下圖為上述完成的設定 

    開始編碼並且將串流資料內嵌到通道中

    • 點擊 "Connect",將編碼器連接到

    • 選取 "start" 開始進行編碼

    注意 : 您也可以使用 FMLE 的指令模式完成上述的操作。

    詳細資料請參考Start Flash Media Live Encoder in command-line mode”

    • 下圖為直播開始的畫面

    預覽串流

    您可以透過 Azure 管理網站來預覽甚至是發布您的串流。關於預覽與發布串流的詳細資訊,請參考 Microsoft Azure中文部落格如何使用 Microsoft Azure Media Services 進行現場直播 ( Live Streaming )

    做為替代方案,您也可以使用 http://amsplayer.azurewebsites.net/ 來選擇不同的播放器來預覽您的串流視訊。

    在 Azure Media Services 上使用 FFmpeg 時的設定

    FFmpeg 是知名的開放原始碼計畫,可以支援多種格式的音訊和視訊輸出。RTMP 也是 FFmpeg 上所支援的一項協定。您可以在FFmpeg 的官網下載與了解更多相關資訊。在這篇文章中,將不會特別介紹和說明 FFmpeg 的指令和其用處,而是使用先前已經撰寫好的指令來將本地端的檔案進行串流,並且模擬一個即時串流。您可以使用 FFmpeg 來截取來自多種不同設備 (包含攝影機、桌面截取等設備) 的輸入的資訊。您可以在 FFmpeg 的官網下載與了解更多相關資訊。

    範例指令

    以下為 FFmpeg 的範例指令

    • 輸出單一位元率( Single Bitrate ) :

    C:\tools\ffmpeg\bin\ffmpeg.exe -v verbose -i MysampleVideo.mp4 -strict -2 -c:a aac -b:a 128k -ar 44100 -r 30 -g 60 -keyint_min 60 -b:v 400000 -c:v libx264 -preset medium -bufsize 400k -maxrate 400k -f flv rtmp://channel001-streamingtest.channel.media.windows.net:1935/live/a9bcd589da4b424099364f7ad5bd4940/mystream1

    • 輸出多種位元率 ( Multi bitrates ) ( 500Kbps,300Kbps,150Kbps ):

    C:\tools\ffmpeg\bin\ffmpeg.exe -threads 15 -re -i MysampleVideo.mp4 -strict experimental -acodec aac -ab 128k -ac 2 -ar 44100 -vcodec libx264 -s svga -b:v 500k -minrate 500k -maxrate 500k -bufsize 500k  -r 30 -g 60 -keyint_min 60 -sc_threshold 0 -f flv rtmp://channel001-streamingtest.channel.media.windows.net:1935/live/a9bcd589da4b424099364f7ad5bd4940/Streams_500 -strict experimental -acodec aac -ab 128k -ac 2 -ar 44100 -vcodec libx264 -s vga -b:v 300k -minrate 300k -maxrate 300k -bufsize 300k -r 30 -g 60 -keyint_min 60 -sc_threshold 0 -f flv rtmp://channel001-streamingtest.channel.media.windows.net:1935/live/a9bcd589da4b424099364f7ad5bd4940/Streams_300 -strict experimental -acodec aac -ab 128k -ac 2 -ar 44100 -vcodec libx264 -s qvga -b:v 150k -minrate 150k -maxrate 150k -bufsize 150k  -r 30 -g 60 -keyint_min 60 -sc_threshold 0 -f flv rtmp://channel001-streamingtest.channel.media.windows.net:1935/live/a9bcd589da4b424099364f7ad5bd4940/Streams_150

    在輸出多種位元率的指令碼中,建立了三種不同的視訊輸出品質 ( 500Kbps,300Kbps,150Kbps ),主要畫面間隔為��秒。並且在將串流輸出到 Azure Media Service 通道 ( Channel ) 中。請參考 Microsoft Azure中文部落格如何使用 Microsoft Azure Media Services 進行現場直播 ( Live Streaming ),裡面有關於通道內嵌URL的詳細介紹。

    預覽串流

    您可以透過 Azure 管理網站來預覽甚至是發布您的串流。關於預覽與發布串流的詳細資訊,請參考Microsoft Azure中文部落格如何使用 Microsoft Azure Media Services 進行現場直播 ( Live Streaming )

    做為替代方案,您也可以使用 http://amsplayer.azurewebsites.net/ 來選擇不同的播放器來預覽您的串流視訊。

    進階設定

    在預設的情況下,Azure Media Service 通道被設定為每兩秒內嵌主畫面資料,並且使用 HLS 輸出 3 對 1 的對映設定 ( 3 to 1 mapping ),這代表著若您每兩秒內嵌一次主畫面資料,則您的 HLS 輸出片段則為六秒 ( 3 * 2 = 6 second )。

    若您想要調整這個內嵌的時間間隔,您必須使用 SDK。因為這種進階設定無法透過 Azure 入口網站做設定。您可以透過 Creating a Live Streaming Application with the Media Services SDK for .NET 來了解更多使用SDK來進行進階設定的資訊

    關於設定的參數,請參閱 :

    ChannelInput/KeyFrameInterval

    ChannelOutput/Hls/FragmentsPerSegment

    總結與未來發展

    本篇文章介紹了如何在 Azure Media Service 使用支援 RTMP 協定的多種編碼器,以及介紹了詳細的設定細節 。

    除了上述的功能之外,您還可以使用 Azure Media Service 完成更多的功能,更多的資訊您可以參考官網 Azure Media Services”,也可以透過 SDK 來完成建立即時串流 Working with Azure Media Services Live Streaming”

    希望您透過上述的文章可以了解如何在 Azure Media Service 上使用 RTMP 協定的編碼器,若是有任何的問題,可以透過官網讓我們知道。

Microsoft Azure Media Services: RTMP Support and Live Encoders

    MSDN Blogs - Sun, 09/28/2014 - 04:18

Thanks to Liu Chien-Chang of National Taipei University of Technology for translating this article, published on September 18, 2014 by Cenk Dingiloglu of Microsoft's Azure Media Services team: http://azure.microsoft.com/blog/2014/09/18/azure-media-services-rtmp-support-and-live-encoders/

     

The live streaming feature of Microsoft Azure Media Services recently entered technical preview and is open for customer testing. RTMP, one of the ingest protocols supported by Microsoft Azure Media Services for live streaming, is widely used in the industry to capture and deliver multimedia content.

Microsoft Azure Media Services can ingest a stream over RTMP and use Dynamic Packaging to deliver it in multiple streaming formats (for example MPEG-DASH, Microsoft Smooth Streaming, Apple HLS, and Adobe HDS). Because RTMP is so widely used for video capture and transmission, RTMP support lets the Azure Media Services live streaming service deliver the ingested video as multiple output streams to devices and endpoints that consume different media formats, while remaining compatible with existing players.

For information on setting up a live channel and streaming endpoint in Azure Media Services, see the Microsoft Azure Chinese blog post "How to Use Microsoft Azure Media Services for Live Streaming".

This article focuses on the RTMP ingest capability of Azure Media Services and shows how to push multi-bitrate live streams at several quality levels into an Azure Media Services channel over RTMP, using encoders such as Wirecast, Flash Media Live Encoder (FMLE), and FFmpeg.

Live Streaming Basics and Architecture

A live streaming deployment consists of three main components: channels/programs, streaming endpoints and streaming units, and storage.

1. Channel/Program:

• A channel receives the live feed and supports two ingest protocols: RTMP and fragmented MP4 (Smooth Streaming). A live encoder pushes the stream into the channel through an ingest point.
• A program is a logical component within a channel. It publishes the incoming stream and archives it, either as a Video On Demand (VOD) asset or as a live playback window.

2. Streaming Endpoint and Streaming Units:

• A streaming endpoint provides a URL from which you can retrieve your live stream or VOD assets. It also provides dynamic packaging and secure stream delivery.

3. Storage

• A program uses Azure Storage to archive the live stream. VOD and the encoding service also rely on storage.

     

RTMP Protocol Support in Channels

An Azure Media Services channel can ingest an RTMP stream pushed by an encoder, in either single-bitrate or multi-bitrate form. We strongly recommend multi-bitrate ingest, so that each client can play the stream best suited to its conditions. A future Azure Media Services release will add a live transcoding service that converts a single-bitrate input into a multi-bitrate output.

To use RTMP ingest, your setup must meet the following requirements:

• An encoder that supports RTMP output
• Video encoded with H.264 and audio encoded with Advanced Audio Coding (AAC)
• Group of pictures (GOP) boundaries, or key frames, aligned across the different video qualities
• A key frame interval of 2 seconds (with special configuration you can use an interval of up to 6 seconds; see the advanced settings section later in this article)
• A unique stream name for each quality level
• Network connectivity (the required bandwidth is the sum of the video and audio bitrates)
• Constant bitrate (CBR) encoding, recommended for the best adaptive streaming performance
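As a quick sanity check on the requirements above, the key frame interval in frames and the required ingest bandwidth follow directly from the encoder settings. The sketch below uses illustrative bitrate values (500/300/150 Kbps video, 96 Kbps audio, drawn from the examples later in this article):

```python
def gop_frames(fps, keyframe_interval_s=2):
    """Key frame interval expressed in frames (e.g. the -g value for x264)."""
    return int(fps * keyframe_interval_s)

def ingest_bandwidth_kbps(video_kbps, audio_kbps):
    """Required upstream bandwidth: the sum of all video and audio bitrates."""
    return sum(video_kbps) + sum(audio_kbps)

# Three illustrative quality levels at 30 fps with a 2-second key frame interval.
print(gop_frames(30))                                    # 60 frames between key frames
print(ingest_bandwidth_kbps([500, 300, 150], [96] * 3))  # 1238 Kbps total
```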

This article ingests three video quality levels into an Azure Media Services channel. You can use more, but keep in mind that the number of qualities is limited by your machine's encoding power and your network bandwidth. If your bandwidth is limited, you may need to reduce the number of quality levels and use lower encoding bitrates. When pushing several qualities, remember that the required bandwidth is the sum of the bitrates of all of them.

Note: whenever you reconfigure the encoder, or re-establish the connection between the encoder and the channel, reset the channel.

Configuring Wirecast

Wirecast is a commercial encoder that supports RTMP and can encode a live feed in real time. You can download a trial version and learn more on the Telestream website. The current release, Wirecast 5, can be used to test Azure Media Services.

Input settings

• Click the "+" button.

• Click the camera icon. The capture devices currently connected to your computer are listed; select the one you want.

• Once a device is selected, you will see its output in the capture source. Click the output to show it in the "Preview" pane of the user interface.

Output settings

• On the top toolbar, select "Output" -> "Output Settings".

• In the "Select an Output Destination" dialog box, choose the destination server; here, select RTMP Server.

The output settings dialog box appears.

• Name your first quality level. This example uses "Azure Media Services Quality1".
• Enter a unique stream name (myStream1). If you have multiple quality levels, each one needs a unique stream name.
• Create a new encoding preset for your first quality level.
• In the output settings dialog box, select "New Preset" and enter a name for the new preset (MyQuality1).

Notes:

1. When creating your own presets, the "Frames per second" and "Key frame every" values must be identical across all quality levels.

2. All quality levels must use the same audio encoding settings, and "Keyframe Aligned" must be enabled; otherwise the stream will not work or cannot be ingested into the channel.

• After the steps above, your output configuration should look like the figure below.

• Add the remaining quality levels: click "Add" and repeat the steps above for each new quality.

Note:

Again, when creating a new preset, keep the "Frames per second" and "Key frame every" values the same across all quality levels. Also make sure "Keyframe Aligned" is enabled and that every stream has a unique name.

• The figure below shows Wirecast configured with three quality levels.

Start encoding and ingest the stream into the channel

• Next, click the "->" button to move the "Preview" feed to "Live". You should now see the capture output on screen.

• Click the "Stream" button at the top left. A red dot appears on the button to indicate that you are now broadcasting.

Preview the stream

You can preview, and even publish, your stream through the Azure management portal.

(For details on previewing and publishing streams, see the Microsoft Azure Chinese blog post "How to Use Microsoft Azure Media Services for Live Streaming".)

Alternatively, you can use http://amsplayer.azurewebsites.net/ to preview your stream with a choice of players.

Configuring Flash Media Live Encoder

FMLE is free software from Adobe. You can download it and learn more at http://www.adobe.com/products/flash-media-encoder.html.

By default, FMLE outputs MP3 audio. Azure Media Services does not currently provide live transcoding, and it requires Advanced Audio Coding (AAC) audio in order to dynamically package the stream into multiple formats (MPEG-DASH, Smooth Streaming, HLS). To use Adobe FMLE with Azure Media Services, you therefore need an additional AAC plugin. This article uses the FMLE AAC plugin from MainConcept, which you can download and install from http://www.mainconcept.com/eu/products/plug-ins/plug-ins-for-adobe/aac-encoder-fmle.html.

Configuring FMLE

The first thing to do is configure FMLE to use Network Time Protocol (NTP) for RTMP timestamps. Follow these steps:

1. Close your encoder.

2. Open the FMLE configuration file (config.xml) in a text editor.

   1. If you did not change the default installation path, the file is at C:\Program Files\Adobe\Flash Media Live Encoder 3.2\.
   2. On x64 Windows, it is at C:\Program Files (x86)\Adobe\Flash Media Live Encoder 3.2\.
   3. On Mac OS, the default installation path is HD:Applications:Adobe:Flash Media Live Encoder 3.2.

3. Set streamsynchronization/enable to true, as in the following example:

   <streamsynchronization>
     <!-- "true" to enable this feature, "false" to disable. -->
     <enable>true</enable>
   </streamsynchronization>

4. Save the file, then reopen your encoder.
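If you prefer, the edit in step 3 can be scripted. The sketch below assumes only that config.xml contains a streamsynchronization/enable element as shown above; the helper function is illustrative, not part of FMLE:

```python
import xml.etree.ElementTree as ET

def enable_stream_synchronization(config_path):
    """Set streamsynchronization/enable to 'true' in FMLE's config.xml."""
    tree = ET.parse(config_path)
    # Look anywhere in the document for the streamsynchronization/enable element.
    node = tree.getroot().find(".//streamsynchronization/enable")
    if node is None:
        raise ValueError("no streamsynchronization/enable element found")
    node.text = "true"
    tree.write(config_path, encoding="utf-8", xml_declaration=True)
```

Run it against the config.xml path for your installation, then restart the encoder as in step 4.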

• From the list of connected devices, select the camera you want.
• From the preset list, select an encoding preset, or create your own.

To create your own preset, refer to the section on channel RTMP support earlier in this article.

This example uses

"Multi Bitrate – 3 streams (1500) Kbps – H.264"

which encodes with H.264 at multiple bitrates and produces three output streams.

• In the advanced settings of the video encoding format, set "Key Frame frequency" to 2 seconds.

• Set "Frame rate" to 30 fps.

• Select your audio input device.
• Set the audio output format to AAC.

(Note: by default, AAC and HE-AAC are unavailable because FMLE can only output MP3 audio; you need an external plugin to let FMLE encode AAC.)

• Set the audio bandwidth you need. This example uses a 96 Kbps bitrate and a 44100 Hz sample rate.

• In the Stream field, enter "stream%i". This makes FMLE give each quality level a unique stream name.
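The "stream%i" placeholder simply numbers the renditions. A sketch of the substitution, assuming 1-based numbering (the helper below is purely illustrative, not FMLE code):

```python
def expand_stream_names(template, quality_count):
    """Expand a %i placeholder into one unique stream name per quality level."""
    return [template.replace("%i", str(i)) for i in range(1, quality_count + 1)]

print(expand_stream_names("stream%i", 3))  # ['stream1', 'stream2', 'stream3']
```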

The figure below shows the completed settings.

Start encoding and ingest the stream into the channel

• Click "Connect" to connect the encoder to the channel.

• Click "Start" to begin encoding.

Note: you can also perform the steps above from FMLE's command-line mode.

For details, see "Start Flash Media Live Encoder in command-line mode".

• The figure below shows the broadcast in progress.

Preview the stream

You can preview, and even publish, your stream through the Azure management portal. For details on previewing and publishing streams, see the Microsoft Azure Chinese blog post "How to Use Microsoft Azure Media Services for Live Streaming".

Alternatively, you can use http://amsplayer.azurewebsites.net/ to preview your stream with a choice of players.

Configuring FFmpeg for Azure Media Services

FFmpeg is a well-known open source project that supports many audio and video output formats, RTMP among them. This article does not cover FFmpeg's commands in detail; instead it uses prewritten commands to stream a local file and simulate a live feed. You can also use FFmpeg to capture input from many kinds of devices, including cameras and desktop capture. You can download FFmpeg and learn more on its official website.

Sample commands

Here are sample FFmpeg commands.

• Single-bitrate output:

    C:\tools\ffmpeg\bin\ffmpeg.exe -v verbose -i MysampleVideo.mp4 -strict -2 -c:a aac -b:a 128k -ar 44100 -r 30 -g 60 -keyint_min 60 -b:v 400000 -c:v libx264 -preset medium -bufsize 400k -maxrate 400k -f flv rtmp://channel001-streamingtest.channel.media.windows.net:1935/live/a9bcd589da4b424099364f7ad5bd4940/mystream1

• Multi-bitrate output (500 Kbps, 300 Kbps, 150 Kbps):

    C:\tools\ffmpeg\bin\ffmpeg.exe -threads 15 -re -i MysampleVideo.mp4 -strict experimental -acodec aac -ab 128k -ac 2 -ar 44100 -vcodec libx264 -s svga -b:v 500k -minrate 500k -maxrate 500k -bufsize 500k  -r 30 -g 60 -keyint_min 60 -sc_threshold 0 -f flv rtmp://channel001-streamingtest.channel.media.windows.net:1935/live/a9bcd589da4b424099364f7ad5bd4940/Streams_500 -strict experimental -acodec aac -ab 128k -ac 2 -ar 44100 -vcodec libx264 -s vga -b:v 300k -minrate 300k -maxrate 300k -bufsize 300k -r 30 -g 60 -keyint_min 60 -sc_threshold 0 -f flv rtmp://channel001-streamingtest.channel.media.windows.net:1935/live/a9bcd589da4b424099364f7ad5bd4940/Streams_300 -strict experimental -acodec aac -ab 128k -ac 2 -ar 44100 -vcodec libx264 -s qvga -b:v 150k -minrate 150k -maxrate 150k -bufsize 150k  -r 30 -g 60 -keyint_min 60 -sc_threshold 0 -f flv rtmp://channel001-streamingtest.channel.media.windows.net:1935/live/a9bcd589da4b424099364f7ad5bd4940/Streams_150

The multi-bitrate command creates three video quality levels (500 Kbps, 300 Kbps, 150 Kbps) with a two-second key frame interval and pushes the streams into the Azure Media Services channel. See the Microsoft Azure Chinese blog post "How to Use Microsoft Azure Media Services for Live Streaming" for details on channel ingest URLs.
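The long multi-bitrate command above just repeats one output block per quality level, so it can be assembled programmatically. The helper below is a hypothetical sketch that reproduces the sample command's structure (same codecs, rate-control flags, and the sample ingest URL from above):

```python
def ffmpeg_multibitrate_cmd(input_file, ingest_url, renditions, fps=30):
    """Build an ffmpeg argument list with one RTMP output per rendition.

    renditions: list of (frame_size, video_kbps, stream_name) tuples.
    """
    gop = fps * 2  # 2-second key frame interval, per the channel requirements
    args = ["ffmpeg", "-threads", "15", "-re", "-i", input_file]
    for size, kbps, name in renditions:
        args += [
            "-strict", "experimental", "-acodec", "aac", "-ab", "128k",
            "-ac", "2", "-ar", "44100",
            "-vcodec", "libx264", "-s", size,
            "-b:v", f"{kbps}k", "-minrate", f"{kbps}k",
            "-maxrate", f"{kbps}k", "-bufsize", f"{kbps}k",
            "-r", str(fps), "-g", str(gop), "-keyint_min", str(gop),
            "-sc_threshold", "0", "-f", "flv", f"{ingest_url}/{name}",
        ]
    return args

cmd = ffmpeg_multibitrate_cmd(
    "MysampleVideo.mp4",
    "rtmp://channel001-streamingtest.channel.media.windows.net:1935"
    "/live/a9bcd589da4b424099364f7ad5bd4940",
    [("svga", 500, "Streams_500"), ("vga", 300, "Streams_300"), ("qvga", 150, "Streams_150")],
)
```

The resulting list can be passed to subprocess.run, which avoids shell-quoting issues with the long command line.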

Preview the stream

You can preview, and even publish, your stream through the Azure management portal. For details on previewing and publishing streams, see the Microsoft Azure Chinese blog post "How to Use Microsoft Azure Media Services for Live Streaming".

Alternatively, you can use http://amsplayer.azurewebsites.net/ to preview your stream with a choice of players.

Advanced settings

By default, an Azure Media Services channel is configured for a two-second key frame ingest interval and uses a 3-to-1 mapping for HLS output. That is, if a key frame is ingested every two seconds, each HLS output segment is six seconds long (3 * 2 = 6 seconds).
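The segment-length arithmetic is simply the fragments-per-segment mapping multiplied by the key frame interval:

```python
def hls_segment_seconds(keyframe_interval_s, fragments_per_segment=3):
    """HLS segment duration: fragments per segment x key frame interval."""
    return fragments_per_segment * keyframe_interval_s

print(hls_segment_seconds(2))     # 6 -- the default: 3-to-1 mapping, 2 s key frames
print(hls_segment_seconds(6, 1))  # 6 -- an illustrative 1-to-1 mapping with 6 s key frames
```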

If you want to change the ingest interval, you must use the SDK; this advanced setting cannot be changed through the Azure portal. See "Creating a Live Streaming Application with the Media Services SDK for .NET" to learn more about configuring advanced settings with the SDK.

The relevant settings are:

ChannelInput/KeyFrameInterval

ChannelOutput/Hls/FragmentsPerSegment

Summary and What's Next

This article showed how to use several RTMP-capable encoders with Azure Media Services, along with the detailed configuration each one requires.

Azure Media Services can do much more than what is covered here. For more information, see the official "Azure Media Services" site, or build live streams with the SDK as described in "Working with Azure Media Services Live Streaming".

We hope this article has shown you how to use RTMP encoders with Azure Media Services. If you have any questions, let us know through the official site.

This article was originally published on the Microsoft Azure Chinese blog.
