You are here

Feed aggregator

Azure App Service appLens – finding the root cause

MSDN Blogs - 1 hour 51 min ago

There is a really cool new Azure App Service Web App tool called AppLens that is explained in more detail here.  Open it by selecting Settings –> AppLens from the App Service you are interested in analyzing, similar to that shown in Figure 1.

Figure 1, AppLens shows this is an application issue not a platform issue

Looking at the output of the report, I see that there was some downtime on 3-JUN-2016 at around 00:30.  I then decided to take a look at the Event Viewer logs accessible via the KUDU console, which I discuss here and here.  What I found is shown in Figure 2.

Figure 2, why my Azure App Service Web App is not working

It turns out that this was because of an old issue with my web app that I discussed here and here.

And the Warning event 2299 logged just after the one shown in Figure 2 further proves the root cause.  The detail for event 2299 is:  Worker Process requested recycle due to ‘Status Code’ limit.

I also took a look into my IIS logs and saw the 500s recorded; after the restart, all was back up and running.  I might actually fix this one day…

How to view the event logs of your Azure App Service

MSDN Blogs - 1 hour 52 min ago

I have mentioned how to download the eventlog.xml file to view the events which are happening on your Azure App Service (Web App, Mobile App, API App, Logic App) here and here. You can also look at the event logs in an event viewer using KUDU; I discussed KUDU here. Keep in mind that if you are running multiple instances, you are viewing the event logs for a single instance only.

To access the event logs using KUDU, after logging in, select Support from the Tools menu as shown in Figure 1.

Figure 1, view event logs in KUDU

Once the Support page renders, click on the Analyze link, then Event Viewer, as shown in Figure 2.

Figure 2, Azure App Service Event Viewer details

How schools use technology effectively in the classroom

MSDN Blogs - 2 hours 21 min ago

The following was originally posted on the Guardian’s Microsoft Partner Zone, and was written by Anthony Salcito.

Teachers share their stories of how Microsoft technologies help promote innovation in teaching and learning in their classroom

Microsoft Innovative Educator Expert (MIEE) is an exclusive programme created to recognise global teacher visionaries who are paving the way for their peers in the effective use of technology for better learning and student outcomes.

MIE Experts work closely with Microsoft to lead innovation in education. They advocate and share their thoughts on the effective use of technology in education with peers and policy makers. They provide insight for Microsoft on new products and tools for education, and they exchange best practices as they work together to promote innovation in teaching and learning.

Taken from Daily Edventures, UK MIEEs share their journey with vice-president of Microsoft Worldwide Education, Anthony Salcito.

“We have the chance to sculpt such a bright future for the next generation.” – Jose Kingsley, UK

Jose Kingsley grew up dreaming of being a teacher. So it’s no surprise that he’s now tirelessly committed not only to ensuring successful outcomes for his young students from diverse cultures, but to helping his fellow teachers discover the value of teaching with technology.

To fulfil those commitments, Kingsley takes full advantage of the resources and opportunities that come with being a MIEE – from participating in the Microsoft E2 Educator Exchange (E2) in Budapest to being an active member of the Microsoft Educator Community (MEC).

“I believe that Microsoft allows us as MIEEs to work through challenges we face, making it personalised,” Kingsley says. “[Attending E2] was almost like being back in school myself. It was on returning to school the week after and drawing up an action plan that I realised just how much I had taken away. I felt like I had enough to write a whole new curriculum, I kid you not.”

That new curriculum had a lot to do with putting the power of learning in the hands of his students – an approach Kingsley brings to life using programs like Skype in the Classroom (Kingsley started the #SkypeMeet series) and Sway.

“Giving children ownership of their learning empowers meaningful learning,” he says. “Using the MEC, as MIEEs we have the opportunity to create a continuing professional development (CPD) programme that entirely suits the learner’s needs. I adopted this attitude within my classroom, giving children the power to make decisions about what they wanted to learn. The motivation and confidence of my children have increased massively and they feel prepared to take risks and learn from mistakes.”

Kingsley’s passion for teaching is palpable, especially when he reflects on his students’ future.

“I teach five- to six-year-old children,” he tells us, “and it makes me immensely proud to see their excitement about using technology within their everyday learning. They often enjoy telling me that if they presented their learning using Office 365, the outcome would be enhanced. Always in the back of my mind is that question: What impact will this have on their future? This is a question that I cannot answer, but one I can prepare them for.”

Read the full interview here.

“OneNote has enabled us to work with different people, different cultures, and just collaborate and share with students what’s going on outside the classroom.” – Lee Whitmarsh, UK

For art educator and new Microsoft Innovative Educator Expert (#MIEExpert) Lee Whitmarsh, technology is a means to an important end: helping his students experience art as fully as possible, and in a global context.

I was excited to chat with Whitmarsh at the recent E2 event, after I’d looked at his project using OneNote to support creative collaboration. (Here’s a great Office Mix detailing the project, “Creating a photography A-level teaching and learning hub.”)

“OneNote,” he says, “has enabled us to work with different people, different cultures, and just collaborate and share with students what’s going on outside the classroom.”

But how does it work in the realm of art education?

“Through photography – it’s such a visual medium, that students are getting to see visual literacy from different areas and [it changes] how all the students see their environment. They’re able to take [what they’ve seen] on board, and then apply it to what they’re doing in their environment.”

And it’s all happening inside OneNote.

“We first set it up and thought about not what the technology can do, but what it needs to do for the students,” Whitmarsh says.

When it came to setting up the collaboration space, his students were thinking big.

“The students – innocently – said, ‘Why do we have to [only] collaborate with each other? Why can’t we go wider?’” says Whitmarsh. “So we just tried it, and we have a wonderful IT department that found a way to actually get students in Seattle (Lynnwood high school) within our OneNote. We do cross-collaboration links, we do peer review, self-review, and we’re going to set up a Skype session to put names with faces.”

Whitmarsh wasn’t at E2 just to share his project; he was also focused on collecting new ideas, some that he might share on his blog.

“It’s been beyond what I thought it would be,” he told me. “It’s been inspirational. To see all the educators from across the world so passionate, so open to ideas, and talking to you about what they’re doing – sharing and listening. It’s just been revolutionary.”

Here’s to revolution, and to teachers like Lee Whitmarsh who are leading the way.

Enjoy the full interview here.

Computational thinking takes hold – even in the youngest of students – Henry Penfold, UK

Kodu, Minecraft and Skype-a-thons: just three of the many ways that MIEE Henry Penfold uses technology with his primary school students. Penfold is an enthusiastic proponent of these technologies in his classroom, and he’s not shy about sharing why.

“Their growth mindset and their attitude towards it is fantastic,” he says. “With Kodu particularly, they go, ‘right, I’m going to solve this,’ because it’s quite manageable. With the blocks, they can drag and really see it visually.”

Penfold is passionate about getting students involved in computer science, and he has seen some big changes after just a few short years of bringing ICT into his classroom.

“Computing is so important in society,” he says. “Children have their tablets at home, so they are constantly revolved around computers. Digital literacy is so important for giving them the tools for later on when they’re learning, and later on in life. It’s amazing how much they use that technology. I see, particularly, the practical side.”

And with computer science now mandated as part of the UK curriculum, Penfold has also noticed a difference in how his students speak on a daily basis. “I hear, ‘Right, I’ve just done some de-bugging … I’m going to tinker with this to make it better.’ It’s almost become its own language,” he notes.

After the recent Skype-a-thon – which was a huge hit with his students – trying new technologies is now de rigueur in Penfold’s classroom.

“My children absolutely love being able to talk with other people,” he says. And while their Skype partners were also in the UK, that didn’t matter. “Just the differences between what Southampton was like and what Wales was like … it’s like a field trip for children that don’t get out as much.”

Next up? “We are hopefully going to Skype someone from Pixar,” says Penfold. “[Skype] makes that thousand miles seem like no distance at all.”

Watch the full interview here.

Turning the tables on technology training in Scotland – Natalie Lochhead, Scotland

It’s not unusual for teachers to train other teachers to leverage – and make the most of – technology. In fact, we know it’s one of the most effective ways to scale knowledge. But Natalie Lochhead has taken the concept to the next level by using the expertise of her young students (primary 4-7) to turn that tradition on its head.

As part of the Digital Leaders programme, a wide network of schools throughout the UK that train teachers and students in ICT, Lochhead’s students teach full-class lessons to fellow students, and support teachers as they work to adopt 1 to 1 approaches in the classroom. All of this is made possible by Glow, Scotland’s national digital learning environment.

“My proudest moment in my career has been watching my digital leaders grow into amazing, confident and skilled young people,” Lochhead tells us. “I know that no matter what they do when they leave school, they will have success.” She goes on to say: “[I love] watching them training their teachers, connecting online with other educators via Glow and generally being awesome. They have made such an impact on our school becoming genuine leaders in technology. I am so proud of every one of them.”

Lochhead’s Digital Leaders are trained each Wednesday at lunch time, and then train other classes and teachers on that particular skill. To date, they’ve taught Office 365, the basics of using a tablet and Kodu. They’ve participated in Scotland’s Kodu Cup and Hour of Code, and have even been featured in the Chamber of Commerce Business Matters Magazine. To keep track of all their activities, Lochhead maintains a class blog.

This impressive educator’s students aren’t the only ones transferring technology knowledge. As a MIEE, Lochhead shares her approach through the MEC and participates in events like last December’s meeting of UK Showcase Schools and MIEEs. She also publishes helpful tutorials, like this one on creating animations in the classroom.

Natalie Lochhead isn’t the first teacher to recognize students’ capacity to teach, but in building a formal program to support this potential, she’s making a big difference.

Read the full interview here.

Want to be a MIEExpert?

Read more about the Microsoft Innovative Educator Expert programme.

Self-nominations for 2016-2017 are now open!

Before you fill out and submit the self-nomination form, you will need to complete two tasks:

  1. Join the Microsoft Educator Community and complete your profile. You will need to submit the URL to your public profile as a part of the nomination process. You can find your URL by going into edit profile and looking under “basic information”.
  2. Create a two-minute Office Mix, video or Sway that answers the following questions in a manner that creatively expresses what makes you a MIEE. To share the Mix/video/Sway in your nomination, you will need to post it somewhere that allows you to create a URL to share it.
  • Why do you consider yourself to be a Microsoft Innovative Educator Expert?
  • Describe how you have incorporated Microsoft technologies in innovative ways in your classroom. Include artifacts that demonstrate your innovation.
  • If you become a MIE-Expert, how do you hope it will impact your current role?
Once you have completed those two tasks, complete the self-nomination form. This form will be open until July 15, 2016.

Productivity mechanics

MSDN Blogs - 2 hours 52 min ago

 Let’s say you run a great team that already has trust, fails and recovers quickly and safely, and sets clear goals, priorities, and limits. (If you don’t, internalize Is it safe? and I can manage.) How do you take a team like that and make it even more productive?

You could search the internet and my blog postings to find many suggestions for increasing productivity, but how do you know which ones will work for your team? What’s meant by productivity anyway? And how does individual productivity relate to team productivity? Can you feel productive and still not produce much of value?

If team productivity were simple, your team would already be maximally productive and you wouldn’t be reading this sentence. However, team productivity is complex. It’s not a sum of individual productivity, which is already complicated, but instead has a boatload of second-order effects on pacing, blockages, incomplete work, and technical debt. To understand team productivity, you need to go back to basics, learn the elements of productivity, and then apply that knowledge to impact your team’s effectiveness. Fortunately for you, I’ve done that in pretty pictures below.

Do you know how fast you were going?

Look up productivity online, and you’ll see many different measures. At Microsoft, we’ve typically focused on a few common ones: cycle time, throughput (aka delivery rate), bug and work-item counts (aka work in progress or WIP), and response time (aka lead time). Even though they measure different elements of productivity, it turns out these measures are related.

Rather than some cold technical definitions, the easiest way to understand the various productivity measures (and their relationships) is to see them in a cumulative flow diagram.

A cumulative flow diagram plots work-item counts (bugs and work) against time, colored by their current state (pending, active, and complete, in this example). On any given day, the work in progress (WIP) is the height of the active portion, the response time is the width of the active portion, and the throughput is the slope of the line connecting them (the slope of completed work over time). Cycle time is the width of the active portion for a specific work item or the reciprocal of throughput, if you want an average over the response time. (In fact, average cycle time can be thought of in many ways, but let’s keep things simple.)

What about me?

Throughput is what many engineers associate with productivity—the faster they get things done, the more productive they feel. WIP is the primary concern of release managers; after all, you can’t ship until all the work is completed and all the bugs are closed. However, response time is how internal and external customers measure the productivity of your team—how quickly can your team respond to a request?

Companies around the world, and internal partner teams around the corner, measure productivity by response time. How quickly can Boeing deliver an ordered airplane or Wendy’s deliver an ordered cheeseburger? How long will your team take to deliver a working feature needed by a team that depends on you? Response time is the productivity measure that matters.

As you can see from the cumulative flow diagram, response time is related to throughput and WIP. Specifically, response time is WIP divided by throughput (a relationship known as Little’s Law). Thus, to improve response time, you need to increase throughput, reduce WIP, or both. (Engineers and release managers are both right.)
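To make those relationships concrete, here’s a quick sketch in Python with invented numbers, reading the measures off cumulative flow data the way the diagram does:

    # Toy cumulative flow data (invented): cumulative counts of items
    # started and items completed, one entry per day. The gap between
    # the two curves is the "active" band of the diagram.
    started   = [5, 9, 14, 18, 22]
    completed = [0, 2,  5,  9, 14]

    day = 3
    wip = started[day] - completed[day]              # height of active band: 9 items
    throughput = (completed[-1] - completed[0]) / 4  # slope of completed curve: 3.5 items/day
    response_time = wip / throughput                 # Little's Law: ~2.6 days
    print(wip, throughput, response_time)

Halve the WIP or double the throughput and the response time halves – which is exactly the lever the rest of this column pulls.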

Eric Aside

As Bill Hanlon reminded me, another way of measuring response time and health of a codebase is to calculate the average age of bugs (and active work items). The result does not precisely equal response time, but it correlates well and is easy to compute. The higher the average age, the longer your response time and the more technical debt you’ve accumulated.

Finishing more

Here’s a cumulative flow diagram with increased throughput. Notice how the increased slope of throughput has shortened the response time, reflecting work that has gotten done sooner.

How do you get increased team throughput?

  • Ensure fast build, test, sync, and code movement. Your inner loop of development is critical to productivity—make it fast. Once you’ve got code built and tested, ensure it moves quickly to where it’s needed by keeping your branch structure shallow. See Cycle time—the soothsayer of productivity for details.
  • Reduce meetings and run them effectively. Nothing kills throughput like overhead and inane, ineffective meetings. Drop meetings, shorten meetings, and run them properly, as described in “The day we met” (chapter 3).
  • Break down work into smaller items. Superficially, this seems superficial. After all, the total work is the same. However, breaking down work gets you started faster and enables you to iterate and adjust faster, which gets you to the goal faster, as I reveal in The value of navigation.
  • Track work items, search, and debug quickly and effectively. Make information and bugs easy to find and work-item tracking quick and painless. More about this and other forms of wasted time in “Lean: more than good Pastrami” (chapter 2).
  • Co-locate the entire feature team and have daily standups. This speeds communication, which is key to team velocity, as I discuss in Collaboration cache.

Looking back at the cumulative flow diagram, notice that even though the throughput is consistently high, the response time is shorter at the beginning of the period than it is on day 12 (where I drew the arrows). That’s because on day 12, there’s more WIP. Having faster throughput isn’t enough for short and consistent response time—you also need to reduce WIP.

Doing less

Here’s a cumulative flow diagram with reduced WIP. Notice that the response time is shorter than in the previous example, even with clearly lower throughput. The response time is also more consistent, which makes your team more reliable and predictable to your customers and partners.

How do you reduce team WIP?

  • Complete features and resolve bugs before doing new features. This is the most fundamental technique for reducing WIP and the basis of feature crews, Kanban, Scrum, and lean software development, as I elaborate in “Lean: more than good Pastrami” (chapter 2).
  • Assign work and documentation (specs) just in time. By holding off on assigning work, you avoid people starting work early or having too much in progress.
  • Proactively manage dependencies. If you’ve started work that depends on another team, and the dependency comes in late or unstable, you’ve got work in progress that’s stalled. You need to negotiate acceptance criteria and scheduling to avoid such situations, along with contingencies for when they occur anyway, as I describe in You can depend on me.
  • Limit bug and technical debt. Carrying around a bunch of unresolved bugs is a recipe for long stabilization periods, slipped schedules, and death marches. Instead, limit your technical debt with bug jail and done definitions, as laid out in The evils of inventory.
  • Iterate based on data. The faster you show your work to customers, the sooner you’ll learn what’s wrong with it and the fewer bugs you’ll carry until the end of the project. Code, share, learn, and repeat. (See Data-driven decisions for details.)
  • Limit “off-board” projects. Don’t write a feature in secret. Don’t clean up code on a whim. Ask yourself, Is it important? Work in team priority order and avoid surprises—managers hate surprises, and opaque development increases WIP.

Faster throughput gets the work done in less time, but reduced WIP produces even shorter and more reliable response times. Which is better? To customers, the reduced WIP makes the team more responsive and predictable. To operations, faster throughput gets more work done. Naturally, it’s best to do both—increase throughput and reduce WIP.

They’ve gone to plaid

Maybe your team is already pretty agile and practices everything I’ve mentioned. Are you ready to take it to the next level?

Here are more sophisticated techniques for increasing throughput and reducing WIP, but they require significant changes to established team practice.

  • Adopt Kanban. This technique achieves the following improvements, as described in Too much of a good thing? Enter Kanban:
    • Monitors work on physical boards for instantaneous work-item tracking.
    • Makes input and output steps match the pace of your slowest step and applies the theory of constraints (TOC) with drum-buffer-rope to speed your throughput.
    • Directly limits your WIP.
  • Shorten your critical chain. As TOC tells you, your throughput is constrained by your slowest step. By analyzing your entire engineering flow, you can break down steps, parallelize steps, and shorten the critical chain of steps that dictates your throughput. Learn more in chapter 9 of Agile Project Management with Kanban.
  • Ensure leads have time for full IC contribution by restricting them to four or fewer reports. Staying small increases productivity and focus. I describe the perfect team size in Span sanity.
  • Adopt a DevOps approach. DevOps is painful initially, because it forces your team to fix much of the technical debt that operations hid from you for years. However, that existing debt is real WIP that’s slowing you down. Once you fix it, your response time improves dramatically. Read more in Bogeyman buddy—DevOps.
There is no try

Team productivity may be more complex than individual productivity, but there are straightforward things you can do that have a significant impact. The key is to increase throughput AND reduce work in progress.

Start small. Maybe reduce meetings AND introduce bug jail, or break down your work items AND write acceptance tests for your dependencies, or use Kanban to enjoy a number of throughput AND WIP benefits at once. By changing a little at a time, you can experience the benefits without causing too much turmoil and angst.

No matter how you start improving your response time, your customers and partners will notice and appreciate the difference. Being customer obsessed and delivering value soon after it’s requested gets us all closer to the Microsoft we aspire to be.

https://msdnshared.blob.core.windows.net/media/2016/07/0716-Productivity-mechanics.mp3

Office 365 E5 Nuggets of week 26

MSDN Blogs - Thu, 06/30/2016 - 23:15
Fiscal Year 16 is now closed #FY16!

Small Basic Guru Winners – May 2016

MSDN Blogs - Thu, 06/30/2016 - 23:09

All the votes are in! 

And click here for all the results of the TechNet Guru Awards, May 2016!

We want to thank Philip and Nonki for some great articles in May!

 

Small Basic Technical Guru – May 2016

  • Philip Munts – Small Basic: Simpler and Cheaper Raspberry Pi GPIO
    Michiel Van Hoorn: “This is really Awesome (see also the original article). It opens up Small Basic to the real world.”
    Ed Price: “Building off his Raspberry Pi article, this article does an amazing job of digging deeper and showing you more options, such as Raspberry Pi Zero.”
  • Nonki Takahashi – Small Basic: Image
    Michiel Van Hoorn: “Really cool overview of working with Images (like photos) in Small Basic. A good topic to inspire programming.”
    Ed Price: “Very thorough end-to-end overview of using Images!”

 

More about the TechNet Guru Awards:

 

Have a Small and Basic week!

– Ninja Ed

de:code 2016 session videos are now available!

MSDN Blogs - Thu, 06/30/2016 - 22:00

Hello, everyone.

The session videos from the recent de:code 2016 have been published on Channel 9.

The solutions and code used in the sessions were introduced in earlier posts; please refer to those as well.

PRD-006: How machine learning changes customer service! Next-generation CRM built with Azure ML and Dynamics

https://channel9.msdn.com/Events/de-code/2016/PRD-006

https://blogs.msdn.microsoft.com/crmjapan/2016/06/08/decode-2016-prd-006-followup/

PRD-007: Developing mobile apps with the new Dynamics CRM Online 2016 Web API

https://channel9.msdn.com/Events/de-code/2016/PRD-007

https://blogs.msdn.microsoft.com/crmjapan/2016/05/30/decode-2016-prd-007-followup/

– 河野 高也, Premier Field Engineering

Note: This information (including attachments and linked content) is current as of the date of writing and is subject to change without notice.

 

When should I use .NET Core? And when should I NOT?

MSDN Blogs - Thu, 06/30/2016 - 21:23

Microsoft has recently released .NET Core 1.0 RTM. With the acquisition of Xamarin in February, Microsoft has three major frameworks in the .NET family: .NET Framework, .NET Core and Xamarin.

When should I use .NET Core?

Here are the six typical scenarios where you should consider using .NET Core instead of .NET Framework or Xamarin:

  1. Cross-Platform Needs
  2. Microservices
  3. The most performant and scalable systems
  4. Command line style development for Mac, Linux or Windows
  5. Need side by side of .NET versions per application level
  6. Windows 10 UWP .NET apps

You can read details about each scenario through this URL.

 

When should I NOT use .NET Core?

Here are the five typical scenarios where you should NOT use .NET Core:

  1. Current .NET Framework applications in Production / Migrations
  2. New large monolithic applications
  3. Need full capabilities of higher level frameworks like Entity Framework 6.x, WCF and Windows Workflow Foundation.
  4. Need sub-frameworks not supported by .NET Core.
  5. Dependence on frameworks that may be ported to .NET Core in the future

 

SQL Updates Newsletter – June 2016

MSDN Blogs - Thu, 06/30/2016 - 20:30
Recent Releases and Announcements

 

Recent Whitepapers/E-books/Training/Tutorials

 

Monthly Script Tips

 

Issue Alert

 

Fany Carolina Vargas | SQL Dedicated Premier Field Engineer | Microsoft Services

Road to WPC 2016: Software and workload insights deliver game changing opportunities for optimisation

MSDN Blogs - Thu, 06/30/2016 - 19:07

To build on the story earlier this week in the “Road to WPC” partner profile series, we take a closer look at the solution Software Optimisation Services delivered for Western Australia’s Housing Authority.

Software Optimisation Services’ (SOS) innovative team sets a high standard in combining energy, passion, and long-term customer value through every Microsoft Software Asset Management engagement. This unique set of qualifications and innovative approach created the winning criteria for SOS to take top honours in the Software Asset Management (SAM) category for Microsoft’s Worldwide Partner awards.

Western Australia’s Housing Authority performs an important role providing affordable housing for the State’s residents.

Richard Barry is the Chief Information Officer for the Authority, charged with providing the IT systems that underpin its operations. At a time when WA is facing budgetary constraints Barry wants to ensure that the Authority not only has a robust and reliable environment, but that its licence agreements are appropriate, and that any risks that might arise from shadow IT deployments or aging platforms are identified and appropriately managed.

The trigger for a software asset management review came from Richard wanting to optimise software licences deployed across hundreds of servers, both on-premises and in a cloud environment. Barry commissioned Filipa Preston and her SOS team to work with him, initially on a 90-day proof of concept basis, and then more broadly to conduct a systems audit that would provide him with a line in the sand and a clear understanding of what was currently deployed and in use – and identify any risks, vulnerabilities, and opportunities for improvement.

Using the Azure-based Movere platform, Filipa Preston conducted a Microsoft software review and began also to map potential workloads that could move to Azure across the Authority. Matching actual software inventory against the Microsoft licence statement identified surpluses which were addressed, delivering savings to the Authority.

The insights afforded Richard the opportunity to refresh the information ecosystem based on a much sharper understanding of the deployed assets.

Leveraging the SQL Workloads SAM Engagement, Filipa and Richard reviewed the Authority’s workloads and identified SQL workloads that could be transitioned off old in-house SQL databases and onto Azure SQL databases, in line with the Authority’s cloud strategy, effectively future-proofing the environment. According to Movere, SOS has already helped the Authority migrate 13% of their SQL footprint into Azure.

According to Richard, “We started out with a growing licencing pain that if left unchecked would have prevented us from realising our cloud strategy. SOS armed us with the intelligence and clear guidance to turn a problem into a Platform as a service (PaaS) opportunity that will reduce our costs and the support of those services.”

“This was consultancy that goes to the heart of our business. SOS are not just compliance experts, they’re change makers.”


The plan for porting to .NET Core: unifying across platforms just got easier!

MSDN Blogs - Thu, 06/30/2016 - 19:00

[2016/6/27 Extra] .NET Core 1.0 has officially shipped! –> See this link for details

In my previous post I covered how to port to .NET Core, and invited users to freely report their experiences and suggestions for improvement.

That invitation sparked a great deal of discussion among users.

Based on the main points of those discussions and our experience working with first- and third-party partners, we have decided to align the core APIs with the other .NET platforms – chiefly .NET Framework and Mono/Xamarin – to greatly simplify the work of porting to .NET Core.

In this post, I will describe our plan, how we intend to achieve this goal, the expected timeline, and what it means for today’s .NET Core users.

Looking back at .NET Core

The .NET Core platform began as Microsoft’s answer to developers who wanted a modern, modular, app-local, cross-platform .NET stack. The business goal behind the product was to provide a unified stack for brand-new applications (for example, touch-based UWP apps) and modern cross-platform applications (for example, ASP.NET Core websites and services).

With the upcoming .NET Core 1.0, we have built a strong cross-platform development stack. .NET Core 1.0 is the first step toward bringing .NET to every platform.

Although .NET Core works well in the scenarios we designed it for, it is undeniably compatible with less code than the other .NET platforms on the market, especially compared with .NET Framework. That is partly because not everything was built with cross-platform goals in mind, and partly because we removed features we considered unnecessary.

For all these reasons, we came to understand that existing .NET developers would have to spend a long time porting their code in order to learn and adopt .NET Core.

Of course, building a brand-new API for new customers is a valid approach, but it effectively punishes the loyal customers who have used Microsoft APIs and technologies for years. We want to make the .NET platform stronger and bring it to more developers, but we cannot ignore the interests of existing users.

Xamarin has done very well on this point. It lets .NET developers easily build mobile apps for iOS and Android. Consider iOS: it actually has much in common with UWP, such as a strong focus on the end-user experience and a requirement for static compilation. Unlike .NET Core, Xamarin did not reimagine the .NET stack. Xamarin took Mono wholesale, removed the application-model components (Windows.Forms, ASP.NET), added iOS components, and adjusted some details to suit embedded use. Because Mono is essentially very similar to .NET Framework, the resulting API is very easy for existing .NET developers to learn and accept, and it makes porting existing code to Xamarin much easier.

When we originally conceived .NET, our most important core principle was to make developers more productive and help them write more robust code. We designed .NET to help developers across a rich range of domains and scenarios, from desktop and web applications to microservices, mobile apps, and even game development.

To realize that principle, it is essential to build a unified core API that can be used under any conditions. A unified core API lets developers easily share code across different workloads, so that every developer’s expertise goes where it matters most: writing the best services and user experiences.

The vision for .NET Core

At Build 2016, Scott Hunter presented the following slide:

The idea we want to convey to you is this:

Whether you want to build a desktop application, a mobile app, a website, or a microservice, you can use .NET to get there. And because we provide a unified BCL, code sharing becomes very simple. As a developer, you can focus on the features and technologies that map to the user experience and platform of your choice.

Here is how we intend to realize this: we will provide source and binary compatibility for applications that target the core Base Class Libraries (BCL), and guarantee consistent behavior on all platforms. The BCL is the set of libraries found in mscorlib, System, System.Core, System.Data, and System.Xml, which are not tied to any particular application model or operating-system implementation.

Whether you target the .NET Core 1.0 surface (the surface based on System.Runtime) or an upcoming .NET Core release with the expanded API set (the surface based on mscorlib), your current code will continue to work.

Our promise to simplify porting existing code extends equally to libraries and NuGet packages, including portable class libraries, whether they use mscorlib or System.Runtime.

Here are a few examples of the additions that will make .NET Core easier to pick up:

  • Reflection will work the same as in .NET Framework: GetTypeInfo() is no longer required, and the old .GetType() is back.
  • Types will no longer be missing members that were removed for cleanup reasons (Clone(), Close() vs Dispose(), the old APM APIs).
  • Binary serialization (BinaryFormatter) will be available again.

See our corefx GitHub repository for the complete list of planned additions.

What this means for .NET Core

From our conversations with the community, we understand the concern that these API additions will dilute the .NET Core experience. That is a complete misunderstanding. The vast majority of our investments in .NET Core – whether app-local deployment, XCOPY deployment, or our AOT compiler toolchain – remain open source and cross-platform. The same applies to all the additional features and our performance improvements, such as the new networking component, Kestrel.

When we first designed .NET Core, we emphasized modularity and a pay-for-play model, meaning you only spend disk space on the features you actually use. We believe we can still achieve those goals without significantly compromising compatibility.

Initially, we relied on splitting functionality into tiny libraries to minimize an application’s disk footprint, and we know users liked that. Going forward, we will provide a linker tool that is more precise and saves more space than any manual process, similar to what Xamarin developers already use today.

Timeline and process

After we ship .NET Core 1.0 RTM, we will begin the work of expanding the .NET Core API surface. That way, those of you tracking .NET Core can still deploy to production.

In the coming weeks, you will find more details and plans in our corefx GitHub repository. The first thing we will do is publish a set of API references listing the APIs we will ship, so that when porting code you can decide whether to move to .NET Core 1.0 now or wait for the upcoming APIs. We will also announce which APIs we do not plan to ship. We want to give our users a dashboard for checking the status and goals of the project.

This will be a big improvement over the process we followed in preparing .NET Core 1.0; the friction in the previous release, caused by internal processes that were not shared openly enough, will be corrected this time.

Finally, we plan to ship more .NET Core API updates on NuGet. These updates will be incremental: we will expand existing API functionality while shipping more APIs alongside. That way, users benefit without having to wait for the API work to be fully finished, and it also lets us incorporate your feedback on how things work into our updates.

In the coming weeks we will publish more information in the corefx repository. You can follow the status and all major decisions on that blog.

This article is translated from Making it easier to port to .NET Core.

 

  

If you have any questions about the technologies and products above, we are happy to help! Contact the Microsoft Taiwan developer tools service desk – MSDNTW@microsoft.com / 02-3725-3888 #4922

Exchange Online increases its URL filtering

MSDN Blogs - Thu, 06/30/2016 - 17:29

One of the ways in which Exchange Online detects spam, malware, and phishing is through URL filtering. We use a variety of sources; you can find them here:

https://technet.microsoft.com/en-us/library/dn458545(v=exchg.150).aspx

We use URL reputation lists in the following way (including but not limited to):

  1. At time-of-scan, if a message contains a URL that is on one of the lists we use, a weight is added to the message. This weight is added to all the other features of a message to determine a message’s spam/non-spam status, and also sets the Spam Confidence Level (SCL). Different lists have different weights (see the sketch after this list).
  2. The URL lists are also used as inputs into our machine learning algorithms to see if there are any similarities between URLs, and between messages with URLs. This is so our filters can make predictions in the future about messages with URLs that are not yet on any of our lists but may be in the future. That is, we are trying to pre-emptively determine that a message containing a malicious URL is spam, malware, or phishing prior to the URL being added to a reputation list.
  3. Our Safe Links feature, which is part of Office 365’s Advanced Threat Protection, uses mostly (but not completely) the same set of URLs as the spam filter for blocking when a user clicks on a link that we think is malicious (when they have Safe Links enabled).
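As a rough illustration of point 1 only – a made-up sketch, not EOP’s implementation; the list names, domains, and weights are all invented – per-list URL weighting can be modeled like this:

    # Invented reputation lists: each maps a set of bad domains to the
    # weight that list contributes when one of its URLs is in a message.
    URL_LISTS = {
        "vendor_a_phish": ({"evil.example", "phish.example"}, 4.5),
        "vendor_b_spam":  ({"spam.example"}, 2.0),
    }

    def url_list_weight(message_domains):
        """Sum the weights of every list matching a domain in the message."""
        total = 0.0
        for name, (domains, weight) in URL_LISTS.items():
            if message_domains & domains:   # any overlap with this list
                total += weight
        return total

    # The result is combined with the message's other spam features to
    # set the verdict and the Spam Confidence Level (SCL).
    print(url_list_weight({"evil.example", "benign.example"}))  # 4.5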

We publish all the URL lists that we use at the link above. However, going forward, we may or may not publish every list.

You see, we recently expanded the number of URL sources we pull from. Whereas before we were going for volume, nowadays adding more and more URL lists does not necessarily give you better coverage. Just stuffing more and more links into a list gives diminishing returns, because spammers and phishers churn through them so rapidly. The result is a list of 10 million entries, 99% of which are never seen.

Instead, we’ve been looking to shore up our lists by quality. We are not necessarily targeting the size of the list, but rather are diversifying based upon origin.

– How frequently does it update?

– What sources does it come from?

– Do they overlap with our existing lists? (this is an important factor)

– Does it overlap much with another list we are evaluating?

– How much additional value does it generate relative to the price the vendor wants to charge us?

– Does it specifically target phishing?

– Does it specifically target malware? These last two are important because we can use some of these lists that target those two types of spam as part of our Safety Tips feature.

The way we try out a new list is to pull it down from the source, push it out to production, and put it in pass-through mode. We observe how much overlap there is between the contents of the list and our own traffic. We then start pushing up the weight of the list but only apply it at time-of-scan. We then watch for false positives. We continue to push up the aggressiveness of the list until it’s as far as it’s going to go, at which point we enable it for machine learning and also for Safe Links. If we get false positives, we either decrease the weight of the list, figure out the root cause of the false positives (e.g., syntax errors in the list, problems with the downloaders), or stop using the list altogether.

The goal of this is to get better protection for our customers while avoiding disruption to legitimate mail flow. That’s a balancing act and usually takes about four weeks from when we start to when we complete.

Anyway, as I was saying earlier, we’ve included several new lists over the past few weeks; some of them are being used in #1-3 above, some others are only at #1, and a couple more are at stage 0. But whereas with our previous lists we revealed what they are, we don’t necessarily plan to identify the new ones. This is for a couple of reasons:

  1. The sources have asked not to be identified.
  2. By revealing which sources we use, a phisher could try to game the system, and we are trying to prevent that.

We still manage the false positives by doing cost/benefit analysis on the sources and would stop using the ones that do not provide benefit relative to the negative mailflow disruption they might cause.

So there you go; that’s what’s new in Exchange Online Protection over the past four weeks. We’ve incrementally started making your experience better, all in an effort to ensure you have the best email protection possible.

Join us at Red Hat’s DevNation Federal 2016 in Washington, DC – July 28

MSDN Blogs - Thu, 06/30/2016 - 14:19

This week, Microsoft made several announcements highlighting our commitment to customer choice and support for open source technologies, including Monday’s general availability of .NET Core 1.0 and ASP.NET Core 1.0. This release is a huge accomplishment for the entire open source ecosystem – with more than 18,000 developers representing more than 1,300 companies contributing to .NET Core 1.0.

With the release of .NET Core 1.0, Red Hat also announced that they are now actively supporting .NET Core 1.0 on Red Hat Enterprise Linux, extending the benefits of .NET to the entire Red Hat ecosystem.  This is the first in a sizable list of announcements that illustrate the continued growth of our partnership with Red Hat.

Today, Microsoft is working across the public sector and the developer community to help our customers achieve more in government modernization with open source.  In partnership with Red Hat, the Microsoft Azure Government team invites you to join us at the DevNation Federal event on Thursday, July 28 in Washington, DC, where we will speak to our latest open source innovation efforts in government.

Space is limited. For more information or to register, visit the DevNation Federal site.

 

How to monetize APIs with Azure API Management

MSDN Blogs - Thu, 06/30/2016 - 13:33

This article describes the steps to set up monetization of APIs hosted in Azure API Management. Code samples are provided to help you set up integration with your chosen payment provider.

One question we are consistently asked by customers is “how can I set up billing for my APIs?” API Management has all the information you need to set up billing, accessible through our Management APIs – we leave the choice of which payment provider to use up to you. However, if you would like a bit more guidance, this blog post should help.

Two Ways to Directly Monetize Your APIs

The two most common ways to directly monetize your APIs are: Subscription Billing, where you charge your customers a flat monthly fee to access your APIs; and Metered Billing, where you charge your customers based on the number of API calls they make.

Subscription Billing Model:

With a subscription model, your customers pay a flat monthly fee and are allowed to make a certain number of API calls per month. For example, a customer pays $100 to access up to 10,000 API calls per month. Whether they make 0 API calls or 10,000 API calls, the customer is charged $100 each month. If a customer subscribes to your API in the middle of a billing period, the customer would be charged a pro-rated amount for the number of days the subscription was active.
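The pro-ration arithmetic is simple; here is a small sketch with invented numbers (in practice Stripe can also compute proration for you when a subscription starts mid-cycle):

    # Pro-rate a flat monthly fee for a mid-period sign-up.
    def prorated_charge(monthly_fee, days_active, days_in_month):
        return round(monthly_fee * days_active / days_in_month, 2)

    # $100/month plan, active for 10 of 30 days in the billing period.
    print(prorated_charge(100.00, 10, 30))  # 33.33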

Metered Billing Model:

As the name implies, with a Metered Billing model you charge your customers a fee for each API call they make. Your customers are able to make as many calls as they want and are charged based on the total number of calls made. If the customer makes 7,000 API calls at $0.01 per call then the bill at the end of the month would be $70.

With the Metered Billing model you charge your customers once a month and all you need to know is how many calls they’ve made during that period.

Using a payment platform for Invoicing and Collecting Payments

Azure API Management tracks API usage in real time so you know how many API calls are made by each of your customers. This usage data can be used to bill each customer and send an invoice to collect monthly payments. To bill your users, you need a third-party recurring billing solution, such as Stripe (which we will use as an example in this article). Stripe supports multiple billing models, currencies and payment methods, and its own API enables you to integrate with API Management seamlessly. We do not recommend any particular payment service – you should select a payment provider that best meets your needs.

How API Management and a payment platform Work Together

To set this up, we will need to integrate Azure API Management and Stripe; they both have APIs that you can use to create a smooth customer experience. The API Management API exposes API usage and subscription details for each of your customers. The Stripe API enables you to send a monthly invoice to each of your customers which includes billing details for each product in their subscription.

Create Products in Azure API Management

To manage your API with Azure API Management you need to create a Product and then add one or more APIs to it. You add your APIs to Product bundles because this is how customers subscribe to your APIs. Once a product is published, developers can subscribe to the Product bundles and begin to use the product’s APIs.

Map API Management Products to Stripe Plans

Your API customer is mapped to a Stripe user account via the Azure API Management customer account (often referred to as a developer account), which might include multiple product subscriptions. So the primary integration task is to retrieve monthly customer usage details from API Management’s API and send these details to Stripe.

With API Management’s own API, you can determine the number of active users, list the subscriptions for each user, and get monthly usage for each active product subscription. This information will be retrieved by your custom code and then submitted to the Stripe API to create and send customer invoices by email.
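As a hedged sketch of that loop in Python: the report endpoint, authorization header, and field names below are illustrative and should be checked against the API Management REST API reference, and lookup_stripe_customer is a hypothetical helper for your own subscription-to-customer mapping.

    import requests
    import stripe

    stripe.api_key = "sk_test_..."                     # your Stripe secret key
    APIM = "https://contoso.management.azure-api.net"  # example service URL

    def bill_metered_usage(period_start, period_end, price_per_call):
        headers = {"Authorization": "SharedAccessSignature ..."}  # APIM credentials
        report = requests.get(              # per-subscription usage report
            APIM + "/reports/bySubscription",
            params={"$filter": "timestamp ge datetime'%s' and timestamp le datetime'%s'"
                               % (period_start, period_end)},
            headers=headers).json()
        for row in report.get("value", []):
            calls = row["callCountTotal"]
            stripe.InvoiceItem.create(      # non-recurring metered charge
                customer=lookup_stripe_customer(row["subscriptionId"]),
                amount=int(calls * price_per_call * 100),  # cents
                currency="usd",
                description="%d API calls" % calls)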

Create Stripe Plans for each API Management Product

Stripe’s Plans work much the same way as Azure API Management’s Products, so you should create a Stripe Plan for each Product you create in API Management. Each Stripe Plan has a unique id, which you will map to its API Management Product counterpart.

To connect a monthly subscription Product on API Management to a monthly subscription Plan on Stripe you will set a price for your Stripe plan as well as a billing frequency. For example, when you describe a $100 / month subscription product in API Management, you’d create a Stripe Plan and specify a price of $100 and a billing frequency of one month.

To connect a metered billing product on API Management to a metered billing Plan on Stripe, you’d create a plan with a $0 per month price and add fees to the monthly bill based on usage. The amount of the additional fees would be calculated by your custom code and submitted as an API request. These metered billing fees are called Invoice Items and are considered non-recurring charges because they can be different from month to month.
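A sketch of creating both kinds of Plans with Stripe’s Python library (the plan ids and prices are examples, using the 2016-era Plan API):

    import stripe
    stripe.api_key = "sk_test_..."

    # Subscription product: flat $100/month (Stripe amounts are in cents).
    stripe.Plan.create(id="gold-monthly", name="Gold (10,000 calls/month)",
                       amount=10000, currency="usd", interval="month")

    # Metered product: $0/month base plan; usage fees are added later as
    # invoice items by your billing job.
    stripe.Plan.create(id="metered-base", name="Metered (pay per call)",
                       amount=0, currency="usd", interval="month")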

Sign customers up and register their product choices

When a customer signs up, or adds a new product, there will be additional information that you want to capture or present to the user (e.g. credit card information, payment terms) that will then allow you to register them with your payment provider.

The way API Management allows you to do this is with sign-in delegation. Each time a customer subscribes to one of your API Management Products, your code will redirect them to capture additional information, using the unique Stripe Plan ID to create a subscription through the Stripe API. You can create a new Stripe customer at the same time that you sign them up for a subscription by including the Plan id in the API call to create a new customer. Take a look at the delegation article for more information, as this step can be a little tricky. See the accompanying sample code also.
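The final step of that delegation handler might look like this sketch, where card_token comes from Stripe’s card-capture form on your sign-up page:

    import stripe
    stripe.api_key = "sk_test_..."

    def register_subscriber(email, card_token, plan_id):
        # Passing a plan id when creating the customer starts the
        # subscription in the same API call.
        return stripe.Customer.create(email=email, source=card_token,
                                      plan=plan_id)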

Cancelling a Subscription from Stripe

When a customer cancels a monthly subscription, you need to cancel the subscription on Stripe too. This can also be done through Stripe’s API. From then on the customer’s credit card will not be charged again.

For metered billing you don’t have to do anything; however, your customers will continue getting a $0 invoice, which may be avoided by cancelling the $0/month subscription through the Stripe API as well.
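A sketch of the cancellation call, assuming you stored the Stripe subscription id when the customer signed up:

    import stripe
    stripe.api_key = "sk_test_..."

    def cancel_stripe_subscription(subscription_id):
        sub = stripe.Subscription.retrieve(subscription_id)
        sub.delete()  # stops future charges; at_period_end=True defers it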

Setting Up a Monthly Subscription Product

Here are the basic steps to implement subscription billing:

  • Package your APIs into tiered products, e.g. Bronze, Silver, Gold, offering various API call allowances and/or different call rates.
  • Apply quota and rate limit policies to your products as appropriate
  • Create corresponding Stripe Plans
  • Write code to run a monthly billing job that pulls all subscribers of each product listed in API Management during a given billing period. Submit the usage details to the Stripe API to generate bills and invoices.
Calculating the Bill for Monthly Subscription Users

Calculating bills for monthly subscription users is very easy because you don’t need to know how many API calls were made. Instead, you charge each user a flat monthly fee. In fact, once a customer’s subscription is created in Stripe, it stores the price of the subscription and will take care of billing the monthly subscriber until the subscription is canceled.

See the accompanying sample code for specific details.

Setting Up Metered Billing

How to implement metered billing:

  • Package APIs into tiered products e.g. Bronze, Silver, Gold
  • Apply rate limit policies to products
  • Create corresponding Stripe Plans
  • Run a monthly billing job that pulls usage info from API Management for each subscriber during a given billing period, and feed that info into Stripe to generate your customer’s bill
Calculating the Bill for Metered Users

To calculate this, your custom code will query API Management to get the customer’s API usage for each metered product. Your custom code will multiply this number by the cost per API call to calculate the total fee for this product. See the accompanying sample code for more details.



Speaking of math…

MSDN Blogs - Thu, 06/30/2016 - 13:10

This post discusses how a combination of the Office in-memory built-up format (“Professional” in Word) and the math linear format is ideal for generating speech for math zones. Neither format was designed with speech in mind. The built-up format was designed to aid the creation of beautiful math typography. The linear format was designed to aid math keyboard input by looking as close as possible to real math. That goal is often achieved. For example, (a+b)/2 is, in fact, a valid mathematical expression. Fortuitously this goal also brings the notation closer to speech. One can understand the literal translation “open paren a plus b close paren over 2”.

Speech granularity

Understand at the outset that two granularities of math speech are needed: coarse-grained, which speaks math expressions fluently in a natural language, and fine-grained, which speaks the content at the insertion point. The coarse-grained granularity is great for scanning through math zones. It doesn’t pretend to be tightly synchronized with the characters in memory and cannot be used directly for editing. It’s relatively independent of the memory math model used in applications.

In contrast, the fine-grained granularity is tightly synchronized with the characters in memory and is ideal for editing. By its very nature, it depends on the built-up memory math model (described below), which is the same for all Microsoft math-aware products, but may differ from the models of other math products. Coarse-grained navigation between siblings for a given math nesting level can be done with Ctrl+→ and Ctrl+← or Braille equivalents, while fine-grained navigation is done with → and ← or equivalents. The latter allows the user to traverse every character in the display math tree used for a math zone. The coarse- and fine-grained granularities are discussed further in the post Math Accessibility Trees. In addition to granularity, it’s useful to have levels of verbosity. Especially when new to a system, it’s helpful to have more verbiage describing an equation. But with greater familiarity, one can comprehend an equation more quickly with less verbiage.

Parentheses

To represent mathematics linearly and unambiguously, the linear format may introduce parentheses that are removed in built-up form. Speaking the introduced parentheses can get confusing since it may be hard for the listener to track which parentheses go with which part of the expression. In the simple example above of (a+b)/2, it’s more meaningful to say “start numerator a plus b end numerator over 2” than to speak the parentheses. Or to be less verbose, leave out the “start”. This idea applies to expressions that include square roots, boxed formulas and other “envelopes” that use parentheses to define their arguments unambiguously. For the linear format square-root √(a^2-b^2), it’s clearer to say “square root of a squared minus b squared, end square root” instead of “square root of open paren a squared minus b squared close paren”. This is particularly true if the square root is nested inside a denominator as in

which has the linear format 1/(2+√(a^2-b^2)). By saying “end square root” instead of “close paren”, it’s immediately clear where the square root ends. Simple fractions like 2/3 are spoken using ordinals as in “two thirds”. Also when speaking the linear format text ∑_(n=0)^∞, rather than say “sum from open paren n equal 0 close paren to infinity”, one should say “sum from n equal 0 to infinity”, which is unambiguous without the parentheses since the “from” and “to” act as a pair of open and close delimiters. This and similar enhancements are discussed in the ClearSpeak specification and in Significance of Paralinguistic Cues in the Synthesis of Mathematical Equations. Such clearer start-of-unit, end-of-unit vocabulary mirrors what’s in memory. The parentheses introduced by the linear format are not in memory since the memory version uses special delimiters as explained below. Parentheses inserted by the user are spoken as “open paren” and “close paren” provided they are the outermost parentheses. Nested parentheses are spoken together with their parenthesis nesting level as in “open second paren”, “open third paren”, etc.

Built-up format

Such refinements can be made by processing the linear format, but some parsing is needed. It’s easier to examine the built-up version of expressions, since that version is already largely parsed. The built-up format is a display tree as described in the post Math Accessibility Trees. For example, to know that an exponent in the linear format equation a^2+b^2=c^2 is, in fact, a 2 and not part of a larger argument, one must check the character following the 2 to make sure that it’s an operator and not part of the exponent. If the letter z follows the 2 as in a^2z, the z is part of the superscript and the expression should be spoken as “a to the power 2z”. In memory one just checks for a single code, here the end-of-object code U+FDEF. If that code follows the 2, the exponent is 2 alone and “squared” is appropriate, unless exponents are indices as in tensor notation.

The built-up memory format represents mathematical objects like fraction, matrix and superscript by a start delimiter, the first argument, an argument separator if the object has more than one argument, the second argument, etc., with the final argument terminated by the object end delimiter. For example, the linear format fraction a/2 is represented in the built-up format by {frac a|2} where {frac is the start delimiter, | is the argument separator, and } is the end delimiter. Similarly a^2 is represented in the built-up format by {sup a|2 }. Here the start delimiter is the same character for all math objects and is the Unicode character U+FDD0 in RichEdit (Word uses a different character). The type of math object is given by a rich-text object-type property associated with the start delimiter as described in ITextRange2::GetInlineObject(). The RichEdit argument separator is U+FDEE and the object end delimiter is U+FDEF. These Unicode codes are in the U+FDD0..U+FDEF “noncharacters” block reserved for internal use only.
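To illustrate, here is a toy walker over that encoding – not the actual RichEdit code. Since the object type is an out-of-band rich-text property, the sketch fakes it with a callback, and only fractions and superscripts are handled:

    OBJ_START, ARG_SEP, OBJ_END = "\ufdd0", "\ufdee", "\ufdef"

    def speak_span(text, i, type_at):
        """Speak characters until an argument separator or object end."""
        words = []
        while i < len(text) and text[i] not in (ARG_SEP, OBJ_END):
            if text[i] == OBJ_START:
                spoken, i = speak_object(text, i, type_at)
            else:
                spoken, i = text[i], i + 1
            words.append(spoken)
        return " ".join(words), i

    def speak_object(text, i, type_at):
        kind = type_at(i)     # in RichEdit this is a rich-text property
        args, i = [], i + 1   # skip the start delimiter
        while True:
            spoken, i = speak_span(text, i, type_at)
            args.append(spoken)
            sep, i = text[i], i + 1      # consume ARG_SEP or OBJ_END
            if sep == OBJ_END:
                break
        if kind == "frac":
            return "numerator %s end numerator over %s" % tuple(args), i
        if kind == "sup":
            # The single-code check described above: if U+FDEF follows
            # the 2, the exponent is 2 alone, so say "squared".
            if args[1] == "2":
                return "%s squared" % args[0], i
            return "%s to the power %s" % tuple(args), i
        return " ".join(args), i

    sup = OBJ_START + "a" + ARG_SEP + "2" + OBJ_END   # {sup a|2}
    print(speak_object(sup, 0, lambda i: "sup")[0])   # "a squared"

With {sup a|2z} instead, the exponent argument would come back as “2 z” and the walker would say “a to the power 2 z”, matching the exponent rule above.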

Fine-grained navigation

Another scenario where the built-up format is very useful for speech is in traversing a math zone character by character, allowing editing along the way. Consider the integral

When the insertion point is at the start of the math zone, “math zone” is spoken followed by the speech for the entire math zone. But at any time the user can enter → (or Braille equivalent), which halts the math-zone speech, enters the numerator of the leading fraction, and speaks “1”. Another → and “end of numerator” is spoken. Another → and “2 pi” is spoken. Another → and “end of denominator” is spoken and so forth. In this way, the user knows exactly where the insertion point is and can edit using the usual input methods.

This approach is quite general. Consider matrices. At the start of a matrix, “n × m matrix” is spoken, where n is the number of rows and m is the number of columns. Using →, the user moves into the matrix with one character spoken for each → up until the end of the first element. At that end, “end of element 1 1” is spoken, etc. Up and down arrows can be used to move vertically inside a matrix as elsewhere, in all cases with the target character or end of element being spoken so that the user knows which element the insertion point is in.

Variables and ordinary text

Math variables are represented by math alphabetics (see Section 2.2 of Unicode Technical Report #25). This allows variables to be distinguished easily from ordinary text. When converted to speech text, such variables are surrounded by spaces when inserted into the speech text. This causes text-to-speech engines to say the individual letters instead of speaking a span of consecutive letters as a word. In contrast, an equation like rate = distance/time, would be spoken as “rate equals distance over time”. Math italic letters are spoken simply as the corresponding ASCII or Greek letters since in math zones math italic is enabled by default. Other math alphabets need extra words to reveal their differences. For example, ℋ is spoken as “script cap h”. Alternatively, the “cap” can be implied by raising the voice pitch.
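For instance, a minimal fold of the math italic letters back into spoken letters might look like this sketch; the space padding is what makes a TTS engine spell the letters individually:

    # Map Unicode math italic letters (the math-zone default) to plain
    # letters, padded with spaces so a TTS engine spells them out.
    def fold_math_italic(ch):
        cp = ord(ch)
        if 0x1D434 <= cp <= 0x1D44D:      # MATHEMATICAL ITALIC CAPITAL A..Z
            return " " + chr(ord("A") + cp - 0x1D434) + " "
        if 0x1D44E <= cp <= 0x1D467:      # MATHEMATICAL ITALIC SMALL a..z
            return " " + chr(ord("a") + cp - 0x1D44E) + " "
        if cp == 0x210E:                  # U+210E stands in for italic h,
            return " h "                  # a hole in the italic block
        return ch

    print("".join(map(fold_math_italic, "𝑎+𝑏")))  # " a + b "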

Some special cues may be needed to convince text-to-speech engines to say math characters correctly. For example, ‘+’ may need to be given as “plus”, since otherwise it might be spoken as “and”. The letter ‘a’ may need to be enclosed in single quotes, since otherwise it may be spoken as the ‘a’ in “zebra” instead of the ‘a’ in “base”.

Tweaks

Another example of how the two speech granularities differ is in how math text tweaking is revealed. First, let’s define some ways to tweak math text. You can insert extra spaces as described in Sec. 3.15 of the linear format paper. Coarse-grained speech doesn’t mention such space but fine-grained speech does. More special kinds of tweaking are done by inserting phantom objects. Five Boolean flags characterize a phantom object: 1) zero ascent, 2) zero descent, 3) zero width, 4) show, and 5) transparent. Phantom objects insert or remove precise amounts of space. You can read about them in the post on MathML and Ecma Math (OMML) and in Sec. 3.17 of the linear format paper. The π in the upper limit of the integral above is inside an “h smash” phantom, which sets the π’s width to 0 (smashes the horizontal dimension). Notice how the integrand starts at the start of the π. Coarse-grained speech doesn’t mention this and other phantom objects and only includes their contents if the “show” flag is set. Fine-grained speech includes the start and end entities as well as the contents. This allows a user to edit phantom objects just like the 22 other math objects in the LineServices math model.

The approaches described here produce automated math speech; the content creator doesn’t need to do anything to enable it. But it’s desirable to have an override capability, since the heuristics used may not apply or the content author may prefer an alternative phrasing.

SQL Server Data Tools July Update

MSDN Blogs - Thu, 06/30/2016 - 10:04

The SQL Server Data Tools team is pleased to announce that an update for SQL Server Data Tools (SSDT) is now available. The SSDT update for July 2016 adds bug fixes and enhanced support for SQL Server 2016 features such as Always Encrypted and temporal tables. This update also introduces a ‘safe installation’ experience: through a new component-isolation approach, it aims to guarantee that installing the SSDT update will not break existing applications or the SQL Server engine, which have shared the same components in the past.

Get it here:

Download SSDT GA July 2016 for Visual Studio 2015 and Visual Studio 2013

This release will be available through Visual Studio Extensions and Updates soon.

Download Data-Tier Application Framework (June 30 2016)

  • The version number is 13.0.3370.2
What’s new in SSDT?

Database Tools – Always Encrypted enhanced support

Always Encrypted is a highly anticipated security feature in Azure SQL Database and SQL Server 2016. Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server), significantly enhancing security.
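As a concrete example, a client on the .NET Framework 4.6 SqlClient driver opts in by adding Column Encryption Setting=Enabled to its connection string; parameter values bound to encrypted columns are then encrypted transparently on the client. This is a minimal sketch, and the server, database, table and column names below are placeholders:

    using System.Data;
    using System.Data.SqlClient;

    class AlwaysEncryptedSketch
    {
        static void Main()
        {
            // The driver fetches the column encryption metadata and encrypts
            // the parameter client-side; plaintext never reaches the engine.
            var connStr = "Server=myserver;Database=Clinic;" +
                          "Integrated Security=true;" +
                          "Column Encryption Setting=Enabled";
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                "INSERT INTO dbo.Patients (SSN) VALUES (@ssn)", conn))
            {
                cmd.Parameters.Add("@ssn", SqlDbType.Char, 11).Value =
                    "795-73-9838";
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }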

What’s in this release

For a guided tour of Always Encrypted in SSDT, see this blog post (coming soon). This release adds full support for Always Encrypted through our core APIs and command line tool (SqlPackage.exe). You can build and publish database projects with full support for all Always Encrypted features.

Limitations:

  • In this release, actions that require Azure Key Vault are not supported inside SSDT but are supported from our SqlPackage.exe command line tool. Database publish and data viewing/editing scenarios involving column master keys stored in Azure Key Vault will not work in this release.
  • Generation of column master keys and column encryption keys is currently not supported. Use SSMS or PowerShell to create these keys.

What’s coming soon

  • Support for Azure Key Vault in SSDT. You will be able to publish databases using column master keys stored in Azure Key Vault through the UI, view encrypted table data, and do all other actions that are supported in this release when using column master keys stored in the Windows Certificate Store.
  • Improved support for setup of encryption through SSDT.

Database Tools – Temporal Table enhanced support in SQL Server Data Tools

Temporal tables have also received significantly improved support in SSDT this month. Previously, a number of incremental publish scenarios were blocked for temporal tables. This month we have simplified the experience by unlinking temporal tables before alterations and re-linking them once the alterations have completed. This means that temporal tables now have parity with other table types (standard, in-memory) in terms of the operations that are supported.

Database Tools – SqlPackage.exe and installation changes

We’re making changes to isolate SSDT from SQL Server engine and SSMS updates. Full information is in this blog post.

Fixed / Improved this month

Database Tools

  • Fixed: from now on SSDT will never disable Transparent Data Encryption (TDE) on a database. Previously, since the default encryption option in a project’s database settings was disabled, publishing would turn off encryption. With this fix, encryption can be enabled but never disabled during publish.
  • Increased the retry count and resiliency for Azure SQL DB connections during initial connection.
  • Fixed an issue where Import/Publish to Azure V12 would fail if the default filegroup is not PRIMARY. Now this setting is ignored when publishing.
  • Fixed an issue where, when exporting a database with an object with Quoted Identifier on, export validation could fail in some instances.
  • Fixed an issue where the TEXTIMAGE_ON option was incorrectly added for Hekaton table creations, where it is not allowed.
  • Fixed an issue where Export took a long time with a large amount of data, because a write to the model.xml file after the data phase completed caused the contents of the .bacpac file to be rewritten.
  • Fixed an issue where users were not appearing in the Security folder for Azure SQL DW and APS connections.

Analysis Services & Reporting Services

  • SSMS & SSDT: Fixed a SxS issue with the MSOLAP OLEDB provider where only the 32-bit provider was getting installed, impacting 64-bit Excel 2016 connecting to SQL Server 2014 (did not repro with ClickOnce installs from Office 365, only the MSI Excel install).
  • SSDT: Fixed a corner case to be more robust when upgrading an AS model with pasted tables from 1103 to 1200 compat-level, which could give the error “Relationship uses an invalid column ID”.
  • SSDT: Fixed a SxS issue where, with SSDT-BI 2013 on the same machine, data could no longer be imported in an AS model after uninstalling SSDT 2015 (cartridges shared a registry setting).
  • SSDT: Improved robustness to address issues/crashes when the connection to the AS engine is lost (e.g., SSDT left open overnight and the AS server recycled, or other cases where the connection is temporarily lost).
  • SSDT: Fixed issues with dialogs opening on different screens than VS in multi-monitor scenarios.
  • SSDT: Fixed/enabled support for pasting from HTML tables (grid data) into AS model pasted tables.
  • SSDT: Fixed an issue where upgrade failed on an empty pasted table to 1200 (used only as a container table for measures).
  • SSDT: Fixed an issue with upgrading an AS tabular model with pasted tables to 1200, working around an AS engine issue with CalcTables (which are used for pasted tables in 1200) by performing a process full on the new calc tables after the upgrade.
  • SSDT: Fixed an issue where canceling creation of a new AS 1200 model calculated table with an incomplete DAX expression could crash.
  • SSDT: Fixed an issue importing a 1200 model from an AS server into an SSDT AS project when the DB name and a table name were the same.
  • SSDT: Fixed an issue with editing a KPI measure in an 1103 tabular model.
  • SSDT: Fixed an “Object reference not set” exception hit while pasting a KPI measure in the grid for an AS 1200 model.
  • SSDT: Fixed an issue where a column in a calculated table could not be deleted from the diagram view in 1200 models.
  • SSDT: Fixed an “Object reference not set” exception when viewing the model.bim project file properties while in code view.
  • SSDT: Fixed an issue where pasting data into the AS model grid to create a pasted table yielded incorrect values on international locales using comma as the decimal separator.
  • SSDT: Fixed an issue opening a 2008 RS project in SSDT and choosing not to upgrade it.
  • SSDT: Fixed an issue in the calculated table UI for 1200 compat-level models when using default formatting for the column type, to allow changing the formatting type from the UI.

Contact us:

If you have any questions or feedback, please ping @sqldatatools on Twitter or visit our forum, and for bugs please create bug reports on our Microsoft Connect page. We are fully committed to improving the SSDT experience and look forward to hearing from you!

Changes to SSDT and SqlPackage.exe installation and updates

MSDN Blogs - Thu, 06/30/2016 - 10:02

A major benefit of the new SQL Server is that you don’t need to wait years to get new features. Azure SQL DB adds features on a frequent basis, and the tools you use need to keep up. The old model of globally shared dependencies in the GAC (Global Assembly Cache) made tools very difficult to maintain. We’re in the process of moving to a new, faster update model that removes the worry of breaking other applications that are already installed. This month’s release has the first set of changes, and the information below explains what you can expect to change.

What does this mean to you

Mostly, this means that if you update your tools you can safely do so without worrying about breaking other components. SSDT updates should not break SSMS or the SQL Server engine. For the relational database tools in particular, we will not ship any updates that overwrite components shared with SSMS or the engine. We’re moving these components out of the GAC and out of shared install directories – more components will be updated in this way with each release.

Note: SSIS updates will continue using the engine servicing releases (CU1, CU2) for the immediate future. If you choose to install SSIS, please be aware that it will follow the traditional global assembly sharing model.

For users of the next version of Visual Studio, expect to see far fewer “SQL Server” entries in Add/Remove Programs. We now package more of these into our core bundles and use lightweight installers to reduce the impact to your machine.

SqlPackage.exe changes

SqlPackage.exe is a command line tool that provides many of the same features as SSDT (publish, extract, import and export of databases); a sample invocation follows the list below. If you want to take advantage of the latest features, such as Azure Key Vault support for Always Encrypted in SqlPackage, we recommend you install the DacFramework.msi, which includes all the Azure DLLs needed to connect and publish to Azure SQL DB.

  • SSDT will continue to install SqlPackage.exe in “C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\130” so that users running on build agents won’t have scripts broken, but this copy will not have Azure support at present.
  • SSMS now has its own copy of DacFx, so updates to DacFramework.msi will not affect SSMS (and vice versa).
  • DacFramework.msi installs to “C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin\SqlPackage.exe” and updates independently of SSDT / SSMS.
  • We will work toward an easier installation for future updates so that you do not need an MSI to install SqlPackage.exe.
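For reference, a typical publish invocation looks like the following; the server, database and credentials are placeholders, and you would run the copy from whichever install location matches the feature set you need:

    SqlPackage.exe /Action:Publish /SourceFile:MyDb.dacpac ^
        /TargetServerName:myserver.database.windows.net ^
        /TargetDatabaseName:MyDb /TargetUser:sqladmin /TargetPassword:********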
Nuget packages for DacFx

Starting this month we’re releasing a NuGet package containing the DacFx DLLs for use in any application. It is initially targeted at supporting Visual Studio vNext (aka “Visual Studio 15”), so its version does not match the current MSI release, but we’ll be updating it soon and will release regular updates to match our releases. NuGet packages can be used when building an app that calls into DacFx, but they are also an easy way to get the latest version into a local folder. The main DacFx dependencies are bundled in this package so you can run it on a machine without any other dependencies. You can get the x64 package here, and the x86 package here.
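For instance, an application referencing the package can publish a dacpac through the DacServices API along these lines (the connection string, file name and database name are placeholders, and this is a sketch rather than a complete deployment tool):

    using Microsoft.SqlServer.Dac; // from the DacFx NuGet package

    class DacFxSketch
    {
        static void Main()
        {
            // DacServices wraps the extract/publish/import/export operations
            // that SqlPackage.exe exposes on the command line.
            var services = new DacServices(
                "Server=(localdb)\\MSSQLLocalDB;Integrated Security=true");
            using (var package = DacPackage.Load("MyDb.dacpac"))
            {
                // Deploy the package, upgrading the database if it exists.
                services.Deploy(package, "MyDb", upgradeExisting: true);
            }
        }
    }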

Going forward we will aim to have more tools available via nuget for easier acquisition & updating in ALM / build agent environments.

Extension developers

For anyone extending SSDT, note that we now use version 13.100 for many of our assemblies. We are adding binding redirects to Visual Studio to ensure that anything you built against older versions will work, but for the latest features you may want to target this version.
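If you manage redirects yourself, for example in the app.config of a tool that loads DacFx, the redirect takes the usual form; the assembly name, public key token and versions below are an assumed illustration rather than a definitive list:

    <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
        <dependentAssembly>
          <assemblyIdentity name="Microsoft.SqlServer.Dac"
                            publicKeyToken="89845dcd8080cc91" culture="neutral" />
          <bindingRedirect oldVersion="0.0.0.0-13.100.0.0"
                           newVersion="13.100.0.0" />
        </dependentAssembly>
      </assemblyBinding>
    </runtime>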

What’s new in Office 365 administration—June update

MS Access Blog - Thu, 06/30/2016 - 09:00

Additional user management options, enhanced export capabilities and better guidance on next steps are some of the improvements we focused on in June to improve existing features in the new Office 365 admin center. Of course, we all love completely new features, but we want to make sure that the new admin center really makes it easier, faster and more efficient for you to manage Office 365. Based on your feedback, we took a critical look at the existing features to see how we can further improve them to provide you with the best management experience possible.

Here’s a summary of the June updates:

Updates for the new Office 365 admin center

Get tips on how to get started quickly—Most of you use some areas and capabilities of the admin center more heavily than others, often because admins aren’t fully aware of all the capabilities the admin center provides or of how a specific feature could help increase the value of Office 365. To help you get started, tips and quick links are now displayed at the bottom of pages that you visit for the first time or that you don’t use on a regular basis. You will find the tips at the bottom of the Groups, Shared mailboxes, Rooms & equipment, Active users and Contacts pages.

Additional user management capabilities—The new admin center now enables you to export the list of active users directly from the Active users page by clicking Export in the upper right corner. In addition, we updated the user card to show the job title and department of each user at the top of the card.

Easily see who you have assigned a license to—We updated the Subscriptions page so that you can now directly see which users you have assigned a specific license to. This has been a frequently requested feature to help you better manage your licenses. Now, when you click Assigned on the Subscriptions page, a list of the users that have been assigned that specific license is displayed.

Get guidance on next steps—When you complete an action in the admin center, you often want to follow up with another action. For example, if you add a user, as a next step you may want to share details with the user on how to get started using Office 365. Or if you create a room, editing the booking options is often the next logical action. To help you accomplish those next steps faster—such as add, delete or block a user, add a shared mailbox, add a contact or create a room—suggested Next Steps are now displayed directly in the confirmation window.

Get tips on how to manage Office 365 more effectively—Office 365 is a rapidly evolving service, making new features and functionality available to end users as well as to admins on a regular basis. We want to make sure you get the most out of using and managing Office 365. The new Office 365 Tips provide tips and tricks on how to take full advantage of the service and are especially helpful when you’re new to Office 365. Learn how to create another admin, manage Office 365 from the admin app or quickly add more users. To access Office 365 Tips, click the Office 365 Tips card on the home dashboard.

More to come—In the coming month, we’ll be adding additional reports, support for partner managed admin centers, new admin roles and additional set-up guidance.

Let us know what you think!

Try the new features and provide feedback using the feedback link in the lower right corner of the admin center. If you’re missing a feature, take a look at the Recently Added page, which is linked directly from the home page; if it still isn’t there, please send us a note using the feedback link. And don’t be surprised if we respond to your feedback. We truly read every piece of feedback that we receive to make sure the Office 365 administration experience meets your needs.

—Anne Michels, @Anne_Michels, senior product marketing manager for the Office 365 Marketing team

Please note: the features mentioned in this blog post have started to roll out worldwide. If they are not available yet for your organization, please check back in a few days!

The post What’s new in Office 365 administration—June update appeared first on Office Blogs.

Episode 097 on the Office 365 Connectors and Xamarin development—Office 365 Developer Podcast

MS Access Blog - Thu, 06/30/2016 - 09:00

In this episode, Richard DiZerega and Andrew Coates have an open discussion on Office 365 Connectors and Xamarin development with the Microsoft Graph.

http://officeblogspodcastswest.blob.core.windows.net/podcasts/EP97_ConnectorsXamarin.mp3

Download the podcast.

Weekly updates

Show notes

Got questions or comments about the show? Join the O365 Dev Podcast on the Office 365 Technical Network. The podcast is available on iTunes (search for “Office 365 Developer Podcast”), or add it directly with the RSS feed http://feeds.feedburner.com/Office365DeveloperPodcast.

About the hosts

Richard is a software engineer in Microsoft’s Developer Experience (DX) group, where he helps developers and software vendors maximize their use of Microsoft cloud services in Office 365 and Azure. Richard has spent a good portion of the last decade architecting Office-centric solutions, many of which span Microsoft’s diverse technology portfolio. He is a passionate technology evangelist and a frequent speaker at worldwide conferences, trainings and events. Richard is highly active in the Office 365 community, a popular blogger at http://aka.ms/richdizz, and can be found on Twitter at @richdizz. Richard was born and raised in Dallas, TX, where he is still based, but works on a worldwide team based in Redmond. Richard is an avid builder of things (BoT), musician and lightning-fast runner.

A Civil Engineer by training and a software developer by profession, Andrew Coates has been a Developer Evangelist at Microsoft since early 2004, teaching, learning and sharing coding techniques. During that time, he’s focused on .Net development on the desktop, in the cloud, on the web, on mobile devices and, most recently for Office. Andrew has a number of apps in various stores and generally has far too much fun doing his job to honestly be able to call it work. Andrew lives in Sydney, Australia with his wife and two almost-grown-up children.

Useful links

The post Episode 097 on the Office 365 Connectors and Xamarin development—Office 365 Developer Podcast appeared first on Office Blogs.
