Thursday, May 3, 2018

Tip o’ the Week 428 – Spring, April or the Edge of Summer

The intent was to release the latest update to Windows 10 (“Redstone 4”, or “RS4”) during early April, though a late “blocking bug” delayed the release. The name of the update was also late to be officially confirmed – it was rumoured to be the “Spring Creators Update” (since the Fall Creators Update happened last year, and the original “Creators Update” appeared around a year ago, in April 2017)… but it was also thought it might simply be the “Windows 10 April Update”. The Reg forecast a wait of weeks to be sure.

There are lots of small improvements in the update, as well as some biggies like Timeline (which is showing up in other apps, too – like Photos, as seen to the left), and the Edge browser is getting a slug of new functionality – take a sneak peek at some of the Edge goodness, here.

Developers also got a new preview of Edge DevTools, which opens the door to such excitement as remote debugging of another Edge instance. If you’re a hoopy frood, check it out here.

Even though it’s the default browser in Windows 10, Edge doesn’t appear to be everyone’s favourite, with many users installing Chrome as one of their first tasks on a new machine. Both browsers, and the respective web services from their creators, seem insistent on nagging their end users to switch…


Still, there are times when the two cooperate behind the scenes. The Edge for Android app, for example, uses the rendering engine from the Chromium project, so it’s effectively running the same browser capabilities in a different shell – one which takes care of synchronising your favourites, passwords etc. between the Edge browser on your PC(s) and the one on your phone. Edge for iOS uses the native WebKit engine to achieve the same thing.

There are updates on the way for the mobile versions of Edge, supporting Timeline too – so you could resume activities from your desktop on your phone and vice versa.

Microsoft also recently launched a Defender Extension for Chrome, to provide similar protection to defectors that Edge users get natively from the SmartScreen filter technology (NSS Labs tested Edge, Chrome & Firefox, concluding that Edge blocks more bad stuff than either of the others). Even some surprised Chrome users recommend it.



from TechNet Blogs https://ift.tt/2HMagXk

Tip o’ the Week 427 – OneNote roadmap update

As has been covered many times previously on ToW, the OneNote app has a lot of fans who love the product and use a lot of its features, especially in the classroom. Defectors to other platforms sometimes bemoan the lack of OneNote (or a decent alternative) as a hurdle in using their chosen environment.

Talking about OneNote can be confusing, though, as there are the two PC versions – OneNote 2016, the Win32 app that’s evolved ever since the first version shipped as part of Office 2003, and the shiny new codebase that is OneNote for Windows 10, the Store app which also shares a lot of its UX with the Mac, mobile and web versions. Differences are explained here.

Major users of OneNote may have noticed that over the last couple of years, the traditional Windows app hasn’t received a whole lot of new functionality, but the Store version has had regular updates with extra features… though it is a much simpler app anyway, so there’s more to improve. The Metro Store version is missing quite a lot of the capability of the full-fat version, though the gap is closing fast.

Recently, the OneNote team announced that there will be no further development of the traditional OneNote 2016 application, and that it won’t be installed by default in the next iteration of Office (though it will still be available as an option, in case you can’t live without it).

New features are planned for the Store version – like support for tags, and what looks to be a tweak to the search experience, which will provide additional search refinements. Whether it’s as good as the somewhat obscure but quite powerful search capability in the 2016 app remains to be seen.

To get the latest version of the OneNote app, first check that it’s up to date in the Store, or join the Office Insiders program. Windows Insiders also get early access to new OneNote versions, and there’s an Experimental Features option (under the ellipsis “···” Settings & More menu, then Options).

Paul Thurrott – an unashamed fan of the OneNote for Windows 10 app, preferring it to its elder sibling – also reported on the news. Paul points out that the UWP version has better support for ink, that syncing is faster, performance is better etc. Tech Republic has some further commentary too.

To keep up with other news on OneNote, you would do well to follow William Devereux from the OneNote team on Twitter, as recommended by Windows Central’s “50 influencers” article.



from TechNet Blogs https://ift.tt/2w7SaxC

Wednesday, May 2, 2018

Prepare for the Azure Infrastructure Exam (70-535)

Do you have members of your technical team who need to get a better understanding of Azure? Looking to get one of those certifications to both make your team more excited about Azure and prove your chops with your customers? Now is your chance!


We are delighted to present our revamped program to help you prepare for the Azure Certification Exam: 70-535 Architecting Microsoft Azure Solutions – now more informative and easier to learn than before.

The program provides interactive, fast-paced learning on implementing Microsoft Azure-based solutions, helping you enhance your skills and get certified. Learners are prepared for the certification exam through an exam-focused delivery method.

Bootcamp Schedule

May 07, 2018

Architecture Patterns in Azure and Deploying resources with Azure Resource Manager

Topics Covered:

  • Understanding performance, resiliency, scalability and data patterns
  • Creating resource groups
  • Deploying a simple ARM template
  • Filtering resources using tags
  • Authoring complex deployments using the Azure Building Blocks tools
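To make the “deploy a simple ARM template” topic concrete, here is a minimal sketch (in Python, just to build the JSON) of a template that deploys a tagged storage account. The parameter name, tag values and SKU are invented for illustration; the resource type and schema URL come from Azure’s public template schema.

```python
import json

# A minimal ARM template sketch (illustrative names throughout) that deploys
# a tagged storage account. Tags let you filter resources afterwards, e.g.
# with `az resource list --tag environment=dev`.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "storageAccountName": {"type": "string"}
    },
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2017-10-01",
            "name": "[parameters('storageAccountName')]",
            "location": "[resourceGroup().location]",
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
            "tags": {"environment": "dev", "costCenter": "1234"},
        }
    ],
}

print(json.dumps(template, indent=2))
```

Saved as a .json file, a template of this shape is what you would hand to a resource group deployment from the portal, PowerShell or the Azure CLI.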


May 08, 2018

Building IaaS based Server applications, Managed Server applications and Serverless applications in Azure

Topics Covered:

  • Implementing High Availability
  • Templated Infrastructure
  • Domain connected machines
  • Infrastructure backed PaaS
  • High Performance Compute (HPC)
  • Implementing Azure Web App, Azure Functions and Integration


May 09, 2018

Backing Azure Solutions with Azure Storage and comparing Database options in Azure

Topics Covered:

  • Storage Pricing
  • Understanding Blob and Files storage, and Managed Disks
  • Hybrid Storage solution using StorSimple
  • Relational database options
  • NoSQL and Azure Cosmos DB
  • Data Storage and Data Analysis


May 10, 2018

Networking Azure Application components and Managing Security & Identity

Topics Covered:

  • Understanding Azure Virtual Networks (VNETs)
  • Using Load balancing and designing for external connectivity
  • Security Monitoring
  • Designing Data Security
  • Designing Application Security using Azure Active Directory (AAD)
  • Integrating applications with Azure AD


May 11, 2018

Integrating SaaS Services, Azure Solution components using Messaging Services and Monitoring & automating Azure solutions

Topics Covered:

  • Cognitive Services based solutions
  • Azure Bot Services
  • Designing solutions using Machine Learning
  • Media Services
  • Using Event Messaging
  • Solution integration
  • Azure IoT solutions
  • Application, Platform and Network Monitoring
  • Alerting, Backup and Azure Automation


By the end of the course, the learner should be able to:

  • Get onboarded to exam-focused self-learning resources
  • Get complete insight into the learning path to become an MCSD/MCSE
  • Learn the tips and tricks to pass the certification exam
  • Understand key aspects of the exam and get hands-on learning guidance
  • Know the common mistakes and learn how to avoid them during the exam
  • Participate in an interactive Q&A with our expert presenter and strengthen the key concepts

Who should Participate:

  • Security Architects
  • Network Administrators
  • Storage Design Architects
  • Cloud Administrators
  • Solution Architects
  • Identity Solution Architects

Prerequisites:

  • Knowledge of basic Azure concepts
  • Familiarity with the basic structure of the Azure platform
  • Familiarity with the Microsoft virtualization platform, Hyper-V

Frankly, I think ANYONE with an interest in learning Azure for their company should attend this session. In addition, I believe the skills learned while passing the 70-535 exam can be directly applied to passing the Azure Implementation exam 70-533 as well.

See you online,

Steve



from TechNet Blogs https://ift.tt/2w3L5hK

Replace your SAP BW by leveraging Simplement and the Microsoft Azure Data Stack

Hi everyone! Let me introduce myself. My name is Alp Kaya. I work for Microsoft as a Data & AI technical presales in Ontario. This is my first post of many to come… enjoy! 

If you’d like to discuss the article or any Azure Data related services please reach out: https://www.linkedin.com/in/kayaalp/

Objective of the Article?

SAP ERP is one of the most widely used business applications in Large Enterprise (LE). It is a great piece of software for business processes, and so widespread in the market that two-thirds of the world’s GDP runs through an SAP system. However, one thing it is not known for is giving users access to its data and enabling analytics on the ERP system. SAP, recognizing this shortcoming, developed an application called SAP BW (Business Warehouse), a prebuilt, proprietary data warehouse application. Licensing for BW is free so long as you have users licensed on SAP ECC. SAP BW provides SAP customers with standard extractors and prepackaged analytics/reports that read/extract directly from your ECC system (plus other data sources as well) and load SAP BW analytics structures. The prepackaged, out-of-the-box analytics reports fulfill common reporting requirements such as Sales, A/R, A/P, Financials, etc. However, the SAP BW system is based on the same proprietary SAP platform: Netweaver and ABAP. The overall market and customer perception of the SAP BW application has been lukewarm at best – there are customers who absolutely love BW, and others who refuse to leverage it and would rather spend the energy and money to build their own.

Recognizing this marketplace perception, SAP attempted to modernize SAP BW by developing an in-memory DBMS called SAP Hana, which allowed the SAP BW application (and, in subsequent releases, SAP ECC) to run on top of it. In its latest reincarnation, SAP has announced a new transactional product (S/4), but it has not been widely adopted yet. The predominant use case for SAP Hana has been to move SAP BW workloads onto SAP Hana. SAP Hana runs the SAP ERP or SAP BW system in memory and hence accelerates data access for business users. In addition, SAP Hana provides a very SQL-like interface to the SAP system, which really simplifies access – so long as you have the Hana Enterprise (i.e. non-runtime) license. Some customers chose to deploy SAP Hana as a side car, replicating data from SAP ERP straight into it via SLT and other tools; this deployment model constitutes a full-use (non-runtime) license. And although SAP Hana is a great platform, it carries a very expensive software license and requires a very expensive hardware appliance. It’s not unusual for customers to spend over seven figures on licensing and hardware for a couple of TB of Hana.

Overview and the Challenge?

Most recently, I have had the opportunity to work with a customer who has a multi-terabyte ECC and an SAP Hana side car deployment. This customer’s preference was to build their own data warehouse (instead of using SAP BW) via the SAP Hana side car. The customer mightily struggled to get Financial Reporting out of their SAP ECC system, and by struggled, I mean:

  • It took more than 6 months to build a baseline Financial Income Statement
  • It took several years to complete the finalized Financial Income Statement (albeit the final statement had over 200 measures, including several this-period vs. prior-period vs. same-period-prior-year comparisons and all of the % increases/decreases of each – which are very computationally taxing)
  • It cost several million dollars to generate and maintain this one single Financial Statement report (for a variety of reasons: data integration into SAP Hana via SLT & Data Services, Calculation Views on top of Calculation Views, and the general complexity of the solution)

This customer wondered if there was a better, easier and more cost-effective way to do this from a licensing, hardware and resourcing perspective. Working closely with them, we engaged in a 6-week project and leveraged an ISV solution called Simplement, which produced some incredible results.

What does Simplement Do?

Simplement is a metadata-driven solution that reverse engineers SAP ERP, removes the proprietary SAP format and exposes the underlying SAP tables via a common SQL interface. The solution first takes the underlying SAP tables, de-clusters SAP cluster tables, de-pools SAP pooled tables, and translates German-language SAP columns/tables into English, making them available as regular tables in the Microsoft SQL DBMS. The Simplement Data Liberator (the core component of the solution) also automatically manages SAP hierarchies and allows you to automatically inherit your existing SAP security model onto the liberated data. So, for customers with large user communities where security has already been defined, or for customers who have well-defined hierarchies and want to continue leveraging them, these are all cascaded to the SQL world without any re-work.

Once this is done, all of the data in SAP can easily be queried via common, non-proprietary SQL on the Microsoft SQL DBMS. Simplement then provides out-of-the-box multi-dimensional views (think OLAP) into the data, in the form of in-memory Microsoft Analysis Services Tabular models. Microsoft Analysis Services Tabular is a multidimensional, in-memory analytical structure and can be thought of as a competitor to SAP Hana. Furthermore, the Simplement solution provides 9 multi-dimensional Analysis Services Tabular LOB models (known as “Smart Starts”) covering areas such as S&D, GL, A/R, A/P, COPA and others. Finally, to get you going with the Analysis Services Tabular models, the Simplement solution provides prebaked PowerBI reports for each of the different Smart Start areas. You can use PowerBI to build management reports, and your company analysts can also connect Excel directly to the Tabular Smart Start model for deep analysis. The Data Liberator can also feed the transformed SAP data to Azure Data Warehouse, Machine Learning, Cognitive Services, etc.
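As a purely illustrative sketch of what querying “liberated” SAP data looks like, here is the shape of such a SQL query – simulated in Python with SQLite and a tiny, made-up slice of VBAK (SAP’s sales document header table; NETWR is the net value column). In reality you would run this against the Simplement-populated SQL Server database.

```python
import sqlite3

# Simulate a tiny, invented slice of the liberated VBAK table in SQLite,
# just to show that once the data is in a SQL DBMS, plain SQL is all you
# need -- no ABAP, no proprietary extractors.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE VBAK (VBELN TEXT, ERDAT TEXT, NETWR REAL)")
conn.executemany(
    "INSERT INTO VBAK VALUES (?, ?, ?)",
    [("0000000001", "2018-04-01", 1500.0),
     ("0000000002", "2018-04-02", 2300.0),
     ("0000000003", "2018-05-01", 900.0)],
)

# Monthly net order value, grouped with ordinary SQL.
rows = conn.execute(
    "SELECT substr(ERDAT, 1, 7) AS month, SUM(NETWR) AS net_value "
    "FROM VBAK GROUP BY month ORDER BY month"
).fetchall()
print(rows)  # [('2018-04', 3800.0), ('2018-05', 900.0)]
```

The same query text would work essentially unchanged against the SQL Server tables, and is exactly the kind of statement a PowerBI or Excel front end generates on your behalf.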

Key Components and Highlights of The Solution


Key Highlights of the Solution

1. Out of the box, Simplement delivers a Microsoft SQL Server database that contains the SAP data in a SQL table format. With SQL, SAP documents can be searched via a variety of methods. For instance, PowerBI or Excel can be connected and used as a front end to search for any SAP document, or analysis can be done straight on the SAP data in SQL. In addition, with Microsoft SQL 2017, both Python and R can be leveraged to execute statistical and data-science packages without having to export to a separate package.

Lastly, the Azure Search-as-a-Service functionality can be used so that “search” can be applied to the SAP data in the SQL DBMS.


For instance, imagine navigating your customer data or parts master data in a ‘Google’-style search. The image above is an example of Azure Search as a Service with the following features turned on, providing a rich UI experience on the SAP data:

§ Full text search is a primary use case for most search-based apps. Queries can be formulated using a supported syntax.

§ Search suggestions can be enabled for type-ahead queries in a search bar. Actual documents in your index are suggested as users enter partial search input.

§ Faceted navigation is enabled through a single query parameter. Azure Search returns a faceted navigation structure you can use as the code behind a categories list, for self-directed filtering (for example, to filter catalog items by price-range or brand).
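As a sketch of how those three features map onto Azure Search’s REST query parameters: the service and index names below are made up, while the parameter names (search, facet, $filter) follow the Azure Search REST API.

```python
from urllib.parse import urlencode

# Hypothetical service and index names, just for illustration.
service, index = "contoso-search", "sap-customers"
base = f"https://{service}.search.windows.net/indexes/{index}/docs"

def search_url(term, facets=(), filter_expr=None):
    """Build an Azure Search query URL: full text search, optional facets
    for faceted navigation, and an optional $filter for self-directed
    filtering (e.g. narrowing by plant or material group)."""
    params = [("api-version", "2017-11-11"), ("search", term)]
    params += [("facet", f) for f in facets]
    if filter_expr:
        params.append(("$filter", filter_expr))
    return base + "?" + urlencode(params)

url = search_url("pump housing", facets=["MaterialGroup"],
                 filter_expr="Plant eq '1000'")
print(url)
```

Issuing a GET against a URL like this (with an api-key header) returns matching documents plus the faceted navigation structure the UI renders as a categories list.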

2. Out of the box, the Simplement solution provides several in memory Microsoft AS Tabular models such as S&D, General Ledger, COPA, A/R, A/P, Inventory, Asset History, etc… These models provide transactional facts, conformed dimensions, and key measures for your SAP data. In addition, using the DAX language in AS Tabular, additional KPI and measures can easily be created. One of the powerful features of the DAX language is the ability to easily create key measures like this period vs prior period or this period vs same period last year. Often, this type of requirement takes long periods of time to develop in other tools, but this is an inherent feature of the AS Tabular solution.
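To illustrate the kind of measure described above, here is the this-period vs. prior-period logic sketched in plain Python so the arithmetic is visible (in AS Tabular you would express this as a DAX measure instead; the sample figures are invented).

```python
# Invented monthly revenue figures, keyed by period.
monthly_revenue = {"2018-02": 100.0, "2018-03": 120.0, "2018-04": 150.0}

def vs_prior_period(series, period, prior):
    """Return (current value, prior value, % change) -- the shape of a
    'this period vs prior period' measure."""
    cur, prev = series[period], series[prior]
    return cur, prev, round((cur - prev) / prev * 100, 1)

print(vs_prior_period(monthly_revenue, "2018-04", "2018-03"))
# -> (150.0, 120.0, 25.0)
```

The point of DAX is that a comparison like this is declared once as a measure and then evaluates correctly across whatever filter context (year, region, product) the report applies, rather than being hand-computed per report.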


3. Simplement provides out of the box PowerBI reports for modules like S&D, General Ledger, COPA, A/R, A/P, Inventory, Asset History, etc... So, various LOBs can get access to the SAP data and do in depth analysis (drill up/down and across) without having to do any data integration or custom development. Crucial SAP data elements are exposed as prebuilt PowerBI dashboards and reports. Lastly, this content can further be customized with additional calculations and different visualizations.


4. Once the data is in the Microsoft Cloud (i.e. Azure), the possibilities become endless. For instance, IoT Hub can be leveraged to pull in terabytes/petabytes of data and marry that back with Asset/maintenance data from SAP. Or, Azure ML can be leveraged to apply Machine Learning on the data with endless possibilities like predictive maintenance, financial forecasting, and others. The diagram below depicts some of the possibilities once you are on Azure:


Project Results

1. Within a week, a fully functional AS Tabular GL model running on commodity hardware on Azure

2. A baseline Financial Income Statement within a week of accessing SAP ECC, with over 100 key, relevant measures out of the box

3. Leveraging the Azure Data Stack, applying advanced analytics via R, and forecasting terabytes’ worth of Financials within a few minutes

Final Thoughts

Simplement is a great way to provide analytics and easy ad-hoc access to SAP ERP systems. Simplement works completely transparently of the underlying database – even if the SAP ERP system is running on Hana. Finally, the Simplement solution provides the out-of-the-box SQL DBMS, Analysis Services Tabular (in-memory) running on commodity hardware, and PowerBI content to get you started. This also unlocks the ability to take a traditional SAP customer and move them to Microsoft Azure – opening up all that Azure can do: for instance, applying Azure Search as a Service to SAP documents, applying Big Data, IoT and Machine Learning to SAP ERP data, along with the elastic scale up/down of the cloud.



from TechNet Blogs https://ift.tt/2FAeorH

Protect Unmanaged PC Access (CA Scenarios for Success 2 of 4)

We are back today for part two of our four-part series on conditional access scenarios for success. Today, we will discuss how to restrict unmanaged PCs from accessing corporate resources. We often hear that organizations want to empower their employees to be productive, even on their personal PCs – but that they need to do so in a way that prevents users from leaking corporate data. We find that most want to enable full resource access (such as through thick clients) only from corporate PCs, where they have the capability to wipe/manage that data. However, they still need to provide some access from personal PCs – which we can accomplish by enabling browser-only access with MFA enforced for personal PCs, providing a more limited (yet still protected) experience. Today, we are going to focus on implementing this for SharePoint Online and Exchange Online, since we see most organizations focused first on protecting mail and files, but you can extend these policies to other corporate resources as well.

 

This scenario enables personal, unmanaged devices (non domain-joined) to access their corporate resources through the browser only with MFA controls, while allowing only domain-joined (corporate) devices to access corporate data using thick clients. Additionally, we can use SharePoint session controls to enforce additional restrictions for browser access on personal PCs, preventing users from downloading/printing/syncing their files.

Scenario Requirements

This scenario is simple to fulfill – all it requires is setting up two conditional access policies, enabling SharePoint session controls, and configuring Hybrid Azure AD Join for your domain-joined devices. Let's dive a little deeper into these requirements before we begin setting up the CA policies!

  • Two Conditional Access policies
    • 1st Policy: scoped to ExO/SPO/etc., targeting Client Apps and requiring devices to be Domain Joined for access. This policy will prevent non-domain-joined devices from accessing ExO/SPO through thick clients.
    • 2nd Policy: scoped to ExO/SPO/etc., targeting the Browser, excluding Trusted Networks, and requiring MFA and Session Controls for access. This policy will allow browser-only access to ExO/SPO for personal devices and will force them to complete an MFA prompt before being granted access. For SPO, additional controls will be in place to prevent downloading/printing/syncing.
  • Access control restrictions enabled in SPO (required for Session Controls to work)
    • In the SharePoint admin center, under "access control", we will "allow limited, web-only access" for unmanaged devices
Screenshot of the access control policy in the SPO admin center
    • NOTE: When you enable these access controls in the SharePoint admin center, it automatically creates two conditional access policies in Azure AD that, by default, apply to all users. You have two options: you can either modify the auto-created policies to target the specific security groups you want to receive this policy and add ExO as a targeted cloud app, or you can delete the auto-created policies and follow the configuration steps below to create them yourself.
  • Hybrid Azure AD Joined Devices
    • In order for conditional access to consider a device "Hybrid Azure AD joined", it must be a Windows device that is joined to an on-premises AD and registered to Azure AD using Hybrid Azure AD Join. Follow the straightforward instructions in this doc to get Hybrid Azure AD Join configured for your devices.
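Although this post configures everything through the Azure portal, the essential settings of the two policies can be sketched as data, loosely modelled on the Microsoft Graph conditional access policy schema. Treat the exact field names and application labels below as assumptions made for illustration, not the literal portal configuration.

```python
# Sketch of the two CA policies' key settings as plain data structures
# (field names loosely follow the Graph conditional access schema; the
# application names are placeholders for the real ExO/SPO app IDs).
policy_thick_clients = {
    "displayName": "Require domain join for thick clients",
    "conditions": {
        "applications": {"includeApplications": ["Exchange Online", "SharePoint Online"]},
        "clientAppTypes": ["mobileAppsAndDesktopClients"],
    },
    # Only Hybrid Azure AD joined (domain joined) devices are granted access.
    "grantControls": {"operator": "OR", "builtInControls": ["domainJoinedDevice"]},
}

policy_browser = {
    "displayName": "Browser access with MFA and session controls",
    "conditions": {
        "applications": {"includeApplications": ["Exchange Online", "SharePoint Online"]},
        "clientAppTypes": ["browser"],
        "locations": {"excludeLocations": ["Trusted Networks"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    # The SPO "limited, web-only" restrictions are enforced via the
    # SharePoint admin center setting described above.
    "sessionControls": {"applicationEnforcedRestrictions": {"isEnabled": True}},
}

for p in (policy_thick_clients, policy_browser):
    print(p["displayName"], "->", p["grantControls"]["builtInControls"])
```

Seen side by side like this, the division of labour is clear: thick clients are gated on device state, while the browser path is gated on MFA plus app-enforced session restrictions.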

Configuration Steps

To configure the two conditional access policies, simply follow the configuration outlined in the screenshots below.

 

Policy 1- Enabling thick client access for only domain joined devices (effectively blocking thick client access for personal, non domain-joined devices)

Screenshot of the conditional access CA policy configuration

Policy 2- Enforces MFA to access ExO/SPO through the browser on personal PCs, with additional session controls for SPO.

Screenshot of CA policy #2 for SPO session controls, the conditions section

Screenshot of the controls in CA policy #2 for SharePoint session controls

End User Experience

Let's take a look at how these policies impact the end user experience.

When we try to access the Teams client on an unmanaged device (which honors CA policies targeted to SPO), end users will be blocked and notified that they need to be domain joined.

Screenshot of the end user experience on an unmanaged PC accessing the Teams thick client. The end user receives a notification that they aren't able to access it because they're on an unmanaged device

On that same unmanaged device, if we try to access OneDrive through the browser, we will see the limited experience enforced by the SharePoint Online session controls. The banner at the top of the page notifies the end user that they can't download/print/sync using this unmanaged device. This enables your end users to stay productive by accessing and editing documents, while preventing corporate data from leaking to the unmanaged device.

Screenshot of the end user experience on an unmanaged PC accessing SPO through the browser. The user gets a message that they can access/edit files but are prevented from downloading/printing/syncing

Screenshot of the end user experience accessing Word Online (part of SPO) on an unmanaged PC. They can edit the doc but can't sync/download/print

In Review

Scenario Goal: Allow only Domain Joined devices to access corporate data using thick clients, but restrict unmanaged device access to the browser with MFA

Scenario Scope: Windows PCs

Recommended when…

  • You want to enable remote access to web resources on personal PCs
  • You only want full resource access from corporate PCs
  • You want a strong, yet flexible, security posture for Windows PCs

 

In the next post of this series, we will shift our focus to protecting corporate data on mobile devices. Have more questions about protecting unmanaged PC access? Have you tried out these conditional access scenarios? Let us know in the comments below!

 

-Sarah and Josh



from TechNet Blogs https://ift.tt/2jogguV

Tip of the Day: The Panther Folder Mystery

Today's tip...

I’ve always wondered about the Panther directory myself. Perhaps this bit of trivia might also satisfy others with similar curiosity.

Why are Windows setup logs stored in a Panther directory?

The top rated answer appears to be the following:

“Panther” was the code name for the new setup/servicing engine that first shipped in Windows Vista.  I think the intention was to change the folder name to something more generic before Vista shipped but the path became baked into too much code to safely change in time to ship.

Mystery solved?



from TechNet Blogs https://ift.tt/2jktf0y

SPO Tidbit – Changing How Document ID URLs are Retrieved in Document Libraries

Hello All,

This message just came across my inbox and I think you will find it very interesting….

Starting soon we will begin rolling out to all targeted release tenants a change to how document ID URLs are retrieved in document libraries. Document ID URLs are now only available inside the Document ID column, visible on property forms in modern and classic. They are no longer available inside the sharing dialog or inside the callout.  

We anticipate being at 100% production within 6 weeks of starting.

For more information on Document IDs in SharePoint, see: https://support.office.com/en-us/article/activate-and-configure-document-ids-in-a-site-collection-66345c77-f079-4104-ac7a-e25826849306

Pax



from TechNet Blogs https://ift.tt/2I6c9BM

Support-Info: (FIMMA): failed-creation-via-web-services

PRODUCTS / COMPONENTS / SCENARIOS INVOLVED

  • Microsoft Identity Manager 2016
    • Synchronization Service - FIM Service Management Agent
    • Service and Portal

PROBLEM SCENARIO DESCRIPTION

  • Running an Export run profile on the FIM Service Management Agent produces a run status of stopped-server. We want to understand the best way to clear out data in the FIM Service Management Agent connector space to assist with resolving this issue.

    NOTE

    To learn more about the different run profile statuses returned by the WMI RunStatus property when executing run profiles, review this MSDN information: https://msdn.microsoft.com/en-us/library/windows/desktop/ms699322(v=vs.100).aspx

FIM SERVICE MANAGEMENT AGENT ERRORS

CAUSE (failed-creation-via-web-services):

  • The connector space for the FIM Service Management Agent was deleted, and data from the Service and Portal was not reimported into the connector space. This allowed some data to still exist in the Service and Portal that the FIM Service Management Agent has staged as Pending Export Adds.

    NOTE: One of the causes of this issue was the deletion of the FIM Service Management Agent connector space. The recommendation is to review information around this topic prior to deleting a connector space. Find more information here:

 

RESOLUTION (failed-creation-via-web-services):

  1. Remove all the users from the Service and Portal

    NOTE / DISCLAIMER:
    It is extremely important to note that this script will delete objects in the Service and Portal. Once a user object is removed, that user will not have access to the Portal until it is populated into the Service and Portal again.

Additionally, we highly recommend testing any process like this in a staging and/or testing environment prior to executing it in production. This is to safeguard your data.

Once you are ready to execute, be certain that you have a verified backup of your backend FIMService and FIMSynchronizationService databases in regard to disaster recovery.

 

  2. Ensure that the Service and Portal are clear of all EREs
  3. Execute a Full Import (Stage Only) on the FIM Service Management Agent
    • This will bring all of the Synchronization Rules into the FIM Service Management Agent connector space.
  4. Execute a Full Synchronization on the FIM Service Management Agent
  5. Review Pending Exports to understand the data that you will be exporting.
    • You can do this through Search Connector Space > Pending Exports
  6. Once Pending Exports are confirmed, proceed with running an Export on the FIM Service Management Agent
    • From the Actions menu, select Run and then Export
  7. Once the Export is finished, execute a Delta Import (Stage Only) to confirm the exported changes

ADDITIONAL INFORMATION

Deletion of connector spaces

Management Agent Run Status

Other Information

 



from TechNet Blogs https://ift.tt/2rdfWTb

Case of the Hit or Miss Windows 10 Servicing Fail

Hello All,

I hope this finds everyone well and gearing up for summer! As Windows 10 deployments accelerate and you successfully tackle bare metal and legacy-to-UEFI conversion/refresh scenarios, we also find ourselves in a third scenario: servicing Windows 10. Servicing is a new approach to updating Windows and has been introduced and discussed at length in a number of different forums – TechNet, Ignite, blogs, MSDN, etc. As we approach Windows 10 version 1803, by now most of you should have your servicing set up and tested, and you have likely been through one or two rounds of servicing already.

I wanted to take a moment to share something we found when servicing Windows 10 to version 1709, how we analyzed the problem, and what we did to work around it. The scenario is a mix of Windows 10 machines running versions 1511 and 1607 that are failing to service to 1709 via SCCM. We set out to service the 1511 machines initially, where we saw some level of success – and, interestingly, some level of failure; enough failures to raise many eyebrows. Let's say it was a 60/40 ratio, i.e. a 40% failure rate – pretty high, which usually indicates a systemic problem common to the failures. But alas, we are not in the business of speculation! These failures bubbled up, and it was time to roll up the sleeves, dig in, and do some post-mortem to understand why.

Well, as we all know, what we need in our life at this point are logs, logs, logs, and more logs! But where are the logs for servicing? Although the information is out there, it is surprisingly not so easy to find. If you haven't already seen this page, you'll want to head over, check it out, and bookmark it. There's tons of great information in there, with different levels of content for the beginner through to the seasoned IT pro. Understanding how servicing works is going to give you a good foundation on which to troubleshoot these types of failures.
There is quite a bit to take in on the aforementioned page; suffice it to say I will provide some Cliffs Notes here (which are not a replacement for reading that content ; )).

The Process

Windows 10 servicing is broken down into 4 phases, or 5 if you're unlucky enough to experience an uninstall/rollback.  It's a good idea to read through and understand what each phase is doing, where it takes place, and where the logs for each of these phases are located.  A key to finding out which logs were generated, and where, is understanding how many reboots have taken place.  Depending on which logs are generated (and their content), you can deduce the phase in which the servicing operation failed.  The servicing process reboots once between each phase.  This will make more sense later.

Phase 1.  DownLevel - This phase runs in the source OS; this is where all of the install files that are needed are downloaded and prepared for installation.  During this phase we mount the SafeOS WIM file, AKA the WinPE environment, for use after the upcoming (READ: 1st) reboot.  After the SafeOS WIM is mounted and updated for use on the system, we dismount it, apply BCD settings making it the default boot entry, suspend BitLocker, and reboot the machine.
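As a side note, the BitLocker suspension that setup performs in this phase can also be done manually with the built-in cmdlets, which is handy when pre-staging machines. A minimal sketch (the -RebootCount value here is just an illustration):

```powershell
# Suspend BitLocker on the OS volume for one reboot, the same protective step
# setup performs before restarting into the SafeOS WinPE image.
Suspend-BitLocker -MountPoint "C:" -RebootCount 1

# Confirm protection status; it should report Off while suspended.
Get-BitLockerVolume -MountPoint "C:" | Select-Object MountPoint, ProtectionStatus
```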

Reboot.

Phase 2.  SafeOS - After we come back from the first reboot, we are now booting into the SafeOS WIM (WinPE) that was prepared in phase 1.  Once the machine enters WinPE, this is where the bulk of the work to service the operating system is done, AKA where the magic happens.  There are many, many operations performed in this phase.  Some of the key ones are: creating an OS rollback, creating a recovery partition, copying/moving the source WIM (target OS) to the recovery partition, applying the OS WIM, applying drivers, adding the new OS boot entry into BCD, and setting the SafeOS WIM as the default boot entry in BCD.  Once this phase completes successfully, we have applied the new OS and set up the machine to reboot back into the SafeOS.

Reboot.

Phase 3.  First Boot - We are now coming back from the second reboot of the servicing process.  During the First Boot phase we boot back into the SafeOS, new BCD entries are created for the new OS, settings are applied, sysprep is run, and data is migrated.  There is quite a bit going on during this phase as well.

Reboot.

Phase 4.  Second Boot - During the final phase more settings are applied and more data is migrated, system services are started, and the out of box experience (OOBE) phase executes.  The culmination of the process is reaching the start screen and eventually the desktop.

Phase 5.  Rollback - If you've reached this phase, something has gone wrong and your machine is rolled back to the previously existing operating system version.  This implies that somewhere along the line the machine experienced a fatal error and could not continue.  Two logs are of immediate interest if you experience a rollback:

C:\Windows.~BT\Sources\Rollback\setupact.log

C:\Windows.~BT\Sources\Rollback\setuperr.log

These four main phases are documented on the Windows 10 Troubleshoot-Upgrade-Errors page, and a nice graphic is included at the bottom of the page.  For the first three phases you can actually follow along with each item listed in the graphic on the upgrade-errors page by looking at C:\Windows.~BT\Sources\Panther\setupact.log to see which of the first three phases completed successfully.  The page also gives you an idea of where errors are typically seen and what kinds of things can cause them.
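To follow along from an elevated PowerShell prompt, you can grep that log for the platform and phase markers. A rough sketch (the exact log strings vary between builds, so treat the patterns as assumptions to adjust):

```powershell
# Pull phase and overall-progress lines out of the down-level setupact.log.
# The 'SETUPPLATFORMEXE' and 'Phase' patterns are assumptions; tune as needed.
$log = 'C:\Windows.~BT\Sources\Panther\setupact.log'
Select-String -Path $log -Pattern 'SETUPPLATFORMEXE', 'Phase' |
    Select-Object -Last 40 |
    ForEach-Object { $_.Line }
```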

The Problem

Fairly widespread reports of machines taking the upgrade and eventually rolling back began to trickle in.  Results may vary, but on average the servicing process can take between 1 and 3 hours to complete.  The time it takes depends on a number of factors: network uplink speed, processor spec, amount of RAM, type of HDD, etc.  In any event, the time the servicing upgrade took was compounded by the time the rollback took to revert the machine to the previous OS.  You can get an accurate count of overall servicing time and rollback time by looking at the setupact.log files.  In some instances the rollback was still cooking a few hours into the servicing process.
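If you want a quick number rather than eyeballing timestamps, something like the following sketch can estimate elapsed time from the first and last entries in setupact.log (the timestamp format is an assumption and can differ between builds):

```powershell
# Estimate overall servicing time from the first and last timestamps in setupact.log.
$log   = 'C:\Windows.~BT\Sources\Panther\setupact.log'
$stamp = '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}'
$times = Select-String -Path $log -Pattern $stamp |
    ForEach-Object { [datetime]$_.Matches[0].Value }
if ($times.Count -ge 2) {
    $elapsed = $times[-1] - $times[0]
    "Servicing log spans roughly {0:N0} minutes" -f $elapsed.TotalMinutes
}
```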

Why?

First let me state that there are tons of logs generated during the servicing process: xml, etl, log, evtx, text files, etc.  All of them contain information about what happened during servicing; some of them are easy to crack open and consume, some of them aren't as friendly.  Review all of the logs: mount the .evtx logs in Event Viewer, review the flat text and XML files, and to get into those pesky ETL files you can try converting them to CSV or XML with tracerpt:

tracerpt.exe setup.etl -of CSV -o setup.etl.csv
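Once converted, the output is plain CSV you can slice with PowerShell. A quick sketch (the column names tracerpt emits vary by ETL provider, so treat the shape of the data as an assumption):

```powershell
# Load the converted trace and show the last few events for a quick look.
Import-Csv .\setup.etl.csv | Select-Object -Last 10 | Format-Table -AutoSize
```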

So we have "all the logs."  Let me start by saying that setupact.log and setuperr.log are your friends.  They are your go-to.  They likely have the information you are looking for or can give you enough information to point you in the right direction or to another log.

After the dust settled we began to look at a sampling of the machines, effectively scraping the C:\Windows.~BT\Sources and C:\Windows\Panther directories to a file share for analysis.  Since C:\Windows.~BT\Sources\Panther\setupact.log details the first three phases of the servicing process, that's where we wanted to start.  We reviewed the log and, lo and behold, all of the first three phases completed successfully!  One thing to key in on in the log is that SETUPPLATFORMEXE reports global servicing progress as well as phase progress.  You'll see entries similar to the following:

So we were able to quickly narrow down the scope of the failure to one specific phase: Phase 4.  Remember, Phase 4 occurs in the new target operating system, with all drivers and services starting up and running for the first time, buttoning up things like settings and data-migration tasks, reaching the OOBE phase, and finally (hopefully) the desktop.  Only we never reached the desktop.  Since we failed in Phase 4, which takes place in the new target OS, a rollback occurred and logs were created in the following directory: C:\Windows.~BT\Sources\Rollback.  Cracking open our go-to log, we see the following: a rollback has occurred in Phase 4 because of a STOP 0x50 bugcheck, which is PAGE_FAULT_IN_NONPAGED_AREA.  This stop code typically indicates that a driver attempted to read or write an invalid location in memory; in this particular case it was a read operation.  In the event of a bugcheck, a kernel mini-dump is also generated in C:\Windows.~BT\Sources\Rollback.  The dump only contains stack data, and in this case we were not able to have the dump analyzed.  Don't fret, we are still hot on the trail.  Notice about halfway down where it shows "Crash 0x00000050 detected"; the next few lines show information extracted from the dump, and we can actually see a representation of the stack and its frames in the log.  Frames 6-9 are in the mfenlfk.sys driver.


Continuing down the log we see that Windows tried to recover the installation 3 times but bug checked each time with the same stop code, with the same driver in the middle of the stack.

Eventually, after hitting the max recovery attempts, Windows begins the process of rolling back the OS:

Now we've zeroed in on the driver in question, which upon review is a network security driver used by McAfee software, with a time/date stamp that is pretty old.  We engaged McAfee and started an inquiry on the driver, which turned out to be out of date (unsupported) for the version of Windows we were trying to service to (1709).  What we found and reproduced was that even though the system had the latest versions of all the McAfee software installed, this old driver hung around on the system.  Turns out this isn't so good for servicing.

Moving Past

With all eyes on this old driver, we discussed options for ridding the system of it.  How can we get rid of this driver without impacting the system negatively?  What if the wrong driver is removed?  As you can see, the impact of making a mistake here could be potentially catastrophic on a given box.  After much deliberation and a review of our documentation on the driver store, we arrived at the conclusion that the operating system fundamentally supports removing the driver from the store.  Here is a snip of PowerShell (add your logging, customize, etc.) we used to interrogate the driver store, search for the very specific driver in question, and remove it:

To expand on this a little: when you query the driver store, all drivers are returned.  When you find the one you want to remove, you have to remove it by the value of the "Driver" property as seen below.  Use caution: just because you find the value on one machine as oem1.inf does NOT mean it will be the same value on another machine.  The Driver property value is different on each machine, even though the OriginalFileName value is the same.  For this reason we have to use logic to identify the driver, grab the "Driver" property, and feed that to our command to remove the correct driver.  Tricky (1st edition).  Also note lines 1-3: if your Get-WindowsDriver cmdlet returns an error, you may need those if McAfee Access Protection is enabled and blocking access to the temp folder.  Tricky (2nd edition).
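The snippet in the original post was captured as an image, so here is a minimal reconstruction of the approach described above, not the exact script. The mfenlfk name comes from the stack frames seen earlier; everything else is an assumption to adapt and test carefully before deploying:

```powershell
# Reconstruction for illustration only. Find the stale McAfee network filter
# driver in the online driver store by its original INF name, then remove it
# by its published (oemNN.inf) name.
$target = Get-WindowsDriver -Online |
    Where-Object { $_.OriginalFileName -like '*mfenlfk*' }

if ($target) {
    # $target.Driver is the per-machine published name (e.g. oem1.inf) and
    # differs from machine to machine even for the same driver.

    # Windows 10 1607 and later use the newer pnputil switches:
    pnputil.exe /delete-driver $target.Driver /force

    # On 1511 (build 10586) the legacy switches apply instead:
    # pnputil.exe -f -d $target.Driver
}
```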


For the sake of time we used pnputil to remove the driver from the store.  Of note: the command-line switches for pnputil differ on 1511 (build 10586), which uses the legacy switches, while newer builds of Windows 10 use the newer switches.  Tricky (3rd edition).  We placed this as the first item in the servicing task sequence, then called a reboot before the servicing step began.  We tested this on a number of failed machines and they all took the servicing upgrade successfully.  This was quite the long road from initial discovery, to troubleshooting, to root cause, and eventually to finding a workaround.  I hope sharing this with you allows you to better understand the servicing process and how to troubleshoot failures.  I would like to reiterate that the following links provide good information on the topic:

Resolve Windows 10 Upgrade Errors:

https://docs.microsoft.com/en-us/windows/deployment/upgrade/resolve-windows-10-upgrade-errors

Windows 10 Log Files

https://support.microsoft.com/en-us/help/928901/log-files-that-are-created-when-you-upgrade-to-a-new-version-of-window

SetupDiag is a recently released Windows 10 tool that can also be used to troubleshoot servicing failures.  It had not shipped yet at the time we were working this failure, so we didn't get to use it.  Check it out!

https://docs.microsoft.com/en-us/windows/deployment/upgrade/setupdiag

Have a great weekend!

Jesse



from TechNet Blogs https://ift.tt/2JH84AY

SCOM Management Server grayed out with event description "A module of type 'System.DataSubscriber' reported an error 0x80FF0003"

Posts in this blog are provided "AS IS" with no warranties, and confer no rights. Use of included script samples is subject to the terms specified in the Terms of Use. Are you interested in having a dedicated engineer who will be your Microsoft representative?

Let me start with something generic: my Management Server is in a grayed-out state; what do I do next?

I will start by running the below SQL query against the Operations Manager database.

--Replace the name SCOMMS with the name of your Management Server
select BME.Path,AV.ReasonCode,AV.TimeStarted,AV.TimeFinished from AvailabilityHistory AV
join BaseManagedEntity BME on AV.BaseManagedEntityId=BME.BaseManagedEntityId
where BME.FullName like '%SCOMMS%'
order by AV.TimeStarted desc
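If you prefer PowerShell over SSMS, the same query can be run with Invoke-Sqlcmd. A sketch, assuming the SqlServer module is installed and the instance/database names below are placeholders for your environment:

```powershell
# Run the availability-history query against the OperationsManager database.
$query = @"
select BME.Path, AV.ReasonCode, AV.TimeStarted, AV.TimeFinished
from AvailabilityHistory AV
join BaseManagedEntity BME on AV.BaseManagedEntityId = BME.BaseManagedEntityId
where BME.FullName like '%SCOMMS%'
order by AV.TimeStarted desc
"@
Invoke-Sqlcmd -ServerInstance 'SQLSERVER\INSTANCE' -Database 'OperationsManager' -Query $query
```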

Here is the output from my lab.

The reason code descriptions are given below:

17 The Health Service windows service is paused.
25 The Health Service Action Account is misconfigured or has invalid credentials.
41 The Health Service failed to parse the new configuration.
42 The Health Service failed to load the new configuration.
43 A System Rule failed to load.
49 Collection of Object State Change Events is stalled.
50 Collection of Monitor State Change Events is stalled.
51 Collection of Alerts is stalled.
97 The Health Service is unable to register with the Event Log Service. The Health Service cannot log additional Heartbeat and Connector events.
98 The Health Service is unable to parse configuration XML.

 

In our case, the Reason Code is 43 which says "A System Rule failed to load".

If you look at Event Viewer on the Management Server, you will see these events.

These events will definitely tell you that some rules are unloaded.  However, in this case that has not really given us an idea about the problem.  I have worked many cases where it gives the rule name and the issue right away.  In our case the rule name is a Data Warehouse collection rule, so I did not see a need to check it at this point.

Looking through Event Viewer, I found another interesting event.

I checked the status of the server SQL2016 in my console and found that the server has an entry under both Agent Managed and Agentless.  The only way I can think of arriving at such a scenario is to add the server as agentless managed, then install the agent manually and approve it from Pending Management.

And since it is not supported (or recommended) to have the same server under agentless and agent managed at the same time, we ended up in this situation.

I deleted the entry from Agentless Managed, and everything went back to normal and healthy.

So, to avoid such a situation, please make sure you do not have the option "Automatically approve new manually installed agents" selected in the SCOM console.  And if you have a lot of agentless managed computers, do a check before approving them from Pending Management.  You can use the below PowerShell cmdlets to do a quick check.

Get-SCOMAgentlessManagedComputer | select computername
Get-SCOMAgentlessManagedComputer | where {$_.computername -eq 'SQL2016'} | select computername
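To spot any overlap in one pass, here is a hedged sketch that cross-references the two lists (it assumes the OperationsManager module is loaded and connected to your Management Group):

```powershell
# List servers that appear both as agent-managed and agentless managed,
# an unsupported overlap that can gray out a Management Server.
$agentless = Get-SCOMAgentlessManagedComputer |
    Select-Object -ExpandProperty ComputerName
Get-SCOMAgent |
    Where-Object { $agentless -contains $_.ComputerName } |
    Select-Object ComputerName
```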



from TechNet Blogs https://ift.tt/2Fxyumd

May 2018 Non-Security Office Update Release

Listed below are the non-security updates we released on the Download Center and Microsoft Update. See the linked KB articles for more information.

 

Office 2010

Update for Microsoft Outlook 2010 (KB4022144)

 

Office 2013

Update for Microsoft Office 2013 (KB4018389)

Update for Microsoft OneNote 2013 (KB4011281)

Update for Microsoft Outlook 2013 (KB4018376)

Update for Microsoft Project 2013 (KB4018379)

Update for Skype for Business 2015 (KB4018377)

 

Office 2016

Update for Microsoft Office 2016 (KB3203479)

Update for Microsoft Office 2016 (KB4011634)

Update for Microsoft Office 2016 (KB4018318)

Update for Microsoft Office 2016 (KB4018369)

Update for Microsoft Office 2016 (KB4022133)

Update for Microsoft OneNote 2016 (KB4018321)

Update for Microsoft Outlook 2016 (KB4018372)

Update for Microsoft Project 2016 (KB4018373)

Update for Skype for Business 2016 (KB4018367)

 



from TechNet Blogs https://ift.tt/2HFz7A5

This blog has moved to Tech Community!

In an effort to provide you with a single location for announcements and technical blog posts that also provides a channel for discussion with your peers and our product and engineering teams here at Microsoft, the Windows IT Pro blog has moved to the Microsoft Tech Community.

Please bookmark and note the new location: https://aka.ms/windowsforitpros.

 



from TechNet Blogs https://ift.tt/2reAjiK

Microsoft Cloud App Security log collector + OMS = Docker container monitoring

Need a quick method to monitor Docker containers? How about monitoring the Docker container that is utilized for automatic log upload for Microsoft Cloud App Security? If so, try out the Microsoft OMS Container Monitoring Solution to monitor your Docker containers, including the continuous log collectors that Microsoft Cloud App Security runs in Docker!

Did you know that Microsoft Operations Management Suite (OMS) offers many other management and monitoring solutions, including update management for Windows, Surface Hub monitoring, Security and Audit information, and many more? For more details please visit: https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-add-solutions

If you’re utilizing Microsoft Cloud App Security in your environment today and would like to learn more about automatic log upload for continuous Cloud App Security reports please visit: https://docs.microsoft.com/en-us/cloud-app-security/discovery-docker

 

The following walks through setting up the Container Monitoring Solution in Azure to monitor a Docker container used for Cloud App Security automatic log upload hosted on an Azure VM.

Requirements

Assumptions for this post

 

Let’s get started…

Here’s a look at the Ubuntu VM with Docker used for Cloud App Security automatic log upload:

clip_image002

If you have an Azure subscription, log in, select "New" from the upper left, and search for "container monitoring solution":

clip_image004

Select Container Monitoring Solution and Create to add it to your OMS workspace:

clip_image006

clip_image008

Once the instance of Container Monitoring Solution is added, sign on to the host where the containers are deployed and follow the instructions to install the OMS agent used for monitoring the host: https://github.com/Microsoft/OMS-docker#supported-linux-operating-systems-and-docker

 

You’ll run a script that is discussed in the link above to install the OMS agent:

clip_image010

 

Once the installation is complete, navigate back to the OMS admin portal and look for a new tile called "Container Monitoring Solution":

clip_image012

 

Select the tile and view the status of the containers on the host:

clip_image014

clip_image016

clip_image018

 

From the information provided, I can see I have a failure with my Cloud App Security Log Collector (i.e. I named the container "LogCollector"):

clip_image020

When we drill down into the failure, I can see which container is failing, along with other details:

clip_image022

 

Monitoring Docker containers with Microsoft OMS, including the containers used for log collection in Cloud App Security, was really simple, and I encourage everyone to deploy OMS today.



from TechNet Blogs https://ift.tt/2KrFLI4

How to build a strong relationship in the modern workplace

It's migration season in the world of business.

Customers are preparing to leave their existing IT environments. For some, this will not be their first migration. They'll have moved between devices and applications many times in their lives. But for most, there lies ahead a daunting journey. Ahead, they hope, is the modern workplace they've heard so much about. All they need is a guide.

Enter the partner. You're strong, wise, and you know the lie of the land. But you can't survive on your own. You know that it's costly to find new customers - which is why you do whatever you can to hang on to those already in your pack. If an existing customer needs a guide, you'll fight to make sure it's you.

The customer and the partner. You need each other - your relationship is symbiotic. And it faces few tests greater than a migration. Because once the move is done, and the customer is settled, what then?

How do you keep the relationship going?

For your customers, the modern workplace is a destination. It's a smart, secure, simple way of working anywhere. And it's exactly what they're looking for.

For you, the modern workplace is an opportunity. With new technology comes plenty of new ways to add value. The trick to keeping the relationship going is to make sure customers know you're an expert in this space - and that you've only just started to help them succeed.

So, what else can you do for your customers? Here are just a few ideas.

Make management easy

It's quick and easy (and sometimes even self-service) for customers to add new devices to their modern workplace. But they'll all want to move at their own pace. Join them in the planning stage to stop the move and its management getting in the way of their day-to-day work.

 

Keep everything secure

Your customers don't need to get distracted by security updates. In the modern workplace, they happen automatically. And if customers need to configure any special security policies, your knowledge of the IT makes them easy to build and implement - so no threats slip through.

 

Stay on top of the latest tech

This is one of the best bits of the modern workplace. Everyone can get their hands on the latest tools, all the time, anywhere. It's even smoother when you manage this process for your customers - so updates don't impact users while they're working, and it's business as usual for compliance and security.

 

Really know your stuff

What's really happening in your customers' businesses? With analytics, you can have all the answers. So it's easy to spot areas for improvement, drive deployment, and keep customers up to date. When you prove you really know their business, that's a relationship they'll want to hang on to.

 

Better together

Even after the migration is done, customers keep looking for new, better ways of working. Even after they've moved to a complete, intelligent solution like Microsoft 365, they'll want a partner that can take them further. There are lots of ways you can make their environment and their IT smarter, more secure, and simpler.

Download the playbook to see them all. It'll tell you more about your modern workplace opportunity, the conversations you can start, and the value you can add to your customers' businesses - long after they've moved to Microsoft 365.



from TechNet Blogs https://ift.tt/2JF4ozN