Group Policy Analytics' role in moving user authentication to Azure AD

Context

When I meet with customers, one of the conversations I typically have is that organisations would like to start joining new devices (whether Autopilot or ConfigMgr deployed) directly to Azure AD and shift their user authentication process into the cloud. This has many benefits, not least removing the reliance on a VPN back to their on-prem domain controllers to perform the authentication. As the conversation progresses we talk through using Azure AD Connect to synchronise identities, and how that will work with NTFS file shares, and then we hit a snag. Group Policies.

It's worth prefacing this post with the fact that I typically work with medium to large organisations with 500+ devices that have many years of GPOs, possibly created and deployed by admins no longer with the company, and no clear understanding of what each GPO's purpose is, let alone the specific settings within it.

Up until now I have had to explain that these GPOs will no longer apply as the devices will not be joined (or Hybrid joined) to the existing on-prem Active Directory, and Azure AD does not provide support for GPOs. The migration path would necessitate converting their GPOs to Intune Configuration profiles, enrolling their devices in Intune and then deploying the profiles. So far so good… then the killer line: "there is no way to convert GPOs to Configuration profiles and this would be a manual process". At this point they start to estimate the effort involved in analysing these existing GPOs and deciding which ones to migrate and which are stale or no longer required, and they start to favour Hybrid joining the devices just to hang on to what they know, as they see it as less effort. This means they miss out on the benefits of Azure AD joining the devices and keep the reliance on their VPN and line-of-sight connectivity to a Domain Controller.

Now we have a new tool to assist with this conversation, Group Policy Analytics in the Endpoint Manager admin center.

Group Policy Analytics allows us to export our existing GPOs to a .xml file and then upload this file to the Endpoint Manager admin center for automatic analysis of all the configuration contained within the GPO. This will then tell us which of the settings can be migrated to Configuration profiles, which are not supported, and which are deprecated.

Let's see the workflow in action

Process

Firstly, we need to open the Group Policy Management Console on a domain-joined device and locate a GPO we would like to test for conversion. We then right-click on the policy and choose 'Save Report'. Save the .xml file to a temporary location which is easily accessible.

Open Group Policy management and save a GPO as an XML file report.
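For those who prefer scripting, the same export can be produced with the Get-GPOReport cmdlet from the GroupPolicy module (part of RSAT). The GPO name and output path below are just examples; substitute your own:

```powershell
# Requires the GroupPolicy RSAT module on a domain-joined device
Import-Module GroupPolicy

# Export a single GPO as an XML report (name and path are examples)
Get-GPOReport -Name "Workstation Security Baseline" -ReportType Xml -Path "C:\Temp\WorkstationSecurityBaseline.xml"
```

Using the -All switch instead of -Name produces a single report covering every GPO in the domain, which can be useful for a first pass over a large estate.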

Then in the Endpoint Manager admin center navigate to Devices > Group Policy Analytics (Preview), select Import and choose the .xml file we previously exported. For my demonstration I am importing the Microsoft-provided Windows 10 2004 security baseline GPOs, as I know these contain multiple settings and are a good example of complex GPOs.

The admin center then processes the .xml and provides us with a report showing the migration possibility for every individual setting as well as a summary for what percentage of every policy can be migrated to Configuration profiles. We also get a summary showing the total number of settings and migration counts across all GPOs.

Clicking on the percentage column shows us the individual settings included in the GPO and which can be migrated to Configuration profiles.

Summary

This is a big step forward in simplifying the analysis of the workload required to migrate from GPOs to Configuration profiles, and with it helping companies move to the cloud for user authentication. There is still additional work required to review the 'Not Supported' and 'Deprecated' settings to determine what the impact would be and whether any mitigations need to be put in place, but this is a much simpler process than anything that has come before.

However, there are some things still to consider:

Organizational Units to security groups – GPOs by design are deployed to OUs and a computer can only exist in a single OU. Configuration profiles are deployed to security groups. There is currently no way to map OU membership to security groups without writing custom PowerShell scripts and executing these on a regular basis.
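As a rough illustration of what such a custom script might look like, the sketch below mirrors the computer objects in an OU into an on-prem security group (which could then sync to Azure AD via Azure AD Connect). The OU and group names are hypothetical, and this is a sketch to run on a schedule, not production code:

```powershell
# Sketch: keep a security group in sync with an OU's computer objects.
# OU and group names below are hypothetical examples.
Import-Module ActiveDirectory

$ou    = "OU=Workstations,DC=contoso,DC=com"
$group = "SG-Workstations"

$inOu    = Get-ADComputer -Filter * -SearchBase $ou
$inGroup = Get-ADGroupMember -Identity $group | Where-Object { $_.objectClass -eq 'computer' }

# Add computers that are in the OU but not yet in the group
$toAdd = $inOu | Where-Object { $_.DistinguishedName -notin $inGroup.DistinguishedName }
if ($toAdd) { Add-ADGroupMember -Identity $group -Members $toAdd }

# Remove computers that have left the OU
$toRemove = $inGroup | Where-Object { $_.DistinguishedName -notin $inOu.DistinguishedName }
if ($toRemove) { Remove-ADGroupMember -Identity $group -Members $toRemove -Confirm:$false }
```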

Automatic creation of Configuration profile – After the GPO has been analysed by Endpoint Manager it is currently necessary to manually create the Configuration profile. Hopefully Microsoft will introduce a method for automatically creating the required profile with settings ready for deployment.

Intune enrollment – For the purpose of this blog I have assumed that the devices in question will be enrolled in Intune and therefore will support Configuration profiles.

Removing Symantec Outlook Add-in using SCCM

Hi guys,

This week I have been looking into an issue a customer of mine has been experiencing with the Symantec Outlook Add-in crashing repeatedly, taking Outlook down with it, which is a poor user experience.

In order to resolve this issue we decided that the best solution was simply to remove the Add-in from the Symantec Endpoint Protection installation. However, this was complicated by the fact that the Add-in was already installed on all of the workstations, and it is an optional component of the installation rather than a separate application listed in Programs and Features.

Looking in Programs and Features and choosing to modify the Symantec Endpoint Protection installation shows me that currently the feature is installed…

And I want it to change to having the feature removed…

New Installations

As always I took a two-stage approach to resolving this issue: firstly, modify the installation process for Symantec Endpoint Protection so that any workstations that need to install Symantec (primarily during OSD) are not deployed with the issue; then, target a remediation process at the existing workstations. This saves freshly deployed workstations having to run the fix post-deployment, and should also mean the number of unremediated systems only ever decreases, as new affected systems will not be introduced to the environment.

The resolution for the new installations was a simple process of adding the following additional lines to the end of the SetAid.ini file which is included in the Symantec Endpoint Protection source files. This simply instructs the MSI installer which components to install, and setting the OutlookSnapin to 0 means that the component we want to exclude is skipped.
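The exact section and feature names in SetAid.ini vary between SEP versions, so verify against your own source files, but the addition looks something like this (1 = install the feature, 0 = skip it):

```ini
[FEATURE_SELECTION]
OutlookSnapin=0
```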

After updating the INI file I had to redistribute the content to the Distribution Points. I then tested this on a workstation and confirmed that the changes were successful.

Now I know I will not have any additional systems with the Outlook Add-in enabled I can start to resolve the issue on all of my existing workstations.

Existing installations

As we are using SCCM to deploy Symantec Endpoint Protection we already had an application which would perform the installations and I have already modified this application so that new installations will not have the Outlook Add-in enabled. As the application is an MSI type, simply re-running the application on the workstations will modify the existing installation to the desired state.

In order to correctly identify whether the workstations needed to re-run the installation, I needed to modify the Detection Method for the application to confirm that the Outlook Add-in was NOT installed as well as that Symantec Endpoint Protection WAS installed. The existing application only detected whether Symantec Endpoint Protection was installed, so I needed to modify this.

Unfortunately SCCM does not currently have the capability to identify whether a file/folder/registry entry does NOT exist as part of a detection method; it can only identify whether these components exist. However, it is possible to use scripts as the detection method, which means that as long as I can write a script to perform the detection I need, I should be able to successfully identify these systems.

SCCM can run PowerShell, VBScript and JScript for the Detection Method, and as I am more proficient in PowerShell I chose this option. The question now, though, was what criteria should I be querying?

To identify this I simply ran Process Explorer whilst I manually performed the installation of the Outlook Add-in on a test workstation. Analysing the actions of the MSIEXEC process showed me that new files were created in C:\Program Files (x86)\Symantec\Symantec Endpoint Protection\14.2.770.0000.105\Bin\ during the installation, specifically a file called OutlookSessionPlugin.dll.

I also know that in order to identify applications that are installed on a Windows workstation I can check the registry for an entry under the hive HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\, and looking on a test workstation I can see that the MSI product code is {713C5DAE-75BA-4DCA-B328-F96B129DCFD5}.

Now that I know what the criteria for a ‘correct’ installation is I can write a PowerShell script which will detect the criteria and return the correct results to SCCM. This code is:

$FilePath = "C:\Program Files (x86)\Symantec\Symantec Endpoint Protection\14.2.770.0000.105\Bin\OutlookSessionPlugin.dll"
$RegPath = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{713C5DAE-75BA-4DCA-B328-F96B129DCFD5}"

# Detected = SEP is installed (MSI uninstall key present) AND the Outlook
# Add-in DLL is absent. SCCM treats any output to STDOUT as 'detected',
# so we only write output when the desired state is met.
If ((Test-Path $RegPath) -and (-not (Test-Path $FilePath))) {
    Write-Host "Installed"
}

I then ran this manually on test workstations both WITH and WITHOUT the Outlook Add-in and confirmed that the script returns the correct results. I was then able to paste this script into the Detection Method for my application in SCCM.

Now I simply need to test my updated application to ensure that I get the desired results. To do this I deployed the application as 'available' to a collection containing my two test workstations, one with and one without the Outlook Add-in.

Monitoring the AppDiscovery.log I can then see that on my workstation without the Add-in the application is successfully detected, but on my workstation with the Add-in installed the application is not detected.

Clicking ‘Install’ forced the SCCM client to commence the installation of Symantec Endpoint Protection. Once complete the application is successfully detected.

Now that I have tested the updated SCCM application I am confident to deploy it as Required to all of my workstations and complete the task.

Building an SCCM Technical Preview lab

In this modern age of desktop management the rate of change in the management tools is more frequent than ever before.

In order to stay ahead of the curve with SCCM I find it essential to always have an SCCM site stood up using the Technical Preview release in order to test the new features that Microsoft release on a regular basis.

In this post I am going to walk you through the process I follow whenever I need to stand up a new SCCM Technical Preview site.

Install SQL

Firstly I need to install SQL on the allocated server. The highest version currently supported by SCCM is SQL 2017, so this is what I will be installing following the process below:

Mount the SQL ISO file and execute setup.exe

Select to install a new stand-alone installation

Choose to install SQL in Evaluation mode, or enter a product key if you wish to use this environment for more than 180 days

Review and accept the license terms if agreeable

Because I want to ensure we have the latest hotfixes and security updates, select to use Microsoft Update

I can see a warning telling me that the Windows firewall does not have the SQL ports enabled for remote access. In this scenario this is acceptable as all SQL traffic will be hosted on this single server

The only feature required by SCCM is 'Database Engine Services', so I select this

Leave the instance configuration as the default configuration

In the Server Configuration section I need to change the SQL Server Database Engine to run using the local SYSTEM account; all other options can be left as per the default configuration (be aware that the SQL collation is a critical dependency for SCCM: in SQL 2017 the default collation is the required value of SQL_Latin1_General_CP1_CI_AS, but previous versions of SQL had a different default, so double-check this and modify if appropriate)

Add any SQL Server Administrators that are required. I also like to change the default Data Directories to a drive other than C as this is SQL best practice, although in a small test environment this is probably not as important

I can now review the options I have chosen through the wizard and click Install

And eventually the installation will complete
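For repeat lab builds, the same choices can be scripted as an unattended installation using the SQL setup command-line parameters. The account and sysadmin group below are examples; adjust them for your own lab:

```powershell
# Example unattended install, run from the root of the mounted SQL ISO.
# /QS shows progress but needs no input; values below are lab examples.
.\setup.exe /ACTION=Install /QS /IACCEPTSQLSERVERLICENSETERMS `
  /FEATURES=SQLENGINE /INSTANCENAME=MSSQLSERVER `
  /SQLSVCACCOUNT="NT AUTHORITY\SYSTEM" `
  /SQLSYSADMINACCOUNTS="CONTOSO\SCCMAdmins" `
  /SQLCOLLATION=SQL_Latin1_General_CP1_CI_AS `
  /UpdateEnabled=True
```

Explicitly setting /SQLCOLLATION guards against older media defaulting to a collation SCCM will not accept.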

Preparing the Server

I can now start to install and configure the various components that SCCM features rely upon for functionality. My preferred method for performing these tasks is the awesome ConfigMgr Prerequisites Tool written by Nickolaj Andersen and available for download at the Microsoft TechNet gallery https://gallery.technet.microsoft.com/ConfigMgr-2012-R2-e52919cd

Firstly launch the tool select to install the prerequisites for a Primary site

Now, because I am going to be installing the Management Point and Distribution Point on the same server, I need to navigate to Roles and select Management Point then Install

Then the same for the Distribution Point role

Now I need to install the Windows ADK. To do this navigate to ADK and click Load. This will query the Microsoft download servers and determine the latest version of the ADK available for download

As you can see Windows 10 1903 is the latest available at the time of writing. Select this and click Install

Then, because Microsoft have now split the main ADK and the WinPE components into two separate downloads, I will also need to select the option with 'WinPE add-on' and click Install

Now I need to install WSUS on the server so I can perform testing of the Software Updates component of SCCM, so I navigate to WSUS, leave the option as SQL Server and click Install

Now that the WSUS installation is complete I need to complete the post-installation options of WSUS. To do this navigate to Post-Install and click configure. I will need to manually create the folder for WSUS content outside of the tool and input ‘localhost’ for the server name

Now I need to change the SQL memory configuration. In order to do this I need to provide the tool with the location of the SQL instance where we will be installing SCCM, so I navigate to Settings, enter 'localhost' for the server name and click Connect

I can now navigate to SQL Server, leave the default database memory configuration of 8GB and click Configure

I have now completed all of the pre-requisites for SCCM, so I can proceed to installing it

Installing SCCM

Firstly it is necessary to download the latest Technical Preview build from the Evaluation Center and extract the contents. At this time the latest build available for download is 1907 which we will later upgrade to 1909 through the In-Console upgrade process

https://www.microsoft.com/en-us/evalcenter/evaluate-system-center-configuration-manager-and-endpoint-protection-technical-preview

After having extracted the files for the latest Technical Preview it is necessary to navigate to the following location C:\SC_Configmgr_SCEP_TechPreview1907\SMSSETUP\BIN\X64 and then run setup.exe

Leave the default to install a Configuration Manager primary site

Accept the three EULAs

Select a location to download the SCCM pre-requisite files

Select any additional languages for the server

And then select any additional languages for the client

Enter a three character site code, a site name and a location to install SCCM

Then modify any SQL settings as necessary. (For the options we have chosen so far in this tutorial this is not necessary)

Modify the SQL file locations if necessary

And select the SMS provider server name

Select to only use HTTP for client-server communications

Leave the selection to install the Management Point and Distribution Point on this server

Accept to send usage data to Microsoft (this is non-optional in Technical Preview)

Install the Service Connection Point on the local server

Review and confirm the options we have selected

The setup wizard now performs a pre-requisite check and I can see the following screen indicating that I am ready to click Begin Install

The installation will then commence and eventually will complete

This completes the initial installation of SCCM

Updating SCCM

Now I have our SCCM Technical Preview site up and running I want to upgrade it to the latest version so I can see all of the cool new features that Microsoft are working on.

To do this I first need to launch the SCCM console and navigate to Administration > Upgrades and Servicing, and I should see the latest technical preview version listed as ready to install. Click on the Install Upgrade Pack in the ribbon bar

Note – It may take a while for the latest update to show in the console and the status to become Ready to Install. This is because the Service Connection Point may take some time to communicate with Microsoft and download the update. Clicking on the 'Check for Updates' button will force the process to initiate immediately

I can then see the types of updates that are included in the Upgrade Pack; click Next

Select any features I want to enable and click Next. I can always enable these features later if not selected now

I can then select if I want to upgrade my clients with or without validating on a test collection

Using the Enrollment Status Page with Autopilot

In my previous article (here) we looked at Autopilot, what the benefits are for an organisation and how to configure it.

In this article we are going to look at the additional feature of the Enrollment Status Page (ESP) and how that enhances the default Autopilot for both the end user and IT administrators.

What is the Enrollment Status Page?

What you may have noticed when we performed our Autopilot enrollment in the previous lab was that the end user was delivered to their desktop before the Intune enrollment process was complete, and therefore before the compliance policies and applications targeted at the device or user were enforced. This may seem like a trivial issue on the surface, and waiting for the policies to arrive is the resolution, but what if this did cause an issue?

What if an end user started to use the system before all of the applications that he/she needs are fully installed and configured? Chances are that they will open a ticket with the service desk with all of the overheads that entails. What if the user starts to browse the internet before your corporate security policies have been enforced? Then you are playing catchup in a security context, which is always bound to lead to some vulnerabilities.

The fact is end users, and IT professionals, expect devices to be ‘working’ when they are delivered.

To address this, Microsoft has introduced an Enrollment Status Page feature into Intune to allow the onboarding process to be controlled, giving administrators the ability to 'lock' the device until it has been deemed ready for end users to start using it.

Creating a User Group

Firstly, it is only possible to deploy ESP profiles to user groups present within AzureAD. Therefore, we need to either select an existing group that contains our demo user(s) or create a new group. In this demonstration we will create a new group for this purpose.

Open the Azure Portal and navigate to Azure Active Directory > Groups and select New Group

Input the Group Type, Group Name, Group Description and Membership type, select a single user account and click Create

Now that we have a suitable group we can proceed to creating our custom ESP profile

Configuring the Enrollment Status Page

Like all Autopilot and Intune policies, we first need to log on to the Azure portal, then navigate to Intune > Device Enrollment > Windows Enrollment > Enrollment Status Page

Here we can see that there is already a Default policy which is assigned to ‘All users and all devices’. This policy is created on all Intune tenants and as you can see by the description and configuration it is not configured to show the progress of the apps and profile installation.

We will therefore create our own Profile to configure the end user experience of the ESP exactly as we wish

Firstly, navigate to Intune > Device Enrollment > Windows Enrollment > Enrollment Status Page and select Create Profile

We then complete the Name and Description of the Profile, and clicking on Settings opens the settings of the Profile. Here we can configure the ESP exactly as we wish. In my example I have enabled 'Show apps and profiles installation progress' and 'Block device until all apps and profiles are installed', and selected my Office 365 app as one that must finish installing before the device is released

Then click ‘Save‘ to close the Settings blade, and ‘Create‘ to create the profile.

Now we have created our new profile we need to deploy it to the group that we created earlier. To do this click on the ‘Assign‘ button

Then click Select Groups, select the ‘ESP demo’ group we created earlier and click Select

And click Save to commit the changes

We have now completed the setup of the ESP and are now ready to commence testing of the End User experience

Enrollment Status Page experience

To test the ESP experience we need to first start a Windows 10 workstation that is registered in Autopilot and has been reset. I will not go through the details of how to set this up as I would be repeating my previous article.

Firstly, boot the workstation into the OOBE wizard and select the region

Select a keyboard layout

Select an additional keyboard if required

Now, because the device is registered for Autopilot the standard Autopilot experience will take over and prompt the user for credentials

Now we start to see our new ESP controlling the setup experience

This process can take some time because, as we discussed at the start of this article, the purpose of the ESP is to ensure that all enrollment and deployment configuration is completed before the user is delivered to their desktop. Also, in this example we assigned Office 365 as an enforced app which, due to its size, can take some time to download and install depending on bandwidth and workstation performance.

Eventually, though, we see that the user's desktop is loaded, complete with Teams, as Office 365 has been successfully installed.

And that concludes the demo of the Enrollment Status Page

Configuring Microsoft Autopilot

Hi All,

Today I am going to walk you through the setup of an Autopilot demonstration scenario that I recently set up for a customer.

What is Autopilot?

OK, so firstly let’s cover off the basic question of ‘What is Autopilot and why would I want to use it?’

Autopilot was first introduced in Windows 10 1703 and is a deployment methodology from Microsoft that allows organisations to make use of Azure Active Directory and Microsoft Intune to take ownership of, and fully configure, the Windows 10 installation that comes pre-loaded onto new hardware by the OEM. The benefit is that organisations no longer have to purchase hardware, have it shipped to the IT department, wipe the existing OS and load a custom Windows image.

The wipe-and-reload methodology has been around for decades and has served organisations well. Nonetheless, it does come with downsides such as:

  • Creating the custom image
  • Deploying technology (such as MDT or SCCM) to deliver the image
  • Additional workload for IT to perform the wipe-and-reload process
  • Maintaining a driver catalogue each time new hardware types are procured
  • Maintaining Operating System updates within the custom image
  • Maintaining the custom applications installed within the custom image
  • Delay between purchase of hardware and delivery to end-user

Therefore, the ability to simply make use of the OEM image without having to perform the functions listed above has the potential to allow for new hardware to be delivered directly to end users, saving time and money. Additionally, the initial setup of the device could also be performed by the end user, totally removing the overheads for the IT department.

So, now that the purpose and advantages of Autopilot are clearer, let's start to create a demo lab so we can test the functionality…

Pre-requisites

In order to proceed with this lab you will need the following:

  • A single Windows 10 workstation (can be physical or virtual),
    • Build 1703 or above (I will be using 1903 for this demo), LTSC 2019 also supported
    • Professional, Education or Enterprise edition
  • Existing Azure tenant with demo users
  • One of the following licenses assigned to demo users
    • Microsoft 365 E3
    • EMS E3
    • Azure Active Directory P1 & Intune

Obtaining the Hardware ID

The first thing you should know when testing Autopilot is that when a Windows 10 workstation is booted for the first time, the OOBE (Out Of Box Experience) setup process contacts Intune to see if the workstation has been registered for this functionality. This check is performed every time a Windows 10 workstation boots for the first time. If the hardware ID is not registered (or the workstation cannot contact Intune due to lack of internet connectivity) then the OOBE continues with the standard user-driven setup. However, if the hardware ID has been registered for Autopilot then the OOBE branches into that process.

In a production environment this registration will be performed by the OEM, who will provide Microsoft with a list of hardware IDs for the workstations being purchased and the Azure tenant ID that the workstations should be assigned to. Obviously you will need to have provided your tenant ID to the OEM at the time of purchase.

However, in our lab we are not purchasing new hardware but using a VM that has been created specifically for the purpose of testing Autopilot, so we need to manually complete this hardware ID registration by performing the following process:

  • Install Windows 10 on workstation from Microsoft installation media
    • Complete the standard Windows Setup experience using the default options as below
    • Complete the OOBE experience by simply creating ‘user1’ with a temporary password
  • Run the following PowerShell script which will generate the hardware ID for the test workstation and export it to a .CSV file
# Create a working folder for the output
md c:\HWID
Set-Location c:\HWID
# Allow this session to run the downloaded script
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Unrestricted -Force
# Install the Get-WindowsAutopilotInfo script from the PowerShell Gallery
Install-Script -Name Get-WindowsAutopilotInfo -Force
$env:Path += ";C:\Program Files\WindowsPowerShell\Scripts"
# Export this device's hardware hash to a CSV ready for import into Intune
Get-WindowsAutopilotInfo.ps1 -OutputFile AutopilotHWID.csv

Note – I did find on my vanilla workstation that I had to modify the execution policy to allow scripts to be run and also accept the installation of the NuGet provider

Copy the .csv created in c:\HWID to a location that can be accessed from a separate workstation where the Azure portal will be used to make the configuration changes.

Reset the VM

Now that we have the Hardware ID extracted from our test machine we can reset the workstation so that it will perform the OOBE and we can simulate the end user experience.

To do this open Settings > Update & Security > Recovery and click on Get started under Reset this PC. Select Remove everything and Just remove my files. Finally, click on Reset.

This process will take some time and the workstation will restart several times during this process, so we can move on to the next steps while this is processing.
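As a side note, when resetting several test machines it can be quicker to launch the same wizard from the command line using the built-in systemreset utility:

```powershell
# Launches the 'Reset this PC' wizard directly
# (equivalent to the Settings > Update & Security > Recovery path)
systemreset -factoryreset
```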

Importing the Hardware ID file

Now we have the .csv file containing the hardware ID we need to upload this into the Intune portal so Autopilot knows the ID. To do this simply open the Azure portal and navigate to the blade Microsoft Intune – Device Enrollment – Windows Enrollment – Devices

You can see the option to Import at the top of the page. Click this and navigate to the .csv file that was previously created.

This process will eventually complete and you will see the device listed.

You will also note that if you browse to Azure Active Directory – Devices you will see the device we have just imported. Note though that the device is only listed by its serial number as it does not yet have a name (at least not one that is known to AzureAD).

Preparing AzureAD

We now need to configure our environment appropriately to allow Autopilot to function.

Firstly navigate to Azure Active Directory – Devices – Device Settings and enable the option to allow users to register devices in Azure Active Directory. In this demo I am allowing all users to register devices, but you may want to limit this to a test group.

Then we need to set Intune as the MDM authority so that systems that join AzureAD are automatically registered and managed with Intune. We set this in Azure Active Directory – ‘Mobility (MDM and MAM)’

Now we need to create an AzureAD group that we can assign our Autopilot profile to and to make our test workstation a member of the group. To do this we navigate to Azure Active Directory – Groups and click on New Group

We can then name the group as shown below, including making our test machine a member of the group (remember at this stage we are still having to manage the workstation by serial number).

Note – It's important to highlight that for the purpose of this demo we are only adding a single device to the group, but we could make this a dynamic group that automatically contains all devices

We now have everything we need configured in Azure AD and are ready to configure Intune

Configuring Intune

Now we need to create a new Autopilot profile within Intune. To do this navigate to Intune – Device Enrollment – Windows Enrollment – Deployment Profiles and Select Create Profile

We then give the profile a name

Configure the options we want our devices to display to the end-user

Define your desired tags

Finally, we need to deploy the profile. Choose which groups we want to include (or exclude). In the example below, we will select the group we have created specifically for this purpose

This then completes the Intune configuration and we are ready to test our new Autopilot experience.

Autopilot Experience

Now change back to the test workstation. It should now be displaying at the region selection screen. This is the start of the user experience as they would see it if Autopilot was enabled for them.

First the standard Windows 10 setup prompts. Select the Region

Select the keyboard layout

Add any additional keyboard layouts

Now for the different experience with Autopilot: the user will be prompted to enter their username (remember they should use the format username@companyname.com). Note that the device already knows it is managed by our organisation.

And then their password

After the user profile creation process has completed and we arrive at the user’s desktop we can then see in Settings > Accounts that the device is registered to the correct AzureAD tenant.

Also, we can see in Intune that the device is registered and compliant with policies

So that concludes our demonstration of Autopilot. The user was only prompted for the standard Windows 10 setup questions along with their Username and Password and they now have a fully AzureAD and Intune registered workstation ready for management.

Connecting SCCM to Upgrade Analytics

In my last post I detailed the process for deploying Upgrade Analytics and how to use SCCM to configure workstations to upload their telemetry data for processing in Upgrade Analytics.

Now we have this data available to us in Upgrade Analytics I am going to walk through the process of connecting SCCM to import the available Upgrade Analytics data back into the SCCM console. Doing so enables administrators to create SCCM collections based on the Upgrade Analytics data, and then in turn create deployments to remediate issues that have been identified with apps/drivers etc. that are currently blocking in-place upgrades of Windows 10 to the desired build.

Obviously a pre-requisite to following this guide is to have fully deployed Upgrade Analytics according to my previous blog post.

Create Azure AD Web Application

The first stage of connecting SCCM to our existing Upgrade Analytics instance is to create an Azure AD Web Application which will then, in turn, be used to grant SCCM read permissions to the instance.

Firstly, navigate to https://portal.azure.com and logon with your Azure AD credentials. Then navigate to the existing Azure Active Directory instance and select ‘App registrations’.

Now click ‘New Application Registration’ and complete the details as below:

  • Name – Free text but call this something easily identifiable
  • Application Type – Select Web app / API
  • Sign-on URL – Does not need to be a valid URL (as we won’t be redirecting users to this address), but must be in a valid URL format with http:// or https:// as a prefix

And click ‘Create’

The Application will then be created and the details presented in the console

Now click ‘Settings’ then ‘Keys’ to be prompted to create a new key. Enter a name for the new key (maximum 16 characters) and select the duration of the key.

Click ‘Save’

Important – At this stage you will be presented with the key in the form of a 43 character text string. I have deliberately not screenshotted my key, but this is the only time you will be able to read it, so ensure you copy the key now and store it securely. Also, note the expiry date (although this can be retrieved later).

Also collect the Application ID and App ID URL from the application’s properties screen.
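For reference, the portal steps above can also be scripted. Here is a sketch using the current Az PowerShell module (an assumption on my part, as the post uses the portal; the display name and key lifetime below are examples only):

```powershell
# Sketch only - create the app registration and a client secret with the
# Az module instead of the portal. DisplayName and EndDate are examples.
Connect-AzAccount

$app  = New-AzADApplication -DisplayName 'SCCM-UpgradeAnalytics'
$cred = New-AzADAppCredential -ApplicationId $app.AppId -EndDate (Get-Date).AddYears(1)

# Record these values now for the SCCM wizard later
$app.AppId        # Client ID
$cred.SecretText  # Secret key - readable only at creation time
```

As with the portal, the secret is only retrievable at creation time, so capture it immediately.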

Grant the New Application permissions to Upgrade Analytics

Now we have successfully created our Azure AD application we need to grant it the required permissions so it can access the data stored in Upgrade Analytics.

To perform this, within the Azure Portal browse to Resource Groups and select the Resource Group that contains the Upgrade Analytics solution

Under ‘Add a role assignment’ select ‘Add’ and complete the presented screen as below, then click ‘Save’.

Note: It is required to assign the permissions at the Resource Group level as later in the process SCCM will need to create a
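The role assignment can also be made from PowerShell. A sketch with the Az module (the application ID placeholder, role name and resource group name are all examples; use whatever your role assignment screen shows):

```powershell
# Sketch only - grant the new app a role on the Resource Group that
# contains the Upgrade Analytics workspace. All values are examples.
New-AzRoleAssignment -ServicePrincipalName '<application-id-from-earlier>' `
    -RoleDefinitionName 'Contributor' `
    -ResourceGroupName 'RG-UpgradeAnalytics'
```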

Configuring SCCM to connect to Upgrade Analytics

Now we have created our new Azure AD app and granted it the correct permissions we are ready to connect SCCM to Upgrade Analytics.

In the SCCM Console browse to Administration – Cloud Services – Azure Services.

Then right-click on ‘Azure Services’ and select ‘Configure Azure Services’. Complete the presented wizard as shown below.


Then ensure ‘AzurePublicCloud’ is selected and click ‘Import’

You will then need to complete the presented screen with all of the details listed below and click ‘Verify’

  • Azure AD Tenant Name – Free text field but name it something easily identifiable
  • Azure AD Tenant ID – This is the directory ID of your Azure AD instance. This can be found by browsing the properties screen of Azure AD
  • Application Name – Free text field but name it something easily identifiable
  • Client ID – This is the App ID previously obtained
  • Secret Key – This is the Key previously obtained
  • Secret Key Expiry – Ensure the date selected matches the key’s expiry date
  • APP ID URL – This is the previously collected value

Provided everything verifies successfully click ‘OK’ and then ‘Next’ in the wizard

Ensure that the correct Subscription, Resource Group and Windows Analytics workspace are selected and click ‘Next’

Review the settings and click ‘Next’

Once the wizard completes click ‘Close’. We can now see that the Upgrade Analytics Connector is listed in Azure Services

Now if we switch to the Monitoring – Upgrade Readiness node in the SCCM console we can see the data is displayed

This completes the configuration of connecting SCCM to Upgrade Analytics

Deploying Microsoft Upgrade Analytics

Hi All, In today’s post I am going to walk through the process of deploying Microsoft’s Azure/OMS solution named Upgrade Analytics. The purpose of Upgrade Analytics is to assist organisations with their planning process for in-place upgrades of Windows 10 builds, through the review and remediation of all applications and drivers deployed within the existing fleet of workstations, whether Windows 7, Windows 8/8.1 or an existing Windows 10 build.

Throughout this post I will be focusing on how to deploy Upgrade Readiness to existing workstations; the analysis of the information that is collected, processed and presented through Upgrade Analytics will be covered in a separate dedicated post at a later date.

Upgrade Analytics is a ‘solution’ provided by Microsoft that runs within an Operations Management Suite (OMS) workspace, which in turn runs on Azure. Therefore, for the purpose of this post it is assumed that there is an existing Azure tenant available, as well as a valid subscription (Upgrade Analytics is a free solution, but as with all Azure resources the OMS workspace we will create needs to reside within a subscription).

As Upgrade Analytics allows administrators to analyse large numbers of workstations I am going to assume that SCCM has also been deployed in the environment and is available for use.

OK, let’s proceed with the deployment…

Phase 1 – Creating the Upgrade Analytics solution

Firstly, logon to your existing Azure Portal, and select the ‘Create a Resource’ option in the top left and search for ‘Upgrade Analytics’.

Here we can see the full description of Upgrade Analytics, review and select ‘Create’

We are now prompted for a log analytics workspace in which to create the Upgrade Analytics solution. Select ‘select a workspace’

As we can see there are no existing workspaces so I will select the option to ‘Create New Workspace’

We need to name the new Log Analytics workspace, select a suitable Subscription and Resource Group and click ‘OK’

After the workspace has completed deployment we can now click ‘Create’ to start the deployment of the Upgrade Analytics solution

Once the deployment has completed we can verify that the solution exists by searching for ‘Solutions’ in the Microsoft Azure portal

Within the solutions blade we can see that the new Upgrade Analytics solution is now present and we can select it

We can see that there are currently 0 systems uploading data to the new solution, so we will now proceed with configuring workstations to upload data for analysis

Using SCCM to configure workstations to upload data

Now we have created the new solution we need to obtain our Commercial Id Key from the solution and then use SCCM Client Settings to configure this on existing workstations.

Within the new solution we need to select ‘Upgrade Readiness Settings’ and copy the Commercial Id Key that is displayed

Note: On this page we can also change the version of Windows 10 we are assessing our workstations for upgrade to. This will need to be modified each time we are ready to start analysis for the next Windows 10 build
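Under the hood, the Commercial Id Key ends up as a Windows data-collection policy value on each client. On an unmanaged test machine you could set the documented policy values directly, which is a handy way to validate a single workstation before rolling out the SCCM policy. A sketch (replace the placeholder with your own Commercial Id Key):

```powershell
# Sketch - manually set the Windows Analytics telemetry policy values on a
# machine not managed by SCCM. Replace the placeholder with your own key.
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\DataCollection'
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name 'CommercialId' -Value '<your-commercial-id>' -PropertyType String -Force | Out-Null
New-ItemProperty -Path $key -Name 'AllowTelemetry' -Value 1 -PropertyType DWord -Force | Out-Null  # 1 = Basic
```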

Now we need to switch to our SCCM console and browse to ‘Client Settings’

I do not want to deploy this configuration to my existing servers as I will not be assessing these for upgrade but I do want to deploy this to all existing workstations. To allow for this configuration I will create a new Client Settings Policy and give it the name of ‘All Windows Workstations – Upgrade Readiness’

We can now configure the Windows Analytics component of the new client settings policy: enable the management of telemetry settings and paste in our previously obtained Commercial Id Key. I have also selected to enable telemetry data from Windows 8.1 and earlier systems, as I want to assess their readiness for upgrade to Windows 10, and have allowed all Internet Explorer data to be uploaded as I want to analyse this data.

Click ‘OK’, and we now have our new policy ready for deployment

To deploy the new client settings policy simply right-click the new policy and choose ‘Deploy’. Then select the collection we want to deploy to; in this example I am going to be deploying to the ‘All Windows Workstations’ collection

Click ‘OK’ and we can now see that the client settings are deployed

This completes the required configuration within SCCM. We now need to wait for the following processes to complete:

  • SCCM clients to perform a policy refresh – Normally within 1 hour
  • Workstations to upload data to OMS – Normally within 1 day
  • Upgrade Analytics to process the data – Overnight each day
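To speed up the first step on a test client, a machine policy refresh can be triggered on demand rather than waiting for the normal cycle. A sketch using the standard ConfigMgr client trigger schedule:

```powershell
# Sketch - trigger a Machine Policy Retrieval & Evaluation cycle on a client.
# {...0021} is the standard ConfigMgr machine policy trigger schedule ID.
Invoke-WmiMethod -Namespace 'root\ccm' -Class 'SMS_Client' -Name 'TriggerSchedule' `
    -ArgumentList '{00000000-0000-0000-0000-000000000021}'
```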

Verifying the deployment

Two days after configuring my workstations I again logged on to the Azure Portal and navigated to the Upgrade Analytics portal. Here I can now see that my test workstations have successfully uploaded data.

I am now in a position to start using Upgrade Analytics to plan for my upgrades to Windows 10 build 1809!

Enabling TLS v1.2 support in SCCM

Hi All,

I have recently been working with a customer who had a requirement to disable TLS v1.0 and TLS v1.1, as the two protocols are reaching end of life and are now considered insecure for communication between servers. This mandates the use of TLS v1.2 in their SCCM environment.

To be clear on our objectives before beginning: TLS is a security protocol for network communication between servers that is utilised by SQL Server. This particular customer hosts their SCCM SQL database on a server remote from the SCCM Primary Site server, so TLS is used for this communication. If the SQL database were hosted on the same server as the SCCM Primary Site server then this process would not be necessary, as no SQL traffic would traverse the network and TLS would not be required.

Note: The following testing was performed with SCCM 1802, Server 2016 and SQL 2016.

Confirm existing state

Firstly, I just want to demonstrate that the existing SCCM Primary site is communicating with the SQL server without any issues.

To verify this I can check the smsexec.log file on the Primary Site server and confirm there are no errors or warnings present.


Disabling TLS v1.0 and TLS v1.1

Now we have confirmed that the existing environment has no pre-existing issues, the first stage in our process will be to disable TLS 1.0 & 1.1 on our Primary Site Server and the SQL server. This will ensure that all communication will be forced to TLS 1.2.

To perform this we will RDP to each of our servers and open the registry editor to the following location:

‘HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols’


We can see here that there are existing keys for SSL 2.0 and SSL 3.0 but not for TLS 1.0 or TLS 1.1, so we need to create them as shown below. In each of the new keys there should be a DWORD value created named ‘Enabled’ and set to 0 (i.e. disabling the protocol).

Note: I will not screenshot every registry setting for both servers as this would be repetitive, but trust me, I have created all of them on both servers!
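To avoid the repetition entirely, the registry edits can be scripted. A sketch that creates both the Client and Server subkeys per protocol (the SCHANNEL documentation places the ‘Enabled’ value under those subkeys; adjust the layout to match your screenshots if you follow the manual steps exactly):

```powershell
# Sketch - disable TLS 1.0 and 1.1; run on both the Primary Site server
# and the SQL server. Creates Client and Server subkeys per protocol.
$base = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'
foreach ($protocol in 'TLS 1.0', 'TLS 1.1') {
    foreach ($role in 'Client', 'Server') {
        $key = "$base\$protocol\$role"
        New-Item -Path $key -Force | Out-Null
        New-ItemProperty -Path $key -Name 'Enabled' -Value 0 -PropertyType DWord -Force | Out-Null
    }
}
```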


I then restarted the SMS_EXECUTIVE service on the SCCM Primary Site server


And checking again in the smsexec.log file shows that the Primary Site server is now no longer able to communicate with the SQL server


This verifies that TLS 1.0 and 1.1 are now disabled and that SCCM is not currently able to use TLS 1.2 to communicate with the SQL server. So let’s go about fixing that…

 

Enabling TLS 1.2 support

The SCCM Primary Site does not communicate directly with the SQL server. It uses a locally installed SQL Native Client to perform this communication which is actually installed as a part of the SCCM Primary Site installation process when the SQL server is remotely hosted.

We can see in our log file above that the ‘SQL Server Native Client 11.0’ is the driver being called by SCCM and when checking the installed programs list on the Primary Site server we can see that there is a program named ‘Microsoft SQL Server 2012 Native Client’ and the version is 11.0.2100.60. This is the driver that SCCM is using to communicate with the SQL server despite our SQL server actually running SQL 2016.

Upon checking the following Microsoft article it is apparent that the currently installed version of the Native Client does not support TLS 1.2, and therefore we will need to upgrade the client.

https://support.microsoft.com/en-au/help/3135244/tls-1-2-support-for-microsoft-sql-server

Firstly we will uninstall the existing SQL Native Client using the standard Windows uninstall process.


We then need to install the latest version of the SQL Native Client. This can be downloaded from the following location. The file required is ‘sqlncli.msi’

https://www.microsoft.com/en-us/download/details.aspx?id=52676

I simply installed this MSI using all of the default options, so I won’t screenshot each individual step. Once the installation is complete it still registers in Programs and Features as ‘Microsoft SQL Server 2012 Native Client’, but crucially the version has now increased to 11.3.6518.0, which is above the minimum version required for TLS 1.2 support.
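To check the installed Native Client version from PowerShell rather than Programs and Features, something like the following should work (the registry location is an assumption based on where the sqlncli installer records its version):

```powershell
# Sketch - read the installed SQL Server Native Client 11.0 version
# (assumption: the installer writes it under this key)
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server Native Client 11.0\CurrentVersion' |
    Select-Object -ExpandProperty Version
```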


Again, another restart of the SMS_EXECUTIVE service will force SCCM to start using the new version of the client.


And we can see in the smsexec.log file that the Primary Site is now able to successfully communicate with the SQL server.


And launching the console, it successfully connects to the SCCM site


At this stage I am happy to say that we have successfully upgraded our SCCM site to be fully TLS 1.2 compliant.

Obviously this process has been performed on the Primary Site server only. If we had either a Central Administration Site or any Secondary Sites in the hierarchy, this process would need to be repeated for those sites too.

Please feel free to leave me a comment below

Thank you for reading

Deploying Domain Controller using PowerShell on Windows Server Core

Hi All,

In this post I am going to walk you through a process I perform regularly when creating new domains for the purpose of testing, and occasionally in production environments. This procedure is going to be performed on Windows Server 2016 Core edition.

I normally choose Core edition for Domain Controllers as it reduces the overall footprint of the server by not having to run a GUI. This is perfect for Domain Controllers: not only do I want to reduce the possible security vulnerabilities on the server, but after the DC has been deployed I will “never” need to log on to the server itself, as all Active Directory management will be performed using Remote Server Admin Tools on remote servers/workstations. On top of this, it’s always nice to reduce the amount of CPU/RAM the server needs to run.

Note: For the purpose of this post we will assume that the server Operating System has already been deployed and network adaptors configured. I am using Windows Server 2016 Datacenter, but the process should be identical for all other versions and editions. Also, in this demo there is no existing Active Directory Domain deployed in the environment so we will be creating a new AD Forest, Domain and DNS whilst deploying this Domain Controller.

 

Installing the Active Directory features

Windows Server 2016 does not include the necessary features to function as a Domain Controller by default, so the first action we need to perform is to install the required feature. As this process is being performed on Server Core we will need to do this from the command line/PowerShell.

Firstly, connect to the freshly built server using the local credentials supplied during deployment. This will open a Command Prompt window as shown below:


We then need to launch PowerShell by typing the command ‘PowerShell’:


Before installing the Active Directory feature on our server we first need to know the exact name of the feature. To get this we can simply run the cmdlet ‘Get-WindowsFeature’ which will list all of the available features, already installed and available for installation
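Rather than scrolling through the full list, the output can also be filtered (Get-WindowsFeature accepts wildcards in the -Name parameter):

```powershell
# Filter the feature list to Active Directory related features only
Get-WindowsFeature -Name AD*
```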


Scrolling up we can find the Active Directory feature we are looking for ‘Active Directory Domain Services’. We can see by the lack of the X to the left that the feature is not currently installed, and can see in the ‘Name’ column the name of the feature is ‘AD-Domain-Services’.


We can now go ahead and install this feature by running the command ‘Install-WindowsFeature AD-Domain-Services’


Which then returns the following if successful

This completes the installation of the Active Directory Domain Controller feature on our server. We now need to configure the new Domain Controller…

 

Configuring Active Directory Domain Controller

Firstly, we need to ensure that the AD management module is imported to PowerShell so we can start our deployment. I will first check if the module is already imported by running the command ‘Get-Module’


We can see that the module is not currently imported so we will go ahead and import the module by running the command ‘Import-Module ADDSDeployment’


And run ‘Get-Module’ again to confirm that our import has completed successfully


We can see that the ADDSDeployment module is now listed

Now we are ready to actually execute our command to install and configure our new forest and domain. I will be configuring my new domain with the following attributes:

  • DNS Delegation – False
  • Database Path – C:\Windows\NTDS (Default)
  • Domain Mode – Server 2016
  • Domain Name – StingraySystems.com.au
  • NetBIOS Domain name – StingraySystems
  • Forest Mode – Server 2016
  • Install DNS – True
  • Log Path – C:\Windows\NTDS (Default)
  • Reboot on Completion – True
  • Sysvol Path – C:\Windows\SYSVOL

In order to apply all of these attributes during the creation of the Forest and Domain I therefore need to execute the following command – Install-ADDSForest -CreateDnsDelegation:$False -DatabasePath “C:\Windows\NTDS” -DomainMode “7” -DomainName “StingraySystems.com.au” -DomainNetbiosName “StingraySystems” -ForestMode “7” -InstallDns:$True -LogPath “C:\Windows\NTDS” -NoRebootOnCompletion:$False -SysvolPath “C:\Windows\SYSVOL” -Force:$True

Note: Obviously if you intend to run this command on your own server then please adjust the relevant attributes to match your own environment
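For readability, the same command can be expressed with splatting (functionally identical to the one-liner above):

```powershell
# The attributes listed above, passed via a splatted hashtable
$params = @{
    CreateDnsDelegation  = $false
    DatabasePath         = 'C:\Windows\NTDS'
    DomainMode           = '7'                       # Server 2016
    DomainName           = 'StingraySystems.com.au'
    DomainNetbiosName    = 'StingraySystems'
    ForestMode           = '7'                       # Server 2016
    InstallDns           = $true
    LogPath              = 'C:\Windows\NTDS'
    NoRebootOnCompletion = $false
    SysvolPath           = 'C:\Windows\SYSVOL'
    Force                = $true
}
Install-ADDSForest @params
```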

I will then be prompted to enter (and confirm) the Safe Mode Administrator password for the domain. Enter these and press Enter to start the installation of the forest and domain


After a few minutes the server will restart automatically, as we allowed this in the command (-NoRebootOnCompletion:$False)


Once the server completes its restart it is possible to logon to the server again but this time we must use domain credentials as all local accounts will be removed during the promotion to a Domain Controller.

Note: The account used to execute the Domain Controller installation on the first Domain Controller in the domain is automatically converted to a domain account and made a member of the Domain Admins security group so this account can be used for logging on to the server.
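Once logged back on, a couple of quick checks confirm the new forest and Domain Controller are healthy (run these wherever the ActiveDirectory RSAT module is available):

```powershell
# Sketch - verify the new forest and Domain Controller after the reboot
Get-ADDomain | Select-Object DNSRoot, DomainMode
Get-ADDomainController | Select-Object Name, IsGlobalCatalog, OperationMasterRoles
```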

And that completes the installation of the Domain Controller on Windows Server Core. We are now ready to start using the domain.

If you have any questions or feedback on this guide please feel free to leave a comment below…

Deploying Office 365 client using Microsoft Intune

Hi all,

In this post I am going to explain a solution I have designed to overcome a customer’s requirement to deploy the Office 365 click-to-run client to its existing workstation fleet. The customer in question is a small organisation with approximately 160 workstations (a mixture of Windows 7 and Windows 10) that currently does not have application deployment tools in place, but is migrating to Office 365 and also has EM+S licensing.

Included in EM+S is Microsoft Intune, so the decision was made to deploy the Intune agent to all workstations in the domain, which can then be used to deploy the Office 365 client.

This gives us the high level steps of:

  1. Deploy Intune client
  2. Create Office 365 package
  3. Deploy Office 365 using Intune

 

Deploy Intune client

To deploy the Intune client to all workstations we will be using a Group Policy Object, as all the workstations are currently joined to an Active Directory domain. To achieve this the following steps will be undertaken:

  1. Logon to the Intune portal (manage.microsoft.com)
  2. Navigate to Admin -> Client Software Download
  3. Click on the option to ‘Download Client Software’ – this will download a 13 MB zip file
  4. Extract the client files to a local directory – c:\Intune
  5. Extract the Microsoft_Intune_Setup.exe file using the command ‘c:\Intune\Microsoft_Intune_Setup.exe /extract c:\Intune’
  6. Copy the files to a suitable network share – exclude the original Microsoft_Intune_Setup.exe file; there is no need to retain it now we have extracted the contents
  7. Open GPMC
  8. Create a new GPO and link it at the domain root level
  9. Modify the GPO to deploy the Intune agent as a software installation
  10. Verify the GPO is applied to the client
  11. Reboot the client to initiate installation
  12. Track the client installation logs in c:\program files\Microsoft
  13. Verify the workstation is registered in the Intune console
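Steps 3–6 above can be condensed into a few lines of PowerShell. A sketch (the downloaded zip filename, local path and share name are examples only):

```powershell
# Sketch of steps 3-6: extract the downloaded client and stage it on a share.
# The zip filename and share path below are assumptions - adjust to suit.
Expand-Archive -Path "$env:USERPROFILE\Downloads\Microsoft_Intune_Setup.zip" -DestinationPath 'C:\Intune'
& 'C:\Intune\Microsoft_Intune_Setup.exe' /extract 'C:\Intune'
Copy-Item -Path 'C:\Intune\*' -Destination '\\fileserver\software$\Intune' -Exclude 'Microsoft_Intune_Setup.exe'
```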

 

Create an Office 365 installation package

Now we have the Intune client deployed we have the ability to deploy .exe and .msi files to our workstations. I personally like to use the GitHub Office 365 ProPlus | Install Toolkit to create an installer for Office 365 as I find the interface simple and intuitive.

https://officedev.github.io/Office-IT-Pro-Deployment-Scripts/XmlEditor.html

Here are the options I have chosen

  1. Create a new installer
  2. Select the product options for deployment
  3. Choose the desired languages
  4. Choose the following optional settings
  5. Choose which products to exclude – exclude products not licensed, as including them will prevent the software from being installed
  6. Select which version of Office 365 you wish to install
  7. Select which update channel to subscribe to for future updates
  8. Select the options you wish to deploy with – Note: Display level is not relevant when deploying through Intune as Intune only performs hidden deployments
  9. Choose the wrapper options you wish to use – Note: I am creating a .MSI file
  10. Clicking the Generate button then produces a 2 MB file named OfficeProPlus.msi – Note: this file is small as we chose not to include the source files. This was due to the customer having a fast internet connection and wishing to always install the latest Deferred Channel version. Also, deploying through Intune means the source files would have to be transferred across the internet anyway, resulting in the same file transfer requirements

I could now run this .msi installer on a workstation to verify the installation performs as required.
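For that verification, a silent install with verbose logging makes a reasonable smoke test, since Intune will also run the MSI hidden. A sketch (the paths are examples; /qn and /l*v are standard msiexec switches):

```powershell
# Sketch - silent test install of the generated package with a verbose log
Start-Process msiexec.exe -Wait `
    -ArgumentList '/i C:\Temp\OfficeProPlus.msi /qn /l*v C:\Temp\OfficeProPlus-install.log'
```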

 

Deploy Office 365 using Intune

Now we have clients enrolled in our Intune tenant and a valid Office 365 installer we need to bring the two together to complete our solution. For this we will import our new .msi file as an application in Intune and then deploy to a test group to confirm functionality

  1. In the Intune portal navigate to Apps – Apps – Add App; the Microsoft Intune Software Publisher will then launch in a pop-out window. Unfortunately you will need to sign in to this application again
  2. Click Next to begin the wizard
  3. Specify the location of your custom .msi file
  4. Specify the details you would like to appear for end-users. Personally I like to use the Office 365 image to give a more professional look
  5. Choose the requirements as desired; in this example I am not going to specify any
  6. No need to provide any command line arguments
  7. Review the details entered and click Upload
  8. Once the upload of the application has completed we need to deploy the application to our test system. As this is a lab I am simply going to deploy the app to all staff, but in a production environment you may want to limit this to a subset of users
  9. Navigate to Apps – Apps and select the ‘Office ProPlus Installer’ application we have just created
  10. Select Manage Deployment to launch the deployment wizard
  11. Select the all staff collection
  12. Click the Add button in the centre of the wizard and click Next
  13. Configure the deployment to be Required and As Soon As Possible and click Finish
  14. Back on our test client we can either wait for the new policy to be downloaded or force a restart to expedite the process
  15. We can see the new policy downloaded in the *** log file
  16. The installation will then initiate; Office 365 logs its install tasks under c:\windows\temp so we can follow the installation
  17. Eventually we will see the Office icons appear in the Start Menu on the system