
Intermittent Mail Sack: Must Remember to Write 2013 Edition

Hi all, Jonathan here again with the latest edition of the Intermittent Mail Sack. We've had some great questions over the last few weeks, so I've got a lot of material to cover. In this sack, we answer questions on upgrading DFSR hub servers to Windows Server 2012, AD FS sign-out behavior with SharePoint, Dynamic Access Control and DFSR, machine account password changes, cached logon events, DES encryption in Kerberos, and certificate renewal with CEP/CES.

Before we get started, however, I wanted to share information about a new service available to Premier customers through Microsoft Services Premier Support. Many Premier customers will be familiar with the Risk Assessment Program (RAP). Premier Support is now rolling out an online offering called the RAP as a Service (or RaaS for short). Our colleagues over on the Premier Field Engineering (PFE) blog have just posted a description of the new offering, and I encourage you to check it out. I've been working on the Active Directory RaaS offering since the early beta, and we've gotten really good feedback. Unfortunately, the offering is not yet available to non-Premier customers; look at RaaS as yet one more benefit to a Premier Support contract.


 

Now on to the Mail Sack!

Question

I'm considering upgrading my DFSR hub servers to Server 2012. Is there anything I should know before I hit the easy button and do an upgrade?

Answer

The most important thing to note is that Microsoft strongly discourages mixing Windows Server 2012 DFSR with DFSR on legacy operating systems. You mentioned upgrading your hub servers but make no mention of any branch servers. If you're going to upgrade your DFSR servers, then you should upgrade all of them.

Check out Ned's post over on the FileCab blog: DFS Replication Improvements in Windows Server. Specifically, review the section that discusses Dynamic Access Control Support.

Also, there is a minor known issue that we are still tracking. When you upgrade from Windows Server 2008 R2 to Windows Server 2012, the DFS Management snap-in stops working. The workaround is to uninstall and then reinstall the DFS Management tools:


You can also do this with PowerShell:

Uninstall-WindowsFeature -name RSAT-DFS-Mgmt-Con
Install-WindowsFeature -name RSAT-DFS-Mgmt-Con

 

Question

From our SharePoint site, when users click log off, they are sent to this page: https://your_sts_server/adfs/ls/?wa=wsignout1.0.

We configured the FedAuth cookie to be session based after we did this:

$sts = Get-SPSecurityTokenServiceConfig 
$sts.UseSessionCookies = $true 
$sts.Update() 

 

The problem is that, unless the user closes all their browsers, the browser remembers their credentials when they return to the log-in page. This is not acceptable because some PCs are shared by multiple people. Closing all browsers is also not acceptable because our users run multiple web applications.

Answer

(Courtesy of Adam Conkle)

Great question! I hope the following details help you in your deployment:

Moving from a persistent cookie to a session cookie with SharePoint 2010 was the right move in this scenario in order to guarantee that closing the browser window would terminate the session with SharePoint 2010.

When you sign out via SharePoint 2010 and are redirected to the STS URL containing the query string: wa=wsignout1.0, this is what we call a WS-Federation sign-out request. This call is sufficient for signing out of the STS as well as all relying parties signed into during the session.

However, what you are experiencing is expected behavior for how Integrated Windows Authentication (IWA) works with web browsers. If your web browser client experienced either a no-prompt sign-in (using Kerberos authentication for the currently signed-in user) or an NTLM prompted sign-in (the user provided credentials in a Windows Authentication "401" credential prompt), then the browser will remember the Windows credentials for that host for the duration of the browser session.

If you were to collect a HTTP headers trace (Fiddler, HTTPWatch, etc.) of the current scenario, you will see that the wa=wsignout1.0 request is actually causing AD FS and SharePoint 2010 (and any other RPs involved) to clean up their session cookies (MSISAuth and FedAuth) as expected. The session is technically ending the way it should during sign-out. However, if the client keeps the current browser session open, browsing back to the SharePoint site will cause a new WS-Federation sign-in request to be sent to AD FS (wa=wsignin1.0). When the sign-in request is sent to AD FS, AD FS will attempt to collect credentials with a HTTP 401, but, this time, the browser has a set of Windows credentials ready to provide to that host.

The browser provides those Windows credentials without a prompt shown to the user, and the user is signed back into AD FS, and, thus, is signed back into SharePoint 2010. To the naked eye, it appears that sign-out is not working properly, while, in reality, the user is signing out and then signing back in again.

To conclude, this is by-design behavior for web browser clients. There are two workarounds available:

Workaround 1

Switch to forms-based authentication (FBA) for the AD FS Federation Service. The following article details this quick and easy process: AD FS 2.0: How to Change the Local Authentication Type

Workaround 2

Instruct your user base to always close their web browser when they have finished their session.

Question

Are the attributes for files and folders used by Dynamic Access Control replicated with the object? That is, using DFSR, if I replicate the file to another server that uses the same policy, will the file have the same effective permissions on it?

Answer

(Courtesy of Mike Stephens)

Let me clarify some aspects of your question as I answer each part.

When enabling Dynamic Access Control on files and folders there are multiple aspects to consider that are stored on the files and folders.

Resource Properties

Resource Properties are defined in AD and used as a template to stamp additional metadata on a file or folder that can be used during an authorization decision. That information is stored in an alternate data stream on the file or folder. This would replicate with the file, the same as the security descriptor.

Security Descriptor

The security descriptor replicates with the file or folder. Therefore, any conditional expression would replicate in the security descriptor.

All of this occurs outside of Dynamic Access Control -- it is a result of replicating the file throughout the topology, for example, if using DFSR. Central Access Policy has nothing to do with these results.

Central Access Policy

Central Access Policy is a way to distribute permissions without writing them directly to the DACL of a security descriptor. So, when a Central Access Policy is deployed to a server, the administrator must then link the policy to a folder on the file system. This linking is accomplished by inserting a special ACE in the auditing portion of the security descriptor that informs Windows that the file/folder is protected by a Central Access Policy. The permissions in the Central Access Policy are then combined with Share and NTFS permissions to create an effective permission.

If a file/folder is replicated to a server that does not have the Central Access Policy deployed to it, then the Central Access Policy is not valid on that server, and those permissions would not apply.
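If you want to see which Central Access Policies are defined in your forest (and which central access rules they contain), a quick sketch using the Active Directory module on Windows Server 2012 might look like this:

# List the Central Access Policies defined in AD and the central access rules they contain
Import-Module ActiveDirectory
Get-ADCentralAccessPolicy -Filter * | Select-Object Name, Members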

Question

I read the post located here regarding the machine account password change in Active Directory.

Based on what I read, if I understand this correctly, the machine password change is generated by the client machine and not AD. I have been told (according to this post, inaccurately) that AD requires this password reset or the machine will be dropped from the domain.

I am a Macintosh systems administrator, and as you probably know, this issue does indeed occur on Mac systems.

I have set the password reset interval to various durations, from fourteen days (the default) down to one day.

I have found that if I disjoin and rejoin the machine to the domain, it will generate a new password and work just fine for 30 days. At that point, it will be dropped from the domain and have to be rejoined. This does not happen 100% of the time, but it happens often enough to be a problem for us: we are a higher education institution which, in addition to our many PCs, also utilizes a substantial number of Macs. Additionally, we have a script that runs every 60 days to delete machine accounts from AD to keep it clean, so if a machine has been turned off for more than 60 days, its account no longer exists.

I know your forte is AD/Microsoft support, however I was hoping that you might be able to offer some input as to why this might fail on the Macs and if there is any solution which we could implement.

Other Mac admins have found workarounds, like eliminating the need for the password reset or exempting the Macs from the cleanup script, but our security team does not want to do this.

Answer

(Courtesy of Mike Stephens)

Windows has a security policy setting named Domain member: Disable machine account password change, which determines whether the domain member periodically changes its computer account password. Typically, a Mac, Linux, or UNIX operating system uses some version of Samba to accomplish domain interoperability. I'm not familiar with how this works on the Mac; however, on Linux you would use the command:

net ads changetrustpw 

 

By default, Windows machines initiate a computer password change every 30 days. You could schedule this command to run every 30 days once it completes successfully. Beyond that, basically we can only tell you how to disable the domain controller from accepting computer password changes, which we do not encourage.
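For reference, on the Windows side these behaviors are backed by registry values under the Netlogon service parameters. A hedged sketch to inspect them on a domain member (the values may simply be absent when the defaults are in effect):

# DisablePasswordChange (1 = never change) and MaximumPasswordAge (days, default 30)
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters' |
    Select-Object DisablePasswordChange, MaximumPasswordAge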

Question

I recently installed a new server running Windows 2008 R2 (as a DC) and a handful of client computers running Windows 7 Pro. On a client, which is shared by two users (userA and userB), I see the following event in the Event Viewer after userA logs on.

Event ID: 45058 
Source: LsaSrv 
Level: Information 
Description: 
A logon cache entry for user userB@domain.local was the oldest entry and was removed. The timestamp of this entry was 12/14/2012 08:49:02. 

 

All is working fine. Both userA and userB are able to log on to the domain by using this computer. Do you think I have to worry about this message, or can I just safely ignore it?

Fyi, our users never work offline, only online.

Answer

By default, a Windows operating system will cache 10 domain user credentials locally. When the maximum number of credentials is cached and a new domain user logs onto the system, the oldest credential is purged from its slot in order to store the newest credential. This LsaSrv informational event simply records when this activity takes place. Once the cached credential is removed, it does not imply the account cannot be authenticated by a domain controller and cached again.

The number of "slots" available to store credentials is controlled by:

Registry path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon
Setting Name: CachedLogonsCount
Data Type: REG_SZ
Value: Default value = 10 decimal, max value = 50 decimal, minimum value = 1

Cached credentials can also be managed with group policy by configuring:

Group Policy Setting path: Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options.
Group Policy Setting: Interactive logon: Number of previous logons to cache (in case domain controller is not available)

The workstation must have connectivity with the domain, and the user must authenticate with a domain controller, for their credentials to be cached again once they have been purged from the system.

I suspect that your CachedLogonsCount value has been set to 1 on these clients, meaning that the workstation can only cache one user credential at a time.
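To confirm the configured value on a client, a quick check against the registry location above might look like this:

# CachedLogonsCount is stored as a REG_SZ; the default is 10
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon' -Name CachedLogonsCount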

Question

In Windows 7 and Windows Server 2008 R2, Kerberos DES encryption is disabled by default.

At what point will support for DES Kerberos encryption be removed? Does this happen in Windows 8 or Windows Server 2012, or will it happen in a future version of Windows?

Answer

DES is still available as an option on Windows 8 and Windows Server 2012, though it is disabled by default. It is too early to discuss the availability of DES in future versions of Windows right now.

There was an Advisory Memorandum published in 2005 by the Committee on National Security Systems (CNSS) stating that DES and all DES-based systems (3DES, DES-X) would be retired from all US Government uses by 2015. That memorandum, however, is not necessarily a binding document. It is expected that 3DES/DES-X will continue to be used in the private sector for the foreseeable future.

I'm afraid that we can't completely eliminate DES right now. All we can do is push it to the back burner in favor of newer and better algorithms like AES.
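If you want to find accounts that still explicitly require DES before you push it to that back burner, a hedged sketch using the Active Directory module (UF_USE_DES_KEY_ONLY is the 0x200000 bit of userAccountControl):

# Find user accounts flagged "Use Kerberos DES encryption types for this account"
Get-ADUser -Filter 'userAccountControl -band 2097152' -Properties userAccountControl |
    Select-Object Name, SamAccountName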

Question

I have two issuing certification authorities in our corporate network. All our approved certificate templates are published on both issuing CAs. We would like to enable certificate renewals from the Internet with our Internet-facing CEP/CES configured for certificate authentication in Certificate Renewal Mode Only. What we understand from the whitepaper is that this is not going to work because the CA that issued the certificate must be the same CA used for certificate renewal.

Answer

First, I need to correct an assumption you made based on your reading of the whitepaper. There is no requirement that, when a certificate is renewed, the renewal request be sent to the same CA that issued the original certificate. This means that your clients can go to either enrollment server to renew the certificate. Here is the process for renewal:

  1. When the user attempts to renew their certificate via the MMC, Windows sends a request to the Certificate Enrollment Policy (CEP) server URL configured on the workstation. This request includes the template name of the certificate to be renewed.
  2. The CEP server queries Active Directory for a list of CAs capable of issuing certificates based on that template. This list will include the Certificate Enrollment Web Service (CES) URL associated with that CA. Each CA in your environment should have one or more instances of CES associated with it.
  3. The list of CES URLs is returned to the client. This list is unordered.
  4. The client randomly selects a URL from the list returned by the CEP server. This random selection ensures that renewal requests are spread across all returned CAs. In your case, if both CAs are configured to support the same template, then if the certificate is renewed 100 times, either with or without the same key, then that should result in a nearly 50/50 distribution between the two CAs.

The behavior is slightly different if one of your CAs goes down for some reason. In that case, should clients encounter an error when trying to renew a certificate against one of the CES URIs then the client will failover and use the next CES URI in the list. By having multiple CAs and CES servers, you gain high availability for certificate renewal.
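For scripted enrollment or renewal against a CEP endpoint, the Windows 8/Windows Server 2012 PKI module can also be used. The URL and template name below are hypothetical placeholders, not values taken from this environment:

# Request (or renew) a certificate against a Certificate Enrollment Policy URL
Get-Certificate -Url 'https://cep.example.com/ADPolicyProvider_CEP_Certificate/service.svc/CEP' `
                -Template 'ExampleUserTemplate' -CertStoreLocation Cert:\CurrentUser\My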

Other Stuff

I'm very sad that I didn't see this until after the holidays. It definitely would have been on my Christmas list. A little pricey, but totally geek-tastic.

This was also on my list this year. Go Science!


Please do keep those questions coming. We have another post in the hopper going up later in the week, and soon I hope to have some Windows Server 2012 goodness to share with you. From all of us on the Directory Services team, have a happy and prosperous New Year!

Jonathan "13th baktun" Stephens

 

 


ADAMSync + (AD Recycle Bin OR searchFlags) = "FUN"

Hello again ADAMSyncers! Kim Nichols here again with what promises to be a fun and exciting mystery solving adventure on the joys of ADAMSync and AD Recycle Bin (ADRB) for AD LDS. The goal of this post is two-fold:

  1. Explain AD Recycle Bin for AD LDS and how to enable it
  2. Highlight an issue that you may experience if you enable AD Recycle Bin for AD LDS and use ADAMSync

I'll start with some background on AD Recycle Bin for AD LDS and then go through a recent mind-boggling scenario from beginning to end to explain why you may not want (or need) to enable AD Recycle Bin if you are planning on using ADAMSync.

Hold on to your hats!

AD Recycle Bin for ADLDS

If you're not familiar with AD Recycle Bin and what it can do for you, check out Ned's prior blog posts or the content available on TechNet.

The short version is that AD Recycle Bin is a feature added in Windows Server 2008 R2 that allows Administrators to recover deleted objects without restoring System State backups and performing authoritative restores of those objects.  

Requirements for AD Recycle Bin

   

To enable AD Recycle Bin (ADRB) for AD DS your forest needs to meet some basic requirements:

   

  1. Have extended your schema to Windows Server 2008 R2.
  2. Have only Windows Server 2008 R2 DCs in your forest.
  3. Raise your domain(s) functional level to Windows Server 2008 R2.
  4. Raise your forest's functional level to Windows Server 2008 R2.

   

What you may not be aware of is that AD LDS has this feature as well. The requirements for implementing ADRB in AD LDS are the same as AD DS although they are not as intuitive for AD LDS instances.

 

Schema must be Windows Server 2008 R2

   

If your AD LDS instance was originally built as an ADAM instance, then you may or may not have extended the schema of your instance to Windows Server 2008 R2. If not, upgrading the schema is a necessary first step in order to support ADRB functionality.

   

To update your AD LDS schema to Windows Server 2008 R2, run the following command from your ADAM installation directory on your AD LDS server:

   

Ldifde.exe -i -f MS-ADAM-Upgrade-2.ldf -s server:port -b username domain password -j . -$ adamschema.cat

   

You'll also want to update your configuration partition:

   

ldifde -i -f ms-ADAM-Upgrade-1.ldf -s server:portnumber -b username domain password -k -j . -c "CN=Configuration,DC=X" #configurationNamingContext

   

Information on these commands can be found on TechNet.

Decommission any Windows Server 2003 ADAM servers in the Replica set

   

In an AD DS environment, ADRB requires that all domain controllers in the forest be running Windows Server 2008 R2. Translating this to an AD LDS scenario, all servers in your replica set must be running Windows Server 2008 R2. So, if you've been hanging on to those Windows Server 2003 ADAM servers for some reason, now is the time to decommission them.

 

LaNae's blog "How to Decommission an ADAM/ADLDS server and Add Additional Servers" explains the process for removing a replica member. The process is pretty straightforward and just involves uninstalling the instance, but you will want to check FSMO role ownership, overall instance health, and application configurations before blindly uninstalling. Now is not the time to discover that applications have been hard-coded to point to your Windows Server 2003 server, or that you've unknowingly been having replication issues.

   

Raise the functional level of the instance

   

In AD DS, raising the domain and forest functional levels is easy; there's a UI -- AD Domains and Trusts. AD LDS doesn't have this snap-in, though, so it is a little more complicated. There's a good KB article (322692) that details the process of raising the functional levels of AD and gives us insight into what we need to do to raise our AD LDS functional level, since we can't use the AD Domains and Trusts MMC.

   

AD LDS only has the concept of forest functional levels. There is no domain functional level in AD LDS. The forest functional level is controlled by the msDS-Behavior-Version attribute on the CN=Partitions object in the Configuration naming context of your AD LDS instance.

   


   

Simply changing the value of msDS-Behavior-Version from 2 to 4 will update the functional level of your instance from Windows Server 2003 to Windows Server 2008 R2. Alternatively, you can use Windows PowerShell to upgrade the functional level of your AD LDS instance. For AD DS, there is a dedicated Windows PowerShell cmdlet for raising the forest functional level called Set-ADForestMode, but this cmdlet is not supported for AD LDS. To use Windows PowerShell to raise the functional level for AD LDS, you will need to use the Set-ADObject cmdlet to specify the new value for the msDS-Behavior-Version attribute.

   

To raise the AD LDS functional level using Windows PowerShell, run the following command (after loading the AD module):

   

Set-ADObject -Identity <path to Partitions container in Configuration Partition of instance> -Replace @{'msds-Behavior-Version'=4} -Server <server:port>

   

For example in my environment, I ran:

   

Set-ADObject -Identity 'CN=Partitions,CN=Configuration,CN={A1D2D2A9-7521-4068-9ACC-887EDEE90F91}' -Replace @{'msDS-Behavior-Version'=4} -Server 'localhost:50000'

   

 

 


   

   

As always, before making changes to your production environment:

  1. Test in a TEST or DEV environment
  2. Have good back-ups
  3. Verify the general health of the environment (check replication, server health, etc)

   

Now we're ready to enable AD Recycle Bin! 

Enabling AD Recycle Bin for AD LDS

   

For Windows Server 2008 R2, the process for enabling ADRB in AD LDS is nearly identical to that for AD DS. Either Windows PowerShell or LDP can be used to enable the feature. Also, there is no UI for enabling ADRB for AD LDS in Windows Server 2008 R2 or Windows Server 2012. Windows Server 2012 does add the ability to enable ADRB and restore objects through the AD Administrative Center for AD DS (you can read about it here), but this UI does not work for AD LDS instances on Windows Server 2012.

   

Once the feature is enabled, it cannot be disabled. So, before you continue, be certain you really want to do this. (Read this whole post to help you decide.)

   

The ADRB can be enabled in both AD DS and AD LDS using a PowerShell cmdlet, but the syntax is slightly different between the two. The difference is fully documented in TechNet.

   

In my lab, I used the PowerShell cmdlet to enable the feature rather than using LDP. Below is the syntax for AD LDS:

   

Enable-ADOptionalFeature 'recycle bin feature' -Scope ForestOrConfigurationSet -Server <server:port> -Target <DN of configuration partition>

   

Here's the actual cmdlet I used and a screenshot of the output. The cmdlet asks you to confirm that you want to enable the feature, since this is an irreversible process.
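Plugging in the same lab instance values used in the Set-ADObject example above, the command looks like this:

# Values below are from my lab instance (localhost:50000); substitute your own server:port and configuration partition DN
Enable-ADOptionalFeature 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Server 'localhost:50000' -Target 'CN=Configuration,CN={A1D2D2A9-7521-4068-9ACC-887EDEE90F91}'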

   

 

 


   

You can verify that the command worked by checking the msDS-EnabledFeature attribute on the Partitions container of the Configuration NC of your instance.
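The same check can be scripted; a sketch using the lab values from above:

# msDS-EnabledFeature should list the DN of the Recycle Bin Feature once it is enabled
Get-ADObject -Identity 'CN=Partitions,CN=Configuration,CN={A1D2D2A9-7521-4068-9ACC-887EDEE90F91}' -Properties 'msDS-EnabledFeature' -Server 'localhost:50000' |
    Select-Object -ExpandProperty 'msDS-EnabledFeature'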

   


 

Seemed like a good idea at the time. . .

   

Now, on to what prompted this post in the first place.

   

Once ADRB is enabled, there is a change to how deleted objects are handled when they are removed from the directory. Prior to enabling ADRB, when an object is deleted it is moved to the Deleted Objects container within the application partition of your instance (CN=Deleted Objects,DC=instance1,DC=local, or whatever the name of your instance is) and most of the attributes are removed. Without Recycle Bin enabled, a user object in the Deleted Objects container looks like this in LDP:

   


   

After enabling ADRB, a deleted user object looks like this in LDP:

   


   

Notice that after enabling ADRB, givenName, displayName, and several other attributes including userPrincipalName (UPN) are maintained on the object while in the Deleted Objects container. This is great if you ever need to restore this user: most of the data is retained and it's a pretty simple process using LDP or PowerShell to reanimate the object without the need to go through the authoritative restore process. But, retaining the UPN attribute specifically can cause issues if ADAMSync is being used to synchronize objects from AD DS to AD LDS since the userPrincipalName attribute must be unique within an AD LDS instance.

   

In general, the recommendation when using ADAMSync is to perform all user management (additions/deletions) on the AD DS side of the sync and let the synchronization process handle the edits in AD LDS. There are times, though, when you may need to remove users in AD LDS in order to resolve synchronization issues, and this is where having ADRB enabled will cause problems.

   

For example:

   

Let's say that you discover that you have two users with the same userPrincipalName in AD and this is causing issues with ADAMSync: the infamous ATT_OR_VALUE_EXISTS error in the ADAMSync log.

   

====================================================

Processing Entry: Page 67, Frame 1, Entry 64, Count 1, USN 0 Processing source entry <guid=fe36238b9dd27a45b96304ea820c82d8> Processing in-scope entry fe36238b9dd27a45b96304ea820c82d8.

   

Adding target object CN=BillyJoeBob,OU=User Accounts,dc=fabrikam,dc=com. Adding attributes: sourceobjectguid, objectClass, sn, description, givenName, instanceType, displayName, department, sAMAccountName, userPrincipalName, Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:

0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)

   

. Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:

0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)

===============================================

   

   

Upon further inspection of the users, you determine that at some point a copy was made of the user's account in AD and the UPN was not updated. The old account is not needed anymore but was never cleaned up either. To get your ADAMSync working, you:

  1. Delete the user account that synced to AD LDS.
  2. Delete the extra account in AD (or update the UPN on one of the accounts).
  3. Try to sync again

   

BWAMP!

   

The sync still fails with the ATT_OR_VALUE_EXISTS error on the same user. This doesn't make sense, right? You deleted the extra user in AD and cleaned up AD LDS by deleting the user account there. There should be no duplicates. The ATT_OR_VALUE_EXISTS error is not an ADAMSync error. ADAMSync is making LDAP calls to the AD LDS instance to create or modify objects. This error is an LDAP error from the AD LDS instance, and it is telling you that you already have an object in the directory with that same userPrincipalName. For what it's worth, I've never seen this error logged if the duplicate isn't there. It is there; you just have to find it!

   

At this point, it's not hard to guess where the duplicate is coming from, since we've already discussed ADRB and the attributes maintained on deletion. The duplicate userPrincipalName is coming from the object we deleted from the AD LDS instance and is located in the Deleted Objects container. The good news is that LDP allows you to browse the container to find the deleted object. If you've never used LDP before to look through the Deleted Objects container, TechNet provides information on how to browse for deleted objects via LDP.
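If you prefer PowerShell to LDP for the hunt, a hedged sketch of the search; the UPN, partition DN, and server:port are illustrative values based on the example above:

# Search the application partition, including tombstones, for the duplicate UPN
Get-ADObject -Filter 'userPrincipalName -eq "billyjoebob@fabrikam.com"' `
    -IncludeDeletedObjects -SearchBase 'DC=fabrikam,DC=com' `
    -Properties userPrincipalName, isDeleted -Server 'localhost:50000'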

 

It's great that we know why we are having the problem, but how do we fix it? Now that we're already in this situation, the only way to fix it is to eliminate the duplicate UPN from the object in CN=Deleted Objects. To do this (a PowerShell sketch follows these steps):

   

  1. Restore the deleted object in AD LDS using LDP or PowerShell
  2. After the object is restored, modify the UPN to something bogus that will never be used on a real user
  3. Delete the object again
  4. Run ADAMSync again
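A hedged PowerShell sketch of steps 1 through 3; again, the UPN, partition DN, and server:port are illustrative:

# 1. Find and restore the deleted object
$deleted = Get-ADObject -Filter 'userPrincipalName -eq "billyjoebob@fabrikam.com"' `
    -IncludeDeletedObjects -SearchBase 'DC=fabrikam,DC=com' -Server 'localhost:50000'
Restore-ADObject -Identity $deleted.ObjectGUID -Partition 'DC=fabrikam,DC=com' -Server 'localhost:50000'
# 2. Change the UPN to a bogus value that will never collide with a real user
Set-ADObject -Identity $deleted.ObjectGUID -Replace @{userPrincipalName = 'retired.duplicate.0001@fabrikam.com'} -Server 'localhost:50000'
# 3. Delete the object again, then re-run ADAMSync
Remove-ADObject -Identity $deleted.ObjectGUID -Server 'localhost:50000' -Confirm:$false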

   

Now your sync should complete successfully!

Not so fast, you say . . .

   

So, I was feeling pretty good about myself on this case. I had spent hours figuring out ADRB for AD LDS, setting up the repro in my lab, and proving that deleting objects with ADRB enabled could cause ATT_OR_VALUE_EXISTS errors during ADAMSync. I was already patting myself on the back and starting my victory lap when I got an email back from my customer stating that the msDS-Behavior-Version attribute on their AD LDS instance was still set to 2.

   

Huh?!

   

I'll admit it, I was totally confused. How could this be? I had LDP output from the customer's AD LDS instance and could see that the userPrincipalName attribute was being maintained on objects in the Deleted Objects container. I knew from my lab that this is not normal behavior when ADRB is disabled. So, what the heck is going on?

   

I know when I'm beat, so decided to use one of my "life lines" . . . I emailed Linda Taylor. Linda is an Escalation Engineer in the UK Directory Services team and has been working with ADAM and AD LDS much longer than I have. This is where I should include a picture of Linda in a cape because she came to the rescue again!

   

Apparently, there is more than one way for an attribute to be maintained on deletion. The most obvious was that ADRB had been enabled. The less obvious requires a better understanding of what actually happens when an object is deleted. Transformation into a Tombstone documents this process in more detail. The part that is important to us is that any attribute whose searchFlags value includes the PRESERVE_ON_DELETE (0x8) bit is retained on the tombstone when the object is deleted.

The Schema Management snap-in doesn't allow us to see attributes on attributes, so to verify the value of searchFlags on the userPrincipalName attribute we need to use ADSIEdit or LDP.

   

WARNING: Modifying the schema can have unintended consequences. Please be certain you really need to do this before proceeding and always test first!

   

By default, the searchFlags attribute on userPrincipalName should be set to 0x1 (INDEX).

   


   

   

My customer's searchFlags attribute was set to 0x1F (31 decimal) = (INDEX | CONTAINER_INDEX | ANR | PRESERVE_ON_DELETE | COPY).

   


   

Apparently these changes to the schema had been made to improve query efficiency when searching on the userPrincipalName attribute.

 

Reminder: Manually modifying the schema in this way is not something you should be doing unless you are certain you know what you are doing or have been directed to do so by Microsoft Support.

 

The searchFlags attribute is a bitwise attribute containing a number of different options which are outlined here. This attribute can be zero or a combination of one or more of the following values:

1 (0x00000001): Create an index for the attribute.

2 (0x00000002): Create an index for the attribute in each container.

4 (0x00000004): Add this attribute to the Ambiguous Name Resolution (ANR) set. This is used to assist in finding an object when only partial information is given. For example, if the LDAP filter is (ANR=JEFF), the search will find each object where the first name, last name, email address, or other ANR attribute is equal to JEFF. Bit 0 must be set for this index to take effect.

8 (0x00000008): Preserve this attribute in the tombstone object for deleted objects.

16 (0x00000010): Copy the value for this attribute when the object is copied.

32 (0x00000020): Supported beginning with Windows Server 2003. Create a tuple index for the attribute. This will improve searches where the wildcard appears at the front of the search string. For example, (sn=*mith).

64 (0x00000040): Supported beginning with ADAM. Creates an index to greatly help VLV performance on arbitrary attributes.

   

To remove the PRESERVE_ON_DELETE flag, we subtracted 8 from the customer's value of 31, which gave us a value of 23 (INDEX | CONTAINER_INDEX | ANR | COPY). 
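Here is a sketch of that change in PowerShell, using the lab instance values from earlier; the schema object DN assumes the standard location of userPrincipalName in the schema partition:

$upnAttr = 'CN=User-Principal-Name,CN=Schema,CN=Configuration,CN={A1D2D2A9-7521-4068-9ACC-887EDEE90F91}'
$flags = (Get-ADObject -Identity $upnAttr -Properties searchFlags -Server 'localhost:50000').searchFlags
# Clear the PRESERVE_ON_DELETE (0x8) bit without touching the other flags: 31 becomes 23
Set-ADObject -Identity $upnAttr -Replace @{searchFlags = ($flags -band (-bnot 8))} -Server 'localhost:50000'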

 

Once we removed the PRESERVE_ON_DELETE flag, we created and deleted a test account to confirm our modifications changed the tombstone behavior of the userPrincipalName attribute. UPN was no longer maintained!

   

Mystery solved!! I think we all deserve a Scooby Snack now!

   


Nom nom nom!

 

Lessons learned

   

  1. ADRB is a great feature for AD. It can even be useful for AD LDS if you aren't synchronizing with AD. If you are synchronizing with AD, then the benefits of ADRB are limited and in the end it can cause you more problems than it solves.
  2. Manually modifying the schema can have unintended consequences.
  3. PowerShell for AD LDS is not as easy as it is for AD DS.
  4. AD Administrative Center is for AD and not AD LDS
  5. LDP Rocks!

   

This wraps up the "More than you really ever wanted to know about ADAMSync, ADRB & searchFlags" Scooby Doo edition of AskDS. Now, go enjoy your Scooby Snacks!

 
- Kim "That Meddling Kid" Nichols

   

   


Configuring Change Notification on a MANUALLY created Replication partner

Hello. Jim here again to elucidate on the wonderment of change notification as it relates to Active Directory replication within and between sites. As you know, Active Directory replication between domain controllers within the same site (intrasite) happens almost instantaneously. Active Directory replication between sites (intersite) occurs every 180 minutes (3 hours) by default. You can adjust this frequency to match your specific needs, BUT it can be no faster than fifteen minutes when configured via the AD Sites and Services snap-in.

Back in the old days when remote sites were connected by a string and two soup cans, it was necessary in most cases to carefully consider configuring your replication intervals and times so as not to flood the pipe (or string in the reference above) with replication traffic and bring your WAN to a grinding halt. With dial up connections between sites it was even more important. It remains an important consideration today if your site is a ship at sea and your only connectivity is a satellite link that could be obscured by a cloud of space debris.

Now, in the days of wicked fast fiber links and MPLS VPN connectivity, change notification may be enabled on site links that span geographic locations. This will make Active Directory replication between the separate sites effectively instantaneous, as if the replication partners were in the same site. Although this is well documented on TechNet and I hate regurgitating existing content, here is how you would configure change notification on a site link (a PowerShell sketch follows the steps):

  1. Open ADSIEdit.msc.
  2. In ADSI Edit, expand the Configuration container.
  3. Expand Sites, navigate to the Inter-Site Transports container, and select CN=IP.

    Note: You cannot enable change notification for SMTP links.
  4. Right-click the site link object for the sites where you want to enable change notification, e.g. CN=DEFAULTIPSITELINK, and click Properties.
  5. In the Attribute Editor tab, double click on Options.
  6. If the Value(s) box shows <not set>, type 1.
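If you would rather script it, a hedged PowerShell equivalent of the ADSI Edit steps above; the site link DN is an example, and the existing value is ORed so any other options are preserved:

$link = 'CN=DEFAULTIPSITELINK,CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=contoso,DC=com'
$opts = (Get-ADObject -Identity $link -Properties options).options
if ($null -eq $opts) { $opts = 0 }
# Bit 0x1 on a site link enables change notification (USE_NOTIFY)
Set-ADObject -Identity $link -Replace @{options = ($opts -bor 1)}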


There is one caveat, however. Change notification will fail with manual connection objects. If your connection objects are not created by the KCC, the change notification setting on the site link is meaningless. A manual connection object will NOT inherit the Options bit from the site link, so enjoy your 15-minute replication latency.

Why would you want to keep connection objects you created manually, anyway? Why don't you just let the KCC do its thing and be happy? Maybe you have a site link costing configuration that you would rather not change. Perhaps you are at the mercy of your networking team and the routing of your network, and you must keep these manual connections. If, for whatever reason, you must keep the manually created replication partners, be of good cheer. You can still enjoy the thrill of change notification.

Change Notification on a manually created replication partner is configured by doing the following:

  1. Open ADSIEDIT.msc.
  2. In ADSI Edit, expand the Configuration container.
  3. Navigate to the following location:

    \Sites\SiteName\Server\NTDS settings\connection object that was manually created
  4. Right-click on the manually created connection object name.
  5. In the Attribute Editor tab, double click on Options.
  6. If the value is 0 then set it to 8.


If the value is anything other than zero, you must do some binary math. Relax; this is going to be fun.

On the site link object, it's the 1st bit that controls change notification. On the connection object, however, it's the 4th bit, shown below represented in binary (you remember binary, don't you?):

Binary Bit:     8th   7th   6th   5th   4th   3rd   2nd   1st
Decimal Value:  128    64    32    16     8     4     2     1

 

NOTE: The values represented by each bit in the Options attribute are documented in the Active Directory Technical Specification. Fair warning! I'm only including that information for the curious. I STRONGLY recommend against setting any of the options NOT discussed specifically in existing documentation or blogs in your production environment.

Remember what I said earlier? If it's a manual connection object, it will NOT inherit the Options value from the Site Link object. You're going to have to enable change notifications directly on the manually created connection object.

Take the value of the Options attribute; let's say it is 16.

Open Calc.exe in Programmer mode, and paste the contents of your options attribute.


Click on Bin, and count over to the 4th bit starting from the right.


That's the bit that controls change notification on your manually created replication partner. As you can see, in this example it is zero (0), so change notifications are disabled.

Convert back to decimal and add 8 to it.


Click on Bin, again.


As you can see above, the bit that controls change notification on the manually created replication partner is now 1. You would then change the Options value in ADSIEDIT from 16 to 24.


Click OK to commit the change.
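If you would rather skip the calculator, a hedged PowerShell sketch that ORs the bit in directly; the connection object DN below is hypothetical, so substitute your own:

$conn = 'CN=ManualConnectionFromDC1,CN=NTDS Settings,CN=DC2,CN=Servers,CN=BranchSite,CN=Sites,CN=Configuration,DC=contoso,DC=com'
$opts = (Get-ADObject -Identity $conn -Properties options).options
if ($null -eq $opts) { $opts = 0 }
# Bit 0x8 (the 4th bit) enables change notification on a connection object; 16 becomes 24
Set-ADObject -Identity $conn -Replace @{options = ($opts -bor 8)}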

Congratulations! You have now configured change notification on your manually created connection object. This sequence of events must be repeated for each manually created connection object that you want to include in the excitement and instantaneous gratification of change notification. Keep in mind that in the event something (or many things) gets deleted from a domain controller, you no longer have that window of intersite latency to stop inbound replication on a downstream partner and do an authoritative restore. Plan the configuration of change notifications accordingly. Make sure you take regular backups, and test them occasionally!

And when you speak of me, speak well…

Jim "changes aren't permanent, but change is" Tierney

 


Distributed File System Consolidation of a Standalone Namespace to a Domain-Based Namespace

Hello again everyone! David here to discuss a scenario that is becoming more and more popular for administrators of Distributed File System Namespaces (DFSN): consolidation of one or more standalone namespaces that are referenced by a domain-based namespace. Below I detail how this may be achieved.

History: Why create interlinked namespaces?

First, we should quickly review the history of why so many administrators designed interlinked namespaces.

In Windows Server 2003 (and earlier) versions of DFSN, domain-based namespaces were limited to hosting approximately 5,000 DFS folders per namespace. This limitation was simply due to how the Active Directory JET database engine stored a single binary value of an attribute. We now refer to this type of namespace as "Windows 2000 Server Mode". Standalone DFS namespaces (those stored locally in the registry of a single namespace server or server cluster) are capable of approximately 50,000 DFS folders per namespace. Administrators would therefore create thousands of folders in a standalone namespace and then interlink (cascade) it with a domain-based namespace. This allowed for a single, easily identifiable entry point of the domain-based namespace and leveraged the capacity of the standalone namespaces.

"Windows Server 2008 mode" namespaces allow for domain-based namespaces of many thousands of DFS folders per namespace (look here for scalability test results). With many Active Directory deployments currently capable of supporting 2008 mode namespaces, Administrators are wishing to remove their dependency on the standalone namespaces and roll them up into a single domain-based namespace. Doing so will improve referral performance, improve fault-tolerance of the namespace, and ease administration.

How to consolidate the namespaces

Below are the steps required to consolidate one or more standalone namespaces into an existing domain-based namespace. The foremost goal of this process is to maintain identical UNC paths after the consolidation so that no configuration changes are needed for clients, scripts, or anything else that references the current interlinked namespace paths. Because so many design variations exist, you may only require a subset of the operations or you may have to repeat some procedures multiple times. If you are not concerned with maintaining identical UNC paths, then this blog does not really apply to you.

For demonstration purposes, I will perform the consolidation steps on a namespace with the following configuration:

Domain-based Namespace: \\tailspintoys.com\data
DFS folder: "reporting" (targeting the standalone namespace "reporting" below)
Standalone Namespace: \\fs1\reporting
DFS folders: "report####" (totaling 10,000 folders)

Below is what these namespaces look like in the DFS Management MMC.

Domain Namespace DATA:

Standalone Namespace "Reporting" hosted by server "FS1" and has 15,000 DFS folders:

 

For a client to access a file in the "report8000" folder in the current DFS design, the client must access the following path:
\\tailspintoys.com\data\reporting\report8000



Below are the individual elements of that UNC path, with a description of each:

\\tailspintoys.com : Domain
\Data : Domain-based Namespace
\Reporting : Domain-based Namespace folder (also the Standalone Namespace)
\Reporting8000 : Standalone Namespace folder targeting a file server share


Note the overlap of the domain-based namespace folder "reporting" with the standalone namespace "reporting". Each item in the UNC path is separated by a "\" and is known as a "path component".

In order to preserve the UNC path using a single domain-based namespace we must leverage the ability for DFSN to host multiple path components within a single DFS folder. Currently, the "reporting" DFS folder of the domain-based namespace refers clients to the standalone namespace that contains DFS folders, such as "reporting8000", beneath it. To consolidate those folders of the standalone root to the domain-based namespace, we must merge them together.

To illustrate this, below is how the new consolidated "Data" domain-based namespace will be structured for this path:

\\tailspintoys.com : Domain
\Data : Domain-based Namespace
\Reporting\Reporting8000 : Domain-based Namespace folder targeting a file server share


Notice how the name of the DFS folder is "Reporting\Reporting8000" and includes two path components separated by a "\". This capability of DFSN is what allows for the creation of any desired path. When users access the UNC path, they ultimately will still be referred to the target file server(s) containing the shared data. "Reporting" is simply a placeholder serving to maintain that original path component.

Step-by-step

Below are the steps and precautions for consolidating interlinked namespaces. It is highly recommended to put a temporary suspension on any administrative changes to the standalone namespace(s).

Assumptions:
The instructions assume that you have already met the requirements for "Windows Server 2008 mode" namespaces and your domain-based namespace is currently running in "Windows 2000 Server mode".

However, if you have not met these requirements and have a "Windows 2000 Server mode" domain-based namespace, these instructions (with modifications) may still be applied *if*, after consolidation, the domain-based namespace configuration data is less than 5 MB in size. If you are unsure of the size, you may run the "dfsutil /root:\\<servername>\<namespace_name> /view" command against the standalone namespace and note the size listed at the top (or bottom) of the output. The reported size will be added to the current size of the domain-based namespace and must not exceed 5 MB. Cease any further actions if you are unsure, or test the operations in a lab environment. Of course, if your standalone namespace was less than 5 MB in size, then why did you create an interlinked namespace to begin with? Eh…I'm not really supposed to ask these questions. Moving on…

Step 1

Export the standalone namespace.

Dfsutil root export \\fs1\reporting c:\exports\reporting_namespace.txt

Step 2

Modify the standalone namespace export file using a text editor capable of search-and-replace operations. Notepad.exe has this capability. This export file will be leveraged later to create the proper folders within the domain-based namespace.

Replace the "Name" element of the standalone namespace with the name of the domain-based namespace and replace the "Target" element to be the UNC path of the domain-based namespace server (the one you will be configuring later in step 6). Below, I highlighted the single "\\FS1\reporting" 'name' element that will be replaced with "\\TAILSPINTOYS.COM\DATA". The single "\\FS1\reporting" element immediately below it will be replaced with "\\DC1\DATA" as "DC1" is my namespace server.


Next, prepend "Reporting\" to the folder names listed in the export. The final result will be as follows:

One trick is to utilize the 'replace' capability of Notepad.exe to search out and replace all instances of the '<Link Name="' string with '<Link Name="folder\' ('<Link Name="Reporting\' in this example). The picture below shows the original folders defined and the 'replace' dialog responsible for changing the names of the folders (click 'Replace all' to replace all occurrences).


Save the modified file with a new filename (reporting_namespace_modified.txt) so as to not overwrite the standalone namespace export file.
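If the namespace contains thousands of folders, the same 'Link Name' prepend can be scripted; a hedged sketch (the root Name and Target elements are still edited by hand as described above):

# Prepend "Reporting\" to every DFS folder name in the export and write a new file
(Get-Content 'C:\exports\reporting_namespace.txt') -replace '<Link Name="', '<Link Name="Reporting\' |
    Set-Content 'C:\exports\reporting_namespace_modified.txt'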

Step 3

Export the domain-based namespace
dfsutil root export \\tailspintoys.com\data c:\exports\data_namespace.txt

Step 4

Open the output file from Step 3 and delete the link that is being consolidated ("Reporting"):

Save the file as a separate file (data_namespace_modified.txt). This export will be utilized to recreate the *other* DFS folders within the "Windows Server 2008 Mode" domain-based namespace that do not require consolidation.

Step 5

This critical step involves deleting the existing domain-based namespace. This is required for the conversion from "Windows 2000 Server Mode" to "Windows Server 2008 Mode".

Delete the domain-based namespace ("DATA" in this example).

Step 6

Recreate the "DATA" namespace, specifying the mode as "Windows Server 2008 mode". Specify the namespace server to be a namespace server with close network proximity to the domain's PDC. This will significantly decrease the time it takes to import the DFS folders. Additional namespace servers may be added any time after Step 8.


Step 7

Import the modified export file created in Step 4:
dfsutil root import merge data_namespace_modified.txt \\tailspintoys.com\data

In this example, this creates the "Business" and "Finance" DFS folders:


Step 8

Import the modified namespace definition file created in Step 2 to create the required folders (note that this operation may take some time depending on network latencies and other factors):
dfsutil root import merge reporting_namespace_modified.txt \\tailspintoys.com\DATA



Step 9

Verify the structure of the namespace:

Step 10

Test the functionality of the namespace. From a client or another server, run the "dfsutil /pktflush" command to purge cached referral data and attempt access to the DFS namespace paths. Alternately, you may reboot clients and attempt access if they do not have dfsutil.exe available.

Below is the result of accessing the "report8000" folder path via the new namespace:


Referral cache confirms the new namespace structure (red line highlighting the name of the DFS folder as "reporting\report8000"):


At this point, you should have a fully working namespace. If something is not working quite right or there are problems accessing the data, you may return to the original namespace design by deleting all DFS folders in the new domain-based namespace and importing the original namespace from the export file (or recreating the original folders by hand). At no time did we alter the standalone namespaces, so returning to the original interlinked configuration is very easy to accomplish.

Step 11

Add the necessary namespace servers to the domain-based namespace to increase fault tolerance.

Notify all previous administrators of the standalone namespace(s) that they will need to manage the domain-based namespace from this point forward. Once you are confident with the new namespace, the original standalone namespace(s) may be retired at any time (assuming no systems on the network are using UNC paths directly to the standalone namespace).

Namespace already in "Windows Server 2008 mode"?

What would the process be if the domain-based namespace is already running in "Windows Server 2008 mode"? Or, you have already run through the operations once and wish to consolidate additional DFS folders? Some steps remain the same while others are skipped entirely:
Steps 1-2 (same as detailed previously to export the standalone namespace and modify the export file)
Step 3 Export the domain-based namespace for backup purposes
Step 4 Delete the DFS folder targeting the standalone namespace--the remainder of the domain-based namespace will remain unchanged
Step 8 Import the modified file created in step 2 to the domain-based namespace
Step 9-10 Verify the structure and function of the namespace

Caveats and Concerns

Ensure that no data exists in the original standalone namespace server's namespace share. Because clients are now no longer using the standalone namespace, the "reporting" path component exists as a subfolder within each domain-based namespace server's share. Furthermore, hosting data within the namespace share (domain-based or standalone) is not recommended. If this applies to you, consider moving such data into a separate folder within the new namespace and update any references to those files used by clients.

These operations should be performed during a maintenance window, the length of which is dictated by your efficiency in performing the operations and the length of time it takes to import the DFS namespace export file. Because a namespace is so easily built, modified, and deleted, you may wish to consider a "dry run" of sorts. Prior to deleting your production namespace(s), create a new test namespace (e.g. "DataTEST"), modify your standalone namespace export file (Step 2) to reference this "DataTEST" namespace, and try the import. Because you are using a separate namespace, no changes will occur to any other production namespaces. You may gauge the time required for the import and, more importantly, test access to the data (\\tailspintoys.com\DataTEST\Reporting\Reporting8000 in my example). If access to the data is successful, then you will have confidence in replacing the real domain-based namespace.

Clients should not be negatively affected by the restructuring as they will discover the new hierarchy automatically. By default, clients cache namespace referrals for 5 minutes and folder referrals for 30 minutes. It is advisable to keep the standalone namespace(s) operational for at least an hour or so to accommodate transition to the new namespace, but it may remain in place for as long as you wish.

If you decommission the standalone namespace and find some clients are still using it directly, you could easily recreate the standalone namespace from our export in Step 1 while you investigate the client configurations and remove their dependency on it.

Lastly, if you are taking the time and effort to recreate the namespace for "Windows Server 2008 mode" support, you might as well consider configuring the targets of the DFS folders with DNS names (modify the export files) and also implementing DFSDnsConfig on the namespace servers.

I hope this blog eliminates some of the concerns and fears of consolidating interlinked namespaces!

Dave "King" Fisher


Circle Back to Loopback

Hello again!  Kim Nichols here again.  For this post, I'm taking a break from the AD LDS discussions (hold your applause until the end) and going back to a topic near and dear to my heart - Group Policy loopback processing.

Loopback processing is not a new concept to Group Policy, but it still causes confusion for even the most experienced Group Policy administrators.

This post is the first part of a two part blog series on User Group Policy Loopback processing.

  • Part 1 provides a general Group Policy refresher and introduces Loopback processing
  • Part 2 covers Troubleshooting Group Policy loopback processing

Hopefully these posts will refresh your memory and provide some tips for troubleshooting Group Policy processing when loopback is involved.

Part 1: Group Policy and Loopback processing refresher

Normal Group Policy Processing

Before we dig in too deeply, let's quickly cover normal Group Policy processing.  Thinking back to when we first learned about Group Policy processing, we learned that Group Policy
applies in the following order: 

  1. Local Group Policy
  2. Site
  3. Domain
  4. OU

You may have heard Active Directory “old timers” refer to this as LSDOU.  As a result of LSDOU, settings from GPOs linked closest (lower in OU structure) to the user take precedence over those linked farther from the user (higher in OU structure). GPO configuration options such as Block Inheritance and Enforced (previously called No Override for you old school admins) can modify processing as well, but we will keep things simple for the purposes of this example.  Normal user group policy processing applies user settings from GPOs linked to the Site, Domain, and OU containing the user object regardless of the location of the computer object in Active Directory.

Let's use a picture to clarify this.  For this example, the user is in the "E" OU and the computer is in the "G" OU of the contoso.com domain.


Following normal group policy processing rules (assuming all policies apply to Authenticated Users with no WMI filters or "Block Inheritance" or "Enforced" policies), user settings of Group Policy objects apply in the following order:

  1. Local Computer Group Policy
  2. Group Policies linked to the Site
  3. Group Policies linked to the Domain (contoso.com)
  4. Group Policies linked to OU "A"
  5. Group Policies linked to OU "B"
  6. Group Policies linked to OU "E"

That’s pretty straightforward, right?  Now, let’s move on to loopback processing!

What is loopback processing?

Group Policy loopback is a computer configuration setting that enables different Group Policy user settings to apply based upon the computer from which logon occurs. 

Breaking this down a little more:

  1. It is a computer configuration setting. (Remember this for later)
  2. When enabled, user settings from GPOs applied to the computer apply to the logged on user.
  3. Loopback processing changes the list of applicable GPOs and the order in which they apply to a user. 

Why would I use loopback processing?

Administrators use loopback processing in kiosk, lab, and Terminal Server environments to provide a consistent user experience across all computers regardless of the GPOs linked to the user's OU.

Our recommendation for loopback is similar to our recommendations for WMI filters, Block Inheritance and policy Enforcement; use them sparingly.  All of these configuration options modify the default processing of policy and thus make your environment more complex to troubleshoot and maintain. As I've mentioned in other posts, whenever possible, keep your designs as simple as possible. You will save yourself countless nights/weekends/holidays in the office because you will be able to identify configuration issues more quickly and easily.

How to configure loopback processing

The loopback setting is located under Computer Configuration/Administrative Templates/System/Group Policy in the Group Policy Management Editor (GPME). 

Use the policy setting Configure user Group Policy loopback processing mode to configure loopback in Windows 8 and Windows Server 2012.  Earlier versions of Windows have the same policy setting under the name User Group Policy loopback processing mode.  The screenshot below is from the Windows 8 version of the GPME.

[Screenshot: the Configure user Group Policy loopback processing mode setting in the GPME]

When you enable loopback processing, you also have to select the desired mode.  There are two modes for loopback processing:  Merge or Replace.

[Screenshot: selecting the loopback mode, Merge or Replace]
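If you prefer to script the setup, the GroupPolicy PowerShell module can do the same thing. This is just a minimal sketch: the GPO name and OU are examples, and it assumes the policy is backed by the UserPolicyMode registry value (1 = Merge, 2 = Replace) under HKLM\Software\Policies\Microsoft\Windows\System, so verify it in a test environment before relying on it.

Import-Module GroupPolicy

# Create a dedicated loopback GPO and link it to the computers' OU (names are examples)
$gpo = New-GPO -Name "Loopback-Replace"
$gpo | New-GPLink -Target "OU=G,OU=C,OU=A,DC=contoso,DC=com" | Out-Null

# Write the registry-based value the GPME setting configures (2 = Replace mode)
Set-GPRegistryValue -Name "Loopback-Replace" -Key "HKLM\Software\Policies\Microsoft\Windows\System" -ValueName "UserPolicyMode" -Type DWord -Value 2 | Out-Null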

Loopback Merge vs. Replace

Prior to the start of user policy processing, the Group Policy engine checks to see if loopback is enabled and, if so, in which mode.

We'll start off with an explanation of Merge mode since it builds on our existing knowledge of user policy processing.

Loopback Merge

During loopback processing in merge mode, user GPOs process first (exactly as they do during normal policy processing), but with an additional step.  Following normal user policy processing, the Group Policy engine applies user settings from GPOs linked to the computer's OU.  The result: the user receives all user settings from GPOs applied to the user and all user settings from GPOs applied to the computer. The user settings from the computer's GPOs win any conflicts since they apply last.

To illustrate loopback merge processing and conflict resolution, let’s use a simple chart.  The chart shows us the “winning” configuration in each of three scenarios:

  • The same user policy setting is configured in GPOs linked to the user and the computer
  • The user policy setting is only configured in a GPO linked to the user’s OU
  • The user policy setting is only configured in a GPO linked to the computer’s OU

[Chart: the winning setting in Merge mode for each of the three scenarios above]

Now, going back to our original example, loopback processing in Merge mode applies user settings from GPOs linked to the user’s OU followed by user settings from GPOs linked to the computer’s OU.

[Diagram: Merge mode processing for the user in OU "E" logging on to the computer in OU "G"]

GPOs for the user in OU ”E” apply in the following order (the first part is identical to normal user policy processing from our original example):

  1. Local Group Policy
  2. Group Policy objects linked to the Site
  3. Group Policy objects linked to the Domain
  4. Group Policy objects linked to OU "A"
  5. Group Policy objects linked to OU "B"
  6. Group Policy objects linked to OU "E"
  7. Group Policy objects linked to the Site
  8. Group Policy objects linked to the Domain
  9. Group Policy objects linked to OU "A"
  10. Group Policy objects linked to OU "C"
  11. Group Policy objects linked to OU "G"

Loopback Replace

Loopback replace is much easier. During loopback processing in replace mode, the user settings applied to the computer "replace" those applied to the user.  In actuality, the Group Policy service skips the GPOs linked to the user's OU. Group Policy effectively processes as if the user object were in the OU of the computer rather than in its current OU.

The chart for loopback processing in replace mode shows that settings “1” and “2” do not apply since all user settings linked to the user’s OU are skipped when loopback is configured in replace mode.

[Chart: the winning setting in Replace mode; settings from GPOs linked to the user's OU are skipped]

Returning to our example of the user in the “E” OU, loopback processing in replace mode skips normal user policy processing and only applies user settings from GPOs linked to the computer.

[Diagram: Replace mode processing for the user in OU "E" logging on to the computer in OU "G"]

The resulting processing order is: 

  1. Local Group Policy
  2. Group Policy objects linked to the Site
  3. Group Policy objects linked to the Domain
  4. Group Policy objects linked to OU "A"
  5. Group Policy objects linked to OU "C"
  6. Group Policy objects linked to OU "G"

Recap

  • User Group Policy loopback processing is a computer configuration setting.
  • Loopback processing is not specific to the GPO in which it is configured. If we think back to what an Administrative Template policy is, we know it is just configuring a registry value.  In the case of the loopback policy processing setting, once this registry setting is configured, the order and scope of user group policy processing for all users logging on to the computer is modified per the mode chosen: Merge or Replace.
  • Merge mode applies GPOs linked to the user object first, followed by GPOs with user settings linked to the computer object.
    • The order of processing determines the precedence. GPOs with user settings linked to the computer object apply last and therefore have a higher precedence than those linked to the user object.
    • Use merge mode in scenarios where you need users to receive the settings they normally receive, but you want to customize or make changes to those settings when they log on to specific computers.
  • Replace mode completely skips Group Policy objects linked in the path of the user and only applies user settings in GPOs linked in the path of the computer.  Use replace mode when you need to disregard all GPOs that are linked in the path of the user object.

Those are the basics of user group policy loopback processing. In my next post, I'll cover the troubleshooting process when loopback is enabled.

    Kim “Why does it say paper jam, when there is no paper jam!?” Nichols

     


    AD FS 2.0 Claims Rule Language Part 2

    Hello, Joji Oshima here to dive deeper into the Claims Rule Language for AD FS. A while back I wrote a getting started post on the claims rule language in AD FS 2.0. If you haven't seen it, I would start with that article first as I'm going to build on the claims rule language syntax discussed in that earlier post. In this post, I'm going to cover more complex claim rules using Regular Expressions (RegEx) and how to use them to solve real world issues.

    An Introduction to Regex

    The use of RegEx allows us to search or manipulate data in many ways in order to get a desired result. Without RegEx, when we do comparisons or replacements we must look for an exact match. Most of the time this is sufficient but what if you need to search or replace based on a pattern? Say you want to search for strings that simply start with a particular word. RegEx uses pattern matching to look at a string with more precision. We can use this to control which claims are passed through, and even manipulate the data inside the claims.

    Using RegEx in searches

    Using RegEx to pattern match is accomplished by changing the standard double equals "==" to "=~" and by using special metacharacters in the condition statement. I'll outline the more commonly used ones, but there are good resources available online that go into more detail. For those of you unfamiliar with RegEx, let's first look at some common RegEx metacharacters used to build pattern templates and what the result would be when using them.

^   Match the beginning of a string

c:[type == "http://contoso.com/role", Value =~ "^director"]
=> issue (claim = c);

Pass through any role claims that start with "director"

$   Match the end of a string

c:[type == "http://contoso.com/email", Value =~ "contoso.com$"]
=> issue (claim = c);

Pass through any email claims that end with "contoso.com"

|   OR

c:[type == "http://contoso.com/role", Value =~ "^director|^manager"]
=> issue (claim = c);

Pass through any role claims that start with "director" or "manager"

(?i)   Not case sensitive

c:[type == "http://contoso.com/role", Value =~ "(?i)^director"]
=> issue (claim = c);

Pass through any role claims that start with "director" regardless of case

x.*y   "x" followed by "y"

c:[type == "http://contoso.com/role", Value =~ "(?i)Seattle.*Manager"]
=> issue (claim = c);

Pass through any role claims that contain "Seattle" followed by "Manager", regardless of case

+   Match the preceding character one or more times

c:[type == "http://contoso.com/employeeId", Value =~ "^0+"]
=> issue (claim = c);

Pass through any employeeId claims that start with at least one "0"

*   Match the preceding character zero or more times

Similar to above; more useful in RegExReplace() scenarios.

     

    Using RegEx in string manipulation

    RegEx pattern matching can also be used in replacement scenarios. It is similar to a "find and replace", but using pattern matching instead of exact values. To use this in a claim rule, we use the RegExReplace() function in the value section of the issuance statement.

    The RegExReplace() function accepts three parameters.

    1. The first is the string in which we are searching.
      1. We will typically want to search the value of the incoming claim (c.Value), but this could be a combination of values (c1.Value + c2.Value).
    2. The second is the RegEx pattern we are searching for in the first parameter.
    3. The third is the string value that will replace any matches found.

    Example:

    c:[type == "http://contoso.com/role"]
=> issue (Type = "http://contoso.com/role", Value = RegExReplace(c.Value, "(?i)director", "Manager"));

     

    Pass through any role claims. If any of the claims contain the word "Director", RegExReplace() will change it to "Manager". For example, "Director of Finance" would pass through as "Manager of Finance".
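Since AD FS runs on .NET, its RegEx behaves like .NET regular expressions, so you can sanity-check a pattern in PowerShell before putting it into a rule. A quick sketch (the strings are just examples):

"Director of Finance" -match "(?i)^director"                            # returns True
[regex]::Replace("Director of Finance", "(?i)director", "Manager")      # returns "Manager of Finance"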

     

    Real World Examples

    Let's look at some real world examples of regular expressions in claims rules.

    Problem 1:

    We want to add claims for all group memberships, including distribution groups.

    Solution:

Typically, group membership is added using the wizard by selecting Token-Groups Unqualified Names and mapping it to the Group or Role claim. This will only pull security groups, not distribution groups, and will not contain Domain Local groups.

[Screenshot: the claim rule wizard mapping Token-Groups Unqualified Names to the Role claim]

    We can pull from memberOf, but that will give us the entire distinguished name, which is not what we want. One way to solve this problem is to use three separate claim rules and use RegExReplace() to remove unwanted data.

    Phase 1: Pull memberOf, add to working set "phase 1"

     

    c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
    => add(store = "Active Directory", types = ("http://test.com/phase1"), query = ";memberOf;{0}", param = c.Value);

    Example: "CN=Group1,OU=Users,DC=contoso,DC=com" is put into a phase 1 claim.

     

    Phase 2: Drop everything after the first comma, add to working set "phase 2"

     

    c:[Type == "http://test.com/phase1"]
    => add(Type = "http://test.com/phase2", Value = RegExReplace(c.Value, ",[^\n]*", ""));

    Example: We process the value in the phase 1 claim and put "CN=Group1" into a phase 2 claim.

     

    Digging Deeper: RegExReplace(c.Value, ",[^\n]*", "")

    • c.Value is the value of the phase 1 claim. This is what we are searching in.
    • ",[^\n]*" is the RegEx syntax used to find the first comma, plus everything after it
    • "" is the replacement value. Since there is no string, it effectively removes any matches.

     

    Phase 3: Drop CN= at the beginning, add to outgoing claim set as the standard role claim

     

    c:[Type == "http://test.com/phase2"]

    => issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/role", Value = RegExReplace(c.Value, "^CN=", ""));

    Example: We process the value in phase 2 claim and put "Group1" into the role claim

    Digging Deeper: RegExReplace(c.Value, "^CN=", "")

• c.Value is the value of the phase 2 claim. This is what we are searching in.
    • "^CN=" is the RegEx syntax used to find "CN=" at the beginning of the string.
    • "" is the replacement value. Since there is no string, it effectively removes any matches.

     

    Problem 2:

    We need to compare the values in two different claims and only allow access to the relying party if they match.

    Solution:

    In this case we can use RegExReplace(). This is not the typical use of this function, but it works in this scenario. The function will attempt to match the pattern in the first data set with the second data set. If they match, it will issue a new claim with the value of "Yes". This new claim can then be used to grant access to the relying party. That way, if these values do not match, the user will not have this claim with the value of "Yes".

     

    c1:[Type == "http://adatum.com/data1"] &&

    c2:[Type == "http://adatum.com/data2"]

    => issue(Type = "http://adatum.com/UserAuthorized", Value = RegExReplace(c1.Value, c2.Value, "Yes"));

     

    Example: If there is a data1 claim with the value of "contoso" and a data2 claim with a value of "contoso", it will issue a UserAuthorized claim with the value of "Yes". However, if data1 is "adatum" and data2 is "fabrikam", it will issue a UserAuthorized claim with the value of "adatum".

     

    Digging Deeper: RegExReplace(c1.Value, c2.Value, "Yes")

    • c1.Value is the value of the data1 claim. This is what we are searching in.
    • c2.Value is the value of the data2 claim. This is what we are searching for.
    • "Yes" is the replacement value. Only if c1.Value & c2.Value match will there be a pattern match and the string will be replaced with "Yes". Otherwise the claim will be issued with the value of the data1 claim.

     

    Problem 3:

Let's take a second look at a potential issue with our solution to problem 2. Since we are using the value of one of the claims as the RegEx syntax, we must be careful to check for certain RegEx metacharacters that would make the comparison mean something different. The backslash is used in some RegEx metacharacters, so any backslashes in the values will throw off the comparison and it will always fail, even if the values match.

    Solution:

    In order to ensure that our matching claim rule works, we must sanitize the input values by removing any backslashes before doing the comparison. We can do this by taking the data that would go into the initial claims, put it in a holding attribute, and then use RegEx to strip out the backslash. The example below only shows the sanitization of data1, but it would be similar for data2.

    Phase 1: Pull attribute1, add to holding attribute "http://adatum.com/data1holder"

     

    c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]

    => add(store = "Active Directory", types = ("http://adatum.com/data1holder"), query = ";attribute1;{0}", param = c.Value);

    Example: The value in attribute 1 is "Contoso\John" which is placed in the data1holder claim.

     

    Phase 2: Strip the backslash from the holding claim and issue the new data1 claim

     

    c:[Type == "http://adatum.com/data1holder", Issuer == "AD AUTHORITY"]

=> issue(type = "http://adatum.com/data1", Value = RegExReplace(c.Value,"\\",""));

    Example: We process the value in the data1holder claim and put "ContosoJohn" in a data1 claim

    Digging Deeper: RegExReplace(c.Value,"\\","")

    • c.Value is the value of the data1 claim. This is what we are searching in.
    • "\\" is considered a single backslash. In RegEx, using a backslash in front of a character makes it a literal backslash.
    • "" is the replacement value. Since there is no string, it effectively removes any matches.

     

    An alternate solution would be to pad each backslash in the data2 value with a second backslash. That way each backslash would be represented as a literal backslash. We could accomplish this by using RegExReplace(c.Value,"\\","\\") against a data2 input value.

     

    Problem 4:

    Employee numbers vary in length, but we need to have exactly 9 characters in the claim value. Employee numbers that are shorter than 9 characters should be padded in the front with leading zeros.

    Solution:

    In this case we can create a buffer claim, join that with the employee number claim, and then use RegEx to use the right most 9 characters of the combined string.

    Phase 1: Create a buffer claim to create the zero-padding

     

    => add(Type = "Buffer", Value = "000000000");

     

    Phase 2: Pull the employeeNumber attribute from Active Directory, place it in a holding claim

     

    c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]

    => add(store = "Active Directory", types = ("ENHolder"), query = ";employeeNumber;{0}", param = c.Value);

     

    Phase 3: Combine the two values, then use RegEx to remove all but the 9 right most characters.

     

    c1:[Type == "Buffer"]

    && c2:[Type == "ENHolder"]

    => issue(Type = "http://adatum.com/employeeNumber", Value = RegExReplace(c1.Value + c2.Value, ".*(?=.{9}$)", ""));

    Digging Deeper: RegExReplace(c1.Value + c2.Value, ".*(?=.{9}$)", "")

    • c1.Value + c2.Value is the employee number padded with nine zeros. This is what we are searching in.
    • ".*(?=.{9}$)" represents the last nine characters of a string. This is what we are searching for. We could replace the 9 with any number and have it represent the last "X" number of characters.
    • "" is the replacement value. Since there is no string, it effectively removes any matches.

     

    Problem 5:

    Employee numbers contain leading zeros but we need to remove those before sending them to the relying party.

    Solution:

In this case we can pull the employee number from Active Directory, place it in a holding claim, and then use RegEx to strip out any leading zeros.

    Phase 1: Pull the employeeNumber attribute from Active Directory, place it in a holding claim

     

    c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]

    => add(store = "Active Directory", types = ("ENHolder"), query = ";employeeNumber;{0}", param = c.Value);

     

    Phase 2: Take the value in ENHolder and remove any leading zeros.

     

    c:[Type == "ENHolder"]

    => issue(Type = "http://adatum.com/employeeNumber", Value = RegExReplace(c.Value, "^0*", ""));

    Digging Deeper: RegExReplace(c.Value, "^0*", "")

• c.Value is the employee number. This is what we are searching in.
    • "^0*" finds any leading zeros. This is what we are searching for. If we only had ^0 it would only match a single leading zero. If we had 0* it would find any zeros in the string.
    • "" is the replacement value. Since there is no string, it effectively removes any matches.

     

    Conclusion

    As you can see, RegEx adds powerful functionality to the claims rule language. It has a high initial learning curve, but once you master it you will find that there are few scenarios that RegEx can't solve. I would highly recommend searching for an online RegEx syntax tester as it will make learning and testing much easier. I'll continue to expand the TechNet wiki article so I would check there for more details on the claims rule language.

    Understanding Claim Rule Language in AD FS 2.0

    AD FS 2.0: Using RegEx in the Claims Rule Language

    Regular Expression Syntax

    AD FS 2.0 Claims Rule Language Primer

    Until next time,

    Joji "Claim Jumper" Oshima


    We're back. Did you miss us?

    Hey all, David here.  Now that we’ve broken the silence, we here on the DS team felt that we owed you, dear readers, an explanation of some sort.  Plus, we wanted to talk about the blog itself, some changes happening for us, and what you should hopefully be able to expect moving forward.

     

    So, what had happened was….

    As most of you know, a few months ago our editor-in-chief and the butt of many jokes here on the DS support team moved to a new position.  We have it on good authority that he is thoroughly terrorizing many of our developers in Redmond with scary words like “documentation”, “supportability”, and other Chicago-style aphorisms which are best not repeated in print.

    Unfortunately for us and for this blog, that left us with a little bit of a hole in the editing team!  The folks left behind might have been superheroes, but the problem with being a superhero is that you get called on to go save the world (or a customer with a crisis) all the time, and that doesn’t leave much time for picking up your cape from the dry cleaners, let alone keeping up with editing blog submissions, doing mail sacks, and generally keeping the blog going.

    At the same time, we had a bit of a reorganization internally.  Where we were formerly one team within support, we are now two teams – DS (Directory Services) and ID (Identity).  Why the distinction?  Well, you may have heard about this Cloud thing…. But that’s a story best told by technical blog posts, really.  For now, let’s just say the scope of some of what we do expanded last year from “a lot of people use it” to “internet scale”.  Pretty scary when you think about it.

    Just to make things even more confusing, about a month ago we were officially reunited with our long-lost (and slightly insane, but in a good way) brethren in the field engineering organization.  That’s right, our two orgs have been glommed[1] together into one giant concentration of support engineering superpower.  While it’s opening up some really cool stuff that we have always wanted to do but couldn’t before, it’s still the equivalent of waking up one day and finding out that all of those cousins you see every few years at family reunions are coming to live with you.  In your house.  Oh, and they’re bringing their dog.

    Either way, the net effect of all this massive change was that we sort of got quiet for a few months.  It wasn’t you, honest.  It was us.

     

    What to Expect

    It’s important to us that we keep this blog current with detailed, pertinent technical info that helps you resolve issues that you might encounter, or even just helps you understand how our parts of Windows work a bit better.  So, we’re picking that torch back up and we’ll be trying to get several good technical posts up each month for you.  You may also see some shorter posts moving forward.  The idea is to break up the giant articles and try to get some smaller, useful-to-know things out there every so often.  Internally, we’re calling the little posts “DS Quickies” but no promises on whether we’ll actually give you that as a category to search on.  Yes, we’re cruel like that.  You’ll also see the return of the mail sack at some point in the near future, and most importantly you’re going to see some new names showing up as writers.  We’ve put out the call, and we’re planning to bring you blog posts written not just by our folks in the Americas, but also in Europe and Asia.  You can probably also expect some guest posts from our kin in the PFE organization, when they have something specific to what we do that they want to talk about.

    At the same time, we’re keen on keeping the stuff that makes our blog useful and fun.  So you can continue to expect technical depth, detailed analysis, plain-English explanations, and occasional irreverent, snarky humor.  We’re not here to tell you why you should buy Windows clients or servers (or phones, or tablets) – we have plenty of marketing websites that do that better than we ever could.  Instead, we’re here to help you understand how Windows works and how to fix problems when they occur.  Although we do reserve the right to post blatant wackiness or fun things every so often too.  Look, we don’t get out much, ok?  This is our outlet.  Just go with us on this.

    Finally, you’re going to see me personally posting a bit more, since I’ve taken over as the primary editor for the site.  I know - I tried to warn them what would happen, but they still gave me the job all the same.  Jokes aside, I feel like it’s important that our blog isn’t just an encyclopedia of awesome technical troubleshooting, but also that it showcases the fact that we’re real people doing our best to make the IT world a better place, as sappy as that sounds. (Except for David Fisher– I’m convinced he’s really a robot).  I have a different writing style than Ned and Jonathan, and a different sense of humor, but I promise to contain myself as much as possible.  :-)

    Sound good?  We hope so.  We’re going to go off and write some more technical stuff now – in fact:  On deck for next week:  A followup to Kim’s blog on Loopback Policy Processing.

    We wanted to leave you with a funny video that’s safe for work to help kick off the weekend, but alas our bing fu was weak today.  Got a good one to share?  Feel free to link it for us in the comments!


    [1]
    “Glom” is a technical term, by the way, not a managerial one.  Needless to say, hijinks are continuing to
    ensue.

     

    -- David "Capes are cool" Beach


    Back to the Loopback: Troubleshooting Group Policy loopback processing, Part 2

    Welcome back!  Kim Nichols here once again with the much anticipated Part 2 to Circle Back to Loopback.  Thanks for all the comments and feedback on Part 1.  For those of you joining us a little late in the game, you'll want to check out Part 1: Circle Back to Loopback before reading further.

    In my first post, the goal was to keep it simple.  Now, we're going to go into a little more detail to help you identify and troubleshoot Group Policy issues related to loopback processing.  If you follow these steps, you should be able to apply what you've learned to any loopback scenario that you may run into (assuming that the environment is healthy and there are no other policy infrastructure issues).

    To troubleshoot loopback processing you need to know and understand:

    1. The status of the loopback configuration.  Is it enabled, and if so, in which mode?
    2. The desired state configuration vs. the actual state configuration of applied policy
    3. Which settings from which GPOs are "supposed" to be applied?
    4. To whom should the settings apply or not apply?
      1. The security filtering requirements when using loopback
      2. Is the loopback setting configured in the same GPO or a separate GPO from the user settings?
      3. Are the user settings configured in a GPO with computer settings?

    What you need to know:

    Know if loopback is enabled and in which mode

    The first step in troubleshooting loopback is to know that it is enabled.  It seems pretty obvious, I know, but often loopback is enabled by one administrator in one GPO without understanding that the setting will impact all computers that apply the GPO.  This gets back to Part 1 of this blog . . . loopback processing is a computer configuration setting. 

    Take a deep cleansing breath and say it again . . . Loopback processing is a computer configuration setting.  :-)

    Everyone feels better now, right?  The loopback setting configures a registry value on the computer to which it applies.  The Group Policy engine reads this value and changes how it builds the list of applicable user policies based on the selected loopback mode.
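If you just want to see what mode a computer ended up with, you can read that value directly. A small sketch; the UserPolicyMode value name (1 = Merge, 2 = Replace) is my recollection of where the policy lands, so treat it as an assumption and confirm with GPResult:

# Errors if loopback has never been configured on this computer
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\System' -Name UserPolicyMode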

    The easiest way to know if loopback might be causing troubles with your policy processing is to collect a GPResult /h from the computer.  Since loopback is a computer configuration setting, you will need to run GPResult from an administrative command prompt.
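For reference, the command looks like this (the output path is just an example); run it elevated so the computer half of the data is included:

gpresult /h C:\Temp\loopback-report.html
gpresult /r /scope:computer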

     


     

    The good news is that the GPResult output will show you the winning GPO with loopback enabled.  Unfortunately, it does not list all GPOs with loopback configured, just the one with the highest precedence. 

    If your OU structure separates users from computers, the GPResult output can also help you find GPOs containing user settings that are linked to computer OUs.  Look for GPOs linked to computer OUs under the Applied GPOs section of the User Details of the GPResult output. 

    Below is an example of the output of the GPResult /h command from a Windows Server 2012 member server.  The layout of the report has changed slightly going from Windows Server 2008 to Windows Server 2012, so your results may look different, but the same information is provided by previous versions of the tool.  Notice that the link location includes the Computers OU, but we are in the User Details section of the report.  This is a good indication that we have loopback enabled in a GPO linked in the path of the computer account. 

[Screenshot: GPResult /h report, User Details section, showing an applied GPO linked to the Computers OU]

       
    Understand the desired state vs. the actual state

    This one also sounds obvious, but in order to troubleshoot you have to know and understand exactly which settings you are expecting to apply to the user.  This is harder than it sounds.  In a lab environment where you control everything, it's pretty easy to keep track of desired configuration.  However, in a production environment with potentially multiple delegated GPO admins, this is much more difficult. 

    GPResult gives us the actual state, but if you don't know the desired state at the setting level, then you can't reasonably determine if loopback is configured correctly (meaning you have WMI filters and/or security filtering set properly to achieve your desired configuration). 

         
    Review security filtering on GPOs

    Once you determine which GPOs or which settings are not applying as expected, then you have a place to start your investigation. 

    In our experience here in support, loopback processing issues usually come down to incorrect security filtering, so rule that out first.

    This is where things get tricky . . . If you are configuring custom security filtering on your GPOs, loopback can get confusing quickly.  As a general rule, you should try to keep your WMI and security filtering as simple as possible - but ESPECIALLY when loopback is involved.  You may want to consider temporarily unlinking any WMI filters for troubleshooting purposes.  The goal is to ensure the policies you are expecting to apply are actually applying.  Once you determine this, then you can add your WMI filters back into the equation.  A test environment is the best place to do this type of investigation.

    Setting up security filtering correctly depends on how you architect your policies:

    1. Did you enable loopback in its own GPO or in a GPO with other computer or user settings?
2. Are you combining user settings and computer settings into the same GPO(s) linked to the computer's OU?

    The thing to keep in mind is that if you have what I would call "mixed use" GPOs, then your security filtering has to accommodate all of those uses.  This is only a problem if you remove Authenticated Users from the security filter on the GPO containing the user settings.  If you remove Authenticated Users from the security filter, then you have to think through which settings you are configuring, in which GPOs, to be applied to which computers and users, in which loopback mode....

    Ouch.  That's LOTS of thinking!

    So, unless that sounds like loads of fun to you, it’s best to keep WMI and security filtering as simple as possible.  I know that you can’t always leave Authenticated Users in place, but try to think of alternative solutions before removing it when loopback is involved. 

    Now to the part that everyone always asks about once they realize their current filter is wrong – How the heck should I configure the security filter?!

     

    Security filtering requirements:

1. The computer account must have READ and APPLY permissions to the GPO that contains the loopback configuration setting.
2. If you are configuring user settings in the same GPO as computer settings, then the user and computer accounts will both need READ and APPLY permissions to the GPO since there are portions of the GPO that are applicable to both.
3. If the user settings are in a separate GPO from the loopback configuration setting (#1 above) and any other computer settings (#2 above), then the GPO containing the user settings requires the following permissions:

     

Merge mode requirements (Vista+):

• User account: READ and APPLY (these are the default permissions that are applied when you add users to the Security Filtering section of the GPO on the Scope tab in GPMC)
• Computer account: minimum of READ permission

Replace mode requirements:

• User account: READ and APPLY (the same defaults as above)
• Computer account: no permissions are required

      

     
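If you do end up customizing filters, the GroupPolicy module can set these permissions consistently. A sketch for the merge-mode layout above; the GPO and group names are hypothetical:

Import-Module GroupPolicy

# Computers need READ and APPLY on the GPO carrying the loopback setting
Set-GPPermission -Name "Loopback-Merge" -TargetName "Kiosk Computers" -TargetType Group -PermissionLevel GpoApply

# For merge mode, computers only need READ on the GPO holding the user settings
Set-GPPermission -Name "Kiosk User Settings" -TargetName "Kiosk Computers" -TargetType Group -PermissionLevel GpoRead

# Users still need READ and APPLY on the user settings GPO
Set-GPPermission -Name "Kiosk User Settings" -TargetName "Kiosk Users" -TargetType Group -PermissionLevel GpoApply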

    Tools for Troubleshooting

    The number one tool for troubleshooting loopback processing is your GPRESULT output and a solid understanding of the security filtering requirements for loopback processing in your GPO architecture (see above).

    The GPRESULT will tell you which GPOs applied to the user.  If a specific GPO failed to apply, then you need to review the security filtering on that GPO and verify:

• The user has READ and APPLY permissions
• Depending on your GPO architecture, the computer may need READ, or it may need READ and APPLY if you combined computer and user settings in the same GPO.

The same strategy applies if you have mysterious policy settings applying after configuring loopback and you are not sure why.  Use your GPRESULT output to identify which GPO(s) the policy settings are coming from and then review the security filtering of those GPOs.

The Group Policy Operational logs from the computer will also tell you which GPOs were discovered and applied, but this is the same information that you will get from the GPRESULT.

    Recommendations for using loopback

    After working my fair share of loopback-related cases, I've collected a list of recommendations for using loopback.  This isn’t an official list of "best practices", but rather just some personal recommendations that may make your life easier.  ENJOY!

    I'll start with what is fast becoming my mantra: Keep it Simple.  Pretty much all of my recommendations can come back to this point.

     

    1. Don't use loopback  :-) 

    OK, I know, not realistic.  How about this . . . Don't use loopback unless you absolutely have to. 

    • I say this not because there is something evil about loopback, but rather because loopback complicates how you think about Group Policy processing.  Loopback tends to be configured and then forgotten about until you start seeing unexpected results. 

    2. Use a separate GPO for the loopback setting; ONLY include the loopback setting in this GPO, and do not include the user settings.  Name it Loopback-Merge or Loopback-Replace depending on the mode.

• This makes loopback very easy to identify in both the GPMC and in your GPRESULT output.  In the GPMC, you will be able to see where the GPO is linked and the mode without needing to view the settings or details of any GPOs.  Your GPRESULT output will clearly list the loopback policy in the list of applied policies and you will also know the loopback mode, without digging into the report. Using a separate policy also allows you to manage the security of the loopback GPO separately from the security on the GPOs containing the user settings.

    3. Avoid custom security filtering if you can help it. 

    • Loopback works without a hitch if you leave Authenticated Users in the security filtering of the GPO.  Removing Authenticated Users results in a lot more work for you in the long run and makes troubleshooting undesired behaviors much more complicated.

    4. Don't enable loopback in a GPO linked at the domain level!

    • This will impact your Domain Controllers.  I wouldn't be including this warning, if I hadn't worked several cases where loopback had been inadvertently applied to Domain Controllers.  Again, there isn’t anything inherently wrong with applying loopback on Domain Controllers.  It is bad, however, when loopback unexpectedly applies to Domain Controllers.
    • If you absolutely MUST enable loopback in a GPO linked at the domain level, then block inheritance on your Domain Controllers OU.  If you do this, you will need to link the Default Domain Policy back to the Domain Controllers OU making sure to have the precedence of the Default Domain Controllers policy higher (lower number) than the Domain Policy.
• In general, be careful with all policies linked at the domain level.  Yes, it may be "simpler" to manage most policy at the domain level, but it can lead to lazy administration practices and make it very easy to forget about the impact of seemingly minor policy changes on your DCs.
• Even if you are editing the security filtering to specific computers, it is still dangerous to have the loopback setting in a GPO linked at the domain level.  What if someone mistakenly modifies the security filtering to "fix" some other issue?
      • TEST, TEST, TEST!!!  It’s even more important to test when you are modifying GPOs that impact domain controllers.  Making a change at the domain level that negatively impacts a domain controller can be career altering.  Even if you have to set up a test domain in virtual machines on your own workstation, find a way to test.

    5. Always test in a representative environment prior to deploying loopback in production.

    • Try to duplicate your production GPOs as closely as possible.  Export/Import is a great way to do this.
    • Enabling loopback almost always surfaces some settings that you weren't aware of.  Unless you are diligent about disabling unused portions of GPOs and you perform periodic audits of actual configuration versus documented desired state configuration, there will typically be a few settings that are outside of your desired configuration. 
    • Duplicating your production policies in a test environment means you will find these anomalies before you make the changes in production.

     

    That’s all folks!  You are now ready to go forth and conquer all of those loopback policies!

     

    Kim “1.21 Gigawatts!!” Nichols


    Two lines that can save your AD from a crisis

    Editor's note:  This is the first of very likely many "DS Quickies".  "Quickies" are shorter technical blog posts that relate hopefully-useful information and concepts for you to use in administering your networks.  We thought about doing these on Twitter or something, but sadly we're still too technical to be bound by a 140-character limit :-)

    For those of you who really look forward to the larger articles to help explain different facets of Windows, Active Directory, or troubleshooting, don't worry - there will still be plenty of those too. 

     

    Hi! This is Gonzalo writing to you from the support team for Latin America.

    Recently we got a call from a customer, where one of the administrators accidentally executed a script that was intended to delete local users… on a domain controller. The result was that all domain users were deleted from the environment in just a couple of seconds. The good thing was that this customer had previously enabled Recycle Bin, but it still took a couple of hours to recover all users as this was a very large environment. This type of issue is something that comes up all the time, and it’s always painful for the customers who run into it. I have worked many cases where the lack of proper protection to objects caused a lot of issues for customer environments and even in some cases ended up costing administrators their jobs, all because of an accidental click. But, how can we avoid this?

If you take a look at the properties of any object in Active Directory, you will notice a checkbox named "Protect object from accidental deletion" under the Object tab. When this is enabled, permissions are set to deny deletion of this object to Everyone.

[Screenshot: the Object tab showing the "Protect object from accidental deletion" checkbox]

     

With the exception of Organizational Units, this setting is not enabled by default on objects in Active Directory; when creating an object, it needs to be set manually. The challenge is how to easily enable this on thousands of objects.

    ANSWER!  Powershell!

    Two simple PowerShell commands will enable you to set accidental deletion protection on all objects in your Active Directory. The first command will set this on any users or computers (or any object with value user on the ObjectClass attribute). The second command will set this on any Organizational Unit where the setting is not already enabled.

     

    Get-ADObject -filter {(ObjectClass -eq "user")} | Set-ADObject -ProtectedFromAccidentalDeletion:$true

    Get-ADOrganizationalUnit -filter * | Set-ADObject -ProtectedFromAccidentalDeletion:$true

     

    Once you run these commands, your environment will be protected against accidental (or intentional) deletion of objects.
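If you want to check coverage before or after running those commands, something like this (just a sketch) lists any OUs that are still unprotected:

Get-ADOrganizationalUnit -Filter * -Properties ProtectedFromAccidentalDeletion |
    Where-Object { -not $_.ProtectedFromAccidentalDeletion } |
    Select-Object DistinguishedName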

    Note: As a proof of concept, I tested the script that my customer used with the accidental deletion protection enabled and none of the objects in my Active Directory environment were deleted.

     

    Gonzalo “keep your job” Reyna


    Monthly Mail Sack: Yes, I Finally Admit It Edition

    Heya folks, Ned here again. Rather than continue the lie that this series comes out every Friday like it once did, I am taking the corporate approach and rebranding the mail sack. Maybe we’ll have the occasional Collector’s Edition versions.

    This week month, I answer your questions on:

    Let’s incentivize our value props!

    Question

    Everywhere I look, I find documentation saying that when Kerberos skew exceeds five minutes in a Windows forest, the sky falls and the four horsemen arrive.

    I recall years ago at a Microsoft summit when I brought that time skew issue up and the developer I was speaking to said no, that isn't the case anymore, you can log on fine. I recently re-tested that and sure enough, no amount of skew on my member machine against a DC prevents me from authenticating.

    Looking at the network trace I see the KRB_APP_ERR_SKEW response for the AS REQ which is followed by breaking down of the kerb connection which is immediately followed by reestablishing the kerb connection again and another AS REQ that works just fine and is responded to with a proper AS REP.

    My first question is.... Am I missing something?

    My second question is... While I realize that third party Kerb clients may or may not have this functionality, are there instances where it doesn't work within Windows Kerb clients? Or could it affect other scenarios like AD replication?

    Answer

    Nope, you’re not missing anything. If I try to logon from my highly-skewed Windows client and apply group policy, the network traffic will look approximately like:

Frame  Source → Destination  Packet Data Summary

1  Client → DC   AS Request Cname: client$ Realm: CONTOSO.COM Sname:
2  DC → Client   KRB_ERROR - KRB_AP_ERR_SKEW (37)
3  Client → DC   AS Request Cname: client$ Realm: CONTOSO.COM Sname: krbtgt/CONTOSO.COM
4  DC → Client   AS Response Ticket[Realm: CONTOSO.COM, Sname: krbtgt/CONTOSO.COM]
5  Client → DC   TGS Request Realm: CONTOSO.COM Sname: cifs/DC.CONTOSO.COM
6  DC → Client   KRB_ERROR - KRB_AP_ERR_SKEW (37)
7  Client → DC   TGS Request Realm: CONTOSO.COM Sname: cifs/DC.CONTOSO.COM
8  DC → Client   TGS Response Cname: client$

When your client sends a time stamp that is outside the range of Maximum tolerance for computer clock synchronization, the DC comes back with that KRB_AP_ERR_SKEW error – but it also contains an encrypted copy of its own time stamp. The client uses that to create a valid time stamp to send back. This doesn't decrease security in the design because we are still using encryption and requiring knowledge of the secrets, plus there is still only – by default – 5 minutes for an attacker to break the encryption and start impersonating the principal or attempt replay attacks. Which is not feasible with even XP's 11 year old cipher suites, much less Windows 8's.

This isn't some Microsoft wackiness either – RFC 4120 states:

If the server clock and the client clock are off by more than the policy-determined clock skew limit (usually 5 minutes), the server MUST return a KRB_AP_ERR_SKEW. The optional client's time in the KRB-ERROR SHOULD be filled out.

    If the server protects the error by adding the Cksum field and returning the correct client's time, the client SHOULD compute the difference (in seconds) between the two clocks based upon the client and server time contained in the KRB-ERROR message.

    The client SHOULD store this clock difference and use it to adjust its clock in subsequent messages. If the error is not protected, the client MUST NOT use the difference to adjust subsequent messages, because doing so would allow an attacker to construct authenticators that can be used to mount replay attacks.

    Hmmm… SHOULD. Here’s where things get more muddy and I address your second question. No one actually has to honor this skew correction:

    1. Windows 2000 didn’t always honor it. But it’s dead as fried chicken, so who cares.
    2. Not all third parties honor it.
    3. Windows XP and Windows Server 2003 do honor it, but there were bugs that sometimes prevented it (long gone, AFAIK). Later Windows OSes do of course and I know of no regressions.
    4. If the clock of the client computer is faster than the clock time of the domain controller plus the lifetime of Kerberos ticket (10 hours, by default), the Kerberos ticket is invalid and auth fails.
5. Some non-client logon application scenarios enforce the strict skew tolerance and don't care to adjust, because of other time needs tied to Kerberos and security. AD replication is one of them – event LSASRV 40960 with extended error 0xC0000133 comes to mind in this scenario, as does trying to run DSSite.msc "replicate now" and getting back error 0x576 "There is a time and / or date difference between the client and the server." I have recent case evidence of Dcpromo enforcing the 5 minutes with Kerberos strictly, even in Windows Server 2008 R2, although I have not personally tried to validate it. I've seen it with appliances and firewalls too.

    With that RFC’s indecisiveness and the other caveats, we beat the “just make sure it’s no more than 5 minutes” drum in all of our docs and here on AskDS. It’s too much trouble to get into what-ifs.
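If you want to see how far off a given machine actually is from a DC, w32tm will chart it for you (the DC name below is just an example):

w32tm /stripchart /computer:DC01.contoso.com /samples:5 /dataonly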

    We have a KB tucked away on this here but it is nearly un-findable.

    Awesome question.

    Question

    I’ve found articles on using Windows PowerShell to locate all domain controllers in a domain, and even all GCs in a forest, but I can’t find one to return all DCs in a forest. Get-AdDomainController seems to be limited to a single domain. Is this possible?

    Answer

    It’s trickier than you might think. I can think of two ways to do this; perhaps commenters will have others. The first is to get the domains in the forest, then find one domain controller in each domain and ask it to list all the domain controllers in its own domain. This gets around the limitation of Get-AdDomainController for a single domain (single line wrapped).

    (get-adforest).domains | foreach {Get-ADDomainController -discover -DomainName $_} | foreach {Get-addomaincontroller -filter * -server $_} | ft hostname

The second is to go directly to the native .NET AD DS forest class to return the domains for the forest, then loop through each one returning the domain controllers (single line wrapped).

    [system.directoryservices.activedirectory.Forest]::GetCurrentForest().domains | foreach {$_.DomainControllers} | foreach {$_.hostname}

    This also lead to updated TechNet content. Good work, Internet!

    Question

    Hi, I've been reading up on RID issuance management and the new RID Master changes in Windows Server 2012. They still leave me with a question, however: why are RIDs even needed in a SID? Can't the SID be incremented on it's own? The domain identifier seems to be an adequately large number, larger than the 30-bit RID anyway. I know there's a good reason for it, but I just can't find any material that says why there are separate domain ID and relative ID in a SID.

    Answer

    The main reason was a SID needs the domain identifier portion to have a contextual meaning. By using the same domain identifier on all security principals from that domain, we can quickly and easily identify SIDs issued from one domain or another within a forest. This is useful for a variety of security reasons under the hood.

    That also allows us a useful technique called “SID compression”, where we want to save space in a user’s security data in memory. For example, let’s say I am a member of five domain security groups:

    DOMAINSID-RID1
    DOMAINSID-RID2
    DOMAINSID-RID3
    DOMAINSID-RID4
    DOMAINSID-RID5

    With a constant domain identifier portion on all five, I now have the option to use one domain SID portion on all the other associated ones, without using all the memory up with duplicate data:

    DOMAINSID-RID1
    “-RID2
    “-RID3
    “-RID4
    “-RID5

The consistent domain portion also fixes a big problem: if all of the SIDs held no special domain context, keeping track of where they were issued from would be a much bigger task. We'd need some sort of big master database ("The SID Master"?) in an environment that understood all forests and domains and local computers and everything. Otherwise we'd have a higher chance of duplication through differing parts of a company. Since the domain portion of the SID is unique and the RID portion is an unsigned integer that only climbs, it's pretty easy for RID masters to take care of that case in each domain.
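You can see both pieces with the AD PowerShell module; the user name below is hypothetical:

# The domain identifier shared by every security principal in the domain
(Get-ADDomain).DomainSID.Value

# A specific principal's SID; the final dash-delimited number is its RID
(Get-ADUser "jsmith").SID.Value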

    You can read more about this in coma-inducing detail here: http://technet.microsoft.com/en-us/library/cc778824.aspx.

    Question

    When I want to set folder and application redirection for our user in different forest (with a forest trust) in our Remote Desktop Services server farm, I cannot find users or groups from other domain. Is there a workaround?

    Answer

The Object Picker in this case doesn't allow you to select objects from the other forest – this is a limitation of the UI that the Folder Redirection folks put in place. They write their own FR GP management tools, not the GP team.

    Windows, by default, does not process group policy from user logon across a forest—it automatically uses loopback Replace.  Therefore, you can configure a Folder Redirection policy in the resource domain for users and link that policy to the OU in the domain where the Terminal Servers reside.  Only users from a different forest should receive the folder redirection policy, which you can then base on a group in the local forest.

    Question

    Does USMT support migrating multi-monitor settings from Windows XP computers, such as which one is primary, the resolutions, etc.?

    Answer

USMT 4.0 does not support migrating any monitor settings from any OS to any OS (screen resolution, monitor layout, multi-monitor, etc.). Migrating hardware settings and drivers from one computer to another is dangerous, so USMT does not attempt it. I strongly discourage you from trying to make this work through custom XML for the same reason – you may end up with unusable machines.

    Starting in USMT 5.0, a new replacement manifest – Windows 7 to Windows 7, Windows 7 to Windows 8, or Windows 8 to Windows 8 only – named “DisplayConfigSettings_Win7Update.man” was added. For the first time in USMT, it migrates:

    <pattern type="Registry">HKLM\System\CurrentControlSet\Control\GraphicsDrivers\Connectivity\* [*]</pattern>
    <pattern type="Registry">HKLM\System\CurrentControlSet\Control\GraphicsDrivers\Configuration\* [*]</pattern>

This is OK on Win7 and Win8 because the OS itself knows what valid and invalid are in that context and discards/fixes things as necessary. That is, this is safe only because USMT doesn't actually do anything but copy some values and relies on the OS to fix things after migration is over.

    Question

    Our proprietary application is having memory pressure issues and it manifests when someone runs gpupdate or waits for GP to refresh; some times it’s bad enough to cause a crash.  I was curious if there was a way to stop the policy refresh from occurring.

    Answer

Only in Vista and later does preventing total refresh become vaguely possible; you could prevent the group policy service from running at all (no, I am not going to explain how). The internet is filled with thousands of people repeating a myth that preventing GP refresh is possible with an imaginary registry value on Win2003/XP – it isn't.

    What you could do here is prevent background refresh altogether. See the policies in the “administrative templates\system\group policy” section of GP:

1. You could enable the policy "group policy refresh interval for computers" and apply it to that one server. You could set the background refresh interval to 45 days (the max). That way it would be far more likely to reboot in the meantime for a patch Tuesday or whatever and never have a chance to refresh automatically.

    2. You could also enable each of the group policy extension policies (ex: “disk quota policy processing”, “registry policy processing”) and set the “do not apply during periodic background processing” option on each one.  This may not actually prevent GPUPDATE /FORCE though – each CSE may decide to ignore your background refresh setting; you will have to test, as this sounds boring.

Keep in mind for #1 that there are two of those background refresh policies – one per user ("group policy refresh interval for users"), one per computer ("group policy refresh interval for computers"). They both operate in terms of each boot up or each interactive logon, on a per computer/per user basis respectively. I.e. if you log on as a user, you apply your policy. Policy will not refresh for 45 days for that user if you were to stay logged on that whole time. If you log off at 22 days and log back on, you apply policy again, because that is not a refresh – it's interactive logon foreground policy application.

    Ditto for computers, only replace “logon” with “boot up”. So it will apply the policy at every boot up, but since your computers reboot daily, never again until the next bootup.
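If you want to script option #1, the refresh interval policy boils down to a registry value you can set in a GPO scoped to that one server. This is a sketch only: the GPO name is an example and the GroupPolicyRefreshTime value name (in minutes, 64800 = 45 days) is my assumption of where the administrative template lands, so verify it in a test GPO first.

Set-GPRegistryValue -Name "Slow-GP-Refresh" -Key "HKLM\SOFTWARE\Policies\Microsoft\Windows\System" -ValueName "GroupPolicyRefreshTime" -Type DWord -Value 64800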

    After those thoughts… get a better server or a better app. :)

    Question

    I’m testing Virtualized Domain Controller cloning in Windows Server 2012 on Hyper-V and I have DCs with snapshots. Bad bad bad, I know, but we have our reasons and we at least know that we need to delete them when cloning.

    Is there a way to keep the snapshots on the source computer, but not use VM exports? I.e. I just want the new copied VM to not have the old source machine’s snapshots.

    Answer

    Yes, through the new Hyper-V disk management Windows PowerShell cmdlets or through the management snap-in.

    Graphical method

    1. Examine the settings of your VM and determine which disk is the active one. When using snapshots, it will be an AVHD/X file.


    2. Inspect that disk and you see the parent as well.


    3. Now use the Edit Disk… option in the Hyper-V manager to select that AVHD/X file:


    4. Merge the disk to a new copy:



    Windows PowerShell method

    Much simpler, although slightly counter-intuitive. Just use:

    Convert-VHD

    For example, to export the entire chain of a VM's disk snapshots and parent disk into a new single disk with no snapshots named DC4-CLONED.VHDX:

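    Something along these lines, where the paths and file names are purely illustrative:

    # Hedged sketch: merge the whole snapshot chain into one new dynamic disk with no snapshots
    Convert-VHD -Path 'D:\VMs\DC4\DC4_snapshot.avhdx' `
                -DestinationPath 'D:\VMs\DC4-CLONED.VHDX' `
                -VHDType Dynamic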

    Violin!

    You don’t actually have to convert the disk type in this scenario (note how I went from dynamic to dynamic). There is also Merge-VHD for more complex differencing disk and snapshot scenarios, but it requires some extra finagling and disk copying, and isn’t usually necessary. The graphical merge option works well there too.

    As a side note, the original Understand And Troubleshoot VDC guide now redirects to TechNet. Coming soon(ish) is an RTM-updated version of the original guide, in web format, with new architecture, troubleshooting, and other info. I robbed part of my answer above from it – as you can tell by the higher quality screenshots than you usually see on AskDS – and I’ll be sure to announce it. Hard.

    Question

    It has always been my opinion that if a DC with a FSMO role went down, the best approach is to seize the role on another DC, rebuild the failed DC from scratch, then transfer the role back. It’s also been my opinion that as long as you have more than one DC, and there has not been any data loss, or corruption, it is better to not restore.

    What is the Microsoft take on this?

    Answer

    This is one of those “it depends” scenarios:

    1. The downside to restoring from (usually proprietary) backup solutions is that the restore process just isn’t something most customers test and work out the kinks on until it actually happens; tons of time is spent digging out the right tapes, finding the right software, looking up the restore process, contacting that vendor, etc. Oftentimes a restore doesn’t work at all, so all the attempts are just wasted effort. I freely admit that my judgment is tainted by my MS Support experience here – customers do not call us to say how great their backups worked, only that they have a down DC and they can’t get their backups to restore.

    The upside is if your recent backup contained local changes that had never replicated outbound due to latency, restoring them (even non-auth) still means that those changes will have a chance to replicate out. E.g. if someone changed their password or some group was created on that server and captured by the backup, you are not losing any changes. It also includes all the other things that you might not have been aware of – such as custom DFS configurations, operating as a DNS server that a bunch of machines were solely pointed to, 3rd party applications pointed directly to the DC by IP/Name for LDAP or PDC or whatever (looking at you, Open Source software!), etc. You don’t have to be as “aware”, per se.

    2. The downside to seizing the FSMO roles and cutting your losses is the converse of my previous point around latent changes; those objects and attributes that could not replicate out but were caught by the backup are gone forever. You also might miss some of those one-offs where someone was specifically targeting that server – but you will hear from them, don’t worry; it won’t be too hard to put things back.

    The upside is you get back in business much faster in most cases; I can usually rebuild a Win2008 R2 server and make it a DC before you even find the guy that has the combo to the backup tape vault. You also don’t get the interruptions in service for Windows from missing FSMO roles, such as DCs that were low on their RID pool and now cannot retrieve more (this only matters with default, obviously; some customers raise their pool sizes to combat this effect). It’s typically a more reliable approach too – after all, your backup may contain the same time bomb of settings or corruption or whatever that made your DC go offline in the first place. Moreover, the backup is unlikely to contain the most recent changes regardless – backups usually run overnight, so any un-replicated originating updates made during the day are going to be nuked in both cases.

    For all these reasons, we in MS Support generally recommend a rebuild rather than a restore, all things being equal. Ideally, you fix the actual server and do neither!

    As a side note, restoring the RID master used to cause issues that we first fixed in Win2000 SP3. This has unfortunately lived on as a myth that you cannot safely restore the RID master. Nevertheless, if someone impatiently seizes that role and then someone else restores that backup, you get a new problem where you cannot issue RIDs anymore. Your DC will also refuse to claim role ownership with a restored RID Master (or any FSMO role) if your restored server has an AD replication problem that prevents at least one good replication with a partner. Keep those in mind for planning no matter how the argument turns out!

    Question

    I am trying out Windows Server 2012 and its new Minimal Server Interface. Is there a way to use WMI to determine if a server is running with a Full Installation, Core Installation, or a Minimal Shell installation?

    Answer

    Indeed, although it hasn’t made its way to MSDN quite yet. The Win32_ServerFeature class returns a few new properties in our latest operating system. You can use WMIC or Windows PowerShell to browse the installed ones. For example:

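    A minimal sketch of that kind of query (the ID-to-installation-type mapping it relies on is explained below):

    # Hedged sketch: list installed server features, then roughly classify the installation type
    Get-WmiObject -Class Win32_ServerFeature | Sort-Object ID | Format-Table ID, Name -AutoSize

    $ids = (Get-WmiObject -Class Win32_ServerFeature).ID
    if ($ids -contains 99)      { 'Full installation (Server Graphical Shell)' }
    elseif ($ids -contains 478) { 'Minimal Server Interface' }
    else                        { 'Server Core' }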

    The “99” ID is Server Graphical Shell, which means, in practical terms, “Full Installation”. If 99 alone is not present, that means it’s a minshell server. If the “478” ID is also missing, it’s a Core server.

    E.g. if you wanted to apply some group policy that only applied to MinShell servers, you’d set your query to return true if 99 was not present but 478 was present.

    Other Stuff

    Speaking of which, Windows Server 2012 General Availability is September 4th. If you manage to miss the run up, you might want to visit an optometrist and/or social media consultant.

    Stop worrying so much about the end of the world and think it through.

    So awesome:


    And so fake :(

    If you are married to a psychotic Solitaire player who poo-poo’ed switching totally to the Windows 8 Consumer Preview because they could not get their mainline fix of card games, we have you covered now in Windows 8 RTM. Just run the Store app and swipe for the Charms Bar, then search for Solitaire.


    It’s free and exactly 17 times better than the old in-box version:


    OMG Lisa, stop yelling at me! 

    Is this the greatest geek advert of all time?


    Yes. Yes it is.

    When people ask me why I stopped listening to Metallica after the Black Album, this is how I reply:

    Hetfield in Milan

    Ride the lightning Mercedes

    We have quite a few fresh, youthful faces here in MS Support these days and someone asked me what “Mall Hair” was when I mentioned it. If you graduated high school between 1984 and 1994 in the Midwestern United States, you already know.

    Finally – I am heading to Sydney in late September to yammer in-depth about Windows Server 2012 and Windows 8. Anyone have any good ideas for things to do? So far I’ve heard “bridge climb”, which is apparently the way Australians trick idiot tourists into paying for death. They probably follow it up with “funnel-web spider petting zoo” and “swim with the saltwater crocodiles”. Lunatics.

    Until next time,

    - Ned “I bet James Hetfield knows where I can get a tropical drink by the pool” Pyle


    One of us: What it was like to interview for a support role at Microsoft

    Hello, Kim here again. We get many questions about what to expect when interviewing at Microsoft. I’m coming up on my two year anniversary at Microsoft and I thought I would share my experience in the hope that it might help you if you are interested in applying to Microsoft Support; if nothing else, there is some educational and entertainment value in reading about me being interviewed by Ned. :)

    Everyone at Microsoft has a unique story to tell about how they were hired. On the support side of Microsoft, many of us were initially hired as contractors and later offered a full-time position. Others were college hires, starting our first real jobs here. It seems some have just been here forever. Then there are a few of us, myself included, that were industry hires. Over the years, I've submitted my résumé to Microsoft a number of times. I have always wanted to work for Microsoft, but never really expected to be contacted since there aren’t many Microsoft positions available in central Indiana (where I’m from). I had a good job and wasn’t particularly unhappy in it, but the opportunity to move up was limited in my current role. I casually looked for a new position for a couple of months and had been offered one job, but it just didn't feel like the right fit. Around the same time, I submitted my résumé to Microsoft for a Support Engineer position on the Directory Services support team in Charlotte. Much to my surprise, I received an email that began a wild ride of excitement, anxiety, anticipation, and fear that ultimately resulted in my moving from the corn fields of the Midwest (there is actually more than corn in Indiana, btw) to the land of sweet tea.

    I never expected that Microsoft would contact me due to the sheer volume of résumés they receive daily and the fact that the position was in Charlotte and I was not. About a week after I submitted my résumé, I received an email requesting a phone interview with the Directory Services team. I, of course, responded immediately and a phone interview was set up for three days from the current date. When I submitted my résumé, I didn’t think I’d be contacted, and if I was, I definitely thought I’d have more than three days to prepare! The excitement lasted about 30 seconds before the reality of the situation set in . . . I was going to have an interview with Microsoft in three days! Just to add to the anxiety level, Ned Pyle (cue the Halloween theme) was going to do my phone screen!

    Preparation - Phone Screen

    I didn't know where to start to prepare. As with any phone screen, you have no idea what types of questions you will be asked. Would it be a technical interview; would it just be a review of my résumé and my qualifications? I didn’t know what to expect. I assumed that since Ned was calling me that there would be some technical aspect to it, but I wasn’t sure. There’s no wiki article on how to interview at Microsoft. :) On top of that, I'd heard rumors of questions about manhole covers and all kinds of other strange problem-solving questions. This was definitely going to be more difficult than any other interview I’d ever had.

    Once I got over the initial panic, I decided I needed to start with the basics. This was a position for the Directory Services team, so I dug out all of the training books from the last eight years of working with Active Directory and put together a list of topics I knew I needed to review. I also did a Bing search on Active Directory Interview questions and I found a couple of lists of general AD questions. Finally, I went to the source, the AskDS blog, and searched for information on "hiring" and found a link to Post-Graduate AD Studies.

    My resource list looked something like this:

    1. Post-Graduate AD Studies (thanks, Ned)

    2. O'Reilly Active Directory book (older version)

    3. Training manual from Active Directory Troubleshooting course that was offered by MCS many years ago

    4. Training manuals from a SANS SEC505 Securing Windows course

    5. MS Press Active Directory Pocket Consultant

    6. MS Press Windows Group Policy Guide

    7. AD Interview Questions Bing search

       a) http://www.petri.co.il/mcse_system_administrator_active_directory_interview_questions.htm

       b) http://www.petri.co.il/mcse-system-administrator-windows-server-2008-r2-active-directory-interview-questions.htm

    I only had three days to study, so I decided to start with reviewing the areas that I was weakest in and most comfortable with. For me, these were:

    1. PKI (ugh)

    2. AD Replication (good)

    3. Kerberos (ick)

    4. Authentication (meh)

    5. Group Policy (very good)

    The SANS manuals had good slides and decent descriptions, so that is where I started. Everyone has different levels of experience and different study habits. What works for me is writing. If I write something down, it seems to solidify it in my mind. I reviewed each of the topics above and focused on writing down the parts either that were new to me or that I needed to focus on in more detail. This approach meant that I was reading both the topics I already understood (as a refresher) and writing down the topics I needed to work on. Next, I went through the various lists of AD interview questions I had found and made sure that I could at least answer all of the questions at a high level. This involved doing some research for some of the questions. The websites with the lists of questions were a good resource because they didn’t give me the answers. I didn’t just want to be able to recite some random acronyms. I wanted to understand, at least at a high level, what all of the basic concepts were and be able to relate them to one another. I knew that I was going to need to have broad knowledge of many topics and then deep knowledge in others.

    The worst part of all of this studying was that I didn't have enough lead-time to request time off from work to focus on it. So, while I was eating lunch, I was studying. While I was waiting on servers to build, I was studying. While I was waiting on VMs to clone, guess what? I was studying. :) By the end of the three days of studying, I was pretty much a nervous wreck and ready for this phone screen to end.

    The Phone Screen

    This is where you'd like me to tell you what questions Ned asked me, but . . . that isn't going to happen. Bwahahaha. :-)

    What I can tell you about the interview is that it wasn't solely about rote knowledge, which is good since I had prepared for more than just how to spell AD & PKI. Knowing the high-level concepts was good; he asked a few random questions to see how far I could explain some of the technologies. It was more important to know what to do with this information and how to troubleshoot given what you know about a particular technology. If you can't apply the concepts to a real world scenario then the knowledge is useless. Throughout the interview, there were times where I couldn't come up with the right words or terms for something and I imagined Ned sitting there playing with his beard out of boredom.


    In those situations, I found Ned was awake and tried to help me through them or skipped to something else that eventually got me back to the part I’d been struggling with but this time with better results. For that, I was grateful and it helped me keep my nerves in check as well. While trying to answer the flood of questions and keep my nerves in check, I tried to keep a list of the topics we were discussing just in case I got a follow-up interview. Although I’d like to say that I totally rocked out the phone interview and that I’m awesome (ok, I’m pretty cool), I actually thought I’d done alright, but not necessarily well enough to get a follow-up interview. Overall, I didn’t feel like I had been able to come up with responses quickly enough and Ned guided me around a couple of topics before I finally understood what he was getting at a few more times than I would have liked.

    On-site interview scheduled - WOOT!

    Much to my own disbelief, I did receive that follow-up email to schedule an in-person interview down in sunny Charlotte, NC. Fortunately, I had a little more time to prepare, mainly due to the nature of an on-site interview that is out of state. Logistics were in my favor this time! As I recall, I had about two weeks between when I received notification of the on-site interview and the actual scheduled interview date. This was definitely better than the three days I had to prepare for the phone screen.

    With more time, I decided that I would take some days off work to focus on studying. Maybe this is extreme, but that is how important it was to me to get this job. I figured that this was my one shot to get this right and I was going to do everything I possibly could to ensure that I was as prepared as I could possibly be.

    This time, I started studying with the list of questions from my phone interview with Ned. I wanted to make sure that if Ned was in my face-to-face interview that I would be able to answer those questions the second time. Then I reviewed all of the questions and notes that I had prepared for my phone interview. Finally, I really started digging in on the Post-Graduate AD Studies from the AskDS blog. I take full responsibility for the small forest of trees I killed in printing all of this material off. I read as much as I could of each of the Core Technology Reading and then I chose three or four areas from the Post Graduate Technology Reading to dig into deeper.

    Obviously, I didn't study all day for two weeks. I'd read and then go for a short walk. As the time passed, I began to realize how long two weeks is. Having two weeks to prepare is awesome, but the stress of waking up every day knowing what you need to do and then dealing with the anxiety of just wanting it to be over is harder than I thought it would be. I tried to review my notes at least once a day and then read more of the in-depth content with the goal of ensuring that I had some relatively deep knowledge in some areas, knew the troubleshooting tools and processes, and for the areas I couldn’t go so deep into that I at least knew the lingo and how the pieces fit together. I certainly didn’t want to get all the way to Charlotte and have some basic question come at me and just sit there staring at the conference room table blankly. :-/

    By the time I was ready to leave for my interview, I knew that I’d done everything I could to prepare and I just had to hope that the hard work paid off and that my brain cells held out for another day.

    The On-site interview

    I arrived in Charlotte the evening before the interview. I studied on the flight and then a little the night before. Again, just reviewing my notes and the SANS guide on PKI and Kerberos. I tried not to overdo it. If I wasn't ready at this point, I never would be.

    I got to the site a little early that day, so I sat in the car and read more PKI and FRS notes. Then I took about 5 minutes and tried to relax and get my nerves under control (nice try).

    The interview itself was intense. It was scheduled for an hour, but by the time I got out of the conference room I’d been in there two and a half hours. There were engineers and managers from both Texas (video conference) and Charlotte in the room. The questions pretty much started where we had left off from the phone interview in terms of complexity. I didn’t get a gimme on the starting point. I think we went for about an hour before they took pity on me and let me get more caffeine and started loading me up on chocolate. By the time I got to the management portion of the interview, I was shaking pretty intensely (probably from all that soda and chocolate that they kept giving me) and I was glad that I’d brought copies of my résumé so I could remember the last 10 years of my work history.

    The thing that I appreciated most about the entire process was how understanding everyone was. They know how scary this can be and how nervous people are when they come in for an interview. Although I was incredibly nervous, everyone made me feel comfortable and I felt like they genuinely wanted me to succeed. The management portion of the interview was definitely easier, but they did ask some tough questions as well. I also made sure that I had come prepared with several questions of my own to ask them.

    When I finally walked out of the conference room, I felt like a train had hit me. Emotionally I was shot, physically I was somewhere between wired and exhausted. It was definitely the most grueling interview I’d ever experienced, but I knew that I’d done everything I could to prepare. The coolest part happened as I was escorted to my car. As we were finishing our formalities, my host got a phone call on his cell phone and it was for me. This was probably the weirdest thing that had ever happened to me at an interview. I took his cell phone and it was one of the managers who had participated in my interview; she was calling to let me know that they were going to make me an offer and wanted to tell me before I left so I wouldn’t be worried about it all the way home on the plane. Getting that phone call before I left was an amazing feeling. I’d just been through a grueling interview that I’d spent weeks (really my entire career) preparing for, and finding out my hard work had paid off was an unbelievable feeling. It didn’t become real until I got my blue badge a few days after my start date.

    Hindsight is 20/20

    Looking back at my career and my preparation for this role, is there anything that I would do differently to better prepare? Career-wise, I’d say that I did a good job of preparing for this role. I took increasingly more challenging roles from both a technical and a leadership perspective. I led projects that required me to be both the technical leader (designing, planning, testing, documenting a system) and a project leader (collaborating with other teams, managing schedules, reporting progress to management, dealing with road blocks and competing priorities). These experiences have given me insight and perspective on the environments and processes that my customers work with daily.

    If I could do anything differently, I’d say that I would have dug in a little deeper on technologies that I didn’t deal with as part of my roles. For instance, learning more about SQL and IIS or even Exchange would have helped me better understand to what degree my technologies are critical to the functionality of others. Often our support cases center on the integration of multiple technologies, so having a better understanding of those technologies can be beneficial.

    If you are newer to the industry, focusing on troubleshooting methodologies is a must. The job of support is to assist with troubleshooting in order to resolve technical issues. The entire interview process, from the phone-screen to the on-site interview, focused on my ability to be presented with a situation I am not familiar with and use my knowledge of technology and troubleshooting tools to isolate the problem. If you haven’t reviewed Mark Renoden’s post on Effective Troubleshooting, I highly recommend it. This is what being in support is all about.

    Just don’t be these guys

    So, what's it really like?

    Working in support at Microsoft is by far the most technically demanding role I’ve had during the course of my career. Every day is a new challenge. Every day you work on a problem you’ve never seen before. It’s a lot like working in an Emergency room at times. Systems are down, businesses are losing money, the pressure is high and the expectations are even higher. Fortunately, not all cases are critsits (severity A) and the people I work with are amazing. My row is comprised of some of the most intelligent but “unique” people I’ve ever worked with. In ten minutes on the row, you can participate in a conversation about how the code in Group Policy chooses a Domain Controller for writes and which MIDI rendition of “Jump” is the best (for the record, they are all bad). While the cases are difficult and the pressure is intense, the work environment allows us to be ourselves and we are never short on laughs.

    The last two years have been an incredible journey. I’ve learned more at Microsoft in two years than I did in five out in the industry. I get to work on some of the largest environments in the world and help people every day. While this isn't a prescription for how to prepare for an interview at Microsoft, it worked for me; and if you're crazy enough to want to work with Ned and the rest of us maybe it will work for you too. GOOD LUCK!

    - Kim “Office 2013 has amazing beard search capabilities” Nichols


    Updated Group Policy Search service

    Mike here with an important service announcement.  In June of 2010, guest poster Kapil Mehra introduced the Group Policy Search service.  The Group Policy Search (GPS) service is a web application hosted on Windows Azure, which enables you to search for registry-based Group Policy settings used in Windows operating systems.

    It’s a "plezz-shzaa" to announce that GPS version 1.1.4 is live at http://gps.cloudapp.net.  Version 1.1.4 includes registry-based policy settings from Windows 8 and Windows Server 2012, performance improvements, bug fixes, and a few little surprises.  It's the easiest way to search for a Group Policy setting. 

    So, the next time you need to search for a Group Policy setting, or want to know the registry key and value name that backs a particular policy setting-- don't look for an antiquated settings spreadsheet reference. Get your Group Policy Search on!!

    And, if you act now-- we'll throw in the Group Policy Search Windows Phone 7 application-- for free! That's right, take Group Policy Search with you on the go. What an offer! Group Policy Search and Group Policy Search Windows Phone 7 application -- for one low, low price -- FREE!  Act now and you'll get free shipping.

    This is Mike Stephens and "Ned Pyle" approves this message!


    Windows Server 2012 GA

    Hey folks, Ned here again to tell you what you probably already know: Windows Server 2012 is now generally available: 

    I don’t often recommend “vision” posts, but Satya Nadella – President of Server and Tools – explains why we made the more radical changes in Windows Server 2012. Rather than start with the opening line, I’ll quote from the finish:

    In the 1990s, Microsoft saw the need to democratize computing and made client/server computing available at scale, to customers of all sizes. Today, our goal is to do the same for cloud computing with Windows Server 2012.

    On a more personal note: Mike Stephens, Joseph Conway, Tim Quinn, Chuck Timon, Don Geddes, and I dedicated two years to understanding, testing, bug stomping, design change requesting, documenting, and teaching Windows Server 2012. Another couple dozen senior support folks – such as our very own Warren Williams - spent the last year working with customers to track down issues and get feedback. Your feedback. You will see things in Directory Services that were requested through this blog.

    Having worked on a number of pre-release products, this is the most Support involvement in any Windows operating system I have ever seen. When combined with numerous customer and field contributions, I believe that Windows Server 2012 is the most capable, dependable, and supportable product we’ve ever made. I hope you agree.

    - Ned “also, any DS issues you find were missed by Mike, not me” Pyle


    Let the Blogging begin…

    Hello AskDS Readers. Mike here again. If you notice, Ned posted one of our first Windows Server 2012 RTM blogs a while back (Managing RID Issuance in Windows Server 2012). Yes friends, the gag order has been lifted and we are allowed to spout mountains of technical goodness about Windows Server 2012 and Windows 8.

    "So much time and so little to do. Wait a minute. Strike that. Reverse it." Windows Server 2012 has many cool features that Ned and I have been waiting to share with you. Here is a 50,000-foot view of the technologies and features we are going to blog in the next few weeks and months-- in no specific order.

    I'll start by highlighting some of the changes with security, PKI, authentication, and authorization. The Windows Server 2012 Certificate Services role has a few feature changes that should delight many of the certificate administrators out there. With new installation, deployment, and improved configuration-- it's probably the easiest certificate authority to configure.

    Windows Server 2012 authentication is a healthy technology with a ton of technical goo just seeping at the seams; starting with the mac-daddy of them all-- Kerberos. In a few weeks, we will begin publishing the first of many installments of Kerberos changes in Windows 8/Windows Server 2012. As a teaser, the lineup includes KDC Proxy Server, the latest and greatest way to configure Kerberos Constrained Delegation-- "It really whips the lama's @#%." We'll take some exhaustive time explaining some Kerberos enhancements such as Kerberos Armoring and Compound Identity. We have tons more to share in the area of authentication, including Virtual Smartcard Readers and Picture Password logon.

    Advanced client security highlights features like Server Name Indication (SNI) for Windows Server 2012, Certificate Lifecycle Notification, Weak Key Protection (most of which is published in Jonathan Stephens' latest blog, RSA Key Blocking is Here!), Implicit binding, which is the infrastructure behind the new Centralized Certificate Store IIS feature, and Client certificate hints. Advanced client security also includes a wicked-cool security enhancement to PFX files and a new PKI module for Windows PowerShell.

    At some point in our publishing timeline, we'll launch into the saga of all sagas, Dynamic Access Control. We've hosted guest posts here on AskDS to introduce this radical, amazingly cool new way to perform file-based authorization. This isn't your grandfather's authorization either. Dynamic Access Control or DAC as we’ll call it, requires planning, diligence, and an understanding of many dependencies, such as Active Directory, Kerberos, and effective access. Did I mention there are many knobs you must turn to configure it? No worries though, we'll break DAC down into consumable morsels that should make it easy for everyone to understand.

    The concept of claims continues by showing you how to use Windows Server 2012's Active Directory Federation Services role to leverage claims issued by Windows domain controllers. Using AD FS, you can pass-through the Windows authorization claims or transform them into well-known SAML-based claim types.

    No, I'm not done yet. I'm going to introduce a well-hidden feature that hasn't received much exposure, but has been labeled "pretty cool" by many training attendees. Access Denied Assistance is a gem of a feature that is locked away within the File Server Resource Manager (FSRM). It enables you to provide a SharePoint-like experience for users in Windows Explorer when they hit an access denied or file-not-found error on a shared file or folder. Access Denied Assistance provides the user with a "Request Access" interface that sends an email to the share owner with details on the access requested and guidance the share owner can follow to remediate the problem. It's very slick.

    Wait there is more; this is just my list of topics to cover. Ned has a fun-bag full of Active Directory related material that he'll intermix with these topics to keep things fresh. I'm certain we'll sneak in a few extras that may not be directly related to Directory Services; however, they will help you make your Windows Server 2012 and Windows 8 experience much better. Need to run for now, this blog post just wrote checks my body can't cash.

    The line above and below this were intentionally left blank using Microsoft Word 2013 Preview Edition

    Mike "There's no earthly way of knowing; which direction they are going... There's no knowing where they're rowing..." Stephens


    MaxTokenSize and Windows 8 and Windows Server 2012

    Hello AskDS Populous, Mike here and I want to share with you some of the excellent enhancements we accomplished in Windows 8 and Windows Server 2012 around MaxTokenSize. Let’s review MaxTokenSize and its symptoms before we jump in to wonderful world of Windows 8 (say that three times fast).

    Wonderful World of Windows 8
    Wonderful World of Windows 8
    Wonderful World of Windows 8

    What is MaxTokenSize

    Kerberos has been the default and preferred authentication protocol since the release of Windows 2000 Server. Over the last few years, Microsoft has made some significant investments in providing extensions to the protocol. One of those extensions to Kerberos is the Privilege Attribute Certificate, or PAC (defined in the Windows Server protocol specification MS-PAC).

    Microsoft created the PAC to encapsulate authorization related information in a manner consistent with RFC4120. The authorization information included in the PAC includes security identifiers, user profile information such as Full name, home directory, and bad password count. Security identifiers (SIDs) included in the PAC represent the user's current SID and any instances of SID history and security group memberships to the extent of current domain groups, resource domain groups, and universal groups.

    Kerberos uses a buffer to store authorization information and reports this size to applications using Kerberos for authentication. MaxTokenSize is the size of the buffer used to store authorization information. This buffer size is important because some protocols such as RPC and HTTP use it when they allocate memory for authentication. If the authorization data for a user attempting to authenticate is larger than the MaxTokenSize, then the authentication fails for that connection using that protocol. This explains why authentication failures could occur when authenticating to IIS but not when authenticating to a folder shared on a file server. The default buffer size for Kerberos in Windows 7 and Windows Server 2008 R2 is 12k.

    Windows 8 and Windows Server 2012

    Let's face the facts of today's IT environment… authentication and authorization is not getting easier; it's becoming more complex. In the world of single sign-on and user claims, the amount of authorization data is increasing. Increasing authorization data in an infrastructure that has already had its experiences with authentication failures because a user was a member of too many groups justifies some concern for the future. Fortunately, Windows 8 and Windows Server 2012 have features to help us take proactive measures to avoid the problem.

    Default MaxTokenSize

    Windows 8 and Windows Server 2012 benefit from an increased default MaxTokenSize of 48k. Therefore, when HTTP relies on the MaxTokenSize value for memory allocation, it will allocate 48k of memory for the authentication buffer, which holds substantially more authorization information than in previous versions of Windows, where the default MaxTokenSize was only 12k.

    Group Policy settings

    Windows 8 and Windows Server 2012 introduce two new computer-based policy settings that help combat against large service tickets, which is the cause of the MaxTokenSize dilemma. The first of these policy settings is not exactly new-- it has been in Windows for years, but only as a registry value. Use the policy setting Set maximum Kerberos SSPI context token buffer size to change the MaxTokenSize using group policy. Looking closely at this policy setting in the Group Policy Management Editor, you'll notice the icon for this setting is slightly different from the others around it.


    This difference is attributed to the registry location the policy setting modifies when enabled or disabled. This registry setting is the actual MaxTokenSize registry key and value name that has been used in earlier versions of Windows:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters\MaxTokenSize

    Therefore, you can use this computer-based policy setting to manage Windows 8, Windows Server 2012, and earlier versions of Windows. The catch here is that this registry location is not a managed policy location. Managed policy locations are removed and reapplied during policy refreshes to avoid persistent settings in the registry after the settings in a Group Policy object become out of scope. That behavior does not occur with this key, as the setting applied by this policy setting is not removed during application. Therefore, the policy setting persists even if the Group Policy object providing the setting falls out of scope.
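    If you ever need to stamp the value outside of Group Policy (a one-off server, a lab machine), here is a minimal sketch of writing that same registry value directly; 49152 bytes is the 48k default used by Windows 8 and Windows Server 2012:

    # Hedged sketch: set MaxTokenSize directly in the registry (48k = 49152 bytes).
    # A restart is typically required before the new buffer size takes effect.
    $key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters'
    New-ItemProperty -Path $key -Name MaxTokenSize -PropertyType DWord -Value 49152 -Force | Out-Null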

    The second policy setting is very cool and answers the question that customers always asked when they encounter a problem with MaxTokenSize: "How big is the token?" You might be one of those people that went on the crusade of a lifetime using TOKENSZ.EXE and spent countless hours trying to determine the optimal MaxTokenSize for your environment. Those days are gone.

    A new KDC policy setting, Warning events for large Kerberos tickets, provides you with a way to monitor the size of Kerberos tickets issued by KDCs. When you enable this policy setting, you must then configure a ticket threshold size. The KDC uses the ticket threshold size to determine whether it should write a warning event to the system event log. If the KDC issues a ticket that exceeds the ticket threshold size, then it writes a warning. This policy setting, when enabled, defaults to 12k, which is the default MaxTokenSize of previous versions of Windows.


    Ideally, if you use this policy setting, you'd want to set the ticket threshold value to approximately 1k less than your current MaxTokenSize. You want it lower than your current MaxTokenSize (unless you are using 12k, which is the minimum value) so you can use the warning events as a proactive measure to avoid an authentication failure due to an incorrectly sized buffer. Set the threshold too low and it will just train you to ignore the Event 31 warnings because they'll become noise in the event log. Set it too high and you're likely to be blindsided by authentication failures rather than warning events.


    Earlier I said that this policy setting solves your problems with fumbling with TOKENSZ and other utilities to determine MaxTokenSize-- here's how. If you examine the details of the Kerberos-Key-Distribution-Center Warning event ID 31, you'll notice that it gives you all the information you need to determine the optimal MaxTokenSize in your environment. In the following example, the user Ned is a member of over 1000 groups (he's very popular and a big deal on the Internet). When I attempted to log on as Ned using the RUNAS command, I generated an Event ID 31. The event description provides you with the service principal name, the user principal name, the size of the ticket requested, and the size of the threshold. This enables you to aggregate all the event 31s and identify the maximum ticket size requested. Armed with this information, you can set the optimal MaxTokenSize for your environment.

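    If you want to aggregate those warnings from a KDC, here is a minimal sketch; the provider-name filter is an assumption on my part, so fall back to filtering on the event ID alone if it does not match in your environment:

    # Hedged sketch: collect the Event ID 31 warnings so you can find the largest requested ticket
    Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 31 } |
        Where-Object { $_.ProviderName -like '*Kerberos-Key-Distribution-Center*' } |
        Select-Object TimeCreated, Message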

    KDC Resource SID Compression

    Kerberos authentication inserts the security identifiers (SIDs) of the security principal, SID history, and all the groups of which the user is a member, including universal groups and groups from the resource domain. Security principals with many group memberships greatly affect the size of the authentication data. Sometimes the authorization data is larger than the allocated size reported by Kerberos to applications, which can cause authentication failures in some applications. Because SIDs from the resource domain share the same domain portion of the SID, these SIDs can be compressed by providing the resource domain SID only once for all SIDs in the resource domain.

    Windows Server 2012 KDCs help reduce the size of the PAC by taking advantage of resource SID compression. By default, a Windows Server 2012 KDC will always compress resource SIDs. To compress resource SIDs, the KDC stores the SID of the resource domain of which the target resource is a member. Then, it inserts only the RID portion of each resource SID into the ResourceGroupIds portion of the authentication data.

    Resource SID Compression reduces the size of each stored instance of a resource SID because the domain SID is stored once rather than with each instance. Without resource SID Compression, the KDC inserts all the SIDs added by the resource domain in the Extra-SID portion of the PAC structure, which is a list of SIDs.  [MS-KILE]

    Interoperability

    Other Kerberos implementations may not understand resource group compression and therefore are not compatible. In these scenarios, you may need to disable resource group compression to allow the Windows Server 2012 KDC to interoperate with the third-party Kerberos implementation.

    Resource SID compression is on by default; however, you can disable it. You disable resource SID compression on a Windows Server 2012 KDC using the DisableResourceGroupsFields registry value under the HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kdc\Parameters registry key. This registry value has a DWORD registry value type. You completely disable resource SID compression when you set the registry value to 1. The KDC reads this configuration when building a service ticket. With the value set to 1, the KDC does not use resource SID compression when building the service ticket.
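    For reference, a minimal sketch of setting that value; note the Kdc\Parameters key may not exist until you create it:

    # Hedged sketch: disable resource SID compression on a Windows Server 2012 KDC
    $key = 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kdc\Parameters'
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    New-ItemProperty -Path $key -Name DisableResourceGroupsFields -PropertyType DWord -Value 1 -Force | Out-Null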

    Wrap up

    There's the skinny on the Kerberos enhancements included in Windows 8 and Windows Server 2012 that specifically target large service ticket and MaxTokenSize scenarios. To summarize:

    · Increased default MaxTokenSize from 12k to 48k

    · New Group Policy setting to centrally manage MaxTokenSize

    · New Group Policy setting to write warnings to the system event log when a service ticket exceeds a designated threshold

    · New Resource SID compression to reduce the storage size of SIDs from the resource domain

    Keep an eye out for more Windows 8 and Kerberos needful.

    Mike "~Mike" Stephens


    Monthly Mail Sack: I Hope Your Data Plan is Paid Up Edition

    Hi all, Ned here again with that thing we call love. Blog! I mean blog. I have a ton to talk about now that I have moved to the monthly format, and I recommend you switch to WIFI if you’re on your phone.

    This round I answer your questions on:

    I will bury you!


    With screenshots!

    Question

    Is there a way to associate a “new” domain controller with an “existing” domain controller account in Active Directory? I.e. if I have a DC that is dead and has to be replaced, I have to metadata clean the old DC out before I promote a replacement DC with the same name.

    Answer

    You can “reinstall” DCs, attaching to an existing DC account that was not removed by demotion/metadata cleanup. In Windows Server 2012 this is detected and handled by the AD DS configuration wizard right after you choose a replica DC and get to the DC Options page, or with the Install-ADDSDomainController cmdlet using the -AllowDomainControllerReinstall argument.
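    For example, a minimal sketch of the cmdlet route - the domain name and credential here are illustrative, and the cmdlet will still prompt you for anything it needs (such as the DSRM password):

    # Hedged sketch: promote a replacement DC onto the existing, un-cleaned DC account
    Install-ADDSDomainController -DomainName 'contoso.com' `
                                 -AllowDomainControllerReinstall `
                                 -InstallDns `
                                 -Credential (Get-Credential CONTOSO\Administrator)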


    Neato

    If you are using an older operating system, no such luck (this option actually existed in dcpromo.exe /unattend in 2008 R2, but didn't work AFAIK). You should use DSA.MSC or NTDSUTIL to metadata-clean that old domain controller before promoting its replacement.

    Question

    I’ve read in the past – from you - that DFSR using SYSVOL supports the change notification flag on AD DS replication links or connection objects. Is this true? I am finding very inconsistent behavior.

    Answer

    Not really (and I updated my old writing on this– yes, Ned can be wrong).

    DFSR always replicates immediately and continuously with its own internal change notification, as long as the schedule is open; these scheduled windows are in 15 minute blocks and are assigned on the AD DS connection objects.

    If the current time matches an open block, you replicate continuously (as fast as possible, sending DFSR change notifications) until that block closes.

    If the next block is closed, you wait for 15 minutes, sending no updates at all. If that next block had also been open, you continue replicating at max speed. Therefore, to replicate with change notification, set the connection objects to use a fully opened window. For example:


    To make DFSR SYSVOL slower, you must close the replication schedule windows on the connections. But since the historical scenario is a desire to make group policy/script replication faster - and since it is better that SYSVOL beat AD DS, because SYSVOL contains the files that get called once AD DS is updated - this scenario is less likely or important. Not to mention that, ideally, SYSVOL is pretty static.

    Question

    I was using the new graphical Fine Grained Password Policy in Windows Server 2012 AD Administrative Center. I realized that it lets me set a minimum password length of 255 characters.


    When I edit group policy in GPMC, it doesn’t let me set a minimum of more than 14 characters!


    Did I find a bug?

    Answer

    Nope. The original reason for the 14 character limit was to force users to set a 15 character password and force the removal of LM password hashes (which is sort of silly at this point, as we have a security setting called Do not store LAN Manager hash value on next password change that makes this moot and is enabled by default in our later operating systems). The security policy editor enforces the 14 character limit, but this is not the actual limit. You can use ADSIEDIT to change it, for example, and that will work.
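    A minimal sketch of the equivalent change from Windows PowerShell - this writes the minPwdLength attribute on the domain head, the same thing you would touch with ADSIEDIT:

    # Hedged sketch: raise the domain minimum password length past the 14-character UI limit
    Import-Module ActiveDirectory
    Set-ADDefaultDomainPasswordPolicy -Identity (Get-ADDomain).DistinguishedName -MinPasswordLength 20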

    The true maximum limit in Active Directory for your password is 255 Unicode characters and that’s what ADAC is enforcing. But many pieces of Windows software limit you to 127 character passwords, or even fewer; for example, the NET USE command: if you set a password to 254 characters and then attempt to map a drive with NET USE, it ignores the characters beyond 127 and you always receive “unknown user name or bad password.” So be careful here.

    It goes without saying that if you are requiring a minimum password length of even 25 characters, you are kind of a jerk :-D. Time for smartcard logons, dudes and dudettes; there is no way your users are going to remember passwords that long and it will be on Post-It notes all over their cubicles.

    Totally unrelated note: the second password shown here is exactly 127 characters:


    Awesome

    Question

    I am using USMT 4.0 and running scanstate on a computer with multiple fixed hard drives, like C:, D:, E:. I want to migrate to new Windows 7 machines that only have a C: drive. Do I need to create a custom XML file?

    Answer

    I could have sworn I wrote something up on this before but darned if I can find it. The short answer is – use migdocs.xml and it will all magically work. The long answer and demonstration of behavior is:

    1. I have a computer with C: and D: fixed drives (OS is unimportant, USMT 4.0 or later).

    2. On the C: drive I have two custom folders, each with a custom file.


    3. On the D: drive I have two custom folders, each with a custom file.


    4. One of the folders is named the same on both drives, with a file that is named the same in that folder, but contains different contents.



    5. Then you scanstate with no hardlinks (e.g. scanstate c:\store /i:migdocs.xml /c /o)

    6. Then you go to a machine with only a C: drive (in my repro I was lazy and just deleted my D: drive) and copy the store over.

    7. Run loadstate (e.g. loadstate c:\store /i:migdocs.xml /c)

    8. Note how the folders on D: are migrated into C:, merging the folders and creating renamed copies of files when there are duplications:




    Question

    Where does Active Directory get computer specific information like Operating System, Service Pack level, etc., for computer accounts that are joined to the domain? I'm guessing WMI but I'm also wondering how often it checks.

    Answer

    AD gets it from attributes (for example).

    AD relies on the individual Windows computers to take care of it – such as when joining the domain, being upgraded, being service packed, or after a reboot. Nothing in AD confirms or maintains it outside those “client” processes, so if I change my OS version info using ADSIEDIT, that's the OS as far as AD is concerned and it's not going to change back unless the Windows computer makes it happen. Which it will!

    Here I change a Win2008 R2 server to use nomenclature similar to our Linux and Apple competitors:

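    A minimal sketch of making the same mischief with the AD cmdlets, where SRV01 and the OS string are purely illustrative:

    # Hedged sketch: read the OS attributes on a computer account, then overwrite one of them
    Import-Module ActiveDirectory
    Get-ADComputer SRV01 -Properties operatingSystem, operatingSystemVersion, operatingSystemServicePack
    Set-ADComputer SRV01 -Replace @{ operatingSystem = 'NedOS 1.0' }   # the computer itself will fix this after a reboot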

    And here it is after I reboot that computer:


    That would be a good band name, now that I think about it.

    Question

    I’d like to add a DFSR file replication filter but I have hundreds of RFs and don’t want to click around Dfsmgmt.msc for days. Is there a way to set this globally for entire replication groups?

    Answer

    Not per se; DFSR file filters are set on each replicated folder in Active Directory.

    But setting it via a Windows PowerShell loop is not hard. For example, in Win2008 R2, where I imported the activedirectory module - here I am (destructively!) setting a filter to match the defaults plus add a new extension on all RFs in this domain:

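    A minimal sketch of that kind of loop - the filter string below is the default list plus a made-up *.nco extension, and remember it overwrites whatever filter was there before:

    # Hedged sketch: stamp a file filter on every DFSR replicated folder (msDFSR-ContentSet) in the domain
    Import-Module ActiveDirectory
    Get-ADObject -LDAPFilter '(objectClass=msDFSR-ContentSet)' -SearchBase (Get-ADDomain).DistinguishedName |
        Set-ADObject -Replace @{ 'msDFSR-FileFilter' = '~*, *.bak, *.tmp, *.nco' }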

    Question

    Is there a way to export and import the DFS Replication configuration the way we do for DFSN? It seems like no but I want to make sure I am not missing anything.

    Answer

    DFSRADMIN LIST shows the configuration and there are a couple of export/import commands for scheduling. But overall this is going to be a semi-manual process for you unless you write your own tool or scripts. Ultimately, it’s all just LDAP data, after all – this is how frs2dfsr.exe works.

    Once you list and inventory everything, the DFSRADMIN BULK command is useful to recreate things accurately.

    Question

    Does USMT migrate Internet Explorer Autocomplete Settings?


    Answer

    I really should make you figure this out for yourself… but I am feeling pleasant today. These settings are all here:


    Hint hint – Process Monitor is always your friend with custom USMT coding

    Looking at the USMT 5.0 replacement manifest:

    • MICROSOFT-WINDOWS-IE-INTERNETEXPLORER-REPL.MAN (from Windows 8)

    I see that we do get the \Internet Explorer\ and all sub-data (including Main and DomainSuggestion) for those specific registry values with no exclusions. We also get the Explorer\Autocomplete in that same manifest, likewise without exclusion.

    • MICROSOFT-WINDOWS-IE-INTERNETEXPLORER-DL.MAN (from XP)

    Ditto. We grab all this as well.

    Question

    I have read that Windows Server 2008 R2 has the following documented and supported DFSR limits:

    The following list provides a set of scalability guidelines that have been tested by Microsoft on Windows Server 2008 R2 and Windows Server 2008:

    • Size of all replicated files on a server: 10 terabytes.
    • Number of replicated files on a volume: 8 million.
    • Maximum file size: 64 gigabytes.

    Source: http://technet.microsoft.com/en-us/library/f9b98a0f-c1ae-4a9f-9724-80c679596e6b(v=ws.10)#BKMK_00

    What happens if I exceed these limits? Should I ever consider exceeding these limits? I want to use much more than these limits!

    (Asked by half a zillion customers in the past few weeks)

    Answer

    With more than 10TB or 8 million files, support will only be best effort (i.e. you can open a support case and we will attempt to assist, but we may reach a point where we have to say “this configuration is not supported” and we cannot assist further). If you need us to fully support more data end-to-end, you need a solution different from Win2008 R2 DFSR.

    To exceed the 10TB limit – which again, is not supported nor recommended – seriously consider:

    1. High reliability fabric to high reliability storage – i.e. do not use iSCSI. Do not use cheap disk arrays. Dedicated fiber or similar networks only, with redundant paths, to a properly redundant storage array that costs a poop-load of money.
    2. Store no more than 2TB per volume – there is one DFSR database per volume, which means if there is a dirty shutdown, recovery affects all replicated data on that volume. 1TB max would be better.
    3. Latest DFSR hotfixes at all times – http://support.microsoft.com/kb/968429. This especially includes using http://support.microsoft.com/kb/2663685, combined with read-only replication when possible.

    Actually, just read Warren’s common DFSR mistakes post 10 times. Then read it 10 more times.

    Hmm… I recommend all these even when under 10TB…

    Other stuff

    RSAT for Windows 8 RTM is… RTM. Grab it here.

    I mentioned mall hair in last month’s mail sack. When that sort of thing happens in MS Support, colleagues provide helpful references:


    I hate you, Justin

    Speaking of the ridiculous group I work with, this is what you get when Steve Taylor wants to boost team morale on a Friday:


    Couldn’t they just have the bass player record one looped note?

    Canada, what the heck happened?!


    Still going…


    I mean… Norway? NORWAY IN THE SUMMER GAMES? They eat pickled herring and go sledding in June! I’ll grant that if you switch to medal count, you’re a respectable 13th. Good work, America’s Hat.

    In other news bound to depress canucks, the NHL is about to close up shop yet again. Check out this hilarious article courtesy of Mark.

     

    Finally

    I am heading out to Redmond next week to teach a couple days of Certified DS Master, then on to San Francisco and Sydney to vacate and yammer even more. I’ll be back in a few weeks; Jonathan will answer your questions in the meantime and I think Mike has posts aplenty to share. When I return – and maybe before – I will have some interesting news to share.

    See you in a few weeks.

    - Ned “don’t make me take off my shoe” Pyle


    Windows Server 2012 Shell game

    Here's the scenario: you just downloaded the RTM ISO for Windows Server 2012 using your handy, dandy, "wondermus" Microsoft TechNet subscription. Using Hyper-V, you create a new virtual machine, mount the ISO, and breeze through the setup screens until you are mesmerized by the Newton's cradle-like experience of the circular progress indicator.


    Click…click…click…click-- installation complete; the computer reboots.

    You provide Windows Server with a new administrator password. Bam: done! Windows Server 2012 presents the credential provider screen and you logon using the newly created administrator account, and then…

    Holy Shell, Batman! I don't have a desktop!


    Hey everyone, Mike here again to bestow some Windows Server 2012 lovin'. The previously described scenario is not hypothetical-- many have experienced it when they installed the pre-release versions of Windows Server 2012. And it is likely to resurface as we move past Windows Server 2012 general availability on September 4. If you are new to Windows Server 2012, then you're likely one of those people staring at a command prompt window on your fresh installation. The reason you are staring at a command prompt is that Windows Server 2012's installation defaults to Server Core and, in your haste to try out our latest bits, you breezed right past the option to change it.

    This may be old news for some of you, but it is likely that one or more of your colleagues is going to perform the very actions that I describe here. This is actually a fortunate circumstance as it enables me to introduce a new Windows Server 2012 feature.


    There were two server installation types prior to Windows Server 2012: full and core. Core servers provide a low attack surface by removing the Windows Shell and Internet Explorer completely. However, this presented quite a challenge for many Windows administrators, as Windows PowerShell and command line utilities were the only methods available to manage the server and its roles locally (you could use most management consoles remotely).

    Those same two server installation types return in Windows Server 2012; however, we have added a third installation type: Minimal Server Interface. Minimal Server Interface enables most local graphical user interface management tasks without requiring you to install the server's user interface or Internet Explorer. Minimal Server Interface is a full installation of Windows that excludes:

    • Internet Explorer
    • The Desktop
    • Windows Explorer
    • Windows 8-style application support
    • Multimedia support
    • Desktop Experience

    Minimal Server Interface gives Windows administrators - who are not comfortable using Windows PowerShell as their only option - the benefit of a reduced attack surface and fewer reboot requirements (e.g., on Patch Tuesday), yet retains GUI management while they ramp up their Windows PowerShell skills.


    "Okay, Minimal Server Interface seems cool Mike, but I'm stuck at the command prompt and I want graphical tools. Now what?" If you were running an earlier version of Windows Server, my answer would be reinstall. However, you're running Windows Server 2012; therefore, my answer is "Install the Server Graphical Shell or Install Minimal Server Interface."

    Windows Server 2012 enables you to change the shell installation option after you've completed the installation. This solves the problem if you are staring at a command prompt. However, it also solves the problem if you want to keep your attack surface low, but are simply a Windows PowerShell guru in waiting. You can choose Minimal Server Interface, or you can decide to add the Server Graphical Shell for a specific task and then remove it when you have completed that management task (understand, however, that switching between shell options requires you to restart the server).

    Another scenario solved by the ability to add the Server Graphical Shell is that not all server-based applications work correctly on Server Core, or you cannot manage them on Server Core. Windows Server 2012 enables you to try the application on Minimal Server Interface and, if that does not work, you can change the server installation to include the Graphical Shell, which is the equivalent of the Server GUI installation option during setup (the one you breezed by during the initial setup).

    Removing the Server Graphical Shell and Graphical Management Tools and Infrastructure

    Removing the Server shell from a GUI installation of Windows is amazingly easy. Start Server Manager, click Manage, and click Remove Roles and Features. Select the target server and then click Features. Expand User Interfaces and Infrastructure.

    To reduce a Windows Server 2012 GUI installation to a Minimal Server Interface installation, clear the Server Graphical Shell checkbox and complete the wizard. To reduce a Windows Server GUI installation to a Server Core installation, clear the Server Graphical Shell and Graphical Management Tools and Infrastructure check boxes and complete the wizard.


    Alternatively, you can perform these same actions using the Server Manager module for Windows PowerShell, and it is probably a good idea to learn how to do this. I'll give you two reasons why: It's wicked fast to install and remove features and roles using Windows PowerShell and you need to learn it in order to add the Server Shell on a Windows Core or Minimal Server Interface installation.

    Use the following command to view a list of the Server GUI components:


    Get-WindowsFeature server-gui*

    Give your attention to the Name column. You use this value with the Remove-WindowsFeature and Install-WindowsFeature PowerShell cmdlets.

    To remove the server graphical shell, which reduces the GUI server installation to a Minimal Server Interface installation, run:

    Remove-WindowsFeature Server-Gui-Shell

    To remove the Graphical Management Tools and Infrastructure, which further reduces a Minimal Server Interface installation to a Server Core installation, run:

    Remove-WindowsFeature Server-Gui-Mgmt-Infra

    To remove the Graphical Management Tools and Infrastructure and the Server Graphical Shell, run:

    Remove-WindowsFeature Server-Gui-Shell,Server-Gui-Mgmt-Infra

    Adding Server Graphical Shell and Graphical Management Tools and Infrastructure

    Adding Server Shell components to a Windows Server 2012 Core installation is a tad more involved than removing them. The first thing to understand with a Server Core installation is that the actual binaries for Server Shell do not reside on the computer. This is how a Server Core installation achieves a smaller footprint. You can determine if the binaries are present by using the Get-WindowsFeature Windows PowerShell cmdlet and viewing the Install State column. The Removed value indicates the binaries that represent the feature do not reside on the hard drive. Therefore, you need to add the binaries to the installation before you can install them. Another indicator that the binaries do not exist in the installation is the error you receive when you try to install a feature that is removed. The Install-WindowsFeature cmdlet will proceed along as if it is working and then spend a lot of time around 63-68 percent before returning an error stating that it could not add the feature.
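    For example, a quick way to check the state of the shell binaries from the Server Core prompt (the Install State column mentioned above is exposed as the InstallState property) is something like:

    # Check whether the GUI shell binaries are Installed, Available, or Removed
    Get-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra | Format-Table Name, InstallState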


    To stage Server Shell features to a Windows Core Installation

    You need to get out your handy, dandy media (or ISO) to stage the binaries into the installation. Windows installation files are stored in WIM files that are located in the \sources folder of your media. There are two .WIM files on the media. The WIM you want to use for this process is INSTALL.WIM.


    You use DISM.EXE to display the installation images and their indexes that are included in the WIM file. There are four images in the INSTALL.WIM file. Images with the index of 1 and 3 are Server Core installation images for Standard and Datacenter, respectively. Images with the indexes 2 and 4 are GUI installations of Standard and Datacenter, respectively. Two of these images contain the GUI binaries and two do not. To stage these binaries to the current installation, you need to use index 2 or 4 because these images contain the Server GUI binaries. An attempt to stage the binaries using index 1 or 3 will fail.
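    For reference, the DISM command to list those images and their indexes looks like this (assuming the mounted media is drive D:):

    # List the images (and their index numbers) contained in install.wim
    Dism /Get-WimInfo /WimFile:D:\sources\install.wim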

    You still use the Install-WindowsFeature cmdlet to stage the binaries to the computer; however, we are going to use the -source argument to tell Install-WindowsFeature which image and index it should use to stage the Server Shell binaries. To do this, we use a special path syntax that indicates the binaries reside in a WIM file. The Windows PowerShell command should look like this:

    Install-WindowsFeature server-gui-mgmt-infra,server-gui-shell -source:wim:d:\sources\install.wim:4

    Pay particular attention to the path supplied to the -source argument. You need to prefix the path to your installation media's install.wim file with the keyword wim: and suffix the path with :4, which represents the image index to use for the installation. You must always use an index of 2 or 4 to install the Server Shell components. The command should exhibit the same behavior as the previous one and proceed up to about 68 percent, at which point it will stay at 68 percent for quite a bit (if it is working). Typically, if there is a problem with the syntax or the command, it will error within two minutes of spinning at 68 percent. This process stages all the graphical user interface binaries that were not installed during the initial setup, so give it a bit of time. When the command completes successfully, it should instruct you to restart the server. You can do this using Windows PowerShell by typing the Restart-Computer cmdlet.


    Give the next reboot more time. It is actually updating the current Windows installation, making all the other components aware the GUI is available. The server should reboot and inform you that it is configuring Windows features and is likely to spend some time at 15 percent. Be patient and give it time to complete. Windows should reach about 30 percent and then will restart.


    It should return to the Configuring Windows features screen with the progress around 45 to 50 percent (these are estimates). The process should continue until 100 percent and then show you the Press Ctrl+Alt+Delete to sign in screen.


    Done

    That's it. Consider yourself informed. The next time one of your colleagues gazes at their accidental Windows Server 2012 Server Core installation with that deer-in-the-headlights look, you can whip out your mad Windows PowerShell skills and turn that Server Core installation into a Minimal Server Interface or Server GUI installation in no time.

    Mike

    "Voilà! In view, a humble vaudevillian veteran, cast vicariously as both victim and villain by the vicissitudes of Fate. This visage, no mere veneer of vanity, is a vestige of the vox populi, now vacant, vanished. However, this valorous visitation of a by-gone vexation, stands vivified and has vowed to vanquish these venal and virulent vermin van-guarding vice and vouchsafing the violently vicious and voracious violation of volition. The only verdict is vengeance; a vendetta, held as a votive, not in vain, for the value and veracity of such shall one day vindicate the vigilant and the virtuous. Verily, this vichyssoise of verbiage veers most verbose, so let me simply add that it's my very good honor to meet you and you may call me V."

    Stephens


    AD FS 2.0 RelayState

    Hi guys, Joji Oshima here again with some great news! AD FS 2.0 Rollup 2 adds the capability to send RelayState when using IDP initiated sign on. I imagine some people are ecstatic to hear this while others are asking “What is this and why should I care?”

    What is RelayState and why should I care?

    There are two protocol standards for federation (SAML and WS-Federation). RelayState is a parameter of the SAML protocol that is used to identify the specific resource the user will access after they are signed in and directed to the relying party’s federation server.
    Note:

    If the relying party is the application itself, you can use the loginToRp parameter instead.
    Example:
    https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx?loginToRp=rpidentifier

    Without the use of any parameters, a user would need to go to the IDP initiated sign on page, log in to the server, choose the relying party, and then be directed to the application. Using RelayState can automate this process by generating a single URL for the user to click and be logged in to the target application without any intervention. It should be noted that when using RelayState, any parameters outside of it will be dropped.

    When can I use RelayState?

    We can pass RelayState when working with a relying party that has a SAML endpoint. It does not work when the direct relying party is using WS-Federation.

    The following IDP initiated flows are supported when using Rollup 2 for AD FS 2.0:

    • Identity provider security token server (STS) -> relying party STS (configured as a SAML-P endpoint) -> SAML relying party App
    • Identity provider STS -> relying party STS (configured as a SAML-P endpoint) -> WIF (WS-Fed) relying party App
    • Identity provider STS -> SAML relying party App

    The following initiated flow is not supported:

    • Identity provider STS -> WIF (WS-Fed) relying party App

    Manually Generating the RelayState URL

    There are two pieces of information you need to generate the RelayState URL. The first is the relying party’s identifier. This can be found in the AD FS 2.0 Management Console. View the Identifiers tab on the relying party’s property page.


    The second part is the actual RelayState value that you wish to send to the Relying Party. It could be the identifier of the application, but the administrator for the Relying Party should have this information. In this example, we will use the Relying Party identifier of https://sso.adatum.com and the RelayState of https://webapp.adatum.com

    Starting values:
    RPID: https://sso.adatum.com
    RelayState: https://webapp.adatum.com

    Step 1: The first step is to URL Encode each value.

    RPID: https%3a%2f%2fsso.adatum.com
    RelayState: https%3a%2f%2fwebapp.adatum.com

    Step 2: The second step is to take these URL encoded values, merge them with the string below, and URL encode the resulting string.

    String:
    RPID=<URL encoded RPID>&RelayState=<URL encoded RelayState>

    String with values:
    RPID=https%3a%2f%2fsso.adatum.com&RelayState=https%3a%2f%2fwebapp.adatum.com

    URL Encoded string:
    RPID%3dhttps%253a%252f%252fsso.adatum.com%26RelayState%3dhttps%253a%252f%252fwebapp.adatum.com

    Step 3: The third step is to take the URL Encoded string and add it to the end of the string below.

    String:
    ?RelayState=

    String with value:
    ?RelayState=RPID%3dhttps%253a%252f%252fsso.adatum.com%26RelayState%3dhttps%253a%252f%252fwebapp.adatum.com

    Step 4: The final step is to take the final string and append it to the IDP initiated sign on URL.

    IDP initiated sign on URL:
    https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx

    Final URL:
    https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx?RelayState=RPID%3dhttps%253a%252f%252fsso.adatum.com%26RelayState%3dhttps%253a%252f%252fwebapp.adatum.com

    The result is an IDP initiated sign on URL that tells AD FS which relying party STS the login is for, and also gives that relying party information that it can use to direct the user to the correct application.
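    If you'd rather script steps 1 through 4 than do the encoding by hand, here is a minimal Windows PowerShell sketch (not part of the rollup itself; it simply reproduces the steps above using the example values):

    $rpId       = 'https://sso.adatum.com'
    $relayState = 'https://webapp.adatum.com'
    $signOnUrl  = 'https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx'

    # Steps 1 and 2: URL encode each value, build the RPID/RelayState string, then URL encode the whole thing
    # Note: .NET encodes hex digits in uppercase (%3A rather than %3a); both forms are equivalent
    $inner   = 'RPID=' + [uri]::EscapeDataString($rpId) + '&RelayState=' + [uri]::EscapeDataString($relayState)
    $encoded = [uri]::EscapeDataString($inner)

    # Steps 3 and 4: prepend ?RelayState= and append the result to the IDP initiated sign on URL
    $finalUrl = $signOnUrl + '?RelayState=' + $encoded
    $finalUrl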


    Is there an easier way?

    The multi-step process and manual manipulation of the strings are prone to human error, which can cause confusion and frustration. Using a simple HTML file, we can enter the starting information into a form and click the Generate URL button.


    The code sample for this HTML file has been posted to CodePlex.

    Conclusion and Links

    I hope this post has helped demystify RelayState and will have everyone up and running quickly.

    AD FS 2.0 RelayState Generator
    http://social.technet.microsoft.com/wiki/contents/articles/13172.ad-fs-2-0-relaystate-generator.aspx
    HTML Download
    https://adfsrelaystate.codeplex.com/

    AD FS 2.0 Rollup 2
    http://support.microsoft.com/kb/2681584

    Supporting Identity Provider Initiated RelayState
    http://technet.microsoft.com/en-us/library/jj127245(WS.10).aspx

    Joji "Halt! Who goes there!" Oshima


    So long and thanks for all the fish

    My time is up.

    It’s been eight years since a friend suggested I join him on a contract at Microsoft Support (thanks Pete). Eight years since I sat sweating in an interview with Steve Taylor, trying desperately to recall the KDC’s listening port (his hint: “German anti-tank gun”). Eight years since I joined 35 new colleagues in a training room and found that despite my opinion, I knew nothing about Active Directory (“Replication of Absent Linked Object References– what the hell have I gotten myself into?”).

    Eight years later, I’m a Senior Support Escalation Engineer, a blogger of some repute, and a seasoned world traveler who instructs other ‘softies about Windows releases. I’ve created thousands of pages of content and been involved in countless support cases and customer conversations. I am the last of those 35 colleagues still here, but there is proof of my existence even so. It’s been the most satisfying work of my career.

    Just the thought of leaving was scary enough to give me pause – it’s been so long since I knew anything but supporting Windows. It’s a once in a lifetime opportunity though and sometimes you need to reset your career. Now I’ll help create the next generations of Windows Server and the buck will finally stop with me: I’ve been hired as a Program Manager and am on my way to Seattle next week. I’m not leaving Microsoft, just starting a new phase. A phase with a lot more product development, design responsibility, and… meetings. Soooo many meetings.

    There are two types of folks I am going to miss: the first are workmates. Many are support engineers, but also PFEs, Consultants, and TAMs. Even foreigners! Interesting and funny people fill Premier and Commercial Technical Support and make every day here enjoyable, even after the occasional customer assault. There’s nothing like a work environment where you really like your colleagues. I’ve sat next to Dave Fisher since 2004 and he’s made me laugh every single day. He is a brilliant weirdo, like so many other great people here. You all know who you are.

    The other folks are… you. Your comments stayed thought provoking and fresh for five years and 700 posts. Your emails kept me knee deep in mail sacks and articles (I had to learn in order to answer many of them). Your readership has made AskDS into one of the most popular blogs in Microsoft. You unknowingly played an immense part in my career, forcing me to improve my communication; there’s nothing like a few hundred thousand readers to make you learn your craft.

    My time as the so-called “editor in chief” of AskDS is over, but I imagine you will still find me on the Internet in my new role, yammering about things that I think you’ll find interesting. I also have a few posts in the chamber that Jonathan or Mike will unload after I’m gone, and they will keep the site going. AskDS will continue to be a place for unvarnished support information about Windows technologies, where your questions will get answers.

    Thanks for everything, and see you again soon.


    We are looking forward to Seattle’s famous mud puddles

     

    - Ned “42” Pyle


    Digging a little deeper into Windows 8 Primary Computer

    [This is a ghost of Ned past article – Editor]

    Hi folks, Ned here again to talk more about the Primary Computer feature introduced in Windows 8. Sharp-eyed readers may have noticed this lonely beta blog post, and if you just want a step-by-step guide to enabling this feature, TechNet does it best. Today I am going to fill in some blanks and make sure the feature's architecture and usefulness is clear. At least, I'm going to try.

    Onward!

    Backgrounder and Requirements

    Businesses using Roaming User Profiles, Offline Files and Folder Redirection have historically been limited in controlling which computers cache user data. For instance, while there are group policies to assign roaming profiles on a per-computer basis, they affect all users of that computer and are useless if you assign roaming profiles through the legacy user attributes.

    Windows 8 introduces a pair of new per-user AD DS attributes to specify a "primary computer." The primary computer is the one directly assigned to a user - such as their laptop, or a desktop in their cubicle - and therefore unlikely to change frequently. We refer to this as "User-Device Affinity". That computer will allow them to store roaming user data or access redirected folder data, as well as allow caching of redirected data through offline files. There are three main benefits to using Primary Computer:

    1. When a user is at a kiosk, using a conference room PC, or connecting to the network from a home computer, there is no risk that confidential user data will cache locally and be accessible offline. This adds a measure of security.
    2. Unlike previous operating systems, an administrator now has the ability to control computers that will not cache data, regardless of the user's AD DS profile configuration settings.
    3. The initial download of a profile has a noticeable impact on logon performance; a brand new Windows 8 user profile is ~68MB in size, and that's before it's filled with "Winter is coming" meme pics. Since roaming profile and folder redirection data is no longer synchronously cached on non-primary computers during logon, a user connecting from a temporary or home machine logs on considerably faster.

    By assigning computer(s) to a user then applying some group policies, you ensure data only roams or caches where you want it.


    Yoink, stolen screenshot from a much better artist

    Primary Computer has the following requirements:

    • Windows 8 or Windows Server 2012 computers used for interactive logon
    • Windows Server 2012 AD DS Schema (but not necessarily Win2012 DCs)
    • Group Policy managed from Windows 8 or Windows Server 2012 GPMC
    • Some mechanism to determine each user's primary computer(s)

    Determining Primary Computers

    There is no attribute in Active Directory that tracks which computers a user logs on to, much less the computers they log on to the most frequently. There are a number of out of band options to determine computer usage though:

    • System Center Configuration Manager - SCCM has built in functionality to determine the primary users of computers, as part of its "Asset Intelligence" reporting. You can read more about this feature in SCCM 2012 and 2007 R2. This is the recommended method as it's the most comprehensive and because I like money.
    • Collecting 4624 events - the Security event log Logon Event 4624 with a Logon Type 2 delineates where a user logged on interactively. By collecting these events using some type of audit collection service or event forwarding, you can build up a picture of which users are logging on to which computers repeatedly (a rough Windows PowerShell sketch appears after this list).


    • Logon Script – If you're the fancy type, you can create a logon script that writes a user's computer to a centralized location, such as on their own AD object. If you grant inherited access for SELF to update (for instance) the Comment attribute on all the user objects, each user could use that attribute as storage. Then you can collect the results for a few weeks and create a list of computer usage by user.

      For example, this rather hokey illustration VBS runs as a logon script and updates a user's own Comment attribute with their computer's distinguished name, only if it has changed from the previous value:

      ' Get the current user and computer from ADSystemInfo
      Set objSysInfo = CreateObject("ADSystemInfo")
      Set objUser = GetObject("LDAP://" & objSysInfo.UserName)
      Set objComputer = GetObject("LDAP://" & objSysInfo.ComputerName)

      ' If the Comment attribute already holds this computer's DN, do nothing
      strMessage = objComputer.distinguishedName
      If objUser.Comment = strMessage Then wscript.quit

      ' Otherwise record the computer's DN in the user's Comment attribute
      objUser.Comment = strMessage
      objUser.SetInfo

        

    A user may have more than one computer they log on to regularly, though, and if that's the case, an AD attribute-based storage solution is probably not the right answer unless the script builds a circular list with a restricted number of entries and logic to ensure it does not update with redundant data. Otherwise, there could be excessive AD replication. Remember, this is just a simple example to get the creative juices flowing.

    • PsLoggedOn - you can script and run PsLoggedOn.exe (a Windows Sysinternals tool) periodically during the day for all computers over the course of several weeks. That would build, over time, a list of which users frequent which computers. This requires remote registry access through the Windows Firewall.
    • Third parties - there are SCCM/SCOM-like vendors providing this functionality. I don't have details but I'm sure they have a salesman who wants a new German sports sedan and will be happy to bend your ear.
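    Here is the rough Windows PowerShell sketch promised in the 4624 bullet above. It assumes you run it against a Security log you can read (locally or via a collected/forwarded events log), and the property indexes used (8 for LogonType, 5 for TargetUserName) are based on the standard 4624 event layout:

    # Count interactive (logon type 2) logons per user from the Security event log
    Get-WinEvent -FilterHashtable @{LogName='Security'; Id=4624} |
        Where-Object { $_.Properties[8].Value -eq 2 } |
        Group-Object { $_.Properties[5].Value } |
        Sort-Object Count -Descending |
        Select-Object Count, Name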

    Setting the Primary Computer

    As I mentioned before, look at TechNet for some DSAC step-by-step for setting the msDS-PrimaryComputer attribute and the necessary group policies. However, if you want to use native Windows PowerShell instead of our interesting out of band module, here are some more juice-flow inducing samples.

    The ActiveDirectory Windows PowerShell module get-adcomputer and set-aduser cmdlets allow you to easily retrieve a computer's distinguished name and assign it to the user's primary computer attribute. You can use assigned variables for readability, or nested functions for simplicity.

    Variable

    <$variable> = get-adcomputer <computer name>

    Set-aduser <user name> -add @{'msDS-PrimaryComputer'="<$variable>"}

    For example, with a computer named cli1 and a user name stduser:

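    A sketch of those two commands with that computer and user:

    # Look up CLI1 and assign its distinguished name as stduser's primary computer
    $computer = Get-ADComputer cli1
    Set-ADUser stduser -Add @{'msDS-PrimaryComputer'=$computer.DistinguishedName}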

    Nested

    Set-aduser <user name> -add @{'msDS-PrimaryComputer'=(get-adcomputer <computer name>).distinguishedname}

    For example, with that same user and computer:

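    A sketch of the nested form with the same user and computer:

    # Same assignment as above, collapsed into a single line
    Set-ADUser stduser -Add @{'msDS-PrimaryComputer'=(Get-ADComputer cli1).DistinguishedName}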

    Other techniques

    If you use AD DS to store the user's last computer in their Comment attribute as part of a logon script - as described in the earlier section - here is an example that reads stduser's Comment attribute and assigns the primary computer based on its contents:

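    A sketch of that approach (it assumes the logon script shown earlier has already written the computer's distinguished name into stduser's Comment attribute):

    # Read the DN stored in the Comment attribute and assign it as the primary computer
    $user = Get-ADUser stduser -Properties Comment
    Set-ADUser $user -Add @{'msDS-PrimaryComputer'=$user.Comment}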

    If you wanted to assign primary computers to all of the users within the Foo OU based on their comment attributes, you could use this example:

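    A sketch of the bulk version; the OU distinguished name below is a placeholder you would replace with your own domain:

    # For every user in the Foo OU that has a Comment value, use it as the primary computer DN
    Get-ADUser -SearchBase 'OU=Foo,DC=contoso,DC=com' -LDAPFilter '(comment=*)' -Properties Comment |
        ForEach-Object { Set-ADUser $_ -Add @{'msDS-PrimaryComputer'=$_.Comment} }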

    If you have a CSV file that contains the user accounts and their assigned computers as DNs, you can use the import-csv cmdlet to update the users. For example:

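    A sketch assuming the CSV has two columns named User and ComputerDN (the path and both column names are placeholders; match them to your own file):

    # Each row: User = the account name, ComputerDN = the assigned computer's distinguished name
    Import-Csv C:\data\primarycomputers.csv |
        ForEach-Object { Set-ADUser $_.User -Add @{'msDS-PrimaryComputer'=$_.ComputerDN} }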

    This is particularly useful when you have some asset history and assign certain users specific computers. Certainly a good idea for insurance and theft prevention purposes, regardless.

    Cached Data Clearing GP

    Enabling Primary Computer does not remove any data already cached on other computers that a user does not access again. I.e. if a user was already using Roaming User Profiles or Folder Redirection (which, by default, automatically adds all redirected shell folders to the Offline Files cache), enabling Primary Computer means only that further data is not copied locally to non-approved computers.

    In the case of Roaming User Profiles, several policies can clear data from computers at logoff or restart:

    • Delete user profiles older than a specified number of days on system restart - this deletes unused profiles after N days when a computer reboots
    • Delete cached copies of roaming profiles - this removes locally saved roaming profiles once a user logs off. This policy would also apply to Primary Computers and should be used with caution


    In the case of Folder Redirection and Offline Files, there is no specific policy to clear out stale data or delete cached data at logoff like there is for RUP, but that's immaterial:

    • When a computer needs to remove FR after becoming "non-primary" - due to the Primary Computer feature either being enabled or the machine being removed from the primary computer list for the user - the removal behavior depends on how the FR policy is configured to behave on removal. It can be configured to either:
      • Redirect the folder back to the local profile – the folder location is set back to the default location in the user's profile (e.g., c:\users\%USERNAME%\Documents), the data copies from the file server to the local profile, and the file server location is unpinned from the computer's Offline Files cache
      • Leave the folder pointing to the file server – the folder location still points to the file server location, but the contents are unpinned from the computer's Offline Files cache. The folder configuration is no longer controlled through policy

    In both cases, once the data is unpinned from the Offline Files cache, it will evict from the computer in the background after 15 minutes.

    Logging Primary Computer Usage

    To see that the Download roaming profiles on primary computers only policy took effect and the behavior at each user logon, examine the User Profile Service operational event log for Event 63. This will state either "This computer is a primary computer for this user" or "This computer is not a primary computer for this user":


    The new User Profile Service events for Primary Computer are all in the Operational event log:

    Event ID: 62
    Severity: Warning
    Message: Windows was unable to successfully evaluate whether this computer is a primary computer for this user. This may be due to failing to access the Active Directory server at this time. The user's roaming profile will be applied as configured. Contact the Administrator for more assistance. Error: %1
    Notes and resolution: Indicates an issue contacting LDAP on a domain controller. Examine the extended error, examine the System and Application event logs for further details, and consider getting a network capture if still unclear.

    Event ID: 63
    Severity: Informational
    Message: This computer %1 a primary computer for this user
    Notes and resolution: This event's variable will change from "IS" to "IS NOT" depending on circumstances. It is not an error condition unless this is unexpected to the administrator. A customer should interrogate the rest of the IT staff on the network if not expecting to see these events.

    Event ID: 64
    Severity: Informational
    Message: The primary computer relationship for this computer and this user was not evaluated due to %1
    Notes and resolution: Examine the extended error for details.

    To see that the Redirect folders on primary computers only policy took effect and the behavior at each user logon, examine the Folder Redirection operational event log for Event 1010. This will state "This computer is not a primary computer for this user", or the equivalent message if it is (good catch, Johan from Comments):


    Architecture

    Windows 8 implements Primary Computer through two new AD DS attributes in the Windows Server 2012 (version 56) Schema.

    Primary Computer is a client-side feature; no matter what you configure in Active Directory or group policy on domain controllers, computers running Windows 7, Windows Server 2008 R2, or older operating systems will not obey the settings.

    AD DS Schema

    • msDS-PrimaryComputer - the primary computers assigned to a user or to a security group containing users. Contains multi-valued, linked-value distinguished names that reference the msDS-isPrimaryComputerFor backlink on computer objects.
    • msDS-isPrimaryComputerFor - the users assigned to a computer account. Contains multi-valued, linked-value distinguished names that reference the msDS-PrimaryComputer forward link on user objects.

    Processing

    The processing of this new functionality is:

    1. Look at Group Policy setting to determine if the msDS-PrimaryComputer attribute in Active Directory should influence the decision to roam the user's profile or apply Folder Redirection.
    2. If step 1 is TRUE, initialize an LDAP connection and bind to a domain controller
    3. Check for the required schema version
    4. Query for the "msDS-IsPrimaryComputerFor" attribute on the AD object representing the current computer
    5. Check to see if the current user is in the list returned by this attribute or in the group returned by this attribute and if so, return TRUE for IsPrimaryComputerForUser. If no match is found, return FALSE for IsPrimaryComputerForUser
    6. If step 5 is FALSE:
      1. For RUP, an existing cached local profile should be used if present. If there is no local profile for the user, a new local profile should be created
      2. For FR, if Folder Redirection was previously applied, the Folder Redirection configuration is removed according to the removal action specified by the previously applied policy (this is retained in the local FR configuration). If there is no current FR configuration, there is no work to be done

    Troubleshooting

    Because this feature is both new and simple, most troubleshooting is likely to follow this basic workflow when Primary Computer is not working as expected:

    1. User assigned the correct computer distinguished name (or in the security group assigned the computer DN)
    2. AD DS replication has converged for the user and computer objects
    3. AD DS and SYSVOL replication has converged for the Primary Computer group policies
    4. Primary Computer group policies applying to the computer
    5. User has logged off and on since the Primary Computer policies applied

    The logs of note for troubleshooting Primary Computer are:

    • Gpresult/GPMC RSoP report - validates that Primary Computer policy is applying to the computer or user
    • Group Policy operational event log - validates that group policy in general is applying to the computer or user, with specific details
    • System event log - validates that group policy in general is applying to the computer or user, with generalities
    • Application event log - validates that Folder Redirection and Roaming User Profiles are working, with generalities and specific details
    • Folder Redirection operational event log - validates that Folder Redirection is working, with specific details
    • User Profile Service operational event log - validates that Roaming User Profiles are working, with specific details
    • Fdeploy.log - validates that Folder Redirection is working, with specific details

    Cases reported by your users or help desk as Primary Computer processing issues are more likely to be AD DS replication, SYSVOL replication, group policy, folder redirection, or roaming user profile issues. Determine immediately if Primary Computer is at all to blame, then move on to the more likely historical culprits. Watch for red herrings!

    Likewise, your company may not be internally aware of Primary Computer deployments and may send you down a rat hole troubleshooting expected behavior. Always ensure that a "problem" with folder redirection or roaming user profiles isn't just another group within the customer's company configuring Primary Computer and not telling you (this applies to you too; send a memo, dangit!).

    Have fun.

    Ned "shouldn't we have called it 'Primary Computers?'" Pyle
