


ADAMSync 101


Hi Everyone, Kim Nichols here again, and this time I have an introduction to ADAMSync. I take a lot of cases on ADAM and AD LDS and have seen a number of problems arise from less than optimally configured ADAMSync XML files. There are many sources of information on ADAM/AD LDS and ADAMSync (I'll include links at the end), but I still receive lots of questions and cases on configuring ADAM/AD LDS for ADAMSync.

We'll start at the beginning and talk about what ADAM/AD LDS is, what ADAMSync is and then finally how you can get AD LDS and ADAMSync working in your environment.

What is ADAM/AD LDS?

ADAM (Active Directory Application Mode) is the 2003 name for AD LDS (Active Directory Lightweight Directory Services). AD LDS is, as the name describes, a lightweight version of Active Directory. It gives you the capabilities of a multi-master LDAP directory that supports replication without some of the extraneous features of an Active Directory domain controller (domains and forests, Kerberos, trusts, etc.). AD LDS is used in situations where you need an LDAP directory but don't want the administration overhead of AD. Usually it's used with web applications or SQL databases for authentication. Its schema can also be fully customized without impacting the AD schema.

AD LDS uses the concept of instances, similar to that of instances in SQL. What this means is one AD LDS server can run multiple AD LDS instances (databases). This is another differentiator from Active Directory: a domain controller can only be a domain controller for one domain. In AD LDS, each instance runs on a different set of ports. The default instance of AD LDS listens on 389 (similar to AD).

Here's some more information on AD LDS if you're new to it:

What is ADAMSync?

In many scenarios, you may want to store user data in AD LDS that you can't or don't want to store in AD. Your application will point to the AD LDS instance for this data, but you probably don't want to manually create all of these users in AD LDS when they already exist in AD. If you have Forefront Identity Manager (FIM), you can use it to synchronize the users from AD into AD LDS and then manually populate the AD LDS specific attributes through LDP, ADSIEdit, or a custom or 3rd party application. If you don't have FIM, however, you can use ADAMSync to synchronize data from your Active Directory to AD LDS.

It is important to remember that ADAMSync DOES NOT synchronize user passwords! If you want the AD LDS user account to use the same password as the AD user, then userproxy transformation is what you need. (That's a topic for another day, though. I'll include links at the end for userproxy.)

ADAMSync uses an XML file that defines which data will synchronize from AD to AD LDS. The XML file includes the AD partition from which to synchronize, the object types (classes or categories), and attributes to synchronize. This file is loaded into the AD LDS database and used during ADAMSync synchronization. Every time you make changes to the XML file, you must reload the XML file into the database.

In order for ADAMSync to work:

  1. The MS-AdamSyncMetadata.LDF file must be imported into the schema of the AD LDS instance prior to attempting to install the XML file. This LDF creates the classes and attributes for storing the ADAMSync.xml file (see the example import command just after this list).
  2. The schema of the AD LDS instance must already contain all of the object classes and attributes that you will be syncing from AD to AD LDS. In other words, you can't sync a user object from AD to AD LDS unless the AD LDS schema contains the User class and all of the attributes that you specify in the ADAMSync XML (we'll talk more about this next). There is a blog post on using ADSchemaAnalyzer to compare the AD schema to the AD LDS schema and export the differences to an LDF file that can be imported into AD LDS.
  3. Unless you plan on modifying the schema of the AD LDS instance, your instance should be named DC=<partition name>, DC=<com or local or whatever> and not CN=<partition name>. Unfortunately, the example in the AD LDS setup wizard uses CN= for the partition name.  If you are going to be using ADAMSync, you should disregard that example and use DC= instead.  The reason behind this change is that the default schema does not allow an organizationalUnit (OU) object to have a parent object of the Container (CN) class. Since you will be synchronizing OUs from AD to AD LDS and they will need to be child objects of your application partition head, you will run into problems if your application partition is named CN=.




    Obviously, this limitation is something you can change in the AD LDS schema, but simply naming your partition with a DC= name component will eliminate the need to make such a change. In addition, you won't have to remember that you made a change to the schema in the future.
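For reference, importing the metadata LDF (step 1 above) is typically done with ldifde from the %windir%\ADAM directory. This is just a sketch; it assumes the instance is local on port 389, so adjust the server name, port, and file path for your environment:

ldifde -i -f MS-AdamSyncMetadata.LDF -s localhost:389 -j . -c "cn=Configuration,dc=X" #ConfigurationNamingContext

The -c switch with the #ConfigurationNamingContext constant substitutes your instance's actual configuration naming context into the LDF as it is imported.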

The best advice I can give regarding ADAMSync is to keep it as simple as possible to start off with. The goal should be to get a basic XML file that you know will work, gradually add attributes to it, and troubleshoot issues one at a time. If you try to do too much (too wide an object filter or too many attributes) in the XML from the beginning, you will likely run into multiple issues and not know where to begin troubleshooting.

KEEP IT SIMPLE!!!

MS-AdamSyncConf.xml

Let's take a look at the default XML file that Microsoft provides and go through some recommendations to make it more efficient and less prone to issues. The file is named MS-AdamSyncConf.XML and is typically located in the %windir%\ADAM directory.

<?xml version="1.0"?>
<doc>
<configuration>
<description>sample Adamsync configuration file</description>
<security-mode>object</security-mode>
<source-ad-name>fabrikam.com</source-ad-name> <------ 1
<source-ad-partition>dc=fabrikam,dc=com</source-ad-partition> <------ 2
<source-ad-account></source-ad-account> <------ 3
<account-domain></account-domain> <------ 4
<target-dn>dc=fabrikam,dc=com</target-dn> <------ 5
<query>
<base-dn>dc=fabrikam,dc=com</base-dn> <------ 6
<object-filter>(objectClass=*)</object-filter> <------ 7
<attributes> <------ 8
<include></include>
<exclude>extensionName</exclude>
<exclude>displayNamePrintable</exclude>
<exclude>flags</exclude>
<exclude>isPrivelegeHolder</exclude>
<exclude>msCom-UserLink</exclude>
<exclude>msCom-PartitionSetLink</exclude>
<exclude>reports</exclude>
<exclude>serviceprincipalname</exclude>
<exclude>accountExpires</exclude>
<exclude>adminCount</exclude>
<exclude>primarygroupid</exclude>
<exclude>userAccountControl</exclude>
<exclude>codePage</exclude>
<exclude>countryCode</exclude>
<exclude>logonhours</exclude>
<exclude>lockoutTime</exclude>
</attributes>
</query>
<schedule>
<aging>
<frequency>0</frequency>
<num-objects>0</num-objects>
</aging>
<schtasks-cmd></schtasks-cmd>
</schedule> <------ 9
</configuration>
<synchronizer-state>
<dirsync-cookie></dirsync-cookie>
<status></status>
<authoritative-adam-instance></authoritative-adam-instance>
<configuration-file-guid></configuration-file-guid>
<last-sync-attempt-time></last-sync-attempt-time>
<last-sync-success-time></last-sync-success-time>
<last-sync-error-time></last-sync-error-time>
<last-sync-error-string></last-sync-error-string>
<consecutive-sync-failures></consecutive-sync-failures>
<user-credentials></user-credentials>
<runs-since-last-object-update></runs-since-last-object-update>
<runs-since-last-full-sync></runs-since-last-full-sync>
</synchronizer-state>
</doc>

Let's go through the default XML file by number and talk about what each section does, why the defaults are what they are, and what I typically recommend when working with customers.

  1. <source-ad-name>fabrikam.com</source-ad-name> 

    Replace fabrikam.com with the FQDN of the domain/forest that will be your synchronization source

  2. <source-ad-partition>dc=fabrikam,dc=com</source-ad-partition> 

    Replace dc=fabrikam,dc=com with the DN of the AD partition that will be the source for the synchronization

  3. <source-ad-account></source-ad-account> 

    Contains the account that will be used to authenticate to the source forest/domain. If left empty, the credentials of the logged on user will be used

  4. <account-domain></account-domain> 

    Contains the domain name to use for authentication to the source domain/forest. This element combined with <source-ad-account> make up the domain\username that will be used to authenticate to the source domain/forest. If left empty, the domain of the logged on user will be used.

  5. <target-dn>dc=fabrikam,dc=com</target-dn>

    Replace dc=fabrikam,dc=com with the DN of the AD LDS partition you will be synchronizing to.

    NOTE: In 2003 ADAM, you were able to specify a sub-OU or container of the ADAM partition, for instance OU=accounts,dc=fabrikam,dc=com. This is not possible in 2008+ AD LDS. You must specify the head of the partition, dc=fabrikam,dc=com. This is publicly documented here.

  6. <base-dn>dc=fabrikam,dc=com</base-dn>

    Replace dc=fabrikam,dc=com with the base DN of the container in AD that you want to synchronize objects from.

    NOTE: You can specify multiple base DNs in the XML file (see the sketch after this list), but it is important to note that due to the way the dirsync engine works, the entire directory will still be scanned during synchronization. This can lead to unexpectedly long synchronization times and confusing output in the adamsync.log file. The short of it is that even though you are limiting where objects are synchronized from, this doesn't reduce your synchronization time, and you will see entries in the adamsync.log file that indicate objects being processed but not written. This can make it appear as though ADAMSync is not working correctly if your directory is large but you are syncing only a small percentage of it. Also, the log will grow and grow, but it may take a long time for objects to begin to appear in AD LDS. This is because the entire directory is being enumerated, but only a portion is being synchronized.

  7. <object-filter>(objectClass=*)</object-filter>

    The object filter determines which objects will be synchronized from AD to AD LDS. While objectClass=* will get you everything, do you really want or need EVERYTHING? Consider the amount of data you will be syncing and the security implications of having everything duplicated in AD LDS. If you only care about user objects, then don't sync computers and groups.

    The filter that I generally recommend as a starting point is:

    (&#124;(objectCategory=Person)(objectCategory=OrganizationalUnit))

    Rather than objectClass=User, I recommend objectCategory=Person. But, why, you ask? I'll tell you :-) If you've ever looked at the class of a computer object, you'll notice that it contains an objectClass of user.



    What this means to ADAMSync is that if I specify an object filter of objectClass=user, ADAMSync will synchronize users and computers (and contact objects and anything else that inherits from the User class). However, if I use objectCategory=Person, I only get actual user objects. Pretty neat, eh?

    So, what does this &#124; mean and why include objectCategory=OrganizationalUnit? The literal &#124; is the XML representation of the | (pipe) character which represents a logical OR. True, I've seen customers just use the | character in the XML file and not have issues, but I always use the XML rather than the | just to be certain that it gets translated properly when loaded into the AD LDS instance. If you need to use an AND rather than an OR, the XML for & is &amp;.

    You need objectCategory=OrganizationalUnit so that objects that are moved within AD get synchronized properly to AD LDS. If you don't specify this, the OUs that contain objects within scope of the object filter will be created on the initial creation of the object in AD LDS. But, if that object is ever MOVED in the source AD, ADAMSync won't be able to synchronize that object to the new location. Moving an object changes the full DN of the object. Since we aren't syncing the OUs the object just "disappears" from an ADAMSync perspective and never gets updated/moved.

    If you need groups to be synchronized as well you can add (objectclass=group) inside the outer parentheses and groups will also be synced.

    (&#124;(objectCategory=Person)(objectCategory=OrganizationalUnit)(objectClass=Group))

  8. <attributes>

    The attributes section is where you define which attributes to synchronize for the object types defined in the <object-filter>.

    You can either use the <include></include> or <exclude></exclude> tags, but you cannot use both.

    The default XML file provided by Microsoft takes the high ground and uses the <exclude></exclude> tags which really means include all attributes except the ones that are explicitly defined within the <exclude></exclude> element. While this approach guarantees that you don't miss anything important, it can also lead to a lot of headaches in troubleshooting.

    If you've ever looked at an AD user account in ADSIEdit (especially in an environment with Exchange), you'll notice there are hundreds of attributes defined. Keeping to my earlier advice of "keep it simple", every attribute you sync adds to the complexity.

    When you use the <exclude></exclude> tags you don't know what you are syncing; you only know what you are not syncing. If your application isn't going to use the attribute then there is no reason to copy that data to AD LDS. Additionally, there are some attributes and classes that just won't sync due to how the dirsync engine works. I'll include the list as I know it at the end of the article. Every environment is different in terms of which schema updates have been made and which attributes are being used. Also, as I mentioned earlier, if your AD LDS schema does not contain the object classes and attributes that you have defined in your ADAMSync XML file, your synchronization will die in a big blazing ball of flame.


    Whoosh!!

    A typical attributes section to start out with is something like this:

    <include>objectSID</include> <----- only needed for userproxy
    <include>userPrincipalName</include> <----- must be unique in AD LDS instance
    <include>displayName</include>
    <include>givenName</include>
    <include>sn</include>
    <include>physicalDeliveryOfficeName</include>
    <include>telephoneNumber</include>
    <include>mail</include>
    <include>title</include>
    <include>department</include>
    <include>manager</include>
    <include>mobile</include>
    <include>ipPhone</include>
    <exclude></exclude>

    Initially, you may even want to remove userPrincipalName, just to verify that you can get a sync to complete successfully. Synchronization issues caused by the userPrincipalName attribute are among the most common ADAMSync issues I see. Active Directory allows multiple accounts to have the same userPrincipalName, but ADAMSync will not sync an object if it has the same userPrincipalName of an object that already exists in the AD LDS database.

    If you want to be a superhero and find duplicate UPNs in your AD before you attempt ADAMSync, here's a nifty csvde command that will generate a comma-delimited file that you can run through Excel's "Highlight duplicates" formatting options (or a script if you are a SUPER-SUPERHERO; there's a PowerShell sketch after this list) to find the duplicates.

    csvde -f upn.csv -s localhost:389 -p subtree -d "DC=fabrikam,DC=com" -r "(objectClass=user)" -l sAMAccountName,userPrincipalName

    Remember, you are targeting your AD with this command, so the localhost:389 implies that the command is being run on the DC. You'll need to replace "DC=fabrikam, DC=com" with your AD domain's DN.

  9. </schedule>

    After </schedule> is where you would insert the elements to do user proxy transformation. In the References section, I've included links that explain the purpose and configuration of userproxy. The short version is that you can use this section of code to create userproxy objects rather than AD LDS user class objects. Userproxy objects are a special class of user that links back to an Active Directory domain account to allow the AD LDS user to utilize the password of their corresponding user account in AD. It is NOT a way to log on to AD from an external network. It is a way to allow an application that utilizes AD LDS as its LDAP directory to authenticate a user via the same password they have in AD. Communication between AD and AD LDS is required for this to work, and the application that is requesting the authentication does not receive a Kerberos ticket for the user.

    Here is an example of what you would put after </schedule> and before </configuration>

    <user-proxy>
    <source-object-class>user</source-object-class>
    <target-object-class>userProxyFull</target-object-class>
    </user-proxy>
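To illustrate the note in item 6: multiple base DNs are just repeated <base-dn> elements inside the <query> section. A quick sketch (the OU names here are made up for illustration, and the attributes section is omitted):

<query>
<base-dn>OU=Accounts,dc=fabrikam,dc=com</base-dn>
<base-dn>OU=Contractors,dc=fabrikam,dc=com</base-dn>
<object-filter>(&#124;(objectCategory=Person)(objectCategory=OrganizationalUnit))</object-filter>
...
</query>

Remember, even with multiple base DNs the dirsync engine still walks the entire source partition; this only limits what gets written to AD LDS, not how long the sync takes.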
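And for the SUPER-SUPERHEROES mentioned in item 8, here is a rough PowerShell alternative to the csvde/Excel approach for finding duplicate UPNs before you sync. It assumes the ActiveDirectory module is available and is run against the source AD domain:

Import-Module ActiveDirectory

# Group every user by UPN and keep only the UPNs that appear more than once
Get-ADUser -Filter 'userPrincipalName -like "*"' -Properties userPrincipalName |
    Group-Object -Property userPrincipalName |
    Where-Object { $_.Count -gt 1 } |
    ForEach-Object { $_.Group | Select-Object sAMAccountName, userPrincipalName }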

Installing the XML file

OK! That was fun, wasn't it? Now that we have an XML file, how do we use it? This is covered in a lot of different materials, but the short version is we have to install it into the AD LDS instance. To install the file, run the following command from the ADAM installation directory (%windir%\ADAM):

Adamsync /install localhost:389 CustomAdamsync.xml

The command above assumes you are running it on the AD LDS server, that the instance is running on port 389 and that the XML file is located in the path of the adamsync command.

What does this do exactly, you ask? The adamsync install command copies the XML file contents into the configurationFile attribute on the AD LDS application partition head. You can view the attribute by connecting to the application partition via LDP or through ADSIEdit. This is a handy thing to know. You can use this to verify for certain exactly what is configured in the instance. Often there are several versions of the XML file in the ADAM directory and it can be difficult to know which one is being used. Checking the configurationFile attribute will tell you exactly what is configured. It won't tell you which XML file was used, but at least you will know the configuration.
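If you'd rather dump that attribute to a file than scroll through it in LDP, here's a one-line sketch with ldifde (assuming the instance is local on port 389 and the partition is DC=fabrikam,DC=com):

ldifde -f currentconfig.txt -s localhost:389 -d "DC=fabrikam,DC=com" -p base -l configurationFile

The -p base scope reads just the partition head object, and -l limits the output to the configurationFile attribute.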

The implication of this is that anytime you update the XML file you must reinstall it using the adamsync /install command otherwise the version in the instance is not updated. I've made this mistake a number of times during troubleshooting!

Synchronizing with AD

Finally, we are ready to synchronize! Running the synchronization is the "easy" part assuming we've created a valid XML file, our AD LDS schema has all the necessary classes and attributes, and the source AD data is without issue (duplicate UPN is an example of a known issue).

From the ADAM directory (typically %windir%\ADAM), run the following command:

Adamsync /sync localhost:389 "DC=fabrikam,DC=com" /log adamsync.log

Again, we're assuming you are running the command on the AD LDS server and that the instance is running on port 389. The DN referenced in the command is the DN of your AD LDS application partition. /log is very important (you can name the log anything you want). You will need this log if there are any issues during the synchronization. The log will tell you which object failed and give you a cryptic "detailed" reason as to why. Below is an example of an error due to a duplicate UPN. This is one of the easier ones to understand.

====================================================
Processing Entry: Page 67, Frame 1, Entry 64, Count 1, USN 0
Processing source entry <guid=fe36238b9dd27a45b96304ea820c82d8>
Processing in-scope entry fe36238b9dd27a45b96304ea820c82d8.

Adding target object CN=BillyJoeBob,OU=User Accounts,dc=fabrikam,dc=com. Adding attributes: sourceobjectguid, objectClass, sn, description, givenName, instanceType, displayName, department, sAMAccountName, userPrincipalName, Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:
0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)

. Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:
0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)
===============================================

During the sync, if you are syncing from the Active Directory domain head rather than an OU or container, your objects should begin showing up in the AD LDS instance almost immediately. The objects don't synchronize in any order that makes sense to the human brain, so don't worry if objects are appearing in a random order. There is no progress bar or indication of how the sync is going other than the fact that the log file is growing. When the sync completes you will be returned to the command prompt and your log file will stop growing.

Did it work?

As you can see there is nothing on the command line nor are there any events in any Windows event log that indicate that the synchronization was successful. In this context, successful means completed without errors and all objects in scope, as defined in the XML file, were synchronized. The only way to determine if the synchronization was successful is to check the log file. This highlights the importance of generating the log. Additionally, it's a good idea to keep a reasonable number of past logs so if the sync starts failing at some point you can determine approximately when it started occurring. Management likes to know things like this.

Since you'll probably be automating the synchronization (easy to do with a scheduled task) and not running it manually, it's a good idea to set up a reminder to periodically check the logs for issues. If you've never looked at a log before, it can be a little intimidating if there are a lot of objects being synchronized. The important thing to know is that if the sync was successful, the bottom of the log will contain a section similar to the one below:

Updating the configuration file DirSync cookie with a new value.

Beginning processing of deferred dn references.
Finished processing of deferred dn references.

Finished (successful) synchronization run.
Number of entries processed via dirSync: 16
Number of entries processed via ldap: 0
Processing took 4 seconds (0, 0).
Number of object additions: 3
Number of object modifications: 13
Number of object deletions: 0
Number of object renames: 2
Number of references processed / dropped: 0, 0
Maximum number of attributes seen on a single object: 9
Maximum number of values retrieved via range syntax: 0

Beginning aging run.
Aging requested every 0 runs. We last aged 2 runs ago.
Saving Configuration File on DC=instance1,DC=local
Saved configuration file.

If your log just stops without a section similar to the one above, then the last entry will indicate an error similar to the one above for the duplicate UPN.
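If you do automate the sync with a scheduled task as suggested above, the task definition can be as simple as the sketch below. The task name, schedule, and script path are placeholders, so adjust them for your environment:

schtasks /create /tn "ADAMSync nightly" /sc daily /st 02:00 /ru SYSTEM /tr "C:\Scripts\RunAdamSync.cmd"

Here RunAdamSync.cmd is a hypothetical wrapper batch file containing the adamsync /sync command and /log switch shown earlier; wrapping the command in a script keeps the quoting simple and gives you one place to change the log path.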

Conclusion and other References

That covers the basics of setting up ADAMSync! I hope this information makes the process more straightforward and gives you some tips for getting it to work the first time! The most important point I can make is to start very simple with the XML file and get something to work. You can always add more attributes to the file later, but if you start from broken it can be difficult to troubleshoot. Also, I highly recommend using <include> over <exclude> when specifying attributes to synchronize. This may be more work for your application team since they will have to know what their application requires, but it will make setting up the XML file and getting a successful synchronization much easier!

ADAMSync excluded objects

As I mentioned earlier, there are some attributes, classes and object types that ADAMSync will not synchronize. The items listed below are hard-coded not to sync. There is no way around this using ADAMSync. If you need any of these items to sync, then you will need to use LDIFDE exports, FIM, or some other method to synchronize them from AD to AD LDS. The scenarios where you would require any of these items are very limited and some of them are dealt with within ADAMSync by converting the attribute to a new attribute name (objectGUID to sourceObjectGUID).

Attributes

cn, currentValue, dBCSPwd, fSMORoleOwner, initialAuthIncoming, initialAuthOutgoing, isCriticalSystemObject, isDeleted, lastLogonTimeStamp, lmPwdHistory, msDS-ExecuteScriptPassword, ntPwdHistory, nTSecurityDescriptor, objectCategory, objectSid (except when being converted to proxy), parentGUID, priorValue, pwdLastSet, sAMAccountType, sIDHistory, supplementalCredentials, systemFlags, trustAuthIncoming, trustAuthOutgoing, unicodePwd, whenChanged

Classes

crossRef, secret, trustedDomain, foreignSecurityPrincipal, rIDSet, rIDManager

Other

Naming Context heads, deleted objects, empty attributes, attributes we do not have permissions to read, objectGUIDs (gets transferred to sourceObjectGUID), objects with del-mangled distinguished names (DEL:\)

Additional Goodies

ADAMSync

AD LDS Replication

Misc Blogs

GOOD LUCK and ENJOY!

Kim "Sync or swim" Nichols

Windows Server 2012 R2 - Preview available for download


Just in case you missed the announcement, the preview build of Windows Server 2012 R2 is now available for download.  If you want to see the latest and greatest, head on over there and take a gander at the new features.  All of us here in support have skin in this game, but Directory Services (us) has several new features that we'll be talking about over the coming months, including a lot of the stuff named in the announcement:

"Empowering employee productivity– Windows Server Work Folders, Web App Proxy, improvements to Active Directory Federation Services and other technologies will help companies give their employees consistent access to company resources on the device of their choice."

Obviously this is still a beta release.  Things can change before RTM.  Don't go doing anything silly like deploying this in production - it's officially unsupported at this stage, and for testing purposes only.  But with all that in mind, give it a whirl, and hit the TechNet forums to provide feedback and ask questions.  You will also want to keep an eye on some of our server and tools blogs in the near future.  For your convenience, a bunch of those are linked in the bar up top for you.

Happy previewing!

--David "Town Crier" Beach

Interesting findings on SETSPN -x -f


Hello folks, this is Herbert from the Directory Services support team in Europe!

Kerberos is becoming increasingly mandatory for really cool features such as Protocol Transition.  Moreover, as you might be painfully aware, managing Service Principal Names (SPNs) for the use of Kerberos by applications can be daunting at times.

In this blog, we will not be going into the gory details of SPNs and how applications are using them. In fact, I'm assuming you already have some basic knowledge about SPNs and how they are used.

Instead, we're going to talk about an interesting behavior that can occur when an administrator is doing their due diligence managing SPNs.  This behavior can arise when you are checking the status of the account the SPN is planned for, or when you are checking to see if the SPN that must be registered is already registered in the domain or forest.

As we all know, the KDCs cannot issue tickets for a particular service if there are duplicate SPNs, and authentication does not work if the SPN is on the wrong account.

Experienced administrators learn to use the SETSPN utility to validate SPNs when authentication problems occur.  In the Windows Server 2008 version of SETSPN, we provide several options useful for identifying duplicate SPNs:

-      If you want to look for a duplicate of a particular SPN: SETSPN /q <SPN>

-      If you want to search for any duplicate in the domain: SETSPN /x

You can also use the “/f” option to extend the duplicate search to the whole Forest. Many Active Directory Admins use this as a proactive check of the forest for duplicate SPNs.

So far, so good…

The Problem

Sometimes, you’ll get an error running SETSPN -x -f:

c:\>SETSPN -X -F -P
Checking forest DC=contoso,DC=com
Operation will be performed forestwide, it might take a while.
Ldap Error(0x55 -- Timeout): ldap_get_next_page_s

“-P” just tells the tool not to clutter the output with progress indications, but you can see from that error message that we are not talking only about Kerberos anymore. There is a new problem.

 

What are we seeing in the diagnostic data?

In a network trace of the above you will see a query against the GC (port 3268) with no base DN and the filter (servicePrincipalName=*). SETSPN uses paged queries with a page size of 100 objects. In a large Active Directory environment this yields quite a number of pages.

If you look closely at network capture data, you’ll often find that Domain Controller response times slowly increase towards the end of the query. If the command completes, you’ll sometimes see that the delay is longest on the last page returned. For example, when we reviewed data for a recent customer case, we noted:

”Customer also noticed that it usually hangs on record 84.”

 

Troubleshooting LDAP performance and building custom queries calls for the use of the STATS Control. Here is how you use it in LDP.exe:

Once connected to port 3268 and logged on as an admin, you can build the query in the same manner as SETSPN does.

1. Launch LDP as an administrator.

2. Open the Search Window using Browse\Search or Ctrl-S.

3. Enter an empty base DN and the filter, and specify "Subtree" as the scope. The list of attributes does not matter here.

4. Go to Options:

 

5. Specify an “Extended” query as we want to use controls. Note I have specified a page size of 100 elements, but that is not important, as we will see later. Let’s move on to “Controls”:

 


6. From the List of Controls select "Search Stats". When you select it, it automatically checks it in.

7. Now "OK" your way out of the "Controls" and "Options" dialogs.

8. Hit "Run" on the "Search" dialog.

 

You should get a large list of results, but also the STATS a bit like this one:

 

Call Time: 62198 (ms)

Entries Returned: 8508

Entries Visited: 43076

Used Filter: (servicePrincipalName=*)

Used Indices: idx_servicePrincipalName:13561:N

Pages Referenced          : 801521

Pages Read From Disk      : 259

Pages Pre-read From Disk  : 1578

Pages Dirtied             : 0

Pages Re-Dirtied          : 0

Log Records Generated     : 0

Log Record Bytes Generated: 0

 

What are these stats telling us?

We have a total of 8508 objects in the "Entries Returned" result set, but we have visited 43076 objects. That sounds odd, because we used the index "idx_servicePrincipalName". This does not really look as if the query is using the index.

 

So what is happening here?

At this point, we experience the special behavior of multi-valued non-linked attributes and how they are represented in the index. To illustrate this, let me explain a few data points:

 

1. A typical workstation or member server has these SPNs:

servicePrincipalName: WSMAN/herbertm5
servicePrincipalName: WSMAN/herbertm5.europe.contoso.com
servicePrincipalName: TERMSRV/herbertm5.europe.contoso.com
servicePrincipalName: TERMSRV/HERBERTM5
servicePrincipalName: RestrictedKrbHost/HERBERTM5
servicePrincipalName: HOST/HERBERTM5
servicePrincipalName: RestrictedKrbHost/HERBERTM5.europe.contoso.com
servicePrincipalName: HOST/HERBERTM5.europe.contoso.com

 

2. When you look at the result set from running setspn, you notice that you’re not getting all of the SPNs you’d expect:

dn:CN=HQSCCM2K3TEST,OU=SCCM,OU=Test Infrastructure,OU=Domain Management,DC=contoso,DC=com
servicePrincipalName: WSMAN/sccm2k3test
servicePrincipalName: WSMAN/sccm2k3test.contoso.com

If you look at it closely, you notice all the SPNs start with characters very much at the end of the alphabet, which also happens to be the end of the index. These entries do not have a prefix like "HOST".

 

So how does this happen?

In the resultant set of an LDAP query, an object may only appear once, but it is possible for an object to be in the index multiple times because of the way the index is built. Each time the object is found in the index, the LDAP server has to check the other values of the indexed attribute on the object to see whether it also matches the filter and thus was already added to the result set.  The LDAP server is doing its due diligence to avoid returning duplicates.

For example, the first hit in the index for the above workstation example is "HOST/HERBERTM5".

The second hit, "HOST/HERBERTM5.europe.contoso.com", kicks off the duplicate-checking algorithm.

The object has already been read, so the IO and CPU hit has already happened.

Now the query keeps walking the index, and once it arrives at the prefix "WSMAN", the rate of objects it needs to skip approaches 100%. At that point it is looking at many objects while adding few additional objects to the result set.

On the last page of the query, things get even worse. There is an almost 100% rate of duplicates, so the clock of 60 seconds that SETSPN allows for the query is ticking, and there are only 8 objects to be found. If the Domain Controller has a slow CPU or the objects need to be read from the disk because of memory pressure, the SETSPN query will probably not finish within a minute for a large forest.  This results in the error "Ldap Error(0x55 -- Timeout): ldap_get_next_page_s". The larger the index (meaning, the more computers and users you have in your forest), the greater the likelihood that this can occur.

If you run the query with LDIFDE, LDP or ADFIND you will have a better chance the query will be successful. This is because by default these tools do not specify a time-out and thus use the Domain Controller LDAP policy, which defaults to 120 seconds instead of 60 seconds.

The problem with the results generated by these tools is that you have to correlate the results from the different outputs yourself – the tools won’t do it for you.
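Incidentally, if you want to confirm the MaxQueryDuration value your DCs actually enforce (the 120-second LDAP policy mentioned above), ntdsutil will show it. A sketch, assuming a hypothetical DC named dc01.contoso.com; type these at the successive ntdsutil prompts:

ntdsutil
ldap policies
connections
connect to server dc01.contoso.com
quit
show values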

 

So what can you do about it?

Typically you’ll have to do further troubleshooting, but here are some common causes/resolutions that I’ve seen:

  1. A domain controller that is short on memory encounters many cache misses and thus substantial IO. You can diagnose this using the NTDS performance counters in Performance Monitor.  You can add memory to reduce the IO rate and speed things up.
  2. If you are not experiencing memory pressure, the limiting factor could be the “Single-Thread-Performance” of the server. This is important as every LDAP query gets a worker thread and runs no faster than one logical CPU core can manage.  If you have a low number of logical cores in a system with a high amount of CPU activity, this can cause the threads to delay long enough for us to see an inconsistent query return.  In this situation your best bet is to look for ways to reduce overall processor load on the domain controller – for example, moving other services off of the machine.
  3. There is an update for Windows Server 2012 which helps to avoid the problem:

2799960          Time-out error when you run SETSPN.exe in Windows 8 or Windows Server 2012

http://support.microsoft.com/kb/2799960/EN-US

The last customer I helped had a combination of issues 1 and 2 and once he chose a beefier DC with more current hardware, the command always succeeded.  Another customer had a much bigger environment and ended up using the update I listed above to overcome the issue.

I hope you have enjoyed this journey through what is happening during such a SETSPN query.

Cheers,

Herbert "The Thread Master" Mauerer

Because TechNet didn't have enough Active Directory awesomeness already


Time for a quick lesson in blog history.  There'll be a quiz at the end!  Ok not really, but some history all the same.

Back a few years ago when we here at Microsoft were just starting to get savvy to this whole blog thing, one of our support escalation engineers, Tim Springston, decided to start up a blog about Active Directory.  You might have seen it in the past.  Over the years he's posted some really great insights and posts there that are definitely worth reading if you have the time.

Of course, the rest of us decided to do something completely different and started up AskDS a little later.  Rumor has it that it had something to do with a high-stakes poker game (Tim *is* from Texas, after all), but no one is really sure why we wound up with two support blogs to be honest - it's one of those things that just sort of happened.

Anyway, all this time while we've been partying it up over here on TechNet, our AD product team has been marooned over on MSDN with an audience of mostly developers.  Not that developers are bad folks - after all, they make the apps that power pretty much everything - but the truth is that a lot of what we do in Active Directory in terms of feature development is also targeted at Administrators and Architects and IT Pros.  You know, the people who read blogs on TechNet and may not think to also check MSDN.

After a lot of debate and discussion internally, the AD product team came to the conclusion that they really should have a presence on TechNet so that they could talk to everyone here about the cool features they're working on.

The problem?  Well, we sort of had a monopoly over here in support on AD-related blog names. :)

Meetings were convened.  Conferences were held.  Email flew back and forth.  There might even have been some shady dealings involving gifts of sugary pastries.  In the end though, Tim graciously agreed to move his blogging efforts over to AskDS and cede control of http://blogs.technet.com/ad to the Active Directory Product team.

The result?  Everyone wins.  Tim's now helping us write cool stuff for AskDS (you'll see plenty of that in the near future, I'm sure), and the product team has already started posting a bunch of things that you might have missed when they were on MSDN.

If you haven't seen what they're up to over there, go and take a look.  And as we get out of summer and get our people back from vacation, and, you know, roll a whole new server OS out the door, keep an eye on both blogs for updates, tips, explanations, and all manner of yummy AD-related goodness.

 

--David "Wait, we get another writer for AskDS??" Beach

Roaming Profile Compatibility - The Windows 7 to Windows 8 Challenge


[Editor's note:  Everything Mark mentions for Windows 8 clients here is also true for Windows 8.1 clients.  Windows 8 and Windows 8.1 clients use the same (v3) profile version, so the 8.1 upgrade will not prevent this from happening if you have roaming profiles in your environment.  Something to be aware of if you're planning to migrate users over to the new OS version. -David]

 

Hi. It’s Mark Renoden, Senior Premier Field Engineer in Sydney, Australia here again. Today I’ll offer a workaround for an issue that’s causing a number of customers around the world a degree of trouble. It turns out to be reasonably easy to fix, perhaps just not so obvious.

The Problem

The knowledge base article "Unpredictable behavior if you migrate a roaming user profile from Windows 8 to Windows 7" - http://support.microsoft.com/kb/2748329 states:

Windows 7 and Windows 8 use similar user profile formats, which do not support interoperability when they roam between computers that are running different versions of Windows. When a user who has a Windows 7 profile signs in to a Windows 8-based computer for the first time, the user profile is updated to the new Windows 8 format. After this occurs, the user profile is no longer compatible with Windows 7-based computers. See the "More information" section for detailed information about how this issue affects roaming and mandatory profiles.

This sort of problem existed between Windows XP and Windows Vista/7 but was mitigated by Windows Vista/7 using a profile that used a .v2 extension.  The OS would handle storing the separate profiles automatically for you when roaming between those OS versions.  With Windows 7 and Windows 8, both operating systems use roaming profiles with a .v2 extension, even though Windows 8 is actually writing the profile in a newer format.

Mark’s Workaround

The solution is to use separate roaming profiles for each operating system by utilizing an environment variable in the profile path.

Configuration

File server for profiles:

  1. Create profile share “\\Server\ProfilesShare” with permissions configured so that users have write access
  2. In ProfilesShare, create folders “Win7” and “Win8”


 

Active Directory:

  1. Create OU for Windows 7 Clients (say “Win7OU”) and create/link a GPO here (say “Win7GPO”)
  2. Create OU for Windows 8 Clients (say “Win8OU”) and create/link a GPO here (say “Win8GPO”)

Note: As an alternative to separate OUs, a WMI filter may be used to filter according to Operating System:

Windows 7 - SELECT version FROM Win32_OperatingSystem WHERE Version LIKE "6.1%" and ProductType = "1"

Windows 8 - SELECT version FROM Win32_OperatingSystem WHERE Version LIKE "6.2%" and ProductType = "1"

3. Edit Win7GPO

    1. Expand Computer Configuration -> Preferences -> Windows Settings
    2. Under Environment create an environment variable with
      1. Action: Create
      2. System Variable
      3. Name: OSVer
      4. Value: Win7

4. Edit Win8GPO

    1. Expand Computer Configuration -> Preferences -> Windows Settings
    2. Under Environment create an environment variable with
      1. Action: Create
      2. System Variable
      3. Name: OSVer
      4. Value: Win8

5. Set user profile paths to \\Server\ProfilesShare\%OSVer%\%username%\

 

Clients:

  1. Log on with administrative accounts first to confirm creation of the OSVer environment variable (a quick PowerShell check is sketched below)
  2. Log in as users and you'll observe that different user profiles are created in the appropriate folder in the profiles share depending on client OS
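To spot-check (or pre-stage) the variable outside of Group Policy, here is a quick sketch in PowerShell run on a client - it assumes you kept the OSVer name used in the GPOs above:

# Read the machine-level variable that the GPP item should have created
[Environment]::GetEnvironmentVariable("OSVer", "Machine")

# Or set it manually on a test machine (for example, a Windows 7 client)
[Environment]::SetEnvironmentVariable("OSVer", "Win7", "Machine")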

Conclusion

I haven't run into any issues in testing but this might be one of those cases where it's important to use "wait for network". My testing suggests that using "create" as the action on the environment variable mitigates any timing issues.  This is because after the environment variable is created for the machine, this variable persists across boots and doesn't depend on GPP re-application.

You may also wish to consider the use (and testing) of a folder redirection policy to provide users with their data as they cross between Windows 7 and Windows 8 clients. While I have tested this to work with “My Documents”, there may be varying degrees of success here depending on how Windows 8’s modern apps fiddle with things.

 - Mark “Square Peg in a Round Hole” Renoden

 

 

DFS Replication in Windows Server 2012 R2 and other goodies, now available on the Filecab blog!


Over at the Filecab blog, AskDS alum and all-around nice guy Ned Pyle has posted the first of several blogs about new features coming your way in Windows Server 2012 R2.  If you're a DFS administrator or just curious, go take a look!

Ned promises more posts in the near future, and Filecab is near and dear to our hearts here in DS (they make a bunch of things we support), so if you don't already have it on your RSS feed list, it might be a good time to add it.

MD5 Signature Hash Deprecation and Your Infrastructure


Hi everyone, David here with a quick announcement.

Yesterday, MSRC announced a timeframe for deprecation of built-in support for certificates that use the MD5 signature hash. You can find more information here:

http://blogs.technet.com/b/srd/archive/2013/08/13/cryptographic-improvements-in-microsoft-windows.aspx

Along with this announcement, we've released a framework which allows enterprises to test their environment for certificates that might be blocked as part of the upcoming changes (Microsoft Security Advisory 2862966). This framework also allows future deprecation of other weak cryptographic algorithms to be streamlined and managed via registry updates (pushed via Windows Update).

 

Some Technical Specifics:

This change affects certificates that are used for the following:

  • server authentication
  • code signing
  • time stamping

Other certificate usages that use the MD5 signature hash algorithm will NOT be blocked.

For code signing certificates, we will allow signed binaries that were signed before March 2009 to continue to work, even if the signing cert used MD5 signature hash algorithm.

Note:  Only certificates issued under a root CA in the Microsoft Root Certificate program are affected by this change.  Enterprise issued certificates are not affected (but should still be updated).

 

What this means for you:

1) If you're using certificates that have an MD5 signature hash (for example, if you have older web server certificates that used this hashing algorithm), you will need to update those certificates as soon as possible.  The update is planned to release in February 2014; make sure anything you have that is internet facing has been updated by then.

You can find out what signature hash was used on a certificate by simply pulling up the details of that certificate on any Windows 8 or Windows Server 2012 machine.  Look for the signature hash algorithm that was used. (The certificate in my screenshot uses sha1, but you will see md5 listed on certificates that use it).

If you are on Server Core or have an older OS, you can see the signature hash algorithm by using certutil -v against the certificate (a scripted check is sketched after this list).

2) Start double-checking your internal applications and certificates to ensure that you don't have something older that's using an MD5 hash.  If you find one, update it (or contact the vendor to have it updated).

3) Deploy KB 2862966 in your test and QA environments and use it to test for weaker hashes (You are using test and QA environments for your major applications, right?).  The update allows you to implement logging to see what would be affected by restricting a hash.  It's designed to allow you to get ahead of the curve and find the potential weak spots in your environment.
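As mentioned in item 1, a scripted check can save you from clicking through certificates one at a time. Here's a rough PowerShell sketch that lists the signature hash algorithm for everything in the local machine's Personal store (adjust the store path for other stores):

# List each certificate along with the algorithm used to sign it
Get-ChildItem Cert:\LocalMachine\My |
    Select-Object Subject, NotAfter, @{Name='SignatureAlgorithm';Expression={$_.SignatureAlgorithm.FriendlyName}} |
    Sort-Object SignatureAlgorithm

Anything showing md5RSA is a candidate for replacement.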

Sometimes security announcements like this can seem a little like overkill, but remember that your certificates are only as strong as the hashing algorithm used to sign them.  As computing power increases, older hashing algorithms become easier for attackers to crack, allowing them to more easily fool computers and applications into granting access or executing code.  We don't release updates like this lightly, so make sure you take the time to inspect your environments and fix the weak links, before some attacker out there tries to use them against you.

--David "Security is everyone's business" Beach


Important Announcement: AD FS 2.0 and MS13-066


Hi everyone, Adam and JR here with an important announcement.

We’re tracking an important issue in support where some customers who have installed security update MS13-066 on their AD FS 2.0 servers are experiencing authentication outages.  This is due to a dependency within the security update on certain versions of the AD FS 2.0 binaries.  Customers who are already running ADFS 2.0 RU3 before installing the update should not experience any issues.

We have temporarily suspended further downloads of this security update until we have resolved this issue for all ADFS 2.0 customers. 

Our Security and AD FS product team are working together to resolve this with their highest priority.  We’ll have more news for you soon in a follow-up post.  In the meantime, here is what we can tell you right now.

 

What to Watch For

If you have installed KB 2843638 or KB 2843639 on your AD FS server, you may notice the following symptoms:

  1. Federated sign-in fails for clients.
  2. Event ID 111 in the AD FS 2.0/Admin event log:

The Federation Service encountered an error while processing the WS-Trust request. 

Request type: http://schemas.xmlsoap.org/ws/2005/02/trust/RST/Issue 

Additional Data 

Exception details: 

System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.TypeLoadException: Could not load type 'Microsoft.IdentityModel.Protocols.XmlSignature.AsymmetricSignatureOperatorsDelegate' from assembly 'Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.


   at Microsoft.IdentityServer.Service.SecurityTokenService.MSISSecurityTokenService..ctor(SecurityTokenServiceConfiguration securityTokenServiceConfiguration)

   --- End of inner exception stack trace ---

   at System.RuntimeMethodHandle._InvokeConstructor(Object[] args, SignatureStruct& signature, IntPtr declaringType)

   at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)

   at System.RuntimeType.CreateInstanceImpl(BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture, Object[] activationAttributes)

   at Microsoft.IdentityModel.Configuration.SecurityTokenServiceConfiguration.CreateSecurityTokenService()

   at Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceContract.CreateSTS()

   at Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceContract.CreateDispatchContext(Message requestMessage, String requestAction, String responseAction, String trustNamespace, WSTrustRequestSerializer requestSerializer, WSTrustResponseSerializer responseSerializer, WSTrustSerializationContext serializationContext)

   at Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceContract.BeginProcessCore(Message requestMessage, WSTrustRequestSerializer requestSerializer, WSTrustResponseSerializer responseSerializer, String requestAction, String responseAction, String trustNamespace, AsyncCallback callback, Object state)

System.TypeLoadException: Could not load type 'Microsoft.IdentityModel.Protocols.XmlSignature.AsymmetricSignatureOperatorsDelegate' from assembly 'Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

   at Microsoft.IdentityServer.Service.SecurityTokenService.MSISSecurityTokenService..ctor(SecurityTokenServiceConfiguration securityTokenServiceConfiguration)

 

What to do if the problem occurs:

  1. Uninstall the hotfixes from your AD FS servers (see the example command after this list).
  2. Reboot any system where the hotfixes were removed.
  3. Check back here for further updates.
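For reference, the hotfixes can be removed from an elevated command prompt with wusa.exe, one command per installed KB (this is a sketch; use whichever of the two KB numbers is actually installed on your servers):

wusa /uninstall /kb:2843638
wusa /uninstall /kb:2843639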

We’ll update this blog post with more information as it becomes available, including links to any followup posts about this problem.

Locked or not? Demystifying the UI behavior for account lockouts


Hello Everyone,

This is Shijo from our team in Bangalore once again.  Today I’d like to briefly discuss account lockouts, and some UI behaviors that can trip admins up when dealing with account lockouts.

If you’ve ever had to troubleshoot an account lockout issue, you might have noticed that sometimes accounts appear to be locked on some domain controllers, but not on others.  This can be very confusing since you typically know that the account has been locked out, but when you inspect individual DCs, they don’t reflect that status.  This inconsistency happens because of some minor differences in the behavior of the UI between Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012.

Windows Server 2003

In Windows Server 2003 the "Account is locked out" checkbox can be cleared ONLY if the account is locked out on the domain controller you are connected to. This means that if an account has been locked out, but the local DC has not yet replicated that information, you CANNOT unlock the account on the local DC.

Windows 2003 account properties for an unlocked account.  Note that the checkbox is grayed out.

 

Windows Server 2008 and Windows Server 2008 R2

In Windows Server 2008/2008 R2 the "Unlock account" checkbox will always be available (regardless of the status of the account). You can tell whether the local DC knows if the account is locked out by looking at the label on the checkbox as shown in the screenshots below:

Windows 2008 account properties showing the “Unlock Account” checkbox.  Notice that the checkbox is available regardless of the status of the account on the local DC.

 

Windows 2008 (and higher) Account Properties dialog box showing locked account on this domain controller

 

If the label on the checkbox is just "Unlock account" then this means that the domain controller you are connected to recognizes the account as unlocked. This does NOT mean that the account is not locked on other DCs, just that the specific DC we're working with has not replicated a lockout status yet.  However, unlike Windows Server 2003, if the local DC doesn’t realize that the account is locked, you DO have ability to unlock it from this interface by checking the checkbox and applying the change.

We changed the UI behavior in Windows Server 2008 to help administrators in large environments unlock accounts faster when required, instead of having to wait for replication to occur, then unlock the account, and then wait for replication to occur again.

 

Windows Server 2012

We can also unlock accounts using the Active Directory Administrative Center (available in Windows Server 2008 R2 and later).  In Windows Server 2012, this console is the preferred method of managing accounts in Active Directory. The screenshots below show how to go about doing that.

You can see from the account screenshot that the account is locked, which is denoted by the padlock symbol. To unlock the account, click the “Unlock account” tab and you will see the symbol change, as shown below.

 

You can also unlock the account using the PowerShell command shown in the screenshot below.

 

In this example, I have unlocked the user account named test by simply specifying the DN of the account to unlock. You can modify your PowerShell command to incorporate many more switches, the details of which are covered in the following article.

http://technet.microsoft.com/en-us/library/ee617234.aspx
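Since the screenshot doesn't reproduce here, this is roughly what the cmdlet usage looks like - a sketch assuming the ActiveDirectory module is loaded and a user named test in the default Users container:

# Unlock a single account by its distinguished name
Unlock-ADAccount -Identity "CN=test,CN=Users,DC=contoso,DC=com"

# Or find every currently locked-out account in the domain first
Search-ADAccount -LockedOut | Select-Object Name, SamAccountName, DistinguishedName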

Hopefully this helps explain why the older operating systems behave slightly differently from the newer ones, and will help you the next time you have to deal with an account that is locked out in your environment! 

 

If you’re looking for more information on Account Lockouts, check out the following links:

Troubleshooting Account Lockout

Account Lockout Policy

Account Lockout Management Tools

 

Shijo “UNLOCK IT” Joy

Windows Server 2012 Shell game


Here's the scenario: you just downloaded the RTM ISO for Windows Server 2012 using your handy, dandy, "wondermus" Microsoft TechNet subscription. Using Hyper-V, you create a new virtual machine, mount the ISO and breeze through the setup screens until you are mesmerized by the Newton's cradle-like experience of the circular progress indicator.


Click…click…click…click-- installation complete; the computer reboots.

You provide Windows Server with a new administrator password. Bam: done! Windows Server 2012 presents the credential provider screen and you logon using the newly created administrator account, and then…

Holy Shell, Batman! I don't have a desktop!


Hey everyone, Mike here again to bestow some Windows Server 2012 lovin'. The previously described scenario is not hypothetical-- many have experienced it when they installed the pre-release versions of Windows Server 2012. And it is likely to resurface as we move past Windows Server 2012 general availability on September 4. If you are new to Windows Server 2012, then you're likely one of those people staring at a command prompt window on your fresh installation. The reason you are staring at a command prompt is that Windows Server 2012's installation defaults to Server Core and in your haste to try out our latest bits, you breezed right past the option to change it.

This may be old news for some of you, but it is likely that one or more of your colleagues is going to perform the very actions that I describe here. This is actually a fortunate circumstance as it enables me to introduce a new Windows Server 2012 feature.


There were two server installation types prior to Windows Server 2012: full and core. Core servers provide a low attack surface by removing the Windows Shell and Internet Explorer completely. However, it presented quite a challenge for many Windows administrators, as Windows PowerShell and command line utilities were the only methods available to manage the server and its roles locally (you could use most management consoles remotely).

Those same two server installation types return in Windows Server 2012; however, we have added a third installation type: Minimal Server Interface. Minimal Server Interface enables most local graphical user interface management tasks without requiring you to install the server's user interface or Internet Explorer. Minimal Server Interface is a full installation of Windows that excludes:

  • Internet Explorer
  • The Desktop
  • Windows Explorer
  • Windows 8-style application support
  • Multimedia support
  • Desktop Experience

Minimal Server Interface gives Windows administrators - those not comfortable using Windows PowerShell as their only option - the benefits of a reduced attack surface and fewer required reboots (on Patch Tuesday, for instance), yet still provides GUI management while they ramp up their Windows PowerShell skills.

clip_image008

"Okay, Minimal Server Interface seems cool Mike, but I'm stuck at the command prompt and I want graphical tools. Now what?" If you were running an earlier version of Windows Server, my answer would be reinstall. However, you're running Windows Server 2012; therefore, my answer is "Install the Server Graphical Shell or Install Minimal Server Interface."

Windows Server 2012 enables you to change the shell installation option after you've completed the installation. This solves the problem if you are staring at a command prompt. However, it also solves the problem if you want to keep your attack surface low, but simply are a Windows PowerShell guru in waiting. You can choose Minimal Server Interface, or you can decide to add the Server Graphical Shell for a specific task and then remove it when you have completed that management task (understand, however, that switching the Windows Shell on or off requires you to restart the server).

Another scenario solved by the ability to add the Server Graphical Shell is that not all server-based applications work correctly on Server Core, or you cannot manage them on Server Core. Windows Server 2012 enables you to try the application on Minimal Server Interface and, if that does not work, change the server installation to include the Graphical Shell, which is the equivalent of the Server GUI installation option during setup (the one you breezed by during the initial setup).

Removing the Server Graphical Shell and Graphical Management Tools and Infrastructure

Removing the Server shell from a GUI installation of Windows is amazingly easy. Start Server Manager, click Manage, and click Remove Roles and Features. Select the target server and then click Features. Expand User Interfaces and Infrastructure.

To reduce a Windows Server 2012 GUI installation to a Minimal Server Interface installation, clear the Server Graphical Shell checkbox and complete the wizard. To reduce a Windows Server GUI installation to a Server Core installation, clear the Server Graphical Shell and Graphical Management Tools and Infrastructure check boxes and complete the wizard.

clip_image010

Alternatively, you can perform these same actions using the Server Manager module for Windows PowerShell, and it is probably a good idea to learn how to do this. I'll give you two reasons why: It's wicked fast to install and remove features and roles using Windows PowerShell and you need to learn it in order to add the Server Shell on a Windows Core or Minimal Server Interface installation.

Use the following command to view a list of the Server GUI components

clip_image011

Get-WindowsFeature server-gui*

Give your attention to the Name column. You use this value with the Remove-WindowsFeature and Install-WindowsFeature PowerShell cmdlets.

To remove the server graphical shell, which reduces the GUI server installation to a Minimal Server Interface installation, run:

Remove-WindowsFeature Server-Gui-Shell

To remove the Graphical Management Tools and Infrastructure, which further reduces a Minimal Server Interface installation to a Server Core installation, run:

Remove-WindowsFeature Server-Gui-Mgmt-Infra

To remove the Graphical Management Tools and Infrastructure and the Server Graphical Shell, run:

Remove-WindowsFeature Server-Gui-Shell,Server-Gui-Mgmt-Infra

Adding Server Graphical Shell and Graphical Management Tools and Infrastructure

Adding Server Shell components to a Windows Server 2012 Core installation is a tad more involved than removing them. The first thing to understand with a Server Core installation is that the actual binaries for the Server Shell do not reside on the computer. This is how a Server Core installation achieves a smaller footprint. You can determine if the binaries are present by using the Get-WindowsFeature Windows PowerShell cmdlet and viewing the Install State column. The Removed value indicates the binaries that represent the feature do not reside on the hard drive. Therefore, you need to add the binaries to the installation before you can install the feature. Another indicator that the binaries do not exist in the installation is the error you receive when you try to install a feature that is removed. The Install-WindowsFeature cmdlet will proceed along as if it is working and then spend a lot of time around 63-68 percent before returning an error stating that it could not add the feature.

clip_image015

To stage Server Shell features to a Windows Core Installation

You need to get out your handy, dandy media (or ISO) to stage the binaries into the installation. Windows installation files are stored in WIM files that are located in the \sources folder of your media. There are two .WIM files on the media. The WIM you want to use for this process is INSTALL.WIM.

clip_image017

You use DISM.EXE to display the installation images and their indexes that are included in the WIM file. There are four images in the INSTALL.WIM file. Images with the indexes 1 and 3 are Server Core installation images for Standard and Datacenter, respectively. Images with the indexes 2 and 4 are GUI installations of Standard and Datacenter, respectively. Two of these images contain the GUI binaries and two do not. To stage these binaries to the current installation, you need to use index 2 or 4 because these images contain the Server GUI binaries. An attempt to stage the binaries using index 1 or 3 will fail.
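For example, assuming the installation media is mounted as drive D:, a command along these lines lists the images and their index numbers:

Dism /Get-WimInfo /WimFile:D:\sources\install.wim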

You still use the Install-WindowsFeature cmdlet to stage the binaries to the computer; however, we are going to use the -source argument to tell Install-WindowsFeature which image and index it should use to stage the Server Shell binaries. To do this, we use a special path syntax that indicates the binaries reside in a WIM file. The Windows PowerShell command should look like this:

Install-WindowsFeature server-gui-mgmt-infra,server-gui-shell -source:wim:d:\sources\install.wim:4

Pay particular attention to the path supplied to the -source argument. You need to prefix the path to your installation media's install.wim file with the keyword wim: and suffix the path with :4, which represents the image index to use for the installation. You must always use an index of 2 or 4 to install the Server Shell components. The command should exhibit the same behavior as the previous one and proceed up to about 68 percent, at which point it will stay at 68 percent for quite a bit (if it is working). Typically, if there is a problem with the syntax or the command, it will error within two minutes of spinning at 68 percent. This process stages all the graphical user interface binaries that were not installed during the initial setup, so give it a bit of time. When the command completes successfully, it should instruct you to restart the server. You can do this using Windows PowerShell by typing the Restart-Computer cmdlet.

clip_image019

Give the next reboot more time. It is actually updating the current Windows installation, making all the other components aware the GUI is available. The server should reboot and inform you that it is configuring Windows features and is likely to spend some time at 15 percent. Be patient and give it time to complete. Windows should reach about 30 percent and then will restart.

clip_image021

It should return to the Configuring Windows features screen with the progress around 45 to 50 percent (these are estimates). The process should continue until 100 percent and then show you the Press Ctrl+Alt+Delete to sign in screen.

clip_image023

Done

That's it. Consider yourself informed. The next time one of your colleagues gazes at their accidental Windows Server 2012 Server Core installation with that deer-in-the-headlights look, you can whip out your mad Windows PowerShell skills and turn that Server Core installation into a Minimal Server Interface or Server GUI installation in no time.

Mike

"Voilà! In view, a humble vaudevillian veteran, cast vicariously as both victim and villain by the vicissitudes of Fate. This visage, no mere veneer of vanity, is a vestige of the vox populi, now vacant, vanished. However, this valorous visitation of a by-gone vexation, stands vivified and has vowed to vanquish these venal and virulent vermin van-guarding vice and vouchsafing the violently vicious and voracious violation of volition. The only verdict is vengeance; a vendetta, held as a votive, not in vain, for the value and veracity of such shall one day vindicate the vigilant and the virtuous. Verily, this vichyssoise of verbiage veers most verbose, so let me simply add that it's my very good honor to meet you and you may call me V."

Stephens

AD FS 2.0 RelayState


Hi guys, Joji Oshima here again with some great news! AD FS 2.0 Rollup 2 adds the capability to send RelayState when using IDP initiated sign on. I imagine some people are ecstatic to hear this while others are asking “What is this and why should I care?”

What is RelayState and why should I care?

There are two protocol standards for federation (SAML and WS-Federation). RelayState is a parameter of the SAML protocol that is used to identify the specific resource the user will access after they are signed in and directed to the relying party’s federation server.
Note:

If the relying party is the application itself, you can use the loginToRp parameter instead.
Example:
https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx?loginToRp=rpidentifier

Without the use of any parameters, a user would need to go to the IDP initiated sign on page, log in to the server, choose the relying party, and then be directed to the application. Using RelayState can automate this process by generating a single URL for the user to click and be logged in to the target application without any intervention. It should be noted that when using RelayState, any parameters outside of it will be dropped.

When can I use RelayState?

We can pass RelayState when working with a relying party that has a SAML endpoint. It does not work when the direct relying party is using WS-Federation.

The following IDP initiated flows are supported when using Rollup 2 for AD FS 2.0:

  • Identity provider security token server (STS) -> relying party STS (configured as a SAML-P endpoint) -> SAML relying party App
  • Identity provider STS -> relying party STS (configured as a SAML-P endpoint) -> WIF (WS-Fed) relying party App
  • Identity provider STS -> SAML relying party App

The following initiated flow is not supported:

  • Identity provider STS -> WIF (WS-Fed) relying party App

Manually Generating the RelayState URL

There are two pieces of information you need to generate the RelayState URL. The first is the relying party’s identifier. This can be found in the AD FS 2.0 Management Console. View the Identifiers tab on the relying party’s property page.

image

The second part is the actual RelayState value that you wish to send to the Relying Party. It could be the identifier of the application, but the administrator for the Relying Party should have this information. In this example, we will use the Relying Party identifier of https://sso.adatum.com and the RelayState of https://webapp.adatum.com

Starting values:
RPID: https://sso.adatum.com
RelayState: https://webapp.adatum.com

Step 1: The first step is to URL Encode each value.

RPID: https%3a%2f%2fsso.adatum.com
RelayState: https%3a%2f%2fwebapp.adatum.com

Step 2: The second step is to take these URL Encoded values, merge them with the string below, and URL Encode the resulting string.

String:
RPID=<URL encoded RPID>&RelayState=<URL encoded RelayState>

String with values:
RPID=https%3a%2f%2fsso.adatum.com&RelayState=https%3a%2f%2fwebapp.adatum.com

URL Encoded string:
RPID%3dhttps%253a%252f%252fsso.adatum.com%26RelayState%3dhttps%253a%252f%252fwebapp.adatum.com

Step 3: The third step is to take the URL Encoded string and add it to the end of the string below.

String:
?RelayState=

String with value:
?RelayState=RPID%3dhttps%253a%252f%252fsso.adatum.com%26RelayState%3dhttps%253a%252f%252fwebapp.adatum.com

Step 4: The final step is to take the final string and append it to the IDP initiated sign on URL.

IDP initiated sign on URL:
https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx

Final URL:
https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx?RelayState=RPID%3dhttps%253a%252f%252fsso.adatum.com%26RelayState%3dhttps%253a%252f%252fwebapp.adatum.com

The result is an IDP initiated sign on URL that tells AD FS which relying party STS the login is for, and also gives that relying party information that it can use to direct the user to the correct application.
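If you would rather script these steps than build the string by hand, here is a minimal Windows PowerShell sketch that performs the same encode-and-append steps using the example values above (adjust the three starting values for your own environment):

# Starting values from the example above
$signOnUrl  = "https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx"
$rpId       = "https://sso.adatum.com"
$relayState = "https://webapp.adatum.com"

# Step 1: URL encode each value
$encodedRpId  = [uri]::EscapeDataString($rpId)
$encodedRelay = [uri]::EscapeDataString($relayState)

# Step 2: merge the encoded values and URL encode the combined string
$inner = [uri]::EscapeDataString("RPID=$encodedRpId&RelayState=$encodedRelay")

# Steps 3 and 4: append ?RelayState= and the encoded string to the sign on URL
"${signOnUrl}?RelayState=$inner"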

image

Is there an easier way?

The multi-step process and manual manipulation of the strings are prone to human error which can cause confusion and frustration. Using a simple HTML file, we can fill out the starting information into a form and click the Generate URL button.

image

The code sample for this HTML file has been posted to CodePlex.

Conclusion and Links

I hope this post has helped demystify RelayState and will have everyone up and running quickly.

AD FS 2.0 RelayState Generator
http://social.technet.microsoft.com/wiki/contents/articles/13172.ad-fs-2-0-relaystate-generator.aspx
HTML Download
https://adfsrelaystate.codeplex.com/

AD FS 2.0 Rollup 2
http://support.microsoft.com/kb/2681584

Supporting Identity Provider Initiated RelayState
http://technet.microsoft.com/en-us/library/jj127245(WS.10).aspx

Joji "Halt! Who goes there!" Oshima

So long and thanks for all the fish


My time is up.

It’s been eight years since a friend suggested I join him on a contract at Microsoft Support (thanks Pete). Eight years since I sat sweating in an interview with Steve Taylor, trying desperately to recall the KDC’s listening port (his hint: “German anti-tank gun”). Eight years since I joined 35 new colleagues in a training room and found that despite my opinion, I knew nothing about Active Directory (“Replication of Absent Linked Object References– what the hell have I gotten myself into?”).

Eight years later, I’m a Senior Support Escalation Engineer, a blogger of some repute, and a seasoned world traveler who instructs other ‘softies about Windows releases. I’ve created thousands of pages of content and been involved in countless support cases and customer conversations. I am the last of those 35 colleagues still here, but there is proof of my existence even so. It’s been the most satisfactory work of my career.

Just the thought of leaving was scary enough to give me pause – it’s been so long since I knew anything but supporting Windows. It’s a once in a lifetime opportunity though and sometimes you need to reset your career. Now I’ll help create the next generations of Windows Server and the buck will finally stop with me: I’ve been hired as a Program Manager and am on my way to Seattle next week. I’m not leaving Microsoft, just starting a new phase. A phase with a lot more product development, design responsibility, and… meetings. Soooo many meetings.

There are two types of folks I am going to miss: the first are workmates. Many are support engineers, but also PFEs, Consultants, and TAMs. Even foreigners! Interesting and funny people fill Premier and Commercial Technical Support and make every day here enjoyable, even after the occasional customer assault. There’s nothing like a work environment where you really like your colleagues. I’ve sat next to Dave Fisher since 2004 and he’s made me laugh every single day. He is a brilliant weirdo, like so many other great people here. You all know who you are.

The other folks are… you. Your comments stayed thought provoking and fresh for five years and 700 posts. Your emails kept me knee deep in mail sacks and articles (I had to learn in order to answer many of them). Your readership has made AskDS into one of the most popular blogs in Microsoft. You unknowingly played an immense part in my career, forcing me to improve my communication; there’s nothing like a few hundred thousand readers to make you learn your craft.

My time as the so-called “editor in chief” of AskDS is over, but I imagine you will still find me on the Internet in my new role, yammering about things that I think you’ll find interesting. I also have a few posts in the chamber that Jonathan or Mike will unload after I’m gone, and they will keep the site going. AskDS will continue to be a place for unvarnished support information about Windows technologies, where your questions will get answers.

Thanks for everything, and see you again soon.

image
We are looking forward to Seattle’s famous mud puddles

 

- Ned “42” Pyle

Digging a little deeper into Windows 8 Primary Computer


[This is a ghost of Ned past article – Editor]

Hi folks, Ned here again to talk more about the Primary Computer feature introduced in Windows 8. Sharp-eyed readers may have noticed this lonely beta blog post and if you just want a step-by-step guide to enabling this feature, TechNet does it best. Today I am going to fill in some blanks and make sure the feature's architecture and usefulness is clear. At least, I'm going to try.

Onward!

Backgrounder and Requirements

Businesses using Roaming User Profiles, Offline Files and Folder Redirection have historically been limited in controlling which computers cache user data. For instance, while there are group policies to assign roaming profiles on a per-computer basis, they affect all users of that computer and are useless if you assign roaming profiles through legacy user attributes.

Windows 8 introduces a pair of new per-user AD DS attributes to specify a "primary computer." The primary computer is the one directly assigned to a user - such as their laptop, or a desktop in their cubicle - and therefore unlikely to change frequently. We refer to this as "User-Device Affinity". That computer will allow them to store roaming user data or access redirected folder data, as well as allow caching of redirected data through offline files. There are three main benefits to using Primary Computer:

  1. When a user is at a kiosk, using a conference room PC, or connecting to the network from a home computer, there is no risk that confidential user data will cache locally and be accessible offline. This adds a measure of security.
  2. Unlike previous operating systems, an administrator now has the ability to control computers that will not cache data, regardless of the user's AD DS profile configuration settings.
  3. The initial download of a profile has a noticeable impact on logon performance; a brand new Windows 8 user profile is ~68MB in size, and that's before it's filled with "Winter is coming" meme pics. Since a roaming profile and folder redirection no longer synchronously cache data on the computer during logon, a user connecting from a temporary or home machine logs on considerably faster.

By assigning computer(s) to a user then applying some group policies, you ensure data only roams or caches where you want it.


Yoink, stolen screenshot from a much better artist

Primary Computer has the following requirements:

  • Windows 8 or Windows Server 2012 computers used for interactive logon
  • Windows Server 2012 AD DS Schema (but not necessarily Win2012 DCs)
  • Group Policy managed from Windows 8 or Windows Server 2012 GPMC
  • Some mechanism to determine each user's primary computer(s)

Determining Primary Computers

There is no attribute in Active Directory that tracks which computers a user logs on to, much less the computers they log on to the most frequently. There are a number of out of band options to determine computer usage though:

  • System Center Configuration Manager - SCCM has built in functionality to determine the primary users of computers, as part of its "Asset Intelligence" reporting. You can read more about this feature in SCCM 2012 and 2007 R2. This is the recommended method as it's the most comprehensive and because I like money.
  • Collecting 4624 events - the Security event log Logon Event 4624 with a Logon Type 2 delineates where a user logged on interactively. By collecting these events using some type of audit collection service or event forwarding, you can build up a picture of which users are logging on to which computers repeatedly (a query sketch follows this list).

     

     

  • Logon Script– If you're the fancy type, you can create a logon script that writes a user's computer to a centralized location, such as on their own AD object. If you grant inherited access for SELF to update (for instance) the Comment attribute on all the user objects, each user could use that attribute as storage. Then you can collect the results for a few weeks and create a list of computer usage by user.

    For example, this rather hokey illustration VBS runs as a logon script and updates a user's own Comment attribute with their computer's distinguished name, only if it has changed from the previous value:

    ' Bind to the current user and computer objects via ADSystemInfo
    Set objSysInfo = CreateObject("ADSystemInfo")
    Set objUser = GetObject("LDAP://" & objSysInfo.UserName)
    Set objComputer = GetObject("LDAP://" & objSysInfo.ComputerName)

    ' If the Comment attribute already holds this computer's DN, do nothing
    strMessage = objComputer.distinguishedName
    If objUser.Comment = strMessage Then WScript.Quit

    ' Otherwise record the computer's DN on the user object
    objUser.Comment = strMessage
    objUser.SetInfo

    

A user may have more than one computer they log on to regularly, though, and if that's the case, an AD attribute-based storage solution is probably not the right answer unless the script builds a circular list with a restricted number of entries and logic to ensure it does not update with redundant data. Otherwise, there could be excessive AD replication. Remember, this is just a simple example to get the creative juices flowing.

  • PsLoggedOn - you can script and run PsLoggedOn.exe (a Windows Sysinternals tool) periodically during the day for all computers over the course of several weeks. That would build, over time, a list of which users frequent which computers. This requires remote registry access through the Windows Firewall.
  • Third parties - there are SCCM/SCOM-like vendors providing this functionality. I don't have details but I'm sure they have a salesman who wants a new German sports sedan and will be happy to bend your ear.
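To get started with the 4624 collection approach mentioned above, here is a minimal sketch that parses interactive logons out of a Security log. It assumes the log you query (local here) actually contains the forwarded or local 4624 events you care about, and it reads the event's named data fields rather than relying on positional property indexes:

# List interactive logons (Event ID 4624, LogonType 2) from the Security log
Get-WinEvent -FilterHashtable @{LogName='Security'; Id=4624} -MaxEvents 1000 |
    ForEach-Object {
        $xml  = [xml]$_.ToXml()
        $data = @{}
        foreach ($d in $xml.Event.EventData.Data) { $data[$d.Name] = $d.'#text' }
        if ($data['LogonType'] -eq '2') {
            [pscustomobject]@{
                Time     = $_.TimeCreated
                User     = "$($data['TargetDomainName'])\$($data['TargetUserName'])"
                Computer = $_.MachineName
            }
        }
    }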

Setting the Primary Computer

As I mentioned before, look at TechNet for some DSAC step-by-step for setting the msDS-PrimaryComputer attribute and the necessary group policies. However, if you want to use native Windows PowerShell instead of our interesting out of band module, here are some more juice-flow inducing samples.

The ActiveDirectory Windows PowerShell module get-adcomputer and set-aduser cmdlets allow you to easily retrieve a computer's distinguished name and assign it to the user's primary computer attribute. You can use assigned variables for readability, or with nested functions for simplicity.

Variable

<$variable> = get-adcomputer <computer name>

Set-aduser <user name> -add @{'msDS-PrimaryComputer'="<$variable>"}

For example, with a computer named cli1 and a user name stduser:
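A sketch of what that looks like with those names, assuming the ActiveDirectory module is loaded; the computer object is retrieved first and its distinguished name written to the user:

$computer = Get-ADComputer cli1
Set-ADUser stduser -Add @{'msDS-PrimaryComputer'=$computer.DistinguishedName}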

Nested

Set-aduser <user name> -add @{'msDS-PrimaryComputer'=(get-adcomputer <computer name>).distinguishedname}

For example, with that same user and computer:
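The same assignment as a single nested line (again, a sketch using the cli1 and stduser example names):

Set-ADUser stduser -Add @{'msDS-PrimaryComputer'=(Get-ADComputer cli1).DistinguishedName}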

Other techniques

If you use AD DS to store the user's last computer in their Comment attribute as part of a logon script - like described in the earlier section - here is an example that reads the stduser attribute Comment and assigns primary computer based on the contents:
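A minimal sketch, assuming the Comment attribute holds the computer's distinguished name as written by the logon script example above:

$user = Get-ADUser stduser -Properties Comment
Set-ADUser stduser -Add @{'msDS-PrimaryComputer'=$user.Comment}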

If you wanted to assign primary computers to all of the users within the Foo OU based on their comment attributes, you could use this example:
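One way to sketch that, using a hypothetical OU of OU=Foo,DC=fabrikam,DC=com (adjust for your domain) and skipping users with an empty Comment:

Get-ADUser -SearchBase "OU=Foo,DC=fabrikam,DC=com" -Filter * -Properties Comment |
    Where-Object { $_.Comment } |
    ForEach-Object { Set-ADUser $_ -Add @{'msDS-PrimaryComputer'=$_.Comment} }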

If you have a CSV file that contains the user accounts and their assigned computers as DNs, you can use the import-csv cmdlet to update the users. For example:
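A sketch assuming a hypothetical file named users.csv with two columns, User and ComputerDN:

# users.csv (hypothetical):
#   User,ComputerDN
#   stduser,"CN=CLI1,OU=Workstations,DC=fabrikam,DC=com"
Import-Csv .\users.csv | ForEach-Object {
    Set-ADUser $_.User -Add @{'msDS-PrimaryComputer'=$_.ComputerDN}
}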

This is particularly useful when you have some asset history and assign certain users specific computers. Certainly a good idea for insurance and theft prevention purposes, regardless.

Cached Data Clearing GP

Enabling Primary Computer does not remove any data already cached on other computers that a user does not access again. That is, if a user was already using Roaming User Profiles or Folder Redirection (which, by default, automatically adds all redirected shell folders to the Offline Files cache), enabling Primary Computer means only that further data is not copied locally to non-approved computers.

In the case of Roaming User Profiles, several policies can clear data from computers at logoff or restart:

  • Delete user profiles older than a specified number of days on system restart - this deletes unused profiles after N days when a computer reboots
  • Delete cached copies of roaming profiles - this removes locally saved roaming profiles once a user logs off. This policy would also apply to Primary Computers and should be used with caution

In the case of Folder Redirection and Offline Files, there is no specific policy to clear out stale data or delete cached data at logoff like there is for RUP, but that's immaterial:

  • When a computer needs to remove FR after becoming "non-primary" - due to the Primary Computer feature either being enabled or the machine being removed from the primary computer list for the user - the removal behavior depends on how the FR policy is configured to behave on removal. It can be configured to either:
    • Redirect the folder back to the local profile – the folder location is set back to the default location in the user's profile (e.g., c:\users\%USERNAME%\Documents), the data is copied from the file server to the local profile, and the file server location is unpinned from the computer's Offline Files cache
    • Leave the folder pointing to the file server – the folder location still points to the file server location, but the contents are unpinned from the computer's Offline Files cache. The folder configuration is no longer controlled through policy

In both cases, once the data is unpinned from the Offline Files cache, it is evicted from the computer in the background after 15 minutes.

Logging Primary Computer Usage

To see that the Download roaming profiles on primary computers only policy took effect and the behavior at each user logon, examine the User Profile Service operational event log for Event 63. This will state either "This computer is a primary computer for this user" or "This computer is not a primary computer for this user":

The new User Profile Service events for Primary Computer are all in the Operational event log:

Event ID: 62
Severity: Warning
Message: Windows was unable to successfully evaluate whether this computer is a primary computer for this user. This may be due to failing to access the Active Directory server at this time. The user's roaming profile will be applied as configured. Contact the Administrator for more assistance. Error: %1
Notes and resolution: Indicates an issue contacting LDAP on a domain controller. Examine the extended error, examine System and Application event logs for further details, consider getting a network capture if still unclear

 

Event ID: 63
Severity: Informational
Message: This computer %1 a primary computer for this user
Notes and resolution: This event's variable will change from "IS" to "IS NOT" depending on circumstances. It is not an error condition unless this is unexpected to the administrator. A customer should interrogate the rest of the IT staff on the network if not expecting to see these events

 

Event ID: 64
Severity: Informational
Message: The primary computer relationship for this computer and this user was not evaluated due to %1
Notes and resolution: Examine the extended error for details.

 

To see that the Redirect folders on primary computers only policy took effect and the behavior at each user logon, examine the Folder Redirection operational event log for Event 1010. This will state "This computer is not a primary computer for this user", or the corresponding message if it is (good catch, Johan from Comments).

Architecture

Windows 8 implements Primary Computer through two new AD DS attributes in the Windows Server 2012 (version 56) Schema.

Primary Computer is a client-side feature; no matter what you configure in Active Directory or group policy on domain controllers, Windows 7, Windows Server 2008 R2, and older family computers will not obey the settings.

AD DS Schema

  • msDS-PrimaryComputer - The primary computers assigned to a user or a security group containing users. Contains multi-valued, linked-value distinguished names that reference the msDS-isPrimaryComputerFor backlink on a computer object.
  • msDS-isPrimaryComputerFor - The users assigned to a computer account. Contains multi-valued, linked-value distinguished names that reference the msDS-PrimaryComputer forward link on a user object.

 

Processing

The processing of this new functionality is:

  1. Look at the Group Policy setting to determine if the msDS-PrimaryComputer attribute in Active Directory should influence the decision to roam the user's profile or apply Folder Redirection.
  2. If step 1 is TRUE, initialize an LDAP connection and bind to a domain controller
  3. Check for the required schema version
  4. Query for the "msDS-IsPrimaryComputerFor" attribute on the AD object representing the current computer
  5. Check to see if the current user is in the list returned by this attribute or in the group returned by this attribute and if so, return TRUE for IsPrimaryComputerForUser. If no match is found, return FALSE for IsPrimaryComputerForUser
  6. If step 5 is FALSE:
    1. For RUP, an existing cached local profile should be used if present. If there is no local profile for the user, a new local profile should be created
    2. For FR, if Folder Redirection previously applied, the Folder Redirection configuration is removed according to the removal action specified by the previously applied policy (this is retained in the local FR configuration). If there is no current FR configuration, there is no work to be done

Troubleshooting

Because this feature is both new and simple, most troubleshooting is likely to follow this basic workflow when Primary Computer is not working as expected:

  1. User assigned the correct computer distinguished name (or in the security group assigned the computer DN)
  2. AD DS replication has converged for the user and computer objects
  3. AD DS and SYSVOL replication has converged for the Primary Computer group policies
  4. Primary Computer group policies applying to the computer
  5. User has logged off and on since the Primary Computer policies applied
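To verify the first two items in the list above, the assigned values can be read straight from the directory. A quick sketch, using the example names from earlier and assuming the ActiveDirectory module:

# Forward link: which computer DN(s) are assigned to the user?
Get-ADUser stduser -Properties 'msDS-PrimaryComputer' |
    Select-Object -ExpandProperty 'msDS-PrimaryComputer'

# Backlink: which users list this computer as primary?
Get-ADComputer cli1 -Properties 'msDS-isPrimaryComputerFor' |
    Select-Object -ExpandProperty 'msDS-isPrimaryComputerFor'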

The logs of note for troubleshooting Primary Computer are:

  • Gpresult/GPMC RSoP Report - Validates that Primary Computer policy is applying to the computer or user
  • Group Policy operational event log - Validates that group policy in general is applying to the computer or user with specific details
  • System event log - Validates that group policy in general is applying to the computer or user with generalities
  • Application event log - Validates that Folder Redirection and Roaming User Profiles are working with generalities and specific details
  • Folder Redirection operational event log - Validates that Folder Redirection is working with specific details
  • User Profile Service operational event log - Validates that Roaming User Profiles is working with specific details
  • Fdeploy.log - Validates that Folder Redirection is working with specific details

 

Cases reported by your users or help desk as Primary Computer processing issues are more likely to be AD DS replication, SYSVOL replication, group policy, folder redirection, or roaming user profile issues. Determine immediately if Primary Computer is at all to blame, then move on to the more likely historical culprits. Watch for red herrings!

Likewise, your company may not be internally aware of Primary Computer deployments and may send you down a rat hole troubleshooting expected behavior. Always ensure that a "problem" with folder redirection or roaming user profiles isn't just another group within the customer's company configuring Primary Computer and not telling you (this applies to you too; send a memo, dangit!).

Have fun.

Ned "shouldn't we have called it 'Primary Computers?'" Pyle

....And knowing is half the battle!


ADAMSync 101


Hi Everyone, Kim Nichols here again, and this time I have an introduction to ADAMSync. I take a lot of cases on ADAM and AD LDS and have seen a number of problems arise from less than optimally configured ADAMSync XML files. There are many sources of information on ADAM/AD LDS and ADAMSync (I'll include links at the end), but I still receive lots of questions and cases on configuring ADAM/AD LDS for ADAMSync.

We'll start at the beginning and talk about what ADAM/AD LDS is, what ADAMSync is and then finally how you can get AD LDS and ADAMSync working in your environment.

What is ADAM/AD LDS?

ADAM (Active Directory Application Mode) is the 2003 name for AD LDS (Active Directory Lightweight Directory Services). AD LDS is, as the name describes, a lightweight version of Active Directory. It gives you the capabilities of a multi-master LDAP directory that supports replication without some of the extraneous features of an Active Directory domain controller (domains and forests, Kerberos, trusts, etc.). AD LDS is used in situations where you need an LDAP directory but don't want the administration overhead of AD. Usually it's used with web applications or SQL databases for authentication. Its schema can also be fully customized without impacting the AD schema.

AD LDS uses the concept of instances, similar to that of instances in SQL. What this means is one AD LDS server can run multiple AD LDS instances (databases). This is another differentiator from Active Directory: a domain controller can only be a domain controller for one domain. In AD LDS, each instance runs on a different set of ports. The default instance of AD LDS listens on 389 (similar to AD).

Here's some more information on AD LDS if you're new to it:

What is ADAMSync?

In many scenarios, you may want to store user data in AD LDS that you can't or don't want to store in AD. Your application will point to the AD LDS instance for this data, but you probably don't want to manually create all of these users in AD LDS when they already exist in AD. If you have Forefront Identity Manager (FIM), you can use it to synchronize the users from AD into AD LDS and then manually populate the AD LDS specific attributes through LDP, ADSIEdit, or a custom or 3rd party application. If you don't have FIM, however, you can use ADAMSync to synchronize data from your Active Directory to AD LDS.

It is important to remember that ADAMSync DOES NOT synchronize user passwords! If you want the AD LDS user account to use the same password as the AD user, then userproxy transformation is what you need. (That's a topic for another day, though. I'll include links at the end for userproxy.)

ADAMSync uses an XML file that defines which data will synchronize from AD to AD LDS. The XML file includes the AD partition from which to synchronize, the object types (classes or categories), and attributes to synchronize. This file is loaded into the AD LDS database and used during ADAMSync synchronization. Every time you make changes to the XML file, you must reload the XML file into the database.

In order for ADAMSync to work:

  1. The MS-AdamSyncMetadata.LDF file must be imported into the schema of the AD LDS instance prior to attempting to install the XML file. This LDF creates the classes and attributes for storing the ADAMSync.xml file.
  2. The schema of the AD LDS instance must already contain all of the object classes and attributes that you will be syncing from AD to AD LDS. In other words, you can't sync a user object from AD to AD LDS unless the AD LDS schema contains the User class and all of the attributes that you specify in the ADAMSync XML (we'll talk more about this next). There is a blog post on using ADSchemaAnalyzer to compare the AD schema to the AD LDS schema and export the differences to an LDF file that can be imported into AD LDS.
  3. Unless you plan on modifying the schema of the AD LDS instance, your instance should be named DC=<partition name>, DC=<com or local or whatever> and not CN=<partition name>. Unfortunately, the example in the AD LDS setup wizard uses CN= for the partition name.  If you are going to be using ADAMSync, you should disregard that example and use DC= instead.  The reason behind this change is that the default schema does not allow an organizationalUnit (OU) object to have a parent object of the Container (CN) class. Since you will be synchronizing OUs from AD to AD LDS and they will need to be child objects of your application partition head, you will run into problems if your application partition is named CN=.




    Obviously, this limitation is something you can change in the AD LDS schema, but simply naming your partition with a DC= name component will eliminate the need to make such a change. In addition, you won't have to remember that you made a change to the schema in the future.

The best advice I can give regarding ADAMSync is to keep it as simple as possible to start off with. The goal should be to get a basic XML file that you know will work, gradually add attributes to it, and troubleshoot issues one at a time. If you try to do too much (too wide an object filter or too many attributes) in the XML from the beginning, you will likely run into multiple issues and not know where to begin troubleshooting.

KEEP IT SIMPLE!!!

MS-AdamSyncConf.xml

Let's take a look at the default XML file that Microsoft provides and go through some recommendations to make it more efficient and less prone to issues. The file is named MS-AdamSyncConf.XML and is typically located in the %windir%\ADAM directory.

<?xml version="1.0"?>
<doc>
<configuration>
<description>sample Adamsync configuration file</description>
<security-mode>object</security-mode>
<source-ad-name>fabrikam.com</source-ad-name> <------ 1
<source-ad-partition>dc=fabrikam,dc=com</source-ad-partition> <------ 2
<source-ad-account></source-ad-account> <------ 3
<account-domain></account-domain> <------ 4
<target-dn>dc=fabrikam,dc=com</target-dn> <------ 5
<query>
<base-dn>dc=fabrikam,dc=com</base-dn> <------ 6
<object-filter>(objectClass=*)</object-filter> <------ 7
<attributes> <------ 8
<include></include>
<exclude>extensionName</exclude>
<exclude>displayNamePrintable</exclude>
<exclude>flags</exclude>
<exclude>isPrivelegeHolder</exclude>
<exclude>msCom-UserLink</exclude>
<exclude>msCom-PartitionSetLink</exclude>
<exclude>reports</exclude>
<exclude>serviceprincipalname</exclude>
<exclude>accountExpires</exclude>
<exclude>adminCount</exclude>
<exclude>primarygroupid</exclude>
<exclude>userAccountControl</exclude>
<exclude>codePage</exclude>
<exclude>countryCode</exclude>
<exclude>logonhours</exclude>
<exclude>lockoutTime</exclude>
</attributes>
</query>
<schedule>
<aging>
<frequency>0</frequency>
<num-objects>0</num-objects>
</aging>
<schtasks-cmd></schtasks-cmd>
</schedule> <------ 9
</configuration>
<synchronizer-state>
<dirsync-cookie></dirsync-cookie>
<status></status>
<authoritative-adam-instance></authoritative-adam-instance>
<configuration-file-guid></configuration-file-guid>
<last-sync-attempt-time></last-sync-attempt-time>
<last-sync-success-time></last-sync-success-time>
<last-sync-error-time></last-sync-error-time>
<last-sync-error-string></last-sync-error-string>
<consecutive-sync-failures></consecutive-sync-failures>
<user-credentials></user-credentials>
<runs-since-last-object-update></runs-since-last-object-update>
<runs-since-last-full-sync></runs-since-last-full-sync>
</synchronizer-state>
</doc>

Let's go through the default XML file by number and talk about what each section does, why the defaults are what they are, and what I typically recommend when working with customers.

  1. <source-ad-name>fabrikam.com</source-ad-name> 

    Replace fabrikam.com with the FQDN of the domain/forest that will be your synchronization source

  2. <source-ad-partition>dc=fabrikam,dc=com</source-ad-partition> 

    Replace dc=fabrikam,dc=com with the DN of the AD partition that will be the source for the synchronization

  3. <source-ad-account></source-ad-account> 

    Contains the account that will be used to authenticate to the source forest/domain. If left empty, the credentials of the logged on user will be used

  4. <account-domain></account-domain> 

    Contains the domain name to use for authentication to the source domain/forest. This element combined with <source-ad-account> make up the domain\username that will be used to authenticate to the source domain/forest. If left empty, the domain of the logged on user will be used.

  5. <target-dn>dc=fabrikam,dc=com</target-dn>

    Replace dc=fabrikam,dc=com with the DN of the AD LDS partition you will be synchronizing to.

    NOTE: In 2003 ADAM, you were able to specify a sub-ou or container of the of the ADAM partition, for instance OU=accounts,dc=fabrikam,dc=com. This is not possible in 2008+ AD LDS. You must specify the head of the partition, dc=fabrikam,dc=com. This is publicly documented here.

  6. <base-dn>dc=fabrikam,dc=com</base-dn>

    Replace dc=fabrikam,dc=com with the base DN of the container in AD that you want to synchronize objects from.

    NOTE: You can specify multiple base DNs in the XML file, but it is important to note that due to the way the dirsync engine works, the entire directory will still be scanned during synchronization. This can lead to unexpectedly long synchronization times and output in the adamsync.log file that is confusing. The short of this is that even though you are limiting where to synchronize objects from, it doesn't reduce your synchronization time, and you will see entries in the adamsync.log file that indicate objects being processed but not written. This can make it appear as though ADAMSync is not working correctly if your directory is large but you are syncing only a small percentage of it. Also, the log will grow and grow, but it may take a long time for objects to begin to appear in AD LDS. This is because the entire directory is being enumerated, but only a portion is being synchronized.

  7. <object-filter>(objectClass=*)</object-filter>

    The object filter determines which objects will be synchronized from AD to AD LDS. While objectClass=* will get you everything, do you really want or need EVERYTHING? Consider the amount of data you will be syncing and the security implications of having everything duplicated in AD LDS. If you only care about user objects, then don't sync computers and groups.

    The filter that I generally recommend as a starting point is:

    (&#124;(objectCategory=Person)(objectCategory=OrganizationalUnit))

    Rather than objectClass=User, I recommend objectCategory=Person. But why, you ask? I'll tell you :-) If you've ever looked at the class of a computer object, you'll notice that it contains an objectClass of user.



    What this means to ADAMSync is that if I specify an object filter of objectClass=user, ADAMSync will synchronize users and computers (and contact objects and anything else that inherits from the User class). However, if I use objectCategory=Person, I only get actual user objects. Pretty neat, eh?

    So, what does this &#124; mean and why include objectCategory=OrganizationalUnit? The literal &#124; is the XML representation of the | (pipe) character which represents a logical OR. True, I've seen customers just use the | character in the XML file and not have issues, but I always use the XML rather than the | just to be certain that it gets translated properly when loaded into the AD LDS instance. If you need to use an AND rather than an OR, the XML for & is &amp;.

    You need objectCategory=OrganizationalUnit so that objects that are moved within AD get synchronized properly to AD LDS. If you don't specify this, the OUs that contain objects within scope of the object filter will be created on the initial creation of the object in AD LDS. But, if that object is ever MOVED in the source AD, ADAMSync won't be able to synchronize that object to the new location. Moving an object changes the full DN of the object. Since we aren't syncing the OUs the object just "disappears" from an ADAMSync perspective and never gets updated/moved.

    If you need groups to be synchronized as well you can add (objectclass=group) inside the outer parentheses and groups will also be synced.

    (&#124;(objectCategory=Person)(objectCategory=OrganizationalUnit)(objectClass=Group))

  8. <attributes>

    The attributes section is where you define which attributes to synchronize for the object types defined in the <object-filter>.

    You can either use the <include></include> or <exclude></exclude> tags, but you cannot use both.

    The default XML file provided by Microsoft takes the high ground and uses the <exclude></exclude> tags which really means include all attributes except the ones that are explicitly defined within the <exclude></exclude> element. While this approach guarantees that you don't miss anything important, it can also lead to a lot of headaches in troubleshooting.

    If you've ever looked at an AD user account in ADSIEdit (especially in an environment with Exchange), you'll notice there are hundreds of attributes defined. Keeping to my earlier advice of "keep it simple", every attribute you sync adds to the complexity.

    When you use the <exclude></exclude> tags you don't know what you are syncing; you only know what you are not syncing. If your application isn't going to use the attribute then there is no reason to copy that data to AD LDS. Additionally, there are some attributes and classes that just won't sync due to how the dirsync engine works. I'll include the list as I know it at the end of the article. Every environment is different in terms of which schema updates have been made and which attributes are being used. Also, as I mentioned earlier, if your AD LDS schema does not contain the object classes and attributes that you have defined in your ADAMSync XML file, your synchronization will die in a big blazing ball of flame.


    Whoosh!!

    A typical attributes section to start out with is something like this:

    <include>objectSID</include> <----- only needed for userproxy
    <include>userPrincipalName</include> <----- must be unique in AD LDS instance
    <include>displayName</include>
    <include>givenName</include>
    <include>sn</include>
    <include>physicalDeliveryOfficeName</include>
    <include>telephoneNumber</include>
    <include>mail</include>
    <include>title</include>
    <include>department</include>
    <include>manager</include>
    <include>mobile</include>
    <include>ipPhone</include>
    <exclude></exclude>

    Initially, you may even want to remove userPrincipalName, just to verify that you can get a sync to complete successfully. Synchronization issues caused by the userPrincipalName attribute are among the most common ADAMSync issues I see. Active Directory allows multiple accounts to have the same userPrincipalName, but ADAMSync will not sync an object if it has the same userPrincipalName of an object that already exists in the AD LDS database.

    If you want to be a superhero and find duplicate UPNs in your AD before you attempt ADAMSync, here's a nifty csvde command that will generate a comma-delimited file that you can run through Excel's "Highlight duplicates" formatting options (or a script if you are a SUPER-SUPERHERO) to find the duplicates.

    csvde -f upn.csv -s localhost:389 -p subtree -d "DC=fabrikam,DC=com" -r "(objectClass=user)" -l sAMAccountName,userPrincipalName

    Remember, you are targeting your AD with this command, so the localhost:389 implies that the command is being run on the DC. You'll need to replace "DC=fabrikam, DC=com" with your AD domain's DN.

  9. </schedule>

    After </schedule> is where you would insert the elements to do user proxy transformation. In the References section, I've included links that explain the purpose and configuration of userproxy. The short version is that you can use this section of code to create userproxy objects rather than AD LDS user class objects. Userproxy objects are a special class of user that links back to an Active Directory domain account to allow the AD LDS user to utilize the password of their corresponding user account in AD. It is NOT a way to log on to AD from an external network. It is a way to allow an application that utilizes AD LDS as its LDAP directory to authenticate a user via the same password they have in AD. Communication between AD and AD LDS is required for this to work, and the application that is requesting the authentication does not receive a Kerberos ticket for the user.

    Here is an example of what you would put after </schedule> and before </configuration>

    <user-proxy>
    <source-object-class>user</source-object-class>
    <target-object-class>userProxyFull</target-object-class>
    </user-proxy>

Installing the XML file

OK! That was fun, wasn't it? Now that we have an XML file, how do we use it? This is covered in a lot of different materials, but the short version is we have to install it into the AD LDS instance. To install the file, run the following command from the ADAM installation directory (%windir%\ADAM):

Adamsync /install localhost:389 CustomAdamsync.xml

The command above assumes you are running it on the AD LDS server, that the instance is running on port 389 and that the XML file is located in the path of the adamsync command.

What does this do exactly, you ask? The adamsync install command copies the XML file contents into the configurationFile attribute on the AD LDS application partition head. You can view the attribute by connecting to the application partition via LDP or through ADSIEdit. This is a handy thing to know. You can use this to verify for certain exactly what is configured in the instance. Often there are several versions of the XML file in the ADAM directory and it can be difficult to know which one is being used. Checking the configurationFile attribute will tell you exactly what is configured. It won't tell you which XML file was used, but at least you will know the configuration.
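For example, assuming the instance is on localhost:389 and the example partition of dc=fabrikam,dc=com, a base-scoped ldifde export pulls back just that attribute so you can inspect it:

ldifde -f configfile.ldf -s localhost:389 -d "dc=fabrikam,dc=com" -p base -l configurationFile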

The implication of this is that anytime you update the XML file you must reinstall it using the adamsync /install command otherwise the version in the instance is not updated. I've made this mistake a number of times during troubleshooting!

Synchronizing with AD

Finally, we are ready to synchronize! Running the synchronization is the "easy" part assuming we've created a valid XML file, our AD LDS schema has all the necessary classes and attributes, and the source AD data is without issue (duplicate UPN is an example of a known issue).

From the ADAM directory (typically %windir%\ADAM), run the following command:

Adamsync /sync localhost:389 "DC=fabrikam,DC=com" /log adamsync.log

Again, we're assuming you are running the command on the AD LDS server and that the instance is running on port 389. The DN referenced in the command is the DN of your AD LDS application partition. /log is very important (you can name the log anything you want). You will need this log if there are any issues during the synchronization. The log will tell you which object failed and give you a cryptic "detailed" reason as to why. Below is an example of an error due to a duplicate UPN. This is one of the easier ones to understand.

====================================================
Processing Entry: Page 67, Frame 1, Entry 64, Count 1, USN 0
Processing source entry <guid=fe36238b9dd27a45b96304ea820c82d8>
Processing in-scope entry fe36238b9dd27a45b96304ea820c82d8.

Adding target object CN=BillyJoeBob,OU=User Accounts,dc=fabrikam,dc=com. Adding attributes: sourceobjectguid, objectClass, sn, description, givenName, instanceType, displayName, department, sAMAccountName, userPrincipalName, Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:
0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)

. Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:
0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)
===============================================

During the sync, if you are syncing from the Active Directory domain head rather than an OU or container, your objects should begin showing up in the AD LDS instance almost immediately. The objects don't synchronize in any order that makes sense to the human brain, so don't worry if objects appear in a random order. There is no progress bar or indication of how the sync is going other than the fact that the log file is growing. When the sync completes, you will be returned to the command prompt and your log file will stop growing.

Did it work?

As you can see there is nothing on the command line nor are there any events in any Windows event log that indicate that the synchronization was successful. In this context, successful means completed without errors and all objects in scope, as defined in the XML file, were synchronized. The only way to determine if the synchronization was successful is to check the log file. This highlights the importance of generating the log. Additionally, it's a good idea to keep a reasonable number of past logs so if the sync starts failing at some point you can determine approximately when it started occurring. Management likes to know things like this.

Since you'll probably be automating the synchronization (easy to do with a scheduled task) and not running it manually, it's a good idea to set up a reminder to periodically check the logs for issues. If you've never looked at a log before, it can be a little intimidating if there are a lot of objects being synchronized. The important thing to know is that if the sync was successful, the bottom of the log will contain a section similar to the one below:

Updating the configuration file DirSync cookie with a new value.

Beginning processing of deferred dn references.
Finished processing of deferred dn references.

Finished (successful) synchronization run.
Number of entries processed via dirSync: 16
Number of entries processed via ldap: 0
Processing took 4 seconds (0, 0).
Number of object additions: 3
Number of object modifications: 13
Number of object deletions: 0
Number of object renames: 2
Number of references processed / dropped: 0, 0
Maximum number of attributes seen on a single object: 9
Maximum number of values retrieved via range syntax: 0

Beginning aging run.
Aging requested every 0 runs. We last aged 2 runs ago.
Saving Configuration File on DC=instance1,DC=local
Saved configuration file.

If your log just stops without a section similar to the one above, then the last entry will indicate an error similar to the one above for the duplicate UPN.

Conclusion and other References

That covers the basics of setting up ADAMSync! I hope this information makes the process more straightforward and gives you some tips for getting it to work the first time! The most important point I can make is to start very simple with the XML file and get something to work. You can always add more attributes to the file later, but if you start from broken it can be difficult to troubleshoot. Also, I highly recommend using <include> over <exclude> when specifying attributes to synchronize. This may be more work for your application team since they will have to know what their application requires, but it will make setting up the XML file and getting a successful synchronization much easier!

ADAMSync excluded objects

As I mentioned earlier, there are some attributes, classes and object types that ADAMSync will not synchronize. The items listed below are hard-coded not to sync. There is no way around this using ADAMSync. If you need any of these items to sync, then you will need to use LDIFDE exports, FIM, or some other method to synchronize them from AD to AD LDS. The scenarios where you would require any of these items are very limited and some of them are dealt with within ADAMSync by converting the attribute to a new attribute name (objectGUID to sourceObjectGUID).

Attributes

cn, currentValue, dBCSPwd, fSMORoleOwner, initialAuthIncoming, initialAuthOutgoing, isCriticalSystemObject, isDeleted, lastLogonTimeStamp, lmPwdHistory, msDS-ExecuteScriptPassword, ntPwdHistory, nTSecurityDescriptor, objectCategory, objectSid (except when being converted to proxy), parentGUID, priorValue, pwdLastSet, sAMAccountType, sIDHistory, supplementalCredentials, systemFlags, trustAuthIncoming, trustAuthOutgoing, unicodePwd, whenChanged

Classes

crossRef, secret, trustedDomain, foreignSecurityPrincipal, rIDSet, rIDManager

Other

Naming Context heads, deleted objects, empty attributes, attributes we do not have permissions to read, objectGUIDs (gets transferred to sourceObjectGUID), objects with del-mangled distinguished names (DEL:\)

Additional Goodies

ADAMSync

AD LDS Replication

Misc Blogs

GOOD LUCK and ENJOY!

Kim "Sync or swim" Nichols

Revenge of Y2K and Other News


Hello sports fans!

So this has been a bit of a hectic time for us, as I'm sure you can imagine. Here's just some of the things that have been going on around here.

Last week, thanks to a failure on the time servers at USNO.NAVY.MIL, many customers experienced a time rollback to CY 2000 on their Active Directory domain controllers. Our team worked closely with the folks over at Premier Field Engineering to explain the problem, document resolutions for the various issues that might arise, and describe how to inoculate your DCs against a similar problem in the future. If you were affected by this problem then you need to read this post. If you weren't affected, and want to know why, then you need to read this post. Basically, we think you need to read this post. So...here's the link to the AskPFEPlat blog.

In other news, Ned Pyle has successfully infiltrated the Product Group and has started blogging on The Storage Team blog. His first post is up, and I'm sure there will be many more to follow. If you've missed Ned's rare blend of technical savvy and sausage-like prose, and you have an interest in Microsoft's DFSR and other storage technologies, then go check him out.

Finally...you've probably noticed the lack of activity here on the AskDS blog. Truthfully, that's been the result of a confluence of events -- Ned's departure, the Holiday season here in the US, and the intense interest in Windows 8 and Windows Server 2012 (and subsequent support calls). Never fear, however! I'm pleased to say that your questions to the blog have been coming in quite steadily, so this week I'll be posting an omnibus edition of the Mail Sack. We also have one or two more posts that will go up between now and the end of the year, so there's that to look forward to. Starting with the new calendar year, we'll get back to a semi-regular posting schedule as we get settled and build our queue of posts back up.

In the mean time, if you have questions about anything you see on the blog, don't hesitate to contact us.

Jonathan "time to make the donuts" Stephens

Intermittent Mail Sack: Must Remember to Write 2013 Edition


Hi all, Jonathan here again with the latest edition of the Intermittent Mail Sack. We've had some great questions over the last few weeks so I've got a lot of material to cover. This sack, we answer questions on:

Before we get started, however, I wanted to share information about a new service available to Premier customers through Microsoft Services Premier Support. Many Premier customers will be familiar with the Risk Assessment Program (RAP). Premier Support is now rolling out an online offering called the RAP as a Service (or RaaS for short). Our colleagues over on the Premier Field Engineering (PFE) blog have just posted a description of the new offering, and I encourage you to check it out. I've been working on the Active Directory RaaS offering since the early beta, and we've gotten really good feedback. Unfortunately, the offering is not yet available to non-Premier customers; look at RaaS as yet one more benefit to a Premier Support contract.

 

Now on to the Mail Sack!

Question

I'm considering upgrading my DFSR hub servers to Server 2012. Is there anything I should know before I hit the easy button and do an upgrade?

Answer

The most important thing to note is that Microsoft strongly discourages mixing Windows Server 2012 and legacy operating system DFSR. You just mentioned upgrading your hub servers, and make no mention of any branch servers. If you're going to upgrade your DFSR servers then you should upgrade all of them.

Check out Ned's post over on the FileCab blog: DFS Replication Improvements in Windows Server. Specifically, review the section that discusses Dynamic Access Control Support.

Also, there is a minor issue that has been found that we are still tracking. When you upgrade from Windows Server 2008 R2 to Windows Server 2012 the DFS Management snap-in stops working. The workaround is to just uninstall and then reinstall the DFS Management tools:

You can also do this with PowerShell:

Uninstall-WindowsFeature -name RSAT-DFS-Mgmt-Con
Install-WindowsFeature -name RSAT-DFS-Mgmt-Con

 

Question

From our SharePoint site, when users click log off, they get sent to this page: https://your_sts_server/adfs/ls/?wa=wsignout1.0.

We configured the FedAuth cookie to be session based after we did this:

$sts = Get-SPSecurityTokenServiceConfig 
$sts.UseSessionCookies = $true 
$sts.Update() 

 

The problem is that unless the user closes all their browsers, the browser remembers their credentials when they return to the log-in page. This is not acceptable because some PCs are shared by multiple people. Also, closing all browsers is not practical, as our users run multiple web applications.

Answer

(Courtesy of Adam Conkle)

Great question! I hope the following details help you in your deployment:

Moving from a persistent cookie to a session cookie with SharePoint 2010 was the right move in this scenario in order to guarantee that closing the browser window would terminate the session with SharePoint 2010.

When you sign out via SharePoint 2010 and are redirected to the STS URL containing the query string: wa=wsignout1.0, this is what we call a WS-Federation sign-out request. This call is sufficient for signing out of the STS as well as all relying parties signed into during the session.

However, what you are experiencing is expected behavior for how Integrated Windows Authentication (IWA) works with web browsers. If your web browser client experienced either a no-prompt sign-in (using Kerberos authentication for the currently signed-in user) or an NTLM prompted sign-in (credentials provided in a Windows Authentication "401" credential prompt), then the browser will remember the Windows credentials for that host for the duration of the browser session.

If you were to collect a HTTP headers trace (Fiddler, HTTPWatch, etc.) of the current scenario, you will see that the wa=wsignout1.0 request is actually causing AD FS and SharePoint 2010 (and any other RPs involved) to clean up their session cookies (MSISAuth and FedAuth) as expected. The session is technically ending the way it should during sign-out. However, if the client keeps the current browser session open, browsing back to the SharePoint site will cause a new WS-Federation sign-in request to be sent to AD FS (wa=wsignin1.0). When the sign-in request is sent to AD FS, AD FS will attempt to collect credentials with a HTTP 401, but, this time, the browser has a set of Windows credentials ready to provide to that host.

The browser provides those Windows credentials without a prompt shown to the user, and the user is signed back into AD FS, and, thus, is signed back into SharePoint 2010. To the naked eye, it appears that sign-out is not working properly, while, in reality, the user is signing out and then signing back in again.

To conclude, this is by-design behavior for web browser clients. There are two workarounds available:

Workaround 1

Switch to forms-based authentication (FBA) for the AD FS Federation Service. The following article details this quick and easy process: AD FS 2.0: How to Change the Local Authentication Type

Workaround 2

Instruct your user base to always close their web browser when they have finished their session

Question

Are the attributes for files and folders used by Dynamic Access Control replicated with the object? That is, if I use DFSR to replicate a file to another server that uses the same policy, will the file have the same effective permissions on it?

Answer

(Courtesy of Mike Stephens)

Let me clarify some aspects of your question as I answer each part.

When enabling Dynamic Access Control on files and folders there are multiple aspects to consider that are stored on the files and folders.

Resource Properties

Resource Properties are defined in AD and used as a template to stamp additional metadata on a file or folder that can be used during an authorization decision. That information is stored in an alternate data stream on the file or folder. This would replicate with the file, the same as the security descriptor.

Security Descriptor

The security descriptor replicates with the file or folder. Therefore, any conditional expression would replicate in the security descriptor.

All of this occurs outside of Dynamic Access Control -- it is a result of replicating the file throughout the topology, for example, if using DFSR. Central Access Policy has nothing to do with these results.

Central Access Policy

Central Access Policy is a way to distribute permissions without writing them directly to the DACL of a security descriptor. So, when a Central Access Policy is deployed to a server, the administrator must then link the policy to a folder on the file system. This linking is accomplished by inserting a special ACE into the auditing portion of the security descriptor that informs Windows that the file/folder is protected by a Central Access Policy. The permissions in the Central Access Policy are then combined with Share and NTFS permissions to create an effective permission.

If a file/folder is replicated to a server that does not have the Central Access Policy deployed to it, then the Central Access Policy is not valid on that server and its permissions would not apply.

Question

I read the post located here regarding the machine account password change in Active Directory.

Based on what I read, if I understand this correctly, the machine password change is generated by the client machine and not AD. I have been told, (according to this post, inaccurately) that AD requires this password reset or the machine will be dropped from the domain.

I am a Macintosh systems administrator, and as you probably know, this issue does indeed occur on Mac systems.

I have set the password reset interval to various durations, from fourteen days (the default) down to one day.

I have found that if I disjoin and rejoin the machine to the domain, it will generate a new password and work just fine for 30 days. At that time, it will be dropped from the domain and have to be rejoined. This doesn't happen 100% of the time; however, it happens often enough to be a problem for us, as we are a higher education institution that, in addition to our many PCs, also utilizes a substantial number of Macs. Additionally, we have a script that runs every 60 days to delete stale machine accounts from AD to keep it clean, so if a machine has been turned off for more than 60 days, its account no longer exists.

I know your forte is AD/Microsoft support, however I was hoping that you might be able to offer some input as to why this might fail on the Macs and if there is any solution which we could implement.

Other Mac admins have found workarounds like eliminating the need for the pw reset or exempting the macs from the script, but our security team does not want to do this.

Answer

(Courtesy of Mike Stephens)

Windows has a security policy setting named Domain member: Disable machine account password change, which determines whether the domain member periodically changes its computer account password. Typically, a Mac, Linux, or UNIX operating system uses some version of Samba to accomplish domain interoperability. I'm not familiar with the specifics on the Mac; however, on Linux you would use the command

net ads changetrustpw 

 

By default, Windows machines initiate a computer password change every 30 days. You could schedule this command to run every 30 days once it completes successfully. Beyond that, basically we can only tell you how to disable the domain controller from accepting computer password changes, which we do not encourage.
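
For reference, on the Windows side the settings involved live under the Netlogon service parameters in the registry. The sketch below only reads them and is illustrative; the key path is standard, but the values may be absent if they have never been explicitly configured.

# Read the Windows-side machine account password change settings (sketch).
# DisablePasswordChange backs "Domain member: Disable machine account password change";
# MaximumPasswordAge backs "Domain member: Maximum machine account password age".
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters'

Get-ItemProperty -Path $key -Name DisablePasswordChange, MaximumPasswordAge -ErrorAction SilentlyContinue |
    Select-Object DisablePasswordChange, MaximumPasswordAge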

Question

I recently installed a new server running Windows 2008 R2 (as a DC) and a handful of client computers running Windows 7 Pro. On a client, which is shared by two users (userA and userB), I see the following event on the Event Viewer after userA logged on.

Event ID: 45058 
Source: LsaSrv 
Level: Information 
Description: 
A logon cache entry for user userB@domain.local was the oldest entry and was removed. The timestamp of this entry was 12/14/2012 08:49:02. 

 

All is working fine. Both userA and userB are able to log on on the domain by using this computer. Do you think I have to worry about this message or can I just safely ignore it?

Fyi, our users never work offline, only online.

Answer

By default, a Windows operating system will cache 10 domain user credentials locally. When the maximum number of credentials is cached and a new domain user logs onto the system, the oldest credential is purged from its slot in order to store the newest credential. This LsaSrv informational event simply records when this activity takes place. Once the cached credential is removed, it does not imply the account cannot be authenticated by a domain controller and cached again.

The number of "slots" available to store credentials is controlled by:

Registry path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon
Setting Name: CachedLogonsCount
Data Type: REG_SZ
Value: Default value = 10 decimal, max value = 50 decimal, minimum value = 1

Cached credentials can also be managed with group policy by configuring:

Group Policy Setting path: Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options.
Group Policy Setting: Interactive logon: Number of previous logons to cache (in case domain controller is not available)

The workstation must have connectivity with the domain, and the user must authenticate with a domain controller, in order to cache their credentials again once they have been purged from the system.

I suspect that your CachedLogonsCount value has been set to 1 on these clients, meaning that the workstation can only cache one user credential at a time.
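
If you want to confirm what a given client is actually configured to cache, you can read the value described above straight from the registry. A quick sketch, run locally on the workstation:

# Read the number of cached domain logons this workstation will keep (default is 10, stored as REG_SZ).
$winlogon = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon'

(Get-ItemProperty -Path $winlogon -Name CachedLogonsCount).CachedLogonsCount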

Question

In Windows 7 and Server 2008 Kerberos DES encryption is disabled by default.

At what point will support for DES Kerberos encryption be removed? Does this happen in Windows 8 or Windows Server 2012, or will it happen in a future version of Windows?

Answer

DES is still available as an option on Windows 8 and Windows Server 2012, though it is disabled by default. It is too early to discuss the availability of DES in future versions of Windows right now.

There was an Advisory Memorandum published in 2005 by the Committee on National Security Systems (CNSS) stating that DES and all DES-based systems (3DES, DES-X) would be retired for all US Government uses by 2015. That memorandum, however, is not necessarily a binding document. It is expected that 3DES/DES-X will continue to be used in the private sector for the foreseeable future.

I'm afraid that we can't completely eliminate DES right now. All we can do is push it to the back burner in favor of newer and better algorithms like AES.
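
If you are curious how much DES is still in play in your own environment, one approach (a sketch only, not part of the original answer) is to look at the msDS-SupportedEncryptionTypes attribute, where the two low-order bits represent the DES cipher suites:

# Sketch: list accounts whose msDS-SupportedEncryptionTypes still advertises a DES suite.
# Bits 0x1 (DES-CBC-CRC) and 0x2 (DES-CBC-MD5) are the DES flags.
Import-Module ActiveDirectory

Get-ADUser -LDAPFilter '(msDS-SupportedEncryptionTypes=*)' -Properties msDS-SupportedEncryptionTypes |
    Where-Object { $_.'msDS-SupportedEncryptionTypes' -band 0x3 } |
    Select-Object Name, 'msDS-SupportedEncryptionTypes'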

Question

I have two issuing certification authorities in our corporate network. All our approved certificate templates are published on both issuing CAs. We would like to enable certificate renewals from the Internet with our Internet-facing CEP/CES configured for certificate authentication in Certificate Renewal Mode Only. What we understand from the whitepaper is that this won't work if the CA that issued the certificate must be the same CA used for certificate renewal.

Answer

First, I need to correct an assumption made based on your reading of the whitepaper. There is no requirement that, when a certificate is renewed, the renewal request be sent to the same CA that issued the original certificate. This means that your clients can go to either enrollment server to renew the certificate. Here is the process for renewal:

  1. When the user attempts to renew their certificate via the MMC, Windows sends a request to the Certificate Enrollment Policy (CEP) server URL configured on the workstation. This request includes the template name of the certificate to be renewed.
  2. The CEP server queries Active Directory for a list of CAs capable of issuing certificates based on that template. This list will include the Certificate Enrollment Web Service (CES) URL associated with that CA. Each CA in your environment should have one or more instances of CES associated with it.
  3. The list of CES URLs is returned to the client. This list is unordered.
  4. The client randomly selects a URL from the list returned by the CEP server. This random selection ensures that renewal requests are spread across all returned CAs. In your case, if both CAs are configured to support the same template, then if the certificate is renewed 100 times, either with or without the same key, then that should result in a nearly 50/50 distribution between the two CAs.

The behavior is slightly different if one of your CAs goes down for some reason. In that case, should a client encounter an error when trying to renew a certificate against one of the CES URIs, it will fail over and use the next CES URI in the list. By having multiple CAs and CES servers, you gain high availability for certificate renewal.

Other Stuff

I'm very sad that I didn't see this until after the holidays. It definitely would have been on my Christmas list. A little pricey, but totally geek-tastic.

This was also on my list, this year. Go Science!

Please do keep those questions coming. We have another post in the hopper going up later in the week, and soon I hope to have some Windows Server 2012 goodness to share with you. From all of us on the Directory Services team, have a happy and prosperous New Year!

Jonathan "13th baktun" Stephens

 

 

ADAMSync + (AD Recycle Bin OR searchFlags) = "FUN"


Hello again ADAMSyncers! Kim Nichols here again with what promises to be a fun and exciting mystery solving adventure on the joys of ADAMSync and AD Recycle Bin (ADRB) for AD LDS. The goal of this post is two-fold:

  1. Explain AD Recycle Bin for AD LDS and how to enable it
  2. Highlight an issue that you may experience if you enable AD Recycle Bin for AD LDS and use ADAMSync

I'll start with some background on AD Recycle Bin for AD LDS and then go through a recent mind-boggling scenario from beginning to end to explain why you may not want (or need) to enable AD Recycle Bin if you are planning on using ADAMSync.

Hold on to your hats!

AD Recycle Bin for ADLDS

If you're not familiar with AD Recycle Bin and what it can do for you, check out Ned's prior blog posts or the content available on TechNet.

The short version is that AD Recycle Bin is a feature added in Windows Server 2008 R2 that allows Administrators to recover deleted objects without restoring System State backups and performing authoritative restores of those objects.  

Requirements for AD Recycle Bin

   

To enable AD Recycle Bin (ADRB) for AD DS your forest needs to meet some basic requirements:

   

  1. Have extended your schema to Windows Server 2008 R2.
  2. Have only Windows Server 2008 R2 DCs in your forest.
  3. Raise your domain(s) functional level to Windows Server 2008 R2.
  4. Raise your forest's functional level to Windows Server 2008 R2.

   

What you may not be aware of is that AD LDS has this feature as well. The requirements for implementing ADRB in AD LDS are the same as for AD DS, although they are not as intuitive for AD LDS instances.

 

Schema must be Windows Server 2008 R2

   

If your AD LDS instance was originally built as an ADAM instance, then you may or may not have extended the schema of your instance to Windows Server 2008 R2. If not, upgrading the schema is a necessary first step in order to support ADRB functionality.

   

To update your AD LDS schema to Windows Server 2008 R2, run the following command from your ADAM installation directory on your AD LDS server:

   

Ldifde.exe -i -f MS-ADAM-Upgrade-2.ldf -s server:port -b username domain password -j . -$ adamschema.cat

   

You'll also want to update your configuration partition:

   

ldifde -i -f ms-ADAM-Upgrade-1.ldf -s server:portnumber -b username domain password -k -j . -c "CN=Configuration,DC=X" #configurationNamingContext

   

Information on these commands can be found on TechNet.

Decommission any Windows Server 2003 ADAM servers in the Replica set

   

In an AD DS environment, ADRB requires that all domain controllers in the forest be running Windows Server 2008 R2. Translating this to an AD LDS scenario, all servers in your replica set must be running Windows Server 2008 R2. So, if you've been hanging on to those Windows Server 2003 ADAM servers for some reason, now is the time to decommission them.

 

LaNae's blog "How to Decommission an ADAM/ADLDS server and Add Additional Servers" explains the process for removing a replica member. The process is pretty straightforward and just involves uninstalling the instance, but you will want to check FSMO role ownership, overall instance health, and application configurations before blindly uninstalling. Now is not the time to discover that applications have been hard-coded to point to your Windows Server 2003 server, or that you've unknowingly been having replication issues.

   

Raise the functional level of the instance

   

In AD DS, raising the domain and forest functional levels is easy; there's a UI -- AD Domains and Trusts. AD LDS doesn't have this snap-in, though, so it is a little more complicated. There's a good KB article (322692) that details the process of raising the functional levels of AD and gives us insight into what we need to do to raise our AD LDS functional level, since we can't use the AD Domains and Trusts MMC.

   

AD LDS only has the concept of forest functional levels. There is no domain functional level in AD LDS. The forest functional level is controlled by the msDS-Behavior-Version attribute on the CN=Partitions object in the Configuration naming context of your AD LDS instance.

   

   

Simply changing the value of msDS-Behavior-Version from 2 to 4 will update the functional level of your instance from Windows Server 2003 to Windows Server 2008 R2. Alternatively, you can use Windows PowerShell to upgrade the functional level of your AD LDS instance. For AD DS, there is a dedicated Windows PowerShell cmdlet for raising the forest functional level called Set-ADForestMode, but this cmdlet is not supported for AD LDS. To use Windows PowerShell to raise the functional level for AD LDS, you will need to use the Set-ADObject cmdlet to specify the new value for the msDS-Behavior-Version attribute.

   

To raise the AD LDS functional level using Windows PowerShell, run the following command (after loading the AD module):

   

Set-ADObject -Identity <path to Partitions container in Configuration Partition of instance> -Replace @{'msds-Behavior-Version'=4} -Server <server:port>

   

For example in my environment, I ran:

   

Set-ADObject -Identity 'CN=Partitions,CN=Configuration,CN={A1D2D2A9-7521-4068-9ACC-887EDEE90F91}' -Replace @{'msDS-Behavior-Version'=4} -Server 'localhost:50000'
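
If you want to confirm the change took effect, you can read the attribute back with Get-ADObject. A quick sketch using the same lab values as above (adjust the DN and server:port for your instance):

# Verify the AD LDS functional level after the change.
Get-ADObject -Identity 'CN=Partitions,CN=Configuration,CN={A1D2D2A9-7521-4068-9ACC-887EDEE90F91}' `
    -Properties 'msDS-Behavior-Version' -Server 'localhost:50000' |
    Select-Object 'msDS-Behavior-Version'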

   

 

 

   

   

As always, before making changes to your production environment:

  1. Test in a TEST or DEV environment
  2. Have good back-ups
  3. Verify the general health of the environment (check replication, server health, etc)

   

Now we're ready to enable AD Recycle Bin! 

Enabling AD Recycle Bin for AD LDS

   

For Windows Server 2008 R2, the process for enabling ADRB in AD LDS is nearly identical to that for AD DS. Either Windows PowerShell or LDP can be used to enable the feature. Also, there is no UI for enabling ADRB for AD LDS in Windows Server 2008 R2 or Windows Server 2012. Windows Server 2012 does add the ability to enable ADRB and restore objects through the AD Administrative Center for AD DS (you can read about it here), but this UI does not work for AD LDS instances on Windows Server 2012.

   

Once the feature is enabled, it cannot be disabled. So, before you continue, be certain you really want to do this. (Read this whole post to help you decide.)

   

The ADRB can be enabled in both AD DS and AD LDS using a PowerShell cmdlet, but the syntax is slightly different between the two. The difference is fully documented in TechNet.

   

In my lab, I used the PowerShell cmdlet to enable the feature rather than using LDP. Below is the syntax for AD LDS:

   

Enable-ADOptionalFeature 'recycle bin feature' -Scope ForestOrConfigurationSet -Server <server:port> -Target <DN of configuration partition>

   

Here's the actual cmdlet I used and a screenshot of the output. The cmdlet asks you to confirm that you want to enable the feature, since this is an irreversible process.
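
Using the same lab instance values shown earlier in this post, the command looks something like this (a sketch with those values, not a verbatim transcript; substitute your own server:port and configuration DN):

# Enable AD Recycle Bin on the AD LDS configuration set (sketch using the lab values above).
Enable-ADOptionalFeature 'recycle bin feature' -Scope ForestOrConfigurationSet `
    -Server 'localhost:50000' `
    -Target 'CN=Configuration,CN={A1D2D2A9-7521-4068-9ACC-887EDEE90F91}'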

   

 

 

   

You can verify that the command worked by checking the msDS-EnabledFeature attribute on the Partitions container of the Configuration NC of your instance.
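
If you'd rather not fire up LDP for that check, the same attribute can be read with Get-ADObject (again a sketch using the lab values; adjust for your instance):

# List the optional features enabled on the AD LDS configuration set.
Get-ADObject -Identity 'CN=Partitions,CN=Configuration,CN={A1D2D2A9-7521-4068-9ACC-887EDEE90F91}' `
    -Properties 'msDS-EnabledFeature' -Server 'localhost:50000' |
    Select-Object -ExpandProperty 'msDS-EnabledFeature'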

   

 

Seemed like a good idea at the time. . .

   

Now, on to what prompted this post in the first place.

   

Once ADRB is enabled, there is a change to how deleted objects are handled when they are removed from the directory. Prior to enabling ADRB, when an object is deleted it is moved to the Deleted Objects container within the application partition of your instance (CN=Deleted Objects,DC=instance1,DC=local, or whatever the name of your instance is) and most of its attributes are stripped. Without Recycle Bin enabled, a user object in the Deleted Objects container looks like this in LDP:

   

   

After enabling ADRB, a deleted user object looks like this in LDP:

   

   

Notice that after enabling ADRB, givenName, displayName, and several other attributes including userPrincipalName (UPN) are maintained on the object while in the Deleted Objects container. This is great if you ever need to restore this user: most of the data is retained and it's a pretty simple process using LDP or PowerShell to reanimate the object without the need to go through the authoritative restore process. But, retaining the UPN attribute specifically can cause issues if ADAMSync is being used to synchronize objects from AD DS to AD LDS since the userPrincipalName attribute must be unique within an AD LDS instance.

   

In general, the recommendation when using ADAMSync is to perform all user management (additions/deletions) on the AD DS side of the sync and let the synchronization process handle the edits in AD LDS. There are times, though, when you may need to remove users in AD LDS in order to resolve synchronization issues, and this is where having ADRB enabled will cause problems.

   

For example:

   

Let's say that you discover that you have two users with the same userPrincipalName in AD and this is causing issues with ADAMSync: the infamous ATT_OR_VALUE_EXISTS error in the ADAMSync log.

   

====================================================

Processing Entry: Page 67, Frame 1, Entry 64, Count 1, USN 0 Processing source entry <guid=fe36238b9dd27a45b96304ea820c82d8> Processing in-scope entry fe36238b9dd27a45b96304ea820c82d8.

   

Adding target object CN=BillyJoeBob,OU=User Accounts,dc=fabrikam,dc=com. Adding attributes: sourceobjectguid, objectClass, sn, description, givenName, instanceType, displayName, department, sAMAccountName, userPrincipalName, Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:

0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)

   

. Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:

0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)

===============================================

   

   

Upon further inspection of the users, you determine that at some point a copy was made of the user's account in AD and the UPN was not updated. The old account is not needed anymore but was never cleaned up either. To get your ADAMSync working, you:

  1. Delete the user account that synced to AD LDS.
  2. Delete the extra account in AD (or update the UPN on one of the accounts).
  3. Try to sync again

   

BWAMP!

   

The sync still fails with the ATT_OR_VALUE_EXISTS error on the same user. This doesn't make sense, right? You deleted the extra user in AD and cleaned up AD LDS by deleting the user account there. There should be no duplicates. The ATT_OR_VALUE_EXISTS error is not an ADAMSync error. ADAMSync is making LDAP calls to the AD LDS instance to create or modify objects. This error is an LDAP error from the AD LDS instance and is telling you that you already have an object in the directory with that same userPrincipalName. For what it's worth, I've never seen this error logged if the duplicate isn't there. It is there; you just have to find it!

   

At this point, it's not hard to guess where the duplicate is coming from, since we've already discussed ADRB and the attributes maintained on deletion. The duplicate userPrincipalName is coming from the object we deleted from the AD LDS instance and is located in the Deleted Objects container. The good news is that LDP allows you to browse the container to find the deleted object. If you've never used LDP before to look through the Deleted Objects container, TechNet provides information on how to browse for deleted objects via LDP.

 

It's great that we know why we are having the problem, but how do we fix it? Now that we're already in this situation, the only way to fix it is to eliminate the duplicate UPN from the object in CN=Deleted Objects. To do this:

   

  1. Restore the deleted object in AD LDS using LDP or PowerShell
  2. After the object is restored, modify the UPN to something bogus that will never be used on a real user
  3. Delete the object again
  4. Run ADAMSync again

   

Now your sync should complete successfully!
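
If you prefer to script those four steps rather than click through LDP, something along these lines should work with the AD module pointed at the AD LDS instance. This is only a sketch; the server/port, partition DN, duplicate UPN, and throwaway UPN are placeholders for your environment.

# Sketch of the cleanup steps above, run against the AD LDS instance.
Import-Module ActiveDirectory
$server    = 'localhost:50000'                # placeholder
$partition = 'DC=instance1,DC=local'          # placeholder
$dupUpn    = 'BillyJoeBob@fabrikam.com'       # placeholder

# 1. Find the deleted object that still carries the duplicate UPN.
$deleted = Get-ADObject -LDAPFilter "(userPrincipalName=$dupUpn)" -IncludeDeletedObjects `
    -SearchBase $partition -Server $server -Properties userPrincipalName

# 2. Restore it, 3. give it a throwaway UPN, 4. delete it again, then re-run ADAMSync.
$restored = Restore-ADObject -Identity $deleted.ObjectGUID -Partition $partition -Server $server -PassThru
Set-ADObject -Identity $restored -Replace @{userPrincipalName = 'retired-duplicate@nowhere.invalid'} -Server $server
Remove-ADObject -Identity $restored -Server $server -Confirm:$false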

Not so fast, you say . . .

   

So, I was feeling pretty good about myself on this case. I spent hours figuring out ADRB for AD LDS, setting up the repro in my lab, and proving that deleting objects with ADRB enabled could cause ATT_OR_VALUE_EXISTS errors during ADAMSync. I was already patting myself on the back and starting my victory lap when I got an email back from my customer stating that the msDS-Behavior-Version attribute on their AD LDS instance was still set to 2.

   

Huh?!

   

I'll admit it, I was totally confused. How could this be? I had LDP output from the customer's AD LDS instance and could see that the userPrincipalName attribute was being maintained on objects in the Deleted Objects container. I knew from my lab that this is not normal behavior when ADRB is disabled. So, what the heck is going on?

   

I know when I'm beat, so decided to use one of my "life lines" . . . I emailed Linda Taylor. Linda is an Escalation Engineer in the UK Directory Services team and has been working with ADAM and AD LDS much longer than I have. This is where I should include a picture of Linda in a cape because she came to the rescue again!

   

Apparently, there is more than one way for an attribute to be maintained on deletion. The most obvious was that ADRB had been enabled. The less obvious requires a better understanding of what actually happens when an object is deleted. Transformation into a Tombstone documents this process in more detail. The part that is important to us is that any attribute whose searchFlags value includes the PRESERVE_ON_DELETE (0x8) bit is retained on the tombstone when the object is deleted.

The Schema Management snap-in doesn't allow us to see attributes on attributes, so to verify the value of searchFlags on the userPrincipalName attribute we need to use ADSIEdit or LDP.

   

WARNING: Modifying the schema can have unintended consequences. Please be certain you really need to do this before proceeding and always test first!

   

By default, the searchFlags attribute on userPrincipalName should be set to 0x1 (INDEX).

   

   

   

My customer's searchFlags attribute was set to 0x1F (31 decimal) = (INDEX | CONTAINER_INDEX | ANR | PRESERVE_ON_DELETE | COPY).

   

   

Apparently these changes to the schema had been made to improve query efficiency when searching on the userPrincipalName attribute.

 

Reminder: Manually modifying the schema in this way is not something you should be doing unless you are certain you know what you are doing or have been directed to do so by Microsoft Support.

 

The searchFlags attribute is a bitwise attribute containing a number of different options which are outlined here. This attribute can be zero or a combination of one or more of the following values:

1 (0x00000001): Create an index for the attribute.

2 (0x00000002): Create an index for the attribute in each container.

4 (0x00000004): Add this attribute to the Ambiguous Name Resolution (ANR) set. This is used to assist in finding an object when only partial information is given. For example, if the LDAP filter is (ANR=JEFF), the search will find each object where the first name, last name, email address, or other ANR attribute is equal to JEFF. Bit 0 must be set for this index to take effect.

8 (0x00000008): Preserve this attribute in the tombstone object for deleted objects.

16 (0x00000010): Copy the value for this attribute when the object is copied.

32 (0x00000020): Supported beginning with Windows Server 2003. Create a tuple index for the attribute. This will improve searches where the wildcard appears at the front of the search string. For example, (sn=*mith).

64 (0x00000040): Supported beginning with ADAM. Creates an index to greatly improve VLV performance on arbitrary attributes.

   

To remove the PRESERVE_ON_DELETE flag, we subtracted 8 from the customer's value of 31, which gave us a value of 23 (INDEX | CONTAINER_INDEX | ANR | COPY). 
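
If you prefer PowerShell to ADSIEdit for making that change, here is a rough sketch. The server/port and configuration GUID are placeholders, and the same schema-change warnings apply; test it in a lab first.

# Sketch: clear the PRESERVE_ON_DELETE (0x8) bit in searchFlags on userPrincipalName.
$server   = 'localhost:50000'                                                       # placeholder
$schemaNC = 'CN=Schema,CN=Configuration,CN={A1D2D2A9-7521-4068-9ACC-887EDEE90F91}'  # placeholder

# Find the attributeSchema object by its ldapDisplayName instead of guessing its CN.
$attr = Get-ADObject -LDAPFilter '(ldapDisplayName=userPrincipalName)' -SearchBase $schemaNC `
    -Server $server -Properties searchFlags

# Clear bit 0x8 while leaving the other flags alone (31 becomes 23 in this example).
$newFlags = $attr.searchFlags -band (-bnot 0x8)
Set-ADObject -Identity $attr -Replace @{searchFlags = $newFlags} -Server $server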

 

Once we removed the PRESERVE_ON_DELETE flag, we created and deleted a test account to confirm our modifications changed the tombstone behavior of the userPrincipalName attribute. UPN was no longer maintained!

   

Mystery solved!! I think we all deserve a Scooby Snack now!

   

Nom nom nom!

 

Lessons learned

   

  1. ADRB is a great feature for AD. It can even be useful for AD LDS if you aren't synchronizing with AD. If you are synchronizing with AD, then the benefits of ADRB are limited and in the end it can cause you more problems than it solves.
  2. Manually modifying the schema can have unintended consequences.
  3. PowerShell for AD LDS is not as easy as AD
  4. AD Administrative Center is for AD and not AD LDS
  5. LDP Rocks!

   

This wraps up the "More than you really ever wanted to know about ADAMSync, ADRB & searchFlags" Scooby Doo edition of AskDS. Now, go enjoy your Scooby Snacks!

 
- Kim "That Meddling Kid" Nichols

   

   

Configuring Change Notification on a MANUALLY created Replication partner


Hello. Jim here again to elucidate on the wonderment of change notification as it relates to Active Directory replication within and between sites. As you know, Active Directory replication between domain controllers within the same site (intrasite) happens almost immediately, thanks to change notification. Active Directory replication between sites (intersite) occurs every 180 minutes (3 hours) by default. You can adjust this frequency to match your specific needs, BUT it can be no faster than fifteen minutes when configured via the AD Sites and Services snap-in.

Back in the old days when remote sites were connected by a string and two soup cans, it was necessary in most cases to carefully consider configuring your replication intervals and times so as not to flood the pipe (or string in the reference above) with replication traffic and bring your WAN to a grinding halt. With dial up connections between sites it was even more important. It remains an important consideration today if your site is a ship at sea and your only connectivity is a satellite link that could be obscured by a cloud of space debris.

Now, in the days of wicked fast fiber links and MPLS VPN connectivity, change notification may be enabled on site links that span geographic locations. This makes Active Directory replication between the separate sites behave as if the replication partners were in the same site. Although this is well documented on TechNet and I hate regurgitating existing content, here is how you would configure change notification on a site link:

  1. Open ADSIEdit.msc.
  2. In ADSI Edit, expand the Configuration container.
  3. Expand Sites, navigate to the Inter-Site Transports container, and select CN=IP.

    Note: You cannot enable change notification for SMTP links.
  4. Right-click the site link object for the sites where you want to enable change notification, e.g. CN=DEFAULTIPSITELINK, and click Properties.
  5. In the Attribute Editor tab, double click on Options.
  6. If the Value(s) box shows <not set>, type 1.
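
If you'd rather script this than click through ADSIEdit, the same change can be made with the AD PowerShell module. A sketch is below; DEFAULTIPSITELINK is just an example name, so substitute your own site link.

# Sketch: enable change notification (bit 0x1 in options) on a site link.
Import-Module ActiveDirectory
$config   = (Get-ADRootDSE).configurationNamingContext
$siteLink = Get-ADObject -Identity "CN=DEFAULTIPSITELINK,CN=IP,CN=Inter-Site Transports,CN=Sites,$config" `
    -Properties options

# If options is <not set>, treat it as 0, then turn on the first bit.
$newOptions = ([int]$siteLink.options) -bor 1
Set-ADObject -Identity $siteLink -Replace @{options = $newOptions}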

There is one caveat, however: change notification will fail with manual connection objects. If your connection objects are not created by the KCC, the change notification setting is meaningless. A manual connection object will NOT inherit the Options bit from the Site Link. Enjoy your 15-minute replication latency.

Why would you want to keep connection objects you created manually, anyway? Why don't you just let the KCC do its thing and be happy? Maybe you have a Site Link costing configuration that you would rather not change. Perhaps you are at the mercy of your networking team and the routing of your network and you must keep these manual connections. If, for whatever reason you must keep the manually created replication partners, be of good cheer. You can still enjoy the thrill of change notification.

Change Notification on a manually created replication partner is configured by doing the following:

  1. Open ADSIEDIT.msc.
  2. In ADSI Edit, expand the Configuration container.
  3. Navigate to the following location:

    \Sites\SiteName\Server\NTDS settings\connection object that was manually created
  4. Right-click on the manually created connection object name.
  5. In the Attribute Editor tab, double click on Options.
  6. If the value is 0 then set it to 8.

If the value is anything other than zero, you must do some binary math. Relax; this is going to be fun.

On the Site Link object, it's the 1st bit that controls change notification. On the Connection object, however, it's the 4th bit (decimal value 8), shown below in binary. (You remember binary, don't you???)

Binary bit:      8th    7th    6th    5th    4th    3rd    2nd    1st
Decimal value:   128     64     32     16      8      4      2      1

 

NOTE: The values represented by each bit in the Options attribute are documented in the Active Directory Technical Specification. Fair warning! I'm only including that information for the curious. I STRONGLY recommend against setting any of the options NOT discussed specifically in existing documentation or blogs in your production environment.

Remember what I said earlier? If it's a manual connection object, it will NOT inherit the Options value from the Site Link object. You're going to have to enable change notifications directly on the manually created connection object.

Take the value of the Options attribute, let's say it is 16.

Open Calc.exe in Programmer mode, and paste the contents of your options attribute.

Click on Bin, and count over to the 4th bit starting from the right.

That's the bit that controls change notification on your manually created replication partner. In this example (options = 16, binary 10000), the 4th bit is zero (0), so change notifications are disabled.

Convert back to decimal and add 8 to it.

Click on Bin, again.

Now the bit that controls change notification on the manually created replication partner is 1 (binary 11000). You would then change the Options value in ADSIEDIT from 16 to 24.

Click on Ok to commit the change.
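
If binary math in Calc isn't your idea of fun, the same bit can be flipped with PowerShell. The connection object DN below is a placeholder; point it at your own manually created connection object.

# Sketch: set bit 0x8 (change notification) on a manually created connection object.
Import-Module ActiveDirectory
$config = (Get-ADRootDSE).configurationNamingContext
$connDn = "CN=MyManualConnection,CN=NTDS Settings,CN=DC01,CN=Servers,CN=BranchSite,CN=Sites,$config"  # placeholder

$conn = Get-ADObject -Identity $connDn -Properties options
$newOptions = ([int]$conn.options) -bor 8    # 16 becomes 24, as in the example above
Set-ADObject -Identity $conn -Replace @{options = $newOptions}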

Congratulations! You have now configured change notification on your manually created connection object. This sequence of events must be repeated for each manually created connection object that you want to include in the excitement and instantaneous gratification of change notification. Keep in mind that in the event something (or many things) gets deleted from a domain controller, you no longer have that window of intersite latency to stop inbound replication on a downstream partner and do an authoritative restore. Plan the configuration of change notifications accordingly. Make sure you take regular backups, and test them occasionally!

And when you speak of me, speak well…

Jim "changes aren't permanent, but change is" Tierney

 
