

Windows PowerShell remoting and delegating user credentials


Hey all, Rob Greene here again. Yeah, I know, it's been a while since I've written anything for you good people of the Internet.

I recently had an interesting issue with the Active Directory Web Services and the Active Directory Windows PowerShell 2.0 modules in Windows 7 and Windows Server 2008 R2. Let me explain the scenario to you.

We have a group of helpdesk users that need to be able to run certain Windows PowerShell commands to manage users and objects within Active Directory. We do not want to install any of the Active Directory RSAT tools on the helpdesk group's Windows 7 workstations directly, because these users should not have access to Active Directory console snap-ins [Note: as pointed out in the Comments, you don't have to install all RSAT AD tools if you just want AD Windows PowerShell; now back to the action - the Neditor]. We have written specific Windows PowerShell scripts that the helpdesk users employ to manage user accounts. We store those scripts on a central server, and the users need to be able to access and run them remotely.

Hmmm… my mind starts thinking: man, this is way too complicated. But hey, that's what our customers like to do… make things complicated.


The basic requirement is that the helpdesk admins must run Windows PowerShell scripts on a remote computer, and those scripts leverage the ActiveDirectory Windows PowerShell cmdlets to manage user accounts in the domain.

So let’s think about the “ask” here:

  • We are going to require Windows PowerShell remoting from the Windows 7 client to the middle tier server where the ActiveDirectory Windows PowerShell modules are installed.

By default, you must connect to the remote server with an administrator-level account when using PowerShell remoting; otherwise the remote session will not be allowed to connect. That means the helpdesk users cannot connect to the domain controllers directly.

If you are interested in changing this requirement, the Scripting Guy blog covers a couple of ways to do so.

  • The middle tier server where the ActiveDirectory Windows PowerShell cmdlets are installed has to connect to a domain controller running the Active Directory Web Service as the PS remoted user account.

Wow, how do we make all this happen?

1. You need to enable Windows PowerShell Remoting on the Remote Admin Server. The simplest way to do this is to launch an elevated Windows PowerShell prompt and type:

Enable-PSRemoting -Force

To specify that HTTPS be used for the remote connectivity instead of HTTP, you can use the following cmdlet (this requires a certificate environment, which is outside the scope of this conversation):

Set-WSManQuickConfig -Force -UseSSL

2. On the Remote Admin Server you will also want to make sure that the “Windows Remote Management (WS-Management)” service is started and set to automatic.
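If you want to script that as well, a minimal sketch (run from the same elevated Windows PowerShell prompt):

Set-Service -Name WinRM -StartupType Automatic
Start-Service -Name WinRM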

If you have done a decent amount of Windows PowerShell scripting you probably got this part.

Alright, the next part is kind of tricky. Since we are delegating the user's credentials from the Remote Admin Server to the ADWS service, you are probably thinking that we are going to set up some kind of Kerberos delegation here. That would be incorrect. Windows PowerShell remoting does not support Kerberos delegation. You have to use CredSSP to delegate the user's credentials to the Remote Admin Server (which performs a logon there); those credentials can then be used to interact with the ADWS service on the domain controller.

More information about CredSSP:

MSDN Magazine: Credential Security Support Provider

951608 Description of the Credential Security Support Provider (CredSSP) in Windows XP Service Pack 3
http://support.microsoft.com/kb/951608/EN-US

If you have done some research on CredSSP, you know that it takes the user's name and password and passes them on to the target server. It does not send a Kerberos ticket or NTLM token for validation, which can be somewhat risky. Just like Windows PowerShell remoting, CredSSP usage is disabled by default and must be enabled. The other key thing to understand about CredSSP is that you have to enable both the "Client" and the "Server" sides to be able to use it.

NOTE: Although Windows XP Service Pack 3 does include CredSSP, the version of Windows PowerShell for Windows XP does not support CredSSP with remote management.

3. On the Remote Admin Server, we need to enable Windows Remote Management to support CredSSP. We do this by typing the command below in an elevated Windows PowerShell command window:

Enable-WSManCredSSP -Role Server -Force

4. On the Windows 7 client, we need to configure the “Windows Remote Management (WS-Management)” service startup to Automatic. Failure to do this will result in the following error being displayed at the next step:

Enable-WSManCredSSP : The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination to analyze and configure the WinRM service: "winrm quickconfig"

5. On the Windows 7 client, we need to enable Windows Remote Management to support CredSSP. We do this by typing the command below in an elevated Windows PowerShell command window:

Enable-WSManCredSSP -Role Client -DelegateComputer *.contoso.com -Force

NOTE: "*.contoso.com" is a placeholder for your DNS domain name. The client configuration is where you can constrain the CredSSP credentials to certain "targets" or destination computers. If you want them to work only against a specific computer, replace *.contoso.com with that server's name, as shown below.
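For example, to constrain delegation to the single server used later in this post:

Enable-WSManCredSSP -Role Client -DelegateComputer con-rt-ts.contoso.com -Force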

6. Lastly, when the remote session is created to the target server we need to make sure that the “-Authentication CredSSP” switch is provided. Here are a couple of remote session examples:

Enter-PSSession -ComputerName con-rt-ts.contoso.com -Credential (Get-Credential) -Authentication CredSSP

Invoke-Command -ComputerName con-rt-ts.contoso.com -Credential (Get-Credential) -Authentication CredSSP -ScriptBlock {Import-Module ActiveDirectory; Get-ADUser administrator}

I hope you picked up some new information around Windows PowerShell remoting today to make your Windows PowerShell adventures more successful. This story changes for the better in Windows 8 and Windows Server 2012, so use this article only with your legacy operating systems.

Rob “Power Shrek” Greene

Managing RID Issuance in Windows Server 2012


Hi all, Ned here again to talk further about managing your RID pool.

By default, a domain has capacity for roughly one billion security principals, such as users, security groups, managed service accounts, and computers. If you run out, you can’t create any more.

There aren’t any domains with that many active objects, of course, but we've seen:

  • Provisioning software or administrative scripts accidentally bulk created users, groups, and computers
  • Many unused security and distribution groups created by delegated users
  • Many domain controllers demoted, restored, or metadata cleaned
  • Forest recoveries with an inappropriately set lower RID pool
  • The InvalidateRidPool operation performed too frequently
  • The RID Block Size registry value increased incorrectly

All of these situations use up RIDs unnecessarily, often by mistake. Over many years, a few environments ran out of RIDs and this forced customers to migrate to a new domain or revert with domain and forest recoveries.

Windows Server 2012 addresses issues with RID allocation that have become more likely with the age and ubiquity of Active Directory. These include better event logging, more appropriate limits, and the ability to - in an emergency - increase the overall RID pool allocation by one bit.

Let's get to it.

Periodic Consumption Warnings

Windows Server 2012 adds global RID space event tracking that provides early warning when major milestones are crossed. The model computes the ten (10) percent used mark in the global pool and logs an event when it is reached. Then it computes ten percent of the remaining pool, and the event cycle continues. As the global RID space is exhausted, the events accelerate, since each ten percent milestone of a shrinking pool arrives sooner (but event log dampening prevents more than one entry per hour). Each domain controller writes Directory-Services-SAM warning event 16658 to its System event log.

Assuming a default 30-bit global RID space, the first event logs when allocating the pool containing the 107,374,182nd RID. The event rate accelerates naturally until the last checkpoint of 100,000 RIDs remaining, with 110 events generated in total. The behavior is similar for an unlocked 31-bit global RID space: the events start at RID 214,748,365 and complete in 117 events.

Important

Understand that these events are never "expected": investigate the user, computer, and group creation processes immediately in the domain if you see the event. Creating more than 100 million AD DS objects is quite out of the ordinary!


RID Pool Invalidation Events

There are new events that alert you when a local DC's RID pool is discarded. These are Informational and can be expected, especially with the new virtualized domain controller functionality. See the event list later in this post for details.

RID Block Size Cap

Ordinarily, a domain controller requests RID allocations in blocks of 500 RIDs at one time. You can override this default using the following registry REG_DWORD value on a domain controller:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\RID Values

RID Block Size

Prior to Windows Server 2012, there was no maximum value enforced in that registry key, except the implicit DWORD maximum (0xffffffff, or 4,294,967,295). This value is considerably larger than the total global RID space. Administrators sometimes inappropriately or accidentally configured RID Block Size with values that exhausted the global RID space at a massive rate.

In Windows Server 2012, you cannot set this registry value higher than 15,000 decimal (0x3A98 hexadecimal). This prevents massive unintended RID allocation.

If you set the value higher than 15,000, the value is treated as 15,000 and the domain controller logs event 16653 in the Directory Services event log at every reboot until the value is corrected.
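If you would rather check or set the value from Windows PowerShell than Regedit, here is a sketch (run on the domain controller; the value of 1,000 is purely illustrative):

$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\RID Values'
Get-ItemProperty -Path $key -Name 'RID Block Size' -ErrorAction SilentlyContinue
New-ItemProperty -Path $key -Name 'RID Block Size' -Value 1000 -PropertyType DWord -Force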

Global RID Space Size Unlock

Prior to Windows Server 2012, the global RID space was limited to 2^30 (1,073,741,823) total RIDs. Once reached, only a domain migration or a forest recovery to an older timeframe allowed new SID creation - disaster recovery, by any measure. Starting in Windows Server 2012, the 31st bit can be unlocked in order to increase the global pool to 2^31 - 1 (2,147,483,647) RIDs.

AD DS stores this setting in a special hidden attribute named SidCompatibilityVersion on the RootDSE context of all domain controllers. This attribute is not readable using ADSIEdit, LDP, or other tools. To see an increase in the global RID space, examine the System event log for warning event 16655 from Directory-Services-SAM or use the following Dcdiag command:

Dcdiag.exe /TEST:RidManager /v | find /i "Available RID Pool for the Domain"

If you increase the global RID pool, the available pool will change to 2,147,483,647 instead of the default 1,073,741,823. For example:

[screenshot: Dcdiag output showing the increased available RID pool]
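If you want the same answer from Windows PowerShell, the rIDAvailablePool attribute on the RID Manager object packs the top of the global RID space into its high 32 bits and the next RID to be issued into its low 32 bits. A sketch to decode it (requires the AD module and PowerShell 3.0 for the shift operators):

Import-Module ActiveDirectory
$domainDN = (Get-ADDomain).DistinguishedName
[int64]$pool = (Get-ADObject ('CN=RID Manager$,CN=System,' + $domainDN) -Properties rIDAvailablePool).rIDAvailablePool
$max  = $pool -shr 32            # high 32 bits: top of the global RID space
$next = $pool - ($max -shl 32)   # low 32 bits: next RID to be issued
"RID space: $max   Next RID: $next   Approximate remaining: $($max - $next)"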

Warning!

This unlock is intended only to prevent running out of RIDs and is to be used only in conjunction with RID Ceiling Enforcement (see next section). Do not "preemptively" set this in environments that have millions of remaining RIDs and low growth, as application compatibility issues potentially exist with SIDs generated from the unlocked RID pool.

This unlock operation cannot be reverted or removed, except by a complete forest recovery to an earlier backup.

Windows Server 2003 and Windows Server 2008 Domain Controllers cannot issue RIDs when the global RID pool 31st bit is unlocked. Windows Server 2008 R2 domain controllers can use 31st bit RIDs but only if they install hotfix KB2642658. Unsupported and unpatched domain controllers treat the global RID pool as exhausted when unlocked.

Implementing Unlocked Global RID space

To unlock the RID pool to the 31st bit after receiving the RID ceiling alert, perform the following steps:

1. Ensure that the RID Master role is running on a Windows Server 2012 domain controller. If not, transfer it to a Windows Server 2012 domain controller

2. Run LDP.exe

3. Click the Connection menu and click Connect for the Windows Server 2012 RID Master on port 389, and then click Bind as a domain administrator

4. Click the Browse menu and click Modify

5. Ensure that DN is blank

6. In Edit Entry Attribute, type:

SidCompatibilityVersion

7. In Values, type:

1

8. Ensure that Add is selected in Operation and click Enter. This updates the Entry List

9. Select the Synchronous option, then click Run:

[screenshot: LDP Modify dialog]

10. If successful, the LDP output window shows:

***Call Modify...

 ldap_modify_ext_s(Id, '(null)',[1] attrs, SvrCtrls, ClntCtrls);

modified "".


11. Confirm the global RID pool increased by examining the System Event Log on that domain controller for Directory-Services-SAM Informational event 16655.
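If you prefer scripting the unlock, the same rootDSE write can be expressed with ADSI from Windows PowerShell; a sketch, under the assumption that the provider accepts the modify exactly as LDP does (test in a lab first):

$ridMaster = (Get-ADDomain).RIDMaster
$rootDse = [ADSI]"LDAP://$ridMaster/RootDSE"
$rootDse.Put('sidCompatibilityVersion', '1')   # same attribute and value as steps 6-7 above
$rootDse.SetInfo()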

RID Ceiling Enforcement

To afford a measure of protection and elevate administrative awareness, Windows Server 2012 introduces an artificial ceiling on the global RID range at ten (10) percent remaining RIDs in the global space. When within one (1) percent of the artificial ceiling, domain controllers requesting RID pools write Directory-Services-SAM warning event 16656 to their System event log. When reaching the ten percent ceiling, the RID Master FSMO writes Directory-Services-SAM error event 16657 to its System event log and will not allocate any further RID pools until you override the ceiling. This forces you to assess the state of the RID Master in the domain and address potential runaway RID allocation; it also protects domains from exhausting the entire RID space.

This ceiling is hard-coded at ten percent remaining of the available RID space. I.e. the ceiling activates when the RID master allocates a pool that includes the RID corresponding to ninety (90) percent of the global RID space.

  • For default domains, the first trigger point is (2^30 - 1) * 0.90 = 966,367,640 RIDs (107,374,183 RIDs remaining).
  • For domains with an unlocked 31-bit RID space, the trigger point is (2^31 - 1) * 0.90 = 1,932,735,282 RIDs (214,748,365 RIDs remaining).

You can hit this event twice in the lifetime of a domain - once with a default-sized RID pool and once when you unlock. Preferably never, of course.

When triggered, the RID Master sets AD attribute msDS-RIDPoolAllocationEnabled (common name ms-DS-RID-Pool-Allocation-Enabled) to FALSE on the object:

CN=RID Manager$,CN=System,DC=<domain>

This writes the 16657 event and prevents further RID block issuance to all domain controllers. Domain controllers continue to consume any outstanding RID pools already issued to them.

To remove the block and allow RID pool allocation to continue, set that value to TRUE. On the next RID allocation performed by the RID Master, the attribute will return to its default NOT SET value. After that, there are no further ceilings and eventually, the global RID space runs out, requiring forest recovery or domain migration.

Important

Do not just arbitrarily remove the ceiling once hit - after all, something weird and potentially bad has happened here and your RID Master is trying to tell you. Stop and take stock, find out what caused the increase, and don’t proceed until you are darned sure that you are not going to run out immediately due to some sort of run-away process or procedure in your environment.

Removing the Ceiling Block

To remove the block once reaching the artificial ceiling, perform the following steps:

1. Ensure that the RID Master role is running on a Windows Server 2012 domain controller. If not, transfer it to a Windows Server 2012 domain controller

2. Run LDP.exe

3. Click the Connection menu and click Connect for the Windows Server 2012 RID Master on port 389, and then click Bind as a domain administrator

4. Click the View menu and click Tree, then for the Base DN select the RID Master's own domain naming context. Click Ok

5. In the navigation pane, drill down into the CN=System container and click the CN=RID Manager$ object. Right click it and click Modify

6. In Edit Entry Attribute, type:

MsDS-RidPoolAllocationEnabled

7. In Values, type (in upper case):

TRUE

8. Select Replace in Operation and click Enter. This updates the Entry List.

9. Enable the Synchronous option, then click Run:

[screenshot: LDP Modify dialog]

10. If successful, the LDP output window shows:

***Call Modify...

ldap_modify_ext_s(ld, 'CN=RID Manager$,CN=System,DC=<domain>',[1] attrs, SvrCtrls, ClntCtrls);

Modified "CN=RID Manager$,CN=System,DC=<domain>".

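The LDP steps above can also be expressed with the AD module; a minimal sketch (run against the RID master):

Import-Module ActiveDirectory
$domainDN = (Get-ADDomain).DistinguishedName
Set-ADObject ('CN=RID Manager$,CN=System,' + $domainDN) -Server (Get-ADDomain).RIDMaster -Replace @{'msDS-RIDPoolAllocationEnabled' = $true}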

Events and Error Messages

The following new messages log in the System event log on Windows Server 2012 domain controllers. Automated AD health tracking systems, such as System Center Operations Manager, should monitor for these events; all are notable, and some are indicators of critical domain issues.

Event ID: 16653
Source: Directory-Services-SAM
Severity: Warning
Message: A pool size for account-identifiers (RIDs) that was configured by an Administrator is greater than the supported maximum. The maximum value of 15,000 will be used when the domain controller is the RID master. See http://go.microsoft.com/fwlink/?LinkId=225963 for more information.
Notes and resolution: The maximum value for the RID Block Size is now 15,000 decimal (0x3A98 hexadecimal). A domain controller cannot request more than 15,000 RIDs. This event logs at every boot until the value is set at or below this maximum.

Event ID: 16654
Source: Directory-Services-SAM
Severity: Informational
Message: A pool of account-identifiers (RIDs) has been invalidated. This may occur in the following expected cases:
1. A domain controller is restored from backup.
2. A domain controller running on a virtual machine is restored from snapshot.
3. An administrator has manually invalidated the pool.
See http://go.microsoft.com/fwlink/?LinkId=226247 for more information.
Notes and resolution: If this event is unexpected, contact all domain administrators and determine which of them performed the action. The Directory Services event log also contains further information on when one of these steps was performed.

Event ID: 16655
Source: Directory-Services-SAM
Severity: Informational
Message: The global maximum for account-identifiers (RIDs) has been increased to %1. See http://go.microsoft.com/fwlink/?LinkId=233329 for more information including important operating system interoperability requirements.
Notes and resolution: If this event is unexpected, contact all domain administrators and determine which of them performed the action. This event notes the increase of the overall RID pool size beyond the default of 2^30 and will not happen automatically; only by administrative action.

Event ID: 16656
Source: Directory-Services-SAM
Severity: Warning
Message: Action required! An account-identifier (RID) pool was allocated to this domain controller. The pool value indicates this domain has consumed a considerable portion of the total available account-identifiers.
A protection mechanism will be activated when the domain reaches the following threshold of total available account-identifiers remaining: %1.
The protection mechanism prevents the allocation of account-identifier (RID) pools needed to allow existing DCs to create additional users, computers and groups, or promote new DCs into the domain. The mechanism will remain active until the Administrator manually re-enables account-identifier allocation on the RID master domain controller.
See http://go.microsoft.com/fwlink/?LinkId=228610 for more information.
Notes and resolution: Contact all domain administrators and inform them that the domain is close to preventing any further principal creation. Interrogate all administrators to find out who or what is creating principals lately and examine the Diagnosis section here for more inventory steps.

Event ID: 16657
Source: Directory-Services-SAM
Severity: Error
Message: Action required! This domain has consumed a considerable portion of the total available account-identifiers (RIDs). A protection mechanism has been activated because the total available account-identifiers remaining is approximately: %1.
The protection mechanism prevents the allocation of account-identifier (RID) pools needed to allow existing DCs to create additional users, computers and groups, or promote new DCs into the domain. The mechanism will remain active until the Administrator manually re-enables account-identifier (RID) allocation on the RID master domain controller.
It is extremely important that certain diagnostics be performed prior to re-enabling account creation to ensure this domain is not consuming account-identifiers at an abnormally high rate. Any issues identified should be resolved prior to re-enabling account creation.
Failure to diagnose and fix any underlying issue causing an abnormally high rate of account-identifier consumption can lead to account-identifier (RID) pool exhaustion in the domain, after which account creation will be permanently disabled in this domain.
See http://go.microsoft.com/fwlink/?LinkId=228610 for more information.
Notes and resolution: Contact all domain administrators and inform them that no further security principals can be created in this domain until this protection is overridden. Interrogate all administrators to find out who or what is creating principals lately and examine the Diagnosis section here for more inventory steps. Use the steps above to unlock the 31st RID bit only after you have determined that any runaway issuance is not going to continue.

Event ID: 16658
Source: Directory-Services-SAM
Severity: Warning
Message: This event is a periodic update on the remaining total quantity of available account-identifiers (RIDs). The number of remaining account-identifiers is approximately: %1.
Account-identifiers are used as accounts are created; when they are exhausted, no new accounts may be created in the domain.
See http://go.microsoft.com/fwlink/?LinkId=228745 for more information.
Notes and resolution: Contact all domain administrators and inform them that RID consumption has crossed a major milestone; determine whether this is expected behavior by reviewing security trustee creation patterns. Seeing this event at all would be highly unusual, as it means that at least ~100 million RIDs have been allocated.

These are just some of the excellent supportability changes available in Windows Server 2012 AD DS. For more info, check out the TechNet library starting at:

http://technet.microsoft.com/en-us/library/hh831484

I hope to have more of these kinds of posts coming along soon, as the gloves were taken off this week for Windows Server 2012. You know me though – something shiny goes by and I vanish for weeks. We’ll see…

Ned “The Chronicles of RID” Pyle

RSA Key Blocking is Here!


Hello everyone. Jonathan here again with another Public Service Announcement post.

Today, Microsoft has published a new Security Advisory:

Microsoft Security Advisory (2661254): Update For Minimum Certificate Key Length

The Security Advisory and the accompanying KB article have complete information about the software update, but the key takeaway is that this update is now available on the Download Center and the Microsoft Update Catalog. In addition, Microsoft will release this software update through Microsoft Update (aka Windows Update) in October 2012. So all of you enterprise customers have two months to start testing this update to see what impact it has in your environments.

If you want information on finding weak keys in your environment, review the KB article. It describes several methods you can use. Microsoft Support has also created a PowerShell script that has been posted to the TechNet Script Center.
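That script is the supported route; purely as an illustration of the idea, here is a quick sketch that flags short RSA keys in the local computer's Personal store (the store location and the 1024-bit bar are assumptions for the example):

Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.PublicKey.Oid.FriendlyName -eq 'RSA' -and $_.PublicKey.Key.KeySize -lt 1024 } |
    Select-Object Subject, NotAfter, @{Name='KeySize'; Expression={$_.PublicKey.Key.KeySize}}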

Finally, a warning for those of you who use makecert.exe to create test certificates. By default, makecert.exe creates certificates that chain up to the Root Agency root CA certificate located in the Intermediate Certification Authorities store. The Root Agency CA certificate has a 512-bit public key, so once you deploy this update, no certificate created with makecert.exe will be considered valid.

You should now consider makecert.exe deprecated. As a replacement, starting with Windows 7 / Windows Server 2008 R2, you can use certreq.exe to create a self-signed certificate. For example, to create a self-signed code signing certificate you can create the following .INF file:

[NewRequest]
Subject = "CN=Self Signed Cert"
KeyLength = 2048
ProviderName = "Microsoft Enhanced Cryptographic Provider v1.0"
KeySpec = "AT_SIGNATURE"
KeyUsage = "CERT_DIGITAL_SIGNATURE_KEY_USAGE"
RequestType = Cert
SMIME = False
ValidityPeriod = Years
ValidityPeriodUnits = 2

[EnhancedKeyUsageExtension]
OID = 1.3.6.1.5.5.7.3.3

The important line above is the RequestType value. That tells certreq.exe to create a self-signed certificate. Along with that value, the ValidityPeriod and ValidityPeriodUnits values allow you to specify the lifetime of the self-signed certificate.

Once you create the .INF file, run the following command:

Certreq -new selfsigned.inf selfsigned.crt

This will take your .INF file and generate a new self-signed certificate that you can use for testing.

Ok, so this was supposed to be a short post pointing to where you need to go, but it turns out that I had some other related stuff. The important message here is go read the Security Advisory and the KB article.

Go read the Security Advisory and the KB article.

Ex pace.

Jonathan “I am the Key Master” Stephens

Detaining Docs with DAC


Hey all, Ned here again with a quick advert:

Robert Deluca from our Partner and Customer team just published a blog post on Dynamic Access Control. He walks through the configuration of “document quarantine” to protect sensitive data on file shares and automatically clean up files that violate storage policies. We’ve seen a lot of DAC blog posts over the past couple of months but this one talks about a real-world scenario Robert encountered with some of our early Beta customers.

Document Quarantine with Windows Server 2012 Dynamic Access Control

Definitely take a look at this one!

- Ned "CDC" Pyle

AD Replication Status Tool is Live


Hey all, Ned here with some new troubleshooting tool love, courtesy of the ADREPLSTATUS team at Microsoft. I’ll let them do the talking:

The Active Directory Replication Status Tool (ADREPLSTATUS) is now LIVE and available for download at the Microsoft Download Center.

ADREPLSTATUS helps administrators identify, prioritize and resolve Active Directory replication errors on a single DC or all DCs in an Active Directory Domain or Forest. Cool features include:

  • Auto-discovery of the DCs and domains in the Active Directory forest to which the ADREPLSTATUS computer is joined
  • “Errors only” mode allows administrators to focus only on DCs reporting replication failures
  • Upon detection of replication errors, ADREPLSTATUS uses its tight integration with resolution content on Microsoft TechNet to display the resolution steps for the top AD Replication errors
  • Rich sorting and grouping of result output by clicking on any single column header (sort) or by dragging one or more column headers to the filter bar. Use one or both options to arrange output by last replication error, last replication success date, source DC, naming context, etc.
  • The ability to export replication status data so that it can be imported and viewed by source domain admins, destination domain admins or support professionals using either Microsoft Excel or ADREPLSTATUS
  • The ability to choose which columns you want displayed and their display order. Both settings are saved as a preference on the ADREPLSTATUS computer
  • Broad OS version support (Windows XP -> Windows Server 2012 Preview)

The ADREPLSTATUS UI consists of a toolbar and an Office-style ribbon that exposes different features. The Replication Status Viewer tab displays the replication status for all DCs in the forest. The screenshot below shows ADREPLSTATUS highlighting a DC that has not replicated in tombstone lifetime number of days (identified here by the black color-coding).

[screenshot]

Using the Errors Only button, you can filter out healthy DCs to focus on destination DCs reporting replication errors.

[screenshot]

The Replication Error Guide has a Detected Errors Summary view that records each unique replication error occurring on the set of DCs targeted by the administrator.

[screenshot]

Close up of the Detected Errors Summary view.

[screenshot]

Selecting any of the replication error codes loads the recommended troubleshooting content for that replication error. The TechNet Article for AD Replication Error 1256 is shown below.

[screenshot]

The goals for this tool are to help administrators identify and resolve Active Directory replication errors before they cause user and application failures, outages, or lingering objects caused by short- and long-term replication failures, and to provide administrators greater insight into the operation of Active Directory replication within their environments.

The current version of ADREPLSTATUS as of this posting is 2.2.20717.1 (as reported by ADREPLSTATUS startup splash screen).

Known Issues

Symptom: ADREPLSTATUS fails to launch on highly secure computers.
Status: ADREPLSTATUS will not work when the following security setting is enabled on the operating system:
• System cryptography: Use FIPS 140 compliant cryptographic algorithms, including encryption, hashing and signing algorithms

Symptom: Extra checkmark appears at bottom of column chooser when right clicking on a column header.
Status: Known issue and by design.

Support

  • ADREPLSTATUS is a read-only tool and makes no changes to the configuration of, or objects in an Active Directory forest
  • The ADREPLSTATUS tool is supported by the ADREPLSTATUS team at Microsoft. Administrators and support professionals who experience errors installing or executing ADREPLSTATUS may submit a "problem report" on the following web page:

http://social.technet.microsoft.com/wiki/contents/articles/12707.active-directory-replication-status-tool-adreplstatus-resources-page-en-us.aspx

  • If the issue is known, the ADREPLSTATUS team will reply to this page with the status of the issue. The status field will be listed as "known issue", "by design", "investigating", "in progress" or "resolved" with supporting text
  • If a problem requires additional investigation, the ADREPLSTATUS team will contact you at the email address provided in your problem report submission
  • ETA for problem resolution will depend on team workload, problem complexity and root cause. Code defects within the ADREPLSTATUS tool can typically be resolved more quickly. Tool failures due to external root causes will take longer unless a work-around can be found
  • The ADREPLSTATUS team cannot and will not resolve AD replication errors identified by the ADREPLSTATUS tool. Contact your support provider, including Microsoft support for assistance as required. You may also submit and research replication errors on:

http://social.technet.microsoft.com/forums/en-US/winserverDS/threads/

 

Until next time,

Ned “repple depple” Pyle

Monthly Mail Sack: Yes, I Finally Admit It Edition


Heya folks, Ned here again. Rather than continue the lie that this series comes out every Friday like it once did, I am taking the corporate approach and rebranding the mail sack. Maybe we’ll have the occasional Collector’s Edition versions.

This month, I answer your questions on:

Let’s incentivize our value props!

Question

Everywhere I look, I find documentation saying that when Kerberos skew exceeds five minutes in a Windows forest, the sky falls and the four horsemen arrive.

I recall years ago at a Microsoft summit when I brought that time skew issue up and the developer I was speaking to said no, that isn't the case anymore, you can log on fine. I recently re-tested that and sure enough, no amount of skew on my member machine against a DC prevents me from authenticating.

Looking at the network trace, I see the KRB_AP_ERR_SKEW response to the AS REQ, which is followed by the Kerberos connection being torn down, then immediately reestablished with another AS REQ that works just fine and is answered with a proper AS REP.

My first question is.... Am I missing something?

My second question is... While I realize that third party Kerb clients may or may not have this functionality, are there instances where it doesn't work within Windows Kerb clients? Or could it affect other scenarios like AD replication?

Answer

Nope, you’re not missing anything. If I try to logon from my highly-skewed Windows client and apply group policy, the network traffic will look approximately like:

Frame  Source  Destination  Packet Data Summary
1      Client  DC           AS Request Cname: client$ Realm: CONTOSO.COM Sname:
2      DC      Client       KRB_ERROR - KRB_AP_ERR_SKEW (37)
3      Client  DC           AS Request Cname: client$ Realm: CONTOSO.COM Sname: krbtgt/CONTOSO.COM
4      DC      Client       AS Response Ticket[Realm: CONTOSO.COM, Sname: krbtgt/CONTOSO.COM]
5      Client  DC           TGS Request Realm: CONTOSO.COM Sname: cifs/DC.CONTOSO.COM
6      DC      Client       KRB_ERROR - KRB_AP_ERR_SKEW (37)
7      Client  DC           TGS Request Realm: CONTOSO.COM Sname: cifs/DC.CONTOSO.COM
8      DC      Client       TGS Response Cname: client$

When your client sends a time stamp that is outside the range of Maximum tolerance for computer clock synchronization, the DC comes back with that KRB_AP_ERR_SKEW error - but the error also contains an encrypted copy of the DC's own time stamp. The client uses that to create a valid time stamp to send back. This doesn't decrease security in the design because we are still using encryption and requiring knowledge of the secrets, plus there is still only - by default - 5 minutes for an attacker to break the encryption and start impersonating the principal or attempt replay attacks. That is not feasible with even XP's 11-year-old cipher suites, much less Windows 8's.

This isn't some Microsoft wackiness either - RFC 4120 states:

If the server clock and the client clock are off by more than the policy-determined clock skew limit (usually 5 minutes), the server MUST return a KRB_AP_ERR_SKEW. The optional client's time in the KRB-ERROR SHOULD be filled out.

If the server protects the error by adding the Cksum field and returning the correct client's time, the client SHOULD compute the difference (in seconds) between the two clocks based upon the client and server time contained in the KRB-ERROR message.

The client SHOULD store this clock difference and use it to adjust its clock in subsequent messages. If the error is not protected, the client MUST NOT use the difference to adjust subsequent messages, because doing so would allow an attacker to construct authenticators that can be used to mount replay attacks.

Hmmm… SHOULD. Here’s where things get more muddy and I address your second question. No one actually has to honor this skew correction:

  1. Windows 2000 didn’t always honor it. But it’s dead as fried chicken, so who cares.
  2. Not all third parties honor it.
  3. Windows XP and Windows Server 2003 do honor it, but there were bugs that sometimes prevented it (long gone, AFAIK). Later Windows OSes do of course and I know of no regressions.
  4. If the clock of the client computer is faster than the clock time of the domain controller plus the lifetime of Kerberos ticket (10 hours, by default), the Kerberos ticket is invalid and auth fails.
  5. Some non-client logon application scenarios enforce the strict skew tolerance and don't care to adjust, because of other time needs tied to Kerberos and security. AD replication is one of them - event LSASRV 40960 with extended error 0xc0000133 comes to mind in this scenario, as does trying to run DSSite.msc "replicate now" and getting back error 0x576 "There is a time and / or date difference between the client and the server." I have recent case evidence of Dcpromo enforcing the 5 minutes with Kerberos strictly, even in Windows Server 2008 R2, although I have not personally tried to validate it. I've seen it with appliances and firewalls too.

With that RFC’s indecisiveness and the other caveats, we beat the “just make sure it’s no more than 5 minutes” drum in all of our docs and here on AskDS. It’s too much trouble to get into what-ifs.

We have a KB tucked away on this here but it is nearly un-findable.
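By the way, if you want to see how skewed a machine actually is against a DC, w32tm will chart the offset for you (the DC name here is illustrative):

w32tm /stripchart /computer:DC.CONTOSO.COM /samples:5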

Awesome question.

Question

I’ve found articles on using Windows PowerShell to locate all domain controllers in a domain, and even all GCs in a forest, but I can’t find one to return all DCs in a forest. Get-AdDomainController seems to be limited to a single domain. Is this possible?

Answer

It’s trickier than you might think. I can think of two ways to do this; perhaps commenters will have others. The first is to get the domains in the forest, then find one domain controller in each domain and ask it to list all the domain controllers in its own domain. This gets around the limitation of Get-AdDomainController for a single domain (single line wrapped).

(get-adforest).domains | foreach {Get-ADDomainController -discover -DomainName $_} | foreach {Get-addomaincontroller -filter * -server $_} | ft hostname

The second is to go directly to the native .NET AD DS forest class to return the domains for the forest, then loop through each one returning the domain controllers (single line wrapped).

[system.directoryservices.activedirectory.Forest]::GetCurrentForest().domains | foreach {$_.DomainControllers} | foreach {$_.hostname}

This also led to updated TechNet content. Good work, Internet!

Question

Hi, I've been reading up on RID issuance management and the new RID Master changes in Windows Server 2012. They still leave me with a question, however: why are RIDs even needed in a SID? Can't the SID be incremented on its own? The domain identifier seems to be an adequately large number, larger than the 30-bit RID anyway. I know there's a good reason for it, but I just can't find any material that says why there are separate domain and relative IDs in a SID.

Answer

The main reason is that a SID needs the domain identifier portion to have contextual meaning. By using the same domain identifier on all security principals from that domain, we can quickly and easily identify SIDs issued from one domain or another within a forest. This is useful for a variety of security reasons under the hood.

That also allows us a useful technique called “SID compression”, where we want to save space in a user’s security data in memory. For example, let’s say I am a member of five domain security groups:

DOMAINSID-RID1
DOMAINSID-RID2
DOMAINSID-RID3
DOMAINSID-RID4
DOMAINSID-RID5

With a constant domain identifier portion on all five, I now have the option to use one domain SID portion on all the other associated ones, without using all the memory up with duplicate data:

DOMAINSID-RID1
“-RID2
“-RID3
“-RID4
“-RID5

The consistent domain portion also fixes a big problem: if all of the SIDs held no special domain context, keeping track of where they were issued from would be a much bigger task. We'd need some sort of big master database ("The SID Master"?) in an environment that understood all forests and domains and local computers and everything. Otherwise we'd have a higher chance of duplication through differing parts of a company. Since the domain portion of the SID is unique and the RID portion is an unsigned integer that only climbs, it's pretty easy for RID masters to take care of that case in each domain.

You can read more about this in coma-inducing detail here: http://technet.microsoft.com/en-us/library/cc778824.aspx.

Question

When I want to set folder and application redirection for our users in a different forest (with a forest trust) in our Remote Desktop Services server farm, I cannot find users or groups from the other domain. Is there a workaround?

Answer

The Object Picker in this case doesn't allow you to select objects from the other forest - this is a limitation of the UI that the Folder Redirection folks put in place. They write their own FR GP management tools, not the GP team.

Windows, by default, does not process group policy from user logon across a forest—it automatically uses loopback Replace.  Therefore, you can configure a Folder Redirection policy in the resource domain for users and link that policy to the OU in the domain where the Terminal Servers reside.  Only users from a different forest should receive the folder redirection policy, which you can then base on a group in the local forest.

Question

Does USMT support migrating multi-monitor settings from Windows XP computers, such as which one is primary, the resolutions, etc.?

Answer

USMT 4.0 does not support migrating any monitor settings from any OS to any OS (screen resolution, monitor layout, multi-monitor, etc.). Migrating hardware settings and drivers from one computer to another is dangerous, so USMT does not attempt it. I strongly discourage you from trying to make this work through custom XML for the same reason - you may end up with unusable machines.

Starting in USMT 5.0, a new replacement manifest - for Windows 7 to Windows 7, Windows 7 to Windows 8, or Windows 8 to Windows 8 migrations only - named "DisplayConfigSettings_Win7Update.man" was added. For the first time in USMT, it migrates:

<pattern type="Registry">HKLM\System\CurrentControlSet\Control\GraphicsDrivers\Connectivity\* [*]</pattern>
<pattern type="Registry">HKLM\System\CurrentControlSet\Control\GraphicsDrivers\Configuration\* [*]</pattern>

This is OK on Win7 and Win8 because the OS itself knows what is valid and invalid in that context and discards/fixes things as necessary. I.e. this is safe only because USMT doesn't actually do anything but copy some values, relying on the OS to fix things after migration is over.

Question

Our proprietary application is having memory pressure issues, which manifest when someone runs gpupdate or waits for GP to refresh; sometimes it's bad enough to cause a crash. I was curious if there is a way to stop the policy refresh from occurring.

Answer

Preventing total refresh becomes vaguely possible only in Vista and later; you could prevent the group policy service from running at all (no, I am not going to explain how). The internet is filled with thousands of people repeating a myth that preventing GP refresh is possible with an imaginary registry value on Win2003/XP - it isn't.

What you could do here is prevent background refresh altogether. See the policies in the “administrative templates\system\group policy” section of GP:

1. You could enable the policy "group policy refresh interval for computers" and apply it to that one server, setting the background refresh interval to 45 days (the max). That way the server would be far more likely to reboot in the meantime for a patch Tuesday or whatever, and would never have a chance to refresh automatically.

2. You could also enable each of the group policy extension policies (ex: “disk quota policy processing”, “registry policy processing”) and set the “do not apply during periodic background processing” option on each one.  This may not actually prevent GPUPDATE /FORCE though – each CSE may decide to ignore your background refresh setting; you will have to test, as this sounds boring.

Keep in mind for #1 that there are two of those background refresh policies - one per user ("group policy refresh interval for users"), one per computer ("group policy refresh interval for computers"). They both operate in terms of each boot up or each interactive logon, on a per-computer/per-user basis respectively. I.e. if you log on as a user, you apply your policy. Policy will then not refresh for 45 days for that user if you were to stay logged on the whole time. If you log off at 22 days and log back on, you apply policy again, because that is not a refresh - it's interactive logon foreground policy application.

Ditto for computers, only replace “logon” with “boot up”. So it will apply the policy at every boot up, but since your computers reboot daily, never again until the next bootup.

After those thoughts… get a better server or a better app. :)

Question

I’m testing Virtualized Domain Controller cloning in Windows Server 2012 on Hyper-V and I have DCs with snapshots. Bad bad bad, I know, but we have our reasons and we at least know that we need to delete them when cloning.

Is there a way to keep the snapshots on the source computer, but not use VM exports? I.e. I just want the new copied VM to not have the old source machine’s snapshots.

Answer

Yes, through the new Hyper-V disk management Windows PowerShell cmdlets or through the management snap-in.

Graphical method

1. Examine the settings of your VM and determine which disk is the active one. When using snapshots, it will be an AVHD/X file.


2. Inspect that disk and you see the parent as well.


3. Now use the Edit Disk… option in the Hyper-V manager to select that AVHD/X file:

[screenshot]

4. Merge the disk to a new copy:

[screenshots]

Windows PowerShell method

Much simpler, although slightly counter-intuitive. Just use:

Convert-VHD

For example, to export the entire chain of a VM's disk snapshots and parent disk into a new single disk with no snapshots named DC4-CLONED.VHDX:

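(The original post showed the command as a screenshot; here is a sketch of what it depicted, with illustrative paths - only the DC4-CLONED.VHDX name comes from the text above.)

Convert-VHD -Path 'D:\VMs\DC4\DC4_Snapshot.avhdx' -DestinationPath 'D:\VMs\DC4-CLONED.VHDX' -VHDType Dynamic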

You don’t actually have to convert the disk type in this scenario (note how I went from dynamic to dynamic). There is also Merge-VHD for more complex differencing disk and snapshot scenarios, but it requires some extra finagling and disk copying, and  isn’t usually necessary. The graphical merge option works well there too.

As a side note, the original Understand And Troubleshoot VDC guide now redirects to TechNet. Coming soon(ish) is an RTM-updated version of the original guide, in web format, with new architecture, troubleshooting, and other info. I robbed part of my answer above from it – as you can tell by the higher quality screenshots than you usually see on AskDS – and I’ll be sure to announce it. Hard.

Question

It has always been my opinion that if a DC with a FSMO role went down, the best approach is to seize the role on another DC, rebuild the failed DC from scratch, then transfer the role back. It's also been my opinion that as long as you have more than one DC and there has not been any data loss or corruption, it is better not to restore.

What is the Microsoft take on this?

Answer

This is one of those “it depends” scenarios:

1. The downside to restoring from (usually proprietary) backup solutions is that the restore process just isn't something most customers test and work out the kinks on until it actually happens; tons of time is spent digging out the right tapes, finding the right software, looking up the restore process, contacting that vendor, etc. Often a restore doesn't work at all, so all the attempts are just wasted effort. I freely admit that my judgment is tainted through my MS Support experience here - customers do not call us to say how great their backups worked, only that they have a down DC and they can't get their backups to restore.

The upside is if your recent backup contained local changes that had never replicated outbound due to latency, restoring them (even non-auth) still means that those changes will have a chance to replicate out. E.g. if someone changed their password or some group was created on that server and captured by the backup, you are not losing any changes. It also includes all the other things that you might not have been aware of – such as custom DFS configurations, operating as a DNS server that a bunch of machines were solely pointed to, 3rd party applications pointed directly to the DC by IP/Name for LDAP or PDC or whatever (looking at you, Open Source software!), etc. You don’t have to be as “aware”, per se.

2. The downside to seizing the FSMO roles and cutting your losses is the converse of my previous point around latent changes; those objects and attributes that could not replicate out but were caught by the backup are gone forever. You also might miss some of those one-offs where someone was specifically targeting that server – but you will hear from them, don’t worry; it won’t be too hard to put things back.

The upside is you get back in business much faster in most cases; I can usually rebuild a Win2008 R2 server and make it a DC before you even find the guy that has the combo to the backup tape vault. You also don’t get the interruptions in service for Windows from missing FSMO roles, such as DCs that were low on their RID pool and now cannot retrieve more (this only matters with default, obviously; some customers raise their pool sizes to combat this effect). It’s typically a more reliable approach too – after all, your backup may contain the same time bomb of settings or corruption or whatever that made your DC go offline in the first place. Moreover, the backup is unlikely to contain the most recent changes regardless – backups usually run overnight, so any un-replicated originating updates made during the day are going to be nuked in both cases.

For all these reasons, we in MS Support generally recommend a rebuild rather than a restore, all things being equal. Ideally, you fix the actual server and do neither!

As a side note, restoring the RID master used to cause issues that we first fixed in Win2000 SP3. This has unfortunately lived on as a myth that you cannot safely restore the RID master. Nevertheless, if someone impatiently seizes that role and then someone else restores that backup, you get a new problem where you cannot issue RIDs anymore. Your DC will also refuse to claim role ownership with a restored RID Master (or any FSMO role) if your restored server has an AD replication problem that prevents at least one good replication with a partner. Keep those in mind for planning no matter how the argument turns out!

Question

I am trying out Windows Server 2012 and its new Minimal Server Interface. Is there a way to use WMI to determine if a server is running with a Full Installation, Core Installation, or a Minimal Shell installation?

Answer

Indeed, although it hasn't made its way to MSDN quite yet. The Win32_ServerFeature class returns a few new properties in our latest operating system. You can use WMIC or Windows PowerShell to browse the installed ones. For example:

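(The original output appeared here as a screenshot; a sketch of an equivalent query:)

Get-WmiObject -Class Win32_ServerFeature | Sort-Object ID | Format-Table ID, Name -AutoSize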

The “99” ID is Server Graphical Shell, which means, in practical terms, “Full Installation”. If 99 alone is not present, that means it’s a minshell server. If the “478” ID is also missing, it’s a Core server.

E.g. if you wanted to apply some group policy that only applied to MinShell servers, you’d set your query to return true if 99 was not present but 478 was present.

Other Stuff

Speaking of which, Windows Server 2012 General Availability is September 4th. If you manage to miss the run up, you might want to visit an optometrist and/or social media consultant.

Stop worrying so much about the end of the world and think it through.

So awesome:

[video]

And so fake :(

If you are married to a psychotic Solitaire player who poo-poo’ed switching totally to the Windows 8 Consumer Preview because they could not get their mainline fix of card games, we have you covered now in Windows 8 RTM. Just run the Store app and swipe for the Charms Bar, then search for Solitaire.


It’s free and exactly 17 times better than the old in-box version:

[screenshot: OMG Lisa, stop yelling at me!]

Is this the greatest geek advert of all time?

[video]

Yes. Yes it is.

When people ask me why I stopped listening to Metallica after the Black Album, this is how I reply:

[photo: Hetfield in Milan]
[photo: Ride the Lightning Mercedes]

We have quite a few fresh, youthful faces here in MS Support these days and someone asked me what “Mall Hair” was when I mentioned it. If you graduated high school between 1984 and 1994 in the Midwestern United States, you already know.

Finally – I am heading to Sydney in late September to yammer in-depth about Windows Server 2012 and Windows 8. Anyone have any good ideas for things to do? So far I’ve heard “bridge climb”, which is apparently the way Australians trick idiot tourists into paying for death. They probably follow it up with “funnel-web spider petting zoo” and “swim with the saltwater crocodiles”. Lunatics.

Until next time,

- Ned “I bet James Hetfield knows where I can get a tropical drink by the pool” Pyle

One of us: What it was like to interview for a support role at Microsoft


Hello, Kim here again. We get many questions about what to expect when interviewing at Microsoft. I’m coming up on my two year anniversary at Microsoft and I thought I would share my experience in the hope that it might help you if you are interested in applying to Microsoft Support; if nothing else, there is some educational and entertainment value in reading about me being interviewed by Ned. :)

Everyone at Microsoft has a unique story to tell about how they were hired. On the support side of Microsoft, many of us were initially hired as contractors and later offered a full-time position. Others were college hires, starting our first real jobs here. It seems some have just been here forever. Then there are a few of us, myself included, that were industry hires. Over the years, I've submitted my résumé to Microsoft a number of times. I have always wanted to work for Microsoft, but never really expected to be contacted since there aren’t many Microsoft positions available in central Indiana (where I’m from). I had a good job and wasn’t particularly unhappy in it, but the opportunity to move up was limited in my current role. I casually looked for a new position for a couple of months and had been offered one job, but it just didn't feel like the right fit. Around the same time, I submitted my résumé to Microsoft for a Support Engineer position on the Directory Services support team in Charlotte. Much to my surprise, I received an email that began a wild ride of excitement, anxiety, anticipation, and fear that ultimately resulted in my moving from the corn fields of the Midwest (there is actually more than corn in Indiana, btw) to the land of sweet tea.

I never expected that Microsoft would contact me, due to the sheer volume of résumés they receive daily and the fact that the position was in Charlotte and I was not. About a week after I submitted my résumé, I received an email requesting a phone interview with the Directory Services team. I, of course, responded immediately, and a phone interview was set up for three days out. When I submitted my résumé, I didn't think I'd be contacted, and if I was, I definitely thought I'd have more than three days to prepare! The excitement lasted about 30 seconds before the reality of the situation set in . . . I was going to have an interview with Microsoft in three days! Just to add to the anxiety level, Ned Pyle (cue the Halloween theme) was going to do my phone screen!

Preparation - Phone Screen

I didn't know where to start to prepare. As with any phone screen, you have no idea what types of questions you will be asked. Would it be a technical interview, or just a review of my résumé and qualifications? I didn't know what to expect. I assumed that since Ned was calling me there would be some technical aspect to it, but I wasn't sure. There's no wiki article on how to interview at Microsoft. :) On top of that, I'd heard rumors of questions about manhole covers and all kinds of other strange problem-solving questions. This was definitely going to be more difficult than any other interview I'd ever had.

Once I got over the initial panic, I decided I needed to start with the basics. This was a position for the Directory Services team, so I dug out all of the training books from the last eight years of working with Active Directory and put together a list of topics I knew I needed to review. I also did a Bing search on Active Directory Interview questions and I found a couple of lists of general AD questions. Finally, I went to the source, the AskDS blog, and searched for information on "hiring" and found a link to Post-Graduate AD Studies.

My resource list looked something like this:

1. Post-Graduate AD Studies (thanks, Ned)

2. O'Reilly Active Directory book (older version)

3. Training manual from Active Directory Troubleshooting course that was offered by MCS many years ago

4. Training manuals from a SANS SEC505 Securing Windows course

5. MS Press Active Directory Pocket Consultant

6. MS Press Windows Group Policy Guide

7. AD Interview Questions Bing search

   a) http://www.petri.co.il/mcse_system_administrator_active_directory_interview_questions.htm

   b) http://www.petri.co.il/mcse-system-administrator-windows-server-2008-r2-active-directory-interview-questions.htm

I only had three days to study, so I decided to start by listing the key areas and rating how strong or weak I was in each. For me, these were:

1. PKI (ugh)

2. AD Replication (good)

3. Kerberos (ick)

4. Authentication (meh)

5. Group Policy (very good)

The SANS manuals had good slides and decent descriptions, so that is where I started. Everyone has different levels of experience and different study habits. What works for me is writing. If I write something down, it seems to solidify it in my mind. I reviewed each of the topics above and focused on writing down the parts either that were new to me or that I needed to focus on in more detail. This approach meant that I was reading both the topics I already understood (as a refresher) and writing down the topics I needed to work on. Next, I went through the various lists of AD interview questions I had found and made sure that I could at least answer all of the questions at a high level. This involved doing some research for some of the questions. The websites with the lists of questions were a good resource because they didn’t give me the answers. I didn’t just want to be able to recite some random acronyms. I wanted to understand, at least at a high level, what all of the basic concepts were and be able to relate them to one another. I knew that I was going to need to have broad knowledge of many topics and then deep knowledge in others.

The worst part of all of this studying was that I didn't have enough lead-time to request time off from work to focus on it. So, while I was eating lunch, I was studying. While I was waiting on servers to build, I was studying. While I was waiting on VMs to clone, guess what? I was studying. :) By the end of the three days of studying, I was pretty much a nervous wreck and ready for this phone screen to end.

The Phone Screen

This is where you'd like me to tell you what questions Ned asked me, but . . . that isn't going to happen. Bwahahaha. :-)

What I can tell you about the interview is that it wasn't solely about rote knowledge, which is good since I had prepared for more than just how to spell AD & PKI. Knowing the high-level concepts was good; he asked a few random questions to see how far I could explain some of the technologies. It was more important to know what to do with this information and how to troubleshoot given what you know about a particular technology. If you can't apply the concepts to a real world scenario then the knowledge is useless. Throughout the interview, there were times where I couldn't come up with the right words or terms for something and I imagined Ned sitting there playing with his beard out of boredom.

image

In those situations, I found Ned was awake after all: he either helped me through them or skipped to something else that eventually led back to the part I’d been struggling with, this time with better results. For that, I was grateful, and it helped me keep my nerves in check as well. While fielding the flood of questions, I tried to keep a list of the topics we were discussing just in case I got a follow-up interview. Although I’d like to say that I totally rocked the phone interview and that I’m awesome (ok, I’m pretty cool), I actually thought I’d done alright, but not necessarily well enough to get a follow-up interview. Overall, I didn’t feel like I had come up with responses quickly enough, and Ned had to guide me around a couple of topics before I finally understood what he was getting at a few more times than I would have liked.

On-site interview scheduled - WOOT!

Much to my own disbelief, I did receive that follow-up email to schedule an in-person interview down in sunny Charlotte, NC. Fortunately, I had a little more time to prepare, mainly due to the nature of an on-site interview that is out of state. Logistics were in my favor this time! As I recall, I had about two weeks between when I received notification of the on-site interview and the actual scheduled interview date. This was definitely better than the three days I had to prepare for the phone screen.

With more time, I decided that I would take some days off work to focus on studying. Maybe this is extreme, but that is how important it was to me to get this job. I figured that this was my one shot to get this right and I was going to do everything I possibly could to ensure that I was as prepared as I could possibly be.

This time, I started studying with the list of questions from my phone interview with Ned. I wanted to make sure that if Ned was in my face-to-face interview that I would be able to answer those questions the second time. Then I reviewed all of the questions and notes that I had prepared for my phone interview. Finally, I really started digging in on the Post-Graduate AD Studies from the AskDS blog. I take full responsibility for the small forest of trees I killed in printing all of this material off. I read as much as I could of each of the Core Technology Reading and then I chose three or four areas from the Post Graduate Technology Reading to dig into deeper.

Obviously, I didn't study all day for two weeks. I'd read and then go for a short walk. As the time passed, I began to realize how long two weeks is. Having two weeks to prepare is awesome, but the stress of waking up every day knowing what you need to do and then dealing with the anxiety of just wanting it to be over is harder than I thought it would be. I tried to review my notes at least once a day and then read more of the in-depth content with the goal of ensuring that I had some relatively deep knowledge in some areas, knew the troubleshooting tools and processes, and for the areas I couldn’t go so deep into that I at least knew the lingo and how the pieces fit together. I certainly didn’t want to get all the way to Charlotte and have some basic question come at me and just sit there staring at the conference room table blankly. :-/

By the time I was ready to leave for my interview, I knew that I’d done everything I could to prepare and I just had to hope that the hard work paid off and that my brain cells held out for another day.

The On-site interview

I arrived in Charlotte the evening before the interview. I studied on the flight and then a little the night before. Again, just reviewing my notes and the SANS guide on PKI and Kerberos. I tried not to overdo it. If I wasn't ready at this point, I never would be.

I got to the site a little early that day, so I sat in the car and read more PKI and FRS notes. Then I took about 5 minutes and tried to relax and get my nerves under control (nice try).

The interview itself was intense. It was scheduled for an hour, but by the time I got out of the conference room I’d been in there two and a half hours. There were engineers and managers from both Texas (video conference) and Charlotte in the room. The questions pretty much started where we had left off from the phone interview in terms of complexity. I didn’t get a gimme on the starting point. I think we went for about an hour before they took pity on me and let me get more caffeine and started loading me up on chocolate. By the time I got to the management portion of the interview, I was shaking pretty intensely (probably from all that soda and chocolate that they kept giving me) and I was glad that I’d brought copies of my résumé so I could remember the last 10 years of my work history.

The thing that I appreciated most about the entire process was how understanding everyone was. They know how scary this can be and how nervous people are when they come in for an interview. Although I was incredibly nervous, everyone made me feel comfortable and I felt like they genuinely wanted me to succeed. The management portion of the interview was definitely easier, but they did ask some tough questions as well. I also made sure that I had come prepared with several questions of my own to ask them.

When I finally walked out of the conference room, I felt like a train had hit me. Emotionally I was shot; physically I was somewhere between wired and exhausted. It was definitely the most grueling interview I’d ever experienced, but I knew that I’d done everything I could to prepare. The coolest part happened as I was escorted to my car. As we were finishing our formalities, my host got a call on his cell phone and it was for me. This was probably the weirdest thing that had ever happened to me at an interview. I took his cell phone; it was one of the managers who had participated in my interview, calling to let me know that they were going to make me an offer and wanted me to know before I left so I wouldn’t be worried about it all the way home on the plane. Getting that phone call before I left was an amazing feeling. I’d just been through a grueling interview that I’d spent weeks (really, my entire career) preparing for, and finding out that my hard work had paid off was unbelievable. It didn’t become real until I got my blue badge a few days after my start date.

Hindsight is 20/20

Looking back at my career and my preparation for this role, is there anything that I would do differently to better prepare? Career-wise, I’d say that I did a good job of preparing for this role. I took increasingly more challenging roles from both a technical and a leadership perspective. I led projects that required me to be both the technical leader (designing, planning, testing, documenting a system) and a project leader (collaborating with other teams, managing schedules, reporting progress to management, dealing with road blocks and competing priorities). These experiences have given me insight and perspective on the environments and processes that my customers work with daily.

If I could do anything differently, I’d say that I would have dug in a little deeper on technologies that I didn’t deal with as part of my roles. For instance, learning more about SQL and IIS or even Exchange would have helped me better understand to what degree my technologies are critical to the functionality of others. Often our support cases center on the integration of multiple technologies, so having a better understanding of those technologies can be beneficial.

If you are newer to the industry, focusing on troubleshooting methodologies is a must. The job of support is to assist with troubleshooting in order to resolve technical issues. The entire interview process, from the phone-screen to the on-site interview, focused on my ability to be presented with a situation I am not familiar with and use my knowledge of technology and troubleshooting tools to isolate the problem. If you haven’t reviewed Mark Renoden’s post on Effective Troubleshooting, I highly recommend it. This is what being in support is all about.

Just don’t be these guys

So, what's it really like?

Working in support at Microsoft is by far the most technically demanding role I’ve had during the course of my career. Every day is a new challenge. Every day you work on a problem you’ve never seen before. At times, it’s a lot like working in an emergency room. Systems are down, businesses are losing money, the pressure is high and the expectations are even higher. Fortunately, not all cases are critsits (severity A), and the people I work with are amazing. My row is composed of some of the most intelligent but “unique” people I’ve ever worked with. In ten minutes on the row, you can participate in a conversation about how the code in Group Policy chooses a Domain Controller for writes and which MIDI rendition of “Jump” is the best (for the record, they are all bad). While the cases are difficult and the pressure is intense, the work environment allows us to be ourselves and we are never short on laughs.

The last two years have been an incredible journey. I’ve learned more at Microsoft in two years than I did in five out in the industry. I get to work on some of the largest environments in the world and help people every day. While this isn't a prescription for how to prepare for an interview at Microsoft, it worked for me; and if you're crazy enough to want to work with Ned and the rest of us maybe it will work for you too. GOOD LUCK!

- Kim “Office 2013 has amazing beard search capabilities” Nichols


Updated Group Policy Search service


Mike here with an important service announcement.  In June of 2010, guest poster Kapil Mehra introduced the Group Policy Search service.  The Group Policy Search (GPS) service is a web application hosted on Windows Azure, which enables you to search for registry-based Group Policy settings used in Windows operating systems.

It’s a "plezz-shzaa" to announce that GPS version 1.1.4 is live at http://gps.cloudapp.net.  Version 1.1.4 includes registry-based policy settings from Windows 8 and Windows Server 2012, performance improvements, bug fixes, and a few little surprises.  It's the easiest way to search for a Group Policy setting. 

So, the next time you need to search for a Group Policy setting, or want to know the registry key and value name that backs a particular policy setting-- don't reach for an antiquated settings spreadsheet reference.  Get your Group Policy Search on!!

And, if you act now-- we'll throw in the Group Policy Search Windows Phone 7 application-- for free! That's right, take Group Policy Search with you on the go. What an offer! Group Policy Search and Group Policy Search Windows Phone 7 application -- for one low, low price -- FREE!  Act now and you'll get free shipping.

This is Mike Stephens and "Ned Pyle" approves this message!

Windows Server 2012 GA


Hey folks, Ned here again to tell you what you probably already know: Windows Server 2012 is now generally available.

I don’t often recommend “vision” posts, but Satya Nadella – President of Server and Tools – explains why we made the more radical changes in Windows Server 2012. Rather than start with the opening line, I’ll quote from the finish:

In the 1990s, Microsoft saw the need to democratize computing and made client/server computing available at scale, to customers of all sizes. Today, our goal is to do the same for cloud computing with Windows Server 2012.

On a more personal note: Mike Stephens, Joseph Conway, Tim Quinn, Chuck Timon, Don Geddes, and I dedicated two years to understanding, testing, bug stomping, design change requesting, documenting, and teaching Windows Server 2012. Another couple dozen senior support folks – such as our very own Warren Williams - spent the last year working with customers to track down issues and get feedback. Your feedback. You will see things in Directory Services that were requested through this blog.

Having worked on a number of pre-release products, this is the most Support involvement in any Windows operating system I have ever seen. When combined with numerous customer and field contributions, I believe that Windows Server 2012 is the most capable, dependable, and supportable product we’ve ever made. I hope you agree.

- Ned “also, any DS issues you find were missed by Mike, not me” Pyle

Let the Blogging begin…


Hello AskDS Readers. Mike here again. If you notice, Ned posted one of our first Windows Server 2012 RTM blogs a while back (Managing RID Issuance in Windows Server 2012). Yes friends, the gag order has been lifted and we are allowed to spout mountains of technical goodness about Windows Server 2012 and Windows 8.

"So much time and so little to do. Wait a minute. Strike that. Reverse it." Windows Server 2012 has many cool features that Ned and I have been waiting to share with you. Here is a 50,000-foot view of the technologies and features we are going to blog in the next few weeks and months-- in no specific order.

I'll start by highlighting some of the changes with security, PKI, authentication, and authorization. The Windows Server 2012 Certificate Services role has a few feature changes that should delight many of the certificate administrators out there. With new installation, deployment, and improved configuration-- it's probably the easiest certificate authority to configure.

Windows Server 2012 authentication is a healthy technology with a ton of technical goo just seeping at the seams, starting with the mac-daddy of them all-- Kerberos. In a few weeks, we will begin publishing the first of many installments of Kerberos changes in Windows 8/Windows Server 2012. As a teaser, the lineup includes KDC Proxy Server and the latest and greatest way to configure Kerberos Constrained Delegation-- "It really whips the lama's @#%." We'll take some exhaustive time explaining Kerberos enhancements such as Kerberos Armoring and Compound Identity. We have tons more to share in the area of authentication, including Virtual Smartcard Readers and Picture Password logon.

Advanced client security highlights features like Server Name Indication (SNI) for Windows Server 2012, Certificate Lifecycle Notification, Weak Key Protection (most of which is covered in Jonathan Stephens' latest blog, RSA Key Blocking is Here!), Implicit binding (the infrastructure behind the new Centralized Certificate Store IIS feature), and Client certificate hints. Advanced client security also includes a wicked-cool security enhancement to PFX files and a new PKI module for Windows PowerShell.

At some point in our publishing timeline, we'll launch into the saga of all sagas, Dynamic Access Control. We've hosted guest posts here on AskDS to introduce this radical, amazingly cool new way to perform file-based authorization. This isn't your grandfather's authorization either. Dynamic Access Control or DAC as we’ll call it, requires planning, diligence, and an understanding of many dependencies, such as Active Directory, Kerberos, and effective access. Did I mention there are many knobs you must turn to configure it? No worries though, we'll break DAC down into consumable morsels that should make it easy for everyone to understand.

The concept of claims continues by showing you how to use Windows Server 2012's Active Directory Federation Services role to leverage claims issued by Windows domain controllers. Using AD FS, you can pass-through the Windows authorization claims or transform them into well-known SAML-based claim types.

No, I'm not done yet. I'm going to introduce a well-hidden feature that hasn't received much exposure, but has been labeled "pretty cool" by many training attendees. Access Denied Assistance is a gem of a feature that is locked away within File Server Resource Manager (FSRM). It enables you to provide a SharePoint-like experience for users in Windows Explorer when they hit an access denied or file not found error on a shared file or folder. Access Denied Assistance provides the user with a "Request Access" interface that sends an email to the share owner with details on the access requested and guidance the share owner can follow to remediate the problem. It's very slick.

Wait there is more; this is just my list of topics to cover. Ned has a fun-bag full of Active Directory related material that he'll intermix with these topics to keep things fresh. I'm certain we'll sneak in a few extras that may not be directly related to Directory Services; however, they will help you make your Windows Server 2012 and Windows 8 experience much better. Need to run for now, this blog post just wrote checks my body can't cash.

The line above and below this were intentionally left blank using Microsoft Word 2013 Preview Edition

Mike "There's no earthly way of knowing; which direction they are going... There's no knowing where they're rowing..." Stephens

MaxTokenSize and Windows 8 and Windows Server 2012


Hello AskDS Populous, Mike here and I want to share with you some of the excellent enhancements we accomplished in Windows 8 and Windows Server 2012 around MaxTokenSize. Let’s review MaxTokenSize and its symptoms before we jump in to wonderful world of Windows 8 (say that three times fast).

Wonderful World of Windows 8
Wonderful World of Windows 8
Wonderful World of Windows 8

What is MaxTokenSize

Kerberos has been the default and preferred authentication protocol since the release of Windows 2000 Server. Over the last few years, Microsoft has made some significant investments in providing extensions to the protocol. One of those extensions to Kerberos is the Privilege Attribute Certificate, or PAC (defined in the Windows Server protocol specification MS-PAC).

Microsoft created the PAC to encapsulate authorization-related information in a manner consistent with RFC 4120. The authorization information included in the PAC includes security identifiers, and user profile information such as full name, home directory, and bad password count. The security identifiers (SIDs) included in the PAC represent the user's current SID, any instances of SID history, and security group memberships, including current domain groups, resource domain groups, and universal groups.

Kerberos uses a buffer to store authorization information and reports this size to applications using Kerberos for authentication. MaxTokenSize is the size of the buffer used to store authorization information. This buffer size is important because some protocols, such as RPC and HTTP, use it when they allocate memory for authentication. If the authorization data for a user attempting to authenticate is larger than the MaxTokenSize, then authentication fails for that connection using that protocol. This explains why authentication failures could occur when authenticating to IIS but not when authenticating to a folder shared on a file server. The default buffer size for Kerberos in Windows 7 and Windows Server 2008 R2 is 12k.

Windows 8 and Windows Server 2012

Let's face the facts of today's IT environment… authentication and authorization is not getting easier; it's becoming more complex. In the world of single sign-on and user claims, the amount of authorization data is increasing, and an infrastructure that has already experienced authentication failures because a user was a member of too many groups has reason for concern about the future. Fortunately, Windows 8 and Windows Server 2012 have features to help us take proactive measures to avoid the problem.

Default MaxTokenSize

Windows 8 and Windows Server 2012 benefit from an increased default MaxTokenSize of 48k. Therefore, when HTTP relies on the MaxTokenSize value for memory allocation, it allocates 48k of memory for the authentication buffer, which holds substantially more authorization information than in previous versions of Windows, where the default MaxTokenSize was only 12k.

Group Policy settings

Windows 8 and Windows Server 2012 introduce two new computer-based policy settings that help combat large service tickets, which are the cause of the MaxTokenSize dilemma. The first of these policy settings is not exactly new-- it has been in Windows for years, but only as a registry value. Use the policy setting Set maximum Kerberos SSPI context token buffer size to change the MaxTokenSize using Group Policy. Looking closely at this policy setting in the Group Policy Management Editor, you'll notice the icon for this setting is slightly different from the others around it.

clip_image001

This difference is attributable to the registry location the policy setting modifies when enabled or disabled. It is the same MaxTokenSize registry key and value name that has been used in earlier versions of Windows:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters\MaxTokenSize

Therefore, you can use this computer-based policy setting to manage Windows 8, Windows Server 2012, and earlier versions of Windows. The catch here is that this registry location is not a managed policy location. Managed policy locations are removed and reapplied during policy refreshes to avoid persistent settings in the registry after the settings in a Group Policy object go out of scope. That behavior does not occur with this key: the setting applied by this policy setting is not removed during application, so it persists even if the Group Policy object providing the setting falls out of scope.
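
If you need to seed this value outside of Group Policy on earlier versions of Windows (or verify what the policy wrote), a minimal Windows PowerShell sketch follows. The 48k value (49152 bytes) mirrors the new Windows 8 default and is an assumption; size it for your own environment, and note that a reboot is required before the new buffer size takes effect.

# Set MaxTokenSize directly in the registry (assumed value: 48k, the Windows 8 default).
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters' `
    -Name MaxTokenSize -PropertyType DWord -Value 49152 -Force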

The second policy setting is very cool and answers the question that customers always asked when they encounter a problem with MaxTokenSize: "How big is the token?" You might be one of those people that went on the crusade of a lifetime using TOKENSZ.EXE and spent countless hours trying to determine the optimal MaxTokenSize for your environment. Those days are gone.

A new KDC policy setting, Warning events for large Kerberos tickets, provides you with a way to monitor the size of Kerberos tickets issued by KDCs. When you enable this policy setting, you must also configure a ticket threshold size. The KDC uses the ticket threshold size to determine if it should write a warning event to the system event log. If the KDC issues a ticket that exceeds the ticket threshold size, then it writes a warning. This policy setting, when enabled, defaults to 12k, which is the default MaxTokenSize of previous versions of Windows.

clip_image003

Ideally, if you use this policy setting, you'd want to set the ticket threshold value to approximately 1k less than your current MaxTokenSize. You want it lower than your current MaxTokenSize (unless you are using 12k, which is the minimum value) so you can use the warning events as a proactive measure to avoid an authentication failure due to an incorrectly sized buffer. Set the threshold too low and you'll just train yourself to ignore the Event 31 warnings because they'll become noise in the event log. Set it too high and you're likely to be blindsided by authentication failures rather than warned by events.

clip_image004

Earlier I said that this policy setting solves your problems with fumbling with TOKENSZ and other utilities to determine MaxTokenSize-- here's how. If you examine the details of the Kerberos-Key-Distribution-Center warning event ID 31, you'll notice that it gives you all the information you need to determine the optimal MaxTokenSize in your environment. In the following example, the user Ned is a member of over 1000 groups (he's very popular and a big deal on the Internet). When I attempted to log on as Ned using the RUNAS command, the KDC generated an Event ID 31. The event description provides you with the service principal name, the user principal name, the size of the ticket requested, and the size of the threshold. This enables you to aggregate all the event 31s and identify the maximum ticket size requested. Armed with this information, you can set the optimal MaxTokenSize for your environment.

clip_image006
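
As a rough illustration of that aggregation, here is a hedged Windows PowerShell sketch. The provider name and the index of the ticket-size field in the event properties are assumptions; verify them against an actual Event ID 31 on your own KDCs before trusting the numbers.

# Collect KDC Event ID 31 warnings and report the largest requested ticket size.
# Assumes the provider name below and that Properties[2] holds the ticket size.
Get-WinEvent -ComputerName 'DC01' -FilterHashtable @{
    LogName      = 'System'
    Id           = 31
    ProviderName = 'Microsoft-Windows-Kerberos-Key-Distribution-Center'
} | ForEach-Object { [int]$_.Properties[2].Value } |
    Measure-Object -Maximum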

KDC Resource SID Compression

Kerberos authentication inserts the security identifiers (SIDs) of the security principal, SID history, and all the groups to which the user is a member, including universal groups and groups from the resource domain. Security principals with too many group memberships greatly affect the size of the authentication data, and sometimes the authentication data is larger than the allocated size reported by Kerberos to applications. This can cause authentication failures in some applications. Because SIDs from the resource domain all share the same domain portion, these SIDs can be compressed by providing the resource domain SID only once for all SIDs in the resource domain.

Windows Server 2012 KDCs help reduce the size of the PAC by taking advantage of resource SID compression. By default, a Windows Server 2012 KDC always compresses resource SIDs. To compress resource SIDs, the KDC stores the SID of the resource domain to which the target resource belongs. Then, it inserts only the RID portion of each resource SID into the ResourceGroupIds portion of the authentication data.

Resource SID compression reduces the size of each stored instance of a resource SID because the domain SID is stored once rather than with each instance. Without resource SID compression, the KDC inserts all the SIDs added by the resource domain into the Extra-SID portion of the PAC structure, which is a list of SIDs. [MS-KILE]

Interoperability

Other Kerberos implementations may not understand resource group compression and therefore are not compatible. In these scenarios, you may need to disable resource group compression to allow the Windows Server 2012 KDC to interoperate with the third-party Kerberos implementation.

Resource SID compression is on by default; however, you can disable it. You disable resource SID compression on a Windows Server 2012 KDC using the DisableResourceGroupsFields registry value under the HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kdc\Parameters registry key. This registry value has a DWORD registry value type. You completely disable resource SID compression when you set the registry value to 1. The KDC reads this configuration when building a service ticket; with the value set to 1, the KDC does not use resource SID compression when building the service ticket.
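
For reference, a minimal Windows PowerShell sketch of setting that value follows. The assumption that the Parameters key does not exist yet is mine, so the sketch creates it first; remember this disables the feature, so only do it for third-party interoperability.

# Create the Kdc\Parameters key if needed, then disable resource SID compression.
$key = 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kdc\Parameters'
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name DisableResourceGroupsFields -PropertyType DWord -Value 1 -Force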

Wrap up

There's the skinny on the Kerberos enhancements included in Windows 8 and Windows Server 2012 that specifically target large service ticket and MaxTokenSize scenarios. To summarize:

· Increased default MaxTokenSize from 12k to 48k

· New Group Policy setting to centrally manage MaxTokenSize

· New Group Policy setting to write warnings to the system event log when a service ticket exceeds a designated threshold

· New Resource SID compression to reduce the storage size of SIDs from the resource domain

Keep an eye out for more Windows 8 and Kerberos needful.

Mike "~Mike" Stephens

Monthly Mail Sack: I Hope Your Data Plan is Paid Up Edition


Hi all, Ned here again with that thing we call love. Blog! I mean blog. I have a ton to talk about now that I have moved to the monthly format, and I recommend you switch to WIFI if you’re on your phone.

This round I answer your questions on:

I will bury you!

image
With screenshots!

Question

Is there a way to associate a “new” domain controller with an “existing” domain controller account in Active Directory? I.e. if I have a DC that is dead and has to be replaced, I have to metadata clean the old DC out before I promote a replacement DC with the same name.

Answer

Starting in Windows Server 2012 you can “reinstall” DCs, attaching the new DC to an existing account object that was not removed by demotion or metadata cleanup. This is detected and handled by the AD DS configuration wizard right after you choose a replica DC and reach the DC Options page, or with the Install-AddsDomainController cmdlet using the -AllowDomainControllerReinstall argument.

image
Neato

If you are using an older operating system, no such luck. You should use DSA.MSC or NTDSUTIL to perform metadata cleanup on that old domain controller before promoting its replacement.
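
For the Windows Server 2012 case, a hedged example of the cmdlet form (the domain name and credential prompt are placeholders):

# Promote a replacement DC, reusing the existing DC account of the same name.
Install-ADDSDomainController -DomainName 'corp.contoso.com' `
    -Credential (Get-Credential) -AllowDomainControllerReinstall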

Question

I’ve read in the past – from you - that DFSR using SYSVOL supports the change notification flag on AD DS replication links or connection objects. Is this true? I am finding very inconsistent behavior.

Answer

Not really (and I updated my old writing on this – yes, Ned can be wrong).

DFSR always replicates immediately and continuously with its own internal change notification, as long as the schedule is open; these scheduled windows are in 15 minute blocks and are assigned on the AD DS connection objects.

If the current time matches an open block, you replicate continuously (as fast as possible, sending DFSR change notifications) until that block closes.

If the next block is closed, you wait for 15 minutes, sending no updates at all. If that next block had also been open, you continue replicating at max speed. Therefore, to replicate with change notification, set the connection objects to use a fully opened window. For example:

image

To make DFSR SYSVOL replication slower, you must close the replication schedule windows on the connections. But since the historical scenario is a desire to make group policy/script replication faster - and since it is better that SYSVOL beat AD DS, because SYSVOL contains the files that are called on once AD DS is updated - this scenario is less likely or important. Not to mention that ideally, SYSVOL is pretty static.

Question

I was using the new graphical Fine Grained Password Policy in Windows Server 2012 AD Administrative Center. I realized that it lets me set a minimum password length of 255 characters.

image

When I edit group policy in GPMC, it doesn’t let me set a minimum of more than 14 characters!

image

Did I find a bug?

Answer

Nope. The original reason for the 14-character limit was to force users to set a 15-character password, which in turn forced the removal of LM password hashes (which is sort of silly at this point, as we have a security setting called Do not store LAN Manager hash value on next password change that makes this moot and is enabled by default in our later operating systems). The security policy editor enforces the 14-character limit, but this is not the actual limit. You can use ADSIEDIT to change it, for example, and that will work.

The true maximum limit in Active Directory for your password is 255 Unicode characters, and that’s what ADAC is enforcing. But many pieces of Windows software limit you to 127-character passwords, or even fewer; for example, the NET USE command: if you set a password to 254 characters and then attempt to map a drive with NET USE, it ignores everything beyond 127 characters and you always receive “unknown user name or bad password.” So be careful here.
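
If you’d rather not hand-edit with ADSIEDIT, the AD module can also push the domain policy’s minimum length past the GUI’s 14-character cap. A hedged sketch (the domain name and length are placeholders):

# Raise the default domain password policy's minimum length beyond the GUI limit.
Set-ADDefaultDomainPasswordPolicy -Identity 'contoso.com' -MinPasswordLength 20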

It goes without saying that if you are requiring a minimum password length of even 25 characters, you are kind of a jerk :-D. Time for smartcard logons, dudes and dudettes; there is no way your users are going to remember passwords that long, and they will be on Post-It notes all over their cubicles.

Totally unrelated note: the second password shown here is exactly 127 characters:

image
Awesome

Question

I am using USMT 4.0 and running scanstate on a computer with multiple fixed hard drives, like C:, D:, E:. I want to migrate to new Windows 7 machines that only have a C: drive. Do I need to create a custom XML file?

Answer

I could have sworn I wrote something up on this before but darned if I can find it. The short answer is – use migdocs.xml and it will all magically work. The long answer and demonstration of behavior is:

1. I have a computer with C: and D: fixed drives (OS is unimportant, USMT 4.0 or later).

2. On the C: drive I have two custom folders, each with a custom file.

clip_image001

3. On the D: drive I have two custom folders, each with a custom file.

clip_image001[5]

4. One of the folders is named the same on both drives, with a file that is named the same in that folder, but contains different contents.

clip_image002

clip_image003

5. Then you scanstate with no hardlinks (e.g. scanstate c:\store /i:migdocs.xml /c /o)

6. Then you go to a machine with only a C: drive (in my repro I was lazy and just deleted my D: drive) and copy the store over.

7. Run loadstate (e.g. loadstate c:\store /i:migdocs.xml /c)

8. Note how the folders on D: are migrated into C:, merging the folders and creating renamed copies of files when there are duplications:

clip_image004 clip_image005

clip_image006

clip_image007

Question

Where does Active Directory get computer specific information like Operating System, Service Pack level, etc., for computer accounts that are joined to the domain? I'm guessing WMI but I'm also wondering how often it checks.

Answer

AD gets it from attributes (for example).

AD relies on the individual Windows computers to take care of it – such as when joining the domain, being upgraded, being service packed, or after a reboot. Nothing in AD confirms or maintains it outside those “client” processes, so if I change my OS version info using ADSIEDIT, that's the OS as far as AD is concerned and it's not going to change back unless the Windows computer makes it happen. Which it will!
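
A quick way to see what AD currently holds for a computer, using the AD module (the computer name is a placeholder):

# Read the OS attributes that the computer itself stamps on its account.
Get-ADComputer -Identity 'SRV01' -Properties OperatingSystem, OperatingSystemServicePack, OperatingSystemVersion |
    Format-List Name, OperatingSystem*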

Here I change a Win2008 R2 server to use nomenclature similar to our Linux and Apple competitors:

image

And here it is after I reboot that computer:

image

That would be a good band name, now that I think about it.

Question

I’d like to add a DFSR file replication filter but I have hundreds of RFs and don’t want to click around Dfsmgmt.msc for days. Is there a way to set this globally for entire replication groups?

Answer

Not per se; DFSR file filters are set on each replicated folder in Active Directory.

But setting it via a Windows PowerShell loop is not hard. For example, in Win2008 R2, with the ActiveDirectory module imported, here I am (destructively!) setting a filter that matches the defaults plus a new extension on all RFs in this domain:

image
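
Since the original code lived in a screenshot, here is a hedged reconstruction of that kind of loop; the filter string is only an example, and (as the warning above says) this destructively overwrites any per-RF filters, so test it in a lab first.

# Overwrite the file filter on every replicated folder (msDFSR-ContentSet) in the domain.
Import-Module ActiveDirectory
Get-ADObject -LDAPFilter '(objectClass=msDFSR-ContentSet)' `
    -SearchBase (Get-ADDomain).DistinguishedName |
    Set-ADObject -Replace @{ 'msDFSR-FileFilter' = '~*, *.bak, *.tmp, *.nda' }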

Question

Is there a way to export and import the DFS Replication configuration the way we do for DFSN? It seems like no but I want to make sure I am not missing anything.

Answer

DFSRADMIN LIST shows the configuration, and there are a couple of export/import commands for scheduling. But overall this is going to be a semi-manual process unless you write your own tools or scripts. Ultimately, it’s all just LDAP data, after all – this is how frs2dfsr.exe works.

Once you list and inventory everything, the DFSRADMIN BULK command is useful to recreate things accurately.
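
For example, a hedged starting point for the inventory; the exact parameter syntax varies between dfsradmin versions, so confirm with each subcommand's /? output, and the replication group name is a placeholder.

dfsradmin rg list
dfsradmin rf list /rgname:"MyReplicationGroup"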

Question

Does USMT migrate Internet Explorer Autocomplete Settings?

image

Answer

I really should make you figure this out for yourself… but I am feeling pleasant today. These settings are all here:

image
Hint hint – Process Monitor is always your friend with custom USMT coding

Looking at the USMT 5.0 replacement manifest:

  • MICROSOFT-WINDOWS-IE-INTERNETEXPLORER-REPL.MAN (from Windows 8)

I see that we do get the \Internet Explorer\ key and all sub-data (including Main and DomainSuggestion) for those specific registry values with no exclusions. We also get Explorer\Autocomplete in that same manifest, likewise without exclusion.

  • MICROSOFT-WINDOWS-IE-INTERNETEXPLORER-DL.MAN (from XP)

Ditto. We grab all this as well.

Question

I have read that Windows Server 2008 R2 has the following documented and supported limits:

The following list provides a set of scalability guidelines that have been tested by Microsoft on Windows Server 2008 R2 and Windows Server 2008:

  • Size of all replicated files on a server: 10 terabytes.
  • Number of replicated files on a volume: 8 million.
  • Maximum file size: 64 gigabytes.

Source: http://technet.microsoft.com/en-us/library/f9b98a0f-c1ae-4a9f-9724-80c679596e6b(v=ws.10)#BKMK_00

What happens if I exceed these limits? Should I ever consider exceeding these limits? I want to use much more than these limits!

(Asked by half a zillion customers in the past few weeks)

Answer

With more than 10TB or 8 million files, the support will only be best effort (i.e. you can open a support case and we will attempt to assist, but we may reach a point where we have to say “this configuration is not supported” and we cannot assist further). If you need us to fully support more end-to-end, you need a solution different than Win2008 R2 DFSR.

To exceed the 10TB limit – which again, is not supported nor recommended – seriously consider:

  1. High reliability fabric to high reliability storage – i.e. do not use iSCSI. Do not use cheap disk arrays. Dedicated fiber or similar networks only with redundant paths, to a properly redundant storage array that costs a poop-load of money.
  2. Store no more than 2TB per volume – There is one DFSR database per volume, which means that if there is a dirty shutdown, recovery affects all replicated data on that volume. 1TB max would be better.
  3. Latest DFSR hotfixes at all timeshttp://support.microsoft.com/kb/968429. This especially includes using http://support.microsoft.com/kb/2663685, combined with read-only replication when possible.

Actually, just read Warren’s common DFSR mistakes post 10 times. Then read it 10 more times.

Hmm… I recommend all these even when under 10TB…

Other stuff

RSAT for Windows 8 RTM is… RTM. Grab it here.

I mentioned mall hair in last month’s mail sack. When that sort of thing happens in MS Support, colleagues provide helpful references:

clip_image001
I hate you, Justin

Speaking of the ridiculous group I work with, this is what you get when Steve Taylor wants to boost team morale on a Friday:


Couldn’t they just have the bass player record one looped note?

Canada, what the heck happened?!

clip_image002[5]

Still going…

clip_image003[5]

I mean… Norway? NORWAY IN THE SUMMER GAMES? They eat pickled herring and go sledding in June! I’ll grant that if you switch to medal count, you’re a respectable 13th. Good work, America’s Hat.

In other news bound to depress canucks, the NHL is about to close up shop yet again. Check out this hilarious article courtesy of Mark.

 

Finally

I am heading out to Redmond next week to teach a couple days of Certified DS Master, then on to San Francisco and Sydney to vacate and yammer even more. I’ll be back in a few weeks; Jonathan will answer your questions in the meantime and I think Mike has posts aplenty to share. When I return – and maybe before – I will have some interesting news to share.

See you in a few weeks.

- Ned “don’t make me take off my shoe” Pyle

Windows Server 2012 Shell game


Here's the scenario: you just downloaded the RTM ISO for Windows Server 2012 using your handy, dandy, "wondermus" Microsoft TechNet subscription. Using Hyper-V, you create a new virtual machine, mount the ISO, and breeze through the setup screens until you are mesmerized by the Newton's cradle-like experience of the circular progress indicator.

clip_image002

Click…click…click…click-- installation complete; the computer reboots.

You provide Windows Server with a new administrator password. Bam: done! Windows Server 2012 presents the credential provider screen and you logon using the newly created administrator account, and then…

Holy Shell, Batman! I don't have a desktop!

clip_image004

Hey everyone, Mike here again to bestow some Windows Server 2012 lovin'. The previously described scenario is not hypothetical-- many have experienced it when they installed the pre-release versions of Windows Server 2012. And it is likely to resurface as we move past Windows Server 2012 general availability on September 4. If you are new to Windows Server 2012, then you're likely one of those people staring at a command prompt window on your fresh installation. The reason you are staring at a command prompt is that Windows Server 2012's installation defaults to Server Core and, in your haste to try out our latest bits, you breezed right past the option to change it.

This may be old news for some of you, but it is likely that one or more of your colleagues is going to perform the very actions that I describe here. This is actually a fortunate circumstance as it enables me to introduce a new Windows Server 2012 feature.

clip_image006

There were two server installation types prior to Windows Server 2012: full and core. Core servers provide a low attack surface by removing the Windows Shell and Internet Explorer completely. However, this presented quite a challenge for many Windows administrators, as Windows PowerShell and command-line utilities were the only methods available to manage the server and its roles locally (you could use most management consoles remotely).

Those same two server installation types return in Windows Server 2012; however, we have added a third installation type: Minimal Server Interface. Minimal Server Interface enables most local graphical user interface management tasks without requiring you to install the server's user interface or Internet Explorer. Minimal Server Interface is a full installation of Windows that excludes:

  • Internet Explorer
  • The Desktop
  • Windows Explorer
  • Windows 8-style application support
  • Multimedia support
  • Desktop Experience

Minimal Server Interface gives Windows administrators - who are not comfortable using Windows PowerShell as their only option - the benefit of a reduced attack surface and fewer reboot requirements (i.e., on Patch Tuesday), yet retains GUI management while they ramp up their Windows PowerShell skills.

clip_image008

"Okay, Minimal Server Interface seems cool Mike, but I'm stuck at the command prompt and I want graphical tools. Now what?" If you were running an earlier version of Windows Server, my answer would be reinstall. However, you're running Windows Server 2012; therefore, my answer is "Install the Server Graphical Shell or Install Minimal Server Interface."

Windows Server 2012 enables you to change the shell installation option after you've completed the installation. This solves the problem if you are staring at a command prompt. However, it also solves the problem if you want to keep your attack surface low but are simply a Windows PowerShell guru in waiting. You can choose Minimal Server Interface, or you can decide to add the Server Graphical Shell for a specific task and then remove it when you have completed that management task (understand, however, that switching the Windows Shell requires you to restart the server).

Another scenario solved by the ability to add the Server Graphical Shell is that not all server-based applications work correctly on Server Core, or you cannot manage them on Server Core. Windows Server 2012 enables you to try the application on Minimal Server Interface, and if that does not work, you can change the server installation to include the Graphical Shell, which is the equivalent of the Server GUI installation option during setup (the one you breezed by during the initial setup).

Removing the Server Graphical Shell and Graphical Management Tools and Infrastructure

Removing the Server shell from a GUI installation of Windows is amazingly easy. Start Server Manager, click Manage, and click Remove Roles and Features. Select the target server and then click Features. Expand User Interfaces and Infrastructure.

To reduce a Windows Server 2012 GUI installation to a Minimal Server Interface installation, clear the Server Graphical Shell checkbox and complete the wizard. To reduce a Windows Server GUI installation to a Server Core installation, clear the Server Graphical Shell and Graphical Management Tools and Infrastructure check boxes and complete the wizard.

clip_image010

Alternatively, you can perform these same actions using the Server Manager module for Windows PowerShell, and it is probably a good idea to learn how to do this. I'll give you two reasons why: It's wicked fast to install and remove features and roles using Windows PowerShell and you need to learn it in order to add the Server Shell on a Windows Core or Minimal Server Interface installation.

Use the following command to view a list of the Server GUI components

clip_image011

Get-WindowsFeature server-gui*

Give your attention to the Name column. You use this value with the Remove-WindowsFeature and Install-WindowsFeature PowerShell cmdlets.

To remove the server graphical shell, which reduces the GUI server installation to a Minimal Server Interface installation, run:

Remove-WindowsFeature Server-Gui-Shell

To remove the Graphical Management Tools and Infrastructure, which further reduces a Minimal Server Interface installation to a Server Core installation, run:

Remove-WindowsFeature Server-Gui-Mgmt-Infra

To remove the Graphical Management Tools and Infrastructure and the Server Graphical Shell, run:

Remove-WindowsFeature Server-Gui-Shell,Server-Gui-Mgmt-Infra

Adding Server Graphical Shell and Graphical Management Tools and Infrastructure

Adding Server Shell components to a Windows Server 2012 Core installation is a tad more involved than removing them. The first thing to understand with a Server Core installation is that the actual binaries for the Server Shell do not reside on the computer. This is how a Server Core installation achieves a smaller footprint. You can determine if the binaries are present by using the Get-WindowsFeature Windows PowerShell cmdlet and viewing the Install State column. The Removed value indicates that the binaries representing the feature do not reside on the hard drive. Therefore, you need to add the binaries to the installation before you can install the feature. Another indicator that the binaries do not exist in the installation is the error you receive when you try to install a feature that is removed. The Install-WindowsFeature cmdlet will proceed along as if it is working and then spend a lot of time around 63-68 percent before returning an error stating that it could not add the feature.

clip_image015

To stage Server Shell features to a Windows Core Installation

You need to get out your handy, dandy media (or ISO) to stage the binaries into the installation. Windows installation files are stored in WIM files that are located in the \sources folder of your media. There are two .WIM files on the media. The WIM you want to use for this process is INSTALL.WIM.

clip_image017

You use DISM.EXE to display the installation images and their indexes that are included in the WIM file. There are four images in the INSTALL.WIM file. Images with the indexes 1 and 3 are Server Core installation images for Standard and Datacenter, respectively. Images with the indexes 2 and 4 are GUI installations of Standard and Datacenter, respectively. Two of these images contain the GUI binaries and two do not. To stage these binaries to the current installation, you need to use index 2 or 4 because these images contain the Server GUI binaries. An attempt to stage the binaries using index 1 or 3 will fail.
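
If you want to see the indexes for yourself, the DISM command below lists the images in the WIM (the drive letter is a placeholder for wherever your media is mounted):

Dism /Get-WimInfo /WimFile:D:\sources\install.wim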

You still use the Install-WindowsFeature cmdlet to stage the binaries to the computer; however, we are going to use the -source argument to inform Install-WindowsFeature of the image and index it should use to stage the Server Shell binaries. To do this, we use a special path syntax that indicates the binaries reside in a WIM file. The Windows PowerShell command should look like:

Install-WindowsFeature server-gui-mgmt-infra,server-gui-shell -source:wim:d:\sources\install.wim:4

Pay particular attention to the path supplied to the -source argument. You need to prefix the path to your installation media's install.wim file with the keyword wim: and suffix the path with :4, which represents the image index to use for the installation. You must always use an index of 2 or 4 to install the Server Shell components. The command should exhibit the same behavior as the previous one and proceed up to about 68 percent, at which point it will stay at 68 percent for quite a bit (if it is working). Typically, if there is a problem with the syntax or the command, it will error within two minutes of spinning at 68 percent. This process stages all the graphical user interface binaries that were not installed during the initial setup, so give it a bit of time. When the command completes successfully, it should instruct you to restart the server. You can do this from Windows PowerShell with the Restart-Computer cmdlet.

clip_image019

Give the next reboot more time. It is actually updating the current Windows installation, making all the other components aware the GUI is available. The server should reboot and inform you that it is configuring Windows features and is likely to spend some time at 15 percent. Be patient and give it time to complete. Windows should reach about 30 percent and then will restart.

clip_image021

It should return to the Configuring Windows features screen with the progress around 45 to 50 percent (these are estimates). The process should continue until 100 percent and then show you the Press Ctrl+Alt+Delete to sign in screen.

clip_image023

Done

That's it. Consider yourself informed. The next time one of your colleagues gazes at their accidental Windows Server 2012 Server Core installation with that deer-in-the-headlights look, you can whip out your mad Windows PowerShell skills and turn that Server Core installation into a Minimal Server Interface or Server GUI installation in no time.

Mike

"Voilà! In view, a humble vaudevillian veteran, cast vicariously as both victim and villain by the vicissitudes of Fate. This visage, no mere veneer of vanity, is a vestige of the vox populi, now vacant, vanished. However, this valorous visitation of a by-gone vexation, stands vivified and has vowed to vanquish these venal and virulent vermin van-guarding vice and vouchsafing the violently vicious and voracious violation of volition. The only verdict is vengeance; a vendetta, held as a votive, not in vain, for the value and veracity of such shall one day vindicate the vigilant and the virtuous. Verily, this vichyssoise of verbiage veers most verbose, so let me simply add that it's my very good honor to meet you and you may call me V."

Stephens

AD FS 2.0 RelayState


Hi guys, Joji Oshima here again with some great news! AD FS 2.0 Rollup 2 adds the capability to send RelayState when using IDP initiated sign on. I imagine some people are ecstatic to hear this while others are asking “What is this and why should I care?”

What is RelayState and why should I care?

There are two protocol standards for federation (SAML and WS-Federation). RelayState is a parameter of the SAML protocol that is used to identify the specific resource the user will access after they are signed in and directed to the relying party’s federation server.
Note:

If the relying party is the application itself, you can use the loginToRp parameter instead.
Example:
https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx?loginToRp=rpidentifier

Without the use of any parameters, a user would need to go to the IDP initiated sign on page, log in to the server, choose the relying party, and then be directed to the application. Using RelayState can automate this process by generating a single URL for the user to click and be logged in to the target application without any intervention. It should be noted that when using RelayState, any parameters outside of it will be dropped.

When can I use RelayState?

We can pass RelayState when working with a relying party that has a SAML endpoint. It does not work when the direct relying party is using WS-Federation.

The following IDP initiated flows are supported when using Rollup 2 for AD FS 2.0:

  • Identity provider security token server (STS) -> relying party STS (configured as a SAML-P endpoint) -> SAML relying party App
  • Identity provider STS -> relying party STS (configured as a SAML-P endpoint) -> WIF (WS-Fed) relying party App
  • Identity provider STS -> SAML relying party App

The following initiated flow is not supported:

  • Identity provider STS -> WIF (WS-Fed) relying party App

Manually Generating the RelayState URL

There are two pieces of information you need to generate the RelayState URL. The first is the relying party’s identifier. This can be found in the AD FS 2.0 Management Console. View the Identifiers tab on the relying party’s property page.

image

The second part is the actual RelayState value that you wish to send to the Relying Party. It could be the identifier of the application, but the administrator for the Relying Party should have this information. In this example, we will use the Relying Party identifier of https://sso.adatum.com and the RelayState of https://webapp.adatum.com

Starting values:
RPID: https://sso.adatum.com
RelayState: https://webapp.adatum.com

Step 1: The first step is to URL Encode each value.

RPID: https%3a%2f%2fsso.adatum.com
RelayState: https%3a%2f%2fwebapp.adatum.com

Step 2: The second step is to take these URL Encoded values, merge it with the string below, and URL Encode the string.

String:
RPID=<URL encoded RPID>&RelayState=<URL encoded RelayState>

String with values:
RPID=https%3a%2f%2fsso.adatum.com&RelayState=https%3a%2f%2fwebapp.adatum.com

URL Encoded string:
RPID%3dhttps%253a%252f%252fsso.adatum.com%26RelayState%3dhttps%253a%252f%252fwebapp.adatum.com

Step 3: The third step is to take the URL Encoded string and add it to the end of the string below.

String:
?RelayState=

String with value:
?RelayState=RPID%3dhttps%253a%252f%252fsso.adatum.com%26RelayState%3dhttps%253a%252f%252fwebapp.adatum.com

Step 4: The final step is to take the final string and append it to the IDP initiated sign on URL.

IDP initiated sign on URL:
https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx

Final URL:
https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx?RelayState=RPID%3dhttps%253a%252f%252fsso.adatum.com%26RelayState%3dhttps%253a%252f%252fwebapp.adatum.com

The result is an IDP initiated sign on URL that tells AD FS which relying party STS the login is for, and also gives that relying party information that it can use to direct the user to the correct application.
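
If you'd rather not build these by hand, the same four steps are easy to script. Here is a small Windows PowerShell sketch using the example values above (one of several encoding methods that work; the result is equivalent to the hand-built URL, though the hex casing may differ):

# Values from the walkthrough above
$rpid       = 'https://sso.adatum.com'
$relayState = 'https://webapp.adatum.com'

# Step 1: URL encode each value
$encodedRpid  = [uri]::EscapeDataString($rpid)
$encodedRelay = [uri]::EscapeDataString($relayState)

# Step 2: merge them and URL encode the combined string
$outer = [uri]::EscapeDataString("RPID=$encodedRpid&RelayState=$encodedRelay")

# Steps 3 and 4: prepend the parameter name and the IDP initiated sign on URL
"https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx?RelayState=$outer"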


Is there an easier way?

The multi-step process and the manual string manipulation are prone to human error, which can cause confusion and frustration. Using a simple HTML file, we can fill the starting information into a form and click the Generate URL button.


The code sample for this HTML file has been posted to CodePlex.

Conclusion and Links

I hope this post has helped demystify RelayState and will have everyone up and running quickly.

AD FS 2.0 RelayState Generator
http://social.technet.microsoft.com/wiki/contents/articles/13172.ad-fs-2-0-relaystate-generator.aspx
HTML Download
https://adfsrelaystate.codeplex.com/

AD FS 2.0 Rollup 2
http://support.microsoft.com/kb/2681584

Supporting Identity Provider Initiated RelayState
http://technet.microsoft.com/en-us/library/jj127245(WS.10).aspx

Joji "Halt! Who goes there!" Oshima


So long and thanks for all the fish


My time is up.

It’s been eight years since a friend suggested I join him on a contract at Microsoft Support (thanks Pete). Eight years since I sat sweating in an interview with Steve Taylor, trying desperately to recall the KDC’s listening port (his hint: “German anti-tank gun”). Eight years since I joined 35 new colleagues in a training room and found that despite my opinion, I knew nothing about Active Directory (“Replication of Absent Linked Object References – what the hell have I gotten myself into?”).

Eight years later, I’m a Senior Support Escalation Engineer, a blogger of some repute, and a seasoned world traveler who instructs other ‘softies about Windows releases. I’ve created thousands of pages of content and been involved in countless support cases and customer conversations. I am the last of those 35 colleagues still here, but there is proof of my existence even so. It’s been the most satisfactory work of my career.

Just the thought of leaving was scary enough to give me pause – it’s been so long since I knew anything but supporting Windows. It’s a once in a lifetime opportunity though and sometimes you need to reset your career. Now I’ll help create the next generations of Windows Server and the buck will finally stop with me: I’ve been hired as a Program Manager and am on my way to Seattle next week. I’m not leaving Microsoft, just starting a new phase. A phase with a lot more product development, design responsibility, and… meetings. Soooo many meetings.

There are two types of folks I am going to miss: the first are workmates. Many are support engineers, but also PFEs, Consultants, and TAMs. Even foreigners! Interesting and funny people fill Premier and Commercial Technical Support and make every day here enjoyable, even after the occasional customer assault. There’s nothing like a work environment where you really like your colleagues. I’ve sat next to Dave Fisher since 2004 and he’s made me laugh every single day. He is a brilliant weirdo, like so many other great people here. You all know who you are.

The other folks are… you. Your comments stayed thought provoking and fresh for five years and 700 posts. Your emails kept me knee deep in mail sacks and articles (I had to learn in order to answer many of them). Your readership has made AskDS into one of the most popular blogs in Microsoft. You unknowingly played an immense part in my career, forcing me to improve my communication; there’s nothing like a few hundred thousand readers to make you learn your craft.

My time as the so-called “editor in chief” of AskDS is over, but I imagine you will still find me on the Internet in my new role, yammering about things that I think you’ll find interesting. I also have a few posts in the chamber that Jonathan or Mike will unload after I’m gone, and they will keep the site going. AskDS will continue to be a place for unvarnished support information about Windows technologies, where your questions will get answers.

Thanks for everything, and see you again soon.

We are looking forward to Seattle’s famous mud puddles

 

- Ned “42” Pyle

Digging a little deeper into Windows 8 Primary Computer


[This is a ghost of Ned past article – Editor]

Hi folks, Ned here again to talk more about the Primary Computer feature introduced in Windows 8. Sharp-eyed readers may have noticed this lonely beta blog post, and if you just want a step-by-step guide to enabling this feature, TechNet does it best. Today I am going to fill in some blanks and make sure the feature's architecture and usefulness are clear. At least, I'm going to try.

Onward!

Backgrounder and Requirements

Businesses using Roaming User Profiles, Offline Files and Folder Redirection have historically been limited in controlling which computers cache user data. For instance, while there are group policies to assign roaming profiles on a per-computer basis, they affect all users of that computer and are useless if you assign roaming profiles through legacy user attributes.

Windows 8 introduces a pair of new per-user AD DS attributes to specify a "primary computer." The primary computer is the one directly assigned to a user - such as their laptop, or a desktop in their cubicle - and therefore unlikely to change frequently. We refer to this as "User-Device Affinity". That computer will allow them to store roaming user data or access redirected folder data, as well as allow caching of redirected data through offline files. There are three main benefits to using Primary Computer:

  1. When a user is at a kiosk, using a conference room PC, or connecting to the network from a home computer, there is no risk that confidential user data will cache locally and be accessible offline. This adds a measure of security.
  2. Unlike previous operating systems, an administrator now has the ability to control computers that will not cache data, regardless of the user's AD DS profile configuration settings.
  3. The initial download of a profile has a noticeable impact on logon performance; a brand new Windows 8 user profile is ~68MB in size, and that's before it's filled with "Winter is coming" meme pics. Since a roaming profile and folder redirection no longer synchronously cache data on the computer during logon, a user connecting from a temporary or home machine logs on considerably faster.

By assigning computer(s) to a user then applying some group policies, you ensure data only roams or caches where you want it.


Yoink, stolen screenshot from a much better artist

Primary Computer has the following requirements:

  • Windows 8 or Windows Server 2012 computers used for interactive logon
  • Windows Server 2012 AD DS Schema (but not necessarily Win2012 DCs)
  • Group Policy managed from Windows 8 or Windows Server 2012 GPMC
  • Some mechanism to determine each user's primary computer(s)

Determining Primary Computers

There is no attribute in Active Directory that tracks which computers a user logs on to, much less the computers they log on to the most frequently. There are a number of out of band options to determine computer usage though:

  • System Center Configuration Manager - SCCM has built in functionality to determine the primary users of computers, as part of its "Asset Intelligence" reporting. You can read more about this feature in SCCM 2012 and 2007 R2. This is the recommended method as it's the most comprehensive and because I like money.
  • Collecting 4624 events - the Security event log Logon Event 4624 with a Logon Type 2 delineates where a user logged on interactively. By collecting these events using some type of audit collection service or event forwarding, you can build up a picture of which users are logging on to which computers repeatedly. (There's a rough sketch of this after the list.)

  • Logon Script – If you're the fancy type, you can create a logon script that writes a user's computer to a centralized location, such as on their own AD object. If you grant inherited access for SELF to update (for instance) the Comment attribute on all the user objects, each user could use that attribute as storage. Then you can collect the results for a few weeks and create a list of computer usage by user.

    For example, this rather hokey illustration VBS runs as a logon script and updates a user's own Comment attribute with their computer's distinguished name, only if it has changed from the previous value:

    ' Stamp the user's own Comment attribute with the DN of the computer
    ' they logged on from.
    Set objSysInfo = CreateObject("ADSystemInfo")
    Set objUser = GetObject("LDAP://" & objSysInfo.UserName)
    Set objComputer = GetObject("LDAP://" & objSysInfo.ComputerName)

    ' Skip the update (and the AD replication it would cause) if the
    ' attribute already holds this computer's DN.
    strMessage = objComputer.distinguishedName
    If objUser.Comment = strMessage Then WScript.Quit

    objUser.Comment = strMessage
    objUser.SetInfo

A user may have more than one computer they log on to regularly, though. If that's the case, an AD attribute-based storage solution is probably not the right answer unless the script builds a circular list with a restricted number of entries and logic to avoid redundant updates; otherwise, there could be excessive AD replication. Remember, this is just a simple example to get the creative juices flowing.

  • PsLoggedOn - you can script and run PsLoggedOn.exe (a Windows Sysinternals tool) periodically during the day for all computers over the course of several weeks. That would build, over time, a list of which users frequent which computers. This requires remote registry access through the Windows Firewall.
  • Third parties - there are SCCM/SCOM-like vendors providing this functionality. I don't have details but I'm sure they have a salesman who wants a new German sports sedan and will be happy to bend your ear.
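
To give those creative juices one more nudge, here is a rough Windows PowerShell sketch of the 4624 idea above. It queries a single machine's Security log directly (CLI1 is a placeholder name), so treat it as a starting point rather than a collection service:

Get-WinEvent -ComputerName CLI1 -FilterHashtable @{ LogName='Security'; Id=4624 } |
    Where-Object { $_.Properties[8].Value -eq 2 } |   # property 8 of a 4624 event is LogonType
    ForEach-Object { $_.Properties[5].Value } |       # property 5 is TargetUserName
    Group-Object | Sort-Object Count -Descending | Select-Object Count, Name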

Setting the Primary Computer

As I mentioned before, look at TechNet for some DSAC step-by-step for setting the msDS-PrimaryComputer attribute and the necessary group policies. However, if you want to use native Windows PowerShell instead of our interesting out of band module, here are some more juice-flow inducing samples.

The ActiveDirectory Windows PowerShell module's Get-ADComputer and Set-ADUser cmdlets let you easily retrieve a computer's distinguished name and assign it to the user's primary computer attribute. You can use assigned variables for readability, or nested functions for simplicity.

Variable

$computer = Get-ADComputer <computer name>

Set-ADUser <user name> -Add @{'msDS-PrimaryComputer'=$computer.DistinguishedName}

For example, with a computer named cli1 and a user name stduser:
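
$computer = Get-ADComputer cli1
Set-ADUser stduser -Add @{'msDS-PrimaryComputer'=$computer.DistinguishedName}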

Nested

Set-ADUser <user name> -Add @{'msDS-PrimaryComputer'=(Get-ADComputer <computer name>).DistinguishedName}

For example, with that same user and computer:
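
Set-ADUser stduser -Add @{'msDS-PrimaryComputer'=(Get-ADComputer cli1).DistinguishedName}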

Other techniques

If you use AD DS to store the user's last computer in their Comment attribute as part of a logon script - like described in the earlier section - here is an example that reads stduser's Comment attribute and assigns the primary computer based on its contents:
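
# Read the Comment attribute stamped by the logon script and reuse it
# (this assumes Comment holds a valid computer DN)
$user = Get-ADUser stduser -Properties comment
Set-ADUser $user -Add @{'msDS-PrimaryComputer'=$user.comment}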

If you wanted to assign primary computers to all of the users within the Foo OU based on their comment attributes, you could use this example:
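
# OU=Foo,DC=corp,DC=contoso,DC=com is a placeholder DN; substitute your own
Get-ADUser -Filter 'comment -like "*"' -SearchBase 'OU=Foo,DC=corp,DC=contoso,DC=com' -Properties comment |
    ForEach-Object { Set-ADUser $_ -Add @{'msDS-PrimaryComputer'=$_.comment} }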

If you have a CSV file that contains the user accounts and their assigned computers as DNs, you can use the import-csv cmdlet to update the users. For example:
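
# Assumes a CSV laid out as (header row included): user,computerdn
Import-Csv .\primarycomputers.csv | ForEach-Object {
    Set-ADUser $_.user -Add @{'msDS-PrimaryComputer'=$_.computerdn}
}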

This is particularly useful when you have some asset history and assign certain users specific computers. Certainly a good idea for insurance and theft prevention purposes, regardless.

Cached Data Clearing GP

Enabling Primary Computer does not remove any data already cached on other computers that a user does not access again. That is, if a user was already using Roaming User Profiles or Folder Redirection (which, by default, automatically adds all redirected shell folders to the Offline Files cache), enabling Primary Computer means only that further data is not copied locally to non-approved computers.

In the case of Roaming User Profiles, several policies can clear data from computers at logoff or restart:

  • Delete user profiles older than a specified number of days on system restart - this deletes unused profiles after N days when a computer reboots
  • Delete cached copies of roaming profiles - this removes locally saved roaming profiles once a user logs off. This policy would also apply to Primary Computers and should be used with caution

In the case of Folder Redirection and Offline Files, there is no specific policy to clear out stale data or delete cached data at logoff like there is for RUP, but that's immaterial:

  • When a computer needs to remove FR after becoming "non-primary" - due to the Primary Computer feature either being enabled or the machine being removed from the primary computer list for the user - the removal behavior will depend on how the FR policy is configured to behave on removal. It can be configured to either:
    • Redirect the folder back to the local profile – the folder location is set back to the default location in the user's profile (e.g., c:\users\%USERNAME%\Documents), the data copies from the file server to the local profile, and the file server location is unpinned from the computer's Offline Files cache
    • Leave the folder pointing to the file server – the folder location still points to the file server location, but the contents are unpinned from the computer's Offline Files cache. The folder configuration is no longer controlled through policy

In both cases, once the data is unpinned from the Offline Files cache, it will be evicted from the computer in the background after 15 minutes.

Logging Primary Computer Usage

To see that the Download roaming profiles on primary computers only policy took effect and the behavior at each user logon, examine the User Profile Service operational event log for Event 63. This will state either "This computer is a primary computer for this user" or "This computer is not a primary computer for this user".
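
If you'd rather query for those events than click through Event Viewer, a snippet like this works (the channel name is my best recollection of the Event Viewer path; confirm it first with Get-WinEvent -ListLog "*User Profile*"):

Get-WinEvent -FilterHashtable @{ LogName='Microsoft-Windows-User Profile Service/Operational'; Id=63 } -MaxEvents 10 |
    Select-Object TimeCreated, Message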

The new User Profile Service events for Primary Computer are all in the Operational event log:

Event ID: 62
Severity: Warning
Message: Windows was unable to successfully evaluate whether this computer is a primary computer for this user. This may be due to failing to access the Active Directory server at this time. The user's roaming profile will be applied as configured. Contact the Administrator for more assistance. Error: %1
Notes and resolution: Indicates an issue contacting LDAP on a domain controller. Examine the extended error, examine the System and Application event logs for further details, and consider getting a network capture if still unclear.

 

Event ID: 63
Severity: Informational
Message: This computer %1 a primary computer for this user
Notes and resolution: This event's variable will change from "IS" to "IS NOT" depending on circumstances. It is not an error condition unless this is unexpected to the administrator. A customer should interrogate the rest of the IT staff on the network if not expecting to see these events.

 

Event ID: 64
Severity: Informational
Message: The primary computer relationship for this computer and this user was not evaluated due to %1
Notes and resolution: Examine the extended error for details.

 

To see that the Redirect folders on primary computers only policy took effect and the behavior at each user logon, examine the Folder Redirection operational event log for Event 1010. This will state either "This computer is a primary computer for this user" or "This computer is not a primary computer for this user" (good catch, Johan from Comments).

Architecture

Windows 8 implements Primary Computer through two new AD DS attributes in the Windows Server 2012 (version 56) Schema.

Primary Computer is a client-side feature; no matter what you configure in Active Directory or group policy on domain controllers, Windows 7, Windows Server 2008 R2, and older family computers will not obey the settings.

AD DS Schema

Attribute: msDS-PrimaryComputer
Explanation: The primary computers assigned to a user or a security group containing users. Contains multi-valued, linked-value distinguished names that reference the msDS-isPrimaryComputerFor backlink on computer objects.

Attribute: msDS-isPrimaryComputerFor
Explanation: The users assigned to a computer account. Contains multi-valued, linked-value distinguished names that reference the msDS-PrimaryComputer forward link on user objects.

 

Processing

The processing of this new functionality is:

  1. Look at Group Policy setting to determine if the msDS-PrimaryComputer attribute in Active Directory should influence the decision to roam the user's profile or apply Folder Redirection.
  2. If step 1 is TRUE, initialize an LDAP connection and bind to a domain controller
  3. Check for the required schema version
  4. Query for the "msDS-IsPrimaryComputerFor" attribute on the AD object representing the current computer
  5. Check to see if the current user is in the list returned by this attribute or in the group returned by this attribute and if so, return TRUE for IsPrimaryComputerForUser. If no match is found, return FALSE for IsPrimaryComputerForUser
  6. If step 5 is FALSE:
    1. For RUP, an existing cached local profile should be used if present. If there is no local profile for the user, a new local profile should be created
    2. For FR, if Folder Redirection previously applied, the Folder Redirection configuration is removed according to the removal action specified by the previously applied policy (this is retained in the local FR configuration). If there is no current FR configuration, there is no work to be done

Troubleshooting

Because this feature is both new and simple, most troubleshooting is likely to follow this basic workflow when Primary Computer is not working as expected:

  1. User assigned the correct computer distinguished name (or in the security group assigned the computer DN)
  2. AD DS replication has converged for the user and computer objects
  3. AD DS and SYSVOL replication has converged for the Primary Computer group policies
  4. Primary Computer group policies applying to the computer
  5. User has logged off and on since the Primary Computer policies applied

The logs of note for troubleshooting Primary Computer are:

Log: Gpresult/GPMC RSoP Report
Notes and Explanation: Validates that Primary Computer policy is applying to the computer or user

Log: Group Policy operational event log
Notes and Explanation: Validates that group policy in general is applying to the computer or user, with specific details

Log: System event log
Notes and Explanation: Validates that group policy in general is applying to the computer or user, in generalities

Log: Application event log
Notes and Explanation: Validates that Folder Redirection and Roaming User Profiles are working, with generalities and specific details

Log: Folder Redirection operational event log
Notes and Explanation: Validates that Folder Redirection is working, with specific details

Log: User Profile Service operational event log
Notes and Explanation: Validates that Roaming User Profiles are working, with specific details

Log: Fdeploy.log
Notes and Explanation: Validates that Folder Redirection is working, with specific details

 

Cases reported by your users or help desk as Primary Computer processing issues are more likely to be AD DS replication, SYSVOL replication, group policy, folder redirection, or roaming user profile issues. Determine immediately if Primary Computer is at all to blame, then move on to the more likely historical culprits. Watch for red herrings!

Likewise, your company may not be internally aware of Primary Computer deployments and may send you down a rat hole troubleshooting expected behavior. Always ensure that a "problem" with folder redirection or roaming user profiles isn't just another group within the customer's company configuring Primary Computer and not telling you (this applies to you too; send a memo, dangit!).

Have fun.

Ned "shouldn't we have called it 'Primary Computers?'" Pyle

....And knowing is half the battle!

ADAMSync 101


Hi Everyone, Kim Nichols here again, and this time I have an introduction to ADAMSync. I take a lot of cases on ADAM and AD LDS and have seen a number of problems arise from less than optimally configured ADAMSync XML files. There are many sources of information on ADAM/AD LDS and ADAMSync (I'll include links at the end), but I still receive lots of questions and cases on configuring ADAM/AD LDS for ADAMSync.

We'll start at the beginning and talk about what ADAM/AD LDS is, what ADAMSync is and then finally how you can get AD LDS and ADAMSync working in your environment.

What is ADAM/AD LDS?

ADAM (Active Directory Application Mode) is the 2003 name for AD LDS (Active Directory Lightweight Directory Services). AD LDS is, as the name describes, a lightweight version of Active Directory. It gives you the capabilities of a multi-master LDAP directory that supports replication without some of the extraneous features of an Active Directory domain controller (domains and forests, Kerberos, trusts, etc.). AD LDS is used in situations where you need an LDAP directory but don't want the administration overhead of AD. Usually it's used with web applications or SQL databases for authentication. Its schema can also be fully customized without impacting the AD schema.

AD LDS uses the concept of instances, similar to that of instances in SQL. What this means is one AD LDS server can run multiple AD LDS instances (databases). This is another differentiator from Active Directory: a domain controller can only be a domain controller for one domain. In AD LDS, each instance runs on a different set of ports. The default instance of AD LDS listens on 389 (similar to AD).

Here's some more information on AD LDS if you're new to it:

What is ADAMSync?

In many scenarios, you may want to store user data in AD LDS that you can't or don't want to store in AD. Your application will point to the AD LDS instance for this data, but you probably don't want to manually create all of these users in AD LDS when they already exist in AD. If you have Forefront Identity Manager (FIM), you can use it to synchronize the users from AD into AD LDS and then manually populate the AD LDS specific attributes through LDP, ADSIEdit, or a custom or 3rd party application. If you don't have FIM, however, you can use ADAMSync to synchronize data from your Active Directory to AD LDS.

It is important to remember that ADAMSync DOES NOT synchronize user passwords! If you want the AD LDS user account to use the same password as the AD user, then userproxy transformation is what you need. (That's a topic for another day, though. I'll include links at the end for userproxy.)

ADAMSync uses an XML file that defines which data will synchronize from AD to AD LDS. The XML file includes the AD partition from which to synchronize, the object types (classes or categories), and attributes to synchronize. This file is loaded into the AD LDS database and used during ADAMSync synchronization. Every time you make changes to the XML file, you must reload the XML file into the database.

In order for ADAMSync to work:

  1. The MS-AdamSyncMetadata.LDF file must be imported into the schema of the AD LDS instance prior to attempting to install the XML file. This LDF creates the classes and attributes for storing the ADAMSync.xml file.
  2. The schema of the AD LDS instance must already contain all of the object classes and attributes that you will be syncing from AD to AD LDS. In other words, you can't sync a user object from AD to AD LDS unless the AD LDS schema contains the User class and all of the attributes that you specify in the ADAMSync XML (we'll talk more about this next). There is a blog post on using ADSchemaAnalyzer to compare the AD schema to the AD LDS schema and export the differences to an LDF file that can be imported into AD LDS.
  3. Unless you plan on modifying the schema of the AD LDS instance, your instance should be named DC=<partition name>, DC=<com or local or whatever> and not CN=<partition name>. Unfortunately, the example in the AD LDS setup wizard uses CN= for the partition name.  If you are going to be using ADAMSync, you should disregard that example and use DC= instead.  The reason behind this change is that the default schema does not allow an organizationalUnit (OU) object to have a parent object of the Container (CN) class. Since you will be synchronizing OUs from AD to AD LDS and they will need to be child objects of your application partition head, you will run into problems if your application partition is named CN=.




    Obviously, this limitation is something you can change in the AD LDS schema, but simply naming your partition with DC= name component will eliminate the need to make such a change. In addition, you won't have to remember that you made a change to the schema in the future.

The best advice I can give regarding ADAMSync is to keep it as simple as possible to start off with. The goal should be to get a basic XML file that you know will work, gradually add attributes to it, and troubleshoot issues one at a time. If you try to do too much (too wide of object filter or too many attributes) in the XML from the beginning, you will likely run into multiple issues and not know where to begin in troubleshooting.

KEEP IT SIMPLE!!!

MS-AdamSyncConf.xml

Let's take a look at the default XML file that Microsoft provides and go through some recommendations to make it more efficient and less prone to issues. The file is named MS-AdamSyncConf.XML and is typically located in the %windir%\ADAM directory.

<?xml version="1.0"?>
<doc>
<configuration>
<description>sample Adamsync configuration file</description>
<security-mode>object</security-mode>
<source-ad-name>fabrikam.com</source-ad-name> <------ 1
<source-ad-partition>dc=fabrikam,dc=com</source-ad-partition> <------ 2
<source-ad-account></source-ad-account> <------ 3
<account-domain></account-domain> <------ 4
<target-dn>dc=fabrikam,dc=com</target-dn> <------ 5
<query>
<base-dn>dc=fabrikam,dc=com</base-dn> <------ 6
<object-filter>(objectClass=*)</object-filter> <------ 7
<attributes> <------ 8
<include></include>
<exclude>extensionName</exclude>
<exclude>displayNamePrintable</exclude>
<exclude>flags</exclude>
<exclude>isPrivelegeHolder</exclude>
<exclude>msCom-UserLink</exclude>
<exclude>msCom-PartitionSetLink</exclude>
<exclude>reports</exclude>
<exclude>serviceprincipalname</exclude>
<exclude>accountExpires</exclude>
<exclude>adminCount</exclude>
<exclude>primarygroupid</exclude>
<exclude>userAccountControl</exclude>
<exclude>codePage</exclude>
<exclude>countryCode</exclude>
<exclude>logonhours</exclude>
<exclude>lockoutTime</exclude>
</attributes>
</query>
<schedule>
<aging>
<frequency>0</frequency>
<num-objects>0</num-objects>
</aging>
<schtasks-cmd></schtasks-cmd>
</schedule> <------ 9
</configuration>
<synchronizer-state>
<dirsync-cookie></dirsync-cookie>
<status></status>
<authoritative-adam-instance></authoritative-adam-instance>
<configuration-file-guid></configuration-file-guid>
<last-sync-attempt-time></last-sync-attempt-time>
<last-sync-success-time></last-sync-success-time>
<last-sync-error-time></last-sync-error-time>
<last-sync-error-string></last-sync-error-string>
<consecutive-sync-failures></consecutive-sync-failures>
<user-credentials></user-credentials>
<runs-since-last-object-update></runs-since-last-object-update>
<runs-since-last-full-sync></runs-since-last-full-sync>
</synchronizer-state>
</doc>

Let's go through the default XML file by number and talk about what each section does, why the defaults are what they are, and what I typically recommend when working with customers.

  1. <source-ad-name>fabrikam.com</source-ad-name> 

    Replace fabrikam.com with the FQDN of the domain/forest that will be your synchronization source

  2. <source-ad-partition>dc=fabrikam,dc=com</source-ad-partition> 

    Replace dc=fabrikam,dc=com with the DN of the AD partition that will be the source for the synchronization

  3. <source-ad-account></source-ad-account> 

    Contains the account that will be used to authenticate to the source forest/domain. If left empty, the credentials of the logged on user will be used

  4. <account-domain></account-domain> 

    Contains the domain name to use for authentication to the source domain/forest. This element combined with <source-ad-account> make up the domain\username that will be used to authenticate to the source domain/forest. If left empty, the domain of the logged on user will be used.

  5. <target-dn>dc=fabrikam,dc=com</target-dn>

    Replace dc=fabrikam,dc=com with the DN of the AD LDS partition you will be synchronizing to.

    NOTE: In 2003 ADAM, you were able to specify a sub-OU or container of the ADAM partition, for instance OU=accounts,dc=fabrikam,dc=com. This is not possible in 2008+ AD LDS. You must specify the head of the partition, dc=fabrikam,dc=com. This is publicly documented here.

  6. <base-dn>dc=fabrikam,dc=com</base-dn>

    Replace dc=fabrikam,dc=com with the base DN of the container in AD that you want to synchronize objects from.

    NOTE: You can specify multiple base DNs in the XML file, but it is important to note that due to the way the dirsync engine works, the entire directory will still be scanned during synchronization. This can lead to unexpectedly long synchronization times and confusing output in the adamsync.log file. The short of it is that even though you are limiting where to synchronize objects from, it doesn't reduce your synchronization time, and you will see entries in the adamsync.log file that indicate objects being processed but not written. This can make it appear as though ADAMSync is not working correctly if your directory is large but you are syncing only a small percentage of it. Also, the log will grow and grow, but it may take a long time for objects to begin to appear in AD LDS. This is because the entire directory is being enumerated, but only a portion is being synchronized.

  7. <object-filter>(objectClass=*)</object-filter>

    The object filter determines which objects will be synchronized from AD to AD LDS. While objectClass=* will get you everything, do you really want or need EVERYTHING? Consider the amount of data you will be syncing and the security implications of having everything duplicated in AD LDS. If you only care about user objects, then don't sync computers and groups.

    The filter that I generally recommend as a starting point is:

    (&#124;(objectCategory=Person)(objectCategory=OrganizationalUnit))

    Rather than objectClass=User, I recommend objectCategory=Person. But why, you ask? I'll tell you :-) If you've ever looked at the class of a computer object, you'll notice that it contains an objectClass of user.



    What this means to ADAMSync is that if I specify an object filter of objectClass=user, ADAMSync will synchronize users and computers (and contact objects and anything else that inherits from the User class). However, if I use objectCategory=Person, I only get actual user objects. Pretty neat, eh?

    So, what does this &#124; mean and why include objectCategory=OrganizationalUnit? The literal &#124; is the XML representation of the | (pipe) character which represents a logical OR. True, I've seen customers just use the | character in the XML file and not have issues, but I always use the XML rather than the | just to be certain that it gets translated properly when loaded into the AD LDS instance. If you need to use an AND rather than an OR, the XML for & is &amp;.

    You need objectCategory=OrganizationalUnit so that objects that are moved within AD get synchronized properly to AD LDS. If you don't specify this, the OUs that contain objects within scope of the object filter will be created on the initial creation of the object in AD LDS. But, if that object is ever MOVED in the source AD, ADAMSync won't be able to synchronize that object to the new location. Moving an object changes the full DN of the object. Since we aren't syncing the OUs the object just "disappears" from an ADAMSync perspective and never gets updated/moved.

    If you need groups to be synchronized as well you can add (objectclass=group) inside the outer parentheses and groups will also be synced.

    (&#124;(objectCategory=Person)(objectCategory=OrganizationalUnit)(objectClass=Group))

  8. <attributes>

    The attributes section is where you define which attributes to synchronize for the object types defined in the <object-filter>.

    You can use either the <include></include> or the <exclude></exclude> tags, but you cannot use both.

    The default XML file provided by Microsoft takes the high ground and uses the <exclude></exclude> tags which really means include all attributes except the ones that are explicitly defined within the <exclude></exclude> element. While this approach guarantees that you don't miss anything important, it can also lead to a lot of headaches in troubleshooting.

    If you've ever looked at an AD user account in ADSIEdit (especially in an environment with Exchange), you'll notice there are hundreds of attributes defined. Keeping to my earlier advice of "keep it simple", every attribute you sync adds to the complexity.

    When you use the <exclude></exclude> tags you don't know what you are syncing; you only know what you are not syncing. If your application isn't going to use an attribute, then there is no reason to copy that data to AD LDS. Additionally, there are some attributes and classes that just won't sync due to how the dirsync engine works. I'll include the list as I know it at the end of the article. Every environment is different in terms of which schema updates have been made and which attributes are being used. Also, as I mentioned earlier, if your AD LDS schema does not contain the object classes and attributes that you have defined in your ADAMSync XML file, your synchronization will die in a big blazing ball of flame.


    Whoosh!!

    A typical attributes section to start out with is something like this:

    <include>objectSID</include> <----- only needed for userproxy
    <include>userPrincipalName</include> <----- must be unique in AD LDS instance
    <include>displayName</include>
    <include>givenName</include>
    <include>sn</include>
    <include>physicalDeliveryOfficeName</include>
    <include>telephoneNumber</include>
    <include>mail</include>
    <include>title</include>
    <include>department</include>
    <include>manager</include>
    <include>mobile</include>
    <include>ipPhone</include>
    <exclude></exclude>

    Initially, you may even want to remove userPrincipalName, just to verify that you can get a sync to complete successfully. Synchronization issues caused by the userPrincipalName attribute are among the most common ADAMSync issues I see. Active Directory allows multiple accounts to have the same userPrincipalName, but ADAMSync will not sync an object if it has the same userPrincipalName of an object that already exists in the AD LDS database.

    If you want to be a superhero and find duplicate UPNs in your AD before you attempt ADAMSync, here's a nifty csvde command that will generate a comma-delimited file that you can run through Excel's "Highlight duplicates" formatting options (or a script if you are a SUPER-SUPERHERO) to find the duplicates.

    csvde -f upn.csv -s localhost:389 -p subtree -d "DC=fabrikam,DC=com" -r "(objectClass=user)" -l sAMAccountName,userPrincipalName

    Remember, you are targeting your AD with this command, so the localhost:389 implies that the command is being run on the DC. You'll need to replace "DC=fabrikam, DC=com" with your AD domain's DN.
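
    If you'd rather skip Excel, a quick Windows PowerShell pass over the same file surfaces the duplicates (this assumes the upn.csv produced by the csvde command above):

    # Group by UPN and keep only the values that appear more than once
    Import-Csv .\upn.csv |
        Group-Object userPrincipalName |
        Where-Object { $_.Name -and $_.Count -gt 1 } |
        ForEach-Object { $_.Group | Select-Object sAMAccountName, userPrincipalName }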

  9. </schedule>

    After </schedule> is where you would insert the elements to do user proxy transformation. In the References section, I've included links that explain the purpose and configuration of userproxy. The short version is that you can use this section of code to create userproxy objects rather than AD LDS user class objects. Userproxy objects are a special class of user that links back to an Active Directory domain account to allow the AD LDS user to utilize the password of their corresponding user account in AD. It is NOT a way to log on to AD from an external network. It is a way to allow an application that utilizes AD LDS as its LDAP directory to authenticate a user via the same password they have in AD. Communication between AD and AD LDS is required for this to work, and the application that is requesting the authentication does not receive a Kerberos ticket for the user.

    Here is an example of what you would put after </schedule> and before </configuration>

    <user-proxy>
    <source-object-class>user</source-object-class>
    <target-object-class>userProxyFull</target-object-class>
    </user-proxy>

Installing the XML file

OK! That was fun, wasn't it? Now that we have an XML file, how do we use it? This is covered in a lot of different materials, but the short version is we have to install it into the AD LDS instance. To install the file, run the following command from the ADAM installation directory (%windir%\ADAM):

Adamsync /install localhost:389 CustomAdamsync.xml

The command above assumes you are running it on the AD LDS server, that the instance is running on port 389 and that the XML file is located in the path of the adamsync command.

What does this do exactly, you ask? The adamsync install command copies the XML file contents into the configurationFile attribute on the AD LDS application partition head. You can view the attribute by connecting to the application partition via LDP or through ADSIEdit. This is a handy thing to know. You can use this to verify for certain exactly what is configured in the instance. Often there are several versions of the XML file in the ADAM directory and it can be difficult to know which one is being used. Checking the configurationFile attribute will tell you exactly what is configured. It won't tell you which XML file was used, but at least you will know the configuration.

The implication of this is that anytime you update the XML file you must reinstall it using the adamsync /install command otherwise the version in the instance is not updated. I've made this mistake a number of times during troubleshooting!
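
As an aside, you can also read the attribute back with the ActiveDirectory Windows PowerShell module instead of LDP or ADSIEdit. This sketch assumes the instance runs on localhost:389 and that the Active Directory Web Service is servicing it:

# Dump the live ADAMSync configuration from the partition head
Get-ADObject -Server localhost:389 -Identity 'DC=fabrikam,DC=com' -Properties configurationFile |
    Select-Object -ExpandProperty configurationFile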

Synchronizing with AD

Finally, we are ready to synchronize! Running the synchronization is the "easy" part assuming we've created a valid XML file, our AD LDS schema has all the necessary classes and attributes, and the source AD data is without issue (duplicate UPN is an example of a known issue).

From the ADAM directory (typically %windir%\ADAM), run the following command:

Adamsync /sync localhost:389 "DC=fabrikam,DC=com" /log adamsync.log

Again, we're assuming you are running the command on the AD LDS server and that the instance is running on port 389. The DN referenced in the command is the DN of your AD LDS application partition. /log is very important (you can name the log anything you want). You will need this log if there are any issues during the synchronization. The log will tell you which object failed and give you a cryptic "detailed" reason as to why. Below is an example of an error due to a duplicate UPN. This is one of the easier ones to understand.

====================================================
Processing Entry: Page 67, Frame 1, Entry 64, Count 1, USN 0
Processing source entry <guid=fe36238b9dd27a45b96304ea820c82d8>
Processing in-scope entry fe36238b9dd27a45b96304ea820c82d8.

Adding target object CN=BillyJoeBob,OU=User Accounts,dc=fabrikam,dc=com. Adding attributes: sourceobjectguid, objectClass, sn, description, givenName, instanceType, displayName, department, sAMAccountName, userPrincipalName, Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:
0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)

. Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:
0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)
===============================================

During the sync, if you are syncing from the Active Directory domain head rather than an OU or container, your objects should begin showing up in the AD LDS instance almost immediately. The objects don't synchronize in any order that makes sense to the human brain, so don't worry if objects are appearing in a random order. There is no progress bar or indication of how the sync is going other than the fact that the log file is growing. When the sync completes you will be returned to the command prompt and your log file will stop growing.

Did it work?

As you can see there is nothing on the command line nor are there any events in any Windows event log that indicate that the synchronization was successful. In this context, successful means completed without errors and all objects in scope, as defined in the XML file, were synchronized. The only way to determine if the synchronization was successful is to check the log file. This highlights the importance of generating the log. Additionally, it's a good idea to keep a reasonable number of past logs so if the sync starts failing at some point you can determine approximately when it started occurring. Management likes to know things like this.

Since you'll probably be automating the synchronization (easy to do with a scheduled task) and not running it manually, it's a good idea to set up a reminder to periodically check the logs for issues. If you've never looked at a log before, it can be a little intimidating if there are a lot of objects being synchronized. The important thing to know is that if the sync was successful, the bottom of the log will contain a section similar to the one below:

Updating the configuration file DirSync cookie with a new value.

Beginning processing of deferred dn references.
Finished processing of deferred dn references.

Finished (successful) synchronization run.
Number of entries processed via dirSync: 16
Number of entries processed via ldap: 0
Processing took 4 seconds (0, 0).
Number of object additions: 3
Number of object modifications: 13
Number of object deletions: 0
Number of object renames: 2
Number of references processed / dropped: 0, 0
Maximum number of attributes seen on a single object: 9
Maximum number of values retrieved via range syntax: 0

Beginning aging run.
Aging requested every 0 runs. We last aged 2 runs ago.
Saving Configuration File on DC=instance1,DC=local
Saved configuration file.

If your log just stops without a section similar to the one above, then the last entry will indicate an error similar to the one above for the duplicate UPN.
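
Since you'll likely schedule the sync rather than run it by hand, here is one way to register the task (the schedule, paths, and account are placeholders to adjust):

schtasks /create /tn "ADAMSync" /sc hourly /ru SYSTEM /tr "%windir%\ADAM\adamsync.exe /sync localhost:389 DC=fabrikam,DC=com /log C:\ADAM\adamsync.log"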

Conclusion and other References

That covers the basics of setting up ADAMSync! I hope this information makes the process more straightforward and gives you some tips for getting it to work the first time! The most important point I can make is to start very simple with the XML file and get something to work. You can always add more attributes to the file later, but if you start from broken it can be difficult to troubleshoot. Also, I highly recommend using <include> over <exclude> when specifying attributes to synchronize. This may be more work for your application team since they will have to know what their application requires, but it will make setting up the XML file and getting a successful synchronization much easier!

ADAMSync excluded objects

As I mentioned earlier, there are some attributes, classes and object types that ADAMSync will not synchronize. The items listed below are hard-coded not to sync. There is no way around this using ADAMSync. If you need any of these items to sync, then you will need to use LDIFDE exports, FIM, or some other method to synchronize them from AD to AD LDS. The scenarios where you would require any of these items are very limited and some of them are dealt with within ADAMSync by converting the attribute to a new attribute name (objectGUID to sourceObjectGUID).

Attributes

cn, currentValue, dBCSPwd, fSMORoleOwner, initialAuthIncoming, initialAuthOutgoing, isCriticalSystemObject, isDeleted, lastLogonTimeStamp, lmPwdHistory, msDS-ExecuteScriptPassword, ntPwdHistory, nTSecurityDescriptor, objectCategory, objectSid (except when being converted to proxy), parentGUID, priorValue, pwdLastSet, sAMAccountType, sIDHistory, supplementalCredentials, systemFlags, trustAuthIncoming, trustAuthOutgoing, unicodePwd, whenChanged

Classes

crossRef, secret, trustedDomain, foreignSecurityPrincipal, rIDSet, rIDManager

Other

Naming Context heads, deleted objects, empty attributes, attributes we do not have permissions to read, objectGUIDs (gets transferred to sourceObjectGUID), objects with del-mangled distinguished names (DEL:\)

Additional Goodies

ADAMSync

AD LDS Replication

Misc Blogs

GOOD LUCK and ENJOY!

Kim "Sync or swim" Nichols

Revenge of Y2K and Other News


Hello sports fans!

So this has been a bit of a hectic time for us, as I'm sure you can imagine. Here's just some of the things that have been going on around here.

Last week, thanks to a failure on the time servers at USNO.NAVY.MIL, many customers experienced a time rollback to CY 2000 on their Active Directory domain controllers. Our team worked closely with the folks over at Premier Field Engineering to explain the problem, document resolutions for the various issues that might arise, and describe how to inoculate your DCs against a similar problem in the future. If you were affected by this problem then you need to read this post. If you weren't affected, and want to know why, then you need to read this post. Basically, we think you need to read this post. So...here's the link to the AskPFEPlat blog.

In other news, Ned Pyle has successfully infiltrated the Product Group and has started blogging on The Storage Team blog. His first post is up, and I'm sure there will be many more to follow. If you've missed Ned's rare blend of technical savvy and sausage-like prose, and you have an interest in Microsoft's DFSR and other storage technologies, then go check him out.

Finally...you've probably noticed the lack of activity here on the AskDS blog. Truthfully, that's been the result of a confluence of events -- Ned's departure, the Holiday season here in the US, and the intense interest in Windows 8 and Windows Server 2012 (and subsequent support calls). Never fear, however! I'm pleased to say that your questions to the blog have been coming in quite steadily, so this week I'll be posting an omnibus edition of the Mail Sack. We also have one or two more posts that will go up between now and the end of the year, so there's that to look forward to. Starting with the new calendar year, we'll get back to a semi-regular posting schedule as we get settled and build our queue of posts back up.

In the mean time, if you have questions about anything you see on the blog, don't hesitate to contact us.

Jonathan "time to make the donuts" Stephens
