

No more yellow banner (and the gallery of things that are not work)

Well, my wife complained one too many times about the old blog image. Now it’s a nice soothing blue with a more worldwide bent, plus some updated “artwork” courtesy of Visio and MS Paint (sorry for cutting you off, Australia & New Zealand; sacrifices had to be made to the pixel gods). I also updated our About page to reflect modern times, such as how we support ADFS and AppLocker. Obviously, no mail sack this week; things were just too busy and a lot of the questions were repeats. I do have a new DFSR series in the works that I believe many of you will find useful, so look for that to start soon. I believe Jonathan has some more PKI things underway as well.

Enough of work. Here’s some stuff that has nothing to do with pleasing your boss:

[Gallery images not preserved in this archive.]

Ned “back on duty” Pyle


New Directory Services Content 8/22-8/28

KB Articles

There are several articles below that are related to support of Single Label Domains, Disjointed Namespaces and Discontiguous Namespaces.  These will soon be linked to the DNS Namespace Planning Solution Center.  More on that here.

Article ID – Title

2273047 – User Account Control (UAC) and Windows Explorer
2360265 – KRB_AP_ERR_BAD_INTEGRITY error when server tries to delegate in mixed Read-Only DC and Windows Server 2003 DC environment
2328240 – Event ID 4107 or 11 is logged in the Application Log in Windows Vista or Windows Server 2008 and later
2030310 – TerminalServices-Licensing 4105 – The Terminal Services license server cannot update the license attributes for user “<UserName>” in Active Directory Domain “<DomainName>”
2269838 – Microsoft Exchange compatibility with Single Label Domains, Disjointed Namespaces, and Discontiguous Namespaces
2379064 – Microsoft Biztalk Server compatibility with Single Label Domains, Disjointed Namespaces, and Discontiguous Namespaces
2379369 – Microsoft Office Communications Server compatibility with Single Label Domains, Disjointed Namespaces, and Discontiguous Namespaces
2379367 – Microsoft Forefront compatibility with Single Label Domains, Disjointed Namespaces, and Discontiguous Namespaces
2379371 – Microsoft Office Outlook compatibility with Single Label Domains, Disjointed Namespaces, and Discontiguous Namespaces
2379373 – Microsoft Office SharePoint compatibility with Single Label Domains, Disjointed Namespaces, and Discontiguous Namespaces
2379375 – Microsoft SQL Server compatibility with Single Label Domains, Disjointed Namespaces, and Discontiguous Namespaces
2379380 – Microsoft System Center product compatibility with Single Label Domains, Disjointed Namespaces, and Discontiguous Namespaces

Blogs

Moving Your Organization from a Single Microsoft CA to a Microsoft Recommended PKI

Forcing Afterhours User Logoffs

Don't mess about with USMT's included manifests!

ACT: Suppressing Elevation Prompts for Legacy Applications

Use the DirectorySearcher .NET Class and PowerShell to Search Active Directory

Use the PowerShell [adsiSearcher] Type Accelerator to Search Active Directory

Query Active Directory and Ping Each Computer in the Domain by Using PowerShell

Query Active Directory with PowerShell and Run WMI Commands


The Case of the Enormous CA Database

Hello, faithful readers! Jonathan here again. Today I want to talk a little about Certification Authority monitoring and maintenance. This topic was brought to my attention by a recent case that I had where a customer’s CA database had grown to rather elephantine proportions over the course of many months quite unbeknownst to the administrators. In fact, the problem didn’t come to anyone’s attention until the CA database had consumed nearly all of the 55 GB partition on which it resided. How many of you may be in this same situation and be completely unaware of it? Hmmm? Well, in this post, I’ll first go over the details of the issue and the steps we took to resolve the immediate crisis. In the second part, I’ll cover some processes and tools you can put in place to both maintain your CA database and also alert you to possible problems that may increase its size.

The Issue

Once upon a time, Roger contacted Microsoft Support and reported that he had a problem. His Windows Server 2003 Enterprise CA database, which had been given its own partition, had grown to over 50 GB in size, and was still growing. The partition itself was only 55 GB in size, so Roger asked if there was any way to compact the CA database before the CA failed due to a lack of disk space.

Actually, compacting the CA database is a simple process, and while this isn’t a terribly common request we’re pretty familiar with the steps. What made this case so unusual was the sheer size of the database file. Previously, the largest CA database I’d ever seen was only about 21 GB, and this one was over twice that size! But no matter. The principles are the same regardless, and so we went to it.

Compacting the CA Database

Compacting a CA database is essentially a two-step process. The first step is to delete any unnecessary rows from the CA database. This will leave behind what we call white space in the database file that can be reused by the CA for any new records that it adds. If we just removed the unneeded records the size of the database file would not be reduced, but we could be confident that the database file would grow no larger in size.

If the database file were smaller, this might be an acceptable solution. In this case, the size of the database file relative to the size of the partition on which it resided mandated that we also compact the database file itself.

If you are familiar with compacting the Active Directory database on a domain controller, then you will realize that this process is identical. A new database file is created and all the active records are copied from the old database file to the new database file, thus removing any of the white space. When finished, the old database file is deleted and the new file is renamed in place with the name of the old file. While actually performing the compaction, Certificate Services must be disabled.

At the end of this process, we should have a significantly smaller database file, and with appropriate monitoring and maintenance in the future we can ensure that it never reaches such difficult to manage proportions again.

What to Delete?

What rows can we safely delete from the CA database? First, you need a basic understanding of what exactly is stored there. When a new certificate request is submitted to the CA, a new row is created in the database. As the CA processes the request, the various fields in that row are updated, and the row’s status indicates where in the process the request currently stands. What are the possible states for each row?

  • Pending - A pending request is basically on hold until an Administrator manually approves the request. When approved, the request is re-submitted to the CA to be processed. On a Standalone CA, all certificate requests are pended by default. On an Enterprise CA, certificate requests are pended if the option to require CA Manager approval is selected in the certificate template.
  • Failed - A failed request is one that has been denied by the CA because the request isn’t suitable per the CA’s policy, or there was an error encountered while generating the certificate. One example of such an error is if the certificate template is configured to require key archival, but no Key Recovery Agents are configured on the CA. Such a request will fail.
  • Issued - The request has been processed successfully and the certificate has been issued.
  • Revoked - The certificate request has been processed and the certificate issued, but the administrator has revoked the certificate.

In addition, issued and revoked certificates can either be time valid or expired.

These states, and whether or not a certificate is expired, need to be taken into account when considering which rows to delete. For example, you do not want to delete the row for a time valid, issued certificate, and in fact, you won’t be able to. You won’t be able to delete the row for a time valid, revoked certificate either because this information is necessary in order for the CA to periodically build its certificate revocation list (CRL).

Once a certificate has expired, however, then Certificate Services will allow you to delete its row. Expired certificates are no longer valid on their face, so there is no need to retain any revocation status. On the other hand, if you’ve enabled key archival then you may have private keys stored in the database row as well, and if you delete the row you’d never be able to recover those private keys.

That leaves failed and pending requests. These rows are just requests; there are no issued certificates associated with them. In addition, while technically a failed request can be resubmitted to the CA by the Administrator, unless the cause of the original failure is addressed there is little purpose in doing so. In practice, you can safely delete failed requests. Any pending requests should probably be examined by an Administrator before you delete them. A pending request means that someone out there has an outstanding certificate request for which they are patiently waiting on an answer. The Administrator should go through and either issue or deny any pending requests to clear that queue, rather than just deleting the records.
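To make those rules concrete, here is a small decision routine. This is purely an illustrative sketch of the logic described above; the states and flags are modeled for the example and are not anything certutil or the CA actually exposes.

```python
def can_delete(status, expired=False, key_archived=False):
    """Decide whether a CA database row is safe to delete, per the rules above.

    status: one of "pending", "failed", "issued", "revoked"
    expired: True if the issued/revoked certificate is past its validity period
    key_archived: True if an archived private key is stored with the row
    Returns (deletable, reason).
    """
    if status == "failed":
        return True, "failed requests can be deleted safely"
    if status == "pending":
        return False, "an admin should issue or deny pending requests first"
    if status in ("issued", "revoked"):
        if not expired:
            return False, "time-valid certificates are still needed (e.g., for CRL building)"
        if key_archived:
            return False, "deleting would make the archived private key unrecoverable"
        return True, "expired certificates carry no revocation obligations"
    raise ValueError(f"unknown status: {status}")
```

For example, `can_delete("revoked", expired=False)` comes back False, because the CA still needs that row to build its CRL.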

In this customer’s case, we decided to delete all the failed requests. But first, we had to determine exactly why the database had grown to such huge proportions.

Fix the Root Problems, First

Before you start deleting the failed requests from the database, you should ensure that you have addressed any configuration issues that led to these failures to begin with. Remember, Roger reported that the database was continuing to grow in size. It would make little sense to start deleting failed requests -- a process that requires that the CA be up and running -- if there are new requests being submitted to the CA and subsequently failing. The rows you delete could just be replaced by more failed rows and you’ll have gained nothing.

In this particular case, we found that there were indeed many request failures still being reported by the CA. These had to be addressed before we could actually do anything about the size of the CA database. When we checked the application log, we saw that Certificate Services was recording event ID 53 warnings and event ID 22 errors for multiple users. Let’s look at these events.

Event ID 53

Event ID 53 is a warning event indicating that the submitted request was denied, and containing information about why it was denied. This is a generic event whose detailed message takes the form of:

Certificate Services denied request %1 because %2. The request was for %3. Additional information: %4

Where:

%1: Request ID
%2: Reason request was denied
%3: Account from which the request was submitted
%4: Additional information

In this particular case, the actual event looked like this:

Event Type:   Warning
Event Source: CertSvc
Event Category:      None
Event ID:     53
Date:         <date>
Time:         <time>
User:         N/A
Computer:     <CA server>
Description:
Certificate Services denied request 22632 because The EMail name is unavailable and cannot be added to the Subject or Subject Alternate name. 0x80094812 (-2146875374).  The request was for CORP02\jackburton.  Additional information: Denied by Policy Module

This event means that the certificate template is configured to include the user’s email address in the Subject field, the Subject Alternative Name extension, or both, and that this particular user does not have an email address configured. When we looked at the users for which this event was being recorded, they were all either service accounts or test users. These are accounts for which there would probably be no email address configured under normal circumstances. Contributing to the problem was the fact that user autoenrollment had been enabled at the domain level by policy, and the Domain Users group had permissions to autoenroll for this particular template.

In general, one probably shouldn’t configure autoenrollment for service accounts or test accounts without specific reasons. In this case, simple User certificates intended for “real” users certainly don’t apply to these types of accounts. The suggestion in this case would be to create a separate OU wherein user autoenrollment is disabled by policy, and then place all service and test accounts in that OU. Another option is to create a group for all service and test accounts, and then deny that group Autoenroll permissions on the template. Either way, these particular users won’t attempt to autoenroll for the certificates intended for your users which will eliminate these events.

For information on troubleshooting other possible causes of these warning events, check out this link.

Event ID 22

Event ID 22 is an error event indicating that the CA was unable to process the request due to an internal failure. Fortunately, this event also tells you what the failure was. This is a generic event whose detailed message takes the form of:

Certificate Services could not process request %1 due to an error: %2. The request was for %3. Additional information: %4

Where:

%1: Request ID
%2: The internal error
%3: Account from which the request was submitted
%4: Additional information

In this particular case, the actual event looked like this:

Event Type:   Error
Event Source: CertSvc
Event Category:      None
Event ID:     22
Date:         <date>
Time:         <time>
User:         N/A
Computer:     <CA server>
Description:
Certificate Services could not process request 22631 due to an error: Cannot archive private key.  The certification authority is not configured for key archival. 0x8009400a (-2146877430).  The request was for CORP02\david.lo.pan.  Additional information: Error Archiving Private Key

This event means that the certificate template is configured for key archival but the CA is not. A CA will not accept the user’s encrypted private key in the request if there is no valid Key Recovery Agent (KRA) configured. The fix for this is pretty simple for our current purposes: disable key archival in the template. If you actually need to archive keys for this particular template, then you should set that up before you start removing failed requests from your database. Here are some links to more information on that topic:

Key Archival and Recovery in Windows Server 2003
Key Archival and Recovery in Windows Server 2008 and Windows Server 2008 R2

Template, Template, Where’s the Template?

What’s the fastest way to determine which template is actually associated with each of these events? You can find that by looking at the failed request entry in the Certification Authority MMC snap-in (certsrv.msc). If you have more than a couple hundred failed requests, however, finding the one you actually want can be difficult. This is where filtering the view comes in handy.

1. In the Certification Authority MMC snap-in, right-click on Failed Requests, select View, then select Filter….


2. In the Filter dialog box, click Add….


3. In the New Restriction dialog box, set the Request ID to the value that you see in the event, and click Ok.


4. In the Filter dialog box, click Ok.


5. Now you should see just the failed request designated in the event. Right-click on it, select All Tasks, and then select View Attributes/Extensions….


6. In the properties for this request, click on the Extensions tab. In the list of extensions, locate Certificate Template Information. The template name will be shown in the extension details.


This is the name of the template whose settings you should review and correct, if necessary.

Once the root problems causing the failed requests have been resolved, monitor the Application event log to ensure that Certificate Services is not logging any more failed requests. Some failed requests in a large environment are expected. That’s just the CA doing its job. What you’re trying to eliminate are the large bulk of the failures caused by certificate template and CA misconfiguration. Once this is complete, you’re ready to start deleting rows from the database.

Deleting the Failed Requests

The next step in this process is to actually delete the rows using our trusty command line utility certutil.exe. The -deleterow verb, introduced in Windows Server 2003, can be used to delete rows from the CA database. You just provide it with the type of records you want deleted and a past date (if you use a date equal to the current date or later, the command will fail). Certutil.exe will then delete the rows of that type where the date the request was submitted to the CA (or the date of expiration, for issued certificates) is earlier than the date you provide. The supported types of records are:

Name      Description                        Type of date
Request   Failed and pending requests        Submission date
Cert      Expired and revoked certificates   Expiration date
Ext       Extension table                    N/A
Attrib    Attribute table                    N/A
CRL       CRL table                          Expiration date

For example, if you want to delete all failed and pending requests submitted by January 22, 2001, the command is:

C:\>Certutil -deleterow 1/22/2001 Request

The only problem with this approach is that certutil.exe will only delete about 2,000 - 3,000 records at a time before failing due to exhaustion of the version store. Luckily, we can wrap this command in a simple batch file that runs the command over and over until all the designated records have been removed.

@echo off
:Top
Certutil -deleterow 8/31/2010 Request
If %ERRORLEVEL% EQU -939523027 goto Top

This batch file runs certutil.exe with the -deleterow verb. If the command fails with the specific error code indicating that the version store has been exhausted, the batch file simply loops and the command is executed again. Eventually, the certutil.exe command will exit with an ERRORLEVEL value of 0, indicating success. The script will then exit.

Every time the command executes, it will display how many records were deleted. You may therefore want to pipe the output of the command to a text file from which you can total up these values and determine how many records in total were deleted.
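If you do pipe the output to a text file, a few lines of script can total the counts for you. This is an illustrative sketch that assumes each successful pass prints a line along the lines of "2000 rows deleted"; the exact wording varies between certutil versions, so adjust the pattern to match your captured output.

```python
import re

def total_deleted(log_text):
    """Sum the per-pass deletion counts from captured certutil output.

    Assumes each pass printed a line containing something like
    "2000 rows deleted" (a hypothetical format -- check your log).
    """
    total = 0
    for match in re.finditer(r"(\d[\d,]*)\s+rows? deleted", log_text, re.IGNORECASE):
        # Strip thousands separators before converting to int
        total += int(match.group(1).replace(",", ""))
    return total

# Example usage against the file you piped the batch output into:
#   total_deleted(open("deleterow.log").read())
```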

In Roger’s case, the total number of deleted records came to about 7.8 million rows. Yes…that is 7.8 million failed requests. The script above ran for the better part of a week, but the CA was up and running the entire time so there was no outage. Indeed, the CA must be up and running for the certutil.exe command to work as certutil.exe communicates with the ICertAdmin COM interface of Certificate Services.

That is not to say that one should not take precautions ahead of time. We increased the base CRL publication interval to seven days and published a new base CRL immediately before starting to delete the rows. We also disabled delta CRLs temporarily while the script was running. We did this so that even if something unexpected happened, clients would still be able to check the revocation status of certificates issued by the CA for an extended period, giving us the luxury of time to take any necessary remediation steps. As expected, however, none were required.
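For reference, those CRL precautions can be scripted with certutil’s -setreg verb. Treat this as a sketch run from an elevated command prompt on the CA: the values mirror the steps described above (a seven-day base CRL, delta CRLs disabled), and Certificate Services must be restarted before the new registry values take effect.

```shell
:: Publish base CRLs weekly, and disable delta CRLs while the cleanup runs
certutil -setreg CA\CRLPeriodUnits 7
certutil -setreg CA\CRLPeriod "Days"
certutil -setreg CA\CRLDeltaPeriodUnits 0

:: Restart the CA so the settings take effect, then publish a fresh base CRL
net stop certsvc && net start certsvc
certutil -crl
```

Remember to restore your normal publication intervals (and re-enable delta CRLs, if you use them) once the cleanup is complete.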

And Finally, Compaction

The final step in this process is compacting the CA database file to remove all the white space left behind by deleting the failed requests. This process is identical to defragmenting and compacting Active Directory’s ntds.dit file, as Certificate Services uses the same underlying database technology as Active Directory -- the Extensible Storage Engine (ESE).

Just as with AD, you must have free space on the partition equal to or greater than the database file size. As you’ll recall, we certainly didn’t have that in this case what with a database of 50 GB on a 55 GB partition. What do you do in this case? Move the database and log files to a partition with enough free space, of course.

Fortunately, Roger’s backing store was on a Storage Area Network (SAN), so it was trivial to slice off a new 150 GB partition and move the database and log files to the new, larger partition. We didn’t even have to modify the CA configuration as Roger’s storage admins were able to just swap drive letters since the only thing on the original partition was the CertLog folder containing the CA database and log files. Good planning, that.

With enough free space now available, all is ready to compact the database. Well…almost. You should first take the precaution of backing up the CA database prior to starting just in case something goes wrong. The added benefit to backing up the CA database is that you’ll truncate the database log files. In Roger’s case, after deleting 7.8 million records there were several hundred megabytes of log files. To back up just the CA database, run the following command:

C:\>Certutil -backupDB backupDirectory

The backup directory will be created for you if it does not already exist, but if it does exist, it must be empty. Once you have the backup, copy it somewhere safe. And now we’re finally ready to proceed.

To compact the CA database, stop and then disable Certificate Services. The CA cannot be online during this process. Next, run the following command:

C:\>Esentutl /d Path\CaDatabase.edb

Esentutl.exe will take care of the rest. In the background, esentutl.exe will create a temporary database file and copy all the active records from the current database file to the new one. When the process is complete, the original database file will be deleted and the temporary file renamed to match the original. The only difference is that the database file should be much smaller.

How much smaller? Try 2.8 GB. That’s right. By deleting 7.8 million records and compacting the database, we recovered over 47 GB of disk space. Your own mileage may vary, though, as it depends on the number of failed requests in your own database. To finish, we just copied the now much smaller database and log files to the original drive and then re-enabled and restarted Certificate Services.

While very time consuming, simply due to the sheer number of failed requests in the database, overall the operation went off without a hitch. And everyone lived happily ever after.

Preventative Maintenance and Monitoring

Now that the CA database is back down to its fighting weight, how do you make sure you keep it that way? There are actually several things you can do, including regular maintenance and, if you have the capability, closer monitoring of the CA itself.

Maintenance

You’ll remember that it was not necessary to take the CA offline while deleting the failed requests. We did take precautions by modifying the CRL publication interval but fortunately that turned out to be unnecessary. Since no outage is required to remove failed requests from the CA database, it should be pretty simple to get approval to add it to your regular maintenance cycle. (You do have one, right?) Every quarter or so, run the script to delete the failed requests. You can do it more or less often as is appropriate for your own environment.

You don’t have to compact the CA database each time. Remember, the white space will simply be reused by the CA for processing new requests. Over time, you may find that you reach a sort of equilibrium, especially if you also have the freedom to delete expired certificates as well (i.e., no Key Archival), where the CA database just doesn’t get any bigger. Rows are deleted and new rows are created in roughly equal numbers, and the space within the database file is reused over and over -- a state of happy homeostasis.

If you want, you can even use a scheduled task to perform this maintenance automatically every three months. The batch file above can also be rewritten in VBScript or even PowerShell. Simply add some code to email yourself a report when the deletion process is finished; there are plenty of code samples available on the web for sending email from both VBScript and PowerShell. Bing it!

Monitoring

In addition to this maintenance, you can also use almost any monitoring or management software to watch for certain key events on the CA. Those key events? I already covered two of them above -- event IDs 53 and 22. For a complete list of events recorded by Certificate Services, look here.
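If you roll your own monitoring instead, the filtering logic is simple. Here is an illustrative sketch that tallies the CertSvc events worth alerting on; the (event_id, source, message) tuple format is an assumption for the example, standing in for whatever record shape your collection tool produces.

```python
from collections import Counter

# The key Certificate Services events discussed above
WATCHED_EVENTS = {53: "request denied", 22: "request processing error"}

def tally_certsvc_events(events):
    """Count watched CertSvc events by event ID.

    events: iterable of (event_id, source, message) tuples.
    Returns a Counter keyed by event ID, counting only CertSvc
    events whose ID appears in WATCHED_EVENTS.
    """
    counts = Counter()
    for event_id, source, _message in events:
        if source == "CertSvc" and event_id in WATCHED_EVENTS:
            counts[event_id] += 1
    return counts
```

A spike in either count is your cue to go template-hunting with the filtering steps shown earlier, before the failed requests pile up again.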

If you have Microsoft Operations Manager (MOM) 2005 or System Center Operations Manager (SCOM) 2007 deployed, and you have Windows Server 2008 or Windows Server 2008 R2 CAs, then you can download the appropriate management pack to assist you with your monitoring.

MOM 2005: Windows Server 2008 Active Directory Certificate Services Management Pack for Microsoft OpsMgr 2005
SCOM 2007 SP1: Active Directory Certificate Services Monitoring Management Pack

The management packs include event monitoring as well as prescriptive guidance and troubleshooting steps to make managing your PKI much simpler. These management packs are supported only for CAs running on Windows Server 2008 or higher, so that’s yet one more reason to upgrade those CAs.

Conclusion

Like any other infrastructure service in your enterprise environment, the Windows CA does require some maintenance and monitoring to maintain its viability over time. If you don’t pay attention to it, you may find yourself in a situation similar to Roger’s, not noticing the problem until it is almost too late to do anything to prevent an outage. With proper monitoring, you can become aware of any serious problems almost as soon as they begin, and with regular maintenance you prevent such problems from ever occurring. I hope you find the information in this post useful.

Jonathan “Pork Chop Express” Stephens


Microsoft’s Support Statement Around Replicated User Profile Data

[Note from Ned: this article was created and vetted by the Microsoft development teams for DFS Replication, DFS Namespaces, Offline Files, Folder Redirection, Roaming User Profiles, and Home Folders. Due to some TechNet publishing timelines, it was decided to post here in the interim. This article will become part of the regular TechNet documentation tree at a later date. The primary author of this document is Mahesh Unnikrishnan, a Senior Program Manager who works on the DFSR, DFSN, and NFS product development teams. You can find other articles by Mahesh at the MS Storage Team blog: http://blogs.technet.com/b/filecab.

The purpose of this article is to clarify exactly which scenarios are supported for user data profiles when used with DFSR, DFSN, FR, CSC, RUP, and HF. It also provides explanation around why the unsupported scenarios should not be used. When you finish reading this article I recommend reviewing http://blogs.technet.com/b/askds/archive/2009/02/20/understanding-the-lack-of-distributed-file-locking-in-dfsr.aspx ]

Deployment scenario 1: Single file server, replicated to enable centralized backup

Consider the following illustrative scenario. Contoso Corporation has two offices – a main office in New York and a branch office in London. The London office is a smaller office and does not have dedicated IT staff on site. Therefore, data generated at the London office is replicated over the WAN link to the New York office for backup.

Contoso has deployed a file server in the London branch office. User profiles and redirected home folders are stored on shares exported by that file server. The contents of these shares are replicated to the central hub server in the New York office for centralized backup and data management. In this scenario, a DFS namespace is not configured. Therefore, users will not be automatically redirected to the central file server if the London file server is unavailable.

[Diagram: the London branch file server replicating home folder and user profile data over the WAN to the hub file server in New York.]

As illustrated by the diagram above, there is a file server hosting home folders and user profile data for all employees in Contoso’s London branch office. The home folder and user profile data is replicated using DFS Replication from the London file server to the central file server in the New York office. This data is backed up using backup software like Microsoft’s System Center Data Protection Manager (DPM) at the New York office.

Note that in this scenario, all user initiated modifications occur on the London file server. This holds true for both user profile data and the data stored in users’ home folders. The replica in the New York office is only for backup purposes and is not being actively modified or accessed by users.

There are a few variants of this deployment scenario, depending on whether a DFS Namespace is configured. The following sub-sections detail these variants and specify which of them are supported.

Scenario 1A: DFS Namespace is not configured

[Supported Scenario]

Scenario highlights:

  • A single file server is deployed per branch office. Home folders and roaming user profiles for users in that branch office are stored on the branch file server.
  • This data is replicated using DFS Replication over the WAN from (multiple) branch file servers to a hub server for centralized backup (using DPM).
  • DFS Namespace is not configured.

Specifics:

  • In this scenario, in effect, only one copy of the data is modified by end-users, i.e. the data hosted on the branch office file server (London file server, in this example).
  • The replica hosted by the file server in the hub site (New York file server, in this example) is only for backup purposes and users are not actively directed to that content.
  • In this scenario, DFS Namespaces is not configured.
  • Folder redirection may be configured for users in the branch office with data stored on a share hosted by the branch office file server.
  • Roaming user profiles may be configured with user profile data stored on the branch office file server.
  • Offline Files (Client Side Caching) may be configured, with the data stored on the branch office file server made available offline to users in the branch office.

Scenario 1B: DFS Namespace is configured – single link target

[Supported Scenario]

This is a variation of the above scenario, with the only difference being that DFS Namespaces is set up to create a unified namespace across all shares exported by the branch office file server. However, in this scenario, all namespace links must have only one target [1]: the share hosted by the branch office file server.

[1] Deployment scenarios where namespace links have multiple targets are discussed later in this document.

Scenario highlights:

  • A single file server is deployed per branch office. Home folders and roaming user profiles for users in that branch office are stored on the branch file server.
  • This data is replicated using DFS Replication over the WAN from (multiple) branch file servers to a hub server for centralized backup (using DPM).
  • A DFS Namespace is configured in order to create a unified namespace. However, namespace links do not have multiple targets – the share on the central file server is not added as a DFS-N link target.

Specifics:

  • In this scenario, in effect, only one copy of the data is modified by end-users, i.e. the data hosted on the branch office file server (London file server, in this example).
  • The replica hosted by the file server in the hub site (New York file server) is only for backup purposes and users are not actively directed to that content.
  • In this scenario, a DFS Namespace may be configured, but multiple targets are not set up for links. In other words, none of the namespace links point to replicas of the share hosted on the branch office file server as well as the central file server. Namespace links point only to the share hosted by the branch office file server.
  • Therefore, if the branch office file server were to fail, there would be no automatic failover of clients to the central file server.
  • Folder redirection may be configured for users in the branch office with data stored on a share hosted by the branch office file server.
  • Roaming user profiles may be configured with user profile data stored on the branch office file server.
  • Offline Files (Client Side Caching) may be configured, with the data stored on the branch office file server made available offline to users in the branch office.
Support Statement … (deployment scenario 1):

Both variants of this deployment scenario are supported. The key point to remember for this deployment scenario is that only one copy of the data is actively modified and used by client computers, thereby avoiding issues caused by replication latencies and users accessing potentially stale data from the file server in the main office (which may not be in sync).

The following use-cases will work in this deployment scenario:

  • TS farm using the file server in the branch as backend store.
  • Laptops in branch office with offline files configured against the branch file server.
  • Regular desktops with folder redirection configured.

In this scenario, the following technologies are supported and will work:

  • Folder redirection to the file server in the branch.
  • Client side caching/Offline files.
  • Roaming user profiles.

Designing for high availability

DFS Replication in Windows Server 2008 R2 includes the ability to add a failover cluster as a member of a replication group. To do so, refer to the TechNet article ‘Add a Failover Cluster to a Replication Group’. Offline files and Roaming User Profiles can also be configured against a share hosted on a Windows failover cluster.

For the above mentioned deployment scenarios, the branch office file server may be deployed on a failover cluster to increase availability. This ensures that the branch office file server is resilient to hardware and software related outages affecting individual cluster nodes and is able to provide highly available file services to users in the branch office.

Deployment scenario 2: Multiple (replica) file servers for geo-location

Consider the same scenario described above with a few differences. Contoso Corporation has two offices – a main office in New York and a branch office in London. Contoso has deployed a file server in the London branch office. User profiles and redirected home folders are stored on shares exported by that file server. The contents of these shares are replicated to the central hub server in the New York office for centralized backup and data management.

In this scenario, a DFS namespace is configured in order to enable users to be directed to the replica closest to their current location. Therefore, namespace links have multiple targets – the file server in the branch as well as the central file server. Optionally, the namespace may be configured to prefer issuing referrals to shares hosted by the branch office file server by ordering referrals based on target priority.

The replica in the central hub/main site may optionally be configured to be a read-only DFS replicated folder.

Scenario 2A: DFS Namespaces is configured – multiple link target configuration

[Unsupported Scenario]

Scenario highlights:

  • A single file server is deployed per branch office. Home folders and roaming user profiles for users in that branch office are stored on the branch file server.
  • This data is replicated using DFS Replication over the WAN from (multiple) branch file servers to a hub server for centralized backup (using DPM).
  • A DFS Namespace is configured in order to create a unified namespace.
  • Namespace links have multiple targets – the share on the central file server is added as a second DFS-N link target.
  • The namespace may optionally be configured to prefer issuing referrals to the branch office file server. This may be done because administrators require that clients be redirected to the central file server only when the branch file server is unavailable.

Specifics:

  • In this scenario, in effect, end-users may be directed to any of the available replicas of data. It is expected that in the normal course of events, users will modify the data hosted on the branch office file server.
  • A DFS Namespace is configured and multiple targets are set up for namespace links. In other words, namespace links point to replicas of the share hosted on both the branch office file server as well as the central file server. The namespace may be configured to prefer issuing referrals to the share located on the branch office file server.
  • If the branch office file server were to be unavailable, users would be redirected to the replica on the central hub server.
  • Administrators may also require that roaming users be directed to the copy of their data or user profile that is located on a server closest to their physical location (e.g., for users travelling to another site/branch office, this would be the replica in that office).
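The referral behavior described in this scenario can be sketched with a toy model. This is a simplification (real DFS-N referral ordering also weighs Active Directory site costs and target-priority classes, and server names here are hypothetical), but it shows why a client silently lands on the hub replica whenever the preferred branch target does not answer:

```python
# Simplified model of DFS-N referral ordering with target priority.
# Server names and priority values are hypothetical illustrations.

def order_referrals(targets):
    """Sort link targets so lower priority values are tried first."""
    return sorted(targets, key=lambda t: t["priority"])

def pick_target(targets, is_up):
    """The client walks the ordered referral list and uses the first reachable target."""
    for t in order_referrals(targets):
        if is_up(t["server"]):
            return t["server"]
    return None

targets = [
    {"server": "NY-HUB",  "share": "profiles", "priority": 1},   # central hub replica
    {"server": "LON-FS1", "share": "profiles", "priority": 0},   # preferred branch server
]

# Normal operations: the branch server answers first.
print(pick_target(targets, lambda s: True))            # LON-FS1

# Branch server down (or a transient glitch): the client silently
# fails over to the hub replica, which may hold stale data.
print(pick_target(targets, lambda s: s != "LON-FS1"))  # NY-HUB
```

The second call is the crux of the support problem: the failover is invisible to the user, so they have no idea they may now be working against an out-of-sync copy.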
Scenario 2B: DFS Namespaces is configured – multiple link targets, read-only replica on central/hub server

[Unsupported Scenario]

Scenario highlights:

  • In this scenario, the exact same configuration described above (scenario 2A) applies with only one difference - the replica on the central server is configured to be a read-only DFS Replicated folder.

Specifics:

  • In this scenario, in effect, end-users may be directed to any of the available replicas of data. It is expected that in the normal course of events, users will modify the data hosted on the branch office file server.
  • A DFS Namespace is configured and multiple targets are set up for namespace links. In other words, namespace links point to replicas of the share hosted on both the branch office file server as well as the central file server. The namespace may be configured to prefer issuing referrals to the share located on the branch office file server.
  • The replica on the central file server has been configured to be a read-only replica.
  • If the branch office file server were to be unavailable, users would be redirected to the replica on the central hub server. At that point, however, the share becomes read-only for applications and users, since this replica is a read-only replica.
  • Administrators may also require that roaming users be directed to the copy of their data or user profile that is located on a server closest to their physical location (e.g., for users travelling to another site/branch office, this would be the replica in that office).

What can go wrong?

  • In the deployment scenarios listed above (2A, 2B), it is not guaranteed that replication of home folder or user profile data between the branch office file server and the central file server is always up to date. Many factors influence replication status: large replication backlogs caused by many files changing frequently, files that did not replicate because their handles were never closed, heavy system load, bandwidth throttling, and replication schedules.
  • Since DFS Replication does not perform transactional replication of user profile data (i.e. replicating all the changes to a given profile, or nothing at all), it is possible that some files belonging to a user profile may have replicated, whilst some others may not have replicated by the time the user was failed over to the server at the central site.
  • The DFS Namespace client component may fail the client computer over to the central file server if it notices transient network glitches or specific error codes when accessing data over SMB from the branch file server. In other words, failover is not limited to times when the branch file server is actually down; momentary glitches and some transient file-access error codes may also trigger a redirect.
  • Therefore, there is a potential for users to be redirected to the central file server even if the namespace configuration was set to prefer referrals to the branch file server. If the replica data on the central file server is not in sync, users may be impacted in the following ways.

As a result of the behavior described above, the following consequences may be observed:

  • If the central file server is a read-write replica:

    • Roaming user profiles: User profile data may get corrupted since all the changes made by the user in their last logon may not have replicated to the central server. Therefore, the user may end up modifying a stale or incomplete copy of the roaming profile during their next logon, thus resulting in potential profile corruption.
    • Offline Files (CSC)/Folder Redirection: Users may experience data loss or corruption, since the data on the central replica may be stale or out of sync with the data on the branch office file server. Their latest modifications may not be persisted, and they may be presented with a stale copy of the data.
    • Since DFS Replication is a multi-master replication engine with last-writer-wins conflict resolution semantics, the stale copy of data that was edited on the central file server will win replication conflicts and will overwrite the fresher copy of data that existed on the branch file server, but didn’t replicate out.
  • If the central file server is a read-only replica:

    • When a user is directed to a read-only replica located on the central file server (Scenario 2B), applications and users will not be able to modify files stored on that share. This leads to user confusion since a file that could be modified just a while earlier has suddenly become read-only.
    • Roaming user profiles: If the user is directed to the central file server (read-only replica), the profile may be in a corrupt state since all changes made by the user in their last logon may not yet have replicated to the central server. Additionally, the roaming user profiles infrastructure will be unable to write back any subsequent changes to the profile when the user is logged off, since the replica hosting the user profile data is read-only.
    • Offline Files (CSC)/Folder Redirection: Users may experience data loss or corruption, since the data on the central replica may be stale or out of sync with the data on the branch office file server. Their latest modifications may not be persisted, and they may be presented with a stale copy of the data. Additionally, users will notice sync errors or conflicts for files that have been modified on their computers, and they will not be able to resolve these conflicts because the server copy is now read-only (it is hosted on a read-only replicated folder).
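The read-write failure mode above comes down to last-writer-wins semantics. The toy model below illustrates only that resolution rule (it is not DFSR's actual conflict algorithm, which compares update times and version information and parks losers in the ConflictAndDeleted folder; file names and timestamps are hypothetical):

```python
# Toy model of last-writer-wins conflict resolution.
# Timestamps and file names are hypothetical illustrations.

def resolve(replica_a, replica_b):
    """Return the winning version of each file: the most recent write wins."""
    merged = {}
    for name in set(replica_a) | set(replica_b):
        versions = [v for v in (replica_a.get(name), replica_b.get(name)) if v]
        merged[name] = max(versions, key=lambda v: v["mtime"])
    return merged

# User saves fresh work on the branch server at t=100...
branch = {"report.docx": {"mtime": 100, "body": "fresh edits (not yet replicated)"}}
# ...but the hub still holds a stale copy, which a failed-over user edits at t=110.
hub    = {"report.docx": {"mtime": 110, "body": "edits made on top of a STALE copy"}}

winner = resolve(branch, hub)
print(winner["report.docx"]["body"])   # the stale-based copy wins; fresh edits are lost
```

Because the edit made against the stale hub copy has the newer timestamp, it wins the conflict and overwrites the fresher branch data that never replicated out, which is exactly the corruption path described above.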
What if one of the link targets is disabled?

Scenario highlights:

  • In this scenario, the exact same configurations described above (scenario 2A or scenario 2B) apply, with one key difference – the link target that points to the share located on the central hub server is disabled during the normal course of operations.
  • If the branch office file server were to be unavailable, the link target to the central hub server is manually enabled, thus causing client computers to fail over to the copy of the share on the central file server.

This deployment variant helps avoid the problems caused by DFS Namespaces failing over due to transient network glitches or when it encounters specific SMB error codes while accessing data. This is because the referral to the share hosted on the central file server is normally disabled.

However, the important thing to note is that the side-effects of replication latencies are still unavoidable. Therefore, if the data on the central file server is stale (i.e. replication has not yet completed), it is possible to encounter the same problems described in the ‘What can go wrong?’ section above. Before enabling the referral to the central file server, the administrator should verify the status of replication to ensure that the adverse effects of data loss or roaming profile corruption are contained.
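That replication check can be scripted. The sketch below parses output in the shape produced by `dfsrdiag backlog` and only signals "safe" when the backlog is empty; treat the exact output strings as assumptions and verify them against what your own servers print:

```python
import re

def backlog_count(dfsrdiag_output):
    """Extract the backlog file count from dfsrdiag-style text.
    The 'No Backlog' and 'Backlog File Count:' strings are assumptions
    about the tool's output format. Returns None if neither is found."""
    if "No Backlog" in dfsrdiag_output:
        return 0
    m = re.search(r"Backlog File Count:\s*(\d+)", dfsrdiag_output)
    return int(m.group(1)) if m else None

def safe_to_enable_referral(dfsrdiag_output):
    """Only enable the hub link target once replication has fully caught up."""
    return backlog_count(dfsrdiag_output) == 0

sample = "Member <HUB-DFSR> Backlog File Count: 42"
print(backlog_count(sample))            # 42
print(safe_to_enable_referral(sample))  # False
```

With 42 files still in the backlog, enabling the referral would expose users to exactly the stale-data problems described above, so the script says no.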

Support Statement:

Neither variant of this deployment scenario (2A or 2B) is supported. The following deployment use-cases will not work:

  • TS farm using a DFS Namespace with either the file server in the branch or the central file server as backend store (link targets).
  • Laptops in branch office with offline files configured against a DFS link with multiple targets.
  • Regular desktops with folder redirection configured against a DFS link with multiple targets.

In this scenario, the following technologies are not supported and will not work as expected:

  • Folder redirection against the namespace with multiple link targets.
  • Client side caching/Offline files.
  • Roaming user profiles.

Replacing DFSR Member Hardware or OS (Part 1: Planning)

Hello folks, Ned here again to kick off a new five-part series on DFSR. With the release of Windows Server 2008 R2, the warming of economies, and the timing of hardware leases, we have started seeing more questions around replacing servers within existing DFSR Replication Groups.

Through the series I will discuss the various options and techniques around taking an existing DFSR replica and replacing some or all of its servers. Depending on your configuration and budget, this can range from a very seamless operation that users will never notice to a planned outage where even their local server may not be available for a period of time. I leave it to you and your accountants to figure out which matters most.

This series also gives updated steps on validated pre-seeding to avoid any conflicts and maximize your initial sync performance. I will also speak about new options you have in this replacement cycle for clusters and read-only replication.

Finally: people get nervous when they start messing around with gigabytes of user data scattered across a few continents. I’ll be cutting out most of the wacky Ned’isms here and sticking to business in a “TechNet-Lite” style. Hopefully it’s not too boring.

The Scenario

The most common customer DFSR configuration is the classic hub and spoke data collection topology. This is:

  • Multiple branch file servers that hold user data in the field and are replicating data back to a single main office hub for backup purposes.
  • Some data may flow from the hub out to the branches but that will generally be very static, such as application installation packages or HR paperwork.
  • The storage on the branch servers is local fixed disks.
  • The storage on the hub server is a SAN.
  • The servers are mostly (if not all) currently running Windows Server 2003 R2 SP2.


There are variations possible where you might have more SANs or no SANs, or 50 servers, or 5 hubs. None of that really matters once you understand the fundamentals explained here in these simplified examples. Just focus on how this works at the micro level and you will have no trouble at the macro level.

In my diagrams below the following holds true:

  • All DFSR servers are Windows Server 2003 R2 SP2.
  • The hub uses a fiber-attached SAN, the branch servers have local disks.
  • The topology is hub and spoke. BRANCH-01 and BRANCH-02 replicate with HUB-DFSR, each in their own replication group.
  • All my replacement OSes are Windows Server 2008 R2 (so that it is possible to use clustering and read-only replication).
  • The domain is running Windows Server 2008 R2 DCs (so that it is possible to use read-only).
  • The replacement hubs are clustered to provide higher availability.

The Options

There are a number of ways you can replace your servers with new hardware and operating systems. I have ordered these from least to most disruptive to replication. As is often the case, there is an inverse correlation between disruption and cost/effort. In the follow-on articles I go into the specifics of these methods.

Note: the diagrams are simplified for understanding and are not a complete set of steps. Do not use these diagrams as your sole planning and methodology; keep reading the other articles in the series.

You may find that you implement a combination of the options depending on your time, budget, and manpower.

N + 1 Method (Hardware, OS)

The “N+1” method entails adding a new replacement server in a one-to-one partnership with the server being replaced. This allows replication to be configured and synchronized between the two nodes without end users being interrupted for long periods. It also allows both the hardware and OS to be replaced with newer versions. Pre-seeding is also possible. When the servers are synchronized the old server is removed from replication and the new one renamed. The con is that you will need enough storage for two hubs, which may be costly if you are low on capacity currently.


  • Figure 1 – Existing environment with old hub and branches 
  • Figure 2 – New hub cluster replicates with old hub only


  • Figure 3 – Old branch servers now replicate with new hub
  • Figure 4 – New branch server replicates with old branch server


  • Figure 5 – New branch server now replicates with new hub server
  • Figure 6 – Old servers removed

 

Data Disk Swap Method (Hardware, OS)

The “Data Disk Swap” method does not require twice the storage capacity of the old hub; instead it moves an existing disk (typically a LUN) from the old node to the new one. This also means you get pre-seeding for free. The downside to this method is that replication to the hub will be interrupted during the disk move, and afterward a non-authoritative sync will have to happen between the hub and its partners, putting the branches at risk during that timeframe.


  • Figure 1 – Existing environment with old hub and branches
  • Figure 2 – New hub cluster built


  • Figure 3 – Old hub server removed, new hub attached to storage
  • Figure 4 – New branch server replicates with old branch server


  • Figure 5 – New branch server now replicates with new hub server
  • Figure 6 – Old servers removed

 

Reinstall Method (OS Only)

The “Reinstall” method can be used to lay down a later operating system over a previous edition without upgrading. Files are effectively pre-seeded since the data is not touched in this method, but replication will be halted until the installs are done and replication is reconfigured, leading to a potentially lengthy downtime. The previously installed OS version is immaterial to this method.


  • Figure 1 – Existing environment with old hub and branches
  • Figure 2 – OSes reinstalled and DFSR rebuilt

 

Upgrade Method (OS only)

Finally, the “Upgrade” method is what it sounds like: an in-place upgrade of the OS using setup. As long as your servers meet the requirements for an in-place upgrade, this is a supported option. It will not cause replication to synchronize again, but there will be downtime during the upgrade itself. It is also not possible to deploy Win2008 R2 if the old computers are running a 32-bit OS. As with any upgrade, there is some risk that it will fail to complete or end up in an inconsistent state, leading to a lengthier troubleshooting process or blocking this method altogether. For that reason upgrades are the least recommended option.


  • Figure 1 – Existing environment with old hub and branches
  • Figure 2 – OSes upgraded

Series Index

- Ned “full mesh” Pyle


Multi-NIC File Server Dissection

Ned here. Our friend and colleague Jose Barreto from the File Server development team has posted a very interesting article around multiple NIC usage on Win2008/R2 file servers. Here's the intro:

When you set up a File Server, there are advantages to configuring multiple Network Interface Cards (NICs). However, there are many options to consider depending on how your network and services are laid out. Since networking (along with storage) is one of the most common bottlenecks in a file server deployment, this is a topic worth investigating.

Throughout this blog post, we will look into different configurations for Windows Server 2008 (and 2008 R2) where a file server uses multiple NICs. Next, we’ll describe how the behavior of the SMB client can help distribute the load for a file server with multiple NICs. We will also discuss SMB2 Durability and how it can recover from certain network failure in configuration where multiple network paths between clients and servers are available. Finally, we will look closely into the configuration of a Clustered File Server with multiple client-facing NICs.

I highly recommend giving the whole thing a read if you are interested in increasing file server throughput and reliability on the network in a recommended fashion.

http://blogs.technet.com/b/josebda/archive/2010/09/03/using-the-multiple-nics-of-your-file-server-running-windows-server-2008-and-2008-r2.aspx

- Ned "I am team Edward" Pyle


Top Solutions RSS feeds for Windows Server and Client now available

Ned here again. The MS Product Quality and Online team has released three new RSS feeds for Windows Server, Windows 7 Client, and Windows 7 IT Pro to get you to the "high impact issues" happening right now that have solutions. Great for proactive work, finding emerging issues, or seeing common problems.

Here are the addresses to plug into your RSS feed reader of choice:

Windows Server:
http://support.microsoft.com/rss/winsrv.xml

Windows Client:
http://support.microsoft.com/rss/winclient.xml
http://support.microsoft.com/rss/winclientitpro.xml

Thanks Jarrett and crew!

- Ned "everything but the mail sack" Pyle


Replacing DFSR Member Hardware or OS (Part 2: Pre-seeding)

Ned here again. Previously I discussed options for performing a hardware or OS replacement within an existing DFSR Replication Group. As part of that process you may end up seeding a new server’s disk with data from an existing server. Pre-seeded files exactly match the copies on an upstream server, so that when initial non-authoritative sync is performed no data will be sent over the network except the SHA-1 hash of each file for confirmation. For a deeper explanation of pre-seeding review:

In order to make this more portable I decided to make this a separate post within the series. Even if you are not planning a file server migration and just want to add some new servers to a replica with pre-seeding, the techniques here will be useful. I demonstrate how to pre-seed from Windows Server 2003 R2 to Windows Server 2008 R2 as this is the common scenario as of this writing. I also call out the techniques needed for other OS arrangements, and I will use both kinds of Windows backup software as well as robocopy in my techniques.

There are three techniques you can use:

  • Pre-seeding with NTBackup
  • Pre-seeding with Robocopy
  • Pre-seeding with Windows Server Backup

The most important thing is to TEST. Don’t be a cowboy or get sloppy when it comes to pre-seeding; most cases we get with massive conflict problems were caused by lack of attention to detail during a pre-seeding that took a functional environment and broke it.
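One simple spot-check after pre-seeding is to compare file hashes between the source and the seeded copy. Note that DFSR's real per-file hash (the one you can view with `dfsrdiag filehash` on Windows Server 2008 R2) covers security descriptors, attributes, and alternate streams as well as data, so the data-only sketch below is a rough first pass, not a full validation:

```python
import hashlib
import os

def sha1_of(path, chunk=1024 * 1024):
    """SHA-1 of a file's data stream, read in chunks to handle large files."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def compare_trees(src_root, dst_root):
    """Yield relative paths whose data streams differ, or that are
    missing entirely, between the source and pre-seeded trees."""
    for dirpath, _dirs, files in os.walk(src_root):
        for name in files:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, src_root)
            dst = os.path.join(dst_root, rel)
            if not os.path.exists(dst) or sha1_of(src) != sha1_of(dst):
                yield rel
```

Any path this yields needs re-copying before you enable replication. Remember that it says nothing about mismatched ACLs, which also change DFSR's hash, so a clean result here does not excuse you from the full validation steps.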

Read-Only Pre-Seeding

If you are using Windows Server 2008 R2 and planning to use read-only replication, make sure you install the following hotfix before configuring the replicated folder:

An outgoing replication backlog occurs after you convert a read/write replicated folder to a read-only replicated folder in Windows Server 2008 R2 - http://support.microsoft.com/kb/2285835

This prevents a (cosmetic) issue where DFSR displays pre-seeded files as an outbound backlog on a read-only replicated folder. A read-only member cannot have an outbound backlog, naturally.

Pre-seeding with NTBackup

If your data source OS is Windows Server 2003 R2, I recommend you use NTBackup.exe for pre-seeding. NTBackup correctly copies all aspects of a file including data, security, attributes, path, and alternate streams. It has both a GUI and command-line interface.

Prerequisites

If pre-seeding from Windows Server 2003 R2 to Windows Server 2003 R2, no special changes have to be made. If pre-seeding from Windows Server 2003 R2 to Windows Server 2008 or Windows Server 2008 R2, you will need to download an out-of-band version of NTBackup to restore the data:

More info on using NTBackup: http://support.microsoft.com/kb/326216/pl

Critical note: Restoring an entire volume (rather than specific folders, as demonstrated below) with NTBackup will cause all existing replicated folders on that volume to go into non-authoritative sync. For that reason you should never restore an entire volume if you are already using DFSR on a volume being pre-seeded. Just restore the replicated folders as I do in the examples.

Procedure

1. Start NTBackup.exe on the Windows Server 2003 R2 DFSR computer that has the data you are going to pre-seed elsewhere.

2. Select the Replicated Folder(s) you are going to pre-seed. In the example below I have two RF’s on my E: drive:

[screenshot: two replicated folders selected on the E: drive]

Note: When selecting the replicated folders, you can optionally de-select the DFSRPRIVATE folders underneath them to save time and space in the backup.

3. Backup to a flat file format (locally, if you have the disk capacity).

4. When the backup is complete, copy that file over to your new server that is going to replicate this data in the future. If the server is Win2008 or Win2008 R2, make sure you have the NT Restore tool installed.

Note: very large files – such as NTBackup BKF files that are hundreds of GB – can be copied much faster over a gigabit LAN by using tools that support unbuffered IO. A few Microsoft-provided options for this are:

5. Start the NTBackup tool on your new DFSR server that you are pre-seeding.


6. Select to restore data. In the Win2008/R2 restore tools, this is the only option available.

7. Select the backup file, then drill down into the backed up files so that you select the parent folders containing all the user data.


Note: You may need to select “Tools”, then “Catalog a backup file” to select a backup to restore.


8. Change the “Restore files to:” dropdown to “Alternate Location”

9. Specify the “Alternate Location” path to match what it should be on the new server. In my case the replicated folders had existed on the root of the drive, so I restored them to the root of the new servers data drive (E:\).


Note: By default the security and mount points will be restored. Security must be restored or file hashes will change and the pre-seeding operation will fail. DFSR doesn’t replicate junction points so there is no need to check that box.


10. At this point you are done pre-seeding. See section Validating Pre-Seeding. When that is complete you can proceed with replicating the data. You have the option to delete the DFSRPrivate folder that was restored within your RF(s) at this point, as it will not be useful for pre-seeding.

Pre-seeding with Robocopy

If your data source OS is Windows Server 2008, I recommend you use Robocopy for pre-seeding. Windows Server 2008 does include Windows Server Backup, but that version lacks granularity in backing up files. Robocopy can also be used on the other operating systems, but using a backup tool is preferred there.

Prerequisites

Robocopy is included with Windows Vista and later, but there have been subsequent hotfix versions that are required for correct pre-seeding. It is not included with Windows Server 2003. You must install the following on your computer that will be pre-seeded, based on your environment (there is no reason to install on the server that currently holds the old data files):

  • Download latest Windows Server 2008 R2 Robocopy (KB979808 or later)
  • Download latest Windows Server 2008 Robocopy (KB973776 or later)
  • Download Windows Server 2003 robocopy (2003 Resource Kit)

Note: Again, it is not recommended that you pre-seed a new Windows 2003 R2 computer using Robocopy.exe as there are known pre-seeding issues with the version included in the out-of-band Windows Resource Kit Tools. These issues will not be fixed as Win2003 is out of mainstream support. You should instead use NTBackup.exe as described previously.

More info on using robocopy: http://technet.microsoft.com/en-us/library/cc733145(WS.10).aspx

Procedure

1. Logon to the computer that is being pre-seeded with data from a previous DFSR node. Make sure you have full Administrator rights on both computers.

2. Validate that the Replicated Folders that you plan to copy over do not yet exist on the computer being pre-seeded.

Critical note: do not pre-create the base folders that robocopy is copying and copy into them; let robocopy create the entire tree itself. Under no circumstances should you change the security on the destination folders and files after using robocopy to pre-seed the data, as robocopy will not synchronize security if the file's data stream matches, even when using /MIR.

Consider robocopy a one-time option. If you run into some issue with it, delete all the data on the destination and re-run the robocopy commands. Do not try to “fix” the existing data as you are very likely to make things worse.


3. Sync the folders using robocopy with the following argument format:

Robocopy.exe “\\source server\drive$\folder path” “destination drive\folder path” /b /e /copyall /r:6 /xd dfsrprivate /log:robo.log /tee

For example:

[screenshot: example robocopy command]

Note: You have the option to use the multi-threaded /MT option starting in the Win2008 version of Robocopy to copy more than one file at a time. The downside of /MT is that you cannot easily see copy progress.

Note: You also have the option to use the /LOG option to redirect all output to a file for later review. This is useful to see more specifics about errors if encountered. The downside is that you will see no console progress.


Note: These arguments use a backup API that can copy most in-use file types (/b), include subfolders and files (/e), copy all aspects of a file (/copyall), retry 6 times if a file copy fails (/r:6), exclude folders called DfsrPrivate (/xd dfsrprivate), write to a log (/log:robo.log), and also output to the console (/tee). The DfsrPrivate exclusion can be changed to a full path if you suspect a legitimate user data folder with this name exists deeper in the Replicated Folder (typically it does not; if any copies exist they are usually from previously replicated folders that should have been cleaned up by a file server administrator).

4. When the copy completes, validate that there were no errors and that only one folder was skipped (that will be the DFSRPrivate folder).

Note: If you find FAILED entries, review the log for specifics.
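If the log is large, scanning it by hand is tedious. As a hypothetical sketch (not part of the original steps), a few lines of Python can pull out the failure lines, assuming the usual robocopy log convention where per-file failures appear as lines containing “ERROR”:

```python
def find_failures(log_path, marker="ERROR"):
    """Return the lines in a robocopy log that contain the failure marker."""
    failures = []
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if marker in line:
                failures.append(line.rstrip())
    return failures
```

For example, find_failures("robo.log") would return the error lines from the log created by the command above; adjust the marker string if your robocopy version logs errors differently.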

5. At this point you are done pre-seeding. See section Validating Pre-Seeding. When that is complete you can proceed with replicating the data.

Pre-seeding with Windows Server Backup

If your data source OS is Windows Server 2008 R2, I recommend you use Windows Server Backup (WSB) for pre-seeding. WSB correctly copies all aspects of a file including data, security, attributes, path, and alternate streams. It has both a GUI and command-line interface. I do not recommend WSB on Windows Server 2008 non-R2, as it lacks granularity in backing up files – refer to the Robocopy section of this article if your source computers are Win2008 non-R2.

Prerequisites

Windows Server Backup must be installed as a feature on the DFSR computers; it is not available by default. This can be done through ServerManager.msc or DISM.EXE.

More info on using Windows Server Backup: http://technet.microsoft.com/en-us/library/ee849849(WS.10).aspx

Procedure

1. Start Wbadmin.msc on the Windows Server 2008 R2 DFSR computer that has the data you are going to pre-seed.

2. Select “Backup Once” and then under “Select Backup Configuration” choose “Custom”.

3. Use “Add Items” to select the replicated folders that you will be pre-seeding.

Note: Do not attempt to exclude the DFSRPrivate junction point folders, as you will receive an error “one of the file paths specified for backup is under a reparse point”.

4. Select where to store the backup. This can be local if you have another disk with enough capacity, or a remote network location. It cannot be the same drive as the replicated folders being backed up.

5. If the backup was done locally, copy the WindowsImageBackup folder containing your backup to the location where you will restore the data. It could be a disk on the server you are pre-seeding or a central file share. It cannot be the actual disk(s) you are going to restore data to on the new computer.

6. Start Windows Server Backup on your server that you are pre-seeding with data and select “Recover”.

7. Select “A backup stored on another location”.

8. Select the correct location type. If the file was saved to this server, select “Local drives” and if it’s on another file share choose “Remote shared folder”.

9. You will see the old source data server in the list. Select the server and proceed.

10. The backup dates will be listed. By default the most recent will be displayed and this should be your backup; if not choose the correct one.

11. Select “Files and Folders” for the “Recovery Type”.

12. For “Items to Recover”, select the server node in “Available Items” tree. Whatever folder you select here, all of its child objects will be restored. For example, here I had two replicated folders on this server at the root of the drive that I backed up. If I just restore the “E” drive backup contents, both folders will be restored.

13. Under “Specify Recovery Options” select the destination path. Set “Overwrite the existing versions with the recovered versions”. Make sure that “restore access control list…” is enabled (i.e. checked ON).

Note: In this scenario there is typically no existing data to overwrite; this option is selected for completeness. If any data does exist, the pre-seeded copy should win (that is the point of pre-seeding); existing data cannot be trusted.

14. Restore the data by selecting “Recover”.

15. At this point you are done pre-seeding. See section Validating Pre-Seeding. When that is complete you can proceed with replicating the data. You have the option to delete the DFSRPrivate folder that was restored within your RF(s) at this point, as it will not be useful for pre-seeding.

Validating Pre-seeding

Having pre-seeded, in theory correctly, you now need to spot check your work and validate that the file hashes match between the two servers. If a half dozen match up, you are usually safe to assume the rest worked out; validating every single file is possible, but in a large data set it will be very time consuming and of little value.

Prerequisites

You must have a Windows 7 or Windows Server 2008 R2 computer somewhere in your environment (even if it is not part of the DFSR environment being migrated) as it includes a new version of DFSRDIAG.EXE that has a filehash checking tool. If you do not have at least a Windows 7 computer running RSAT you will not be able to properly validate SHA-1 DFSR file hash data.

  • If using Win7, install RSAT and add the Distributed File System tools.

  • If using Win2008 R2 servers, add the Feature of Distributed File System tools.

Note: If you have no copy of Windows 7, you must open a support case in order to gain access to an unsupported internal tool for file hash checking. The cost of that support case will be at least that of a Windows 7 license, and the tool you are provided will receive no support, so purchasing a single Win7 license is the more advisable route.

More info on using DFSRDIAG FILEHASH: http://blogs.technet.com/b/filecab/archive/2009/01/19/dfs-replication-what-s-new-in-windows-server-2008-r2.aspx

Procedure

1. Note the path of six files within the source data server. These should be scattered throughout various nested folder trees.

2. For one of those test files, use DFSRDIAG.EXE to get a hash from the source computer and the matching file on the pre-seeded computer:

DFSRDIAG.exe filehash /path:”source computer path file”

DFSRDIAG.exe filehash /path:”pre-seeded computer path file”

3. If DFSRDIAG shows the same hash value for both copies of the file, it has been pre-seeded correctly and matches in all file aspects (data stream, alternate data stream, security, and attributes). If it doesn’t match, you made a mistake in your pre-seeding or someone has changed the files after the fact. Start over.

4. Repeat for five more files (or more until you feel comfortable that pre-seeding was done perfectly).

Note: If you want to check every file, consider using DIR /B to build a list of all files on both servers, then using a FOR loop to export the hashes from all of them.
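The DIR /B idea can also be sketched in script form. The hypothetical Python helper below builds the relative file list for each tree, the equivalent of comparing two DIR /B /S outputs, and reports any file present on only one side. Note this only confirms that files exist by name; DFSRDIAG FILEHASH remains the way to verify that data, security, and streams actually match.

```python
import os

def relative_files(root):
    """Return the set of file paths under root, relative to root."""
    found = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            found.add(os.path.relpath(full, root))
    return found

def compare_trees(source, destination):
    """Return (files missing on destination, extra files on destination)."""
    src, dst = relative_files(source), relative_files(destination)
    return sorted(src - dst), sorted(dst - src)
```

If compare_trees returns two empty lists, every file name is at least present on both servers, and the DFSRDIAG spot checks can then confirm the content.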

Final Considerations

Keep in mind that unless your data is 100% static or users are not allowed to modify files during pre-seeding and DFSR initial sync, some file conflicts are to be expected. These will be visible in the form of DFSR Event Log 4412 entries on the server that was pre-seeded. The point of pre-seeding is to minimize the amount of data to be replicated initially during the non-authoritative replication phase on the downstream server; unless data never changes there will always be a delta that DFSR will have to catch up after pre-seeding.

Series Index

- Ned “beanstack” Pyle

Replacing DFSR Member Hardware or OS (Part 3: N+1 Method)

Hello readers, Ned here again. In the previous two blog posts I discussed planning for DFSR server replacements and how to ensure you are properly pre-seeding data. Now I will show how to replace servers in an existing Replication Group using the N+1 Method to minimize interruption.

Make sure you review the first two blog posts before you continue:

Background

As mentioned previously, the “N+1” method entails adding a new replacement server in a one-to-one partnership with the server being replaced. That new computer may be using local fixed storage (likely for a branch file server) or using SAN-attached storage (likely for a hub file server). Because replication is performed to the replacement server – preferably with pre-seeded data – the interruption to existing replication is minimal and there is no period where replication is fully halted. This reduces risk as there is no single point of failure for end users, and backups can continue unmolested in the hub site.

The main downside is cost and capacity. For each N+1 operation you need an equal amount of storage available to the new computer, at least until the migration is complete. It also means that you need an extra server available for the operation on each previous node (if doing a hardware refresh this is not an issue, naturally).

Because a new server is being added for each old server in N+1, new hardware and a later OS can be deployed. No reinstallation or upgrades are necessary. The old server can be safely repurposed (or returned, if leased). DFSR supports renaming the new server to the old name; this may not be necessary if DFS Namespaces are being utilized.

Requirements

For each computer being replaced, you need the following:

  • A replacement server that will run simultaneously until the old server is decommissioned.
  • Enough storage for each replacement server to hold as much data as the old server.
  • If replacing a server with a cluster, two or more replacement servers will be required (this is typically only done on the hub servers).

Repro Notes

In my sample below, I have the following configuration:

  • There is one Windows Server 2003 R2 SP2 hub (HUB-DFSR) using a dedicated data drive provided by a SAN through fiber-channel.
  • There are two Windows Server 2003 R2 SP2 spokes (BRANCH-01 and BRANCH-02) that act as branch file servers.
  • Each spoke is in its own replication group with the hub (they are being used for data collection so that the user files can be backed up on the hub, and the hub is available if the branch file server goes offline for an extended period).
  • DFS Namespaces are generally being used to access data, but some staff connect to their local file servers by the real name through habit or lack of training.
  • The replacement computer is running Windows Server 2008 R2 with the latest DFSR hotfixes installed, including KB2285835.

I will replace the hub server with my new Windows Server 2008 R2 cluster and make it read-only to prevent accidental changes in the main office from ever overwriting the branch office’s originating data. Note that whenever I say “server” in the steps you can use a Windows Server 2008 R2 DFSR cluster.

Procedure

Phase 1 – Adding the new server

1. Inventory your file servers that are being replaced during the migration. Note down server names, IP addresses, shares, replicated folder paths, and the DFSR topology. You can use IPCONFIG.EXE, NET SHARE, and DFSRADMIN.EXE to automate these tasks. DFSMGMT.MSC can be used for all DFSR operations.
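As a hypothetical convenience (not part of the original steps), the simpler inventory commands can be captured into a single report with a short script. The function and file names here are invented for illustration; DFSRADMIN.EXE has many subcommands, so add whichever your topology requires:

```python
import subprocess

# The basic inventory commands from the step above.
COMMANDS = [
    ["ipconfig", "/all"],
    ["net", "share"],
]

def run_inventory(commands, out_path):
    """Run each command and append its output to one report file."""
    with open(out_path, "w", encoding="utf-8", errors="replace") as report:
        for cmd in commands:
            report.write("=== " + " ".join(cmd) + " ===\n")
            result = subprocess.run(cmd, capture_output=True, text=True)
            report.write(result.stdout + "\n")
```

For example, run_inventory(COMMANDS, "inventory.txt") leaves one file you can keep with your change control records.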

2. Bring the new DFSR server online.

3. Optional but recommended: Pre-seed the new server with existing data from the hub.

    Note: for pre-seeding techniques, see Replacing DFSR Member Hardware or OS (Part 2: Pre-seeding)

4. Add the new server as a new member of the first replication group.

Note: For steps on using DFSR clusters, reference:

5. Select the server being replaced as the only replication partner with the new server. Do not select any other servers.

6. Create (or select, if pre-seeded) the new replicated folder path on the replacement server.

Note: Optionally, you can make this a Read-Only replicated folder if running Windows Server 2008 R2. Make sure you understand the RO requirements and limitation by reviewing: http://blogs.technet.com/b/askds/archive/2010/03/08/read-only-replication-in-r2.aspx 

7. Complete the setup. Allow AD replication to converge (or force it with REPADMIN.EXE /SYNCALL). Allow DFSR polling to discover the new configuration (or force it with DFSRDIAG.EXE POLLAD).

8. At this point, the new server is replicating only with the old server being replaced.

9. When done, the new server will log a 4104 event. If pre-seeding was done correctly there will be next to no 4412 conflict events (unless the environment is completely static, expect a few 4412s, as users will continue to edit data normally).

10. Repeat for any other Replication Groups or Replicated Folders configured on the old server, until the new server is configured identically and has all the data.

Phase 2 – Recreate the replication topology

1. Select the Replication Group and create a “New Topology”.

2. Select a hub and spoke topology.

    Note: You can use a full mesh topology with customization if using a more complex environment.

3. Make the new replacement server the new hub. The old server will act as a “spoke” temporarily until it is decommissioned; this allows for it to continue replicating any last minute user changes.

4. Force AD replication and DFSR polling again. Verify that all three servers are replicating correctly by creating a propagation test file using DFSRDIAG.EXE PropagationTest or DFSMGMT.MSC’s propagation test.

5. Create folder shares on the replacement server to match the old share names and data paths.

6. Repeat these steps above for any other RG’s/RF’s that are being replaced on these servers.

Phase 3 – Removing the old server

Note: this phase is the only one that potentially affects user file access. It should be done off hours in a change control window in order to minimize user disruption. In a reliably connected network environment with an administrator that is comfortable using REPADMIN and DFSRDIAG to speed up configuration convergence, the entire outage can usually be kept under 5 minutes.

1. Stop further user access to the old file server by removing the old shares.

Note: Stopping the Server service with command NET STOP LANMANSERVER will also temporarily prevent access to shares.

2. Remove the old server from DFSR replication by deleting the Member within all replication groups. This is done on the Membership tab by right-clicking the old server and selecting “Delete”.

3. Wait for the DFSR 4010 event(s) to appear for all previous RG memberships on that server before continuing.

4. At this point the old server is no longer serving user data or replicating files. Rename the old server so that no further accidental access can occur. If it is a DFS Namespace link target, remove it from the namespace as well.

5. Rename the replacement server to the old server name. Change the IP address to match the old server.

Note: This step is not strictly necessary, but provided as a best practice. Applications, scripts, users, or other computers may be referencing the old computer by name or IP even if using DFS Namespaces. If it is against IT policy to use server names and IP addresses instead of DFSN – and this is a recommended policy to have in place – then do not change the name/IP info; this will expose any incorrectly configured systems. Use of an IP address is especially discouraged as it means that Kerberos is not being used for security.

6. Force AD replication and DFSR polling. Validate that the servers correctly see the name change.

7. Add the new server as a DFSN link target if necessary or part of your design. Again, it is recommended that file servers be accessed by DFS namespaces rather than server names. This is true even if the file server is the only target of a link and users do not access the other hub servers replicating data.

8. Replication can be confirmed as continuing to work after the rename as well.

9. The process is complete.

Final Notes

As you can now see, the steps to perform an N+1 migration are straightforward whether you are replacing a hub, a branch, or all servers. Use of DFS Namespaces makes the change more transparent to users. The actual outage time of N+1 is theoretically zero if you are not renaming servers and you perform the operation off hours when users are not actively accessing data. Replication to the main office never stops, so centralized backups can continue during the migration process.

All of these factors make N+1 the recommended DFSR node replacement strategy.

Series Index

- Ned “+1” Pyle

Replacing DFSR Member Hardware or OS (Part 4: Disk Swap)

Hello folks, Ned here again. Previously I covered how to use an N+1 server placement method to migrate an existing DFSR environment to new hardware or operating system. Now I will show you how to replace servers in an existing Replication Group using the disk swap method.

Make sure you review the first three blog posts before you continue:

Background

The “Data Disk Swap” method allows a new file server to replace an old one, but does not require new storage as it re-uses existing disks. This method usually entails a SAN or NAS storage backend, as local data disks are typically in a RAID format that is difficult to keep intact between servers. A single data disk or RAID-1 configuration would be relatively easy to transfer between servers, naturally.

Because the DFSR data never has to be replicated or copied to the new replacement server, pre-seeding is accomplished for free. The downside compared to N+1 is that there will be a replication (and perhaps user access) interruption for as long as it takes to move the disks and reconfigure replication and file shares on the new replacement node. So while there is a significant cost savings, this method carries more risk and downtime.

Because a new server is replacing an old server in the disk swap method, new hardware and a later OS can be deployed. No reinstallation or upgrades are necessary. The old server can be safely repurposed (or returned, if leased). DFSR supports renaming the new server to the old name; this may not be necessary if DFS Namespaces are being utilized.

Requirements

For each computer being replaced, you need the following:

  • A replacement server.
  • If replacing a server with a cluster, two or more replacement servers will be required (this is typically only done on the hub servers).
  • A full backup with bare metal restore capability is highly recommended for each server being replaced. A System State backup of at least one DC in the domain hosting DFSR is also highly recommended.

Repro Notes

In my sample below, I have the following configuration:

  • There is one Windows Server 2003 R2 SP2 hub (HUB-DFSR) using a dedicated data drive provided by a SAN through fiber-channel.
  • There are two Windows Server 2003 R2 SP2 spokes (BRANCH-01 and BRANCH-02) that act as branch file servers.
  • Each spoke is in its own replication group with the hub (they are being used for data collection so that the user files can be backed up on the hub, and the hub is available if the branch file server goes offline for an extended period).
  • DFS Namespaces are generally being used to access data, but some staff connect to their local file servers by the real name through habit or lack of training.
  • The replacement computer is running Windows Server 2008 R2 with the latest DFSR hotfixes installed, including KB2285835.

I will replace the hub server with my new Windows Server 2008 R2 cluster and make it read-only to prevent accidental changes in the main office from ever overwriting the branch office’s originating data. Note that whenever I say “server” in the steps you can use a Windows Server 2008 R2 DFSR cluster.

Procedure

Note: this should be done off hours in a change control window in order to minimize user disruption. If the hub server is being replaced there will be no user data access interruption. If a branch server accessed by users is being replaced, however, the interruption may be several hours while the new server is swapped in. Replication (even with pre-seeding) may take substantial time to converge if there is a significant amount of data to check file hashes on.

1. Inventory your file servers that are being replaced during the migration. Note down server names, IP addresses, shares, replicated folder paths, and the DFSR topology. You can use IPCONFIG.EXE, NET SHARE, and DFSRADMIN.EXE to automate these tasks. DFSMGMT.MSC can be used for all DFSR operations.

2. Stop further user access to the old file server by removing the old shares.

Note: Stopping the Server service with command NET STOP LANMANSERVER will also temporarily prevent access to shares.

3. Remove the old server from DFSR replication by deleting the Member within all replication groups. This is done on the Membership tab by right-clicking the old server and selecting “Delete”.

4. Optional, but recommended: Use REPADMIN /SYNCALL and DFSRDIAG POLLAD to force AD replication and polling of configuration changes to occur faster in a widely distributed environment.

5. When the server being removed has logged a DFSR 4010 event log entry for all RG’s it was participating in, the storage being replicated previously can be disconnected from that computer.

6. Rename the replacement server to the old server name. Change the IP address to match the old server.

Note: This step is not strictly necessary, but provided as a best practice. Applications, scripts, users, or other computers may be referencing the old computer by name or IP even if using DFS Namespaces. If it is against IT policy to use server names and IP addresses instead of DFSN – and this is a recommended policy to have in place – then do not change the name/IP info; this will expose any incorrectly configured systems. Use of an IP address is especially discouraged as it means that Kerberos is not being used for security.

7. Bring the new replacement server or cluster online and attach the old storage. Verify files are accessible at this point before continuing with DFSR-specific steps.

Note: each volume on the newly attached storage will still have previous DFSR configuration info stored locally, including a database and other folders:

8. Remove the old DFSR configuration folder before configuring DFSR on the replacement server. If using Windows Server 2008 or Windows Server 2008 R2, this will require you to grant the Administrators group permission to the hidden operating system folder “System Volume Information” on the root of those drives. It will also require you to delete via a CMD prompt using the RD command as Windows Explorer does not allow file deletion in this folder:

RD “<drive>:\system volume information\DFSR” /s /q

Critical note: this step is necessary to prevent the DFSR service from incorrectly using previous database, log, or configuration data on the new server and potentially overwriting data incorrectly. It must not be skipped.
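If you prefer to script this cleanup across many drives, here is a hedged Python sketch (an illustration, not the supported procedure; the function name is invented). It clears the read-only attribute on anything that refuses deletion and retries; you still need the Administrators permission grant on “System Volume Information” described above:

```python
import os
import shutil
import stat

def force_remove_tree(path):
    """Delete a directory tree, clearing the read-only attribute on
    anything that initially refuses to be deleted, then retrying."""
    def on_error(func, target, _exc_info):
        os.chmod(target, stat.S_IWRITE)  # clear read-only, then retry
        func(target)
    if os.path.isdir(path):
        shutil.rmtree(path, onerror=on_error)
```

For example, force_remove_tree(r"E:\System Volume Information\DFSR") would mirror the RD command shown above for one drive.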

9. Add the new server as a new member of the first replication group if this server is a hub being replaced and is to be considered non-authoritative for data.

Note: For steps on using DFSR clusters, reference:

Critical note: if this server is the originating copy of user data – such as the branch server where all data is created or modified– delete the entire existing replication group and recreate it with this new server specified as the PRIMARY. Failure to follow this step may lead to data loss, as the server being added to an existing RG is always non-authoritative for any data and will lose any conflicts.

10. Select the previously replicated folder path on the replacement server.

Note: Optionally, you can make this a Read-Only replicated folder if running Windows Server 2008 R2. This must not be done on a server where users are allowed to create data.

11. Complete configuration of replication. Because all data already existed on the old server’s disk that was re-used, it is pre-seeded and replication should finish much faster than if it all had to be copied from scratch.

12. Force AD replication and DFSR polling.

13. When the server has logged a DFSR 4104 event (if non-authoritative) then replication is completed.

14. Verify that servers are replicating correctly by creating a propagation test file using DFSRDIAG.EXE PropagationTest or DFSMGMT.MSC’s propagation test.

15. Grant users access to their data by configuring shares to match what was used previously on the old server.

Note: This step is recommended after replication to avoid complexity and user data changing while initial sync is being performed. If necessary for business continuity, shares can instead be made available at the phase where the replacement server was brought online.

16. Add the new server as a DFSN link target if necessary or part of your design. Again, it is recommended that file servers be accessed by DFS namespaces rather than server names. This is true even if the file server is the only target of a link and users do not access the other hub servers replicating data.

17. The process is complete.

Final Notes

A data disk swap DFSR migration is less recommended than the N+1 method, as it causes a significant replication outage. During that timeframe, the latest data may not be available on hubs for backup. There is significant opportunity for human error here to make the outage much longer than necessary as well. If using certain local disk options (such as a RAID-5) this method may be totally unavailable to administrators.

On the other hand, this process can be logistically and financially more feasible for many customers and still gives straightforward steps with optimal performance due to inherent pre-seeding. All of these factors make the disk swap method a less recommended but still advisable DFSR node replacement strategy.

Series Index

- Ned “swizzle” Pyle

Replacing DFSR Member Hardware or OS (Part 5: Reinstall and Upgrade)

Hello folks, Ned here again. Previously I explained how swapping out existing storage can be a method to migrate an existing DFSR environment to new hardware or operating system. Now in this final article I will discuss how reinstallation or an in-place upgrade can be used to deploy a later version of DFSR. Naturally this will not allow deployment of improved hardware and instead relies on existing infrastructure and storage.

Make sure you review the first four blog posts before you continue:

Background

The “reinstallation” method makes use of existing server hardware and storage by simply reinstalling the operating system. It does not require any new equipment expenditure or deployment, which can be very cost effective in a larger or more distributed environment. It also allows the use of later OS features without a time-consuming or risky in-place upgrade, and it provides DFSR data pre-seeding for free, as the files being replicated are never removed. Previous OS versions and architectures are completely immaterial.

The downside to reinstallation is a total outage on that server until the setup and reconfiguration is complete, plus DFSR must be recreated for that server and perhaps for the entire Replication Group depending on the server’s place in data origination.

The “in-place upgrade” method also makes use of existing hardware by upgrading the existing operating system to a later version. This is also an extremely cost effective way to gain access to new features and also lowers the outage time as replication does not have to be recreated or resynchronized; once the upgrade is complete operation continues normally.

On the other hand, upgrades may be impossible, as 32-bit Win2003/2008 cannot be upgraded to Win2008 R2. As with any upgrade, there is some risk that it will fail to complete or end up in an inconsistent state, leading to a lengthier troubleshooting process or blocking this method altogether. For that reason, upgrades are the least recommended option.

Requirements

Note: A full backup with bare metal restore capability is highly recommended for each server being replaced. A System State backup of at least one DC in the domain hosting DFSR is also highly recommended.

For more information on installing Windows Server 2008 R2, review: http://technet.microsoft.com/en-us/library/ee344846(WS.10).aspx

Repro Notes

In my sample below, I have the following configuration:

  • There is one Windows Server 2003 R2 SP2 hub (HUB-DFSR) using a dedicated data drive provided by a SAN through fiber-channel.
  • There are two Windows Server 2003 R2 SP2 spokes (BRANCH-01 and BRANCH-02) that act as branch file servers.
  • Each spoke is in its own replication group with the hub (they are being used for data collection so that the user files can be backed up on the hub, and the hub is available if the branch file server goes offline for an extended period).
  • DFS Namespaces are generally being used to access data, but some staff connect to their local file servers by the real name through habit or lack of training.
  • The replacement OS is Windows Server 2008 R2. After installation – or as part of its slipstreamed image if possible - it will have the latest DFSR hotfixes installed.

I will upgrade one branch server and reinstall the other branch server. Because of the upgrade process, clustering is not possible here, but it would be in a reinstallation scenario.

Procedure

Note: this should be done off hours in a change control window in order to minimize user disruption. If the hub server is being replaced there will be no user data access interruption in the branch. If a branch server being upgraded is accessed by users, however, the interruption may be several hours while the upgrade or reinstallation takes place. Replication (even with pre-seeding) may take substantial time to converge in the reinstallation scenario if there is a significant amount of data to check file hashes on.

Reinstallation

1. Inventory your file servers that are being replaced during the migration. Note down server names, IP addresses, shares, replicated folder paths, and the DFSR topology. You can use IPCONFIG.EXE, NET SHARE, and DFSRADMIN.EXE to automate these tasks. DFSMGMT.MSC can be used for all DFSR operations.

2. Stop further user access to the old file server by removing the old shares.

Note: Stopping the Server service with command NET STOP LANMANSERVER will also temporarily prevent access to shares.
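For example – the share name here is hypothetical – you can remove a single share, or block all share access at once:

```
REM Remove one share so users can no longer connect to it
net share UserData /delete

REM Or temporarily block access to all shares on the server
net stop lanmanserver
```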

3. Remove the old server from DFSR replication by deleting the Member within all replication groups. This is done on the Membership tab by right-clicking the old server and selecting “Delete”.


4. Optional, but recommended: Use REPADMIN /SYNCALL and DFSRDIAG POLLAD to force AD replication and polling of configuration changes to occur faster in a widely distributed environment.
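A sketch of those two commands, run on a domain controller and against the DFSR member respectively (the domain and member names are placeholders):

```
REM On a domain controller: push the configuration change to all
REM partners, across all partitions and sites
repadmin /syncall /APed

REM Then make the DFSR member poll AD immediately instead of
REM waiting for its next polling cycle
dfsrdiag pollad /member:CONTOSO\BRANCH-02
```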


5. When the server being removed has logged a DFSR 4010 event log entry for every replication group it was participating in, proceed to the next step.

6. Start the OS installation either within the running OS or from boot up (via media or PXE).


7. Select a “Custom (advanced)” installation.


8. Specify the old volume where Windows was installed and accept the warning.

Note: When the installation commences all Windows, Program Files, and User Profile folders will be moved into a new Windows.old folder. Other data folders stored on the root of any attached drives will not move – this includes any previously replicated files.

Critical note: Do not delete, recreate, or format any drive containing previously replicated data!


9. When the installation completes and the server allows you to log on with local credentials, rename it to the old computer name and join it to the domain.

10. Install the latest patches described in http://support.microsoft.com/kb/968429.

11. Add the new server as a new member of the first replication group if this server is a hub being replaced and is to be considered non-authoritative for data.


Critical note: if this server is the originating copy of user data – such as the branch server where all data is created or modified – delete the entire existing replication group and recreate it with this new server specified as the PRIMARY. Failure to follow this step may lead to data loss, as a server added to an existing RG is always non-authoritative for any data and will lose any conflicts.

12. Select the previously replicated folder path on the replacement server.


Note: Optionally, you can make this a Read-Only replicated folder if running Windows Server 2008 R2. This must not be done on a server where users are allowed to create data.

13. Complete configuration of replication. Because all data already existed on the old server’s disk that was re-used, it is pre-seeded and replication should finish much faster than if it all had to be copied from scratch.

14. Force AD replication with REPADMIN /SYNCALL and DFSR polling with DFSRDIAG POLLAD.

15. When the server has logged a DFSR 4104 event (if non-authoritative), replication is complete.

16. Validate replication with a test file.


17. Grant users access to their data by configuring shares to match what was used previously on the old server.

18. Add the new server as a DFSN link target if necessary or part of your design. Again, it is recommended that file servers be accessed by DFS namespaces rather than server names. This is true even if the file server is the only target of a link and users do not access the other hub servers replicating data.

19. The process is complete.

Upgrade

1. Start the OS installation either within the running OS or from boot up (via media, WDS, SCCM, etc). Be sure to review our recommendations around in-place upgrades: http://technet.microsoft.com/en-us/library/dd379511(WS.10).aspx.


2. Select “Upgrade” as the installation type.


3. Allow the installation to commence.


4. When the installation has completed and you are able to log back on to the computer with domain credentials, DFSR will commence functioning normally as it had on the previous OS. There is no need for further configuration or topology changes. There will be no new initial sync.


5. Creating a new test file in the replication groups on the upgraded server will sync to all other servers without issue, regardless of their current OS.


6. At this point the process is done.

Final Notes

While treated with suspicion due to the complexity and poor experiences of the past, upgrades are fully supported, and when they operate smoothly they are certainly the lowest-effort method to deploy a newer server OS. With changes in servicing and disk imaging starting in Windows Server 2008, they are also less likely to carry over lingering effects from previous OS files and settings.

However, reinstallation also gets you a newer OS, and that install type is guaranteed not to have lingering effects from a previous OS installation. With a small amount of extra work, reinstallation becomes a better long-term solution with fewer questions around supportability, all while re-using existing hardware and data.

This concludes my series on replacing hardware and operating systems within a given set of DFSR Replication Groups. I hope you’ve found it helpful and illuminating. Now I can go back to being slightly crass and weird in my writing style like usual. :)

Series Index

- Ned “off to find freakish clipart” Pyle


Series Wrap-up and Downloads - Replacing DFSR Member Hardware or OS

Hey all, Ned here again. A few of you asked if the series around DFSR server replacements would have a “portable” version. I banged those up in DOCX, XPS, and PDF formats. Pick your poison below.

And just so you have one spot to link in Favorites, here are all five parts:

Thanks and I hope you enjoyed the series.

- Ned “holy crap, this was 54 pages with thinned margins” Pyle


New Directory Services Content 9/5-9/11

Only one new KB article of interest this week:

  • 2157973 – The Security event that has Event ID 4625 does not contain the user account name on a computer that is running Windows Vista, Windows Server 2008, Windows 7, or Windows Server 2008 R2

And the only blogs to note are Ned’s series on Replacing DFSR Member Hardware or OS:


Friday Mail Sack: Barbados Edition

Hello world, Ned here again. I’m back to write this week’s mail sack – just in time to be gone for the next two weeks on vacation and work travel. In the meantime Jonathan and Scott will be running the show, so be sure to spam the heck out of them with whatever tickles you. This week we discuss DFSR, Certificates, PKI, PowerShell, Audit, Infrastructure, Kerberos, NTLM, Active Directory Migration Tool, Disaster Recovery, and not-art.

Catluck en ’ dogluck!


Question

I need to understand what the difference between the various AD string type attribute syntaxes are. From http://technet.microsoft.com/en-us/library/cc961740.aspx : String(Octet), String(Unicode), Case-Sensitive String, String(Printable), String(IA5) et al. While I understand each type represents a different way to encode the data in the AD database, it isn't clear to me:

  1. Why so many?
  2. What differences are there in managing/using/querying them?
  3. If an application uses LDAP to update/read an attribute of one string type, is it likely to encounter issues if the same routine is used to update/read a different string type?

Answer

Active Directory has to support data-storage needs for multiple computer systems that may use different standards for representing data. Strings are the most variable data to be encoded because one has to account for different languages, scripts, and characters. Some standards limit characters to the ANSI character set (8-bit) while others specify another encoding type altogether (IA5 or PrintableString for X.509, for example).

Since Active Directory needs to store data suitable for all of these various systems, it needs to support multiple encodings for string data.

Management/query/read/write differences will depend very much on how you access the directory. If you use PowerShell or ADSI to access the directory, some level of automation is involved to properly handle the syntax type. PowerShell leverages the System.String class of the .NET Framework, which handles the various string types pretty much invisibly.

Also, when we are talking about the 255-character extended ANSI character set, which includes the Latin alphabet used in English and most European Languages, then the various encodings are pretty much identical. You really won't encounter much of a problem until you start working in 2-byte character sets like Kanji or other Eastern scripts.

Question

Is it possible / advisable to run the CA service under an account different from SYSTEM with EFS enabled for some files that should not be read by system or would another solution be more appropriate?

Answer

No, running the CA service under any account other than Network Service is not supported. Users who are not trusted for Administrator access to the server should not be granted those rights.

[PKI and string type answers courtesy of Jonathan Stephens, the “Blaster” in our symbiotic “Master Blaster” relationship – Ned]

Question

Tons of people asking us about this article http://blogs.technet.com/b/activedirectoryua/archive/2010/08/04/conditions-for-kerberos-to-be-used-over-an-external-trust.aspx and if it is true or false or confused or what.

Answer

It’s wrong and confused, and we’re getting this ironed out. Jonathan is going to create a whole blog post on how user Kerberos can function perfectly without a Kerberos trust, with an NTLM trust, or with no trust at all. It’s all smoke and mirrors, basically – you don’t need a trust in all circumstances to use user Kerberos. Heck, you don’t even have to use a domain-joined computer. For now, please disregard that article.

Question

I followed the steps outlined in this blog post: http://blogs.msdn.com/b/ericfitz/archive/2005/08/04/447951.aspx. Works like a champ and I see the data correctly in the Event Viewer. But when I try to use PowerShell 2.0 on one of those Win2003 DC’s with this syntax:

Get-EventLog -logname security -Newest 1 -InstanceId 566 | Where-Object { $_.entrytype -match "Success" } | Format-List

A bunch of the outputs are broken and unreadable (they look like un-translated GUID’s and variables). Like Object Type and Object Name, for example:


Answer

Ick, I can repro that myself.

This appears to be an issue in the PowerShell 2.0 Get-EventLog cmdlet on Win2003 where an incorrect value is displayed. The issue does not occur on Win2008/2008 R2; I verified. Hopefully one of our Premier contract customers will report this issue so we can investigate further and see what the long-term fix options are.

In the meantime though, here’s some sample workaround code I banged up using an alternative legacy cmdlet Get-WmiObject to do the same thing (including returning the latest event only, which makes this pretty slow):

Get-WmiObject -query "SELECT * FROM Win32_NTLogEvent Where Logfile = 'Security' and EventCode=566" | sort timewritten -desc | select -first 1

Slower and more CPU intensive, but it works.


A better long term solution (for both auditing and PowerShell) is get your DC’s running Win2008 R2.

Question

Do you have suggestions for pros/cons on breaking up a large DFSR replication group? One of our many replication groups has only one replicated folder. Over time that folder has gotten to be a bit large with various folders and shares (hosted as links) nested within. Occasionally there are large changes to the data and the replication backlog obviously impacts the ENTIRE folder. I have thought about breaking the group into several individual replication folders, but then I begin to shudder at the management overhead and monitoring all the various backlogs, etc.

  1. Is there a smooth way to transition an existing replication group with one replicated folder into one with many replicated folders? By "smooth" I mean no disruption to current replication if at all possible, and without re-replicating the data.
  2. What are the major pros/cons on how many replicated folders a given group has configured?

Answer

There’s no real easy answer – any change of membership or replicated folder within an RG means a re-synch of replication; the boundaries are discrete and there’s no migration tool. The fact that a backlog is growing won’t be helped by more or fewer RG/RF combos though, unless the RG/RF’s now involve totally different servers. Since the DFSR service’s inbound/outbound file transfer model is per server, moving things around locally doesn’t change backlogs significantly*.

So:

  1. There is no way to do this without total replication disruption, as you must rebuild the RGs/RFs in DFSR from scratch; the only saving grace is that if you don’t have to move the data, you get some pre-seeding for free.
  2. Since each RF still has its own staging, conflictanddeleted, installing, and deleted folders, there’s not much performance reasoning behind rolling a bunch of RFs into a single RG. And no, you cannot use a shared structure. :) The main benefit of an RG is administrative convenience: delegation is configured at the RG level, for example, so if you had a file server admin who ran all the same servers that were replicating… stuff… it would be easier to organize those all as one RG.

* As a regular reader, though, I imagine you’ve already seen the following article, which has some other ways to speed things up; it may help with some of the choke points:

http://blogs.technet.com/b/askds/archive/2010/03/31/tuning-replication-performance-in-dfsr-especially-on-win2008-r2.aspx

Question

Is there an Add-QADPermission (from Quest) equivalent command in AD PowerShell?

Answer

There is not a one-to-one cmdlet. But it can be done:

http://blogs.msdn.com/b/adpowershell/archive/2009/10/13/add-object-specific-aces-using-active-directory-powershell.aspx

It is – to be blunt – a kludge in our current implementation.

Question

I am working on an inter-forest migration that will involve a transitional forest hop. If I have to move the objects a second time to get them from the transitional forest into our forest, will I lose the original SID history that is in the sIDHistory attribute?

Answer

You will end up with multiple SID history entries. It’s not an uncommon scenario: customers who have been through multiple acquisitions and mergers end up with multiple SID histories. As far as authorization goes, it works correctly, and having more than one entry is fine:

http://msdn.microsoft.com/en-us/library/ms679833(VS.85).aspx

Contains previous SIDs used for the object if the object was moved from another domain. Whenever an object is moved from one domain to another, a new SID is created and that new SID becomes the objectSID. The previous SID is added to the sIDHistory property.

The real issue is user profiles. You have to make sure that ADMT profile translation is performed so that after users and computers are migrated, the ProfileList registry entries are updated to use the user’s real current SID. If you do not do this, when you someday need to use USMT to migrate data it will fail, as USMT does not know or care about old SID history – only the SID in the profile and the current user’s real SID.
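If you want to see what SID history a migrated account is carrying, here is a hedged PowerShell sketch using the Win2008 R2 AD module (the account name is a placeholder):

```
# List the current SID plus any accumulated SID history entries
Import-Module ActiveDirectory
$user = Get-ADUser -Identity "kmyer" -Properties sIDHistory
"Current SID: $($user.SID)"
$user.sIDHistory | ForEach-Object { "Historical SID: $_" }
```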

And then you will be in a world of ****.

Picture courtesy of the IRS

Question

Do you know if there is any problem with creating a DNS record with the name ldap.contoso.com? Or will there be problems with other components of Active Directory if there is a record called “LDAP”?

Answer

Windows certainly will not care and we’ve had plenty of customers use that specific DNS name. We keep a document of reserved names as well, so if you don’t see something in this list, you are usually in good shape from a purely Microsoft perspective:

909264  Naming conventions in Active Directory for computers, domains, sites, and OUs
http://support.microsoft.com/default.aspx?scid=kb;EN-US;909264

This article is also good for winning DNS-related bar bets. If you drink at a pub called “The Geek and Spanner”, I suppose…


This is not that pub

Question

I'm currently working on a migration to a Windows Server 2008 R2 AD forest – specifically the disaster recovery plan. Is it a good idea to take one of the DCs offline and bring it back online after every successful adprep operation? Or, if something goes bad, use this offline one to recreate the domain?

Answer

The best solution is to put these plans in place:

Planning for Active Directory Forest Recovery
http://technet.microsoft.com/en-us/library/planning-active-directory-forest-recovery(WS.10).aspx

That way no matter what happens under any circumstances (not just adprep), you have a way out. You can’t imagine how many customers we deal with every day that have absolutely no AD Disaster Recovery system in place at all.

Question

How did you make this kind of picture in your DFSR server replacement series?


[From a number of readers]

Answer

MS Office to the rescue for a non-artist like me. This is a modified version of the “relaxed perspective” picture format preset.

1. Create your picture, then select it and use the Picture Tools Format ribbon tab.


2. Use the arrows to see more of the style options, and you’ll see the one called “Relaxed Perspective, White”. Select that and your picture will now look like a three dimensional piece of paper.


3. I find that the default has a little too much perspective though, so right-click the picture and select “Format Picture”.


4. Use the 3-D Rotation menu to adjust the perspective and Y axis.


You can get pretty crazy with Office picture formatting.


Why yes sir, we do have plastic duck eight-ball clipart. Just the one today?

See you all in a few weeks,

Ned “please don’t audit me, I was kidding” Pyle


Hear hear

Kip Ng gives the sometimes unpopular but ultimately best advice:

IT Operations: The Reasons Why You Don’t Want To Be Unique

OpsVault is a newish blog by PFEs talking about operational best practices; some of it is pretty common sense, some not so much. They raise topics that are worth some lively discussion (I sometimes wish the posts were a bit longer; commenting might encourage this). Give them a look.

- Ned "ok, now I'm really on vacation, I mean it" Pyle


New ADFS Content on TechNet Wiki

Adam Conkle has published some great troubleshooting, tips and tricks and how to articles on TechNet that should help you in evaluating and implementing Active Directory Federation Services.

AD FS - How to invoke a WS-Federation sign-out

AD FS 2.0 - "An unexpected error has occurred" error or blank page displayed attempting to log on to SharePoint, Event ID 23 logged

AD FS 2.0 - The service fails to start. "The service did not respond to the start or control request in a timely fashion. "

AD FS 2.0 - Query notification delivery failed because of the following error in service broker: 'The conversation handle "{GUID} is not found.'

Windows Identity Foundation (WIF) - FedUtil.exe on Windows Server 2003 fails with "Object Identifier (OID) is unknown."

AD FS 2.0 - Prompted for credentials when you are expecting to be allowed anonymous access

Windows Identity Foundation (WIF) - How to change certificate chain validation settings for web applications

AD FS 2.0 - How to set the Primary Federation Server in a WID Farm

AD FS 2.0 - The Admin event log shows Error 111 with System.ArgumentException: ID4216

Windows Identity Foundation (WIF) throws exception: "ID6018: Digest verification failed for reference"

AD FS 2.0 - Browsing to Federation Metadata fails "Unable to download federationmetadata.xml"

AD FS 2.0 - Continuously prompted for credentials when using FireFox 3.6.3

AD FS 2.0 - How to configure the SPN (servicePrincipalName) for the service account

AD FS 2.0 - Continuously prompted for credentials while using Fiddler Web Debugger

AD FS 2.0 - "Script is disabled. Click Submit to continue."

AD FS 2.0 - How to enable and immediately use AutoCertificateRollover

AD FS 2.0 - How to perform an unattended installation of an AD FS 2.0 STS or Proxy

AD FS 2.0 - The AD FS 2.0 Windows Service fails to start - Event 102 and 220 logged

AD FS 2.0 - How to manually run the AD FS 2.0 Initial Configuration

AD FS 2.0 - "ID4037: The key needed to verify the signature could not be resolved from the following security key identifier"

 -- Jonathan "Ned's Blog Monkey" Stephens


New Directory Services related content 9/12–9/18

KB Articles

  • 2021766 – Windows Server 2008 R2 Outbound trusts with Windows NT 4.0 domains do not validate or function correctly
  • 2002584 – Unable to select DNS Server role when adding a domain controller into an existing Active Directory domain
  • 2028835 – Windows 7 RSAT: Multiple tabs are missing when viewing user properties in Active Directory Users and Computers
  • 983539 – MS10-068: Vulnerability in Local Security Authority Subsystem Service could allow elevation of privilege
  • 981550 – MS10-068: Description of the security update for Active Directory: September 2010

Blogs

  • Friday Mail Sack: Barbados Edition – Ned Pyle
  • Hear hear – Ned Pyle
  • Putting sites at the center of the browsing experience, using the whole PC: IE9 Beta Available for Download
  • Parent Child Differencing Disks in Hyper-V
  • How to delegate AD permission to Organizational Units using the PowerShell command Add-QADPermission
  • More on searching group policy
  • UPHClean v1.6 Security Vulnerability Fix

AD LDS Schema Files Demystified

Hi, Russell here. When installing Active Directory Lightweight Directory Services (AD LDS) instances, it is quite possible to paint oneself into a corner rather quickly. That’s because LDS comes with minimal schema definitions. To truly make LDS useful to your applications, one must understand how best to take advantage of the included schema definition files.

When performing an LDS installation using the AD LDS Setup Wizard, you are offered several schema options:


When performing an installation using ADAM SP1, the following schema options are presented:


So how do you know which LDF files to select? Well, seriously, it all depends upon your intentions – and I’m not talking about whether or not you want to ask our resident Elf out on a date.


Ideally, schema definition requirements should be defined by your application developers. But as an AD or server administrator it will greatly benefit you to assist in the decision-making process, as the choices made during install are permanent. So what to pick?

Let’s start with definitions of the basic LDF files included in ADAM SP1:

  • MS-InetOrgPerson.ldf – A Microsoft implementation of the RFC 2798 LDAP object class. The inetOrgPerson object is used in many non-Microsoft X.500 and LDAP directory services to represent people within the enterprise.
  • MS-User.ldf – A Microsoft implementation of the X.500 user class defined in RFC 1274. The user object class is the traditional representation for people within Microsoft’s Active Directory. Importing this LDF provides AD LDS with a base user class, which can be used to define local users.
  • MS-AZMan.ldf – The classes and attributes required to use ADAM/LDS as an authorization store for Windows Authorization Manager, AKA AzMan. Importing this LDF provides the base functionality to use the AzMan .NET class libraries to leverage AD LDS as a role-based authorization store.
  • MS-UserProxy.ldf – One of two LDF files required to use the ADAM/LDS bind redirection feature. This LDF extends the default (no LDF imports) schema to allow user synchronization with Active Directory. Useful when you want to give your internal forest(s) users extranet access to AD LDS-hosted applications.

I leaned on the word “implementation” in a couple of those definitions. That’s because whenever we discuss Internet RFCs, there is much that’s open to interpretation due to the use of the words “should,” “may,” “shall,” etc., as defined in Key words for use in RFCs to Indicate Requirement Levels. I also pointed out that MS-UserProxy.ldf is one of two LDF files required to use ADAM/LDS for bind redirection to Active Directory. That’s because MS-ADAMSyncMetadata.ldf is missing from the ADAM SP1 Setup Wizard (so is MS-UserProxyFull). Windows Server 2008 and Windows Server 2008 R2 include these additional schema definitions as part of the Setup Wizard:

  • MS-ADAMSyncMetadata.ldf – Creates the ADAMSync engine classes and attributes necessary to synchronize Active Directory with ADAM/LDS. Adamsync.exe uses these classes and attributes to translate AD users into ADAM/LDS users. This gives you the design flexibility of allowing domain users coming in via the Internet to log on to LDS and be authenticated via proxy to the domain.
  • MS-ADLDS-DisplaySpecifiers.ldf – New for Windows Server 2008 (it is 2010, after all). Provides the capability to manage ADAM/LDS replication configuration with AD Sites and Services.
  • MS-UserProxyFull.ldf – Allows full user or inetOrgPerson attribute definition for synchronized Active Directory users. Originating in ADAM SP1 but hidden from the installation wizard, it is now available as an alternative to MS-UserProxy.ldf.

What? Hidden from the installation wizard, you say? How can that be? Easy: there are actually several optional schema mods contained within the Windows\ADAM installation directory. These LDF files are coded with “@@UI-Description: @@excludeFromList” to keep them out of the Setup Wizard GUI. In Windows Server 2008 R2, there are four other LDF files hidden from view:


These are actually some of the best files available. It is a shame they are hidden from view:

  • MS-adamschemaw2k3.ldf – A representation of Active Directory’s schema for Windows Server 2003 R2.
  • MS-adamschemaw2k8.ldf – Like its little brother (no, not Scooter), a representation of the Windows Server 2008 schema.
  • MS-ADAM-Upgrade-1.ldf – Gives ADAM 1.0 and ADAM SP1 instances the ability to reload SSL certificates and adds the UnexpirePassword controlAccessRight to the schema.
  • MS-ADAM-Upgrade-2.ldf – Introduces the Windows Server 2008 R2 Recycle Bin feature into LDS.
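Importing one of these hidden files into an existing instance is done with ldifde, much as the Setup Wizard does behind the scenes. A sketch for adding the Recycle Bin attributes, assuming an instance listening on port 50000 (the port is a placeholder; run this from the %windir%\ADAM directory):

```
REM Import the hidden LDF into the LDS schema partition; the
REM #schemaNamingContext constant substitutes the real schema DN for "DC=X"
ldifde -i -f MS-ADAM-Upgrade-2.ldf -s localhost:50000 -j . -c "CN=Schema,CN=Configuration,DC=X" #schemaNamingContext
```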

Now why would you need an enticing new 2008 R2 feature like the Recycle Bin? Uh, I don’t know – perhaps you like to see your users disappear with no way to recover? (No system state backup, no recycle bin to catch mistakes.) I work nights; I see many disaster recoveries, not just for AD LDS, but for AD too. This nifty feature can save you time and money – and most importantly, your job. Until next time.

-Russell “Rusty aka R2 aka Spaniard” Despain


New Directory Services Content 9/19-9/25

Hi everyone.  We have a few new KB articles that came out last week, and a few blog posts of interest.

KB Articles

  • 2384558 – Inheritance of ownership in Group Policy Management Console does not work as expected
  • 2018583 – Windows 7 or Windows Server 2008 R2 domain join displays error "Changing the Primary Domain DNS name of this computer to "" failed...."
  • 909264 – Naming conventions in Active Directory for computers, domains, sites, and OUs
  • 981575 – A memory leak occurs in a .NET Framework 2.0-based application that uses the AesCryptoServiceProvider class
  • 982861 – Availability of Windows Internet Explorer 9 Beta
  • 2078942 – The CertEnroll control does not work in Internet Explorer 8 on a computer that is running Windows 7 or Windows Server 2008 R2
  • 2345551 – The Active Directory system discovery process cannot detect a client if the DNS suffix of the client differs from its DNS domain name in System Center Configuration Manager 2007 SP2

Blogs

  • RODC – Password Replication Policy and Password Management
  • Override the hardcoded LDAP Query limits introduced in Windows Server 2008 and Windows Server 2008 R2
  • Enable Change Notifications between Sites – How and Why?
  • Exploring the User State Migration Toolkit (USMT) 4.0
